I would like to let everyone know that PolymerNG will use Vulkan and Direct3D 12. I want to emphasize that the D3D12 part of the renderer will get dropped when the Vulkan API is released, and it will not be in the eduke32 main depot until that point (or maybe not at all, depending on what TX allows). Initially, though, the renderer will use Direct3D 12. Once Direct3D 12 is superseded by Vulkan, the renderer will work on any OpenGL 4.1-compliant hardware and any device/OS that supports the Vulkan API.
That was a heavy few sentences, so let me break down my reasoning for going down this path. The eduke32 3D renderer needed a complete overhaul, and staying with a soon-to-be-deprecated API wasn't the right way to move PolymerNG forward. Vulkan (formerly known as "glNext") and Direct3D 12 offer great performance gains that I will get into later.
The only parts of Polymost and Polymer I have kept for PolymerNG are the math functions related to the Build-to-3D conversion, and I have made those functions API agnostic. For example, take rotatesprite:
Previous code:
Quote
{
[bunch of math]
[immediate draw call]
}
New code
Quote
{
[bunch of math]
[render state and vertex data get stored in "BuildRenderCommand"]
}
This allows render commands to get processed at more optimal times during a frame.
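To illustrate the idea, here is a minimal sketch of what deferred command recording can look like. "BuildRenderCommand" is the name from the post, but the fields and the queue class are my own assumptions for illustration, not PolymerNG's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical shape of a recorded command: render state and vertex
// data are captured instead of issuing an immediate draw call.
struct BuildRenderCommand
{
    uint32_t textureId;
    uint32_t blendMode;
    std::vector<float> vertices; // data produced by the math step
};

class BuildRenderCommandQueue
{
public:
    // Called where the immediate draw call used to be.
    void Submit(BuildRenderCommand cmd) { commands.push_back(std::move(cmd)); }

    // Processed later, at an optimal point in the frame; returns the
    // number of commands that were flushed.
    size_t Flush()
    {
        size_t processed = commands.size();
        // [API-specific draw calls would happen here, via the RHI]
        commands.clear();
        return processed;
    }

private:
    std::vector<BuildRenderCommand> commands;
};
```

The win is that all state sorting and batching decisions can be made once per frame over the whole command list, instead of at each call site.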
This brings me to the next part: the game and renderer now run in separate threads. When the engine calls "nextpage", the render thread kicks off and begins processing the "BuildRenderCommands". The game can then continue on to the next frame, but if the renderer is still rendering by the time nextpage is called again, the game thread will wait; the game thread can only ever be one frame ahead of the renderer. EDuke32 used to poll Windows messages at the start of the game loop; this now happens continuously in the main thread. Because of all of these modifications, winlayer.c has been deprecated in PolymerNG and replaced with a generic syslayer.cpp.
The renderer is also API agnostic: all API-specific calls are wrapped in a Render Hardware Interface layer, or RHI.
Example:
Quote
// BuildImage::UpdateImage
//
void BuildImage::UpdateImage()
{
    // Make sure the ART tile is resident before converting it.
    if (!waloff[imageOpts.tileNum])
        loadtile(imageOpts.tileNum);

    // TODO: HQ textures don't need to get converted.
    char *tempbuffer = ConvertARTImage();

    if (!IsLoaded())
    {
        // First upload: create the texture through the RHI.
        bool allowCPUWrites = (imageOpts.heapType == BUILDIMAGE_ALLOW_CPUWRITES);
        RHI::RHITextureFormat format = RHI::TextureManager::GetRHITextureFormat(imageOpts.format);

        switch (imageOpts.imageType)
        {
            case IMAGETYPE_2D:
                texture = RHI::TextureManager::LoadTextureFromMemory(name, GetWidth(), GetHeight(), format, (const void *)tempbuffer, allowCPUWrites);
                break;
            case IMAGETYPE_3D:
                texture = RHI::TextureManager::LoadTexture3DFromMemory(name, GetWidth(), GetHeight(), GetDepth(), format, (const void *)tempbuffer, allowCPUWrites);
                break;
        }

        texture->SetHardwareDebugName(name.c_str());
        Bfree(tempbuffer);
        return;
    }

    // Subsequent uploads require a CPU-writable heap.
    if (imageOpts.heapType != BUILDIMAGE_ALLOW_CPUWRITES)
    {
        initprintf("BuildImage::UpdateImage: Image not set for CPU writes\n");
        Bfree(tempbuffer);
        return;
    }

    texture->UploadRegion(0, 0, GetWidth(), GetHeight(), (const void *)tempbuffer);
    Bfree(tempbuffer);
}
I still haven't answered the question of why I chose next-generation APIs, and why I started with Direct3D 12. First, Vulkan and Direct3D 12 offer granular control over how commands are batched up and sent down to the GPU, and there are great performance gains to be had there. Second, PIX saves development time. PIX only works with Direct3D, and while I am getting the renderer on its feet, I don't have time to dick around trying to figure out where bugs are cropping up. PIX gives great debugging information.
As I said, when Vulkan is released I will move PolymerNG over to it, and that is the API we will use going forward. I hope this answers some questions, and I would love to answer any other questions or concerns you guys have.