Questions About 3D Rendering on the Current Handmade Architecture

So, I've followed a lot of Casey's work and I really like the separation it creates in the code between the game, the platform, and the renderer. For 2D I think it works great to have render commands and to batch geometry data for rendering.

My problem is doing 3D with more complex models. We can certainly draw models with the current renderer using simple primitives like triangles, or even use one big vertex buffer/index buffer dedicated to meshes. The problem is that with this approach we stream the vertex data to the GPU every frame, which sounds like a waste. What I would like is to have vertex and index buffers for different meshes (and maybe batch similar groups together later), so that I could upload the geometry to the GPU once and then just bind the buffers each time we issue a draw.

But this raises the question of how to do that with the current architecture, where the game assembles commands for the GPU. The only way I can think of is to do something similar to the platform layer and expose some rendering function pointers to the game, so it can request/allocate vertex/index buffers for the mesh data we have loaded on the CPU.


Edited by Mateus Caracciolo on

I don't remember exactly how the Handmade Hero renderer works, but I'm assuming textures are only uploaded once, so you could take inspiration from that system to upload meshes only once as well.

I'm assuming you'll just need a render command like upload_mesh_to_gpu that you issue during level loading (or whenever you need it), and then keep track of the buffer ids in the asset system (or some other place).

Replying to mrmixer (#29288)

Thank you for your response. Let me take a closer look at the texture upload and the asset system, and I'll report back on the idea of having a render command for uploading meshes to the GPU.