So, I've followed a lot of Casey's work and I really like the separation it creates in the code between the game, the platform, and the renderer. For 2D I think it works great to have render commands and to batch geometry data for rendering. My problem comes when doing 3D with more complex models.

We can certainly draw models with the current renderer using simpler primitives like triangles, or even use one big vertex buffer/index buffer specifically for drawing meshes. The problem is that with this approach we are streaming the vertex data to the GPU every frame, which sounds wasteful. What I would like is to have separate vertex and index buffers for different meshes (and maybe batch similar groups together later), so that I could upload the geometry to the GPU once and then just bind the buffers each time we issue a draw.

But this raises the question of how to do that within the current architecture, where the game assembles commands for the GPU. The only way I can think of is to do something similar to the platform layer: expose some rendering function pointers to the game so it can request allocation of vertex/index buffers for the mesh data we currently have loaded on the CPU.
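To make the idea concrete, here is a minimal sketch of what that could look like. All the names (`RendererAPI`, `MeshHandle`, `upload_mesh`, etc.) are hypothetical, and the "backend" is just a stub standing in for the real GPU code; the point is only the shape of the API, where the game uploads geometry once through a function pointer and afterwards records lightweight commands that reference a handle:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical opaque handle the renderer hands back to the game;
// the game never sees the underlying GPU buffer objects.
typedef struct { uint32_t id; } MeshHandle;

// Function pointers the renderer exposes to the game, mirroring how
// the platform layer exposes services like file I/O to the game.
typedef struct RendererAPI {
    // Uploads vertex/index data once; returns a handle to bind later.
    MeshHandle (*upload_mesh)(const float *vertices, uint32_t vertex_count,
                              const uint32_t *indices, uint32_t index_count);
    void (*free_mesh)(MeshHandle mesh);
} RendererAPI;

// A per-frame draw command now only references the handle plus a
// transform, so no vertex data is streamed each frame.
typedef struct DrawMeshCommand {
    MeshHandle mesh;
    float transform[16];
} DrawMeshCommand;

// --- Stub implementation standing in for the real GPU backend ---
static uint32_t next_id = 1;

static MeshHandle stub_upload_mesh(const float *vertices, uint32_t vertex_count,
                                   const uint32_t *indices, uint32_t index_count)
{
    (void)vertices; (void)indices;
    printf("uploaded mesh: %u verts, %u indices\n", vertex_count, index_count);
    MeshHandle h = { next_id++ };
    return h;
}

static void stub_free_mesh(MeshHandle mesh)
{
    printf("freed mesh %u\n", mesh.id);
}

int main(void)
{
    RendererAPI api = { stub_upload_mesh, stub_free_mesh };

    float verts[] = { 0,0,0,  1,0,0,  0,1,0 };
    uint32_t idx[] = { 0, 1, 2 };

    // The game uploads geometry once, at asset-load time...
    MeshHandle tri = api.upload_mesh(verts, 3, idx, 3);

    // ...then per frame it only records a cheap draw command.
    DrawMeshCommand cmd = { tri, {0} };
    assert(cmd.mesh.id == tri.id);

    api.free_mesh(tri);
    return 0;
}
```

One nice property of this shape is that the command buffer stays as the only per-frame traffic between the game and the renderer; the upload/free calls happen out of band, exactly like platform service calls.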