In the early episodes (I haven't watched much since 2015 or so), Casey used inversion of control by creating a Windows platform layer that called the game-specific code. That solution made sense to me, since he was passing in input data, running a simulation, and getting back a buffer of pixels to write to the screen, and it was interesting because it let him do things like hot-reload the game code.
However, now that Handmade Hero is using a hardware renderer (I'm assuming), and the game code can't just take inputs and return pixels, how does this work (at a high level)?
It works with "command buffers". The game layer fills a buffer with commands describing what it wants to happen on the GPU (draw polygon, clear depth, etc.), and rendering code in the platform layer executes those commands. This is very similar to how Vulkan or D3D12 expects you to submit commands to the GPU; in HH the commands are a bit higher level. The GL code that executes them is isolated behind a "rendering" API, so the platform layer could easily swap in different rendering code, like D3D or even software again, and the game layer does not need to know about it.
I believe it’s separated into ‘three tiers’: instead of passing a pixel buffer to the game code, you pass a ‘command buffer’ (basically memory to write your custom graphics commands into). When the game-specific code runs, it fills up the buffer. Then on the platform side you run the third tier, where you pass the command buffer off to execute on whatever graphics API you’re using. I believe this was pulled into a separate DLL as well.