Low-level audio programming on Linux seems to be done exclusively with ALSA (asoundlib).
Since asoundlib runs in user space, at some point it has to call into the kernel to actually play audio.
I understand that asoundlib is thin and in practice adds no real overhead. However, to understand what is really going on, I'm trying to bypass asoundlib and talk directly to the kernel to play audio (just writing out a simple sample buffer, like in Handmade Hero).
I can't find any source examples of doing this. Even the embedded Linux lecture slides I came across while scouring YouTube for Linux audio material use asoundlib. Is it really that difficult to play an audio buffer without a user-space library abstracting the kernel calls? Am I missing something here?