I worked on a live coding environment for audio.
The environment handles the configuration of audio input and output devices as well as audio parameters such as the sample rate, channel count, and buffer size. It loads a user-provided dynamic library containing the user's DSP and UI functions. The system calls the user's DSP function, passing it input and output buffers. It also provides a UI view and calls the user's UI function, letting them draw UI widgets suited to audio applications, like sliders, dial knobs, and curves. The user DLL is hot-reloaded whenever it changes, without interrupting the audio. User code can also request blocks of memory that persist across reloads. On top of that, the environment exposes an API of DSP helpers to quickly create oscillators, envelopes, or simple filters.
Here is the basic window which allows selecting audio devices and configuring the audio parameters:
Here's a demo of non-interrupting hot-reloading:
Here is a demo of the curve UI used to create an envelope:
And here is a very simplistic synth demoing the basic oscillator and filter helpers:
I also made a series of more detailed devlogs to document my thoughts and my progress. You can find them here:
- Day 1: https://youtu.be/NFBSAP4n_NU
- Day 2: https://youtu.be/vSofGnl6Jzk
- Day 3: https://youtu.be/Jhh3dM9XydY
- Day 4: https://youtu.be/mwa3TOhPjO4
- Day 6: https://youtu.be/DmmKoz8XiJo
- Day 7 and wrap-up: https://youtu.be/eYZwwhwBDx0
All in all, it has been a lot of fun working on this little project for a week. I managed to get the basic features I wanted working, and I got to experiment with and learn about topics I hadn't had the opportunity to work on before, like code reloading and auto-layout, so it's quite satisfying.
I did spend a bit too much time fiddling with CoreAudio and trying to scavenge useful information from the docs and sample code, but I think I now have a better understanding of that area, so I can go back and clean up the audio device layer later.
I also wish I had had more time to spend on DSP-specific work, since that's more my area, and to build some more interesting synthesis demos, but the infrastructure had to come first!
As for the devlog series, it took me some time to put together, time I could have spent programming, so I'm not sure I would aim for the same format next time, but I hope it's at least of some value to a few people.
So I would say it has been an amazing overall experience!
I don't know exactly what the project will become now, but I would certainly like to come back to it now and then when I find the time. I think now would be a great moment to do more serious synthesis work, to really see what is needed.
I could see it going in several directions: I could add features to help play and sequence sound files, making it more like a live music sequencer, or add more ready-made modules and push it toward a modular synthesizer. Or I could see it as a prototyping tool, to quickly experiment with DSP algorithms before turning them into plugins, for instance. Those different sets of features could be shipped as libraries that users can choose from and import into their module, letting them tailor the environment to their needs.
Let me know what you think!