It's a very simple idea. In goes some audio and image data, and out pops a video. It's specifically designed for podcasts recorded with a separate track for each speaker.

Recent Activity

End of jam report: I added a system for mixing all input tracks into a master track, so I can perform additional analysis & visualization on the combined audio. I added a pretty typical visualization for the mixed track that also serves to visually "frame" the video a little bit. I tuned up the effects to make them just right for the final videos. Finally, I added command-line parameterization so I can reuse the program without editing parameters in the source and generating noisy git commits.
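The mixing step itself is simple in principle. Here's a minimal sketch of the idea, assuming each input track has already been decoded to mono float samples at a shared sample rate; the names and the clamp-to-[-1, 1] handling are assumptions for illustration, not podcastoscope's actual code:

```c
// Minimal sketch: sum N speaker tracks sample-by-sample into one master
// track. Assumes all tracks are mono floats at the same sample rate.
#include <stddef.h>

static float clampf(float x, float lo, float hi){
    return (x < lo) ? lo : ((x > hi) ? hi : x);
}

// tracks: array of track_count pointers, each sample_count floats long.
// out:    caller-provided buffer of sample_count floats.
void mix_tracks(const float **tracks, size_t track_count,
                size_t sample_count, float *out){
    for (size_t i = 0; i < sample_count; i += 1){
        float sum = 0.f;
        for (size_t t = 0; t < track_count; t += 1){
            sum += tracks[t][i];
        }
        // Clamp rather than normalize so quiet passages stay quiet;
        // a real mixer might apply per-track gain or a limiter instead.
        out[i] = clampf(sum, -1.f, 1.f);
    }
}
```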

Speaking of git, you can now check out the source over here: https://mr4th.com/link/podcastoscope

Day 2 progress report: I've definitely got an audio visualizer now. Today my goal was to get it good enough that I would be willing to use it to actually render a video for YouTube. The biggest steps forward are a system for deciding when to highlight speakers that neither switches too often nor ignores overlapping speakers, and frequency analysis sensitive enough that the effects rendered from it really do match what you're hearing. It's still very bare bones, but I think it's looking pretty nice!
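To make the "don't flicker, don't drop overlaps" behavior concrete, here's one way that kind of highlight decision can be structured: per-frame loudness per speaker with hysteresis and a hold period, evaluated independently for each speaker so simultaneous talkers all stay lit. The thresholds, hold time, and the RMS loudness measure here are assumptions, not the actual analysis in the program:

```c
// Sketch of per-speaker highlight hysteresis.
#include <stdbool.h>
#include <stddef.h>
#include <math.h>

typedef struct{
    bool highlighted;
    int  frames_below; // consecutive frames under the off threshold
} SpeakerHighlight;

// RMS loudness of one video frame's worth of samples for one track.
static float frame_rms(const float *samples, size_t count){
    double acc = 0.0;
    for (size_t i = 0; i < count; i += 1){
        acc += (double)samples[i]*(double)samples[i];
    }
    return (count > 0) ? (float)sqrt(acc/(double)count) : 0.f;
}

void update_highlight(SpeakerHighlight *s, float rms){
    const float on_threshold  = 0.05f; // assumed tuning values
    const float off_threshold = 0.02f;
    const int   hold_frames   = 15;    // ~0.5s at 30fps
    if (rms >= on_threshold){
        // Turn on immediately; reset the off counter.
        s->highlighted = true;
        s->frames_below = 0;
    }else if (rms < off_threshold){
        // Only turn off after the speaker has been quiet for a while,
        // so the highlight doesn't flicker between words.
        s->frames_below += 1;
        if (s->frames_below >= hold_frames){
            s->highlighted = false;
        }
    }
    // Each speaker is updated independently, so overlapping speakers
    // can all be highlighted at the same time.
}
```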