I started working on SNDTRK to get better with Processing. I'd had an idea for an interactive soundtrack composer and wanted to see if I could build a prototype exclusively with code.
I felt that YouTube videos and other amateur video productions were made stronger by the addition of a soundtrack of some kind. I've been a musician for much of my life, and I've been using music composition and production software like Apple Logic and GarageBand for much of that time as well, so if needed I could pretty easily create a custom soundtrack for a video. In fact, I've done it for a few of the videos seen in my portfolio, including the one below and the backing music for Litemotif.
I knew that for someone without any music or production training, scoring a video would be very hard. YouTube does offer rudimentary soundtrack options, but there's no way to vary the intensity or length of the songs.
With that challenge in mind, I decided to build a prototype that captured an intuitive, fun, visual way to create a soundtrack on the fly for any video of any length.
In my efforts to make soundtrack creation easier and more fun, I was first inspired by the "stack" of instruments seen in Logic and other music sequencing apps. Below is an example of that interface, from Apple's Logic Pro:
Each instrument is layered above the next, and each can be muted or soloed by clicking the "M / S" buttons to the left, below the instrument name. Normally, tracks are brought in and out during recording by adjusting the volume of each track as the whole song plays. That's what the faders on a classic mixing board are for.
Now, that's not entirely intuitive for a non-musician, and it doesn't allow for easy on-the-fly music editing either. So I started thinking about ways to make it easier.
I imagined an app that would let the user simply move the mouse (or a finger on a touchscreen) up and down, triggering each track and controlling its volume at the same time. This motion would also mirror the "arc" of the story or video itself. As I thought through the UX, I started sketching some early concepts, seen below.
Once I had a sense of the layout, I decided to start working in Processing right away, since the goal of the course was to learn programming. Processing is a simplified, Java-based language and environment for visual designers and artists, and it proved to be a great introduction to programming concepts.
I found a video that I could use, then composed a track that seemed to fit. I worked out the math to divide the screen into five sections, and to record and visually replay the mouse position as the song and video played. Below you can see some examples of my code and also a screenshot from the app itself.
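To give a flavor of that logic, here's a minimal Processing sketch in the same spirit. It's a reconstruction, not the original code: it assumes the Minim audio library and five stem files named stem0.mp3 through stem4.mp3, and the layered mapping (moving the mouse up brings in more tracks) is one plausible reading of the interaction rather than the app's exact behavior.

```processing
// A minimal sketch in the spirit of the prototype, not the original code.
// Assumes the Minim library and five looping stems, stem0.mp3..stem4.mp3,
// in the sketch's data folder.
import ddf.minim.*;

final int NUM_TRACKS = 5;
Minim minim;
AudioPlayer[] stems = new AudioPlayer[NUM_TRACKS];

void setup() {
  size(800, 600);
  minim = new Minim(this);
  for (int i = 0; i < NUM_TRACKS; i++) {
    stems[i] = minim.loadFile("stem" + i + ".mp3");
    stems[i].loop();        // every stem plays in sync from the start...
    stems[i].setGain(-80);  // ...but begins effectively muted
  }
}

void draw() {
  background(20);
  float bandHeight = height / float(NUM_TRACKS);
  // Band 0 is at the top; moving the mouse up brings in more layers.
  int band = constrain(int(mouseY / bandHeight), 0, NUM_TRACKS - 1);
  // Position within the current band: 0.0 at its top edge, 1.0 at its bottom.
  float within = constrain((mouseY - band * bandHeight) / bandHeight, 0, 1);

  for (int i = 0; i < NUM_TRACKS; i++) {
    if (i > band) {
      stems[i].setGain(0);          // layers below the cursor play at full volume
    } else if (i == band) {
      // The newest layer fades in as the cursor rises through its band.
      stems[i].setGain(map(within, 0, 1, 0, -40));
    } else {
      stems[i].setGain(-80);        // layers above the cursor stay silent
    }
    fill(i >= band ? 200 : 60);     // highlight the bands currently sounding
    rect(0, i * bandHeight, width, bandHeight);
  }
}
```

Note that Minim's setGain() works in decibels, so 0 is full volume and large negative values are effectively silence; a polished version would probably interpolate gain changes over a few frames to avoid audible zipper noise.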
Once I had a working prototype, I found a way to export the final track with the correct audio fades and triggers. This was my most concrete way to prove the value of the project, since finishing a video would then be as easy as uploading the audio and video to YouTube.
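One plausible way to capture those fades, sketched below with Processing's built-in Table class: log a timestamped gain value for each track as the user performs, then apply the resulting automation to the stems offline. The file name and columns are illustrative assumptions, not the app's actual export format.

```processing
// A sketch of one possible export path: log gain automation to a CSV
// while the user performs, then apply the fades to the stems offline.
// Column names and "automation.csv" are illustrative assumptions.
Table automation = new Table();

void setup() {
  automation.addColumn("millis");
  automation.addColumn("track");
  automation.addColumn("gainDb");
}

void draw() { }  // keep the sketch looping so key events fire

// Call this wherever a stem's gain is set (e.g. from a draw loop
// like the one in the previous sketch).
void logGain(int track, float gainDb) {
  TableRow row = automation.addRow();
  row.setInt("millis", millis());
  row.setInt("track", track);
  row.setFloat("gainDb", gainDb);
}

void keyPressed() {
  if (key == 's') {
    saveTable(automation, "data/automation.csv");
  }
}
```

A small script, or a DAW like Logic itself, could then read those rows back and render the fades onto the final mix.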
Below is a video that shows the full functionality of the app from start to finish.
I found that the true utility of the application came through most strongly not from the description and sketches but from the actual interactive prototype. I got my best reactions from people actually playing with it, moving the mouse across the screen as the video played. Rather than jumping through hoops to explain the app, I could just say "Play with it" and stand back. Everybody understood the app within seconds of using it.
While I can't claim to be proficient enough with Processing to build every single idea I have, I was glad to learn that, with a concerted effort and a lot of trial and error, it was possible to build a convincing prototype out of real code. This would come in handy later with my thesis project, LikeMine.