I got an email today with a question about how to handle playback of audio in time, synchronised with a clock. My ‘musical notepad’ app Loopy does this, and I thought I’d briefly explain how.
Any app that makes use of the Remote IO audio unit framework (which is generally necessary for the kind of responsiveness required in a realtime musical app) provides audio to the hardware via a callback, which is periodically called when the hardware is ready for more.
The trick here is to provide the right chunk of samples in this callback for the current time position.
Loopy achieves this by:
1. Keeping track of where in the timeline we are at the time the callback is called
This is easily accomplished by keeping a record of the time the clock was started, subtracting this from the current time, and possibly performing a modulus with the tempo. For example:
`(now - startTime) % timePerBar` gives the number of time units into the current bar (let's call it `timeIntoBar`),
`timeIntoBar / (timePerBar/beatsPerBar)` gives the number of beats into the current bar, and
`timeIntoBar % (timePerBar/beatsPerBar)` gives us the time into the current beat.
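The arithmetic above can be sketched in C. The names `timePerBar`, `beatsPerBar` and `timeIntoBar` come from the article; the struct, the function name and the choice of microseconds as the time unit are illustrative assumptions, not Loopy's actual internals:

```c
#include <stdint.h>

// Hypothetical timeline position, in microseconds (illustrative names)
typedef struct {
    uint64_t timeIntoBar;   // time units into the current bar
    uint64_t beatsIntoBar;  // whole beats into the current bar
    uint64_t timeIntoBeat;  // time units into the current beat
} TimelinePosition;

TimelinePosition timeline_position(uint64_t now, uint64_t startTime,
                                   uint64_t timePerBar, uint32_t beatsPerBar) {
    TimelinePosition pos;
    uint64_t timePerBeat = timePerBar / beatsPerBar;
    // Modulus by the bar length wraps the elapsed time into the current bar
    pos.timeIntoBar  = (now - startTime) % timePerBar;
    pos.beatsIntoBar = pos.timeIntoBar / timePerBeat;
    pos.timeIntoBeat = pos.timeIntoBar % timePerBeat;
    return pos;
}
```

With a 2-second bar of 4 beats, an elapsed time of 2.7 seconds lands 0.7 seconds into the second bar: one whole beat plus 0.2 seconds.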
2. Determining first if we should be playing audio at this time, and if so, which samples should be playing
This involves first converting our time units from step 1 into samples. For instance, you can convert microseconds to samples by dividing your time by `1000000/yourSampleRate`. (Aside: of course, you can convert back from samples to time by multiplying instead of dividing.)
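A minimal sketch of that conversion, assuming microseconds as the time unit (the function names are mine, not from any API); multiplying before dividing avoids the precision loss of integer division:

```c
#include <stdint.h>

// Convert microseconds to sample frames at the given sample rate.
// Equivalent to dividing by (1000000 / sampleRate), but done as
// us * sampleRate / 1000000 to keep integer precision.
uint64_t microseconds_to_samples(uint64_t us, uint32_t sampleRate) {
    return us * sampleRate / 1000000;
}

// And back again: samples to microseconds
uint64_t samples_to_microseconds(uint64_t samples, uint32_t sampleRate) {
    return samples * 1000000 / sampleRate;
}
```

At 44100 Hz, one second (1,000,000 µs) is 44,100 samples, and half a second is 22,050.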
Next, in the case of Loopy's metronome, for example, we test whether `samplesIntoBeat < sound.lengthInSamples`. If so, that means we should be playing audio. (If the sound were a loop, of course, we could always be playing.)
The offset into the sound, in samples, is just `samplesIntoBeat` in the case of the simple metronome. In the case of a loop, you'll probably be more interested in the number of samples into your loop: instead of determining `(now - startTime) % timePerBar`, you may be interested in `(now - startTime) % timePerLoop`.
So, we want to return the requested number of samples starting from this offset into the sample array representing our audio.
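Step 2 for the metronome case might look like this (illustrative names again; `samplesIntoBeat` and the length test are from the article, the function itself is an assumed sketch):

```c
#include <stdint.h>
#include <stdbool.h>

// Decide whether the metronome tick should be sounding at this
// position in the beat. If so, write the offset into the sound's
// sample array and return true; otherwise return false (silence).
bool metronome_playback_offset(uint64_t samplesIntoBeat,
                               uint64_t soundLengthInSamples,
                               uint64_t *offset) {
    if (samplesIntoBeat < soundLengthInSamples) {
        *offset = samplesIntoBeat;  // offset into the sound is just
        return true;                // the position within the beat
    }
    return false;  // the tick has finished for this beat
}
```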
3. Returning smooth audio in time
Note that if you just go returning any old set of samples, willy-nilly, you're going to get nasty clicks and pops from the discontinuities caused by not matching the start of your next buffer to the end of the last one.
To ensure smoothness, Loopy keeps track of the offset of the last samples we returned, and just returns the immediately following bunch of samples — unless we're more than some threshold number of samples out of time, in which case we'll suffer the pop in order to stay synchronised. Actually, you can generally avoid the pop even then by smoothly blending buffers over a short time, removing the discontinuity.
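One way this might be sketched (not Loopy's actual code; the threshold and crossfade lengths are assumed values, and a real render callback would avoid static state):

```c
#include <stdint.h>
#include <stddef.h>

#define DRIFT_THRESHOLD 64  // tolerated drift, in samples (assumed value)
#define XFADE_LENGTH    32  // crossfade length, in samples (assumed value)

static size_t lastOffset = 0;  // position just past the last buffer we returned

// Fill 'out' with 'frames' samples from the looping 'source'.
// Keep reading contiguously from lastOffset; only jump to idealOffset
// when we've drifted past the threshold, crossfading over a few
// samples to hide the discontinuity.
void render(const float *source, size_t sourceLength,
            size_t idealOffset, float *out, size_t frames) {
    size_t drift = idealOffset > lastOffset ? idealOffset - lastOffset
                                            : lastOffset - idealOffset;
    size_t readFrom = (drift <= DRIFT_THRESHOLD) ? lastOffset : idealOffset;
    for (size_t i = 0; i < frames; i++) {
        float s = source[(readFrom + i) % sourceLength];
        if (readFrom != lastOffset && i < XFADE_LENGTH) {
            // We jumped: blend the old stream into the new one
            float old = source[(lastOffset + i) % sourceLength];
            float mix = (float)i / XFADE_LENGTH;
            s = old * (1.0f - mix) + s * mix;
        }
        out[i] = s;
    }
    lastOffset = readFrom + frames;
}
```

The key idea is that small timing errors are inaudible, while a discontinuity in the waveform is not, so we prefer continuity until the drift grows too large to ignore.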
The example above was a relatively trivial one, for a metronome sound. For longer audio that may span multiple bars, you’ll probably want to perform a modulus by the length of your audio clip, possibly quantised to your time signature, and possibly using a per-loop time base, so you can start the loop at any point in your timeline and have it begin from the start. This is something Loopy doesn’t currently do — Loopy will keep your loops synchronised so when you start a loop playing, it’ll play whatever part corresponds to the current timeline, not from the start of the loop. Maybe it’ll be an option in the future?
I wrote a little about the timing of loops in my second article on Loopy’s implementation.