A simple, fast circular buffer implementation for audio processing

Circular buffers are pretty much what they sound like – arrays that wrap around. They’re fantastically useful as scratch space for audio processing, and for passing audio around efficiently in general.

They’re designed for FIFO (first-in-first-out) use, like storing audio coming in from the microphone for later playback or processing.

Consider a naive alternative: You copy the incoming audio into an NSData you allocate, then pass that NSData off. This means you’re allocating memory each time, and deallocating it later once you’re done processing. That allocation incurs a penalty, which can be a show-stopper when part of an audio pipeline – the Core Audio documentation advises against any allocations from within a render callback, for example.

Alternatively, you can allocate space in advance, and write to that, but that has problems too: Either you have a synchronisation nightmare, or you spend lots of time moving bytes around so that the unprocessed audio is always at the beginning of the array.

A better solution is to use a circular buffer, where data goes in at the head, and is read from the tail. When you produce data at the head, the head moves up the array, and wraps around at the end. When you consume at the tail, the tail moves up too, so the tail chases the head around the circle.

Here’s a simple C implementation I recently put together for my app Loopy: TPCircularBuffer

typedef struct {
    void             *buffer;
    int32_t           length;
    int32_t           tail;
    int32_t           head;
    volatile int32_t  fillCount;
} TPCircularBuffer;
bool  TPCircularBufferInit(TPCircularBuffer *buffer, int32_t length);
void  TPCircularBufferCleanup(TPCircularBuffer *buffer);
void  TPCircularBufferClear(TPCircularBuffer *buffer);
// Reading (consuming)
void* TPCircularBufferTail(TPCircularBuffer *buffer, int32_t* availableBytes);
void  TPCircularBufferConsume(TPCircularBuffer *buffer, int32_t amount);
// Writing (producing)
void* TPCircularBufferHead(TPCircularBuffer *buffer, int32_t* availableBytes);
void  TPCircularBufferProduce(TPCircularBuffer *buffer, int32_t amount);
int   TPCircularBufferProduceBytes(TPCircularBuffer *buffer, const void* src, int32_t len);

Update: The implementation has been recently amended to make use of a neat virtual memory mapping technique that inserts a virtual copy of the buffer memory directly after the buffer’s end, negating the need for any buffer wrap-around logic. Clients can simply use the returned memory address as if it were contiguous space.

The virtual memory technique was originally proposed by Philip Howard, and adapted to Darwin by Kurt Revis.

To write, use TPCircularBufferHead to get a pointer to the writable space, then TPCircularBufferProduce to submit the written data. To read, use TPCircularBufferTail to get a pointer to the next data, then TPCircularBufferConsume to free up the space once it has been processed.

TPCircularBufferProduceBytes is a convenience routine for writing data straight to the buffer.

The implementation is thread-safe (no need for locks) in the case of a single producer and single consumer.

Here’s an example, using circular buffers to implement a simple playthrough scheme that plays audio coming in from the microphone:

@interface MyAudioController : NSObject {
    TPCircularBuffer buffer;
    AudioUnit ioAudioUnit;
}
@end

#define kBufferLength 1024

@implementation MyAudioController

- (void)setup {
    // Initialise buffer
    TPCircularBufferInit(&buffer, kBufferLength);
    // Setup audio, etc
}

- (void)dealloc {
    // Release buffer resources
    TPCircularBufferCleanup(&buffer);
    [super dealloc];
}

static OSStatus audioInputCallback(void *inRefCon, 
                                   AudioUnitRenderActionFlags *ioActionFlags, 
                                   const AudioTimeStamp *inTimeStamp, 
                                   UInt32 inBusNumber, 
                                   UInt32 inNumberFrames, 
                                   AudioBufferList *ioData) {
    MyAudioController *THIS = (MyAudioController *)inRefCon;

    // Render audio into buffer (mData = NULL lets AudioUnitRender supply its own buffer)
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mData = NULL;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;
    OSStatus err = AudioUnitRender(THIS->ioAudioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, &bufferList);
    if ( !checkResultLite(err, "AudioUnitRender") ) { return err; }

    // Put audio into circular buffer
    TPCircularBufferProduceBytes(&THIS->buffer, bufferList.mBuffers[0].mData, inNumberFrames * 2 * sizeof(SInt16));
    return noErr;
}

static OSStatus audioOutputCallback(void *inRefCon, 
                                    AudioUnitRenderActionFlags *ioActionFlags, 
                                    const AudioTimeStamp *inTimeStamp, 
                                    UInt32 inBusNumber, 
                                    UInt32 inNumberFrames, 
                                    AudioBufferList *ioData) {
    MyAudioController *THIS = (MyAudioController *)inRefCon;
    int32_t bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *targetBuffer = (SInt16*)ioData->mBuffers[0].mData;

    // Pull audio from playthrough buffer
    int32_t availableBytes;
    SInt16 *buffer = TPCircularBufferTail(&THIS->buffer, &availableBytes);
    int32_t bytesRead = MIN(bytesToCopy, availableBytes);
    memcpy(targetBuffer, buffer, bytesRead);

    // Zero-fill any remainder on underrun, to avoid playing stale data
    if ( bytesRead < bytesToCopy ) {
        memset((char*)targetBuffer + bytesRead, 0, bytesToCopy - bytesRead);
    }

    TPCircularBufferConsume(&THIS->buffer, bytesRead);
    return noErr;
}

@end


  1. Posted March 5, 2013 at 8:56 am | Permalink

I’m a Chinese girl and an iOS engineer; your blog is useful. Thanks!

    • Pete
      Posted April 22, 2013 at 8:26 am | Permalink

I’m a Chinese iOS developer too. Can we communicate with each other online? What’s your contact?

      • JerryRo
        Posted May 21, 2013 at 6:41 am | Permalink

Me too!

  2. Posted April 2, 2013 at 9:49 am | Permalink

    I have been looking around for a simple circular buffer implementation for an audio application I am writing, and this seems to fit the bill quite well. What is the license on the code? I need something that is either public domain, released under the Zlib/Libpng license, or under the Boost software license.

  3. Owolabi Olowookere
    Posted August 26, 2013 at 11:49 pm | Permalink

    Hi Michael,

    Thanks for all your efforts in making audio processing accessible.

Please, I’m currently working on a project that implements FSK demodulation on the iPhone. I formerly used the ring buffer modulation method (from the Learning Core Audio text) to process the audio samples, but discovered that the sampling procedure did not differentiate between the frequencies of the square wave (the FSK-modulated signal): when I reduced the frequency of the FSK-modulated signal, I still obtained the same number of positive and negative samples. I have tried to adapt your circular buffer method, but I’m not getting any result on the console. The program is not reading the signal from the mic any more, and it’s not displaying the pulled sample on the console. Here is the code:

    BOOL TPCircularBufferInit(TPCircularBuffer *buffer, int32_t length);

    // Listing 10.28 Initial Setup of Render Callback from RemoteIO
    static OSStatus InputFSKDemodulationRenderCallback (
            void                       *inRefCon,       // A pointer to a struct containing the complete audio data
                                                        //    to play, as well as state information such as the
                                                        //    first sample to play on this invocation of the callback.
            AudioUnitRenderActionFlags *ioActionFlags,  // Unused here. When generating audio, use ioActionFlags to
                                                        //    indicate silence between sounds; for silence, also
                                                        //    memset the ioData buffers to 0.
            const AudioTimeStamp       *inTimeStamp,    // Unused here.
            UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                                        //    frames of audio data to play.
            UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                                        //    pointed to by the ioData parameter.
            AudioBufferList            *ioData          // On output, the audio data to play. The callback's primary
                                                        //    responsibility is to fill the buffer(s) in the
                                                        //    AudioBufferList.
            ) {
        AudioStruct *effectState = (AudioStruct*) inRefCon;

        // Listing 10.29 Copying Captured Samples to Play-Out Buffer in RemoteIO Render Callback
        // Just copy samples
        UInt32 bus1 = 1;
                   "Couldn't render from RemoteIO unit");

        // Listing 10.30 Performing FSK demodulation Effect on a Buffer of Samples
        // Walk the samples
        AudioUnitSampleType *outSample = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
        AudioUnitSampleType sample = (AudioUnitSampleType) &outSample;
        AudioUnitSampleType samplesinelow = 0;
        AudioUnitSampleType samplecosinelow = 0;
        AudioUnitSampleType samplesinehigh = 0;
        AudioUnitSampleType samplecosinehigh = 0;
        AudioUnitSampleType samplesinesumlow = 0;
        AudioUnitSampleType samplesinesumhigh = 0;
        AudioUnitSampleType samplecosinesumlow = 0;
        AudioUnitSampleType samplecosinesumhigh = 0;
        AudioUnitSampleType samplesumlow = 0;
        AudioUnitSampleType samplesumhigh = 0;
        NSMutableArray *FSKdata = [NSMutableArray arrayWithCapacity:1000];
        memset(outSample, 0, inNumberFrames * kUnitSize * 2);
        for (int bufCount = 0; bufCount < ioData->mNumberBuffers; bufCount++) {
            // AudioBuffer buf = ioData->mBuffers[bufCount];
            int currentFrame = 0;
            while (currentFrame <= inNumberFrames) {
                // Pull audio from circular buffer
                int32_t availableBytes;
                AudioUnitSampleType *bufferTail = TPCircularBufferTail(&effectState->circularBuffer, &availableBytes);

                memcpy(outSample, bufferTail, MIN(availableBytes, inNumberFrames * kUnitSize * 2));
                TPCircularBufferConsume(&effectState->circularBuffer, MIN(availableBytes, inNumberFrames * kUnitSize * 2));
                effectState->currentSampleNum += MIN(availableBytes / (kUnitSize * 2), inNumberFrames);

                NSLog(@" currentsample is: %ld", sample);

                float lowtheta = effectState->lowPhase * M_PI * 2;
                float hightheta = effectState->highPhase * M_PI * 2;
                samplesinelow = sin(lowtheta) * sample/1048576;
                samplecosinelow = cos(lowtheta) * sample/1048576;
                samplesinehigh = sin(hightheta) * sample/1048576;
                samplecosinehigh = cos(hightheta) * sample/1048576;
                samplesinesumlow += samplesinelow;
                samplesinesumhigh += samplesinehigh;
                samplecosinesumlow += samplecosinelow;
                samplesinesumhigh += samplesinehigh;

                effectState->lowPhase += 1 /
                    (effectState->asbd.mSampleRate /
                // NSLog (@" phase is: %f", effectState->lowPhase);

                /* effectState->highPhase += 0.615384615 / */
                effectState->highPhase += 1 /
                    (effectState->asbd.mSampleRate /
                if (effectState->lowPhase >= 1.0)
                    effectState->lowPhase = 0.0;
                if (effectState->highPhase >= 1.0)
                    effectState->highPhase = 0.0;
                // NSLog (@" phase is: %f", effectState->highPhase);
                if (counter >= effectState->asbd.mSampleRate) {
                    samplesumhigh = abs(samplesinesumhigh) + abs(samplecosinesumhigh);
                    samplesumlow = abs(samplesinesumlow) + abs(samplecosinesumlow);
                    // NSLog (@" samplesumhigh is: %ld", samplesinesumhigh);
                    // lowfrequencycounter = 0;
                    if (samplesumlow > samplesumhigh) {
                        [FSKdata addObject:[NSNumber numberWithUnsignedInt:0]];
                        // NSLog(@"bit= %i",0);
                    else if (samplesumhigh > samplesumlow) {
                        [FSKdata addObject:[NSNumber numberWithUnsignedInt:1]];
                        // NSLog(@"bit= %i",1);
                    samplesumhigh = 0;
                    samplesumlow = 0;
                    samplecosinesumhigh = 0;
                    samplecosinesumlow = 0;
                    samplesinesumhigh = 0;
                    samplesinesumlow = 0;
                    counter = 0;
              /*    NSLog(@" sample counter is: %i", counter);
            // }
        if (availableBytes <= inNumberFrames * kUnitSize * 2) {
            // Buffer is running out or playback is finished
            // effectState->bufferIsReady = NO;
            // audioObject.playingiPod = NO;
            effectState->currentSampleNum = 0;
            // NSLog(@" sample counter is: %i", counter);

        } } // } // for the If statement
        return noErr;


    All I keep getting on the console is:

    2013-08-26 17:46:55.783 ElisaDongle[1164:907] hardwareSampleRate = 44100.000000
    RIO started!
    2013-08-26 17:46:55.886 ElisaDongle[1164:6403] currentsamplenumber is: 68146516
    2013-08-26 17:46:55.889 ElisaDongle[1164:6403] currentsamplenumber is: 68146516
    2013-08-26 17:46:55.891 ElisaDongle[1164:6403] currentsamplenumber is: 68146516
    ……

    I would be grateful for any help you can give.

    Best Regards, Olowookere Owolabi