Blog

A simple, fast circular buffer implementation for audio processing

Circular buffers are pretty much what they sound like – arrays that wrap around. They’re fantastically useful as scratch space for audio processing, and for passing audio around efficiently in general.

They’re designed for FIFO (first-in-first-out) use, like storing audio coming in from the microphone for later playback or processing.

Consider a naive alternative: you copy the incoming audio into an NSData you allocate, then pass that NSData off. This means you’re allocating memory each time, and deallocating it later once you’re done processing. That allocation incurs a penalty, which can be a show-stopper in an audio pipeline – the Core Audio documentation advises against any allocation within a render callback, for example.

Alternatively, you can allocate space in advance, and write to that, but that has problems too: Either you have a synchronisation nightmare, or you spend lots of time moving bytes around so that the unprocessed audio is always at the beginning of the array.

A better solution is to use a circular buffer, where data goes in at the head, and is read from the tail. When you produce data at the head, the head moves up the array, and wraps around at the end. When you consume at the tail, the tail moves up too, so the tail chases the head around the circle.
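To make the head/tail mechanics concrete, here’s a minimal single-threaded sketch of the scheme just described. The names (`MiniRingBuffer`, `rb_produce`, `rb_consume`) are illustrative only – they are not part of TPCircularBuffer, which is shown next.

```c
/* Minimal sketch of the head/tail scheme: head advances on produce,
 * tail chases it on consume, both wrapping via modulo arithmetic. */
#include <stdint.h>
#include <string.h>

#define RB_LENGTH 8

typedef struct {
    uint8_t buffer[RB_LENGTH];
    int     head;      /* next write position */
    int     tail;      /* next read position */
    int     fillCount; /* bytes currently stored */
} MiniRingBuffer;

/* Returns the number of bytes actually written (fewer if the buffer fills) */
static int rb_produce(MiniRingBuffer *rb, const uint8_t *src, int len) {
    int written = 0;
    while ( written < len && rb->fillCount < RB_LENGTH ) {
        rb->buffer[rb->head] = src[written++];
        rb->head = (rb->head + 1) % RB_LENGTH;  /* wrap at the end */
        rb->fillCount++;
    }
    return written;
}

/* Returns the number of bytes actually read */
static int rb_consume(MiniRingBuffer *rb, uint8_t *dst, int len) {
    int read = 0;
    while ( read < len && rb->fillCount > 0 ) {
        dst[read++] = rb->buffer[rb->tail];
        rb->tail = (rb->tail + 1) % RB_LENGTH;  /* tail chases the head */
        rb->fillCount--;
    }
    return read;
}
```

The real implementation below avoids the per-byte modulo entirely, as the update note explains.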

Here’s a simple C implementation I recently put together for my app Loopy: TPCircularBuffer

typedef struct {
    void             *buffer;
    int32_t           length;
    int32_t           tail;
    int32_t           head;
    volatile int32_t  fillCount;
} TPCircularBuffer;
 
bool  TPCircularBufferInit(TPCircularBuffer *buffer, int32_t length);
void  TPCircularBufferCleanup(TPCircularBuffer *buffer);
void  TPCircularBufferClear(TPCircularBuffer *buffer);
 
// Reading (consuming)
void* TPCircularBufferTail(TPCircularBuffer *buffer, int32_t* availableBytes);
void  TPCircularBufferConsume(TPCircularBuffer *buffer, int32_t amount);
 
// Writing (producing)
void* TPCircularBufferHead(TPCircularBuffer *buffer, int32_t* availableBytes);
void  TPCircularBufferProduce(TPCircularBuffer *buffer, int32_t amount);
int   TPCircularBufferProduceBytes(TPCircularBuffer *buffer, const void* src, int32_t len);

Update: The implementation has been recently amended to make use of a neat virtual memory mapping technique that inserts a virtual copy of the buffer memory directly after the buffer’s end, negating the need for any buffer wrap-around logic. Clients can simply use the returned memory address as if it were contiguous space.

The virtual memory technique was originally proposed by Philip Howard, and adapted to Darwin by Kurt Revis.

Use TPCircularBufferHead to get a pointer into the buffer to write to, followed by TPCircularBufferProduce to submit the written data; then use TPCircularBufferTail to get a pointer to the next data to read, followed by TPCircularBufferConsume to free up the space once you’ve processed it.

TPCircularBufferProduceBytes is a convenience routine for writing data straight to the buffer.

The implementation is thread-safe (no need for locks) in the case of a single producer and single consumer.
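The single-producer/single-consumer guarantee rests on each thread owning one index (the producer only advances the head, the consumer only the tail) and on barrier-protected updates to the shared fill count. Here’s an illustrative C11 sketch of that scheme – not the TPCircularBuffer source itself, which uses the OSAtomic barrier operations:

```c
/* Lock-free single-producer/single-consumer queue sketch (C11 atomics).
 * The atomic fill-count updates act as the memory barriers: data is
 * written before the count is published, and read before it is retired. */
#include <pthread.h>
#include <stdatomic.h>

#define N   100000
#define CAP 64

static int buffer[CAP];
static int head, tail;            /* each touched by exactly one thread */
static atomic_int fillCount;      /* the only shared mutable counter */
static long long consumedSum;

static void *producer(void *arg) {
    (void)arg;
    for ( int i = 1; i <= N; ) {
        if ( atomic_load(&fillCount) < CAP ) {   /* space available? */
            buffer[head] = i++;
            head = (head + 1) % CAP;
            atomic_fetch_add(&fillCount, 1);     /* publish after the write */
        }
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for ( int i = 0; i < N; ) {
        if ( atomic_load(&fillCount) > 0 ) {     /* data available? */
            consumedSum += buffer[tail];
            tail = (tail + 1) % CAP;
            atomic_fetch_sub(&fillCount, 1);     /* retire after the read */
            i++;
        }
    }
    return NULL;
}
```

With more than one producer or consumer the index updates would race, which is why the guarantee is limited to the one-in, one-out case.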

Here’s an example, using circular buffers to implement a simple playthrough scheme that plays audio coming in the microphone:

@interface MyAudioController : NSObject {
  TPCircularBuffer buffer;
  AudioUnit ioAudioUnit; // the Remote IO audio unit, used in the callbacks below
}
 
@end
 
#define kBufferLength 1024
#define kInputBus 1 // the Remote IO unit's input element
 
@implementation MyAudioController
 
- (void)setup {
    // Initialise buffer
    TPCircularBufferInit(&buffer, kBufferLength);
 
    // Setup audio, etc
}
 
- (void)dealloc {
    // Release buffer resources
    TPCircularBufferCleanup(&buffer);
 
    [super dealloc];
}
 
static OSStatus audioInputCallback(void *inRefCon, 
                                   AudioUnitRenderActionFlags *ioActionFlags, 
                                   const AudioTimeStamp *inTimeStamp, 
                                   UInt32 inBusNumber, 
                                   UInt32 inNumberFrames, 
                                   AudioBufferList *ioData) {
    MyAudioController *THIS = (MyAudioController *)inRefCon;
 
    // Render audio into buffer
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mData = NULL;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;
    OSStatus err = AudioUnitRender(THIS->ioAudioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, &bufferList);
    if ( !checkResultLite(err, "AudioUnitRender") ) { return err; }
 
    // Put audio into circular buffer
    TPCircularBufferProduceBytes(&THIS->buffer, bufferList.mBuffers[0].mData, inNumberFrames * 2 * sizeof(SInt16));

    return noErr;
}
 
static OSStatus audioOutputCallback(void *inRefCon, 
                                    AudioUnitRenderActionFlags *ioActionFlags, 
                                    const AudioTimeStamp *inTimeStamp, 
                                    UInt32 inBusNumber, 
                                    UInt32 inNumberFrames, 
                                    AudioBufferList *ioData) {
    MyAudioController *THIS = (MyAudioController *)inRefCon;
 
    int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
    SInt16 *targetBuffer = (SInt16*)ioData->mBuffers[0].mData;
 
    // Pull audio from playthrough buffer
    int32_t availableBytes;
    SInt16 *buffer = TPCircularBufferTail(&THIS->buffer, &availableBytes);
    memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
    TPCircularBufferConsume(&THIS->buffer, MIN(bytesToCopy, availableBytes));
 
    return noErr;
}
@end

62 Comments

  1. Paul
    Posted March 25, 2011 at 3:04 pm | Permalink

    Thanks so much for this post. I’ve been trying to research a good way to use circular buffers for a long time. How would you pull samples from a file? Is ExtAudioFileRead the best option? Should it be called in a loop on a different thread?

    Thanks again for your help! Your posts have been invaluable.

    • Posted March 26, 2011 at 2:06 pm | Permalink

      My pleasure! Glad it’s helpful.

      It depends on the kind of file – if you’re reading a file that’s linear PCM and much the same as your target format, then you may find the AudioFile routines adequate; I think ExtAudioFile is a wrapper around AudioFile that takes care of compressed formats and such, but it may be overkill if you’re just loading a caf, for example.

      As far as loading in a different thread goes, that somewhat depends on your setup, and how much your app’s doing at once. I’d start by putting the loading stuff just in the render thread, and see how it performs. If it’s glitchy, then move it offline, and use something like a circular buffer to ferry the audio data around.

  2. Andrey Subbotin
    Posted April 2, 2011 at 9:53 am | Permalink

    It may be useful to check out the ring buffer impl. by Kurt Revis at http://www.snoize.com/ His PlayBufferedSoundFile ( http://www.snoize.com/Code/PlayBufferedSoundFile.tar.gz ) brilliantly uses memory mapping to make the buffer wrap-around transparently, plus the locking model there is quite effective. Cheers.

    • Posted April 2, 2011 at 10:52 am | Permalink

      Wow, thanks for the heads-up, Andrey – That implementation looks fantastic. Very clever use of the memory map!

  3. Jan
    Posted May 16, 2011 at 11:11 am | Permalink

    Thanks! What kind of license does the code fall under?

    • Posted May 16, 2011 at 6:19 pm | Permalink

      Hey Jan – I haven’t actually got a particular license in mind, but it’s basically free for any kind of use you wish, commercial or otherwise, but warranty-free. That old chestnut. Go for it =)

      (Also if you’re deciding upon an implementation, be sure to check out Kurt Revis’s implementation that Andrey mentioned above, as well – it’s great)

  4. James
    Posted May 31, 2011 at 12:36 am | Permalink

    I’ve been looking into circular buffers for use in AudioUnits – this looks great! Do you have any experience in implementing such a buffer in the AudioUnit’s ‘Process’ method?

    • Posted May 31, 2011 at 10:24 am | Permalink

      Great!

      I can’t say I do, no, but is there anything in particular you’re wondering about? There shouldn’t be any particularly special circumstances there.

      • James
        Posted June 1, 2011 at 11:50 pm | Permalink

        Ah, okay that’s good to know. I’m pretty new to C/C++ so I’ve just got to work out how to implement this code into my current project. The callback function gives me standard buffers, from which I make the circular buffers, I’m guessing?

  5. James
    Posted June 9, 2011 at 7:09 am | Permalink

    So in my standard callback routine, I’ll do something like this:

    • Receive a pointer to a buffer from the host (say, 10 samples)
    • Consume 10 samples from the ring buffer (which is, say 100 samples long)
    • Place the host-provided samples in the tail of the ring buffer
    • James
      Posted June 9, 2011 at 7:10 am | Permalink

      Is that right? I’d like to understand the method before I begin the implementation!

    • Posted June 9, 2011 at 10:52 am | Permalink

      Hey James,

      Almost – except ‘consume’ is what you do at the other end. You want to get a buffer of the available audio samples, the length of which will be indicated by the parameters – use that length, rather than any fixed value. Then, copy that buffer into the ring buffer. Use the ‘copy’ function for that, as it’s easiest – or you can do it manually then call ‘produce’, using the ‘space’ function to get an indication of how much space is available in the buffer.

      Then, in the place where you use the samples, you obtain the fill count of the ring buffer, grab a number of samples ≤ the fill count, process the samples, then call ‘consume’ to indicate that you’ve consumed the samples and that part of the ring buffer is now free.

      • James
        Posted June 11, 2011 at 8:13 am | Permalink

        Great! Every little bit of knowledge brings me a step closer. At the moment I’m getting build errors on line 15 of this: http://www.pastie.org/2050732 The error is: Undefined symbols: “TPCircularBufferInit(TPCircularBufferRecord*, int)”, referenced from:

        StereoEffect3::StereoEffect3(ComponentInstanceRecord*)in StereoEffect3-1DB483EC8D75827.o

        StereoEffect3::StereoEffect3(ComponentInstanceRecord*)in StereoEffect3-1DB483EC8D75827.o ld: symbol(s) not found collect2: ld returned 1 exit status

        Do you have any idea about why it can’t access the functions?

        • Posted June 11, 2011 at 10:12 am | Permalink

          Hi James,

          I’m not quite sure, but I have noticed that the c++ compiler/linker sometimes has trouble linking with c/obj-c sources. Maybe try renaming TPCircularBuffer.c to .cpp?

          • James
            Posted June 11, 2011 at 10:21 am | Permalink

            Oh that seems to have at least got rid of that error. It has now introduced 2 errors in the TPCircularBuffer.cpp, but I’m guessing that’s more because the syntax between Obj-C and C++ is different. I’ll look into that now. Thanks again Michael!

          • James
            Posted June 19, 2011 at 11:48 pm | Permalink

            This is weird. That second error has gone but I’m left with the first error. Renaming to .cpp didn’t make much of a difference! I must be doing something fundamentally wrong here.

          • Pier
            Posted October 30, 2012 at 7:27 am | Permalink

            I’ve had the same problem – one possible issue is that you added the files into the project but they were not included in Targets -> Build Phases -> Compile Sources. After I added the TPCircularBuffer.c file the problem went away. Hope this helps others who are reading this.

        • Shahzeb
          Posted March 14, 2012 at 4:15 pm | Permalink

          Not sure if it matters anymore, but I’ll post my solution anyways, in case somebody else is having the same problem as well.

          When you import the header, instead of writing

          #import "TPCircularBuffer.h"

          just write the following:

          extern "C" {
          #include "TPCircularBuffer.h"
          }

  6. Posted July 10, 2011 at 11:58 am | Permalink

    If anyone’s watching this thread and using TPCircularBuffer, I recommend grabbing the latest version – I replaced the atomic increment with one that includes a memory barrier, which I should’ve done in the first place. This ensures that the head/tail pointer is stored to memory before altering the fill count, for better thread safety.

  7. dubbeat
    Posted August 23, 2011 at 12:27 pm | Permalink

    Hi Michael. I think this sample may suit my current needs quite well. I have been loading very small mp3s into audio buffer lists but now I’m using very large mp3s and quickly running out of memory. Do you reckon I could use this to only read data from an mp3 on disk as I need it?

    I have a remote IO that calls a callback that reads pcm from memory. Would it sound wise to you to:

    Replace the pcm-from-memory code with code that checks the amount of space left in the buffer. If the space left is equal to the buffer length, start extracting more mp3 into the buffer?

    Or… should my check for when to extract more mp3 data be more along the lines of: when the space left is equal to half the length, to give the audio more time to extract?

    • Posted August 23, 2011 at 1:05 pm | Permalink

      Hey,

      I’m not 100% sure I understood your question, but I think you’re wondering when to trigger a load of more audio data – at the buffer-empty point, or at the half-empty point. The ‘low water mark’ trigger really depends on a number of factors (like where you’re doing the load, and how much you’re using the processor, which dictates how fast you’ll load the audio).

      Firstly, are you performing the file load from within the Remote IO thread, or from an offline thread?

      If it’s the former (loading from the Remote IO thread), then it doesn’t matter where your low water mark is. In fact, it’s quite meaningless, as you don’t even strictly need to manage a separate buffer: you can just load more data each time the callback is called. If you’re doing it this way, keep an eye on the audio performance, though. If you get stuttering, you’ll need to move the load to an offline thread. (You’ll notice the Remote IO sample code often says “Don’t hold locks, don’t take time” in the callback, ‘cos you have a tight deadline).

      If it’s the latter (loading from an offline thread), then I suggest you find your trigger empirically – you definitely want it above the buffer empty point, or else you’ll starve the IO thread, of course. Start with, say, 2 * the IO audio buffer length, and see how it goes. I’m pretty sure I’ve seen sample code around that does this – I seem to remember it using the term “low water mark” too, so maybe search for that, for ideas.

      Good luck.
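The trigger logic described in this reply can be sketched roughly as follows; `shouldRefill` is a hypothetical helper, and the factor of 2 is just the suggested starting point, to be tuned empirically:

```c
/* 'Low water mark' refill trigger: when the bytes remaining in the ring
 * buffer drop to (or below) some multiple of the IO buffer length,
 * signal the loader thread to top it up. */
#include <stdbool.h>
#include <stdint.h>

static bool shouldRefill(int32_t fillBytes, int32_t ioBufferLengthBytes) {
    const int32_t lowWaterMark = 2 * ioBufferLengthBytes;  /* tune this */
    return fillBytes <= lowWaterMark;
}
```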

      • dubbeat
        Posted August 23, 2011 at 1:18 pm | Permalink

        Hey.

        You actually understood the question 100% correctly :)

        Originally when I was dealing with tiny mp3s the entire file load was done outside of the remote IO even before the auGraph begun. Then in my remote IO I was just happily reading from memory.

        There’s a lot of processing going on in my callback, such as sample-by-sample effects. I’m already pushing things very close to the deadline, so I reckon my file open -> file property -> file seek -> file read process will have to be done in a separate async thread.

        Thanks for bouncing the idea back off of me. Other opinions are helpful! The term “low water mark” is new to me so hopefully a bit of searching will yield something useful.

  8. dubbeat
    Posted November 15, 2011 at 12:25 pm | Permalink

    Hi Michael,

    Just a quick curiosity here. I see you use SInt16 everywhere, which to my knowledge is AudioSampleType. Throughout my project I’ve been using AudioUnitSampleType, which is 32-bit. Is there an advantage to using one over the other?

    The reason I was using AudioUnitSampleType was that I read it’s to be used for audio processing.

    • Posted November 15, 2011 at 12:31 pm | Permalink

      Hi dubbeat,

      The main reason I’m using 16-bit audio is the lower storage and processing requirements – if you use 32-bit audio, then you have to handle double the data per time slice, in return for what is (for my purposes, anyway) a pretty limited gain (greater dynamic resolution). Most audio’s 16-bit anyway, and 32-bit audio is only really useful – to my knowledge, anyway – for top-level professional studio applications.

      I personally don’t really see any real benefits of 32-bit over 16, for most applications, although some real audiophiles may disagree.

  9. Shilpa Modi
    Posted March 2, 2012 at 2:55 am | Permalink

    Great blog. Very helpful posts. Keep up the good work Michael.

  10. SD
    Posted March 10, 2012 at 1:23 am | Permalink

    So what goes in the ‘// Setup audio, etc’ part? I’m new to iOS audio. Am I in over my head? Maybe a pointer to a good resource would help? Trying to learn on my own…

  11. StefanS
    Posted April 9, 2012 at 11:22 am | Permalink

    Hi Michael,

    I implemented your ring buffer in my Recording and Playback callbacks as you have described. It all seems to work fine. However, I would expect that if I changed the length of the buffer kBufferLength to be less than the inNumberFrames parameter, that some of the samples would be skipped. Strangely, this does not occur. Even if I set the length to a value as small as 1, it still works perfectly. How is this possible? What am I missing?

    Another thing: As I asked you a few days ago on the discussion about Remote IO, I need this buffer so I can supply my SPEEX codec 160 samples at a time (not 512). I guess I am going to need three ring buffers: One for getting the samples from the Audio Render and using those samples as input for the codec, another one for storing the coded data, and a third one for the decoded data and playback. Is that so?

    The most important thing: How am I going to do the synchronization? Whenever the Recording callback is called and 512 samples are supplied, that’s when I call the codec who needs 160 samples. This way, for every 512 samples I get 160. I need to somehow call my codec more often.

    Michael, thanks a lot. Your help is invaluable to me.

    StefanS

    • Posted April 9, 2012 at 12:26 pm | Permalink

      Hey Stefan,

      Oh, that’s my bad – insufficient documentation. I’ve just updated it. It’s due to the way the memory mirroring technique works – the buffer is actually allocated in multiples of the device page size (4096 bytes). So, it uses the lowest multiple of the page size that is at least the buffer size you provide.
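The rounding described here can be sketched as follows (`roundUpToPageSize` is an illustrative helper, not the library’s actual code):

```c
/* Round a requested buffer length up to the smallest page-size multiple
 * that can contain it - which is why even a requested length of 1
 * behaves like a 4096-byte buffer. */
#include <stdint.h>

static int32_t roundUpToPageSize(int32_t length, int32_t pageSize) {
    return ((length + pageSize - 1) / pageSize) * pageSize;
}
```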

      I’m afraid I don’t have the spare mental bandwidth to help you with your problem – I suggest talking to the very capable folks on the Core Audio mailing list.

  12. StefanS
    Posted April 18, 2012 at 10:02 am | Permalink

    Hello Michael,

    This ring buffer seems to work perfectly for sample rate 44100. However, if I change the sample rate of the Remote IO unit to 8000, for example, it still works but I can hear some strange noises (like tiny cracks or interruptions). Do you have any idea where the problem might be? (If I don’t use the ring buffer, but instead use an ordinary buffer, there is no such problem at 8000Hz.)

    Thank you again, Stefan

    • Posted April 18, 2012 at 10:42 am | Permalink

      Hey Stefan,

      Are you using the AudioBufferList utilities, or just the plain functions? If it’s the former, then I’d be interested to either take a look at your code, or even see a sample project that demonstrates the issue – it is relatively young code, and I can’t guarantee there aren’t some edge case issues.

      If you’re just using the plain functions, then this is just a dumb pipe for bytes – what those bytes mean doesn’t make any difference to the buffer. I suspect if you’re seeing buffer continuity problems, then they lie in the code that enqueues/dequeues from the buffer.

      Either way, I must admit I’d be surprised if the problem lies in the buffer – on balance of probability, I think it’s more likely an issue in the way you’re enqueuing or dequeueing audio.

      • StefanS
        Posted April 18, 2012 at 12:57 pm | Permalink

        Actually, I am only using the plain functions. I’ll look into it once more.

        I am also planning on trying the AudioBufferList utilities, so I’ll keep you posted if anything unpredictable comes up.

        Stefan

        • StefanS
          Posted April 18, 2012 at 4:15 pm | Permalink

          Hey Michael,

          Just to let you know I fixed my problem with the unpleasant sound for 8000Hz. In the playback callback I replaced the line

          TPCircularBufferConsume(&THIS->buffer, sampleCount);

          with

          TPCircularBufferConsume(&THIS->buffer, MIN(bytesToCopy, availableBytes));

          Regards, Stefan

  13. owen
    Posted May 11, 2012 at 2:17 pm | Permalink

    ‘MyAudioController’ does not have a member named ‘ioAudioUnit’ Use of undeclared identifier ‘sampleCount’

    I have been working on this stuff for a few days now, and I swear every time I figure something out I hit another wall :(. Do you know why I would be getting the above errors? I copied and pasted the code into its own MyAudioController.m file with a plain MyAudioController.h file. I have tried changing the file to .mm as well as .c just to troubleshoot, but that did not help.

  14. SamIAm
    Posted June 25, 2012 at 8:37 pm | Permalink

    I have been adapting the Apple MixerHost sample application to build a small 6-track mixer for an app that lets people remix my songs. My sound files take a long time to load into memory, so I was very happy to find your code. It sounds like it will do exactly what I need; however, I am having trouble following your example. I assume that I need to remove all the methods that I am using to read the files into memory and then, in the inputRenderCallback method, read the audio data out of the circular buffer, but it’s not clear to me how I do this, nor how I let the circular buffer know what files to read in the first place. Do you have any sample code that loads two MP3 files into the circular buffer and then plays them?

  15. vasuki
    Posted July 23, 2012 at 3:48 pm | Permalink

    Hi Michael,

    I’m new to Remote I/O. Thanks for your post.

    I am able to get a buffer from the audio input callback, but the size it returns is always a power of two (whatever I set via the kAudioSessionProperty_PreferredHardwareIOBufferDuration buffer size property). I want 1600 bytes for a specific reason.

    I referred to some of your previous posts and tried to implement a circular buffer, but I am still unable to get the desired number of bytes. Could you help me achieve this?

  16. aly lero
    Posted August 14, 2012 at 6:20 pm | Permalink

    I am using your circular buffer for Audio Unit playback of PCM data received from network packets. It works great. Basically, when the app receives incoming data, it calls a method that uses TPCircularBufferHead and TPCircularBufferProduce to insert data into the circular buffer, and plays back with TPCircularBufferTail and TPCircularBufferConsume. It works fine while data keeps coming in. However, if the incoming data stops and then resumes several seconds later, availableBytes immediately shows a very large number outside the buffer’s size limit, and it keeps growing. At the same time, the audio quality degrades. Eventually, the app crashes. Any ideas what the problem could be?

    • Posted August 14, 2012 at 6:23 pm | Permalink

      Hey aly,

      Sounds like you’re not checking and adhering to the available bytes indicator – if you consume more from the buffer than there is, then you’ll bust the buffer’s integrity.

      So, always check the number of bytes in the buffer (it’s the number you pass in by reference in TPCircularBufferTail).

      If you are, then forgive me – I’ll need to see some code to help you.

      • aly lero
        Posted August 15, 2012 at 7:30 pm | Permalink

        Michael,

        The problem is fixed. The cause was that more than one incoming data stream was being written to a single buffer.

        Thanks.

  17. aly lero
    Posted August 15, 2012 at 4:43 am | Permalink

    Michael,

    Thanks for your quick response. I thought I was using the MIN function. Here is the snippet:

        AudioBuffer buffer = ioData->mBuffers[0];
        UInt32 size = buffer.mDataByteSize;
        memset(buffer.mData, 0,  size);
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([iosAudio TPC] , &availableBytes);
        memcpy(buffer.mData, bufferTail, MIN(availableBytes, size));
        buffer.mDataByteSize = size;
        TPCircularBufferConsume([iosAudio TPC], MIN(availableBytes, size));

  18. Patrick
    Posted August 21, 2012 at 9:17 pm | Permalink

    Hi Michael,

    Thanks for sharing this great code! I have a problem using your ring buffer, though. My app runs perfectly in the simulator, but not on an actual device, where I get a NULL pointer when I expect a fresh buffer from the mic. I have posted a detailed description of my problem on Stack Overflow, and I was wondering if you could have a look?

    Thank you so much, and keep up the good work!

    Patrick

  19. Phil
    Posted September 1, 2012 at 4:19 am | Permalink

    What happens if you need to handle an interruption? Such examples would be in the event that a microphone is unplugged or if a phone call happens. It seems that all of the audio change route callback examples are at the AV level.

  20. Posted September 5, 2012 at 1:08 pm | Permalink

    Hey Michael, I was going over the iOS audio book (http://www.amazon.com/Learning-Core-Audio-Hands-On-Programming/dp/0321636848/ref=sr_1_1?ie=UTF8&qid=1346846807&sr=8-1&keywords=core+ios+audio) and it referred to a native iOS implementation of a ring buffer (located at /Developer/Extras/CoreAudio/PublicUtility).. any reason why you recreated this? any advantages?

    • Posted September 5, 2012 at 1:23 pm | Permalink

      Hey – A few reasons, yeah.

      1. Firstly, CARingBuffer is very specialised – it’ll only store AudioBufferLists. If you want to store anything else, you’re out of luck. TPCircularBuffer on its own is just a dumb pipe – you can put anything in it you want to, arbitrary structures, a simple array of bytes, whatever you want. I make extensive use of TPCircularBuffer for a variety of lock-free synchronisation tasks, not just audio.
      2. CARingBuffer is very limited even for working with audio buffers – for example, you can’t store and retrieve the audio timestamps along with buffers, as far as I can tell. You can’t deal with discontinuous audio. You can’t write audio directly to the buffer. You can’t browse through the data in advance of dequeuing. TPCircularBuffer+AudioBufferList has a large number of useful features when dealing with audio buffers, and it’s got a really simple implementation.
      3. Finally, CARingBuffer has got a really complicated, hard-to-read implementation, and it’s barely documented – not a good combination. It doesn’t make any statements about thread-safety – which is a huge downside. It may be thread safe for the one-producer, one-consumer case, like some people on StackOverflow seem to think. It may not be. It just doesn’t say anywhere, and I don’t want to run that risk.
    • Posted September 5, 2012 at 1:30 pm | Permalink

      Speaking of documentation – I really need to update mine, too =) There’s header docs, but I need to rewrite the programming guide…

  21. Posted September 5, 2012 at 1:49 pm | Permalink

    Michael, I’m afraid you forgot to mention one very important advantage of TPCircularBuffer over CARingBuffer: excellent, prompt, and courteous support and feedback from its owner.

    Thanks a lot man for the quick response and for your hard work!

    • Posted September 5, 2012 at 1:50 pm | Permalink

      Shucks – thank you! =)

      • Posted September 18, 2012 at 6:19 pm | Permalink

        Hey Michael, I want to use your solution to solve an architectural predicament I’ve visualized here (http://stackoverflow.com/questions/12462646/purge-the-contents-of-a-data-container-in-real-time). My concern with the ring-buffer solution is that data writing and reading on the ring buffer must happen at roughly the same speed. In my scenario I have no guarantee of that happening.. (the host phone is basically receiving audio in real time, and it cannot interrupt the audio reception as it will lose incoming data.. so I cannot control the rate of reception by locking/pausing it etc).. I figured I’ll just play around with the ring buffer size and see if I can hit a spot that works.. And you mentioned it’s 100% thread safe right? so i can read from and write to it from 2 different threads right?

      • Posted September 19, 2012 at 7:13 am | Permalink

        also as a quick note.. this article (http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing) shows that when it comes to real time audio.. there is extremely low tolerance for latency for glitch free performance.. it says that code should be lock free, avoid using free/malloc etc.. and a lot of restrictions.. I wonder what’s your take on all of that vis-a-vis your ring buffer

      • Posted September 19, 2012 at 7:19 am | Permalink

        and one more thing.. how do I declare a TPCircularBuffer property for a class? should it be atomic? you mentioned something about its atomicity in your GitHub readme that I’m not sure about.

        ie @property (atomic, assign, readwrite) TPCircularBuffer buffer;

  22. jaap
    Posted September 28, 2012 at 8:47 pm | Permalink

    I don’t see it mentioned anywhere on the site or in the readme, but is this supposed to run as (OSX) kernel code? For a Linux userspace program I could use __sync_fetch_and_add() for the atomic add, but it looks like I also need to adapt the virtual memory allocation.

    • Posted September 28, 2012 at 9:16 pm | Permalink

      Aside from replacing the OSAtomic routines, it’s as you say – the virtual memory allocation stuff would be the trickiest part of porting it into kernelspace.
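The substitution discussed here might look like the following for a Linux userspace port – a hedged sketch only, with `atomicAddBarrier` as a hypothetical name; the GCC `__sync` builtins themselves imply a full memory barrier:

```c
/* Replacing the OSAtomic barrier add with GCC's __sync builtin, which
 * is itself a full barrier: the buffered data is guaranteed to be
 * stored to memory before the fill count changes. */
#include <stdint.h>

static volatile int32_t fillCount;

static int32_t atomicAddBarrier(volatile int32_t *value, int32_t amount) {
    return __sync_add_and_fetch(value, amount);  /* full memory barrier */
}
```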

  23. Morgan S.
    Posted October 1, 2012 at 6:12 pm | Permalink

    Hi Michael… First of all, thank you for all you have done for the community. Your posts on RemoteIO and ring buffers have helped me a lot and saved me a bunch of times!!

    Since you are clearly the best expert on the web about remoteIO that share & help people, I could really use your help about something…

    I’m like StefanS trying to create a VOIP application using speex. As you must know, speex only accepts 160 samples for encoding… Before trying to send data from one device to another, I’m trying to do the input / encode / decode / send-to-output on the same device… I tried before without the ring buffer; my voice sounded like Darth Vader… Then I realised (still not sure I’m right about it, though) it was because Remote IO gave me not 160 but 185/186 samples. I then decided to try out your ring buffer, but now it is just random sound (I can’t even hear my voice in the background…). Here are my recording and playing callbacks:

    static OSStatus recordingCallback ( void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) {
    PMVOIPAudioUnit *THIS = (PMVOIPAudioUnit *)inRefCon;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mData = NULL;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));
    OSStatus status = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    XThrowIfError(status, "Could not render input");

    // Put audio into circular buffer TPCircularBufferProduceBytes(&THIS->TPbuffer, bufferList.mBuffers[0].mData, inNumberFrames * sizeof(SInt16)); return status;

    }

        static OSStatus playingCallback(void                       *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp       *inTimeStamp,
                                        UInt32                      inBusNumber,
                                        UInt32                      inNumberFrames,
                                        AudioBufferList            *ioData) {
            PMVOIPAudioUnit *THIS = (PMVOIPAudioUnit *)inRefCon;

            int32_t availableBytes;
            int32_t realSize = 0;
            SInt16 *buffer = (short *)TPCircularBufferTail(&THIS->DecodedBuffer, &availableBytes);
            realSize = MIN(inNumberFrames, availableBytes);
            if (buffer == NULL) {
                NSLog(@"No data to play: %i", realSize);
                return noErr;
            }

            ioData->mBuffers[0].mDataByteSize = realSize;
            memcpy(ioData->mBuffers[0].mData, buffer, realSize);

            TPCircularBufferConsume(&THIS->DecodedBuffer, realSize);

            return noErr;
        }

    Here’s the encoding / decoding function (called in a background thread)

        - (void) decompressAudio {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSLog(@"Starting to decompress audio");
            int32_t availableBytes;
            int32_t realSize = 0;
            while (1) {
                usleep(20000);
                SInt16 *buffer = (short *)TPCircularBufferTail(&EncodedBuffer, &availableBytes);
                realSize = MIN(62, availableBytes);
                if (buffer == NULL || realSize < 62) {
                    NSLog(@"DECOMPRESS AUDIO / Not enough data to decode: %i", realSize);
                    continue;
                }
                spx_int16_t *decomp = decodeSpeex((char *)buffer, 62);

                TPCircularBufferConsume(&EncodedBuffer, realSize);
                TPCircularBufferProduceBytes(&DecodedBuffer, decomp, 512);
            }
            [pool release];
        }

        - (void) EncodeAudio {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSLog(@"Starting to Encode audio");
            int32_t availableBytes;

            while (1) {
                usleep(20000);
                SInt16 *buffer = (short *)TPCircularBufferTail(&TPbuffer, &availableBytes);
                int32_t realSize = MIN(160, availableBytes);
                if (buffer == NULL || realSize < 160) {
                    NSLog(@"Does not fit speex needs: %i", realSize);
                    continue;
                }

                TPCircularBufferConsume(&TPbuffer, realSize);
                int size = 0;
                char *encodedAudio = encodeSpeex((spx_int16_t *)buffer, realSize, &size);
                TPCircularBufferProduceBytes(&EncodedBuffer, encodedAudio, size);
                free(encodedAudio);
            }
            [pool release];
        }

    The usleep here is just used for testing… removing it or changing its value doesn’t change the fact that I’m hearing random sounds…

    And finally Here’s my audio init function

    - (void) initAudio {
        try {
        // SPEEX CONFIG
        speex_bits_init(&bits_in);
        speex_bits_init(&bits_out);
        enc_state = speex_encoder_init(&speex_nb_mode);
        dec_state = speex_decoder_init(&speex_nb_mode);
        int quality = 10;
        speex_encoder_ctl(enc_state, SPEEX_SET_QUALITY, &quality);
        int tmp = 1;
        speex_decoder_ctl(dec_state, SPEEX_SET_ENH, &tmp);

        int data = 0;
        speex_encoder_ctl(enc_state,SPEEX_GET_FRAME_SIZE,&data);
        NSLog(@"speex Frame size is: %i", data); // Giving 160
        speex_encoder_ctl(enc_state,SPEEX_GET_BITRATE,&data);
        NSLog(@"Bitrate is: %i", data); // Giving 24600
        speex_encoder_ctl(enc_state,SPEEX_GET_SAMPLING_RATE,&data);
        NSLog(@"Sampling rate is: %i", data); // Giving 8000

    OSStatus status;
    
    XThrowIfError(AudioSessionInitialize(NULL, NULL, rioInterruptionListener, self), "couldn't initialize audio session");
    
    float aBufferLength = 0.02; // In seconds
    status = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                                     sizeof(aBufferLength), &aBufferLength);
    XThrowIfError(status, "");
    
    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory), "couldn't set audio category");
    XThrowIfError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self), "couldn't set property listener");
    
    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    
    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    
    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &rioUnit);
    XThrowIfError(status, "1");
    
    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioOutputUnitProperty_EnableIO, 
                                  kAudioUnitScope_Input, 
                                  kInputBus,
                                  &flag, 
                                  sizeof(flag));
    XThrowIfError(status, "2");
    
    // Enable IO for playback
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioOutputUnitProperty_EnableIO, 
                                  kAudioUnitScope_Output, 
                                  kOutputBus,
                                  &flag, 
                                  sizeof(flag));
    XThrowIfError(status, "3");
    
    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = 8000.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        =   kAudioFormatFlagIsSignedInteger |
                                        kAudioFormatFlagsNativeEndian |
                                        kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8;
    audioFormat.mBytesPerPacket     = 1;
    audioFormat.mBytesPerFrame      = 1;
    
    // Apply format
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioUnitProperty_StreamFormat, 
                                  kAudioUnitScope_Output, 
                                  kInputBus, 
                                  &audioFormat, 
                                  sizeof(audioFormat));
    XThrowIfError(status, "");
    
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioUnitProperty_StreamFormat, 
                                  kAudioUnitScope_Input, 
                                  kOutputBus, 
                                  &audioFormat, 
                                  sizeof(audioFormat));
    XThrowIfError(status, "");
    
    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioOutputUnitProperty_SetInputCallback, 
                                  kAudioUnitScope_Global, 
                                  kInputBus, 
                                  &callbackStruct, 
                                  sizeof(callbackStruct));
    XThrowIfError(status, "");
    
    // Set output callback
    callbackStruct.inputProc = playingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioUnitProperty_SetRenderCallback, 
                                  kAudioUnitScope_Global, 
                                  kOutputBus,
                                  &callbackStruct, 
                                  sizeof(callbackStruct));
    XThrowIfError(status, "");
    
    // Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
    flag = 0;
    status = AudioUnitSetProperty(rioUnit, 
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output, 
                                  kInputBus,
                                  &flag, 
                                  sizeof(flag));
    
    XThrowIfError(AudioSessionSetActive(true), "couldn't set audio session active\n");
    
    // Initialise
    status = AudioUnitInitialize(rioUnit);
    XThrowIfError(status, "");
    
    status = AudioOutputUnitStart(rioUnit);
    XThrowIfError(status, "");
    
    [self performSelectorInBackground:@selector(decompressAudio) withObject:nil];
    [self performSelectorInBackground:@selector(EncodeAudio) withObject:nil];
    

    } catch (CAXException &e) {
        NSLog(@"CAXException...");
    } catch (...) {
        fprintf(stderr, "An unknown error occurred\n");
    }

    }

    I know there is a lot of code to read, but I’m really desperate to find an answer to this… You will see that most of the code is coming from your blog, so you won’t be lost.

    If I manage to make it work (with your help), I have the intention to write a full tutorial on how to use RemoteIO in order to create a VOIP application…

    Thanks for your help,

    Regards,

    Morgan.

  24. mark
    Posted October 20, 2012 at 7:04 am | Permalink

    Hi Michael,

    I’m a little confused about a few things: 1. Where are you defining ‘sampleCount’? 2. I am getting the following error: “Cannot initialize a variable of type ‘SInt16 *’ (aka ‘short *’) with an rvalue of type ‘void *’”… From my experience this is somehow related to ARC, but I have disabled that in my AudioSetup class. Here is the code from above:

        static OSStatus audioOutputCallback(void                       *inRefCon,
                                            AudioUnitRenderActionFlags *ioActionFlags,
                                            const AudioTimeStamp       *inTimeStamp,
                                            UInt32                      inBusNumber,
                                            UInt32                      inNumberFrames,
                                            AudioBufferList            *ioData) {
            MANAudioComponent *THIS = (MANAudioComponent *)inRefCon;

            int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
            SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;

            // Pull audio from playthrough buffer
            int32_t availableBytes;
            SInt16 *buffer = TPCircularBufferTail(&THIS->buffer, &availableBytes);
            memcpy(targetBuffer, buffer, MIN(bytesToCopy, availableBytes));
            TPCircularBufferConsume(&THIS->buffer, sampleCount);
            return noErr;
        }

    Thanks, any help would be appreciated!

    Mark
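
    For readers who hit the same `void *` error: it usually means the file is being compiled as Objective-C++ (a .mm file), where `void *` does not implicitly convert to other pointer types the way it does in C. An explicit cast, which is valid in both languages, resolves it. The snippet below is a generic illustration of that rule, not Mark's actual project code:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        void *raw = malloc(8 * sizeof(int16_t));
        /* In C this assignment would work without a cast, but under C++ /
         * Objective-C++ it is an error; the explicit cast compiles either way: */
        int16_t *samples = (int16_t *)raw;
        samples[0] = 42;
        assert(samples[0] == 42);
        free(raw);
        return 0;
    }
    ```

    The same cast applied to the return of `TPCircularBufferTail` is what the examples in the post itself use.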

  25. Pier
    Posted October 31, 2012 at 5:55 am | Permalink

    Hi Michael,

    Thanks so much for this.

    I’ve been able to get the above code working. Am I able to use TPCircularBuffer as a saved buffer? For example, I click record in my app -> the data is recorded into TPCircularBuffer. I stop the recording. Then I click play to play back from this particular TPCircularBuffer. Is this possible, or is it not the intended purpose? I’ve been trying to do this but my playback callback is giving me screeches and jitters. If I do it “live” (i.e. hit record and play) I can hear what I’m recording; this, I guess, is the intended purpose of the code you posted up there.

    Here’s my code. it’s based on yours. I’m trying to play the samples from rio->buffer.

    Thank you.

        int bytesToCopy = ioData->mBuffers[0].mDataByteSize;
        SInt16 *targetBuffer = (SInt16 *)ioData->mBuffers[0].mData;

        // Pull audio from playthrough buffer
        int32_t availableBytes;

        if (rio->playing) {
            SInt16 *tempbuffer = TPCircularBufferTail(&rio->buffer, &availableBytes);
            memcpy(targetBuffer, tempbuffer, MIN(bytesToCopy, availableBytes));
            TPCircularBufferConsume(&rio->buffer, MIN(bytesToCopy, availableBytes));
            NSLog(@"Playing!");
        } else {
            NSLog(@"Playing silence!");

            for (int i = 0; i < ioData->mNumberBuffers; i++) {
                // Get the buffer to be filled
                AudioBuffer buffer = ioData->mBuffers[i];
                UInt32 *frameBuffer = buffer.mData;

                // Loop through the buffer and fill the frames
                for (int j = 0; j < inNumberFrames; j++) {
                    frameBuffer[j] = 0;
                }
            }
        }

  26. Holger
    Posted November 5, 2012 at 10:19 am | Permalink

    Hi Michael, thanks for publishing TPCircularBuffer, it works really well!

    I found a strange problem today, when I created a static library containing TPCircularBuffer and my own classes. The app crashes at the memcpy() call in TPCircularBufferProduceBytes when using the static lib. In the “debug” project that directly contains the class files, everything is fine. I’m using only one thread for the producer and a second for the consumer. It’s really strange – do you have an idea what could be the reason?

    Cheers, Holger

  27. Aly Lero
    Posted November 15, 2012 at 9:28 pm | Permalink

    Has anyone used multiple instances of TPCircularBuffer in an app? It works well in my app where only one buffer was used.

    • Pier
      Posted November 19, 2012 at 12:59 pm | Permalink

      I have, what issues are you having?

      • Aly Lero
        Posted November 21, 2012 at 1:31 am | Permalink

        It turned out it was my mistake. Thanks for response.

  28. Simon
    Posted January 9, 2013 at 12:03 pm | Permalink

    Hi,

    I’m writing a VOIP-based iPhone app; I used a few examples from your site and it’s helped me a lot. I’ve hit a roadblock: I was using this circular buffer, but our system needs minimal latency, so when I was getting a long latency I tried to write my own. The latency improved quite a bit (450 milliseconds down to 120), but I have a weird issue now where any time a user touches the screen the audio stream glitches. This doesn’t happen with your circular buffer and I can’t figure out why.

    I have posted a stackoverflow question here: http://stackoverflow.com/questions/14162053/ios-ui-are-causing-a-glitch-in-my-audio-stream

    If you could take a look I’d really appreciate it, as I’ve been working on this for days and just can’t spot it. I’m guessing it’s some kind of threading issue, but I just can’t see it.

    Thanks, Simon

  29. Posted February 8, 2013 at 1:56 am | Permalink

    Thanks a lot for sharing the circular buffer implementation; easy to use, and fast.

    For the benefit of those who want to use the circular buffer for playing large audio files while processing PCM samples, I created a sample application:

    https://github.com/iitotoro/BufferedAudioPlayer.git

  30. Tuviah
    Posted February 15, 2013 at 6:33 am | Permalink

    Hello, I’m developing a user-space Core Audio driver (an AudioServerPlugin). The plugin is loaded by the Core Audio daemon just once, when it starts. My driver initiates a thread that reads audio from shared memory (the driver is a virtual microphone which routes audio from my application to others) and places it in a circular buffer, to be consumed when the Core Audio daemon requests the audio by sample number. Aside from the advantages of a circular buffer you just mentioned, another reason I use one is that Core Audio wants samples in groups of 512, while my application produces them in groups of 1470 (44100/30, thirty times a second).

    Currently I use a very simple circular buffer and ignore the sample number Core Audio specifies, always reading ahead. This works perfectly fine when there is only one application using the driver. However, when multiple apps use it, it sounds bad for all of them, because the Core Audio daemon makes one request per application and each is looking at the same circular buffer, taking turns to read and move the read head forward.

    So basically I need a circular buffer with one producer and multiple consumers, where the consumers may need to access data by index anywhere up to the write head. Is this possible with your circular buffer?
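
    One way such a single-producer, multi-consumer scheme is often sketched – this is not part of TPCircularBuffer, and every name here is illustrative – is to give each consumer its own read cursor, with only the slowest cursor limiting the writer:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define BUF_LEN     1024  /* power of two, so wrapping is a cheap mask */
    #define MAX_READERS 4

    typedef struct {
        int16_t  data[BUF_LEN];
        uint32_t head;                /* total samples written (monotonic)  */
        uint32_t tails[MAX_READERS];  /* one independent cursor per reader  */
    } MultiTailBuffer;

    /* Producer: may only write while staying within BUF_LEN of the slowest tail. */
    static int produce(MultiTailBuffer *b, const int16_t *src, uint32_t n) {
        uint32_t slowest = b->tails[0];
        for (int i = 1; i < MAX_READERS; i++)
            if (b->tails[i] < slowest) slowest = b->tails[i];
        if (b->head + n - slowest > BUF_LEN) return 0;  /* would overwrite unread data */
        for (uint32_t i = 0; i < n; i++)
            b->data[(b->head + i) & (BUF_LEN - 1)] = src[i];
        b->head += n;
        return 1;
    }

    /* Consumer r: advances only its own cursor; other readers are unaffected. */
    static uint32_t consume(MultiTailBuffer *b, int r, int16_t *dst, uint32_t n) {
        uint32_t avail = b->head - b->tails[r];
        if (n > avail) n = avail;
        for (uint32_t i = 0; i < n; i++)
            dst[i] = b->data[(b->tails[r] + i) & (BUF_LEN - 1)];
        b->tails[r] += n;
        return n;
    }
    ```

    Each consumer sees the full stream independently, which also lets them request data “by sample number” as long as it is still within the window. The trade-off is that one stalled consumer eventually blocks the producer, and real code would need the same memory-barrier care TPCircularBuffer applies to its fillCount.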