The Amazing Audio Engine: Funky Remote IO-based Core Audio Engine Coming Soon

Huzzah! I’m announcing a new project which will be launching over the next couple of months.

It’s called The Amazing Audio Engine, and it represents the product of years of experience with iOS audio. It’s a sophisticated iOS audio engine that lets developers skip the Core Audio learning curve, and get on with writing great software.

The tech behind this is what drives Loopy and Loopy HD, as well as the in-development Audiobus app.

Subscribe to be kept in the loop as it approaches launch time.

Some of the features:

  • Automatic mixing of multiple audio signals with per-channel volume and pan controls.
  • Built-in support for audio filtering and effects, including the ability to form complex filter chains, construct channel groups or even whole trees of groups, and filter them as one composite signal.
  • Built-in support for audio input, including optional use of the Voice Processing IO unit, for automatic echo removal – great for VoIP.
  • Record or monitor the output of the whole audio system for in-app session recording, or tap the output of any single channel or group of channels in the processing tree.
  • Support for any audio format (AudioStreamBasicDescription) that the hardware supports: interleaved, non-interleaved, mono, stereo, 44.1kHz or any other supported sample rate, 16-bit, 8.24 fixed-point – whatever you need for your project.
  • Very light, efficient engine, designed from the ground up for speed. All Core Audio code is pure C; no Objective-C or BSD calls, no locks, no memory allocation.
  • Efficient mixing of input signals, using Apple’s MultiChannelMixer.
  • Fast, lock-free synchronisation mechanism, enabling developers to send messages to the main thread from the Core Audio context, and vice versa, without locking or memory allocation from the Core Audio thread. Message sending from the main thread is two-way, and can be asynchronous, with a response block, or synchronous.
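The lock-free messaging described in the last point can be sketched as a single-producer, single-consumer ring buffer. The following is an illustrative sketch using C11 atomics, not TAAE’s actual implementation – the `message_queue_t`, `queue_push` and `queue_process` names are assumptions:

```c
/* Illustrative sketch only — not TAAE's actual implementation. A
 * single-producer, single-consumer ring buffer using C11 atomics,
 * so the audio thread can drain messages without locks or allocation. */
#include <stdatomic.h>
#include <stdbool.h>

#define QUEUE_CAPACITY 16u  /* must be a power of two */

typedef struct {
    void (*handler)(void *userInfo);
    void *userInfo;
} message_t;

typedef struct {
    message_t buffer[QUEUE_CAPACITY];
    _Atomic unsigned head;  /* advanced by the consumer (audio thread) */
    _Atomic unsigned tail;  /* advanced by the producer (main thread) */
} message_queue_t;

/* Main-thread side: enqueue a message; returns false when the queue
 * is full, rather than blocking. */
static bool queue_push(message_queue_t *q, message_t msg) {
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&q->head, memory_order_acquire);
    if ( tail - head == QUEUE_CAPACITY ) return false;
    q->buffer[tail % QUEUE_CAPACITY] = msg;
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}

/* Audio-thread side: run all pending messages — no locks, no
 * allocation, safe to call from a render callback. */
static void queue_process(message_queue_t *q) {
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    while ( head != tail ) {
        message_t *m = &q->buffer[head % QUEUE_CAPACITY];
        m->handler(m->userInfo);
        head++;
    }
    atomic_store_explicit(&q->head, head, memory_order_release);
}
```

A second queue pointing the other way gives the two-way messaging described above; the acquire/release ordering is what makes a mutex unnecessary when exactly one thread pushes and one thread pops.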


  1. pete
    Posted March 28, 2012 at 12:10 am | Permalink

    Hey, do you need any testers? I’m just starting to build an app that looks like it needs to use Remote I/O, and it’s making my head spin. I would love to try out any code that you have… even if it isn’t ready for primetime. Either way, I’m really excited to see this project!

  2. Ari
    Posted April 5, 2012 at 6:10 am | Permalink

    Any updates as to when you’re planning on releasing it?

    • Posted April 6, 2012 at 11:22 am | Permalink

      Not quite yet, Ari, but it won’t be long now =)

  3. Posted June 8, 2012 at 10:06 am | Permalink

    Really looking forward to this! Any idea yet on the release date?

    • Posted June 8, 2012 at 11:02 am | Permalink

      Cheers Bijoy =) I’ll probably be releasing it around the same time as Audiobus, whenever that may be (probably within a couple months, tops)

      • Posted August 30, 2012 at 5:28 pm | Permalink

        Hi Mike, how are you doing? Eagerly waiting for the release of this! How is it coming along? Any idea on the release date yet?

        • Posted August 31, 2012 at 11:24 am | Permalink

          Hey Bijoy – yeah, I’m sorry about the delay =)

          We’re waiting for the release of Audiobus before making any final steps towards release of TAAE – Audiobus is going to determine certain factors in how TAAE is released, so I’ve pushed it back ’till we understand the market better.

  4. Posted September 6, 2012 at 1:32 am | Permalink

    I am just going crazy trying to get Remote IO to work with RTSP streams decoded by ffmpeg. I first tried audio queues, which work pretty well for AAC audio but fail badly with µlaw: at low sample rates there are no issues, but at higher sample rates the audio drops out and does not recover.

    Next I tried Audio Units; the render callback looks like it takes an array of UInt32:

        #pragma mark Playback callback

        static OSStatus playbackCallback(void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList *ioData) {

    But ffmpeg decodes the audio packets as SInt16, so how do I fill this buffer?

    For Audio Queues I have a ring buffer of uint8_t *; I get the bytes from the ring buffer in the AudioQueueOutputCallback and fill the buffer.

    But with Audio Queues I don’t have to worry about the decoding; with Audio Units I have to decode both the AAC and the µlaw into raw LPCM. At least that’s how I understand it. Does anyone have a simple example of playing an audio stream from ffmpeg using Audio Units?
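    For what it’s worth: with an interleaved 16-bit signed-integer ASBD, the render callback’s ioData buffers expect raw SInt16 samples, so the fill can be a straight copy from the ring buffer. Here is a minimal sketch – the Core Audio types below are stand-ins so it compiles anywhere (on iOS you’d include AudioToolbox and use the real definitions), and `ring_t`/`ring_read()` are hypothetical helpers:

```c
/* Sketch of filling the render callback's buffers with SInt16 LPCM
 * from a ring buffer. The Core Audio types below are minimal
 * stand-ins so this compiles anywhere; on iOS, include
 * <AudioToolbox/AudioToolbox.h> and use the real definitions.
 * ring_t and ring_read() are hypothetical helpers. */
#include <stdint.h>
#include <string.h>

typedef int16_t  SInt16;
typedef uint32_t UInt32;
typedef int32_t  OSStatus;
typedef UInt32   AudioUnitRenderActionFlags;
typedef struct { double mSampleTime; } AudioTimeStamp;
enum { noErr = 0 };

typedef struct {
    UInt32 mNumberChannels;
    UInt32 mDataByteSize;
    void  *mData;
} AudioBuffer;

typedef struct {
    UInt32 mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;

/* Hypothetical ring buffer of decoded SInt16 samples (e.g. ffmpeg
 * output already converted to interleaved 16-bit LPCM). */
typedef struct {
    SInt16 *samples;
    UInt32  count;    /* total samples available */
    UInt32  readPos;
} ring_t;

static UInt32 ring_read(ring_t *ring, SInt16 *dst, UInt32 wanted) {
    UInt32 available = ring->count - ring->readPos;
    UInt32 n = wanted < available ? wanted : available;
    memcpy(dst, ring->samples + ring->readPos, n * sizeof(SInt16));
    ring->readPos += n;
    return n;
}

/* With an interleaved 16-bit signed-integer ASBD, ioData expects raw
 * SInt16 samples — the copy is direct, no conversion needed. */
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    (void)ioActionFlags; (void)inTimeStamp;
    (void)inBusNumber;   (void)inNumberFrames;
    ring_t *ring = (ring_t *)inRefCon;
    for ( UInt32 i = 0; i < ioData->mNumberBuffers; i++ ) {
        AudioBuffer *buf = &ioData->mBuffers[i];
        UInt32 wanted = buf->mDataByteSize / sizeof(SInt16);
        UInt32 got = ring_read(ring, (SInt16 *)buf->mData, wanted);
        /* On underrun, pad the remainder with silence */
        memset((SInt16 *)buf->mData + got, 0,
               (wanted - got) * sizeof(SInt16));
    }
    return noErr;
}
```

    The key point is that the decode step (AAC or µlaw to LPCM) happens on another thread, outside the callback; the callback itself only ever copies already-decoded samples, padding with silence on underrun rather than blocking.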

  5. Posted December 26, 2012 at 11:28 pm | Permalink

    Totally stoked! This is going to be a life saver for us low-level coders!

    Will there be support for creating musical instruments (like drums, keyboards, etc.) with ADSR envelopes (Attack, Decay, Sustain, Release)?

    • Posted December 27, 2012 at 11:10 am | Permalink

      Cheers =)

      Nope, at least not yet (maybe someone will write a module) – it’s really just a sophisticated piece of audio infrastructure; the actual audio generation is mostly left to the developer.
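      Since envelope generation is left to the developer, a linear ADSR is a reasonable thing to sketch here – purely illustrative, not part of the engine, and the `adsr_t`/`adsr_gain` names are made up:

```c
/* Illustrative linear ADSR envelope — not part of TAAE. Given the
 * current sample position and the sample at which the note was
 * released, returns a gain in 0..1 to multiply into the signal.
 * Assumes all three time segments are non-zero. */
typedef struct {
    float attack;   /* attack length, in samples */
    float decay;    /* decay length, in samples */
    float sustain;  /* sustain level, 0..1 */
    float release;  /* release length, in samples */
} adsr_t;

static float adsr_gain(const adsr_t *env, float n, float noteOffAt) {
    if ( n >= noteOffAt ) {
        /* Release: ramp from the sustain level down to zero */
        float t = n - noteOffAt;
        if ( t >= env->release ) return 0.0f;
        return env->sustain * (1.0f - t / env->release);
    }
    if ( n < env->attack ) {
        /* Attack: ramp from zero up to full level */
        return n / env->attack;
    }
    if ( n < env->attack + env->decay ) {
        /* Decay: ramp from full level down to the sustain level */
        float t = (n - env->attack) / env->decay;
        return 1.0f + t * (env->sustain - 1.0f);
    }
    /* Sustain: hold until note-off */
    return env->sustain;
}
```

      A channel’s render code would call this once per sample (or once per buffer, interpolating) and multiply the result into its output.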

  6. Leo
    Posted May 21, 2013 at 10:38 am | Permalink

    Hey, Michael! I have a question: I have a song named a.m4a, and I want to quickly add effects (like reverb or delay) to it. I’d rather not have to play it through in real time (i.e. offline rendering). How can I do that?