Huzzah! I’m announcing a new project which will be launching over the next couple of months.
It’s called [The Amazing Audio Engine](http://theamazingaudioengine.com), and it represents the product of years of experience with iOS audio. It’s a sophisticated iOS audio engine that lets developers skip the Core Audio learning curve, and get on with writing great software.
The tech behind this is what drives [Loopy and Loopy HD](http://loopyapp.com), as well as the in-development [Audiobus](http://audiob.us) app.
[Subscribe at theamazingaudioengine.com](http://theamazingaudioengine.com) to be kept in the loop as it approaches launch time.
Some of the features:
- Automatic mixing of multiple audio signals with per-channel volume and pan controls.
- Built-in support for audio filtering and effects, including the ability to form complex filter chains, construct channel groups or even whole trees of groups, and filter them as one composite signal.
- Built-in support for audio input, including optional use of the Voice Processing IO unit, for automatic echo removal – great for VoIP.
- Record or monitor the output of the whole audio system for in-app session recording, or tap the output of one channel, or any group of channels, in the processing tree.
- Support for any audio format (AudioStreamBasicDescription) that the hardware supports: interleaved or non-interleaved, mono or stereo, 44.1kHz or any other supported sample rate, 16-bit integer or 8.24 fixed-point – whatever you need for your project (see the first sketch after this list).
- Very light, efficient engine, designed from the ground up for speed. All Core Audio code is pure C; no Objective-C or BSD calls, no locks, no memory allocation.
- Efficient mixing of input signals, using Apple’s MultiChannelMixer audio unit (second sketch below).
- Fast, lock-free synchronisation mechanism that lets developers send messages to the main thread from the Core Audio context, and vice versa, without locking or memory allocation on the Core Audio thread (third sketch below). Messaging from the main thread is two-way, and can be asynchronous, with a response block, or synchronous.
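To give an idea of what the format support means in practice, here’s how a 16-bit interleaved stereo format at 44.1kHz is described with a standard Core Audio AudioStreamBasicDescription (this is plain Core Audio, not TAAE-specific API):

```c
#include <AudioToolbox/AudioToolbox.h>

// 44.1kHz, stereo, interleaved, 16-bit signed integer LPCM
AudioStreamBasicDescription audioDescription = {
    .mSampleRate       = 44100.0,
    .mFormatID         = kAudioFormatLinearPCM,
    .mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    .mChannelsPerFrame = 2,
    .mFramesPerPacket  = 1,
    .mBitsPerChannel   = 16,
    .mBytesPerFrame    = 2 * sizeof(SInt16), // interleaved: both channels per frame
    .mBytesPerPacket   = 2 * sizeof(SInt16),
};
```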
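For the curious, the mixing is built on Apple’s MultiChannelMixer audio unit; instantiating one and setting per-bus volume and pan in plain Core Audio looks roughly like this (again, the underlying unit, not TAAE’s own API, which isn’t public yet):

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit createMixer(void) {
    // Locate and instantiate Apple's MultiChannelMixer audio unit
    AudioComponentDescription description = {
        .componentType         = kAudioUnitType_Mixer,
        .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent component = AudioComponentFindNext(NULL, &description);
    AudioUnit mixerUnit;
    AudioComponentInstanceNew(component, &mixerUnit);

    // Per-channel volume and pan are just parameters on the mixer's input buses
    AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Volume,
                          kAudioUnitScope_Input, 0 /* bus */, 0.8, 0);
    AudioUnitSetParameter(mixerUnit, kMultiChannelMixerParam_Pan,
                          kAudioUnitScope_Input, 0 /* bus */, -0.5, 0);
    return mixerUnit;
}
```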
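Finally, the general shape of that kind of lock-free messaging, reduced to a toy single-producer/single-consumer queue (an illustration of the idea only, not TAAE’s actual implementation):

```c
#include <libkern/OSAtomic.h>
#include <stdbool.h>

// Toy SPSC message ring: the main thread enqueues, the Core Audio render
// thread dequeues. No locks, and no allocation on the audio thread.
typedef void (*MessageHandler)(void *userInfo);
typedef struct { MessageHandler handler; void *userInfo; } Message;

#define kQueueLength 16
static Message gQueue[kQueueLength];
static volatile int32_t gHead = 0; // next slot to read (audio thread)
static volatile int32_t gTail = 0; // next slot to write (main thread)

// Main thread: enqueue a message; returns false if the ring is full
bool enqueueMessage(MessageHandler handler, void *userInfo) {
    int32_t next = (gTail + 1) % kQueueLength;
    if ( next == gHead ) return false; // full
    gQueue[gTail] = (Message){ handler, userInfo };
    OSMemoryBarrier(); // publish the payload before advancing the tail
    gTail = next;
    return true;
}

// Audio thread: call once per render cycle to drain pending messages
void processMessages(void) {
    while ( gHead != gTail ) {
        Message message = gQueue[gHead];
        OSMemoryBarrier();
        gHead = (gHead + 1) % kQueueLength;
        message.handler(message.userInfo);
    }
}
```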
Hey, do you need any testers? I’m just starting to build an app that looks like it needs to use Remote IO, and it’s making my head spin. I would love to try out any code that you have… even if it isn’t ready for primetime. Either way, I’m really excited to see this project!
Any updates as to when you’re planning on releasing it?
Not quite yet, Ari, but it won’t be long now =)
Really looking forward to this! Any idea yet on the release date?
Cheers Bijoy =) I’ll probably be releasing it around the same time as Audiobus, whenever that may be (probably within a couple months, tops)
Hi Mike, how are you doing? I’m eagerly awaiting the release of this. How is it coming along? Any idea on the release date yet?
Hey Bijoy – yeah, I’m sorry about the delay =)
We’re waiting for the release of Audiobus before making any final steps towards the release of TAAE – Audiobus is going to determine certain factors in how TAAE is released, so I’ve pushed it back till we understand the market better.
I am just going crazy trying to get Remote IO to work with RTSP streams decoded by ffmpeg. I first tried Audio Queues, and that works pretty well for AAC audio but fails badly with µ-law: at low sample rates there are no issues, but at higher sample rates the audio drops out and does not recover.
Next I tried Audio Units. The render callback looks like it takes an array of UInt32:
```c
#pragma mark Playback callback
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
```
But ffmpeg decodes the audio packets as SInt16, so how do I fill this buffer?
For Audio Queues I have a ring buffer of uint8_t *; I get the bytes from the ring buffer in the AudioQueueOutputCallback and fill the buffer there.
But with Audio Queues I don’t have to worry about the decoding, whereas with Audio Units I have to decode both the AAC and the µ-law into raw LPCM. At least, that’s how I understand it. Does anyone have a simple example of playing an audio stream from ffmpeg using Audio Units?
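Here’s roughly what I imagine the callback body has to do, assuming the unit’s stream format is interleaved 16-bit LPCM (ringBufferRead() below is a stand-in for my ring buffer read routine, not a real API):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Hypothetical helper: copies up to `length` bytes of decoded PCM out of
// the ring buffer and returns how many bytes were actually copied.
extern UInt32 ringBufferRead(uint8_t *dest, UInt32 length);

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    for ( UInt32 i = 0; i < ioData->mNumberBuffers; i++ ) {
        AudioBuffer *buffer = &ioData->mBuffers[i];
        // With interleaved 16-bit LPCM, mData is just inNumberFrames *
        // mNumberChannels SInt16 samples; copy the decoded bytes straight in.
        UInt32 bytesNeeded = buffer->mDataByteSize;
        UInt32 bytesRead = ringBufferRead((uint8_t *)buffer->mData, bytesNeeded);
        if ( bytesRead < bytesNeeded ) {
            // Underrun: pad with silence rather than playing stale data
            memset((uint8_t *)buffer->mData + bytesRead, 0, bytesNeeded - bytesRead);
        }
    }
    return noErr;
}
```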
Totally stoked! This is going to be a life saver for us low-level coders!
Will there be support for creating musical instruments (like drums, keyboards, etc.) with ADSR envelopes (Attack, Decay, Sustain, Release)?
Cheers =)
Nope, at least not yet (maybe someone will write a module) – it’s really just a sophisticated piece of audio infrastructure; the actual audio generation is mostly left to the developer.
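That said, a basic linear ADSR envelope only takes a few lines of C, if anyone wants a starting point for rolling their own (just a sketch; the rates are per-sample increments, and the names are illustrative):

```c
typedef enum { ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE, ENV_IDLE } EnvStage;

typedef struct {
    float attackRate;   // level increase per sample during attack
    float decayRate;    // level decrease per sample during decay
    float sustainLevel; // level held until note-off (0..1)
    float releaseRate;  // level decrease per sample after note-off
    float level;        // current envelope value (0..1)
    EnvStage stage;     // note-on starts at ENV_ATTACK; note-off sets ENV_RELEASE
} ADSREnvelope;

// Advance the envelope one sample and return the current gain
static float adsr_next(ADSREnvelope *env) {
    switch ( env->stage ) {
        case ENV_ATTACK:
            env->level += env->attackRate;
            if ( env->level >= 1.0f ) { env->level = 1.0f; env->stage = ENV_DECAY; }
            break;
        case ENV_DECAY:
            env->level -= env->decayRate;
            if ( env->level <= env->sustainLevel ) {
                env->level = env->sustainLevel;
                env->stage = ENV_SUSTAIN;
            }
            break;
        case ENV_SUSTAIN:
            break; // hold until note-off moves us to ENV_RELEASE
        case ENV_RELEASE:
            env->level -= env->releaseRate;
            if ( env->level <= 0.0f ) { env->level = 0.0f; env->stage = ENV_IDLE; }
            break;
        case ENV_IDLE:
            break;
    }
    return env->level;
}
// Multiply each generated sample by adsr_next(&env) inside the render callback.
```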
Hey, Michael! I have a question: I have a song named a.m4a, and I want to quickly add effects (like reverb or delay) to it, without having to play it through in real time. How can I do this?
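This kind of offline processing is possible with plain Core Audio by running an effect unit faster than real time: feed it from the file via ExtAudioFile, pull rendered buffers with AudioUnitRender in a loop, and write them out with ExtAudioFileWrite. A rough sketch only (error handling, reverb-tail rendering and destination-file creation omitted; the Reverb2 subtype and the 8.24 canonical format are assumptions that should be checked against your iOS version):

```c
#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gSource; // source file; the effect pulls from it on demand

// Render-input callback: feed the effect decoded LPCM frames from the file
static OSStatus pullFromFile(void *inRefCon, AudioUnitRenderActionFlags *ioFlags,
                             const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                             UInt32 inNumberFrames, AudioBufferList *ioData) {
    UInt32 frames = inNumberFrames;
    return ExtAudioFileRead(gSource, &frames, ioData);
}

// destFile is assumed already created with ExtAudioFileCreateWithURL, with its
// client data format set to the same `fmt` used below.
void renderWithReverb(CFURLRef sourceURL, ExtAudioFileRef destFile) {
    // Process in the canonical 8.24 fixed-point AU format, non-interleaved stereo
    AudioStreamBasicDescription fmt = {
        .mSampleRate = 44100.0, .mFormatID = kAudioFormatLinearPCM,
        .mFormatFlags = kAudioFormatFlagsAudioUnitCanonical,
        .mChannelsPerFrame = 2, .mFramesPerPacket = 1,
        .mBitsPerChannel = 32, .mBytesPerFrame = 4, .mBytesPerPacket = 4,
    };
    ExtAudioFileOpenURL(sourceURL, &gSource);
    ExtAudioFileSetProperty(gSource, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(fmt), &fmt);

    // Create a reverb unit and attach the file-reading callback to its input
    AudioComponentDescription desc = { kAudioUnitType_Effect,
        kAudioUnitSubType_Reverb2, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioUnit reverb;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &reverb);
    AURenderCallbackStruct callback = { pullFromFile, NULL };
    AudioUnitSetProperty(reverb, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback, sizeof(callback));
    AudioUnitSetProperty(reverb, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));
    AudioUnitSetProperty(reverb, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 0, &fmt, sizeof(fmt));
    AudioUnitInitialize(reverb);

    // Pull processed audio chunk by chunk (no RemoteIO involved, so this runs
    // as fast as the CPU allows) and append each buffer to the destination
    SInt64 totalFrames = 0;
    UInt32 propSize = sizeof(totalFrames);
    ExtAudioFileGetProperty(gSource, kExtAudioFileProperty_FileLengthFrames,
                            &propSize, &totalFrames);
    static SInt32 channelMem[2][512];
    struct { AudioBufferList list; AudioBuffer extraBuffer; } blMem;
    AudioBufferList *bl = &blMem.list;
    AudioTimeStamp ts = { .mSampleTime = 0, .mFlags = kAudioTimeStampSampleTimeValid };
    for ( SInt64 rendered = 0; rendered < totalFrames; rendered += 512 ) {
        bl->mNumberBuffers = 2;
        for ( int i = 0; i < 2; i++ ) {
            bl->mBuffers[i].mNumberChannels = 1;
            bl->mBuffers[i].mDataByteSize = 512 * sizeof(SInt32);
            bl->mBuffers[i].mData = channelMem[i];
        }
        AudioUnitRenderActionFlags flags = 0;
        AudioUnitRender(reverb, &flags, &ts, 0, 512, bl);
        ExtAudioFileWrite(destFile, 512, bl);
        ts.mSampleTime += 512;
    }
}
```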