Blog

Using RemoteIO audio unit

I’ve had a nasty old time trying to get some audio stuff going on the iPhone, no thanks to Apple’s lack of documentation. If you’re an iPhone developer interested in getting RemoteIO/IO Remote/whatever it’s called working on the iPhone… do I have good news for you. Read on.

Wanna skip the Core Audio learning curve and start writing code straight away? Check out my new project:

The Amazing Audio Engine: Core Audio, Cordially

Update: Thanks to Joel Reymont, we now have an explanation for the “CrashIfClientProvidedBogusAudioBufferList” iPhone simulator bug: The simulator doesn’t like mono audio. Thanks, Joel!

Update: Happily, Apple have now created some excellent documentation on Remote IO, with some good sample projects. I recommend using that as a resource now that it’s available, as it will continue to be updated.

Update: Tom Zicarelli has created a very extensive sample app that demonstrates the use of AUGraph, with all sorts of goodies.

So, we need to obtain an instance of the RemoteIO audio unit, configure it, and hook it up to a recording callback, which is called when there is data ready to be grabbed, and is where you pull the data from the audio unit.


Overview

  1. Identify the audio component (kAudioUnitType_Output / kAudioUnitSubType_RemoteIO / kAudioUnitManufacturer_Apple)
  2. Use AudioComponentFindNext(NULL, &descriptionOfAudioComponent) to obtain the AudioComponent, which is like the factory with which you obtain the audio unit
  3. Use AudioComponentInstanceNew(ourComponent, &audioUnit) to make an instance of the audio unit
  4. Enable IO for recording and possibly playback with AudioUnitSetProperty
  5. Describe the audio format in an AudioStreamBasicDescription structure, and apply the format using AudioUnitSetProperty
  6. Provide a callback for recording, and possibly playback, again using AudioUnitSetProperty
  7. Allocate some buffers
  8. Initialise the audio unit
  9. Start the audio unit
  10. Rejoice

Here’s my code: I’m using both recording and playback. Use what applies to you!

Initialisation

Initialisation looks like this. We have a member variable of type AudioComponentInstance which will contain our audio unit.

The audio format described below uses SInt16 for samples (i.e. signed, 16 bits per sample), and is held in an AudioStreamBasicDescription called audioFormat.

#define kOutputBus 0
#define kInputBus 1
 
// ...
 
 
OSStatus status;
AudioComponentInstance audioUnit;
 
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
 
// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
 
// Get audio units
status = AudioComponentInstanceNew(inputComponent, &audioUnit);
checkStatus(status);
 
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Input, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));
checkStatus(status);
 
// Enable IO for playback
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Output, 
                              kOutputBus,
                              &flag, 
                              sizeof(flag));
checkStatus(status);
 
// Describe format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate			= 44100.00;
audioFormat.mFormatID			= kAudioFormatLinearPCM;
audioFormat.mFormatFlags		= kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket	= 1;
audioFormat.mChannelsPerFrame	= 1;
audioFormat.mBitsPerChannel		= 16;
audioFormat.mBytesPerPacket		= 2;
audioFormat.mBytesPerFrame		= 2;
 
// Apply format
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Output, 
                              kInputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
checkStatus(status);
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_StreamFormat, 
                              kAudioUnitScope_Input, 
                              kOutputBus, 
                              &audioFormat, 
                              sizeof(audioFormat));
checkStatus(status);
 
 
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioOutputUnitProperty_SetInputCallback, 
                              kAudioUnitScope_Global, 
                              kInputBus, 
                              &callbackStruct, 
                              sizeof(callbackStruct));
checkStatus(status);
 
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              kOutputBus,
                              &callbackStruct, 
                              sizeof(callbackStruct));
checkStatus(status);
 
// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output, 
                              kInputBus,
                              &flag, 
                              sizeof(flag));
 
// TODO: Allocate our own buffers if we want
 
// Initialise
status = AudioUnitInitialize(audioUnit);
checkStatus(status);

Then, when you’re ready to start:

OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);

And to stop:

OSStatus status = AudioOutputUnitStop(audioUnit);
checkStatus(status);

Then, when we’re finished:

AudioComponentInstanceDispose(audioUnit);

And now for our callbacks.

Recording

static OSStatus recordingCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) {
 
    // TODO: Use inRefCon to access our interface object to do stuff
    // Then, use inNumberFrames to figure out how much data is available, and make
    // that much space available in buffers in an AudioBufferList.
 
    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)
 
    // Then:
    // Obtain recorded samples
 
    OSStatus status;
 
    status = AudioUnitRender([audioInterface audioUnit], 
                             ioActionFlags, 
                             inTimeStamp, 
                             inBusNumber, 
                             inNumberFrames, 
                             bufferList);
    checkStatus(status);
 
    // Now, we have the samples we just read sitting in buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}
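
The TODO above leaves buffer allocation and access to the audio unit up to you. Here’s a minimal sketch of a filled-in callback body, assuming the mono 16-bit format configured earlier; MyAudioController and its audioUnit property are hypothetical stand-ins for whatever object you passed as inputProcRefCon. Note that malloc/free on the audio thread is shown only for brevity – in a real app, allocate the buffer once up front and reuse it.

// A sketch only: allocate a one-buffer AudioBufferList, render the input into it,
// then hand it off for processing.
MyAudioController *controller = (MyAudioController *)inRefCon;

AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;                             // mono, as configured above
bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16); // 16-bit samples
bufferList->mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));

OSStatus status = AudioUnitRender(controller.audioUnit,
                                  ioActionFlags,
                                  inTimeStamp,
                                  inBusNumber,       // the input bus
                                  inNumberFrames,
                                  bufferList);
checkStatus(status);

DoStuffWithTheRecordedAudio(bufferList);

free(bufferList->mBuffers[0].mData);
free(bufferList);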

Playback

static OSStatus playbackCallback(void *inRefCon, 
                                  AudioUnitRenderActionFlags *ioActionFlags, 
                                  const AudioTimeStamp *inTimeStamp, 
                                  UInt32 inBusNumber, 
                                  UInt32 inNumberFrames, 
                                  AudioBufferList *ioData) {    
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    return noErr;
}
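
As a minimal sketch, here’s a callback body that just outputs silence, assuming the 16-bit format configured earlier – replace the memset with a copy of your own samples:

for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    AudioBuffer *buffer = &ioData->mBuffers[i];
    // Silence; substitute your own SInt16 sample data here
    memset(buffer->mData, 0, buffer->mDataByteSize);
    // If you provide less data than requested, set buffer->mDataByteSize
    // to the number of bytes you actually wrote
}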

Finally, rejoice with me in this discovery ;)

Resources that helped

No thanks at all to Apple for their lack of accessible documentation on this topic – They really have a long way to go here! Also boo to them with their lack of search engine, and refusal to open up their docs to Google. It’s a jungle out there!

Update: You can adjust the latency of RemoteIO (and, in fact, any other audio framework) by setting the kAudioSessionProperty_PreferredHardwareIOBufferDuration property:

float aBufferLength = 0.005; // In seconds
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
                        sizeof(aBufferLength), &aBufferLength);

This adjusts the length of the buffers that are passed to you – if the buffer length was originally, say, 1024 samples, then halving it halves the amount of time each buffer represents, and thus the latency before you get to process it.
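
Note that this is only a preference – the hardware may round it to a duration it supports. Here’s a quick sketch for checking the value you actually got (assuming the audio session is initialised and active):

Float32 actualDuration = 0;
UInt32 size = sizeof(actualDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &size, &actualDuration);
// e.g. asking for 0.005s at 44.1kHz (~220 frames) will typically yield 256 frames per buffer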

Another Update: In the comments below, Florian Bomers pointed out that I was incorrectly using AudioUnitUninitialize to clean up the audio unit; it should in fact be AudioComponentInstanceDispose. Further discussion here. Cheers Florian!


55 Comments

  1. sefiroths
    Posted December 7, 2010 at 2:54 pm | Permalink

    I’d like to meter the volume of some people speaking near the mic of the iPhone. I thought to use peakPowerForChannel or averagePowerForChannel… I have made some tests, and the maximum (0) is reached very soon: I say a letter with a normal voice into the mic, and 0 is reached almost immediately; if I shout loud, 0 is returned…

    I have -160 as a minimum (silence) and 0 as max… that’s ok for me, but the max is reached as soon as I talk. I’d like to have something like: -160 no one is talking, -100 someone is talking, -50 someone is shouting, 0 someone is shouting loud

    while what I have now is: -160 silence, 0 someone is talking

    Will I have better luck with the method described here? Or will the mic go into saturation so the sound metering is always at max?

  2. Xample
    Posted December 7, 2010 at 6:52 pm | Permalink

    Hi, without giving too much code I wonder if you are taking in consideration the value of the samples to the square or at least their absolute value. Summing all the samples without doing this will eventually result to a mean of 0. Also think to normalize and cast the values at the right time. i.e. int sum = 0; sum += (short)audioBuffer[frameIndex]; then sum/(nbFrames); (trivial I know…) btw I don’t know if the iPhone does a limiter on the input per default, you should check this too… but I don’t think so. Btw: if I had to do such a talk detector I would do this as follows: 1. take the input, 2. absolute value it, 3. filter it through a simple one-pole lowpass, i.e.: value[n] = alpha*value[n]+(1.-alpha)*value[n-1]; or in integer values you can try: value[x] = (value[x]+value[x-1]*15)>>4; alpha being [0.0 , 1.0]. If alpha is near to 0 the inertia is higher, i.e. more noise robust but slower the talk detection. Just put a thresh or, better, a schmitt trigger on this new filtered buffer in order to detect if the people is talking or not… Good luck

  3. Sergey Vinogradov
    Posted December 10, 2010 at 7:05 am | Permalink

    How to change volume ?

  4. Xample
    Posted December 10, 2010 at 8:19 am | Permalink

    Hi, unlike OpenAL, there is no setting in order to change the volume by itself. You need to do it on your own. This is obviously achieved by multiplying the sound stream by a constant usually bounded by [0, 1]. For example you can simply do something like: audioStream[i] = (short)((float)audioStream[i]*volume); /* volume being a float bounded by [0,1] */ but still this is bad in terms of optimizations as this process involves casting a short into a float. I would strongly recommend to keep into the “integer space” and find your appropriate ratio in order to match your volume. Then do something like this: audioStream[i] = (short)((int)audioStream[i]*65536/invRatio); /* where invRatio ~= volume*65536 */

  5. J Alex Antony Vijay
    Posted January 5, 2011 at 7:53 am | Permalink

    Hi, I tried with Remote I/O audio unit sample code. But in callback function, I am getting null value for AudioBufferList. Why we will get null value ? Please help me to resolve it.

    Thanks.

    • Posted January 5, 2011 at 10:40 am | Permalink

      It may be best if you post your code, to save us guessing – you could put it on pastebin.com, for example. You may be using an incorrect sample format, or perhaps haven’t properly initialised the audio unit, or a number of other possibilities.

  6. Posted January 17, 2011 at 7:18 pm | Permalink

    Hi Michael,

    Your post looks very useful, but I am a complete novice when it comes to CoreAudio. I have been teaching myself for a few months now, and I am at the stage where I understand what you’re doing in your code. But I cannot figure out how to implement the code within a project, and how it all links together. I’ve looked at the SpeakHere example, but this does not seem to help. I was wondering if you could upload an xcode project, just so I can teach myself how you’ve implemented the code?

    My situation is, that i need to record (up to 30 seconds) of audio directly into one audio buffer, so that i can process the signal (performing convolution algorithms). Looking at your post above, this will help me learn how to record audio into a buffer, which would be extremely useful in my case.

    If you don’t want to upload a project I completely understand.

    Many Thanks Sam

  7. Brennon Bortz
    Posted January 20, 2011 at 10:11 pm | Permalink

    Sorry…but I don’t see a declaration for audioFormat…can you let us know what type it is?

  8. Brennon Bortz
    Posted January 22, 2011 at 2:25 pm | Permalink

    Looks like it’s an AudioStreamBasicDescription?

  9. Anim
    Posted February 18, 2011 at 11:02 am | Permalink

    Hi

    Thanks for the great post.

    if I just want to stream from the microphone and process it but never play it, do I still need an output? Hmm, the render call back would be on the input of the output wouldn’t it?

    Have I just answered my own question lol?

    Thanks Anim

    • Posted February 18, 2011 at 11:05 am | Permalink

      That’s right, Anim, you don’t need an output – you can just drop that half (the output callback and the output enable call).

      • Anim
        Posted February 18, 2011 at 11:16 am | Permalink

        Thanks, will give it a go.

        Anim

  10. Toby Compton
    Posted March 8, 2011 at 3:21 pm | Permalink

    Don’t know if anyone has tried this, but I want to use remoteIO along with the AVFoundation library.

    Basically I have an app that captures the audio from the mic, does some effect in the callback etc. But I want to be able to record this post effect audio buffer in an AVCaptureSession. I am not sure AVFoundation is compatible with audio units and from what I can see you can only use a fixed AVCaptureDeviceInput with AVCaptureSession rather than specifying your own buffer.

    Has anyone else tried something similar?

    Thanks

  11. nonono
    Posted March 13, 2011 at 4:45 pm | Permalink

    help please!!

    float aBufferLength = 0.005; // In seconds
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
                            sizeof(aBufferLength), &aBufferLength);
    

    Has no effect in simulator as well as on iphone. inNumberFrames is still 2048. tried framesperslice, no effect either. iphone firmware 4.0.

    Please help, thank you, nonono

  12. Posted April 8, 2011 at 11:52 am | Permalink

    Hullo, I would like to play a number of contemporary sounds of different frequencies at volumes being ruled programmatically. I would like to have an object for each frequency to play its own sound and the given volume: how do I attain this simple task without going deep in the Core Audio technology?

    Thanks, Fabrizio

  13. Radhakrishnan
    Posted April 9, 2011 at 9:35 am | Permalink

    When I use Remote IO to record and play back, I always get noise. Some suggested I switch off the speaker, but I don’t know where to do that. Please explain what the problem could be?

  14. Max
    Posted April 27, 2011 at 1:10 pm | Permalink

    Awesome thank you so much for this! After messing round with AudioQueue for days attempting to make it responsive, I managed to get things going with RemoteIO in a couple of hours. In comparison it’s been a breeze.

    Much appreciated thanks!

  15. Posted April 30, 2011 at 2:28 pm | Permalink

    Awesome!! Thanks for your great work!! I’ve been working on this for ages!! The only problem left is: how can I keep playing the recording (realtime) in the background?? Any ideas?

  16. seth
    Posted May 5, 2011 at 8:19 am | Permalink

    Is there any sample code on how to add start/stop recording to file? I’m aiming at taking the playback buffer and saving it to file with a start/stop record button. I’m confused about how to go about getting the buffer to a saved file though.

    • Posted May 5, 2011 at 11:15 am | Permalink

      The best example is the IOHost example available from the WWDC session.

      • seth
        Posted May 5, 2011 at 6:24 pm | Permalink

        And how/where can I find/get that?

      • seth
        Posted May 5, 2011 at 9:04 pm | Permalink

        I found the WWDC session that shows IOHost, but IOHost only shows panning of audio, it doesn’t relate to saving audio to a file at all.

        • Posted May 6, 2011 at 8:18 am | Permalink

          Any specific reason why you want to record and store in file using Remote IO?

          Is there any specific reason you are not opting AVAudioRecorder?

          • seth
            Posted May 6, 2011 at 8:30 am | Permalink

            Yeah, AVAudioRecorder records input only whereas I’m trying to record output. A lot of music audio applications seem to have this ability where a user creates beats/sequences/loops, etc and is able then to render to file. This can’t be done with AVAudioRecorder though, and since the buffer is already being filled by RemoteIO/Audio Units, I imagine somehow the buffer can be passed to file, but I’m confused in how to do so.

  17. Ms
    Posted June 2, 2011 at 1:54 pm | Permalink

    Hi, how do I to play the recorded sound?

  18. Akshay
    Posted June 9, 2011 at 6:24 am | Permalink

    Hi Michael,

    I tried using the RemoteIO unit but I do not get any output. What I want to achieve is simple – hear through the speaker whatever I speak into the microphone.

    I have uploaded my code to Pastebin : http://pastebin.com/UX9vVpbr

    Please can you have a look at it and let me know any hints to make it work?

    Thanks, Akshay.

    • Posted June 9, 2011 at 11:11 am | Permalink

      Hi Akshay,

      • Your stream format is invalid – on line 93 you’re defining bytesPerPacket as being the same as bytesPerSample, but for a stereo format. You need bytesPerPacket to be bytesPerSample * number of channels.
      • You haven’t connected the nodes in the graph up – you need to use AUGraphConnectNodeInput to make a connection from the input of the remote IO node, to its output.
      • Akshay
        Posted June 9, 2011 at 11:40 am | Permalink

        Hi Michael,

        Thanks for your input ! That pin pointed the problem.

        However, I made the following changes to make it work:

        1. stereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;

        2. Added AUGraphConnectNodeInput(mProcessingGraph, mIONode, 1, mIONode, 0);

        3. Did not change the bytesPerPacket value.

        And it worked!!

        • Posted June 9, 2011 at 11:59 am | Permalink

          Glad to hear you got it sorted.

          Beware though that if you specify an invalid audio stream format, as you have, it’s not at all guaranteed to work on all devices, even if the device you’re currently testing on can figure out what you mean. It’s always safest to just provide a valid value, instead of taking a stab in the dark and hoping for the best – just multiply your bytesPerPacket by two, and it’ll be much safer, trust me.

          • Akshay
            Posted June 9, 2011 at 6:06 pm | Permalink

            Hi Michael,

            If I change to bytesPerPacket = bytesPerSample*2

            I do not get any audio.

            Any idea why?

          • Posted June 9, 2011 at 6:17 pm | Permalink

            Oh, yes, you have to set bytesPerFrame to bytesPerSample*2 as well

  19. Akshay
    Posted June 9, 2011 at 6:27 pm | Permalink

    Still the same result…

    • Posted June 9, 2011 at 6:33 pm | Permalink

      Ah, I hate mucking about with these settings; it’s always finnicky. I suggest either sticking at it until you get the settings right (maybe I missed something), or take the easier path of grabbing a sample format from the sample projects and using that. Here’s the format I use:

          // Linear PCM, stereo, noninterleaved stream at the hardware sample rate.
          AudioStreamBasicDescription audioDescription;
          memset(&audioDescription, 0, sizeof(audioDescription));
          audioDescription.mFormatID          = kAudioFormatLinearPCM;
          audioDescription.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
          audioDescription.mChannelsPerFrame  = 2;
          audioDescription.mBytesPerPacket    = sizeof(LPAudioSampleType)*audioDescription.mChannelsPerFrame;
          audioDescription.mFramesPerPacket   = 1;
          audioDescription.mBytesPerFrame     = sizeof(LPAudioSampleType)*audioDescription.mChannelsPerFrame;
          audioDescription.mBitsPerChannel    = 8 * sizeof(SInt16);
          audioDescription.mSampleRate        = 44100;
  20. Akshay
    Posted June 12, 2011 at 6:20 am | Permalink

    Hi Michael,

    I am trying to achieve this:

    Input from Mic and a guitar file ————–> Multichannel Mixer Unit –> Speaker

    I get both of them on speaker.. but with lot of noise with audio coming from mic.

    From the debugging and analyzing the code, I found that:

    1. For Input from Mic using VoiceProcessing AudioUnit, ASBD requires following parameter set as mStereoStreamFormatForVPUnit.mFormatFlags = kAudioFormatFlagsCanonical; mStereoStreamFormatForVPUnit.mBytesPerPacket = bytesPerSample * 2; mStereoStreamFormatForVPUnit.mBytesPerFrame = bytesPerSample * 2;

    2. And For Guitar file the ASBD requires following parameters set mStereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical; mStereoStreamFormat.mBytesPerPacket = bytesPerSample; mStereoStreamFormat.mBytesPerFrame = bytesPerSample;

    If any of the above values are altered, the graph initialization fails.

    And providing mStereoStreamFormat to mixer input to 2 buses gives a lot of noise. I have uploaded my code at “http://pastebin.com/N1zfFmqz” In this code if you change (1) (2) (3) and (4) to mStereoStreamFormatForVPUnit, The graph initialization would give error. Can you please throw some light on this?

    Help appreciated. Thanks, Akshay Shah.

    • Akshay
      Posted June 13, 2011 at 6:17 am | Permalink

      Hi Michael,

      Can you please help me on this? Appreciate your help..

      Thanks, Akshay Shah.

    • Posted June 13, 2011 at 11:35 am | Permalink

      Hi Akshay,

      Off the top of my head, I think you should be setting kAudioUnitProperty_StreamFormat for the VP unit’s input element/output bus, to tell it what format you want the incoming audio in. Also, it’s mono, so you shouldn’t need to be specifying a stereo format, unless something else is wrong. I’d be experimenting with those stream formats some more, but other than that, I can’t think of anything else.

      If all else fails, perhaps try posting the question on one of the dev forums where more people’ll see it. I’m currently driving from Scotland to Wales so I’m a bit distracted currently =)

      • Akshay
        Posted June 14, 2011 at 6:16 pm | Permalink

        What is mono? and where is it specified?

        I assume to keep both the guitar as well as the input audio to be stereo.

  21. airinys
    Posted July 13, 2011 at 11:27 pm | Permalink

    Crystal clear. Definitely the best kick off.

    Thanks.

  22. sirajudeen
    Posted July 20, 2011 at 7:06 am | Permalink

    Thanks for the good example can we play wav file using audio queue method

  23. Havard
    Posted August 19, 2011 at 11:25 pm | Permalink

    Hi.

    I notice in your example that you are using just one audio-unit to do both capture and playout.

    I have been experimenting with opening up two audio-units, one for capture and one for playout, but following your approach, I only get static on my capture. (The playout is OK).

    Do you think it is basically unsupported to use two audio-units for this? Should I only use one? Can you make your example work with two?

    Thanks!

    • Posted August 21, 2011 at 10:46 am | Permalink

      Hi Havard,

      Honestly, I’ve never seen two audio units used for the one device – I doubt it’s supported usage. Why not just use one, the normal way?

  24. Seth
    Posted August 28, 2011 at 5:22 am | Permalink

    Any chance someone could post an example on how to get this working with background audio (when the screen is locked and when the app is closed in the background)?

  25. Posted September 19, 2011 at 4:21 pm | Permalink

    How do I modify this sample to play a .caf file?

  26. Posted September 29, 2011 at 4:31 pm | Permalink

    Recording from mic input appears to be broken on devices running iOS 5. In recordingCallback, AudioUnitRender returns an OSStatus of -50.

  27. James
    Posted October 11, 2011 at 6:29 am | Permalink

    Hi Michael, This is a little off-topic, apologies. I’m trying to avoid hard-coding samplerate values in my AU. Do you know how to query the hosting DAW to obtain the current samplerate setting? Sorry if this is an obvious question – I can’t find documentation on this anywhere! Cheers

    • Posted October 11, 2011 at 10:14 am | Permalink

      Hi James,

      kAudioSessionProperty_CurrentHardwareSampleRate is what you want (use it with AudioSessionGetProperty)
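
      For example (just a sketch – assuming the audio session is already initialised):

          Float64 hardwareSampleRate;
          UInt32 size = sizeof(hardwareSampleRate);
          AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                                  &size, &hardwareSampleRate);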

      • James
        Posted October 11, 2011 at 2:09 pm | Permalink

        Ah great! However, I’m writing my audio unit in c++ as purely an audio unit, do you know how to access kAudioSessionProperty_CurrentHardwareSampleRate in that context?

        • Posted October 11, 2011 at 2:11 pm | Permalink

          Oh, if you’re writing an audio unit, I can’t help you – I’ve only got experience with using audio units. Maybe you should seek help in the apple dev forums?

          • James
            Posted October 11, 2011 at 2:15 pm | Permalink

            Okay Michael, thanks again! I’ll check there…

  28. muzzkat
    Posted October 27, 2011 at 5:17 pm | Permalink

    Is it possible capture the OUTPUT (say from the iPod player or another Audio App) and direct that through my AudioUnit graph?

    Currently I have a AUFilePlayer->Mixer->iPodSpeed->RemoteIO-Output… this works well, but I want to use files from the iPod Library, or another app.

    Thanks

    • Posted October 27, 2011 at 5:36 pm | Permalink

      I’m afraid not, muzzkat, no. The only way to capture device output is to use a loopback audio device to run the output back through into the input.

  29. Jeremy
    Posted November 13, 2011 at 12:52 am | Permalink

    My code seems to work on the simulator but not on a device and I cannot work out why. AudioUnitRender returns -50 on a device. I am pretty sure it has something to do with the buffers but I am not sure. My init code:

    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100.0;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    And my render code:

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mData = NULL;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16) * 2;

    OSStatus status = AudioUnitRender(viewController.ioUnit, 
                             ioActionFlags, 
                             inTimeStamp, 
                             1, 
                             inNumberFrames, 
                             &bufferList);
    

    • Posted November 14, 2011 at 3:32 pm | Permalink

      Ah, -50, my favourite error ;-)

      Firstly, I notice that you’re setting mDataByteSize of the render buffer to inNumberFrames * sizeof(SInt16) * 2 – it may make no difference, but that implies stereo audio (that is, frames * bytes per sample * 2 samples per frame). Either set your mChannelsPerFrame to 2 if stereo audio is what you actually want, or get rid of the * 2.

      Also, it couldn’t hurt to make absolutely certain viewController.ioUnit is initialised.

      Otherwise, if that doesn’t do it, perhaps it’s worth asking on the Core Audio mailing list, where they’re always very helpful.

  30. Posted November 23, 2011 at 1:06 pm | Permalink

    Hi Michael, First of all, thanks for the great article.

    Just want to tell that when you’re finished it is also good to call AudioComponentInstanceDispose(audioUnit);