The previous post covered playing an audio file with AudioUnit; this one covers how to record audio with AudioUnit.
The two processes are quite similar; the main difference is in how the callback is used.
1. Initialization
Initializing an AudioUnit is fairly verbose and there is more than one way to do it. Here we take a relatively simple route.
AudioComponentDescription outputUnitDesc; // describe the unit we want; the fields below select its type
memset(&outputUnitDesc, 0, sizeof(AudioComponentDescription));
outputUnitDesc.componentType = kAudioUnitType_Output;          // an output-type unit
outputUnitDesc.componentSubType = kAudioUnitSubType_RemoteIO;  // RemoteIO: gives access to the mic and speaker
outputUnitDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
outputUnitDesc.componentFlags = 0;
outputUnitDesc.componentFlagsMask = 0;
AudioComponent outComponent = AudioComponentFindNext(NULL, &outputUnitDesc);
OSStatus status = AudioComponentInstanceNew(outComponent, &recordUnit); // recordUnit is an AudioUnit instance variable
Next we configure the AudioUnit's properties, all through the AudioUnitSetProperty interface. On RemoteIO the input element (the microphone, bus 1) is disabled by default, so it has to be enabled before anything can be recorded:
UInt32 enableInput = 1; // turn on recording on the input bus (bus 1)
status = AudioUnitSetProperty(recordUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enableInput, sizeof(enableInput));
AudioStreamBasicDescription recordFormat;
memset(&recordFormat, 0, sizeof(recordFormat));
recordFormat.mSampleRate = 44100;
recordFormat.mFormatID = kAudioFormatLinearPCM;
recordFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; // 16-bit signed, packed PCM
recordFormat.mFramesPerPacket = 1;
recordFormat.mChannelsPerFrame = 1; // mono
recordFormat.mBitsPerChannel = 16;
recordFormat.mBytesPerFrame = recordFormat.mBytesPerPacket = 2; // 1 channel × 16 bits = 2 bytes
status = AudioUnitSetProperty(recordUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &recordFormat, sizeof(recordFormat));
This sets the ASBD (AudioStreamBasicDescription) for the unit: it describes the format of the data coming out of the input element (bus 1), which is why the scope is kAudioUnitScope_Output.
Set the recording callback:
AURenderCallbackStruct recordCallback;
recordCallback.inputProcRefCon = (__bridge void * _Nullable)(self);
recordCallback.inputProc = RecordCallback; // the callback function, defined below
status = AudioUnitSetProperty(recordUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 1, &recordCallback, sizeof(recordCallback));
if (status != noErr) {
    NSLog(@"AURenderCallbackStruct error, ret: %d", (int)status);
}
The recorded audio data will be obtained inside this callback.
Set up the AudioBufferList that will receive the samples:
UInt32 numberBuffers = 1; // mono, interleaved: one buffer is enough
UInt32 bufferSize = 2048; // 1024 frames of 16-bit mono audio
bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList)); // instance variable; holds one AudioBuffer
bufferList->mNumberBuffers = numberBuffers;
bufferList->mBuffers[0].mData = malloc(bufferSize);
bufferList->mBuffers[0].mDataByteSize = bufferSize;
bufferList->mBuffers[0].mNumberChannels = 1;
Finally, initialize the AudioUnit:
OSStatus result = AudioUnitInitialize(recordUnit);
2. Writing the recording callback
Once recording starts, the callback registered above is invoked. Inside it we pull the audio data out and hand it on for the next step:
playing it back, encoding it, and so on.
static OSStatus RecordCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData) {
    AudioUnitRecordController *self = (__bridge AudioUnitRecordController *)inRefCon;
    if (inNumberFrames > 0) {
        self->bufferList->mNumberBuffers = 1;
        // Reset the byte size: AudioUnitRender overwrites it with the actual
        // rendered length on every call, so it must be restored each time.
        self->bufferList->mBuffers[0].mDataByteSize = 2048;
        OSStatus status = AudioUnitRender(self->recordUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, self->bufferList);
        if (status != noErr) {
            NSLog(@"RecordCallback error is %d", (int)status);
            return status;
        }
        // Append the freshly rendered PCM bytes to an NSMutableData instance variable.
        [self->pcmData appendBytes:self->bufferList->mBuffers[0].mData
                            length:self->bufferList->mBuffers[0].mDataByteSize];
    }
    return noErr;
}
Inside this callback, AudioUnitRender fetches the recorded data; the audio samples land in the AudioBufferList, from which they can be copied out.