Audio Capture and Playback on iOS

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"概述","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 音频在网络传输中大致会经过以下的步骤,采集->前处理->编码->网络->解码->后处理->播放。在移动端,主要任务是音频的采集播放,编解码,以及音频的处理。苹果公司设计了Core Audio解决方案来完成音频在ios端的任务。CoreAudio设计了3个不同层次的API,如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/09/0984f50b34ae7c578e6b7e54b86773a4.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上图中Low-Level Services主要是关于音频的驱动和硬件,本文主要讲解3个常用的API:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio Unit Services: ios中音频底层API,主要用来实现音频pcm数据的采集播放","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio Converter Services:用于音频格式的转换,包含音频的编解码,pcm文件格式的转换","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio File Stream Services:用在流播放中,用于读取音频信息,分析音频帧。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过这3个api我们就能在ios上打通音频链路。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 在ios上,有着一个管理着app如何使用音频的单例,那就是AVAudioSession,了解如何在ios端进行采集播放,首先要了解这个单例如何使用。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" AVAudioSession 是一个只在iOS 上,Mac OS X 没有的API,用途是用来描述目前的App 打算如何使用audio,以及我们的App与其他App之间在audio 这部分应该是怎样的关系。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/06/065a0eb0cfeaa61d65948effc52d0d82.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如何使用AVAudioSession来管理我们app的audio呢?大致分为以下几个步骤:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1)通过单例获取系统中的AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"AVAudioSession* session= [AVAudioSession sharedInstance] ; ","attrs":{}}],"attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2) 
设置","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"AVAudioSession","attrs":{}}],"attrs":{}},{"type":"text","text":"的类别与模式,确定当前app如何使用audio","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3)配置音频采样率,音频buffer大小等","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4)添加AVAudioSession通知,例如音频中断与硬件线路改变","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"5)激活AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"[session setActive:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"YES","attrs":{}},{"type":"text","text":" error:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"nil","attrs":{}},{"type":"text","text":"];","attrs":{}}],"attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上述的第二步决定了当前app如何使用audio的核心,对应的API为:","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"text"},"content":[{"type":"text","text":"/// Set session category and mode with options.\n- (BOOL)setCategory:(AVAudioSessionCategory)category\n\t\t\t mode:(AVAudioSessionMode)mode\n\t\t\toptions:(AVAudioSessionCategoryOptions)options\n\t\t\t error:(NSError **)outError;","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"AVAudioSession中Category","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AVAudioSession中Category定义了音频使用的主场景,IOS中现在定义了七种,如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/81/81544d33392d4398543ab89486625552.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过一些app常用场景来举例说明这些category的使用:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AVAudioSessionCategoryAmbient和AVAudioSessionCategorySoloAmbient可应用于手机游戏等,缺失语音并不会影响这个app的核心功能,他们两者的区别是AVAudioSessionCategoryAmbient可以与其他app进行混音播放,可以边玩游戏边听音乐;AVAudioSessionCategoryPlayback常应用于音乐播放器如网易云音乐中;AVAudioSessionCategoryRecord常应用于各种录音软件中;AVAudioSessionCategoryPlayAndRecord则运用于voip电话中;最后两种实在不常见,我也没有使用过。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"AVAudioSession中Mode","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Mode可以对Category进行再设置,同样有七种mode来定制Category,不同的mode兼容不同的Category,兼容方式如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/2f/2f35bce3d63857d6dcdd3c95876b569c.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AudioSessionMode大多都与录音相关,不同的mode 
### Options in AVAudioSession

iOS provides four options for finer-grained control:

![](https://static001.geekbang.org/infoq/23/234840d7111fb84aeb7f1602282ee272.png)

The three most commonly used are:

1. AVAudioSessionCategoryOptionMixWithOthers: whether to mix with other audio apps; false by default. Under AVAudioSessionCategoryPlayAndRecord and AVAudioSessionCategoryMultiRoute it allows other apps to keep playing in the background while the current app records or plays. Under AVAudioSessionCategoryPlayback, the current app keeps playing even when the Ring/Silent switch is set to silent.

2. AVAudioSessionCategoryOptionDuckOthers: the system lowers ("ducks") other apps' volume; for example, AMap lowers music playback while speaking a navigation prompt. Other apps stay ducked until the current app's audio session is deactivated. Enabling this option implicitly sets AVAudioSessionCategoryOptionMixWithOthers.

3. AVAudioSessionCategoryOptionDefaultToSpeaker: route output to the built-in speaker by default; only valid with AVAudioSessionCategoryPlayAndRecord.
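Putting steps 2, 3, and 5 together, a minimal sketch of a VoIP-style session configuration might look like this; the concrete category, mode, option, and buffer-duration values are illustrative choices, not the only valid ones:

```objectivec
#import <AVFoundation/AVFoundation.h>

// A minimal sketch of configuring AVAudioSession for a VoIP-style app.
// The category/mode/options and the ~10 ms buffer duration are illustrative.
static void ConfigureAudioSession(void) {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    // Step 2: category + mode + options decide how the app uses audio.
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
                    mode:AVAudioSessionModeVoiceChat
                 options:AVAudioSessionCategoryOptionDefaultToSpeaker
                   error:&error];

    // Step 3: preferred hardware parameters (the system may adjust them).
    [session setPreferredSampleRate:44100 error:&error];
    [session setPreferredIOBufferDuration:0.01 error:&error]; // ~10 ms

    // Step 5: activate the session.
    [session setActive:YES error:&error];
    if (error) {
        NSLog(@"AVAudioSession setup failed: %@", error);
    }
}
```

Step 4, registering notifications, is covered next.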
","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以取值为:AVAudioSessionInterruptionWasSuspendedKey为true表示当前app暂停,false表示被其他app中断","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 当音频路由改变时:当用户插拔耳机或链接蓝牙耳机,则IOS的音频线路便发生了切换,此时IOS系统会发生一个","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"AVAudioSessionRouteChangeNotification,","attrs":{}},{"type":"text","text":"通过监听该通知实现音频线路切换的回调。IOS定义了八种路由改变的原因 如下:","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"text"},"content":[{"type":"text","text":"typedef NS_ENUM(NSUInteger, AVAudioSessionRouteChangeReason) {\n /// The reason is unknown.\n AVAudioSessionRouteChangeReasonUnknown = 0,\n\n //发现有新的设备可用 比如蓝牙耳机或有限耳机的插入\n AVAudioSessionRouteChangeReasonNewDeviceAvailable = 1,\n\n //旧设备不可用 比如耳机的拔出\n AVAudioSessionRouteChangeReasonOldDeviceUnavailable = 2,\n\n // AVAudioSession的category改变\n AVAudioSessionRouteChangeReasonCategoryChange = 3,\n\n // APP修改输出设备\n AVAudioSessionRouteChangeReasonOverride = 4,\n\n // 设备唤醒\n AVAudioSessionRouteChangeReasonWakeFromSleep = 6,\n \n // 当前的路由不适配AVAudioSession的category\n AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory = 7,\n\n // 路由的设置改变了\n AVAudioSessionRouteChangeReasonRouteConfigurationChange = 8\n};","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 在监听事件的回调中处理对于各种路由改变的原因进行处理:常见的处理逻辑为当AVAudioSessionRouteChangeReasonOldDeviceUnavailable发生时,检测是否耳机被拔出,若耳机被拔出,则停止播放,再次使用外放播放时,系统的音量不会出现巨大的改变。耳机被拔出时,正在使用耳机的麦克风录音的情况下应该停止录音。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"当AVAudioSessionRouteChangeReasonNewDeviceAvailable发生时,检测是否耳机被插入,若耳机被插入,则配置sdk是否进行回声消除等模块。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"配置好AVAudioSession后,我们就该使用AudioUnit进行音频的采集播放","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AudioUnit","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"IOS中提供了七种AuidoUnit来满足四种不同场景的需求。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/1b/1b5c39484d183c1bab371f8435fd6dd7.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文中使用audiounit来进行采集播放,只使用到了其中的I/O功能。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" Audio 
With the AVAudioSession configured, we can use AudioUnit for capture and playback.

## AudioUnit

iOS provides seven audio units covering four scenario types:

![](https://static001.geekbang.org/infoq/1b/1b5c39484d183c1bab371f8435fd6dd7.png)

This article uses an audio unit only for capture and playback, i.e. only its I/O function.

An audio unit is organized into scopes and elements. For I/O they are used as follows:

![](https://static001.geekbang.org/infoq/f3/f3a80c9687e17e7d90ad52c10d3971ed.png)

With an I/O unit the elements are fixed and map directly to hardware: element 1's input scope is the mic, element 0's output scope is the speaker, and the yellow parts in the figure are what we deal with. Read the figure as: the system captures audio from the mic into element 1's input scope; our app pulls the input data from element 1's output scope; after processing, the app feeds element 0's input scope; element 0's output scope takes that data and plays it through the speaker.

Using an AudioUnit involves the following steps:

1) Describe the audio unit to use, as an AudioComponentDescription:

```objectivec
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;          // I/O unit
ioUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;  // RemoteIO for device I/O
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
```

2) Obtain an audio unit instance, usually via AudioComponentFindNext and AudioComponentInstanceNew:

```objectivec
// Find the audio component matching the description
AudioComponent foundIoUnitReference = AudioComponentFindNext(NULL, &ioUnitDescription);

AudioUnit audioUnit;
// Instantiate it
AudioComponentInstanceNew(foundIoUnitReference, &audioUnit);
```

3) Set the audio unit's properties with AudioUnitSetProperty:

```objectivec
// Enable the mic.
// In I/O mode: enable input on element 1's input scope.
// kAudioOutputUnitProperty_EnableIO enables/disables I/O; by default output
// is enabled and input is disabled.
int inputEnable = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input, // input scope
                              kInputBus,             // element 1
                              &inputEnable,
                              sizeof(inputEnable));
CheckError(status, "setProperty EnableIO error");

// Set the capture format on element 1's output scope.
AudioStreamBasicDescription inputStreamDesc; // fill in the desired PCM format first
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       kInputBus,
                                       &inputStreamDesc,
                                       sizeof(inputStreamDesc));
CheckError(status, "setProperty StreamFormat error");

// Enable the speaker.
int outputEnable = 1;
result = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output, // output scope
                              kOutputBus,             // element 0
                              &outputEnable,
                              sizeof(outputEnable));

// Set the playback format on element 0's input scope.
AudioStreamBasicDescription streamDesc;
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       kOutputBus,
                                       &streamDesc,
                                       sizeof(streamDesc));
CheckError(status, "SetProperty StreamFormat failure");
```
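The stream-format code above assumes inputStreamDesc has already been filled in. A minimal sketch for 16-bit interleaved mono linear PCM at 44.1 kHz (the values are illustrative) could be:

```objectivec
// Illustrative ASBD: 44.1 kHz, 16-bit, packed, interleaved, mono linear PCM.
AudioStreamBasicDescription inputStreamDesc = {0};
inputStreamDesc.mSampleRate       = 44100;
inputStreamDesc.mFormatID         = kAudioFormatLinearPCM;
inputStreamDesc.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                                    kAudioFormatFlagIsPacked;
inputStreamDesc.mChannelsPerFrame = 1;
inputStreamDesc.mBitsPerChannel   = 16;
inputStreamDesc.mBytesPerFrame    = 2;  // 16 bits * 1 channel
inputStreamDesc.mFramesPerPacket  = 1;  // uncompressed PCM: 1 frame per packet
inputStreamDesc.mBytesPerPacket   = 2;  // mBytesPerFrame * mFramesPerPacket
```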
failure\");","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4)注册采集播放回调","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"objectivec"},"content":[{"type":"text","text":"/设置采集回调 input bus下的 output scope\n AURenderCallbackStruct inputCallBackStruce;\n inputCallBackStruce.inputProc = inputCallBackFun;\n inputCallBackStruce.inputProcRefCon = (__bridge void * _Nullable)(self);\n \n status = AudioUnitSetProperty(audioUnit,\n kAudioOutputUnitProperty_SetInputCallback,\n kAudioUnitScope_Output,\n kInputBus,\n &inputCallBackStruce,\n sizeof(inputCallBackStruce));\n CheckError(status, \"setProperty InputCallback error\");\n\n\t// 回调的静态函数\n\tstatic OSStatus inputCallBackFun( void * inRefCon,\n AudioUnitRenderActionFlags * ioActionFlags, //描述上下文信息\n const AudioTimeStamp * inTimeStamp, //采样时间戳\n UInt32 inBusNumber, //采样的总线数量\n UInt32 inNumberFrames, //多少帧的数据\n AudioBufferList * __nullable ioData)\n\t{ \n AudioRecord *recorder = (__bridge AudioRecord *)(inRefCon); //获取上下文\n return [recorder RecordProcessImpl:ioActionFlags stamp:inTimeStamp bus:inBusNumber numFrames:inNumberFrames]; //处理得到的数据\n }\n\n\t(OSStatus)RecordProcessImpl: (AudioUnitRenderActionFlags *)ioActionFlags\n stamp: (const AudioTimeStamp *)inTimeStamp\n bus: (uint32_t) inBusNumber\n numFrames: (uint32_t) inNumberFrames\n\t{\n uint32_t recordSamples = inNumberFrames *m_channels; // 采集了多少数据 int16\n if (m_recordTmpData != NULL) {\n delete [] m_recordTmpData;\n m_recordTmpData = NULL;\n }\n m_recordTmpData = new int8_t[2 * recordSamples]; \n memset(m_recordTmpData, 0, 2 * recordSamples);\n\n AudioBufferList bufferList;\n bufferList.mNumberBuffers = 1;\n bufferList.mBuffers[0].mData = m_recordTmpData;\n bufferList.mBuffers[0].mDataByteSize = 2*recordSamples;\n AudioUnitRender(audioUnit,\n ioActionFlags,\n inTimeStamp,\n kInputBus,\n inNumberFrames,\n &bufferList); \n AudioBuffer buffer = bufferList.mBuffers[0]; // 回调得到的数据\n int recordBytes = buffer.mDataByteSize;\n\n\t\t[dataWriter writeBytes:(Byte *)buffer.mData len:recordBytes]; //数据处理\n return noErr;\n\t}\n \n\n\n//设置播放回调 outputbus下的input scope\n AURenderCallbackStruct outputCallBackStruct;\n outputCallBackStruct.inputProc = outputCallBackFun;\n outputCallBackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);\n status = AudioUnitSetProperty(audioUnit,\n kAudioUnitProperty_SetRenderCallback,\n kAudioUnitScope_Input,\n kOutputBus,\n &outputCallBackStruct,\n sizeof(outputCallBackStruct));\n CheckError(status, \"SetProperty EnableIO failure\");\n\n\t//回调函数 \n static OSStatus outputCallBackFun( void * inRefCon,\n AudioUnitRenderActionFlags * ioActionFlags,\n const AudioTimeStamp * inTimeStamp,\n UInt32 inBusNumber,\n UInt32 inNumberFrames,\n AudioBufferList * __nullable ioData)\n {\n memset(ioData->mBuffers[0].mData, 0, ioData->mBuffers[0].mDataByteSize);\n // memset(ioData->mBuffers[1].mData, 0, ioData->mBuffers[1].mDataByteSize);\n AudioPlay *player = (__bridge AudioPlay *)(inRefCon);\n return [player PlayProcessImpl:ioActionFlags stamp:inTimeStamp bus:inBusNumber numFrames:inNumberFrames ioData:ioData];\n }\n\n\t//获取得到的数据\n - (OSStatus)PlayProcessImpl: (AudioUnitRenderActionFlags *)ioActionFlags\n stamp: (const AudioTimeStamp *)inTimeStamp\n bus: (uint32_t) inBusNumber\n numFrames: (uint32_t) inNumberFrames\n ioData:(AudioBufferList *)ioData\n {\n AudioBuffer buffer = ioData->mBuffers[0];\n int len = buffer.mDataByteSize; //需要的数据长度\n \n int readLen = [dataReader readData:len 
5) Start and stop the audio unit:

```objectivec
// The AVAudioSession must be configured and activated first.
NSError *error = nil;
[[AVAudioSession sharedInstance]
    setCategory:AVAudioSessionCategoryPlayAndRecord
    withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
          error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];

AudioOutputUnitStart(audioUnit); // start
AudioOutputUnitStop(audioUnit);  // stop

// Tear-down sequence:
OSStatus result = AudioOutputUnitStop(audioUnit);
result = AudioUnitUninitialize(audioUnit); // restore the unit to its uninitialized state
AudioComponentInstanceDispose(audioUnit);  // dispose of the audio unit
```

Some caveats when using AudioUnit:

1) The amount of audio data per iOS callback is not fixed, so set the I/O buffer duration (via setPreferredIOBufferDuration) to bound the maximum data read per callback. For example, at a 44.1 kHz sample rate, 16-bit samples, 1 channel, and a 0.01 s buffer duration, a callback carries at most 441 frames, i.e. 882 bytes; even if we ask for more, the system will not deliver more than that per callback.

2) Capture and playback callbacks on iOS arrive roughly every 10 ms, but the interval is not guaranteed to be exactly 10 ms.

3) When playing stereo audio, pay attention to the kAudioFormatFlagIsNonInterleaved flag in mFormatFlags.

4) Because the per-callback data size is not exact while the app typically exchanges audio with the audio unit in 10 ms chunks, both capture and playback need an audio buffer in between, as sketched below.
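As an illustration of caveat 4, a simple byte ring buffer (a hypothetical helper, not part of any system API) can bridge the size mismatch between the unit's callbacks and the app's 10 ms chunks:

```objectivec
// Illustrative single-producer/single-consumer byte ring buffer used to
// decouple callback sizes from the app's 10 ms processing chunks.
typedef struct {
    uint8_t *data;
    size_t   capacity;
    size_t   readPos;
    size_t   writePos;
    size_t   filled;
} RingBuffer;

static size_t RingBufferWrite(RingBuffer *rb, const uint8_t *src, size_t len) {
    size_t n = MIN(len, rb->capacity - rb->filled); // drop what does not fit
    for (size_t i = 0; i < n; i++) {
        rb->data[rb->writePos] = src[i];
        rb->writePos = (rb->writePos + 1) % rb->capacity;
    }
    rb->filled += n;
    return n;
}

static size_t RingBufferRead(RingBuffer *rb, uint8_t *dst, size_t len) {
    size_t n = MIN(len, rb->filled); // read at most what is buffered
    for (size_t i = 0; i < n; i++) {
        dst[i] = rb->data[rb->readPos];
        rb->readPos = (rb->readPos + 1) % rb->capacity;
    }
    rb->filled -= n;
    return n;
}
```

The capture callback writes into the ring and the app drains it in fixed 10 ms chunks (and conversely for playback). A production implementation would also need thread-safety appropriate to the real-time audio thread.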
## AudioConverter

AudioConverter handles audio format conversion on iOS: conversions between PCM formats with different bit depths, sample rates, and channel counts, as well as conversions between PCM and various compressed formats.

The AudioConverter functions and callbacks are:

![](https://static001.geekbang.org/infoq/06/06155d7d9630a2bf525ea9dc07d4306f.png)

AudioConverter is used as follows:

1) Determine the input and output formats and create the converter:

```objectivec
OSStatus AudioConverterNew(const AudioStreamBasicDescription *inSourceFormat,
                           const AudioStreamBasicDescription *inDestinationFormat,
                           AudioConverterRef _Nullable *outAudioConverter);

OSStatus AudioConverterNewSpecific(const AudioStreamBasicDescription *inSourceFormat,
                                   const AudioStreamBasicDescription *inDestinationFormat,
                                   UInt32 inNumberClassDescriptions,
                                   const AudioClassDescription *inClassDescriptions,
                                   AudioConverterRef _Nullable *outAudioConverter);
```

2) Convert the data, feeding input through a callback; packetized and non-interleaved data are supported. Input is supplied inside the AudioConverterComplexInputDataProc callback:

```objectivec
OSStatus AudioConverterFillComplexBuffer(AudioConverterRef inAudioConverter,
                                         AudioConverterComplexInputDataProc inInputDataProc,
                                         void *inInputDataProcUserData,
                                         UInt32 *ioOutputDataPacketSize,
                                         AudioBufferList *outOutputData,
                                         AudioStreamPacketDescription *outPacketDescription);

typedef OSStatus (*AudioConverterComplexInputDataProc)(AudioConverterRef inAudioConverter,
                                                       UInt32 *ioNumberDataPackets,  // number of input packets supplied
                                                       AudioBufferList *ioData,      // input data
                                                       AudioStreamPacketDescription * _Nullable *outDataPacketDescription, // descriptions of the supplied packets
                                                       void *inUserData);
```

3) Release the converter:

`OSStatus AudioConverterDispose(AudioConverterRef inAudioConverter);`
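A minimal sketch of driving a converter, here decoding one compressed packet to PCM; the ConverterContext struct and DecodeOnePacket helper are hypothetical, and the converter is assumed to have been created with AudioConverterNew from a compressed source format to a PCM destination format:

```objectivec
// Hypothetical context handed to the input callback: one pending encoded packet.
typedef struct {
    const void *packetData;
    UInt32 packetBytes;
    AudioStreamPacketDescription packetDesc; // mStartOffset=0, mDataByteSize=packetBytes
    BOOL consumed;
} ConverterContext;

// Input callback: hand the converter our single pending packet.
static OSStatus InputDataProc(AudioConverterRef inAudioConverter,
                              UInt32 *ioNumberDataPackets,
                              AudioBufferList *ioData,
                              AudioStreamPacketDescription * _Nullable *outDataPacketDescription,
                              void *inUserData)
{
    ConverterContext *ctx = (ConverterContext *)inUserData;
    if (ctx->consumed) {      // nothing left to supply
        *ioNumberDataPackets = 0;
        return -1;            // a nonzero status makes FillComplexBuffer return
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mData = (void *)ctx->packetData;
    ioData->mBuffers[0].mDataByteSize = ctx->packetBytes;
    ioData->mBuffers[0].mNumberChannels = 1;
    if (outDataPacketDescription) {
        *outDataPacketDescription = &ctx->packetDesc;
    }
    *ioNumberDataPackets = 1;
    ctx->consumed = YES;
    return noErr;
}

// Decode one packet, accepting up to 1024 output PCM frames.
static void DecodeOnePacket(AudioConverterRef converter, ConverterContext *ctx,
                            void *pcmOut, UInt32 pcmOutBytes)
{
    AudioBufferList outList;
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mData = pcmOut;
    outList.mBuffers[0].mDataByteSize = pcmOutBytes;
    outList.mBuffers[0].mNumberChannels = 1;

    UInt32 ioOutputPackets = 1024; // max PCM frames we can accept
    OSStatus status = AudioConverterFillComplexBuffer(converter,
                                                      InputDataProc,
                                                      ctx,
                                                      &ioOutputPackets,
                                                      &outList,
                                                      NULL);
    // On return, outList.mBuffers[0].mDataByteSize holds the decoded byte count;
    // check status against the sentinel returned by InputDataProc.
    (void)status;
}
```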
## AudioFileStream

AudioFileStream is used in streaming playback to read audio metadata (sample rate, channel count, bit rate, duration) and to parse audio frames. It works not only for local file playback but also for network audio streams. AudioFileStream only processes data; it does not touch the data source, so reading the data is up to you: from the network stream for online playback, from a file stream for local playback.

The AudioFileStream functions and callbacks are:

![](https://static001.geekbang.org/infoq/95/95f44ddd173f2a1f2dffcb97809c9659.png)

The formats AudioFileStream supports are: AIFF, AIFC, WAVE, CAF, NeXT, ADTS, MPEG Audio Layer 3, and AAC.

AudioFileStream is used as follows:

1) AudioFileStreamOpen registers the property-listener callback and the packets callback:

```objectivec
/*
1. A context object.
2. Property-listener callback, fired whenever the parser finds a property value in the stream.
3. Packets callback, fired whenever the parser has parsed audio frames.
4. A file-type hint. When the file information is incomplete, the hint helps
   AudioFileStream work around errors or gaps and still parse the file, so pass
   it when the file type is known; pass 0 if it cannot be determined.
5. Returns the AudioFileStreamID of the created parser; keep it as the argument
   for the subsequent calls.
*/
OSStatus AudioFileStreamOpen(void *inClientData,
                             AudioFileStream_PropertyListenerProc inPropertyListenerProc,
                             AudioFileStream_PacketsProc inPacketsProc,
                             AudioFileTypeID inFileTypeHint,
                             AudioFileStreamID _Nullable *outAudioFileStream);
```

2) AudioFileStreamParseBytes parses the data passed in and triggers the two callbacks:

```objectivec
/*
1. The AudioFileStreamID.
2. Length of the data to parse.
3. The data.
4. Whether this parse is contiguous with the previous one: pass 0 if contiguous,
   otherwise kAudioFileStreamParseFlag_Discontinuity. Typical discontinuity cases:
   1) after a seek, the data is not contiguous;
   2) it is advisable to pass kAudioFileStreamParseFlag_Discontinuity before
      parsing the first frame.
Returns an error status.
*/
extern OSStatus AudioFileStreamParseBytes(AudioFileStreamID inAudioFileStream,
                                          UInt32 inDataByteSize,
                                          const void *inData,
                                          UInt32 inFlags);
```
3) Parse the audio properties. The property listener fires many times; handle only the format information you need:

```objectivec
/*
1. The first parameter is the context object from the Open call.

2. inAudioFileStream is the AudioFileStreamID returned by Open, identifying the parser.

3. The third parameter is the ID of the property parsed in this callback. It means
   the value for this PropertyID is now available (e.g. data format, audio data
   offset); fetch it with AudioFileStreamGetProperty.

4. ioFlags is an out parameter indicating whether the property should be cached;
   set kAudioFileStreamPropertyFlag_PropertyIsCached if so, otherwise leave it unset.
*/
typedef void (*AudioFileStream_PropertyListenerProc)(void *inClientData,
                                                     AudioFileStreamID inAudioFileStream,
                                                     AudioFileStreamPropertyID inPropertyID,
                                                     UInt32 *ioFlags);

// Example:
static void ASPropertyListenerProc(void *inClientData,
                                   AudioFileStreamID inAudioFileStream,
                                   AudioFileStreamPropertyID inPropertyID,
                                   UInt32 *ioFlags)
{
    // this is called by audio file stream when it finds property values
    AudioStreamer *streamer = (AudioStreamer *)inClientData;
    [streamer handlePropertyChangeForFileStream:inAudioFileStream
                           fileStreamPropertyID:inPropertyID
                                        ioFlags:ioFlags];
}

- (void)handlePropertyChangeForFileStream:(AudioFileStreamID)inAudioFileStream
                     fileStreamPropertyID:(AudioFileStreamPropertyID)inPropertyID
                                  ioFlags:(UInt32 *)ioFlags
{
    @synchronized(self)
    {
        if (inPropertyID == kAudioFileStreamProperty_ReadyToProducePackets)
        {
            discontinuous = true; // ready to process packets
        }
        else if (inPropertyID == kAudioFileStreamProperty_DataOffset)
        {
            SInt64 offset;
            UInt32 offsetSize = sizeof(offset);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataOffset, &offsetSize, &offset); // audio data offset
        }
        else if (inPropertyID == kAudioFileStreamProperty_AudioDataByteCount)
        {
            UInt32 byteCountSize = sizeof(UInt64);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_AudioDataByteCount, &byteCountSize, &audioDataByteCount);
            fileLength = dataOffset + audioDataByteCount; // total file length
        }
        else if (inPropertyID == kAudioFileStreamProperty_DataFormat)
        {
            if (asbd.mSampleRate == 0) {
                UInt32 asbdSize = sizeof(asbd);
                err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &asbdSize, &asbd); // stream format
            }
        }
        else if (inPropertyID == kAudioFileStreamProperty_FormatList)
        {
            Boolean outWriteable;
            UInt32 formatListSize;
            err = AudioFileStreamGetPropertyInfo(inAudioFileStream, kAudioFileStreamProperty_FormatList, &formatListSize, &outWriteable); // property size

            AudioFormatListItem *formatList = malloc(formatListSize);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_FormatList, &formatListSize, formatList); // fetch the format list

            UInt32 itemCount = formatListSize / sizeof(AudioFormatListItem);
            for (UInt32 i = 0; i < itemCount; i++)
            {
                AudioStreamBasicDescription pasbd = formatList[i].mASBD;
                if (pasbd.mFormatID == kAudioFormatMPEG4AAC_HE ||
                    pasbd.mFormatID == kAudioFormatMPEG4AAC_HE_V2) {
                    asbd = pasbd;
                    break;
                }
            }
            free(formatList);
        }
    }
}
```

4) Parse the audio packets:

```objectivec
/*
1. The context object.

2. Size in bytes of the data in this callback.

3. Number of packets (frames) in this callback.

4. The parsed audio data.

5. An AudioStreamPacketDescription array recording, for each packet, its starting
   byte offset and its byte size.
*/
typedef void (*AudioFileStream_PacketsProc)(void *inClientData,
                                            UInt32 numberOfBytes,
                                            UInt32 numberOfPackets,
                                            const void *inInputData,
                                            AudioStreamPacketDescription *inPacketDescriptions);

// Example:
static void ASPacketsProc(void *inClientData,
                          UInt32 inNumberBytes,
                          UInt32 inNumberPackets,
                          const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{
    // this is called by audio file stream when it finds packets of audio
    AudioStreamer *streamer = (AudioStreamer *)inClientData;
    [streamer handleAudioPackets:inInputData
                     numberBytes:inNumberBytes
                   numberPackets:inNumberPackets
              packetDescriptions:inPacketDescriptions];
}

- (void)handleAudioPackets:(const void *)inInputData
               numberBytes:(UInt32)inNumberBytes
             numberPackets:(UInt32)inNumberPackets
        packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions
{
    @synchronized(self)
    {
        // we have successfully read the first packets from the audio stream,
        // so clear the "discontinuous" flag
        if (discontinuous) { // format parsing is complete
            discontinuous = false;
        }
        if (!audioQueue) {
            [self createQueue]; // create an AudioQueue for playback
        }
    }
    // A non-NULL inPacketDescriptions means VBR data; the second branch handles CBR data.
    if (inPacketDescriptions)
    {
        for (int i = 0; i < inNumberPackets; ++i)
        {
            SInt64 packetOffset = inPacketDescriptions[i].mStartOffset;
            SInt64 packetSize = inPacketDescriptions[i].mDataByteSize;
            // AudioQueue buffer management
            // ...

            AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
            memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)inInputData + packetOffset, packetSize);
        }
    }
    else
    {
        size_t offset = 0;
        while (inNumberBytes)
        {
            // AudioQueue buffer management (computes copySize and fillBufferIndex)
            // ...
            AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
            memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)inInputData + offset, copySize);
            inNumberBytes -= copySize;
            offset += copySize;
        }
    }
}
```

5) If necessary, call AudioFileStreamSeek to find the exact byte offset of a given frame.

6) Close the parser with AudioFileStreamClose.
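Putting the steps together, a minimal sketch of parsing a local file in fixed-size chunks; error handling is elided, path is an assumed NSString, and MyPropertyProc/MyPacketsProc stand for the callbacks from steps 3 and 4:

```objectivec
// Sketch: drive AudioFileStream over a local file in fixed-size chunks.
AudioFileStreamID stream = NULL;
OSStatus err = AudioFileStreamOpen((__bridge void *)self,
                                   MyPropertyProc,
                                   MyPacketsProc,
                                   kAudioFileMP3Type, // file-type hint; 0 if unknown
                                   &stream);

NSInputStream *input = [NSInputStream inputStreamWithFileAtPath:path];
[input open];
uint8_t buf[4096];
BOOL first = YES;
while ([input hasBytesAvailable]) {
    NSInteger n = [input read:buf maxLength:sizeof(buf)];
    if (n <= 0) break;
    // Mark the first parse as discontinuous, as recommended above.
    err = AudioFileStreamParseBytes(stream, (UInt32)n, buf,
                                    first ? kAudioFileStreamParseFlag_Discontinuity : 0);
    first = NO;
}
[input close];
AudioFileStreamClose(stream);
```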
## Summary

This article introduced the iOS audio capture and playback APIs: AVAudioSession governs how the app uses audio; AudioUnit captures and plays PCM audio data; AudioConverter encodes and decodes audio; AudioFileStream parses audio frames for streaming playback, which is especially useful for playing audio from a network stream. Taking local MP3 playback as an example: AudioFileStream parses the MP3 into audio frames, AudioConverter decodes the frames to PCM, and the PCM is handed to AudioUnit for playback. Capture is the same chain in reverse.

For iOS audio playback it is best to study some open-source iOS audio players, all available on GitHub, such as Douban's player [https://github.com/douban/DOUAudioStreamer](https://github.com/douban/DOUAudioStreamer). Finally, if there are any mistakes in this article, I would be happy to discuss them.

## References

- [iOS audio playback](http://msching.github.io/blog/2014/07/09/audio-in-ios-3/)
- [https://github.com/mattgallagher/AudioStreamer](https://github.com/mattgallagher/AudioStreamer)
- [https://www.jianshu.com/p/25188072a11a](https://www.jianshu.com/p/25188072a11a)
- [WebRTC series: audio session management](https://blog.csdn.net/netease_im/article/details/113875029?utm_medium=distribute.pc_relevant.none-task-blog-OPENSEARCH-10.control&dist_request_id=&depth_1-utm_source=distribute.pc_relevant.none-task-blog-OPENSEARCH-10.control)
- [https://developer.apple.com/library/archive/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html](https://developer.apple.com/library/archive/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html)
- [https://www.jianshu.com/p/fb0e5fb71b3c](https://www.jianshu.com/p/fb0e5fb71b3c)
- [A comprehensive solution to iOS audio problems in online classrooms](http://mp.weixin.qq.com/s?__biz=MzI1MzYzMjE0MQ==&mid=2247488032&idx=1&sn=a8e8948fcd043cd0124e8bfe26aa0784&chksm=e9d0d9c2dea750d45cb31e321c2206cc5e2c1d6a6ce1ea888432d7529660254df18325bb0582&mpshare=1&scene=1&srcid=0302VQAMfPTnksVVWajpjD31&sharer_sharetime=1614681590686&sharer_shareid=56acb924444b93ede624b545b0383c04#rd)
- [https://stackoverflow.com/questions/16841831/specifying-number-of-frames-to-process-in-audiounit-render-callback-on-ios](https://stackoverflow.com/questions/16841831/specifying-number-of-frames-to-process-in-audiounit-render-callback-on-ios)