Audio Capture and Playback on iOS

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"概述","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 音頻在網絡傳輸中大致會經過以下的步驟,採集->前處理->編碼->網絡->解碼->後處理->播放。在移動端,主要任務是音頻的採集播放,編解碼,以及音頻的處理。蘋果公司設計了Core Audio解決方案來完成音頻在ios端的任務。CoreAudio設計了3個不同層次的API,如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/09/0984f50b34ae7c578e6b7e54b86773a4.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上圖中Low-Level Services主要是關於音頻的驅動和硬件,本文主要講解3個常用的API:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio Unit Services: ios中音頻底層API,主要用來實現音頻pcm數據的採集播放","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio Converter Services:用於音頻格式的轉換,包含音頻的編解碼,pcm文件格式的轉換","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Audio File Stream Services:用在流播放中,用於讀取音頻信息,分析音頻幀。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過這3個api我們就能在ios上打通音頻鏈路。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 在ios上,有着一個管理着app如何使用音頻的單例,那就是AVAudioSession,瞭解如何在ios端進行採集播放,首先要了解這個單例如何使用。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" AVAudioSession 是一個只在iOS 上,Mac OS X 沒有的API,用途是用來描述目前的App 打算如何使用audio,以及我們的App與其他App之間在audio 這部分應該是怎樣的關係。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/06/065a0eb0cfeaa61d65948effc52d0d82.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如何使用AVAudioSession來管理我們app的audio呢?大致分爲以下幾個步驟:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1)通過單例獲取系統中的AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"AVAudioSession* session= [AVAudioSession sharedInstance] ; ","attrs":{}}],"attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2) 
設置","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"AVAudioSession","attrs":{}}],"attrs":{}},{"type":"text","text":"的類別與模式,確定當前app如何使用audio","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3)配置音頻採樣率,音頻buffer大小等","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4)添加AVAudioSession通知,例如音頻中斷與硬件線路改變","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"5)激活AVAudioSession","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"[session setActive:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"YES","attrs":{}},{"type":"text","text":" error:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"nil","attrs":{}},{"type":"text","text":"];","attrs":{}}],"attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上述的第二步決定了當前app如何使用audio的核心,對應的API爲:","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"text"},"content":[{"type":"text","text":"/// Set session category and mode with options.\n- (BOOL)setCategory:(AVAudioSessionCategory)category\n\t\t\t mode:(AVAudioSessionMode)mode\n\t\t\toptions:(AVAudioSessionCategoryOptions)options\n\t\t\t error:(NSError **)outError;","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"AVAudioSession中Category","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AVAudioSession中Category定義了音頻使用的主場景,IOS中現在定義了七種,如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/81/81544d33392d4398543ab89486625552.png","alt":null,"title":"","style":[{"key":"width","value":"100%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通過一些app常用場景來舉例說明這些category的使用:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AVAudioSessionCategoryAmbient和AVAudioSessionCategorySoloAmbient可應用於手機遊戲等,缺失語音並不會影響這個app的核心功能,他們兩者的區別是AVAudioSessionCategoryAmbient可以與其他app進行混音播放,可以邊玩遊戲邊聽音樂;AVAudioSessionCategoryPlayback常應用於音樂播放器如網易雲音樂中;AVAudioSessionCategoryRecord常應用於各種錄音軟件中;AVAudioSessionCategoryPlayAndRecord則運用於voip電話中;最後兩種實在不常見,我也沒有使用過。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"AVAudioSession中Mode","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Mode可以對Category進行再設置,同樣有七種mode來定製Category,不同的mode兼容不同的Category,兼容方式如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/2f/2f35bce3d63857d6dcdd3c95876b569c.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AudioSessionMode大多都與錄音相關,不同的mode 
### AVAudioSession Options

iOS defines four options for finer-grained control of the audio session:

![AVAudioSession options](https://static001.geekbang.org/infoq/23/234840d7111fb84aeb7f1602282ee272.png)

The three most commonly used are:

1. AVAudioSessionCategoryOptionMixWithOthers: whether to mix with other apps' audio; defaults to false. Under AVAudioSessionCategoryPlayAndRecord and AVAudioSessionCategoryMultiRoute, it allows other apps to keep playing in the background even while our app records or plays. Under AVAudioSessionCategoryPlayback, it allows our app to keep playing even when the device is silenced.

2. AVAudioSessionCategoryOptionDuckOthers: the system lowers ("ducks") other apps' volume, as when Amap lowers the music volume while speaking a navigation prompt. Other apps stay ducked until our audio session is deactivated. Enabling this option implicitly enables AVAudioSessionCategoryOptionMixWithOthers.

3. AVAudioSessionCategoryOptionDefaultToSpeaker: route output to the built-in speaker by default; only compatible with AVAudioSessionCategoryPlayAndRecord.

With the category, mode, and options set, we still need to react when the audio state changes while the app is running. For that we register AVAudioSession notifications.

### AVAudioSession Notifications

The most common events to observe are audio interruptions and audio route changes. Both are delivered as system notifications, so observing them is how we handle audio state changes.

Audio interruption: while our app is playing audio, opening another app that takes over the audio session, or an incoming ringtone, interrupts our audio. We then need to pause our playback UI and perform related cleanup. To catch this event, the system provides AVAudioSessionInterruptionNotification. Its userInfo contains three keys:

1) AVAudioSessionInterruptionTypeKey: either AVAudioSessionInterruptionTypeBegan or AVAudioSessionInterruptionTypeEnded.

2) AVAudioSessionInterruptionOptionKey: may be AVAudioSessionInterruptionOptionShouldResume = 1.

3) AVAudioSessionInterruptionWasSuspendedKey: true means our app was suspended; false means we were interrupted by another app.
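A handler for this notification might look like the following sketch; `pausePlayback` and `resumePlayback` are placeholders for the app's own player logic:

```objectivec
// Sketch of an interruption handler; -pausePlayback / -resumePlayback are
// placeholders for the app's own logic.
- (void)handleInterruption:(NSNotification *)notification {
    NSDictionary *info = notification.userInfo;
    AVAudioSessionInterruptionType type =
        [info[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];

    if (type == AVAudioSessionInterruptionTypeBegan) {
        // Interruption began: the session was deactivated; pause playback and UI.
        [self pausePlayback];
    } else if (type == AVAudioSessionInterruptionTypeEnded) {
        AVAudioSessionInterruptionOptions options =
            [info[AVAudioSessionInterruptionOptionKey] unsignedIntegerValue];
        if (options & AVAudioSessionInterruptionOptionShouldResume) {
            // The system hints that resuming is appropriate: reactivate and resume.
            [[AVAudioSession sharedInstance] setActive:YES error:nil];
            [self resumePlayback];
        }
    }
}
```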
","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可以取值爲:AVAudioSessionInterruptionWasSuspendedKey爲true表示當前app暫停,false表示被其他app中斷","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 當音頻路由改變時:當用戶插拔耳機或鏈接藍牙耳機,則IOS的音頻線路便發生了切換,此時IOS系統會發生一個","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"AVAudioSessionRouteChangeNotification,","attrs":{}},{"type":"text","text":"通過監聽該通知實現音頻線路切換的回調。IOS定義了八種路由改變的原因 如下:","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"text"},"content":[{"type":"text","text":"typedef NS_ENUM(NSUInteger, AVAudioSessionRouteChangeReason) {\n /// The reason is unknown.\n AVAudioSessionRouteChangeReasonUnknown = 0,\n\n //發現有新的設備可用 比如藍牙耳機或有限耳機的插入\n AVAudioSessionRouteChangeReasonNewDeviceAvailable = 1,\n\n //舊設備不可用 比如耳機的拔出\n AVAudioSessionRouteChangeReasonOldDeviceUnavailable = 2,\n\n // AVAudioSession的category改變\n AVAudioSessionRouteChangeReasonCategoryChange = 3,\n\n // APP修改輸出設備\n AVAudioSessionRouteChangeReasonOverride = 4,\n\n // 設備喚醒\n AVAudioSessionRouteChangeReasonWakeFromSleep = 6,\n \n // 當前的路由不適配AVAudioSession的category\n AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory = 7,\n\n // 路由的設置改變了\n AVAudioSessionRouteChangeReasonRouteConfigurationChange = 8\n};","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 在監聽事件的回調中處理對於各種路由改變的原因進行處理:常見的處理邏輯爲當AVAudioSessionRouteChangeReasonOldDeviceUnavailable發生時,檢測是否耳機被拔出,若耳機被拔出,則停止播放,再次使用外放播放時,系統的音量不會出現巨大的改變。耳機被拔出時,正在使用耳機的麥克風錄音的情況下應該停止錄音。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"當AVAudioSessionRouteChangeReasonNewDeviceAvailable發生時,檢測是否耳機被插入,若耳機被插入,則配置sdk是否進行回聲消除等模塊。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"配置好AVAudioSession後,我們就該使用AudioUnit進行音頻的採集播放","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"AudioUnit","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"IOS中提供了七種AuidoUnit來滿足四種不同場景的需求。","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/1b/1b5c39484d183c1bab371f8435fd6dd7.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文中使用audiounit來進行採集播放,只使用到了其中的I/O功能。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" Audio 
With AVAudioSession configured, we can use an audio unit to capture and play audio.

## AudioUnit

iOS provides seven audio units to cover four different categories of use cases:

![The seven audio units](https://static001.geekbang.org/infoq/1b/1b5c39484d183c1bab371f8435fd6dd7.png)

This article uses an audio unit for capture and playback only, i.e. only its I/O capability.

An audio unit is organized into scopes and elements. For I/O, the scopes and elements are used as follows:

![I/O unit scopes and elements](https://static001.geekbang.org/infoq/f3/f3a80c9687e17e7d90ad52c10d3971ed.png)

With an I/O unit the elements are fixed and map directly to hardware: element 1's input scope is the microphone, and element 0's output scope is the speaker. The yellow parts of the figure are what we control. Read the figure as follows: the system captures microphone audio into element 1's input scope; our app reads the captured data from element 1's output scope; after processing, our app writes to element 0's input scope; element 0's output scope takes that data and plays it through the speaker.

Using an audio unit involves the following steps.

1) Describe the audio unit to use, as an AudioComponentDescription:

```objectivec
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType         = kAudioUnitType_Output;       // I/O unit
ioUnitDescription.componentSubType      = kAudioUnitSubType_RemoteIO;  // talks to mic/speaker
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags        = 0;
ioUnitDescription.componentFlagsMask    = 0;
```

2) Obtain an audio unit instance, usually through the AudioComponentFindNext and AudioComponentInstanceNew APIs:

```objectivec
// Find the audio component matching the description.
AudioComponent foundIoUnitReference = AudioComponentFindNext(NULL, &ioUnitDescription);

// Instantiate it.
AudioUnit audioUnit;
AudioComponentInstanceNew(foundIoUnitReference, &audioUnit);
```

3) Configure the unit's properties with AudioUnitSetProperty:

```objectivec
// Bus numbers for the RemoteIO unit: element 1 is input (mic), element 0 is output (speaker).
static const AudioUnitElement kInputBus  = 1;
static const AudioUnitElement kOutputBus = 0;

// Enable the mic: set the input scope of element 1 to 1.
// kAudioOutputUnitProperty_EnableIO enables or disables I/O; by default output
// is enabled and input is disabled.
int inputEnable = 1;
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioOutputUnitProperty_EnableIO,
                                       kAudioUnitScope_Input,
                                       kInputBus,
                                       &inputEnable,
                                       sizeof(inputEnable));
CheckError(status, "setProperty EnableIO error");

// Set the capture format on the output scope of element 1 (the side our app reads from).
AudioStreamBasicDescription inputStreamDesc; // fill in the desired PCM format
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &inputStreamDesc,
                              sizeof(inputStreamDesc));
CheckError(status, "setProperty StreamFormat error");

// Enable the speaker.
int outputEnable = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &outputEnable,
                              sizeof(outputEnable));
CheckError(status, "setProperty EnableIO error");

// Set the playback format on the input scope of element 0 (the side our app writes to).
AudioStreamBasicDescription streamDesc; // fill in the desired PCM format
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &streamDesc,
                              sizeof(streamDesc));
CheckError(status, "SetProperty StreamFormat failure");
```
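The block above leaves the two AudioStreamBasicDescription variables unfilled. As a sketch, a 44.1 kHz, 16-bit, interleaved mono PCM format, which matches the int16 handling in the callbacks below, could be described like this (the concrete values are illustrative choices):

```objectivec
// Sketch: 16-bit signed integer, interleaved, mono PCM at 44.1 kHz.
AudioStreamBasicDescription streamDesc = {0};
streamDesc.mSampleRate       = 44100;
streamDesc.mFormatID         = kAudioFormatLinearPCM;
streamDesc.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
streamDesc.mChannelsPerFrame = 1;                                 // mono
streamDesc.mBitsPerChannel   = 16;
streamDesc.mBytesPerFrame    = 2 * streamDesc.mChannelsPerFrame;  // 16 bits = 2 bytes per sample
streamDesc.mFramesPerPacket  = 1;                                 // uncompressed PCM: 1 frame per packet
streamDesc.mBytesPerPacket   = streamDesc.mBytesPerFrame * streamDesc.mFramesPerPacket;
```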
failure\");","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4)註冊採集播放回調","attrs":{}}]},{"type":"codeblock","attrs":{"lang":"objectivec"},"content":[{"type":"text","text":"/設置採集回調 input bus下的 output scope\n AURenderCallbackStruct inputCallBackStruce;\n inputCallBackStruce.inputProc = inputCallBackFun;\n inputCallBackStruce.inputProcRefCon = (__bridge void * _Nullable)(self);\n \n status = AudioUnitSetProperty(audioUnit,\n kAudioOutputUnitProperty_SetInputCallback,\n kAudioUnitScope_Output,\n kInputBus,\n &inputCallBackStruce,\n sizeof(inputCallBackStruce));\n CheckError(status, \"setProperty InputCallback error\");\n\n\t// 回調的靜態函數\n\tstatic OSStatus inputCallBackFun( void * inRefCon,\n AudioUnitRenderActionFlags * ioActionFlags, //描述上下文信息\n const AudioTimeStamp * inTimeStamp, //採樣時間戳\n UInt32 inBusNumber, //採樣的總線數量\n UInt32 inNumberFrames, //多少幀的數據\n AudioBufferList * __nullable ioData)\n\t{ \n AudioRecord *recorder = (__bridge AudioRecord *)(inRefCon); //獲取上下文\n return [recorder RecordProcessImpl:ioActionFlags stamp:inTimeStamp bus:inBusNumber numFrames:inNumberFrames]; //處理得到的數據\n }\n\n\t(OSStatus)RecordProcessImpl: (AudioUnitRenderActionFlags *)ioActionFlags\n stamp: (const AudioTimeStamp *)inTimeStamp\n bus: (uint32_t) inBusNumber\n numFrames: (uint32_t) inNumberFrames\n\t{\n uint32_t recordSamples = inNumberFrames *m_channels; // 採集了多少數據 int16\n if (m_recordTmpData != NULL) {\n delete [] m_recordTmpData;\n m_recordTmpData = NULL;\n }\n m_recordTmpData = new int8_t[2 * recordSamples]; \n memset(m_recordTmpData, 0, 2 * recordSamples);\n\n AudioBufferList bufferList;\n bufferList.mNumberBuffers = 1;\n bufferList.mBuffers[0].mData = m_recordTmpData;\n bufferList.mBuffers[0].mDataByteSize = 2*recordSamples;\n AudioUnitRender(audioUnit,\n ioActionFlags,\n inTimeStamp,\n kInputBus,\n inNumberFrames,\n &bufferList); \n AudioBuffer buffer = bufferList.mBuffers[0]; // 回調得到的數據\n int recordBytes = buffer.mDataByteSize;\n\n\t\t[dataWriter writeBytes:(Byte *)buffer.mData len:recordBytes]; //數據處理\n return noErr;\n\t}\n \n\n\n//設置播放回調 outputbus下的input scope\n AURenderCallbackStruct outputCallBackStruct;\n outputCallBackStruct.inputProc = outputCallBackFun;\n outputCallBackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);\n status = AudioUnitSetProperty(audioUnit,\n kAudioUnitProperty_SetRenderCallback,\n kAudioUnitScope_Input,\n kOutputBus,\n &outputCallBackStruct,\n sizeof(outputCallBackStruct));\n CheckError(status, \"SetProperty EnableIO failure\");\n\n\t//回調函數 \n static OSStatus outputCallBackFun( void * inRefCon,\n AudioUnitRenderActionFlags * ioActionFlags,\n const AudioTimeStamp * inTimeStamp,\n UInt32 inBusNumber,\n UInt32 inNumberFrames,\n AudioBufferList * __nullable ioData)\n {\n memset(ioData->mBuffers[0].mData, 0, ioData->mBuffers[0].mDataByteSize);\n // memset(ioData->mBuffers[1].mData, 0, ioData->mBuffers[1].mDataByteSize);\n AudioPlay *player = (__bridge AudioPlay *)(inRefCon);\n return [player PlayProcessImpl:ioActionFlags stamp:inTimeStamp bus:inBusNumber numFrames:inNumberFrames ioData:ioData];\n }\n\n\t//獲取得到的數據\n - (OSStatus)PlayProcessImpl: (AudioUnitRenderActionFlags *)ioActionFlags\n stamp: (const AudioTimeStamp *)inTimeStamp\n bus: (uint32_t) inBusNumber\n numFrames: (uint32_t) inNumberFrames\n ioData:(AudioBufferList *)ioData\n {\n AudioBuffer buffer = ioData->mBuffers[0];\n int len = buffer.mDataByteSize; //需要的數據長度\n \n int readLen = [dataReader readData:len 
5) Start and stop the audio unit:

```objectivec
// The AVAudioSession must be configured and activated first.
NSError *error = nil;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                                       error:&error];
[[AVAudioSession sharedInstance] setActive:YES error:&error];

AudioOutputUnitStart(audioUnit); // start
AudioOutputUnitStop(audioUnit);  // stop

// Releasing the audio unit:
OSStatus result = AudioOutputUnitStop(audioUnit); // stop first
result = AudioUnitUninitialize(audioUnit);        // restore the initial state
AudioComponentInstanceDispose(audioUnit);         // dispose of the unit
```

Some issues to keep in mind when using AudioUnit:

1) The amount of audio data per callback is not fixed, so set the preferred I/O buffer duration to bound how much data each callback can deliver; this is done with setPreferredIOBufferDuration (see the sketch after this list). For example, at a 44.1 kHz sample rate, 16-bit samples, one channel, and a 0.01 s buffer duration, a callback carries at most 441 samples, i.e. 882 bytes; even if we request more, the system will deliver at most 882 bytes of audio data per callback.

2) Each capture/playback callback on iOS arrives roughly every 10 ms, but the interval is not guaranteed to be exactly 10 ms.

3) When playing two-channel audio, mind the kAudioFormatFlagIsNonInterleaved flag in mFormatFlags.

4) Because the per-callback data size is not exact while the app typically exchanges audio with the audio unit in 10 ms units, both the capture and playback paths need an audio buffer in front of the audio unit.
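As mentioned in note 1, the buffer duration is requested through the audio session. A small sketch of the setting and the size arithmetic:

```objectivec
// Sketch for note 1: ask for 10 ms I/O buffers and compute the resulting
// maximum callback size for 44.1 kHz, 16-bit, mono PCM.
NSError *error = nil;
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.01 error:&error];

double sampleRate     = 44100.0;
double duration       = 0.01;  // 10 ms
int    bytesPerSample = 2;     // 16-bit
int    channels       = 1;
int    maxBytes = (int)(sampleRate * duration) * bytesPerSample * channels; // 441 * 2 = 882
NSLog(@"max bytes per callback: %d", maxBytes);
```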
## AudioConverter

AudioConverter handles audio format conversion on iOS: conversion between PCM formats with different bit depths, sample rates, and channel counts, as well as conversion between PCM and various compressed formats.

The AudioConverter functions and callbacks are:

![AudioConverter functions and callbacks](https://static001.geekbang.org/infoq/06/06155d7d9630a2bf525ea9dc07d4306f.png)

AudioConverter is used in the following steps.

1) Determine the input and output formats and create the AudioConverter object:

```objectivec
OSStatus AudioConverterNew(const AudioStreamBasicDescription *inSourceFormat,
                           const AudioStreamBasicDescription *inDestinationFormat,
                           AudioConverterRef _Nullable *outAudioConverter);

OSStatus AudioConverterNewSpecific(const AudioStreamBasicDescription *inSourceFormat,
                                   const AudioStreamBasicDescription *inDestinationFormat,
                                   UInt32 inNumberClassDescriptions,
                                   const AudioClassDescription *inClassDescriptions,
                                   AudioConverterRef _Nullable *outAudioConverter);
```

2) Convert data through a pull-style callback; packetized and non-interleaved data are supported. Input data is supplied inside the AudioConverterComplexInputDataProc callback:

```objectivec
OSStatus AudioConverterFillComplexBuffer(AudioConverterRef inAudioConverter,
                                         AudioConverterComplexInputDataProc inInputDataProc,
                                         void *inInputDataProcUserData,
                                         UInt32 *ioOutputDataPacketSize,
                                         AudioBufferList *outOutputData,
                                         AudioStreamPacketDescription *outPacketDescription);

typedef OSStatus (*AudioConverterComplexInputDataProc)(AudioConverterRef inAudioConverter,
                                                       UInt32 *ioNumberDataPackets, // number of input packets
                                                       AudioBufferList *ioData,     // input data
                                                       AudioStreamPacketDescription * _Nullable *outDataPacketDescription, // descriptions of the supplied packets
                                                       void *inUserData);
```

3) Release the converter:

`OSStatus AudioConverterDispose(AudioConverterRef inAudioConverter);`
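As a concrete illustration of this pull model, the following sketch encodes interleaved 16-bit PCM to AAC. The `FillContext` struct, the `EncodePCMToAAC` wrapper, and the buffer sizes are illustrative assumptions; the `inFormat`/`outFormat` descriptions (PCM in, kAudioFormatMPEG4AAC out) are assumed to be filled in elsewhere:

```objectivec
#import <AudioToolbox/AudioToolbox.h>

// Illustrative state handed to the input callback.
typedef struct {
    const char *pcmData;   // interleaved 16-bit PCM remaining to be encoded
    UInt32      pcmBytes;  // bytes remaining
    UInt32      channels;
} FillContext;

static const OSStatus kNoMoreDataErr = -1; // custom sentinel, not a system constant

// Input callback: the converter pulls PCM through this whenever it needs input.
static OSStatus FeedPCM(AudioConverterRef inAudioConverter,
                        UInt32 *ioNumberDataPackets,
                        AudioBufferList *ioData,
                        AudioStreamPacketDescription **outDataPacketDescription,
                        void *inUserData)
{
    FillContext *ctx = (FillContext *)inUserData;
    UInt32 bytesPerPacket = 2 * ctx->channels; // for PCM, one frame per packet
    UInt32 available = ctx->pcmBytes / bytesPerPacket;
    if (available == 0) {
        *ioNumberDataPackets = 0;   // no input left
        return kNoMoreDataErr;      // any error status ends the fill call
    }
    if (available < *ioNumberDataPackets) {
        *ioNumberDataPackets = available;
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = ctx->channels;
    ioData->mBuffers[0].mData = (void *)ctx->pcmData;
    ioData->mBuffers[0].mDataByteSize = *ioNumberDataPackets * bytesPerPacket;

    ctx->pcmData  += ioData->mBuffers[0].mDataByteSize;
    ctx->pcmBytes -= ioData->mBuffers[0].mDataByteSize;
    return noErr;
}

// Drive the conversion, one AAC packet per FillComplexBuffer call.
void EncodePCMToAAC(const AudioStreamBasicDescription *inFormat,
                    const AudioStreamBasicDescription *outFormat,
                    const char *pcmBuffer, UInt32 pcmBufferBytes)
{
    AudioConverterRef converter = NULL;
    if (AudioConverterNew(inFormat, outFormat, &converter) != noErr) return;

    FillContext ctx = { pcmBuffer, pcmBufferBytes, 1 };
    char outBuf[4096]; // illustrative output buffer, large enough for one AAC packet
    AudioStreamPacketDescription packetDesc;

    for (;;) {
        UInt32 outPacketCount = 1; // ask for one AAC packet per call
        AudioBufferList outList;
        outList.mNumberBuffers = 1;
        outList.mBuffers[0].mNumberChannels = 1;
        outList.mBuffers[0].mData = outBuf;
        outList.mBuffers[0].mDataByteSize = sizeof(outBuf);

        OSStatus err = AudioConverterFillComplexBuffer(converter, FeedPCM, &ctx,
                                                       &outPacketCount, &outList,
                                                       &packetDesc);
        if (outPacketCount > 0) {
            // outBuf now holds an encoded AAC packet of
            // outList.mBuffers[0].mDataByteSize bytes; hand it to a muxer or the network.
        }
        if (err != noErr) break; // kNoMoreDataErr from FeedPCM ends the loop
    }
    AudioConverterDispose(converter);
}
```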
## AudioFileStream

AudioFileStream is used in streaming playback to read audio information such as sample rate, channel count, bit rate, and duration, and to parse audio frames. It is not limited to file playback; it works equally for online network audio streams. AudioFileStream only processes data and does not touch the data source, so reading the data is up to us: for online playback we feed it data from the network stream, and for local playback from the file stream.

The AudioFileStream functions and callbacks are:

![AudioFileStream functions and callbacks](https://static001.geekbang.org/infoq/95/95f44ddd173f2a1f2dffcb97809c9659.png)

AudioFileStream supports the following formats:

- AIFF
- AIFC
- WAVE
- CAF
- NeXT
- ADTS
- MPEG Audio Layer 3
- AAC

AudioFileStream is used in the following steps.

1) AudioFileStreamOpen registers the property-listener callback and the packets callback:

```objectivec
/*
1. Context object.
2. Property-listener callback, invoked whenever the parser finds a property value in the stream.
3. Packets callback, invoked whenever the parser has parsed audio frames.
4. File type hint. When the file information is incomplete, this hint can help
   AudioFileStream work around errors or missing data and still parse the file,
   so pass it whenever the file type is known; pass 0 if it cannot be determined.
5. Returns the AudioFileStreamID of this instance; keep it, as the subsequent
   calls take it as a parameter.
*/
OSStatus AudioFileStreamOpen(void *inClientData,
                             AudioFileStream_PropertyListenerProc inPropertyListenerProc,
                             AudioFileStream_PacketsProc inPacketsProc,
                             AudioFileTypeID inFileTypeHint,
                             AudioFileStreamID _Nullable *outAudioFileStream);
```

2) AudioFileStreamParseBytes parses the data we pass in and triggers the property and packets callbacks:

```objectivec
/*
1. The AudioFileStreamID.
2. Length of the data to parse.
3. The data.
4. Whether this parse is contiguous with the previous one: pass 0 if contiguous,
   otherwise kAudioFileStreamParseFlag_Discontinuity. Typical discontinuity cases:
   1) right after a seek, the data is not contiguous;
   2) it is recommended to pass kAudioFileStreamParseFlag_Discontinuity before
      parsing the first frame.
Returns an error code.
*/
extern OSStatus AudioFileStreamParseBytes(AudioFileStreamID inAudioFileStream,
                                          UInt32 inDataByteSize,
                                          const void *inData,
                                          UInt32 inFlags);
```
3) Parse the audio properties.

The property callback fires many times; we handle only the format information we need:

```objectivec
/*
1. The first parameter is the context object passed to Open.
2. inAudioFileStream is the AudioFileStreamID returned by Open, identifying this stream.
3. The third parameter is the ID of the property parsed in this callback (for example
   the data format or the offset of the audio data). The value or data structure for
   the PropertyID can be fetched with AudioFileStreamGetProperty.
4. ioFlags is a return parameter indicating whether the property should be cached;
   set kAudioFileStreamPropertyFlag_PropertyIsCached if so.
*/
typedef void (*AudioFileStream_PropertyListenerProc)(void *inClientData,
                                                     AudioFileStreamID inAudioFileStream,
                                                     AudioFileStreamPropertyID inPropertyID,
                                                     UInt32 *ioFlags);

// Example:
static void ASPropertyListenerProc(void *inClientData,
                                   AudioFileStreamID inAudioFileStream,
                                   AudioFileStreamPropertyID inPropertyID,
                                   UInt32 *ioFlags)
{
    // Called by the audio file stream when it finds property values.
    AudioStreamer *streamer = (AudioStreamer *)inClientData;
    [streamer handlePropertyChangeForFileStream:inAudioFileStream
                           fileStreamPropertyID:inPropertyID
                                        ioFlags:ioFlags];
}

- (void)handlePropertyChangeForFileStream:(AudioFileStreamID)inAudioFileStream
                     fileStreamPropertyID:(AudioFileStreamPropertyID)inPropertyID
                                  ioFlags:(UInt32 *)ioFlags
{
    @synchronized(self)
    {
        if (inPropertyID == kAudioFileStreamProperty_ReadyToProducePackets)
        {
            discontinuous = true; // ready to produce packets
        }
        else if (inPropertyID == kAudioFileStreamProperty_DataOffset)
        {
            SInt64 offset;
            UInt32 offsetSize = sizeof(offset);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataOffset, &offsetSize, &offset); // audio data offset
        }
        else if (inPropertyID == kAudioFileStreamProperty_AudioDataByteCount)
        {
            UInt32 byteCountSize = sizeof(UInt64);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_AudioDataByteCount, &byteCountSize, &audioDataByteCount);
            fileLength = dataOffset + audioDataByteCount; // total file length
        }
        else if (inPropertyID == kAudioFileStreamProperty_DataFormat)
        {
            if (asbd.mSampleRate == 0)
            {
                UInt32 asbdSize = sizeof(asbd);
                err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_DataFormat, &asbdSize, &asbd); // stream format
            }
        }
        else if (inPropertyID == kAudioFileStreamProperty_FormatList)
        {
            Boolean outWriteable;
            UInt32 formatListSize;
            err = AudioFileStreamGetPropertyInfo(inAudioFileStream, kAudioFileStreamProperty_FormatList, &formatListSize, &outWriteable); // property size

            AudioFormatListItem *formatList = malloc(formatListSize);
            err = AudioFileStreamGetProperty(inAudioFileStream, kAudioFileStreamProperty_FormatList, &formatListSize, formatList); // fetch the format list

            for (UInt32 i = 0; i < formatListSize / sizeof(AudioFormatListItem); ++i)
            {
                AudioStreamBasicDescription pasbd = formatList[i].mASBD;
                if (pasbd.mFormatID == kAudioFormatMPEG4AAC_HE ||
                    pasbd.mFormatID == kAudioFormatMPEG4AAC_HE_V2)
                {
                    asbd = pasbd;
                    break;
                }
            }
            free(formatList);
        }
    }
}
```
4) Parse the audio packets:

```objectivec
/*
1. Context object.
2. Size in bytes of the data in this callback.
3. Number of packets (frames) in this callback.
4. The parsed data.
5. An array of AudioStreamPacketDescription recording, for each packet, its
   starting byte offset and byte size.
*/
typedef void (*AudioFileStream_PacketsProc)(void *inClientData,
                                            UInt32 numberOfBytes,
                                            UInt32 numberOfPackets,
                                            const void *inInputData,
                                            AudioStreamPacketDescription *inPacketDescriptions);

// Example:
static void ASPacketsProc(void *inClientData,
                          UInt32 inNumberBytes,
                          UInt32 inNumberPackets,
                          const void *inInputData,
                          AudioStreamPacketDescription *inPacketDescriptions)
{
    // Called by the audio file stream when it finds packets of audio.
    AudioStreamer *streamer = (AudioStreamer *)inClientData;
    [streamer handleAudioPackets:inInputData
                     numberBytes:inNumberBytes
                   numberPackets:inNumberPackets
              packetDescriptions:inPacketDescriptions];
}

- (void)handleAudioPackets:(const void *)inInputData
               numberBytes:(UInt32)inNumberBytes
             numberPackets:(UInt32)inNumberPackets
        packetDescriptions:(AudioStreamPacketDescription *)inPacketDescriptions
{
    @synchronized(self)
    {
        // We have successfully read the first packets from the audio stream,
        // so clear the "discontinuous" flag.
        if (discontinuous) { // format parsing is done
            discontinuous = false;
        }
        if (!audioQueue) {
            [self createQueue]; // create the AudioQueue used for playback
        }
    }

    // If inPacketDescriptions is present we are handling VBR data;
    // for CBR data the second branch is used.
    if (inPacketDescriptions)
    {
        for (int i = 0; i < inNumberPackets; ++i)
        {
            SInt64 packetOffset = inPacketDescriptions[i].mStartOffset;
            SInt64 packetSize = inPacketDescriptions[i].mDataByteSize;
            // AudioQueue buffer management
            // ...
            AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
            memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)inInputData + packetOffset, packetSize);
        }
    }
    else
    {
        size_t offset = 0;
        while (inNumberBytes)
        {
            // AudioQueue buffer management (copySize comes from the elided logic)
            // ...
            AudioQueueBufferRef fillBuf = audioQueueBuffer[fillBufferIndex];
            memcpy((char *)fillBuf->mAudioData + bytesFilled, (const char *)inInputData + offset, copySize);
            inNumberBytes -= copySize;
            offset += copySize;
        }
    }
}
```
5) If necessary, call AudioFileStreamSeek to find the exact byte offset of a given frame.

6) Close the stream with AudioFileStreamClose.
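Putting steps 1-6 together, a minimal driving loop for parsing a local file might look like this sketch; `ASPropertyListenerProc` and `ASPacketsProc` are the callbacks from steps 3 and 4, while `path` and the 4096-byte chunk size are illustrative assumptions:

```objectivec
// Sketch: feed a local audio file through AudioFileStream in fixed-size chunks.
AudioFileStreamID streamID = NULL;
OSStatus err = AudioFileStreamOpen((__bridge void *)self,
                                   ASPropertyListenerProc, // step 3 callback
                                   ASPacketsProc,          // step 4 callback
                                   0,                      // no file type hint
                                   &streamID);

NSInputStream *input = [NSInputStream inputStreamWithFileAtPath:path];
[input open];

uint8_t chunk[4096];
BOOL first = YES;
while ([input hasBytesAvailable]) {
    NSInteger readLen = [input read:chunk maxLength:sizeof(chunk)];
    if (readLen <= 0) break;
    // The first parse after opening (or after a seek) is flagged as discontinuous.
    err = AudioFileStreamParseBytes(streamID, (UInt32)readLen, chunk,
                                    first ? kAudioFileStreamParseFlag_Discontinuity : 0);
    first = NO;
    if (err != noErr) break;
}
[input close];
AudioFileStreamClose(streamID);
```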
## Summary

This article introduced the iOS audio capture and playback APIs: AVAudioSession manages how the app uses audio; AudioUnit captures and plays PCM data; AudioConverter encodes and decodes audio; AudioFileStream parses audio frames for streaming playback, which is particularly useful for playing audio from a network stream. Taking local MP3 playback as an example: AudioFileStream first parses the MP3 into audio frames, the frames are sent to AudioConverter to be decoded into PCM, and the PCM is handed to AudioUnit for playback. Capture runs through the same pipeline in reverse.

For audio playback on iOS, the best references are the open-source iOS audio players available on GitHub, such as Douban's player: [https://github.com/douban/DOUAudioStreamer](https://github.com/douban/DOUAudioStreamer). Finally, if there are any mistakes in this article, I would be happy to discuss them.

## References

- [iOS audio playback](http://msching.github.io/blog/2014/07/09/audio-in-ios-3/)
- [https://github.com/mattgallagher/AudioStreamer](https://github.com/mattgallagher/AudioStreamer)
- [https://www.jianshu.com/p/25188072a11a](https://www.jianshu.com/p/25188072a11a)
- [WebRTC series: audio session management](https://blog.csdn.net/netease_im/article/details/113875029)
- [Audio Session Programming Guide](https://developer.apple.com/library/archive/documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html)
- [https://www.jianshu.com/p/fb0e5fb71b3c](https://www.jianshu.com/p/fb0e5fb71b3c)
- [A comprehensive solution to audio problems on iOS in online classrooms](http://mp.weixin.qq.com/s?__biz=MzI1MzYzMjE0MQ==&mid=2247488032&idx=1&sn=a8e8948fcd043cd0124e8bfe26aa0784&chksm=e9d0d9c2dea750d45cb31e321c2206cc5e2c1d6a6ce1ea888432d7529660254df18325bb0582)
- [https://stackoverflow.com/questions/16841831/specifying-number-of-frames-to-process-in-audiounit-render-callback-on-ios](https://stackoverflow.com/questions/16841831/specifying-number-of-frames-to-process-in-audiounit-render-callback-on-ios)