WebRTC Study Notes (1): Capturing Microphone Audio to a File, in Practice and Explained

Working with WebRTC lately has been genuinely troublesome.

There is little material about it online, getting past the firewall is hard, and even a successful download crawls (my connection is China Unicom: slow, slow, slow). What I wanted to understand is how WebRTC captures audio and how it writes that audio to a file.

So I had little choice but to read the WebRTC source, which takes a workable approach. By chance I noticed that VoEFile is the class responsible for reading and writing audio files.

I then looked at the corresponding header, voe_file.h, and found this example in it:

// This sub-API supports the following functionalities:
//
//  - File playback.
//  - File recording.
//  - File conversion.
//
// Usage example, omitting error checking:
//
//  using namespace webrtc;
//  VoiceEngine* voe = VoiceEngine::Create();
//  VoEBase* base = VoEBase::GetInterface(voe);
//  VoEFile* file  = VoEFile::GetInterface(voe);
//  base->Init();
//  int ch = base->CreateChannel();
//  ...
//  base->StartPlayout(ch);
//  file->StartPlayingFileAsMicrophone(ch, "data_file_16kHz.pcm", true);
//  ...
//  file->StopPlayingFileAsMicrophone(ch);
//  base->StopPlayout(ch);
//  ...
//  base->DeleteChannel(ch);
//  base->Terminate();
//  base->Release();
//  file->Release();
//  VoiceEngine::Delete(voe);
However, that example cannot be used as-is. After some digging I reworked it into the following:


#include "webrtc/base/ssladapter.h"
#include "webrtc/base/win32socketinit.h"
#include "webrtc/base/win32socketserver.h"

#include "webrtc\voice_engine\voe_file_impl.h"
#include "webrtc\voice_engine\include\voe_base.h"
#include "webrtc/modules/audio_device/include/audio_device.h"

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <assert.h>
#include "webrtc/modules/audio_device/include/audio_device.h"
#include "webrtc/common_audio/resampler/include/resampler.h"
#include "webrtc/modules/audio_processing/aec/include/echo_cancellation.h"
#include "webrtc/common_audio/vad/include/webrtc_vad.h"

#include "dbgtool.h"
#include "string_useful.h"

using namespace webrtc;
VoiceEngine* g_voe = NULL;
VoEBase* g_base = NULL;
VoEFile* g_file = NULL;
int g_ch = -1;

HANDLE g_hEvQuit = NULL;


void Begin_RecordMicrophone();
void End_RecordMicrophone();

/////////////////////////////////////////////////////////////////////////
// Record the sound captured from the microphone
void Begin_RecordMicrophone()
{
  int iRet = -1;
    
  g_voe = VoiceEngine::Create();
  g_base = VoEBase::GetInterface(g_voe);
  g_file = VoEFile::GetInterface(g_voe);
  g_base->Init();
  //g_ch = g_base->CreateChannel();

  g_hEvQuit = CreateEvent(NULL, FALSE, FALSE, NULL);

  // ...
  //base->StartPlayout(ch);

  // Play the input file audio_long16.pcm and record it into audio_long16_out.pcm
  //iRet = file->StartPlayingFileLocally(ch, "E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16.pcm", true);
  //iRet = file->StartRecordingPlayout(ch, "E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16_out.pcm");

  // Record the microphone input into a WAV file
  iRet = g_file->StartRecordingMicrophone("E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16_from_microphone.wav");

  while (TRUE) {
    DWORD dwRet = ::WaitForSingleObject(g_hEvQuit, 500);
    if (dwRet == WAIT_OBJECT_0) {
      End_RecordMicrophone();
      break;
    }
  }
}

void End_RecordMicrophone()
{
  g_file->StopRecordingMicrophone();
  g_base->Terminate();
  g_base->Release();
  g_file->Release();

  VoiceEngine::Delete(g_voe);
}

DWORD WINAPI ThreadFunc(LPVOID lpParameter) {

  Begin_RecordMicrophone();

  return 0;
}

int main()
{
  // Initialize SSL
  rtc::InitializeSSL();

  DWORD IDThread;
  HANDLE hThread;
  DWORD ExitCode;

  hThread = CreateThread(NULL,
    0,
    (LPTHREAD_START_ROUTINE)ThreadFunc,
    NULL,
    0,
    &IDThread);
  if (hThread == NULL) {
    return -1;
  }

  printf("Input 'Q' to stop recording!!!");
  char ch;
  while (ch = getch()) {
    if (ch == 'Q') {
      if (g_hEvQuit) {
        ::SetEvent(g_hEvQuit);
        if (hThread) {
          ::WaitForSingleObject(hThread, INFINITE);
          CloseHandle(hThread);
          hThread = NULL;
        }

        CloseHandle(g_hEvQuit);
        g_hEvQuit = NULL;
      }
      break;
    }
  }

  rtc::CleanupSSL();

  return 0;
}
The example above implements capturing microphone audio into a file.
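To sanity-check the result, the recorded WAV can be played back through the same VoEFile interface, mirroring the channel/playout calls in the voe_file.h example above. This is only a minimal sketch with error checking omitted; the kFileFormatWavFile argument and the reuse of the recorded path are my assumptions:

  // Play the freshly recorded WAV file back on a channel (sketch, no error checking)
  int ch = g_base->CreateChannel();
  g_base->StartPlayout(ch);
  g_file->StartPlayingFileLocally(ch,
      "E:\\webrtc_compile\\webrtc_windows\\src\\talk\\examples\\hh_sample\\audio_long16_from_microphone.wav",
      false,                  // do not loop
      kFileFormatWavFile);    // the file was recorded as WAV above
  // ... wait until playback is done ...
  g_file->StopPlayingFileLocally(ch);
  g_base->StopPlayout(ch);
  g_base->DeleteChannel(ch);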


Based on this example, I then traced how the audio buffers are captured and written to the file. My notes on that flow follow.



/////////////////////////////////////////////////////
// Part A -- Initializing the audio input side and the output side
1. When the user calls VoEFileImpl::StartRecordingMicrophone to record the microphone to a file, the call is internally handed off to StartRecordingMicrophone on the TransmitMixer held by the SharedData member. The implementation:


int VoEFileImpl::StartRecordingMicrophone(const char* fileNameUTF8,
                                          CodecInst* compression,
                                          int maxSizeBytes) {
 // ...
 if (_shared->transmit_mixer()->StartRecordingMicrophone(fileNameUTF8,
                                                          compression)) {
    WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
                 "StartRecordingMicrophone() failed to start recording");
    return -1;
  }
 // ...
}


2. Inside TransmitMixer::StartRecordingMicrophone a FileRecorder is created, and recording the microphone into the audio file is delegated to it:
int TransmitMixer::StartRecordingMicrophone(const char* fileName,
                                            const CodecInst* codecInst)
{
    // ...

    // Create a FileRecorder
    _fileRecorderPtr =
        FileRecorder::CreateFileRecorder(_fileRecorderId,
                                         (const FileFormats) format);
    // ...

    // Delegate the actual work to FileRecorder::StartRecordingAudioFile
    if (_fileRecorderPtr->StartRecordingAudioFile(
            fileName,
            (const CodecInst&) *codecInst,
            notificationTime) != 0)
    {
        // ...
    }

    // ...
}

3. FileRecorder is only an interface; its implementation class is FileRecorderImpl. In FileRecorderImpl::StartRecordingAudioFile the actual recording work is in turn delegated to MediaFile:
int32_t FileRecorderImpl::StartRecordingAudioFile(
    const char* fileName,
    const CodecInst& codecInst,
    uint32_t notificationTimeMs,
    ACMAMRPackingFormat amrFormat)
{
// ...
    _moduleFile->StartRecordingAudioFile(fileName, _fileFormat,
                                         codecInst,
                                         notificationTimeMs);
// ...
}


4. Likewise, MediaFile is only an interface; its implementation class is MediaFileImpl. Its StartRecordingAudioFile creates a FileWrapper representing the output file stream, then does the remaining setup in StartRecordingAudioStream, which stores the stream pointer in the OutStream* member _ptrOutStream. The code:
int32_t MediaFileImpl::StartRecordingAudioFile(
    const char* fileName,
    const FileFormats format,
    const CodecInst& codecInst,
    const uint32_t notificationTimeMs)
{
// ...
FileWrapper* outputStream = FileWrapper::Create();
// ...
    if(StartRecordingAudioStream(*outputStream, format, codecInst,
                                 notificationTimeMs) == -1)
    {
        outputStream->CloseFile();
        delete outputStream;
        return -1;
    }

// ...
}


int32_t MediaFileImpl::StartRecordingAudioStream(
    OutStream& stream,
    const FileFormats format,
    const CodecInst& codecInst,
    const uint32_t notificationTimeMs)
{
// ...
_ptrOutStream = &stream;
// ...
}


// At this point the audio output side (the file) is fully set up, but the audio input side is not.
We now return to VoEFileImpl::StartRecordingMicrophone, which also initializes the audio input side. That initialization depends on the platform: each platform's system SDK is wrapped by its own class. On Windows it is AudioDeviceWindowsCore; other platforms have corresponding classes.
Inside the actual StartRecording function a capture thread is created that continuously pulls audio buffers (PCM data) from the sound card.
int VoEFileImpl::StartRecordingMicrophone(const char* fileNameUTF8,
                                          CodecInst* compression,
                                          int maxSizeBytes) {
  // ...
  // Initialize the audio output side (file recording)
  if (_shared->transmit_mixer()->StartRecordingMicrophone(fileNameUTF8,
                                                          compression)) {
    WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
                 "StartRecordingMicrophone() failed to start recording");
    return -1;
  }

  // Initialize the audio input side and start recording
  if (!_shared->ext_recording()) {
    if (_shared->audio_device()->InitRecording() != 0) {
      WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
                   "StartRecordingMicrophone() failed to initialize recording");
      return -1;
    }
    if (_shared->audio_device()->StartRecording() != 0) {
      WEBRTC_TRACE(kTraceError, kTraceVoice, VoEId(_shared->instance_id(), -1),
                   "StartRecordingMicrophone() failed to start recording");
      return -1;
    }
  }
  
  return 0;
}
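For completeness: the capture thread mentioned above is spawned inside AudioDeviceWindowsCore::StartRecording(). A simplified sketch of that pattern follows; the helper name WSAPICaptureThread and the surrounding details are from memory and should be treated as assumptions rather than verbatim source:

DWORD WINAPI AudioDeviceWindowsCore::WSAPICaptureThread(LPVOID context)
{
    // Thread entry point: forward to the member function that does the real work
    return reinterpret_cast<AudioDeviceWindowsCore*>(context)->DoCaptureThread();
}

int32_t AudioDeviceWindowsCore::StartRecording()
{
    // ...
    // Spawn the dedicated capture thread; it loops inside DoCaptureThread()
    _hRecThread = CreateThread(NULL, 0, WSAPICaptureThread, this, 0, NULL);
    // ...
    return 0;
}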




///////////////////////////////////////////////////////////////////////////////
// Part B -- How the input-side buffers get written to the output side
The input side and the output side of recording are both covered above, but what connects them? How do the audio buffers captured by the new recording thread actually reach the output side? That is what follows.


1. As noted above, a recording thread is created when recording starts; its entry point is AudioDeviceWindowsCore::DoCaptureThread() (the Windows implementation based on the Windows Core Audio API).
Inside that function the IAudioCaptureClient interface is used to fetch the captured audio buffer, and DeliverRecordedData is then called to push it up to the higher layers.
DWORD AudioDeviceWindowsCore::DoCaptureThread()
{
// ...


// Fetch the captured audio buffer
//  Find out how much capture data is available
//
hr = _ptrCaptureClient->GetBuffer(&pData,           // packet which is ready to be read by used
                                  &framesAvailable, // #frames in the captured packet (can be zero)
                                  &flags,           // support flags (check)
                                  &recPos,          // device position of first audio frame in data packet
                                  &recTime);        // value of performance counter at the time of recording the first audio frame


// ...
// Push it up to the higher layers for processing
_ptrAudioBuffer->DeliverRecordedData();


// ...
}


2. AudioDeviceBuffer::DeliverRecordedData() checks the relevant parameters and then delegates to its internal AudioTransport member to pass the recorded data further up. In this class, _ptrCbAudioTransport is in fact bound to VoEBaseImpl. The code:
int32_t AudioDeviceBuffer::DeliverRecordedData()
{
// ...
// Pass the recorded data further up the stack
    res = _ptrCbAudioTransport->RecordedDataIsAvailable(&_recBuffer[0],
                                                        _recSamples,
                                                        _recBytesPerSample,
                                                        _recChannels,
                                                        _recSampleRate,
                                                        totalDelayMS,
                                                        _clockDrift,
                                                        _currentMicLevel,
                                                        _typingStatus,
                                                        newMicLevel);
// ...
}


3. _ptrCbAudioTransport points to VoEBaseImpl. VoEBaseImpl::RecordedDataIsAvailable simply hands the data to ProcessRecordedDataWithAPM in the same class:
int32_t VoEBaseImpl::RecordedDataIsAvailable(
    const void* audioSamples, uint32_t nSamples, uint8_t nBytesPerSample,
    uint8_t nChannels, uint32_t samplesPerSec, uint32_t totalDelayMS,
    int32_t clockDrift, uint32_t micLevel, bool keyPressed,
    uint32_t& newMicLevel) {
  newMicLevel = static_cast<uint32_t>(ProcessRecordedDataWithAPM(
      nullptr, 0, audioSamples, samplesPerSec, nChannels, nSamples,
      totalDelayMS, clockDrift, micLevel, keyPressed));
  return 0;
}


4. VoEBaseImpl::ProcessRecordedDataWithAPM passes the data on to the transmit_mixer mentioned earlier, where the audio buffer will eventually be written to the file:
int VoEBaseImpl::ProcessRecordedDataWithAPM(
    const int voe_channels[], int number_of_voe_channels,
    const void* audio_data, uint32_t sample_rate, uint8_t number_of_channels,
    uint32_t number_of_frames, uint32_t audio_delay_milliseconds,
    int32_t clock_drift, uint32_t volume, bool key_pressed)
{
// ...


// Hand the data to the transmit mixer, which (among other things) writes it to the file
// Perform channel-independent operations
  // (APM, mix with file, record to file, mute, etc.)
  shared_->transmit_mixer()->PrepareDemux(
      audio_data, number_of_frames, number_of_channels, sample_rate,
      static_cast<uint16_t>(audio_delay_milliseconds), clock_drift,
      voe_mic_level, key_pressed);


// ...
}


5. shared_->transmit_mixer()->PrepareDemux is implemented by TransmitMixer::PrepareDemux, which in turn calls RecordAudioToFile to write the audio buffer to the file:
int32_t
TransmitMixer::PrepareDemux(const void* audioSamples,
                            uint32_t nSamples,
                            uint8_t nChannels,
                            uint32_t samplesPerSec,
                            uint16_t totalDelayMS,
                            int32_t clockDrift,
                            uint16_t currentMicLevel,
                            bool keyPressed)
{
// ...
// Write the audio buffer to the file
if (file_recording)
{
    RecordAudioToFile(_audioFrame.sample_rate_hz_);
}


// ...
}


6. Inside TransmitMixer::RecordAudioToFile the write is synchronized and then delegated to the internal member _fileRecorderPtr (of type FileRecorder). Notice that this _fileRecorderPtr is exactly the FileRecorder created in Part A, step 2.
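A rough sketch of that TransmitMixer step (the lock type and error handling here are my assumptions, not verbatim source):

int32_t TransmitMixer::RecordAudioToFile(uint32_t mixingFrequency)
{
    // Synchronize with Start/StopRecordingMicrophone on the capture path
    CriticalSectionScoped cs(&_critSect);
    if (_fileRecorderPtr == NULL)
        return -1;
    // Hand the current audio frame to the FileRecorder created in Part A, step 2
    if (_fileRecorderPtr->RecordAudioToFile(_audioFrame) != 0)
        return -1;
    return 0;
}

The FileRecorder implementation, FileRecorderImpl::RecordAudioToFile, then copies the frame into _audioBuffer and calls WriteEncodedAudioData: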
int32_t FileRecorderImpl::RecordAudioToFile(
    const AudioFrame& incomingAudioFrame,
    const TickTime* playoutTS)
{
// ...
if (WriteEncodedAudioData(_audioBuffer, encodedLenInBytes) == -1)
{
    return -1;
}


// ...
}


7. FileRecorderImpl::WriteEncodedAudioData is simple: it does nothing itself and just delegates the work to MediaFile* _moduleFile:
int32_t FileRecorderImpl::WriteEncodedAudioData(const int8_t* audioBuffer,
                                                size_t bufferLength)
{
    return _moduleFile->IncomingAudioData(audioBuffer, bufferLength);
}


8. The implementation class of MediaFile is MediaFileImpl. In MediaFileImpl::IncomingAudioData the data is finally written to _ptrOutStream. Note that this _ptrOutStream is exactly the _ptrOutStream set up in Part A, step 4. Abridged code:
int32_t MediaFileImpl::IncomingAudioData(
    const int8_t*  buffer,
    const size_t bufferLengthInBytes)
{
// ...
bytesWritten = _ptrFileUtilityObj->WritePCMData(
                        *_ptrOutStream,
                        buffer,
                        bufferLengthInBytes);


// ...
}




// At this point the full flow of how audio is captured and saved to a file is connected end to end. A lot of audio processing happens along the way that I have not covered here, because diving into it now would get in the way of the overall picture; I will digest it gradually as I keep studying.
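For reference, here is the full call chain traced above, gathered in one place:

// Setup (Part A):
//   VoEFileImpl::StartRecordingMicrophone
//     -> TransmitMixer::StartRecordingMicrophone        (creates a FileRecorder)
//     -> FileRecorderImpl::StartRecordingAudioFile
//     -> MediaFileImpl::StartRecordingAudioFile / StartRecordingAudioStream  (sets _ptrOutStream)
//     -> AudioDeviceModule InitRecording() / StartRecording()                (spawns the capture thread)
//
// Per-buffer capture path (Part B):
//   AudioDeviceWindowsCore::DoCaptureThread
//     -> AudioDeviceBuffer::DeliverRecordedData
//     -> VoEBaseImpl::RecordedDataIsAvailable
//     -> VoEBaseImpl::ProcessRecordedDataWithAPM
//     -> TransmitMixer::PrepareDemux -> TransmitMixer::RecordAudioToFile
//     -> FileRecorderImpl::RecordAudioToFile -> FileRecorderImpl::WriteEncodedAudioData
//     -> MediaFileImpl::IncomingAudioData -> _ptrFileUtilityObj->WritePCMData -> _ptrOutStream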