WebRTC Audio-Video Synchronization Analysis

Audio-video synchronization is an essential part of both video-on-demand and real-time video development.

Contents

1. The principle of audio-video synchronization

2. VOD and live video players

3. Real-time video

4. WebRTC audio-video synchronization source code analysis

5. Summary


  • 1. The principle of audio-video synchronization

Generally speaking, audio-video synchronization means synchronizing video to audio. When a video frame is about to be rendered, its timestamp is compared with the audio timestamp to decide whether to render it immediately or to delay it. For example, suppose we have an audio sequence with timestamps A(0, 20, 40, 60, 80, 100, 120, ...) and a video sequence V(0, 40, 80, 120, ...). The synchronization steps are as follows:

1) Take an audio frame A(0) and play it. Take a video frame V(0); its timestamp equals the audio timestamp, so the video frame is rendered immediately.

2) Take the next audio frame A(20) and play it. Take the next video frame V(40); its timestamp is larger than the audio timestamp, i.e. the video is too early and has to wait.

3) Take the next audio frame A(40) and play it. The pending video frame is still V(40); its timestamp now equals the audio timestamp (in a real scenario they are rarely exactly equal; if the absolute difference is within one frame interval, they can be treated as the same timestamp), so the video frame is rendered immediately.
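
The decision above can be sketched roughly as follows (a minimal illustration with made-up names, not WebRTC code):

#include <cstdint>
#include <cstdlib>

// audio_ts:          timestamp (ms) of the audio frame currently playing.
// video_ts:          timestamp (ms) of the next pending video frame.
// frame_interval_ms: one video frame interval, e.g. 40 ms at 25 fps.
// Returns true if the video frame should be rendered now, false if it
// should keep waiting for the audio clock to catch up.
bool ShouldRenderVideoNow(int64_t audio_ts, int64_t video_ts,
                          int64_t frame_interval_ms) {
  int64_t diff = video_ts - audio_ts;
  if (std::llabs(diff) <= frame_interval_ms)
    return true;    // Within one frame interval: treat as the same timestamp.
  return diff < 0;  // Video is already late: render right away; otherwise wait.
}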

For video players and for real-time video alike, the synchronization principle is exactly as described above: there is no getting around aligning timestamps; only the implementations differ in their details.

  • 2. VOD and live video players

Whether you are new to video players or have years of audio/video development experience, ffplay.c in the ffmpeg source tree is a good reference for understanding the audio-video synchronization principle. This article will not go into it further.

  • 3. Real-time video

WeChat video calls and video conferencing, which we use in everyday life, fall into the category of real-time video. The end-to-end latency from capture to the remote viewer is generally at most around 400 ms. To transmit faster, UDP is usually used. However, UDP is unreliable: packets are easily lost or arrive out of order, so retransmission and reordering logic is added on top of it, which is where the jitter buffer comes in. The synchronization principle for real-time video is still the one described in Section 1; the difference is that it is implemented by controlling the audio and video jitter buffers.

WebRTC is one of the better frameworks for real-time video, and the rest of this article analyzes the audio-video synchronization principle against the WebRTC source code.

  • 4. WebRTC audio-video synchronization source code analysis

In the WebRTC source tree, audio-video synchronization is implemented in video/stream_synchronization.cc together with video/rtp_streams_synchronizer.cc. The entry point is RtpStreamsSynchronizer::Process():

void RtpStreamsSynchronizer::Process() {
  RTC_DCHECK_RUN_ON(&process_thread_checker_);
  last_sync_time_ = rtc::TimeNanos();

  rtc::CritScope lock(&crit_);
  if (!syncable_audio_) {
    return;
  }
  RTC_DCHECK(sync_.get());

  absl::optional<Syncable::Info> audio_info = syncable_audio_->GetInfo();
  if (!audio_info || !UpdateMeasurements(&audio_measurement_, *audio_info)) {
    return;
  }

  int64_t last_video_receive_ms = video_measurement_.latest_receive_time_ms;
  absl::optional<Syncable::Info> video_info = syncable_video_->GetInfo();
  if (!video_info || !UpdateMeasurements(&video_measurement_, *video_info)) {
    return;
  }

  if (last_video_receive_ms == video_measurement_.latest_receive_time_ms) {
    // No new video packet has been received since last update.
    return;
  }

  int relative_delay_ms;
  // Calculate how much later or earlier the audio stream is compared to video.
  if (!sync_->ComputeRelativeDelay(audio_measurement_, video_measurement_,
                                   &relative_delay_ms)) {
    return;
  }

  TRACE_COUNTER1("webrtc", "SyncCurrentVideoDelay",
                 video_info->current_delay_ms);
  TRACE_COUNTER1("webrtc", "SyncCurrentAudioDelay",
                 audio_info->current_delay_ms);
  TRACE_COUNTER1("webrtc", "SyncRelativeDelay", relative_delay_ms);
  int target_audio_delay_ms = 0;
  int target_video_delay_ms = video_info->current_delay_ms;
  // Calculate the necessary extra audio delay and desired total video
  // delay to get the streams in sync.
  if (!sync_->ComputeDelays(relative_delay_ms, audio_info->current_delay_ms,
                            &target_audio_delay_ms, &target_video_delay_ms)) {
    return;
  }

  syncable_audio_->SetMinimumPlayoutDelay(target_audio_delay_ms);
  syncable_video_->SetMinimumPlayoutDelay(target_video_delay_ms);
}

To get a better understanding of how WebRTC performs synchronization, let's start with the last two lines of the Process() method.

syncable_audio_->SetMinimumPlayoutDelay(target_audio_delay_ms);
syncable_video_->SetMinimumPlayoutDelay(target_video_delay_ms);

SetMinimumPlayoutDelay passes a minimum playout delay (target_xxx_delay_ms) down the pipeline, all the way to the audio or video jitter buffer. It tells the jitter buffer: every frame rendered from now on must be delayed by at least target_xxx_delay_ms before it is played out, until a new value is passed in next time.

Note two things. First, in the latest WebRTC code the video jitter buffer has a newer implementation, modules/video_coding/frame_buffer2.cc. Second, target_xxx_delay_ms is only a lower bound; the actual delay may be slightly larger than target_xxx_delay_ms, because the jitter buffer also computes a current_delay, which is the sum of the jitter delay, the render delay and the required decode time. The total delay applied by the jitter buffer is therefore: int actual_delay = std::max(current_delay_ms_, min_playout_delay_ms_); The internals of the jitter buffer are not expanded on further in this article.
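
The relationship between these two delays can be sketched as follows (simplified, with made-up names; the actual frame_buffer2 logic is more involved):

#include <algorithm>

// current_delay: what the jitter buffer needs anyway to absorb network jitter,
// decode the frame and render it. The sync module can only push the playout
// delay upward via min_playout_delay; it never makes the buffer faster than
// its own estimate.
int EffectivePlayoutDelayMs(int jitter_delay_ms, int decode_time_ms,
                            int render_delay_ms, int min_playout_delay_ms) {
  int current_delay_ms = jitter_delay_ms + decode_time_ms + render_delay_ms;
  return std::max(current_delay_ms, min_playout_delay_ms);
}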

Coming back to the question: given SetMinimumPlayoutDelay, how is audio-video synchronization actually controlled? The idea is:

1) If audio is playing ahead of video, increase target_audio_delay_ms or decrease target_video_delay_ms;

2) If audio is playing behind video, decrease target_audio_delay_ms or increase target_video_delay_ms;

3) If audio and video are in sync, make no adjustment.

As for how to decide which stream is ahead and how to adjust target_audio_delay_ms and target_video_delay_ms, let's keep reading the code.

absl::optional<Syncable::Info> audio_info = syncable_audio_->GetInfo();
if (!audio_info || !UpdateMeasurements(&audio_measurement_, *audio_info)) {
  return;
}

int64_t last_video_receive_ms = video_measurement_.latest_receive_time_ms;
absl::optional<Syncable::Info> video_info = syncable_video_->GetInfo();
if (!video_info || !UpdateMeasurements(&video_measurement_, *video_info)) {
  return;
}

if (last_video_receive_ms == video_measurement_.latest_receive_time_ms) {
  // No new video packet has been received since last update.
  return;
}

UpdateMeasurements records the RTP timestamp of the most recently received packet (latest_timestamp) and its arrival time (latest_receive_time_ms). The audio and video measurements are kept in audio_measurement_ and video_measurement_ respectively.
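
Roughly, each measurement is just a pair of "RTP timestamp of the newest packet" and "local arrival time" (the real struct in stream_synchronization.h also carries an RTP-to-NTP estimator used later to map RTP timestamps back to capture time). A paraphrased sketch, not the exact declaration:

#include <cstdint>

struct StreamMeasurement {
  uint32_t latest_timestamp = 0;       // RTP timestamp of the newest packet.
  int64_t latest_receive_time_ms = 0;  // Local wall-clock arrival time.
};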

Next:

int relative_delay_ms;
// Calculate how much later or earlier the audio stream is compared to video.
if (!sync_->ComputeRelativeDelay(audio_measurement_, video_measurement_,
                                 &relative_delay_ms)) {
  return;
}

ComputeRelativeDelay computes relative_delay_ms: how many milliseconds earlier the audio stream arrives over the network compared with the video stream.

// Positive diff means that video_measurement is behind audio_measurement.
// relative_delay_ms = video transmission latency - audio transmission latency,
// i.e. how much earlier the audio arrives relative to the video.
*relative_delay_ms =
    video_measurement.latest_receive_time_ms -
    audio_measurement.latest_receive_time_ms -
    (video_last_capture_time_ms - audio_last_capture_time_ms);
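
To make the sign convention concrete, here is a made-up example: an audio packet captured at sender time 1000 ms arrives locally at 1100 ms, and a video packet captured at the same 1000 ms arrives at 1150 ms. Then relative_delay_ms = (1150 - 1100) - (1000 - 1000) = 50 ms, i.e. positive: the audio arrived 50 ms earlier, and the video is behind.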

This step only computes how much earlier audio packets arrive than video packets over the network (despite the "delay" in the name, it is really an early-arrival time). Next, target_audio_delay_ms and target_video_delay_ms are computed.

int target_audio_delay_ms = 0;
int target_video_delay_ms = video_info->current_delay_ms;  // Initialized to the video's current delay;
                                                            // becomes current_video_delay_ms in ComputeDelays.
// Calculate the necessary extra audio delay and desired total video
// delay to get the streams in sync.
if (!sync_->ComputeDelays(relative_delay_ms, audio_info->current_delay_ms,
                          &target_audio_delay_ms, &target_video_delay_ms)) {
  return;
}

ComputeDelays combines relative_delay_ms with the current delays of the audio and video streams to compute target_audio_delay_ms and target_video_delay_ms.

bool StreamSynchronization::ComputeDelays(int relative_delay_ms,
                                          int current_audio_delay_ms,
                                          int* total_audio_delay_target_ms,
                                          int* total_video_delay_target_ms) {
  assert(total_audio_delay_target_ms && total_video_delay_target_ms);

  int current_video_delay_ms = *total_video_delay_target_ms;
  RTC_LOG(LS_VERBOSE) << "Audio delay: " << current_audio_delay_ms
                      << " current diff: " << relative_delay_ms
                      << " for stream " << audio_stream_id_;
  // Calculate the difference between the lowest possible video delay and
  // the current audio delay.
  /*
   * How much earlier does audio play out compared with video?
   *   audio earliness = relative_delay_ms - current_audio_delay_ms
   *     (audio arrives relative_delay_ms earlier, then is held back by its own delay)
   *   video earliness = -current_video_delay_ms
   *   current_diff_ms = audio earliness - video earliness
   * > 0 means audio plays ahead of video, < 0 means audio plays behind video.
   */
  int current_diff_ms =
      current_video_delay_ms - current_audio_delay_ms + relative_delay_ms;

  // Smooth current_diff_ms over time.
  avg_diff_ms_ =
      ((kFilterLength - 1) * avg_diff_ms_ + current_diff_ms) / kFilterLength;
  if (abs(avg_diff_ms_) < kMinDeltaMs) {
    // Don't adjust if the diff is within our margin.
    return false;
  }

  // Make sure we don't move too fast.
  int diff_ms = avg_diff_ms_ / 2;
  diff_ms = std::min(diff_ms, kMaxChangeMs);
  diff_ms = std::max(diff_ms, -kMaxChangeMs);

  // Reset the average after a move to prevent overshooting reaction.
  avg_diff_ms_ = 0;

  if (diff_ms > 0) {
    // The minimum video delay is longer than the current audio delay.
    // We need to decrease extra video delay, or add extra audio delay.
    // Video delay is larger than audio delay: reduce the extra video delay, or add extra audio delay.
    if (channel_delay_.extra_video_delay_ms > base_target_delay_ms_) {
      // We have extra delay added to ViE. Reduce this delay before adding
      // extra delay to VoE.
      channel_delay_.extra_video_delay_ms -= diff_ms;
      channel_delay_.extra_audio_delay_ms = base_target_delay_ms_;
    } else {  // channel_delay_.extra_video_delay_ms > 0
      // We have no extra video delay to remove, increase the audio delay.
      channel_delay_.extra_audio_delay_ms += diff_ms;
      channel_delay_.extra_video_delay_ms = base_target_delay_ms_;
    }
  } else {  // if (diff_ms > 0)
    // The video delay is lower than the current audio delay.
    // We need to decrease extra audio delay, or add extra video delay.
    // Video delay is smaller than audio delay: reduce the extra audio delay, or add extra video delay.
    if (channel_delay_.extra_audio_delay_ms > base_target_delay_ms_) {
      // We have extra delay in VoiceEngine.
      // Start with decreasing the voice delay.
      // Note: diff_ms is negative; add the negative difference.
      channel_delay_.extra_audio_delay_ms += diff_ms;
      channel_delay_.extra_video_delay_ms = base_target_delay_ms_;
    } else {  // channel_delay_.extra_audio_delay_ms > base_target_delay_ms_
      // We have no extra delay in VoiceEngine, increase the video delay.
      // Note: diff_ms is negative; subtract the negative difference.
      channel_delay_.extra_video_delay_ms -= diff_ms;  // X - (-Y) = X + Y.
      channel_delay_.extra_audio_delay_ms = base_target_delay_ms_;
    }
  }

  // Make sure that video is never below our target.
  channel_delay_.extra_video_delay_ms =
      std::max(channel_delay_.extra_video_delay_ms, base_target_delay_ms_);

  int new_video_delay_ms;
  if (channel_delay_.extra_video_delay_ms > base_target_delay_ms_) {
    new_video_delay_ms = channel_delay_.extra_video_delay_ms;
  } else {
    // No change to the extra video delay. We are changing audio and we only
    // allow to change one at the time.
    new_video_delay_ms = channel_delay_.last_video_delay_ms;
  }

  // Make sure that we don't go below the extra video delay.
  new_video_delay_ms =
      std::max(new_video_delay_ms, channel_delay_.extra_video_delay_ms);

  // Verify we don't go above the maximum allowed video delay.
  new_video_delay_ms =
      std::min(new_video_delay_ms, base_target_delay_ms_ + kMaxDeltaDelayMs);

  int new_audio_delay_ms;
  if (channel_delay_.extra_audio_delay_ms > base_target_delay_ms_) {
    new_audio_delay_ms = channel_delay_.extra_audio_delay_ms;
  } else {
    // No change to the audio delay. We are changing video and we only
    // allow to change one at the time.
    new_audio_delay_ms = channel_delay_.last_audio_delay_ms;
  }

  // Make sure that we don't go below the extra audio delay.
  new_audio_delay_ms =
      std::max(new_audio_delay_ms, channel_delay_.extra_audio_delay_ms);

  // Verify we don't go above the maximum allowed audio delay.
  new_audio_delay_ms =
      std::min(new_audio_delay_ms, base_target_delay_ms_ + kMaxDeltaDelayMs);

  // Remember our last audio and video delays.
  channel_delay_.last_video_delay_ms = new_video_delay_ms;
  channel_delay_.last_audio_delay_ms = new_audio_delay_ms;

  RTC_LOG(LS_VERBOSE) << "Sync video delay " << new_video_delay_ms
                      << " for video stream " << video_stream_id_
                      << " and audio delay "
                      << channel_delay_.extra_audio_delay_ms
                      << " for audio stream " << audio_stream_id_;

  // Return values.
  *total_video_delay_target_ms = new_video_delay_ms;
  *total_audio_delay_target_ms = new_audio_delay_ms;
  return true;
}

The code above combines relative_delay_ms (how much earlier audio arrives than video over the network) with the delays of the audio and video jitter buffers to compute current_diff_ms, the relative playout difference between audio and video, which tells us which stream is ahead. When current_diff_ms > 0, audio plays ahead of video; in other words, the video delay is larger than the audio delay, so we can either reduce the video delay or increase the audio delay. When current_diff_ms < 0, video plays ahead of audio; in other words, the video delay is smaller than the audio delay, so we can either reduce the audio delay or increase the video delay.
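
A made-up example: suppose relative_delay_ms = 50 ms (audio arrived 50 ms earlier than video), current_audio_delay_ms = 80 ms and current_video_delay_ms = 100 ms. Then current_diff_ms = 100 - 80 + 50 = 70 ms > 0, i.e. audio is playing ahead; after smoothing and halving, the (positive) diff_ms is used either to add extra audio delay or to remove extra video delay.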

A few of the variables deserve explanation:

base_target_delay_ms_: the baseline delay; both audio and video must be delayed by at least this much. It can be changed through an interface the class provides.

extra_video_delay_ms: the extra video delay. The name is not very intuitive. It starts out equal to base_target_delay_ms_, but may be increased or decreased during synchronization; after the correction logic it is exposed as the video delay target computed in this round.

extra_audio_delay_ms: the same as extra_video_delay_ms, but for audio.
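
These values are kept in the channel_delay_ state used throughout ComputeDelays; a paraphrased sketch of that state (not the exact WebRTC declaration):

struct ChannelDelays {
  int extra_video_delay_ms;  // This round's candidate video delay target.
  int extra_audio_delay_ms;  // This round's candidate audio delay target.
  int last_video_delay_ms;   // Video target actually returned last round.
  int last_audio_delay_ms;   // Audio target actually returned last round.
};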

Let's walk through a simple case to understand how these delays are computed:

1. Initial values: extra_video_delay_ms and extra_audio_delay_ms both equal base_target_delay_ms_.

2. On the first call to ComputeDelays, suppose diff_ms > 0, i.e. the video delay is larger than the audio delay. Since extra_video_delay_ms currently equals base_target_delay_ms_, there is nothing to remove on the video side, so the audio side is increased: extra_audio_delay_ms += diff_ms. extra_audio_delay_ms is output as the new audio delay target, and extra_video_delay_ms is output unchanged as the new video delay target.

3. On the second call to ComputeDelays, suppose diff_ms > 0 again. Since extra_video_delay_ms still equals base_target_delay_ms_, the audio side is increased once more, so extra_audio_delay_ms accumulates another diff_ms. You may wonder why, after the first adjustment, the video delay is still larger than the audio delay. There are two likely reasons: 1) when computing diff_ms the value is halved so the adjustment is not too aggressive, so one pass does not fully catch up; 2) when the computed audio delay target is handed to the jitter buffer it only updates min_playout_delay_ms_; if that value is smaller than the jitter buffer's current_delay_ms_, the buffer keeps using current_delay_ms_. Of course, after a few rounds extra_audio_delay_ms approaches and eventually exceeds current_delay_ms_, at which point it takes effect.

4. On the third call to ComputeDelays, suppose diff_ms < 0, i.e. the video delay is now smaller than the audio delay. Because of the previous two adjustments, extra_audio_delay_ms is already above base_target_delay_ms_, so the algorithm prefers to reduce extra_audio_delay_ms rather than increase extra_video_delay_ms.
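
Putting made-up numbers on this walkthrough (assuming base_target_delay_ms_ = 0 and that diff_ms comes out as +20, +10 and -8 on the three passes after smoothing, halving and clamping): pass 1 raises extra_audio_delay_ms from 0 to 20; pass 2 raises it to 30; pass 3, where the audio side is now above the base, lowers it back to 22, while extra_video_delay_ms stays at the base value of 0 throughout.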

To summarize: the algorithm keeps extra_audio_delay_ms and extra_video_delay_ms from ever dropping below base_target_delay_ms_; they can only grow above it. But when making a sync adjustment, if some extra_xxx_delay_ms is already above base_target_delay_ms_, the algorithm prefers to reduce that value first, down to the base_target_delay_ms_ floor.

  • 5. Summary

This concludes the analysis of WebRTC's audio-video synchronization. Without interfering with current_delay_ms_, the minimum delay the jitter buffer needs to absorb network jitter, WebRTC cleverly achieves audio-video sync by controlling min_playout_delay_ms_ in the jitter buffers. The underlying principle of audio-video sync is always the same, but by analyzing WebRTC's implementation we can see how real-time video differs from a VOD player.
