WebRTC Series -- iOS Video Capture

1. Capture

1.1 Capture control

WebRTC's camera initialization and parameter setup live in the RTCCameraVideoCapturer class, which drives AVCaptureSession from the system AVFoundation framework. Capturing video on iOS is not covered in depth here; the objc.io China (objc中國) article on camera capture is a good reference. Two details are worth noting: WebRTC captures only video here, attaching no audio to the session, and it sets the capture session's sessionPreset to AVCaptureSessionPresetInputPriority, which means the session does not drive the audio/video output settings itself; instead, the activeFormat of the attached capture device controls the session's output quality level.

The corresponding settings in WebRTC:

#if defined(WEBRTC_IOS)
  _captureSession.sessionPreset = AVCaptureSessionPresetInputPriority;
  _captureSession.usesApplicationAudioSession = NO;
#endif
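
With this input-priority preset, resolution and frame rate are chosen by configuring the device itself. A minimal sketch, assuming device (an AVCaptureDevice), format (picked from device.formats), and fps have already been chosen; RTCCameraVideoCapturer does the equivalent internally when startCaptureWithDevice:format:fps: is called:

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
  device.activeFormat = format;                             // drives the session's output quality
  device.activeVideoMinFrameDuration = CMTimeMake(1, fps);  // caps the capture frame rate
  [device unlockForConfiguration];
} else {
  RTCLogError(@"Failed to lock device for configuration: %@", error);
}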

During initialization WebRTC registers for the UIDeviceOrientationDidChangeNotification system notification and stores the reported device orientation in the _orientation variable. It also observes the application's state; when the app becomes active it runs:

if (self.isRunning && !self.captureSession.isRunning) {
  RTCLog(@"Restarting capture session on active.");
  [self.captureSession startRunning];
}
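
The orientation observer itself is plain NSNotificationCenter registration; a sketch of the pattern (the selector name here is illustrative):

[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(deviceOrientationDidChange:)
                                             name:UIDeviceOrientationDidChangeNotification
                                           object:nil];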

1.2 Capture output

With the parameters above configured and the system delegate installed, captured video data is delivered to:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection

In this callback WebRTC determines whether the frame data comes from the front or the back camera, works out the rotation the frame needs from the current device orientation, wraps the captured data in WebRTC's RTCCVPixelBuffer type, and then initializes an RTCVideoFrame from the buffer and the rotation:

RTCVideoFrame *videoFrame = [[RTCVideoFrame alloc] initWithBuffer:rtcPixelBuffer
                                                             rotation:_rotation
                                                          timeStampNs:timeStampNs];
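
The rotation comes from _orientation plus the camera position. A simplified sketch of that mapping (usingFrontCamera stands for a check of the capture device's position; the default branch is illustrative):

RTCVideoRotation rotation;
switch (_orientation) {
  case UIDeviceOrientationPortrait:
    rotation = RTCVideoRotation_90;
    break;
  case UIDeviceOrientationPortraitUpsideDown:
    rotation = RTCVideoRotation_270;
    break;
  case UIDeviceOrientationLandscapeLeft:
    rotation = usingFrontCamera ? RTCVideoRotation_180 : RTCVideoRotation_0;
    break;
  case UIDeviceOrientationLandscapeRight:
    rotation = usingFrontCamera ? RTCVideoRotation_0 : RTCVideoRotation_180;
    break;
  default:
    rotation = RTCVideoRotation_90;  // face up/down etc.: fall back to portrait
    break;
}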

1.3 Other main interface calls

The log excerpt below traces frames from capture through encoding to RTP send (the "hosten" tags are trace lines the author added to the WebRTC source):

[2019-09-24 16:04:02][010:981] [163087] (objc_video_track_source.mm:69): hosten ObjCVideoTrackSource::OnCapturedFrame() timestamp_us = 126774279136 translated_timestamp_us = 126774255396
[2019-09-24 16:04:02][010:982] [163087] (objc_video_track_source.mm:87): hosten ObjCVideoTrackSource::OnCapturedFrame()
[2019-09-24 16:04:02][010:982] [163087] (adapted_video_track_source.cc:51): hosten AdaptedVideoTrackSource::OnFrame()
[2019-09-24 16:04:02][010:982] [163087] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:02][010:982] [163087] (video_stream_encoder.cc:880): hosten VideoStreamEncoder::OnFrame  incoming_frame = 
[2019-09-24 16:04:02][010:982] [163087] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:02][010:982] [163087] (webrtc_video_engine.cc:2644): hosten WebRtcVideoReceiveStream::OnFrame()
[2019-09-24 16:04:02][010:984] [163087] (trendline_estimator.cc:121): Using Trendline filter for delay change estimation with window size 20
[2019-09-24 16:04:02][010:985] [163087] (send_statistics_proxy.cc:1016): hosten SendStatisticsProxy::OnIncomingFrame
[2019-09-24 16:04:02][010:985] [163087] (video_stream_encoder.cc:1094): hosten VideoStreamEncoder::MaybeEncodeVideoFrame()
[2019-09-24 16:04:02][010:985] [163087] (video_stream_encoder.cc:1221): hosten VideoStreamEncoder::EncodeVideoFrame()
[2019-09-24 16:04:02][010:992] [163087] (video_stream_encoder.cc:1347): hosten MaybeEncodeVideoFrame() ---------encoder_->Encode------------
[2019-09-24 16:04:02][010:992] [163087] (libvpx_vp8_encoder.cc:908): hosten LibvpxVp8Encoder::Encode
[2019-09-24 16:04:03][010:997] [168711] (webrtc_video_engine.cc:2644): hosten WebRtcVideoReceiveStream::OnFrame()
[2019-09-24 16:04:03][010:998] [168711] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][010:998] [168711] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][010:998] [168711] (webrtc_video_engine.cc:2644): hosten WebRtcVideoReceiveStream::OnFrame()
[2019-09-24 16:04:03][010:998] [168711] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][010:998] [168711] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][010:998] [168711] (objc_video_track_source.mm:69): hosten ObjCVideoTrackSource::OnCapturedFrame() timestamp_us = 126774345775 translated_timestamp_us = 126774320925
[2019-09-24 16:04:03][010:999] [168711] (objc_video_track_source.mm:87): hosten ObjCVideoTrackSource::OnCapturedFrame()
[2019-09-24 16:04:03][010:999] [168711] (adapted_video_track_source.cc:51): hosten AdaptedVideoTrackSource::OnFrame()
[2019-09-24 16:04:03][010:999] [168711] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][010:999] [168711] (video_stream_encoder.cc:880): hosten VideoStreamEncoder::OnFrame  incoming_frame = 
[2019-09-24 16:04:03][010:999] [168711] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][011:051] [18691] (webrtc_video_engine.cc:2644): hosten WebRtcVideoReceiveStream::OnFrame()
[2019-09-24 16:04:03][011:051] [18691] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][011:051] [18691] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][011:051] [18691] (webrtc_video_engine.cc:2644): hosten WebRtcVideoReceiveStream::OnFrame()
[2019-09-24 16:04:03][011:051] [18691] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][011:051] [18691] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][011:055] [18691] (objc_video_track_source.mm:69): hosten ObjCVideoTrackSource::OnCapturedFrame() timestamp_us = 126774412414 translated_timestamp_us = 126774386059
[2019-09-24 16:04:03][011:056] [18691] (objc_video_track_source.mm:87): hosten ObjCVideoTrackSource::OnCapturedFrame()
[2019-09-24 16:04:03][011:056] [18691] (adapted_video_track_source.cc:51): hosten AdaptedVideoTrackSource::OnFrame()
[2019-09-24 16:04:03][011:056] [18691] (video_broadcaster.cc:59): hosten VideoBroadcaster::OnFrame().
[2019-09-24 16:04:03][011:056] [18691] (video_stream_encoder.cc:880): hosten VideoStreamEncoder::OnFrame  incoming_frame = 
[2019-09-24 16:04:03][011:058] [18691] (video_broadcaster.cc:93): hosten VideoBroadcaster::OnFrame().sink_pair.sink->OnFrame(frame);
[2019-09-24 16:04:03][011:066] [153859] (connection.cc:913):Conn[48801e00:audio:Net[en0:192.168.1.x/24:Wifi:id=1]:vHXXkWpw:1:0:local:udp:192.168.1.x:59020->QiUnx/AY:1:41885695:relay:udp:39.97.72.x:51570|C--I|-|0|0|179897694439751166|-]: Sent STUN ping, id=454970447a496f4e34646473, use_candidate=0, nomination=0
[2019-09-24 16:04:03][011:069] [163087] (video_stream_encoder.cc:1422): hosten EncodedImageCallback::Result VideoStreamEncoder::OnEncodedImage()
[2019-09-24 16:04:03][011:069] [163087] (frame_encode_metadata_writer.cc:264): Frame with no encode started time recordings. Encoder may be reordering frames or not preserving RTP timestamps.
[2019-09-24 16:04:03][011:069] [163087] (frame_encode_metadata_writer.cc:268): Too many log messages. Further frames reordering warnings will be throttled.
[2019-09-24 16:04:03][011:069] [163087] (video_send_stream_impl.cc:590): hosten VideoSendStreamImpl::OnEncodedImage()
[2019-09-24 16:04:03][011:070] [163087] (video_send_stream_impl.cc:631): hosten VideoSendStreamImpl::OnEncodedImage()  rtp_video_sender_->OnEncodedImage
[2019-09-24 16:04:03][011:070] [163087] (rtp_video_sender.cc:393): hosten RtpVideoSender::OnEncodedImage()

Basic call flow (the original flow diagram is omitted; the chain below is reconstructed from the logs above):

captureOutput: -> ObjCVideoTrackSource::OnCapturedFrame() -> AdaptedVideoTrackSource::OnFrame() -> VideoBroadcaster::OnFrame() -> VideoStreamEncoder::OnFrame() -> MaybeEncodeVideoFrame() -> EncodeVideoFrame() -> encoder_->Encode() -> VideoStreamEncoder::OnEncodedImage() -> VideoSendStreamImpl::OnEncodedImage() -> RtpVideoSender::OnEncodedImage()

2. Encoding: VideoStreamEncoder

2.1 Construction

Video encoding ultimately starts in the VideoStreamEncoder class, which hosts the various encoding-state callbacks and the encoder handling. First, its constructor:

  VideoStreamEncoder(Clock* clock,
                     uint32_t number_of_cores,
                     VideoStreamEncoderObserver* encoder_stats_observer,
                     const VideoStreamEncoderSettings& settings,
                     std::unique_ptr<OveruseFrameDetector> overuse_detector,
                     TaskQueueFactory* task_queue_factory);

Here VideoStreamEncoderObserver receives the various encoding-state updates, and settings carries the information about which encoder to use. The class is instantiated through /Source/api/video/video_stream_encoder_create.cc, which contains a single function:

std::unique_ptr<VideoStreamEncoderInterface> CreateVideoStreamEncoder(
    Clock* clock,
    TaskQueueFactory* task_queue_factory,
    uint32_t number_of_cores,
    VideoStreamEncoderObserver* encoder_stats_observer,
    const VideoStreamEncoderSettings& settings) {
  return absl::make_unique<VideoStreamEncoder>(
      clock, number_of_cores, encoder_stats_observer, settings,
      absl::make_unique<OveruseFrameDetector>(encoder_stats_observer),
      task_queue_factory);
}

This function is called from the VideoSendStream constructor, VideoSendStream(), in /Source/video/video_send_stream.cc, which obtains the VideoStreamEncoder object there. The main constructor code:

 video_stream_encoder_ =
      CreateVideoStreamEncoder(clock, task_queue_factory, num_cpu_cores,
                               &stats_proxy_, config_.encoder_settings);
  // TODO(srte): Initialization should not be done posted on a task queue.
  // Note that the posted task must not outlive this scope since the closure
  // references local variables.
  worker_queue_->PostTask(ToQueuedTask(
      [this, clock, call_stats, transport, bitrate_allocator, send_delay_stats,
       event_log, &suspended_ssrcs, &encoder_config, &suspended_payload_states,
       &fec_controller]() {
        send_stream_.reset(new VideoSendStreamImpl(
            clock, &stats_proxy_, worker_queue_, call_stats, transport,
            bitrate_allocator, send_delay_stats, video_stream_encoder_.get(),
            event_log, &config_, encoder_config.max_bitrate_bps,
            encoder_config.bitrate_priority, suspended_ssrcs,
            suspended_payload_states, encoder_config.content_type,
            std::move(fec_controller), config_.media_transport));
      },
      ...);  // remainder of the PostTask call elided

Notice that WebRTC hops onto the worker queue here; the posted task constructs VideoSendStreamImpl, whose constructor initializes the rtp_video_sender_ seen in the flow above. The main constructor code:

VideoSendStreamImpl::VideoSendStreamImpl(...,
    SendStatisticsProxy* stats_proxy,...
    VideoStreamEncoderInterface* video_stream_encoder,
    RtcEventLog* event_log,
    const VideoSendStream::Config* config,
     ...,
    MediaTransportInterface* media_transport)
    :...,
     stats_proxy_(stats_proxy),
     ...,
      transport_(transport),
    ...,
      video_stream_encoder_(video_stream_encoder),
      encoder_feedback_(clock, config_->rtp.ssrcs, video_stream_encoder),
      ...,
      rtp_video_sender_(transport_->CreateRtpVideoSender(
          suspended_ssrcs,
          suspended_payload_states,
          config_->rtp,
          config_->rtcp_report_interval_ms,
          config_->send_transport,
          CreateObservers(call_stats,
                          &encoder_feedback_,
                          stats_proxy_,
                          send_delay_stats),
          event_log,
          std::move(fec_controller),
          CreateFrameEncryptionConfig(config_))),
      media_transport_(media_transport) {  // constructor body
  encoder_feedback_.SetRtpVideoSender(rtp_video_sender_);

  if (media_transport_) {
    // The configured ssrc is interpreted as a channel id, so there must be
    // exactly one.
    RTC_DCHECK_EQ(config_->rtp.ssrcs.size(), 1);
    media_transport_->SetKeyFrameRequestCallback(&encoder_feedback_);
  } else {
    RTC_DCHECK(!config_->rtp.ssrcs.empty());
  }
  // register the callback for encoded data
  video_stream_encoder_->SetSink(this, rotation_applied);

The VideoSendStream class itself is created in the CreateVideoSendStream() function in Source/call/call.cc; Call's CreateVideoSendStream() is invoked from RecreateWebRtcStream() of the WebRtcVideoSendStream class (Source/media/engine/webrtc_video_engine.cc:2354). The creation code in Call:

// This method can be used for Call tests with external fec controller factory.
webrtc::VideoSendStream* Call::CreateVideoSendStream(
    webrtc::VideoSendStream::Config config,
    VideoEncoderConfig encoder_config,
    std::unique_ptr<FecController> fec_controller) {
  TRACE_EVENT0("webrtc", "Call::CreateVideoSendStream");
    
  RTC_DCHECK_RUN_ON(&configuration_sequence_checker_);

  RTC_DCHECK(media_transport() == config.media_transport);
  RTC_LOG(LS_INFO) << "hosten Call::CreateVideoSendStream()";
  RegisterRateObserver();

  video_send_delay_stats_->AddSsrcs(config);
  for (size_t ssrc_index = 0; ssrc_index < config.rtp.ssrcs.size();
       ++ssrc_index) {
    event_log_->Log(absl::make_unique<RtcEventVideoSendStreamConfig>(
        CreateRtcLogStreamConfig(config, ssrc_index)));
  }

  // TODO(mflodman): Base the start bitrate on a current bandwidth estimate, if
  // the call has already started.
  // Copy ssrcs from |config| since |config| is moved.
  std::vector<uint32_t> ssrcs = config.rtp.ssrcs;

  VideoSendStream* send_stream = new VideoSendStream(
      clock_, num_cpu_cores_, module_process_thread_.get(), task_queue_factory_,
      call_stats_.get(), transport_send_ptr_, bitrate_allocator_.get(),
      video_send_delay_stats_.get(), event_log_, std::move(config),
      std::move(encoder_config), suspended_video_send_ssrcs_,
      suspended_video_payload_states_, std::move(fec_controller));

  {
    WriteLockScoped write_lock(*send_crit_);
    for (uint32_t ssrc : ssrcs) {
      RTC_DCHECK(video_send_ssrcs_.find(ssrc) == video_send_ssrcs_.end());
      video_send_ssrcs_[ssrc] = send_stream;
    }
    video_send_streams_.insert(send_stream);
  }
  UpdateAggregateNetworkState();

  return send_stream;
}

The call site in WebRtcVideoSendStream:

 stream_ = call_->CreateVideoSendStream(std::move(config),
                                         parameters_.encoder_config.Copy());

Call log: (screenshot omitted)

2.2 Main encoding flow

After the Objective-C layer starts producing captured frames, the data reaches the OnFrame() interface through the sink callbacks. This part walks through the data flow and the encoding path.
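
As background for the hand-off described next, the Objective-C to C++ hop looks roughly like this; a simplified sketch of the RTCVideoCapturerDelegate adapter inside objc_video_track_source.mm (member name abridged):

// The adapter receives each frame from the capturer on the Objective-C side
// and forwards it straight into the C++ ObjCVideoTrackSource.
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame {
  _objcVideoTrackSource->OnCapturedFrame(frame);
}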

Frames captured by the camera are wrapped as WebRTC video frames and handed off through the delegate. The ObjCVideoTrackSource class implements RTCCameraVideoCapturer's delegate method and passes each incoming frame to its OnCapturedFrame(frame); that calls the parent class AdaptedVideoTrackSource's OnFrame() method, which fans the frame out to every registered sink through broadcaster_'s OnFrame(). Let's look at VideoBroadcaster's OnFrame() method:

void VideoBroadcaster::OnFrame(const webrtc::VideoFrame& frame) {
  bool current_frame_was_discarded = false;
  for (auto& sink_pair : sink_pairs()) {  // loop: deliver the frame to every registered sink
    if (sink_pair.wants.rotation_applied &&
        frame.rotation() != webrtc::kVideoRotation_0) {
      // Calls to OnFrame are not synchronized with changes to the sink wants.
      // When rotation_applied is set to true, one or a few frames may get here
      // with rotation still pending. Protect sinks that don't expect any
      // pending rotation.
      RTC_LOG(LS_VERBOSE) << "Discarding frame with unexpected rotation.";
      sink_pair.sink->OnDiscardedFrame();
      current_frame_was_discarded = true;
      continue;
    }
    if (sink_pair.wants.black_frames) {
      webrtc::VideoFrame black_frame =
          webrtc::VideoFrame::Builder()
              .set_video_frame_buffer(
                  GetBlackFrameBuffer(frame.width(), frame.height()))
              .set_rotation(frame.rotation())
              .set_timestamp_us(frame.timestamp_us())
              .set_id(frame.id())
              .build();
      sink_pair.sink->OnFrame(black_frame);
    } else if (!previous_frame_sent_to_all_sinks_) {
      // Since last frame was not sent to some sinks, full update is needed.
      webrtc::VideoFrame copy = frame;
      copy.set_update_rect(
          webrtc::VideoFrame::UpdateRect{0, 0, frame.width(), frame.height()});
      sink_pair.sink->OnFrame(copy);
    } else {  // on iOS this branch is normally taken
      sink_pair.sink->OnFrame(frame);
    }
  }
  previous_frame_sent_to_all_sinks_ = !current_frame_was_discarded;
}
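
For a sink to show up in sink_pairs() it must first register through AddOrUpdateSink(). A minimal sketch of that contract, with a hypothetical CountingSink written only for illustration:

#include "media/base/video_broadcaster.h"  // rtc::VideoBroadcaster, rtc::VideoSinkWants

// Hypothetical sink that merely counts frames, to show the interface contract.
class CountingSink : public rtc::VideoSinkInterface<webrtc::VideoFrame> {
 public:
  void OnFrame(const webrtc::VideoFrame& frame) override { ++frame_count_; }
  void OnDiscardedFrame() override { ++discarded_count_; }

 private:
  int frame_count_ = 0;
  int discarded_count_ = 0;
};

void AttachSinkDemo() {
  rtc::VideoBroadcaster broadcaster;
  CountingSink sink;
  rtc::VideoSinkWants wants;
  wants.rotation_applied = false;  // this sink accepts frames with pending rotation
  broadcaster.AddOrUpdateSink(&sink, wants);
  // Frames pushed through broadcaster.OnFrame(frame) now reach sink.OnFrame().
}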

Then comes encoding. VideoStreamEncoder derives from the VideoStreamEncoderInterface interface, which in turn derives from rtc::VideoSinkInterface<VideoFrame>; that interface defines two methods:

  virtual void OnFrame(const VideoFrameT& frame) = 0;

  // Should be called by the source when it discards the frame due to rate
  // limiting.
  virtual void OnDiscardedFrame() {}

VideoStreamEncoder implements both. OnFrame() is the frame-delivery entry point; every captured frame passes through it:

void VideoStreamEncoder::OnFrame(const VideoFrame& video_frame)

The method first fixes up the frame's timestamps, then posts the frame onto the encoder task queue:

 encoder_queue_.PostTask(
      [this, incoming_frame, post_time_us, log_stats]() { /* ... */ });

Before reading the source, it helps to know how the encoder itself is initialized: the encoder_ member is declared at line 250 of the header as std::unique_ptr<VideoEncoder> encoder_ RTC_GUARDED_BY(&encoder_queue_) RTC_PT_GUARDED_BY(&encoder_queue_); it is created in the ReconfigureEncoder() function (line 726) from the settings_ passed in at construction: encoder_ = settings_.encoder_factory->CreateVideoEncoder(encoder_config_.video_format);
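
As a sketch of what that factory call amounts to: InternalEncoderFactory below is WebRTC's built-in software encoder factory, used here purely for illustration (on iOS the Objective-C encoder factory supplied by the app is wrapped into a native factory instead):

#include <memory>

#include "media/engine/internal_encoder_factory.h"

void CreateVp8EncoderDemo() {
  webrtc::InternalEncoderFactory factory;
  std::unique_ptr<webrtc::VideoEncoder> encoder =
      factory.CreateVideoEncoder(webrtc::SdpVideoFormat("VP8"));
  // VideoStreamEncoder obtains encoder_ the same way, through whatever
  // factory was installed in settings_.encoder_factory.
}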
Since encoding starts from OnFrame(), let's look at its implementation.

(Addendum, 2019-10-22, on how the sink gets wired up: the VideoStreamEncoder() constructor initializes the VideoSourceProxy class with its this pointer via source_proxy_(new VideoSourceProxy(this)). VideoStreamEncoder::SetSource() then calls source_proxy_->SetSource(source, ...), where source is the WebRtcVideoSendStream: RecreateWebRtcStream() at line 2388 of webrtc_video_engine.cc calls stream->SetSource(this, ...). VideoSourceProxy lives in video/video_stream_encoder.cc, and its SetSource() method calls source->AddOrUpdateSink(video_stream_encoder_, wants); that is exactly how frames flow back into OnFrame() through the VideoBroadcaster class.)
void VideoStreamEncoder::OnFrame(const VideoFrame& video_frame) {
  RTC_DCHECK_RUNS_SERIALIZED(&incoming_frame_race_checker_);
  VideoFrame incoming_frame = video_frame;
  // Local time in webrtc time base.
  int64_t current_time_us = clock_->TimeInMicroseconds();
  int64_t current_time_ms = current_time_us / rtc::kNumMicrosecsPerMillisec;
  // In some cases, e.g., when the frame from decoder is fed to encoder,
  // the timestamp may be set to the future. As the encoding pipeline assumes
  // capture time to be less than present time, we should reset the capture
  // timestamps here. Otherwise there may be issues with RTP send stream.
  if (incoming_frame.timestamp_us() > current_time_us)
    incoming_frame.set_timestamp_us(current_time_us);

  // Capture time may come from clock with an offset and drift from clock_.
  int64_t capture_ntp_time_ms;
  if (video_frame.ntp_time_ms() > 0) {
    capture_ntp_time_ms = video_frame.ntp_time_ms();
  } else if (video_frame.render_time_ms() != 0) {
    capture_ntp_time_ms = video_frame.render_time_ms() + delta_ntp_internal_ms_;
  } else {
    capture_ntp_time_ms = current_time_ms + delta_ntp_internal_ms_;
  }
  incoming_frame.set_ntp_time_ms(capture_ntp_time_ms);

  // Convert NTP time, in ms, to RTP timestamp.
  const int kMsToRtpTimestamp = 90;
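  // The RTP video clock runs at 90 kHz, so one millisecond equals 90 RTP ticks.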
  incoming_frame.set_timestamp(
      kMsToRtpTimestamp * static_cast<uint32_t>(incoming_frame.ntp_time_ms()));

  if (incoming_frame.ntp_time_ms() <= last_captured_timestamp_) {
    // We don't allow the same capture time for two frames, drop this one.
    RTC_LOG(LS_WARNING) << "Same/old NTP timestamp ("
                        << incoming_frame.ntp_time_ms()
                        << " <= " << last_captured_timestamp_
                        << ") for incoming frame. Dropping.";
    encoder_queue_.PostTask([this, incoming_frame]() {
      RTC_DCHECK_RUN_ON(&encoder_queue_);
      accumulated_update_rect_.Union(incoming_frame.update_rect());
    });
    return;
  }

  bool log_stats = false;
  if (current_time_ms - last_frame_log_ms_ > kFrameLogIntervalMs) {
    last_frame_log_ms_ = current_time_ms;
    log_stats = true;
  }

  last_captured_timestamp_ = incoming_frame.ntp_time_ms();

  int64_t post_time_us = rtc::TimeMicros();
  ++posted_frames_waiting_for_encode_;

  encoder_queue_.PostTask(
      [this, incoming_frame, post_time_us, log_stats]() {
        // ... (lambda body shown below)
      });
}

In the code above, WebRTC sets and computes the timestamps it needs, then posts a task so encoding continues on the encoder queue's thread.
Note: if a newer frame is already waiting in the queue, the current frame is not encoded, only recorded; otherwise the frame goes through MaybeEncodeVideoFrame() and ends up being handled in EncodeVideoFrame():

 [this, incoming_frame, post_time_us, log_stats]() {
        RTC_DCHECK_RUN_ON(&encoder_queue_);
        encoder_stats_observer_->OnIncomingFrame(incoming_frame.width(),
                                                 incoming_frame.height());
        ++captured_frame_count_;
        const int posted_frames_waiting_for_encode =
            posted_frames_waiting_for_encode_.fetch_sub(1);
        RTC_DCHECK_GT(posted_frames_waiting_for_encode, 0);
        if (posted_frames_waiting_for_encode == 1) {
          MaybeEncodeVideoFrame(incoming_frame, post_time_us);  // hand the frame off for encoding
        } else {
          // There is a newer frame in flight. Do not encode this frame
          ++dropped_frame_count_;
          encoder_stats_observer_->OnFrameDropped(
              VideoStreamEncoderObserver::DropReason::kEncoderQueue);
          accumulated_update_rect_.Union(incoming_frame.update_rect());
        }
        if (log_stats) {
          RTC_LOG(LS_INFO) << "Number of frames: captured "
                           << captured_frame_count_
                           << ", dropped (due to encoder blocked) "
                           << dropped_frame_count_ << ", interval_ms "
                           << kFrameLogIntervalMs;
          captured_frame_count_ = 0;
          dropped_frame_count_ = 0;
        }
      }

Here encoder_stats_observer_ is an instance of the VideoStreamEncoderObserver interface, passed in when VideoStreamEncoder is constructed; in the construction code earlier this is the &stats_proxy_ argument (a SendStatisticsProxy), which is why SendStatisticsProxy::OnIncomingFrame appears in the logs above.
