WebRTC Source Code Notes 1 — A Module Walkthrough of the WebRTC Video Encoding and Packetization Pipeline

 

Contents

 

WebRTC Source Code Notes 1 — A Module Walkthrough of the WebRTC Video Encoding and Packetization Pipeline

1. RtpTransceiver

2. Channel-Related Modules

2.1 VideoChannel

2.2 BaseChannel

2.3 WebRtcVideoChannel

3. The Call Module and Streams

4. RTP/RTCP



This chapter walks the WebRTC path from the transceiver down to the transport, giving a macro-level view of video capture, encoding, packetization and sending, and of how these stages map onto modules, as a reference for development and for quickly locating problems.

Section 1 starts from the transceiver. Inside PeerConnection, the transceiver controls the sending and receiving of data; internally it ties a ChannelInterface together with an RtpSenderInternal and an RtpReceiverInternal. The channel-related modules maintain the send/receive business flow. Taking video sending as an example, the Stream-related modules handle video encoding and the RTP/RTCP functionality, and the packetized data is finally sent out by the transport module inside BaseChannel. The figure below lists the main modules and steps from capturing a video frame through encoding and packetization.

Section 2 sorts out the Channel-related modules. Channels are the main modules implementing WebRTC's send/receive business logic. There are several kinds of Channel in WebRTC today; using the video packetization and send flow, we briefly lay out what each Channel does and how they relate to one another.

Section 3 introduces the Call and Stream modules. The Call creates the various Streams; these Streams live inside the Channels and encapsulate encoding/decoding, RTP packetization and RTCP processing.

Section 4 sorts out the relationships between the RTP/RTCP modules on the video send path.

The Graffle diagram file can be downloaded at https://download.csdn.net/download/lidec/12517879

 

1. RtpTransceiver

Below is the comment on RtpTransceiver in the source:

// Implementation of the public RtpTransceiverInterface.
//
// The RtpTransceiverInterface is only intended to be used with a PeerConnection
// that enables Unified Plan SDP. Thus, the methods that only need to implement
// public API features and are not used internally can assume exactly one sender
// and receiver.
//
// Since the RtpTransceiver is used internally by PeerConnection for tracking
// RtpSenders, RtpReceivers, and BaseChannels, and PeerConnection needs to be
// backwards compatible with Plan B SDP, this implementation is more flexible
// than that required by the WebRTC specification.
//
// With Plan B SDP, an RtpTransceiver can have any number of senders and
// receivers which map to a=ssrc lines in the m= section.
// With Unified Plan SDP, an RtpTransceiver will have exactly one sender and one
// receiver which are encapsulated by the m= section.
//
// This class manages the RtpSenders, RtpReceivers, and BaseChannel associated
// with this m= section. Since the transceiver, senders, and receivers are
// reference counted and can be referenced from JavaScript (in Chromium), these
// objects must be ready to live for an arbitrary amount of time. The
// BaseChannel is not reference counted and is owned by the ChannelManager, so
// the PeerConnection must take care of creating/deleting the BaseChannel and
// setting the channel reference in the transceiver to null when it has been
// deleted.
//
// The RtpTransceiver is specialized to either audio or video according to the
// MediaType specified in the constructor. Audio RtpTransceivers will have
// AudioRtpSenders, AudioRtpReceivers, and a VoiceChannel. Video RtpTransceivers
// will have VideoRtpSenders, VideoRtpReceivers, and a VideoChannel.

RtpTransceiver implements RtpTransceiverInterface and provides the behavior that Unified Plan SDP specifies for a PeerConnection. PeerConnection uses it mainly to track RtpSenders, RtpReceivers and BaseChannels, and the design stays backwards compatible with Plan B SDP. With Unified Plan SDP, each m= section carries exactly one sender and one receiver; with Plan B SDP, a section can carry multiple senders and receivers, distinguished by a=ssrc lines. The BaseChannel's lifetime is managed by the ChannelManager; the RtpTransceiver only keeps a pointer to it. Audio and video send/receive are handled by AudioRtpSender, AudioRtpReceiver and VoiceChannel on one side, and VideoRtpSender, VideoRtpReceiver and VideoChannel on the other.

// Creating an RtpTransceiver instance
audio_transceiver_ = RtpTransceiverProxyWithInternal<RtpTransceiver>::Create(
    signaling_thread_,
    new RtpTransceiver(audio_sender_, audio_receiver_, channel_manager_));

// Setting the channel and the SSRC
audio_transceiver_->internal()->SetChannel(voice_channel_);
audio_transceiver_->internal()->sender_internal()->SetSsrc(streams[0].first_ssrc());

// See "pc/rtp_transceiver_unittest.cc" and PeerConnection for detailed usage.

2. Channel-Related Modules

The RtpTransceiver above maps onto the send/receive parts of the SDP; the concrete business logic lives in the Channel-related modules. Compared with the old ViEChannel, today's Channel modules contain fewer steps: they mainly maintain the codec and RTP/RTCP logic and drive the Transport module for sending, while the codec and RTP/RTCP processing itself is encapsulated in the various Streams created under the Call module. WebRTC currently has several kinds of Channel; below we briefly lay out how they relate, then walk the video send path through the main responsibilities of each layer.

 

 

2.1 VideoChannel

VideoChannel is the outermost channel layer (its audio counterpart is VoiceChannel); the BaseChannel held by the RtpTransceiver module can be set to either a VideoChannel or a VoiceChannel. Its main external entry points are SetLocalContent_w and SetRemoteContent_w: given the cricket::VideoContentDescription object produced by parsing the SDP, you can initialize the VideoChannel. Another important method is SetRtpTransport, which selects the Transport module that actually sends the data.

2.2 BaseChannel

BaseChannel is the parent class of VideoChannel and ties together two important modules. RtpTransportInternal points to the module actually responsible for sending and receiving data; the packetized RTP/RTCP traffic is ultimately sent out through it. The MediaChannel pointer ultimately points to a WebRtcVideoChannel, which encapsulates encoding, RTP packetization and RTCP processing.

2.3 WebRtcVideoChannel

WebRtcVideoChannel is the lowest of the channel layers. It maintains WebRtcVideoSendStream and WebRtcVideoReceiveStream objects, which wrap the video source and encoder passed down from above; the VideoSendStream inside ultimately encapsulates the encoder module and the RTP/RTCP modules.
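The layering described in 2.1–2.3 can be sketched with a small self-contained mock. The class names mirror WebRTC's, but the interfaces here are invented purely for illustration, not the real API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Lowest layer: puts serialized packets on the wire (cf. RtpTransportInternal).
struct MockRtpTransport {
  std::vector<std::string> sent;
  void SendRtpPacket(const std::string& packet) { sent.push_back(packet); }
};

// Middle layer: media logic, encoding + RTP packetization (cf. WebRtcVideoChannel).
struct MockMediaChannel {
  MockRtpTransport* transport = nullptr;
  void SendVideoFrame(const std::string& frame) {
    // In the real code, VideoSendStream encodes and packetizes here.
    if (transport) transport->SendRtpPacket("rtp(" + frame + ")");
  }
};

// Outer layer: wires the media channel to a transport (cf. BaseChannel/VideoChannel).
struct MockVideoChannel {
  MockMediaChannel media_channel;
  void SetRtpTransport(MockRtpTransport* t) { media_channel.transport = t; }
};

// Drives one frame through the mock stack and reports how many packets left.
inline size_t SimulateSendPath() {
  MockRtpTransport transport;
  MockVideoChannel channel;
  channel.SetRtpTransport(&transport);
  channel.media_channel.SendVideoFrame("frame0");
  return transport.sent.size();
}
```

The point of the sketch is the direction of ownership: the channel layer only wires things together, while the media layer produces packets and the transport layer consumes them.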

3. The Call Module and Streams

The comment in call/call.h reads:

// A Call instance can contain several send and/or receive streams. All streams
// are assumed to have the same remote endpoint and will share bitrate estimates
// etc.

That is, a Call instance can contain several send and receive streams, all assumed to communicate with the same remote endpoint and sharing one bandwidth estimate.

The Call-related modules live under the call/ directory in the source tree. When initializing WebRTC you create one Call object, which mainly provides modules such as VideoSendStream and VideoReceiveStream. A Call is created from a webrtc::Call::Config, which configures bitrates and can also supply factories for FEC, NetEq and the network-state predictor, so packet send/receive control is closely tied to the Call modules. Below is a simple example of initializing a Call.

webrtc::Call::Config call_config(event_log);

// Default bitrates, overridable through the field trial string.
FieldTrialParameter<DataRate> min_bandwidth("min", DataRate::kbps(30));
FieldTrialParameter<DataRate> start_bandwidth("start", DataRate::kbps(300));
FieldTrialParameter<DataRate> max_bandwidth("max", DataRate::kbps(2000));
ParseFieldTrial({&min_bandwidth, &start_bandwidth, &max_bandwidth},
                trials_->Lookup("WebRTC-PcFactoryDefaultBitrates"));

call_config.bitrate_config.min_bitrate_bps = rtc::saturated_cast<int>(min_bandwidth->bps());
call_config.bitrate_config.start_bitrate_bps = rtc::saturated_cast<int>(start_bandwidth->bps());
call_config.bitrate_config.max_bitrate_bps = rtc::saturated_cast<int>(max_bandwidth->bps());

call_config.fec_controller_factory = nullptr;
call_config.task_queue_factory = task_queue_factory_.get();
call_config.network_state_predictor_factory = nullptr;
call_config.neteq_factory = nullptr;
call_config.trials = trials_.get();

std::unique_ptr<Call> call(call_factory_->CreateCall(call_config));

VideoSendStream objects are created by the Call object; the actual implementation is internal::VideoSendStream.

 

VideoSendStreamImpl lives in video/video_send_stream_impl.cc. Its comment reads:

// VideoSendStreamImpl implements internal::VideoSendStream.
// It is created and destroyed on |worker_queue|. The intent is to decrease the
// need for locking and to ensure methods are called in sequence.
// Public methods except |DeliverRtcp| must be called on |worker_queue|.
// DeliverRtcp is called on the libjingle worker thread or a network thread.
// An encoder may deliver frames through the EncodedImageCallback on an
// arbitrary thread.

VideoSendStreamImpl is created and destroyed on |worker_queue| to reduce locking and keep calls in sequence; DeliverRtcp, by contrast, is called on the libjingle worker thread or a network thread. The encoder may deliver encoded frames through the EncodedImageCallback on an arbitrary thread; from there the data is handed to the RtpVideoSenderInterface implementation for RTP packetization and related processing.

VideoStreamEncoder encapsulates the encoding work: the Source feeds in raw frames, the Sink emits encoded data, and there are additional interfaces for setting the bitrate, requesting key frames, and so on.
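The source-in / sink-out shape can be sketched as follows. The callback loosely mirrors the role of EncodedImageCallback, but every type and method name here is a simplified stand-in, not the real WebRTC interface:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Simplified stand-ins for VideoFrame and EncodedImage.
struct RawFrame { std::string data; };
struct EncodedImage { std::string payload; };

class MockVideoStreamEncoder {
 public:
  // cf. EncodedImageCallback::OnEncodedImage — may fire on any thread.
  using EncodedSink = std::function<void(const EncodedImage&)>;

  void SetSink(EncodedSink sink) { sink_ = std::move(sink); }
  void SetTargetBitrateBps(int bps) { target_bitrate_bps_ = bps; }

  // The source pushes raw frames in; the encoder hands encoded output to the sink.
  void OnFrame(const RawFrame& frame) {
    if (sink_) sink_({"enc(" + frame.data + ")"});
  }

 private:
  EncodedSink sink_;
  int target_bitrate_bps_ = 300000;
};

// Feeds two frames through the mock encoder and collects the sink output.
inline std::vector<EncodedImage> EncodeTwoFrames() {
  MockVideoStreamEncoder encoder;
  std::vector<EncodedImage> out;
  encoder.SetSink([&out](const EncodedImage& img) { out.push_back(img); });
  encoder.OnFrame({"f0"});
  encoder.OnFrame({"f1"});
  return out;
}
```

In the real pipeline the sink side is where VideoSendStreamImpl hands encoded data on to the RTP layer.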

4. RTP/RTCP

Once encoded video data reaches the RtpVideoSenderInterface, the next stop is the RTP/RTCP modules. There are many classes here with similar names, so they take careful untangling.

4.1 RtpStreamSender

Located in call/rtp_video_sender.h / call/rtp_video_sender.cc:

// RtpVideoSender routes outgoing data to the correct sending RTP module, based
// on the simulcast layer in RTPVideoHeader.

RtpVideoSender routes each outgoing RTP packet to the right sending module based on the RTP header. It holds an array of RtpStreamSenders, one per SSRC in the current RTP configuration; each RtpStreamSender is a concrete per-stream sender whose members are initialized inside RtpVideoSender. Its main components are RtpRtcp and RTPSenderVideo. The function below is worth studying closely, as it shows exactly how the members of each RtpStreamSender are initialized.

std::vector<RtpStreamSender> CreateRtpStreamSenders(
    Clock* clock,
    const RtpConfig& rtp_config,
    const RtpSenderObservers& observers,
    int rtcp_report_interval_ms,
    Transport* send_transport,
    RtcpBandwidthObserver* bandwidth_callback,
    RtpTransportControllerSendInterface* transport,
    FlexfecSender* flexfec_sender,
    RtcEventLog* event_log,
    RateLimiter* retransmission_rate_limiter,
    OverheadObserver* overhead_observer,
    FrameEncryptorInterface* frame_encryptor,
    const CryptoOptions& crypto_options) {
  RTC_DCHECK_GT(rtp_config.ssrcs.size(), 0);

  RtpRtcp::Configuration configuration;
  configuration.clock = clock;
  configuration.audio = false;
  configuration.receiver_only = false;
  configuration.outgoing_transport = send_transport;
  configuration.intra_frame_callback = observers.intra_frame_callback;
  configuration.rtcp_loss_notification_observer =
      observers.rtcp_loss_notification_observer;
  configuration.bandwidth_callback = bandwidth_callback;
  configuration.network_state_estimate_observer =
      transport->network_state_estimate_observer();
  configuration.transport_feedback_callback =
      transport->transport_feedback_observer();
  configuration.rtt_stats = observers.rtcp_rtt_stats;
  configuration.rtcp_packet_type_counter_observer =
      observers.rtcp_type_observer;
  configuration.paced_sender = transport->packet_sender();
  configuration.send_bitrate_observer = observers.bitrate_observer;
  configuration.send_side_delay_observer = observers.send_delay_observer;
  configuration.send_packet_observer = observers.send_packet_observer;
  configuration.event_log = event_log;
  configuration.retransmission_rate_limiter = retransmission_rate_limiter;
  configuration.overhead_observer = overhead_observer;
  configuration.rtp_stats_callback = observers.rtp_stats;
  configuration.frame_encryptor = frame_encryptor;
  configuration.require_frame_encryption =
      crypto_options.sframe.require_frame_encryption;
  configuration.extmap_allow_mixed = rtp_config.extmap_allow_mixed;
  configuration.rtcp_report_interval_ms = rtcp_report_interval_ms;

  std::vector<RtpStreamSender> rtp_streams;
  const std::vector<uint32_t>& flexfec_protected_ssrcs =
      rtp_config.flexfec.protected_media_ssrcs;
  RTC_DCHECK(rtp_config.rtx.ssrcs.empty() ||
             rtp_config.rtx.ssrcs.size() == rtp_config.ssrcs.size());
  for (size_t i = 0; i < rtp_config.ssrcs.size(); ++i) {
    configuration.local_media_ssrc = rtp_config.ssrcs[i];
    bool enable_flexfec = flexfec_sender != nullptr &&
                          std::find(flexfec_protected_ssrcs.begin(),
                                    flexfec_protected_ssrcs.end(),
                                    configuration.local_media_ssrc) !=
                              flexfec_protected_ssrcs.end();
    configuration.flexfec_sender = enable_flexfec ? flexfec_sender : nullptr;
    auto playout_delay_oracle = std::make_unique<PlayoutDelayOracle>();

    configuration.ack_observer = playout_delay_oracle.get();
    if (rtp_config.rtx.ssrcs.size() > i) {
      configuration.rtx_send_ssrc = rtp_config.rtx.ssrcs[i];
    }

    auto rtp_rtcp = RtpRtcp::Create(configuration);
    rtp_rtcp->SetSendingStatus(false);
    rtp_rtcp->SetSendingMediaStatus(false);
    rtp_rtcp->SetRTCPStatus(RtcpMode::kCompound);
    // Set NACK.
    rtp_rtcp->SetStorePacketsStatus(true, kMinSendSidePacketHistorySize);

    FieldTrialBasedConfig field_trial_config;
    RTPSenderVideo::Config video_config;
    video_config.clock = configuration.clock;
    video_config.rtp_sender = rtp_rtcp->RtpSender();
    video_config.flexfec_sender = configuration.flexfec_sender;
    video_config.playout_delay_oracle = playout_delay_oracle.get();
    video_config.frame_encryptor = frame_encryptor;
    video_config.require_frame_encryption =
        crypto_options.sframe.require_frame_encryption;
    video_config.need_rtp_packet_infos = rtp_config.lntf.enabled;
    video_config.enable_retransmit_all_layers = false;
    video_config.field_trials = &field_trial_config;
    const bool should_disable_red_and_ulpfec =
        ShouldDisableRedAndUlpfec(enable_flexfec, rtp_config);
    if (rtp_config.ulpfec.red_payload_type != -1 &&
        !should_disable_red_and_ulpfec) {
      video_config.red_payload_type = rtp_config.ulpfec.red_payload_type;
    }
    if (rtp_config.ulpfec.ulpfec_payload_type != -1 &&
        !should_disable_red_and_ulpfec) {
      video_config.ulpfec_payload_type = rtp_config.ulpfec.ulpfec_payload_type;
    }
    auto sender_video = std::make_unique<RTPSenderVideo>(video_config);
    rtp_streams.emplace_back(std::move(playout_delay_oracle),
                             std::move(rtp_rtcp), std::move(sender_video));
  }
  return rtp_streams;
}
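The per-SSRC fan-out that this function sets up can be sketched with a simplified model. In the real code the stream is selected by the simulcast index carried in RTPVideoHeader; here the classes and methods are invented stand-ins:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Stand-in for one RtpStreamSender (an RtpRtcp + RTPSenderVideo pair).
struct MockStreamSender {
  uint32_t ssrc;
  int packets_sent = 0;
  void SendVideo(const std::string& /*encoded*/) { ++packets_sent; }
};

// Stand-in for RtpVideoSender: one sender per configured SSRC,
// routed by simulcast layer index.
class MockRtpVideoSender {
 public:
  explicit MockRtpVideoSender(const std::vector<uint32_t>& ssrcs) {
    for (uint32_t ssrc : ssrcs) streams_.push_back({ssrc});
  }
  // cf. RtpVideoSender::OnEncodedImage, which reads the layer from RTPVideoHeader.
  void OnEncodedImage(size_t simulcast_idx, const std::string& encoded) {
    streams_.at(simulcast_idx).SendVideo(encoded);
  }
  const std::vector<MockStreamSender>& streams() const { return streams_; }

 private:
  std::vector<MockStreamSender> streams_;
};

// Routes three encoded images across three simulcast layers and reports
// how many landed on the requested layer.
inline int PacketsOnLayer(size_t layer) {
  MockRtpVideoSender sender({11111, 22222, 33333});  // one SSRC per layer
  sender.OnEncodedImage(0, "low");
  sender.OnEncodedImage(2, "high");
  sender.OnEncodedImage(2, "high");
  return sender.streams().at(layer).packets_sent;
}
```

This mirrors the loop above: CreateRtpStreamSenders builds one fully configured RtpStreamSender per entry in rtp_config.ssrcs, and RtpVideoSender later indexes into that vector.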

RtpRtcp is responsible for processing, sending and receiving RTCP data. Its implementation is ModuleRtpRtcpImpl, which implements both the RtpRtcp and Module interfaces and provides the RTCP processing functionality. Internally it holds a std::unique_ptr<RtpSenderContext> rtp_sender_, an RTCPSender rtcp_sender_ and an RTCPReceiver rtcp_receiver_ for RTP/RTCP sending and receiving.

RTPSenderVideo encapsulates the concrete sending logic; its internal send module actually points into the RtpRtcp instance. The on/off control logic for FEC also lives here.

The RTPSender instance lives inside RtpRtcp, and RTPSenderVideo holds a pointer to it. It encapsulates the Pacer and PacketHistory modules, which control the send rate and retransmission. Packets eventually pass through PacedSender to PacketRouter, and finally to the object pointed to by RtpSenderEgress's Transport pointer. That object is the WebRtcVideoChannel we started with, which calls its RtpTransport to push the RTP packets out onto the network.
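The egress chain in that last paragraph can be sketched as a trace. The class names mirror the real modules (PacedSender, PacketRouter, webrtc::Transport), but the interfaces here are invented for this sketch:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records which hop each packet passed through.
struct Trace { std::vector<std::string> hops; };

// cf. webrtc::Transport, implemented on this path by WebRtcVideoChannel.
struct MockTransport {
  Trace* trace;
  void SendRtp(const std::string&) { trace->hops.push_back("transport"); }
};

// cf. PacketRouter: forwards the paced packet to the right transport.
struct MockPacketRouter {
  MockTransport* transport;
  Trace* trace;
  void SendPacket(const std::string& pkt) {
    trace->hops.push_back("router");
    transport->SendRtp(pkt);
  }
};

// cf. PacedSender: schedules packets at the target send rate.
struct MockPacedSender {
  MockPacketRouter* router;
  Trace* trace;
  void EnqueuePacket(const std::string& pkt) {
    trace->hops.push_back("pacer");
    router->SendPacket(pkt);  // the real pacer would delay this by its budget
  }
};

// Sends one packet through the chain and returns the hop order.
inline std::vector<std::string> EgressHops() {
  Trace trace;
  MockTransport transport{&trace};
  MockPacketRouter router{&transport, &trace};
  MockPacedSender pacer{&router, &trace};
  pacer.EnqueuePacket("rtp-packet");
  return trace.hops;
}
```

The hop order pacer → router → transport is the same order the real packets take from RTPSender down to the network.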

 

 
