live555 Study Notes 9 - H.264 RTP Transmission in Detail (1)

9. H.264 RTP Transmission in Detail (1)


The earlier chapters on the server side skipped over a fairly important question: how is a file opened and its SDP information obtained? Let's start there.


When the RTSPServer receives a DESCRIBE request for some medium, it looks up the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). generateSDPDescription() iterates over every ServerMediaSubsession in the ServerMediaSession, fetches each subsession's SDP via subsession->sdpLines(), and concatenates the results into one complete SDP description, which it returns.
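For orientation, here is a minimal sketch of a server that exercises this path, in the style of live555's testOnDemandRTSPServer; the stream name "h264test", the file name "test.264", and port 8554 are placeholders:

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main()
{
	TaskScheduler* scheduler = BasicTaskScheduler::createNew();
	UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

	RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
	ServerMediaSession* sms = ServerMediaSession::createNew(*env, "h264test",
			"h264test", "Session streamed from a raw H.264 file");
	// The subsession whose sdpLines() we are about to dissect:
	sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env,
			"test.264", False));
	rtspServer->addServerMediaSession(sms);

	// A DESCRIBE for rtsp://<server-ip>:8554/h264test now ends up in
	// ServerMediaSession::generateSDPDescription().
	env->taskScheduler().doEventLoop();
	return 0;
}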
We can be almost certain that opening and parsing the file happens in each subsession's sdpLines() function, so let's look at it:

char const* OnDemandServerMediaSubsession::sdpLines()
{
	if (fSDPLines == NULL) {
		// We need to construct a set of SDP lines that describe this
		// subsession (as a unicast stream).  To do so, we first create
		// dummy (unused) source and "RTPSink" objects,
		// whose parameters we use for the SDP lines:
		unsigned estBitrate;
		FramedSource* inputSource = createNewStreamSource(0, estBitrate);
		if (inputSource == NULL)
			return NULL; // file not found

		struct in_addr dummyAddr;
		dummyAddr.s_addr = 0;
		Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0);
		unsigned char rtpPayloadType = 96 + trackNumber() - 1; // if dynamic
		RTPSink* dummyRTPSink = createNewRTPSink(&dummyGroupsock,
				rtpPayloadType, inputSource);

		setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate);
		Medium::close(dummyRTPSink);
		closeStreamSource(inputSource);
	}

	return fSDPLines;
}


What it does is this: the subsession caches the SDP for its media file directly in fSDPLines, but on the first request fSDPLines is NULL, so it has to be produced first. The way this is done is surprisingly expensive: temporary (dummy) source and RTPSink objects are created and connected, and the stream is actually "played" for a while before fSDPLines can be obtained. createNewStreamSource() and createNewRTPSink() are both virtual functions, so the source and sink created here are whatever the derived class specifies; since we are analyzing H.264, that derived class is H264VideoFileServerMediaSubsession.
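Because the two factories are virtual, a subsession for a non-file source only needs to override them; everything else, the dummy-SDP dance included, is inherited. A rough sketch, where MyLiveH264Source is a hypothetical FramedSource that delivers one whole NAL unit per read (the class and its createNew() are illustrative, not part of live555):

#include "liveMedia.hh"

class MyLiveSubsession: public OnDemandServerMediaSubsession
{
public:
	MyLiveSubsession(UsageEnvironment& env)
		: OnDemandServerMediaSubsession(env, True /*reuseFirstSource*/) {}

protected:
	virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
			unsigned& estBitrate)
	{
		estBitrate = 500; // kbps, estimate
		// Our source emits whole NAL units, so a *discrete* framer suffices:
		return H264VideoStreamDiscreteFramer::createNew(envir(),
				MyLiveH264Source::createNew(envir()));
	}

	virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
			unsigned char rtpPayloadTypeIfDynamic,
			FramedSource* /*inputSource*/)
	{
		return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
				rtpPayloadTypeIfDynamic);
	}
};

Now look at the two factory functions as H264VideoFileServerMediaSubsession actually implements them: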

FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
		unsigned /*clientSessionId*/,
		unsigned& estBitrate)
{
	estBitrate = 500; // kbps, estimate

	// Create the video source:
	ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(),
			fFileName);
	if (fileSource == NULL)
		return NULL;
	fFileSize = fileSource->fileSize();

	// Create a framer for the Video Elementary Stream:
	return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(
		Groupsock* rtpGroupsock,
		unsigned char rtpPayloadTypeIfDynamic,
		FramedSource* /*inputSource*/)
{
	return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
			rtpPayloadTypeIfDynamic);
}


As you can see, they create an H264VideoStreamFramer and an H264VideoRTPSink respectively. H264VideoStreamFramer is evidently a Source in its own right, yet internally it uses another source, a ByteStreamFileSource. We'll analyze why it is layered this way later; for now, the sketch below just makes the two-stage chain tangible. Note that we still haven't seen the code that truly opens the file.
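A minimal sketch of driving the chain by hand (the buffer size and the stop-after-one-frame logic are arbitrary choices for illustration):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

unsigned char frameBuf[100000];
char gotFrame = 0;

static void afterGettingFrame(void* /*clientData*/, unsigned frameSize,
		unsigned /*numTruncatedBytes*/,
		struct timeval /*presentationTime*/,
		unsigned /*durationInMicroseconds*/)
{
	// frameBuf[0..frameSize) now holds one NAL unit that the framer
	// carved out of the raw byte stream.
	gotFrame = ~0;
}

static void onSourceClosure(void* /*clientData*/)
{
	gotFrame = ~0; // end of file
}

void readOneFrame(UsageEnvironment& env)
{
	ByteStreamFileSource* file = ByteStreamFileSource::createNew(env,
			"test.264");
	FramedSource* framer = H264VideoStreamFramer::createNew(env, file);
	framer->getNextFrame(frameBuf, sizeof frameBuf, afterGettingFrame, NULL,
			onSourceClosure, NULL);
	env.taskScheduler().doEventLoop(&gotFrame); // run until one frame arrives
}

Continuing the exploration: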

void OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(
		RTPSink* rtpSink,
		FramedSource* inputSource,
		unsigned estBitrate)
{
	if (rtpSink == NULL)
		return;

	char const* mediaType = rtpSink->sdpMediaType();
	unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
	struct in_addr serverAddrForSDP;
	serverAddrForSDP.s_addr = fServerAddressForSDP;
	char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
	char* rtpmapLine = rtpSink->rtpmapLine();
	char const* rangeLine = rangeSDPLine();
	char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
	if (auxSDPLine == NULL)
		auxSDPLine = "";

	char const* const sdpFmt = "m=%s %u RTP/AVP %d\r\n"
			"c=IN IP4 %s\r\n"
			"b=AS:%u\r\n"
			"%s"
			"%s"
			"%s"
			"a=control:%s\r\n";
	unsigned sdpFmtSize = strlen(sdpFmt)
			+ strlen(mediaType) + 5 /* max short len */
			+ 3 /* max char len */
			+ strlen(ipAddressStr) + 20 /* max int len */
			+ strlen(rtpmapLine) + strlen(rangeLine) + strlen(auxSDPLine)
			+ strlen(trackId());
	char* sdpLines = new char[sdpFmtSize];
	sprintf(sdpLines, sdpFmt, mediaType, // m= <media>
			fPortNumForSDP, // m= <port>
			rtpPayloadType, // m= <fmt list>
			ipAddressStr, // c= address
			estBitrate, // b=AS:<bandwidth>
			rtpmapLine, // a=rtpmap:... (if present)
			rangeLine, // a=range:... (if present)
			auxSDPLine, // optional extra SDP line
			trackId()); // a=control:<track-id>
	delete[] (char*) rangeLine;
	delete[] rtpmapLine;
	delete[] ipAddressStr;

	fSDPLines = strDup(sdpLines);
	delete[] sdpLines;
}


This function obtains the subsession's SDP and stores it in fSDPLines. The file must already have been opened by this point, inside rtpSink->rtpmapLine() or perhaps even earlier when the source was created. Let's set that question aside and first understand SDP production from end to end; a sample of the finished media description follows below, after which we turn our attention to getAuxSDPLine().
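To make the format string concrete, here is roughly what one finished media description looks like for our H.264 subsession. The values are illustrative: port 0 and address 0.0.0.0 are the usual placeholders for an on-demand unicast session, and the sprop-parameter-sets strings are made up; the real ones are base64 encodings of the stream's actual SPS/PPS:

m=video 0 RTP/AVP 96
c=IN IP4 0.0.0.0
b=AS:500
a=rtpmap:96 H264/90000
a=range:npt=0-
a=fmtp:96 packetization-mode=1;profile-level-id=42C01E;sprop-parameter-sets=Z0LAHtkA8WXA,aMuDyyA=
a=control:track1

Everything here except the a=fmtp line can be produced without reading the file's contents; that last line is exactly what getAuxSDPLine() exists to provide: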

char const* OnDemandServerMediaSubsession::getAuxSDPLine(
		RTPSink* rtpSink,
		FramedSource* /*inputSource*/)
{
	// Default implementation:
	return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}
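For H264VideoRTPSink, auxSDPLine() assembles the a=fmtp line from the SPS/PPS that the framer has stashed away. A simplified sketch of that assembly, where sps, spsSize, pps, ppsSize, and rtpPayloadType are assumed inputs; the real live555 code additionally strips emulation-prevention bytes and caches the result:

#include "Base64.hh" // live555's base64Encode()
#include <cstdio>
#include <cstring>

char* makeFmtpLine(unsigned char rtpPayloadType,
		unsigned char const* sps, unsigned spsSize,
		unsigned char const* pps, unsigned ppsSize)
{
	// sprop-parameter-sets carries the SPS and PPS NAL units in base64;
	// profile-level-id is bytes 1..3 of the SPS:
	char* spsB64 = base64Encode((char const*) sps, spsSize);
	char* ppsB64 = base64Encode((char const*) pps, ppsSize);

	char* fmtp = new char[200 + strlen(spsB64) + strlen(ppsB64)];
	sprintf(fmtp, "a=fmtp:%d packetization-mode=1"
			";profile-level-id=%02X%02X%02X"
			";sprop-parameter-sets=%s,%s\r\n",
			rtpPayloadType, sps[1], sps[2], sps[3], spsB64, ppsB64);
	delete[] spsB64;
	delete[] ppsB64;
	return fmtp;
}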


So the default path is straightforward: delegate to rtpSink->auxSDPLine(), which forms the a=fmtp line from the SPS/PPS saved in the source, roughly as sketched above. But in fact it isn't that simple: H264VideoFileServerMediaSubsession overrides getAuxSDPLine()! Had it not been overridden, the aux SDP line would already have been available from the earlier analysis of the file; the very fact that it is overridden tells us it wasn't, and that it can only be obtained inside this override. Look at the function in H264VideoFileServerMediaSubsession:

char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(
		RTPSink* rtpSink,
		FramedSource* inputSource)
{
	if (fAuxSDPLine != NULL)
		return fAuxSDPLine; // it's already been set up (for a previous client)

	if (fDummyRTPSink == NULL) { // we're not already setting it up for another, concurrent stream
		// Note: For H264 video files, the 'config' information ("profile-level-id" and "sprop-parameter-sets") isn't known
		// until we start reading the file.  This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
		// and we need to start reading data from our file until this changes.
		fDummyRTPSink = rtpSink;

		// Start reading the file:
		fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

		// Check whether the sink's 'auxSDPLine()' is ready:
		checkForAuxSDPLine(this);
	}

	envir().taskScheduler().doEventLoop(&fDoneFlag);

	return fAuxSDPLine;
}


The comment explains it well: for H.264, the SPS/PPS ("config" information) cannot be read from a file header (naturally, since this is a raw elementary-stream file, there is no header); the file must actually be read for a while first. In other words, the information cannot be obtained from rtpSink right away. To guarantee the aux SDP line is in hand before this function returns, the event loop is run right here inside getAuxSDPLine(). afterPlayingDummy() runs if this dummy playback reaches the end of the file first; more on that below. What does checkForAuxSDPLine(), called just before entering the event loop, actually do?

void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1()
{
	char const* dasl;

	if (fAuxSDPLine != NULL) {
		// Signal the event loop that we're done:
		setDoneFlag();
	} else if (fDummyRTPSink != NULL
			&& (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
		fAuxSDPLine = strDup(dasl);
		fDummyRTPSink = NULL;

		// Signal the event loop that we're done:
		setDoneFlag();
	} else {
		// try again after a brief delay:
		int uSecsToDelay = 100000; // 100 ms
		nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
				(TaskFunc*) checkForAuxSDPLine, this);
	}
}


It checks whether the aux SDP line has already been obtained; if so, it sets the done flag and returns. Otherwise it checks whether the sink has produced it by now; if so, it copies it, sets the done flag, and returns. If it is still unavailable, the function reschedules itself as a delayed task, retrying every 100 ms; each retry essentially just calls fDummyRTPSink->auxSDPLine() again. The event loop stops as soon as it sees fDoneFlag change, at which point the aux SDP line has been captured. If the end of the file is reached without it, afterPlayingDummy() runs and stops the event loop instead. Afterwards, the parent subsession class closes these temporary source and sink objects; they are recreated when real playback begins. The glue functions referenced above look roughly like this (paraphrased from live555):
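static void checkForAuxSDPLine(void* clientData)
{
	H264VideoFileServerMediaSubsession* subsess =
			(H264VideoFileServerMediaSubsession*) clientData;
	subsess->checkForAuxSDPLine1();
}

static void afterPlayingDummy(void* clientData)
{
	H264VideoFileServerMediaSubsession* subsess =
			(H264VideoFileServerMediaSubsession*) clientData;
	subsess->afterPlayingDummy1();
}

void H264VideoFileServerMediaSubsession::afterPlayingDummy1()
{
	// Unschedule any pending 'checking' task:
	envir().taskScheduler().unscheduleDelayedTask(nextTask());
	// Signal the event loop that we're done:
	setDoneFlag();
}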
