Overview of the PLAY Command
The PLAY command must come after a SETUP command. Data transmission begins while this command is being processed, and the RTCPInstance is also created during PLAY handling.
The client can specify the playback rate through the Scale header of the PLAY request, although support depends on the server's implementation for the specific medium: scale = 1 is normal playback, scale > 1 is fast-forward, and scale < 0 is rewind.
The client can also specify the time range to play through the Range header of the PLAY request; this, too, depends on the server's implementation for the specific medium.
The URL in a PLAY request falls into one of the following cases (PAUSE, TEARDOWN, GET_PARAMETER, and SET_PARAMETER are handled the same way):
1) Non-aggregated, e.g. rtsp://192.168.1.1/urlPreSuffix/urlSuffix: urlPreSuffix is the stream name and urlSuffix is the subsession's trackId
2) Only in the non-aggregated case can a subsession be located by trackId
3) Aggregated, e.g.
rtsp://192.168.1.1/urlPreSuffix/urlSuffix: urlSuffix is the stream name, and urlPreSuffix is ignored
rtsp://192.168.1.1/urlPreSuffix: only urlPreSuffix is present and it is the stream name; this should be the most common case
4) Aggregated, e.g. rtsp://192.168.1.1/urlPreSuffix/urlSuffix: the whole urlPreSuffix/urlSuffix is the stream name
We can control a single subsession within a session (which requires the subsession's trackId), or control the entire session at once (probably the most common case).
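The four cases above can be sketched as a small standalone classifier. This is a simplified illustration, not live555 code: streamName stands in for fOurServerMediaSession->streamName(), and suffixIsKnownTrackId stands in for the subsession-iterator lookup done in the real handler.

```cpp
#include <string>

// Simplified sketch of the URL classification done in
// RTSPServer::RTSPClientSession::handleCmd_withinSession.
enum UrlKind { SERVER_WIDE, NON_AGGREGATED, AGGREGATED, NOT_FOUND };

UrlKind classifyUrl(std::string const& streamName,
                    std::string const& urlPreSuffix,
                    std::string const& urlSuffix,
                    bool suffixIsKnownTrackId) {
  if (urlPreSuffix.empty() && urlSuffix == "*")
    return SERVER_WIDE;                     // the special "*" URL
  if (!urlSuffix.empty() && urlPreSuffix == streamName && suffixIsKnownTrackId)
    return NON_AGGREGATED;                  // urlSuffix is a subsession trackId
  if (urlSuffix == streamName ||
      (urlSuffix.empty() && urlPreSuffix == streamName))
    return AGGREGATED;                      // whole-session operation
  if (!urlPreSuffix.empty() && !urlSuffix.empty() &&
      urlPreSuffix + "/" + urlSuffix == streamName)
    return AGGREGATED;                      // concatenated name matches
  return NOT_FOUND;
}
```

Note the ordering: the non-aggregated (trackId) check runs before the aggregated checks, matching the real code.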
Here is a sample PLAY exchange:
PLAY rtsp://192.168.9.80/123.mpg/ RTSP/1.0
CSeq: 5
Session: 263BD44B
Range: npt=0.000-
User-Agent: LibVLC/1.1.0 (LIVE555 Streaming Media v2010.03.16)
Response:
RTSP/1.0 200 OK
CSeq: 5
Date: Wed, Nov 30 2011 06:55:07 GMT
Range: npt=0.000-
Session: 263BD44B
RTP-Info: url=rtsp://192.168.9.80/123.mpg/track1;seq=38851;rtptime=1434098600,url=rtsp://192.168.9.80/123.mpg/track2;seq=27752;rtptime=3595585826
The code analysis below is fairly tedious, so the summary material has been put up front.
1. URL handling for within-session command requests (PLAY, PAUSE, etc.)
void RTSPServer::RTSPClientSession
::handleCmd_withinSession(char const* cmdName,
char const* urlPreSuffix, char const* urlSuffix,
char const* cseq, char const* fullRequestStr) {
// This will either be:
// - an operation on the entire server, if "urlPreSuffix" is "", and "urlSuffix" is "*" (i.e., the special "*" URL), or
// - a non-aggregated operation, if "urlPreSuffix" is the session (stream)
// name and "urlSuffix" is the subsession (track) name, or
// - an aggregated operation, if "urlSuffix" is the session (stream) name,
// or "urlPreSuffix" is the session (stream) name, and "urlSuffix" is empty,
// or "urlPreSuffix" and "urlSuffix" are both nonempty, but when concatenated, (with "/") form the session (stream) name.
// Begin by figuring out which of these it is:
ServerMediaSubsession* subsession;
if (urlPreSuffix[0] == '\0' && urlSuffix[0] == '*' && urlSuffix[1] == '\0') {
// An operation on the entire server. This works only for GET_PARAMETER and SET_PARAMETER:
if (strcmp(cmdName, "GET_PARAMETER") == 0) {
handleCmd_GET_PARAMETER(NULL, cseq, fullRequestStr);
} else if (strcmp(cmdName, "SET_PARAMETER") == 0) {
handleCmd_SET_PARAMETER(NULL, cseq, fullRequestStr);
} else {
handleCmd_notSupported(cseq);
}
return;
} else if (fOurServerMediaSession == NULL) { // There wasn't a previous SETUP!
handleCmd_notSupported(cseq);
return;
} else if (urlSuffix[0] != '\0' && strcmp(fOurServerMediaSession->streamName(), urlPreSuffix) == 0) {
// Non-aggregated operation.
// Look up the media subsession whose track id is "urlSuffix":
ServerMediaSubsessionIterator iter(*fOurServerMediaSession);
while ((subsession = iter.next()) != NULL) {
if (strcmp(subsession->trackId(), urlSuffix) == 0) break; // success
}
if (subsession == NULL) { // no such track!
handleCmd_notFound(cseq);
return;
}
} else if (strcmp(fOurServerMediaSession->streamName(), urlSuffix) == 0 ||
(urlSuffix[0] == '\0' && strcmp(fOurServerMediaSession->streamName(), urlPreSuffix) == 0)) {
// Aggregated cases:
// 1) rtsp://192.168.1.1/urlPreSuffix/urlSuffix: urlSuffix is the stream name, urlPreSuffix ignored
// 2) rtsp://192.168.1.1/urlPreSuffix: urlPreSuffix alone is the stream name
// Aggregated operation
subsession = NULL;
} else if (urlPreSuffix[0] != '\0' && urlSuffix[0] != '\0') {
// Aggregated: the whole urlPreSuffix/urlSuffix forms the stream name
// Aggregated operation, if <urlPreSuffix>/<urlSuffix> is the session (stream) name:
unsigned const urlPreSuffixLen = strlen(urlPreSuffix);
if (strncmp(fOurServerMediaSession->streamName(), urlPreSuffix, urlPreSuffixLen) == 0 &&
fOurServerMediaSession->streamName()[urlPreSuffixLen] == '/' &&
strcmp(&(fOurServerMediaSession->streamName())[urlPreSuffixLen+1], urlSuffix) == 0) {
subsession = NULL;
} else {
handleCmd_notFound(cseq);
return;
}
} else { // the request doesn't match a known stream and/or track at all!
handleCmd_notFound(cseq);
return;
}
if (strcmp(cmdName, "TEARDOWN") == 0) {
handleCmd_TEARDOWN(subsession, cseq);
} else if (strcmp(cmdName, "PLAY") == 0) {
handleCmd_PLAY(subsession, cseq, fullRequestStr);
} else if (strcmp(cmdName, "PAUSE") == 0) {
handleCmd_PAUSE(subsession, cseq);
} else if (strcmp(cmdName, "GET_PARAMETER") == 0) {
handleCmd_GET_PARAMETER(subsession, cseq, fullRequestStr);
} else if (strcmp(cmdName, "SET_PARAMETER") == 0) {
handleCmd_SET_PARAMETER(subsession, cseq, fullRequestStr);
}
}
2. The PLAY command handler, handleCmd_PLAY (1.1)
void RTSPServer::RTSPClientSession
::handleCmd_PLAY(ServerMediaSubsession* subsession, char const* cseq,
char const* fullRequestStr) {
char* rtspURL = fOurServer.rtspURL(fOurServerMediaSession, fClientInputSocket);
unsigned rtspURLSize = strlen(rtspURL);
// The "Scale:" header gives the playback rate: scale = 1 is normal speed, scale > 1 fast-forward, scale < 0 rewind
// Parse the client's "Scale:" header, if any:
float scale;
Boolean sawScaleHeader = parseScaleHeader(fullRequestStr, scale);
//Test whether this scale value can be satisfied; the value may be adjusted along the way
// Try to set the stream's scale factor to this value:
if (subsession == NULL /*aggregate op*/) { // aggregated case: no single subsession is selected
fOurServerMediaSession->testScaleFactor(scale); // test the scale value (see 2.1)
} else {
subsession->testScaleFactor(scale);
}
char buf[100];
char* scaleHeader;
if (!sawScaleHeader) {
buf[0] = '\0'; // Because we didn't see a Scale: header, don't send one back
} else {
sprintf(buf, "Scale: %f\r\n", scale);
}
scaleHeader = strDup(buf);
// The "Range:" header gives the time range to play, e.g. "Range: npt=0.000-" plays from time 0 to the end.
// A PLAY request without a Range header is also legal: play from the beginning of the stream until it is paused.
// Parse the client's "Range:" header, if any:
double rangeStart = 0.0, rangeEnd = 0.0;
Boolean sawRangeHeader = parseRangeHeader(fullRequestStr, rangeStart, rangeEnd);
// Other "Range:" handling elided (including computing the media duration, "duration", used below)
...
// Create a "RTP-Info:" line. It will get filled in from each subsession's state:
char const* rtpInfoFmt =
"%s" // "RTP-Info:", plus any preceding rtpInfo items
"%s" // comma separator, if needed
"url=%s/%s"
";seq=%d"
";rtptime=%u"
;
unsigned rtpInfoFmtSize = strlen(rtpInfoFmt);
char* rtpInfo = strDup("RTP-Info: ");
unsigned i, numRTPInfoItems = 0;
// Do any required seeking/scaling on each subsession, before starting streaming:
for (i = 0; i < fNumStreamStates; ++i) {
if (subsession == NULL /* means: aggregated operation */
|| subsession == fStreamStates[i].subsession) {
if (sawScaleHeader) {
fStreamStates[i].subsession->setStreamScale(fOurSessionId, // set this subsession's scale (see 2.1)
fStreamStates[i].streamToken,
scale);
}
if (sawRangeHeader) {
//Compute the stream's playing duration, streamDuration
double streamDuration = 0.0; // by default; means: stream until the end of the media
if (rangeEnd > 0.0 && (rangeEnd+0.001) < duration) { // the 0.001 is because we limited the values to 3 decimal places
// We want the stream to end early. Set the duration we want:
streamDuration = rangeEnd - rangeStart;
if (streamDuration < 0.0) streamDuration = -streamDuration; // should happen only if scale < 0.0, i.e., rewinding
}
u_int64_t numBytes;
fStreamStates[i].subsession->seekStream(fOurSessionId, // seek each subsession to the requested range (see 2.2)
fStreamStates[i].streamToken,
rangeStart, streamDuration, numBytes);
}
}
}
// Create the "Range:" header that we'll send back in our response.
// (Note that we do this after seeking, in case the seeking operation changed the range start time.)
char* rangeHeader;
if (!sawRangeHeader) {
buf[0] = '\0'; // Because we didn't see a Range: header, don't send one back
} else if (rangeEnd == 0.0 && scale >= 0.0) {
sprintf(buf, "Range: npt=%.3f-\r\n", rangeStart);
} else {
sprintf(buf, "Range: npt=%.3f-%.3f\r\n", rangeStart, rangeEnd);
}
rangeHeader = strDup(buf);
// Now, start streaming:
for (i = 0; i < fNumStreamStates; ++i) {
if (subsession == NULL /* means: aggregated operation */
|| subsession == fStreamStates[i].subsession) {
unsigned short rtpSeqNum = 0;
unsigned rtpTimestamp = 0;
//Start data transmission on this subsession, i.e., playback begins (see 2.3)
fStreamStates[i].subsession->startStream(fOurSessionId,
fStreamStates[i].streamToken,
(TaskFunc*)noteClientLiveness, this,
rtpSeqNum, rtpTimestamp,
handleAlternativeRequestByte, this);
const char *urlSuffix = fStreamStates[i].subsession->trackId();
char* prevRTPInfo = rtpInfo;
unsigned rtpInfoSize = rtpInfoFmtSize
+ strlen(prevRTPInfo)
+ 1
+ rtspURLSize + strlen(urlSuffix)
+ 5 /*max unsigned short len*/
+ 10 /*max unsigned (32-bit) len*/
+ 2 /*allows for trailing \r\n at final end of string*/;
rtpInfo = new char[rtpInfoSize];
//Append this subsession's info to the "RTP-Info:" line
sprintf(rtpInfo, rtpInfoFmt,
prevRTPInfo,
numRTPInfoItems++ == 0 ? "" : ",",
rtspURL, urlSuffix,
rtpSeqNum,
rtpTimestamp
);
delete[] prevRTPInfo;
}
}
//Assembling the response message elided
...
}
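The duration/Range logic buried in the handler above can be condensed into a standalone sketch. This is a simplification, not live555 code: the function names are mine, and "duration" is the media's total duration, computed in the elided part of the handler.

```cpp
#include <cstdio>
#include <string>

// Sketch of how handleCmd_PLAY derives the effective stream duration.
// streamDuration == 0 means: play until the end of the media.
double computeStreamDuration(double rangeStart, double rangeEnd, double duration) {
  double streamDuration = 0.0;
  if (rangeEnd > 0.0 && (rangeEnd + 0.001) < duration) { // values limited to 3 decimal places
    streamDuration = rangeEnd - rangeStart;
    if (streamDuration < 0.0) streamDuration = -streamDuration; // scale < 0 (rewind)
  }
  return streamDuration;
}

// Sketch of the "Range:" header echoed back in the response.
std::string makeRangeHeader(bool sawRangeHeader, double rangeStart,
                            double rangeEnd, float scale) {
  if (!sawRangeHeader) return ""; // no Range in the request: send none back
  char buf[100];
  if (rangeEnd == 0.0 && scale >= 0.0) {
    snprintf(buf, sizeof buf, "Range: npt=%.3f-\r\n", rangeStart);
  } else {
    snprintf(buf, sizeof buf, "Range: npt=%.3f-%.3f\r\n", rangeStart, rangeEnd);
  }
  return buf;
}
```

For the sample exchange above (Range: npt=0.000-), makeRangeHeader reproduces the "Range: npt=0.000-" line seen in the 200 OK response.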
3. The playback-rate parameter, scale (2.1)
The scale parameter specifies the playback rate: scale = 1 is normal playback, scale > 1 fast-forward, scale < 0 rewind. Whether a requested scale can be honored depends on the server. Look at the test function in ServerMediaSession:
void ServerMediaSession::testScaleFactor(float& scale) {
// First, try setting all subsessions to the desired scale.
// If the subsessions' actual scales differ from each other, choose the
// value that's closest to 1, and then try re-setting all subsessions to that
// value. If the subsessions' actual scales still differ, re-set them all to 1.
float minSSScale = 1.0;
float maxSSScale = 1.0;
float bestSSScale = 1.0;
float bestDistanceTo1 = 0.0;
ServerMediaSubsession* subsession;
for (subsession = fSubsessionsHead; subsession != NULL;
subsession = subsession->fNext) {
float ssscale = scale;
subsession->testScaleFactor(ssscale);
if (subsession == fSubsessionsHead) { // this is the first subsession
minSSScale = maxSSScale = bestSSScale = ssscale;
bestDistanceTo1 = (float)fabs(ssscale - 1.0f);
} else {
if (ssscale < minSSScale) {
minSSScale = ssscale;
} else if (ssscale > maxSSScale) {
maxSSScale = ssscale;
}
float distanceTo1 = (float)fabs(ssscale - 1.0f);
if (distanceTo1 < bestDistanceTo1) {
bestSSScale = ssscale;
bestDistanceTo1 = distanceTo1;
}
}
}
if (minSSScale == maxSSScale) {
// All subsessions are at the same scale: minSSScale == bestSSScale == maxSSScale
scale = minSSScale;
return;
}
// The scales for each subsession differ. Try to set each one to the value
// that's closest to 1:
for (subsession = fSubsessionsHead; subsession != NULL;
subsession = subsession->fNext) {
float ssscale = bestSSScale;
subsession->testScaleFactor(ssscale);
if (ssscale != bestSSScale) break; // no luck
}
if (subsession == NULL) {
// All subsessions are at the same scale: bestSSScale
scale = bestSSScale;
return;
}
// Still no luck. Set each subsession's scale to 1:
for (subsession = fSubsessionsHead; subsession != NULL;
subsession = subsession->fNext) {
float ssscale = 1;
subsession->testScaleFactor(ssscale);
}
scale = 1;
}
The function above is somewhat involved, mainly to handle subsessions that support different scales; in that case it picks the supported value closest to 1 (normal speed). It actually calls testScaleFactor on each subsession, which has a default implementation in the ServerMediaSubsession class, as follows:
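To make the negotiation concrete, here is a toy version with stand-in subsessions whose supported scales differ. ToySubsession and negotiateScale are purely illustrative, not live555 API; each toy subsession clamps a requested scale to its nearest supported value.

```cpp
#include <cmath>
#include <vector>

// Toy stand-in for a subsession: clamp the requested scale to the
// nearest supported value.
struct ToySubsession {
  std::vector<float> supported; // e.g. {1, 2, 4}
  void testScaleFactor(float& scale) const {
    float best = supported[0];
    for (float s : supported)
      if (std::fabs(s - scale) < std::fabs(best - scale)) best = s;
    scale = best;
  }
};

// Mirror of ServerMediaSession::testScaleFactor's strategy: try the
// requested scale everywhere; if results differ, retry with the result
// closest to 1; if they still differ, fall back to 1 (normal speed).
float negotiateScale(std::vector<ToySubsession> const& subs, float requested) {
  bool first = true, allEqual = true;
  float firstScale = 1.0f, bestScale = 1.0f, bestDist = -1.0f;
  for (auto const& sub : subs) {
    float s = requested;
    sub.testScaleFactor(s);
    if (first) { firstScale = s; first = false; }
    else if (s != firstScale) allEqual = false;
    float d = std::fabs(s - 1.0f);
    if (bestDist < 0.0f || d < bestDist) { bestDist = d; bestScale = s; }
  }
  if (allEqual) return firstScale;
  for (auto const& sub : subs) {
    float s = bestScale;
    sub.testScaleFactor(s);
    if (s != bestScale) return 1.0f; // still no agreement: normal speed
  }
  return bestScale;
}
```

For example, if one track supports {1, 2, 4} and another only {1, 2}, a request for scale 4 settles on 2, the common value closest to 1.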
void ServerMediaSubsession::testScaleFactor(float& scale) {
// default implementation: Support scale = 1 only
scale = 1;
}
By default only normal-speed playback is supported. Next, look at the subsession's setStreamScale function:
void ServerMediaSubsession::setStreamScale(unsigned /*clientSessionId*/,
void* /*streamToken*/, float /*scale*/) {
// default implementation: do nothing
}
It does nothing! But this is a virtual function, and OnDemandServerMediaSubsession reimplements it:
void OnDemandServerMediaSubsession::setStreamScale(unsigned /*clientSessionId*/,
void* streamToken, float scale) {
// Changing the scale factor isn't allowed if multiple clients are receiving data
// from the same source:
if (fReuseFirstSource) return;
StreamState* streamState = (StreamState*)streamToken;
if (streamState != NULL && streamState->mediaSource() != NULL) {
setStreamSourceScale(streamState->mediaSource(), scale);
}
}
Continue with setStreamSourceScale:
void OnDemandServerMediaSubsession
::setStreamSourceScale(FramedSource* /*inputSource*/, float /*scale*/) {
// Default implementation: Do nothing
}
It does nothing at all. So to implement fast-forward/rewind, all you need is to reimplement this function in your own subsession.
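The override pattern can be shown with a self-contained mock of the relevant slice of the hierarchy. MockFramedSource, its setScale hook, and TrickPlaySubsession are all hypothetical names; real code would derive from OnDemandServerMediaSubsession and act on the actual FramedSource.

```cpp
// Mock of the virtual-hook pattern used for setStreamSourceScale.
struct MockFramedSource {
  float scale = 1.0f;
  void setScale(float s) { scale = s; } // hypothetical source-side hook
};

struct MockSubsession {
  // Default implementation: do nothing, like OnDemandServerMediaSubsession
  virtual void setStreamSourceScale(MockFramedSource* /*src*/, float /*scale*/) {}
  virtual ~MockSubsession() {}
};

struct TrickPlaySubsession : MockSubsession {
  // Override: forward the negotiated scale to the media source
  void setStreamSourceScale(MockFramedSource* src, float scale) override {
    src->setScale(scale);
  }
};
```

Because the call goes through the base-class pointer held by the server, the override is picked up with no changes to the PLAY-handling code.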
4. Setting the playback time range (2.2)
seekStream is a virtual function of ServerMediaSubsession whose default implementation does nothing; go straight to the OnDemandServerMediaSubsession implementation:
void OnDemandServerMediaSubsession::seekStream(unsigned /*clientSessionId*/,
void* streamToken, double& seekNPT, double streamDuration, u_int64_t& numBytes) {
numBytes = 0; // by default: unknown
// Seeking isn't allowed if multiple clients are receiving data from
// the same source:
if (fReuseFirstSource) return;
StreamState* streamState = (StreamState*)streamToken;
if (streamState != NULL && streamState->mediaSource() != NULL) {
seekStreamSource(streamState->mediaSource(), seekNPT, streamDuration, numBytes);
}
}
seekStreamSource is likewise a virtual function defined on OnDemandServerMediaSubsession; its default implementation also does nothing, so it must be reimplemented in your own subclass. A look at H264VideoFileServerMediaSubsession shows it does not implement seekStreamSource, so a *.264 file can only be played from the beginning each time.
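For a source that does support seeking — say, a file with a roughly constant bitrate — a seekStreamSource implementation typically maps normal play time (NPT) to a byte offset and length. A hedged arithmetic sketch (the helper name is mine, not live555's):

```cpp
#include <cstdint>

// Map an NPT seek to a byte range for a (roughly) constant-bitrate file,
// as a fixed-bitrate seekStreamSource implementation might do.
// bitrateKbps is the stream bitrate in kilobits per second.
void nptToByteRange(double seekNPT, double streamDuration, unsigned bitrateKbps,
                    uint64_t& byteOffset, uint64_t& numBytes) {
  double bytesPerSecond = bitrateKbps * 1000.0 / 8.0; // kbps -> bytes/sec
  byteOffset = (uint64_t)(seekNPT * bytesPerSecond);
  numBytes = streamDuration > 0.0
               ? (uint64_t)(streamDuration * bytesPerSecond)
               : 0; // 0 means: until the end of the file
}
```

An 800 kbps stream is 100000 bytes/second, so seeking to NPT 10.0 lands at byte offset 1000000. Variable-bitrate formats like H.264 need an index of sync points instead, which is one reason the stock H264 subsession leaves this unimplemented.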
5. Starting playback (2.3)
After all this work, playback is finally about to begin.
startStream is a pure virtual function declared in ServerMediaSubsession; first look at the implementation in its subclass OnDemandServerMediaSubsession:
void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
void* streamToken,
TaskFunc* rtcpRRHandler,
void* rtcpRRHandlerClientData,
unsigned short& rtpSeqNum,
unsigned& rtpTimestamp,
ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
void* serverRequestAlternativeByteHandlerClientData) {
StreamState* streamState = (StreamState*)streamToken;
Destinations* destinations
= (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
if (streamState != NULL) {
streamState->startPlaying(destinations,
rtcpRRHandler, rtcpRRHandlerClientData,
serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
if (streamState->rtpSink() != NULL) {
rtpSeqNum = streamState->rtpSink()->currentSeqNo(); // reported back to the client in the response's "RTP-Info:" header
rtpTimestamp = streamState->rtpSink()->presetNextTimestamp();
}
}
}
The main work here is done by StreamState::startPlaying:
void StreamState::startPlaying(Destinations* dests,
TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData,
ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
void* serverRequestAlternativeByteHandlerClientData) {
if (dests == NULL) return;
if (fRTCPInstance == NULL && fRTPSink != NULL) {
// Create (and start) a 'RTCP instance' for this RTP sink:
fRTCPInstance
= RTCPInstance::createNew(fRTPSink->envir(), fRTCPgs,
fTotalBW, (unsigned char*)fMaster.fCNAME,
fRTPSink, NULL /* we're a server */);
// Note: This starts RTCP running automatically
}
if (dests->isTCP) {
// Change RTP and RTCP to use the TCP socket instead of UDP:
if (fRTPSink != NULL) {
fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
fRTPSink->setServerRequestAlternativeByteHandler(dests->tcpSocketNum, serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
}
if (fRTCPInstance != NULL) {
fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
rtcpRRHandler, rtcpRRHandlerClientData);
}
} else {
// Tell the RTP and RTCP 'groupsocks' about this destination
// (in case they don't already have it):
if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort);
if (fRTCPgs != NULL) fRTCPgs->addDestination(dests->addr, dests->rtcpPort);
if (fRTCPInstance != NULL) {
fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
rtcpRRHandler, rtcpRRHandlerClientData);
}
}
//Call startPlaying on the sink to start transmitting data
if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
if (fRTPSink != NULL) { // transmit via RTP
fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
fAreCurrentlyPlaying = True;
} else if (fUDPSink != NULL) { // raw UDP packets, without RTP
fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
fAreCurrentlyPlaying = True;
}
}
}
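In the dests->isTCP branch above, RTP and RTCP packets are interleaved on the RTSP TCP connection itself, framed per RFC 2326 §10.12: a '$' byte, the channel id (the rtpChannelId/rtcpChannelId negotiated at SETUP), and a 16-bit big-endian length. A minimal framing sketch, not live555's actual send path:

```cpp
#include <cstdint>
#include <vector>

// Frame one RTP/RTCP packet for interleaved transport over the RTSP
// TCP socket: '$' <1-byte channel id> <2-byte big-endian length> <payload>.
std::vector<uint8_t> frameInterleaved(uint8_t channelId,
                                      std::vector<uint8_t> const& packet) {
  std::vector<uint8_t> out;
  out.reserve(4 + packet.size());
  out.push_back('$');
  out.push_back(channelId);
  out.push_back((uint8_t)(packet.size() >> 8));
  out.push_back((uint8_t)(packet.size() & 0xFF));
  out.insert(out.end(), packet.begin(), packet.end());
  return out;
}
```

This is also why setServerRequestAlternativeByteHandler exists: once media shares the TCP socket, the server must distinguish '$'-framed media bytes from further RTSP requests arriving on the same connection.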
The function above ultimately calls MediaSink::startPlaying to begin transmitting data. The call chain there is fairly involved and will be analyzed separately in the next article. Roughly: one frame is fetched and sent, then the function returns, and the remaining work, such as sending the response message, is handled afterward.
---------------------
Author: gavinr
Source: CSDN
Original: https://blog.csdn.net/gavinr/article/details/7031437
Copyright notice: This is the author's original article; please include a link to the original when reposting.