Enter the testProgs directory and run ./openRTSP rtsp://xxxx/test.mp4
To analyze the RTSP protocol handling, set a breakpoint in the setupStreams() function and trace from there. Our focus here is the code inside the following while (1) loop:
- void BasicTaskScheduler0::doEventLoop(char* watchVariable)
- {
- // Repeatedly loop, handling readable sockets and timed events:
- while (1)
- {
- if (watchVariable != NULL && *watchVariable != 0) break;
- SingleStep();
- }
- }
This shows that the live555 client is effectively a single-threaded program: it simply executes the code in SingleStep() over and over. Reading through that function, the key line is the following:
- (*handler->handlerProc)(handler->clientData, resultConditionSet);
This line appears twice in SingleStep(). Tracing its execution shows that the first occurrence invokes the handler that negotiates the RTSP protocol with the server, while the second occurrence invokes the handler that processes the actual audio and video data. We will set the RTSP protocol exchange aside and go straight to the second call.
In our debugging session, when execution reaches the call above it lands in the following function in the liveMedia directory:
- void MultiFramedRTPSource::networkReadHandler(MultiFramedRTPSource* source, int /*mask*/)
- {
- source->networkReadHandler1();
- }
// The main job of the following function is to read data from the socket and store it
- void MultiFramedRTPSource::networkReadHandler1()
- {
- BufferedPacket* bPacket = fPacketReadInProgress;
- if (bPacket == NULL)
- {
- // Normal case: Get a free BufferedPacket descriptor to hold the new network packet:
- // Allocate a fresh packet descriptor to hold the data read from the socket
- bPacket = fReorderingBuffer->getFreePacket(this);
- }
- // Read the network packet, and perform sanity checks on the RTP header:
- Boolean readSuccess = False;
- do
- {
- Boolean packetReadWasIncomplete = fPacketReadInProgress != NULL;
- // fillInData() wraps the actual read from the socket; once it returns, the data has been stored in the bPacket object
- if (!bPacket->fillInData(fRTPInterface, packetReadWasIncomplete))
- {
- if (bPacket->bytesAvailable() == 0)
- {
- envir() << "MultiFramedRTPSource error: Hit limit when reading incoming packet over TCP. Increase \"MAX_PACKET_SIZE\"\n";
- }
- break;
- }
- if (packetReadWasIncomplete)
- {
- // We need additional read(s) before we can process the incoming packet:
- fPacketReadInProgress = bPacket;
- return;
- } else
- {
- fPacketReadInProgress = NULL;
- }
- // (RTP packet processing omitted)
- ...
- ...
- ...
- // fReorderingBuffer is a member of MultiFramedRTPSource; it maintains a linked list of stored Packet objects
- // storePacket() below inserts the packet just read into that list
- if (!fReorderingBuffer->storePacket(bPacket)) break;
- readSuccess = True;
- } while (0);
- if (!readSuccess) fReorderingBuffer->freePacket(bPacket);
- doGetNextFrame1();
- // If we didn't get proper data this time, we'll get another chance
- }
// The following function takes packets back out of the packet list introduced above (i.e. fReorderingBuffer) and hands them to the function that consumes them
// Listing 1.1
- void MultiFramedRTPSource::doGetNextFrame1()
- {
- while (fNeedDelivery)
- {
- // If we already have packet data available, then deliver it now.
- Boolean packetLossPrecededThis;
- // Take one packet out of the fReorderingBuffer object
- BufferedPacket* nextPacket
- = fReorderingBuffer->getNextCompletedPacket(packetLossPrecededThis);
- if (nextPacket == NULL) break;
- fNeedDelivery = False;
- if (nextPacket->useCount() == 0)
- {
- // Before using the packet, check whether it has a special header
- // that needs to be processed:
- unsigned specialHeaderSize;
- if (!processSpecialHeader(nextPacket, specialHeaderSize))
- {
- // Something's wrong with the header; reject the packet:
- fReorderingBuffer->releaseUsedPacket(nextPacket);
- fNeedDelivery = True;
- break;
- }
- nextPacket->skip(specialHeaderSize);
- }
- // Check whether we're part of a multi-packet frame, and whether
- // there was packet loss that would render this packet unusable:
- if (fCurrentPacketBeginsFrame)
- {
- if (packetLossPrecededThis || fPacketLossInFragmentedFrame)
- {
- // We didn't get all of the previous frame.
- // Forget any data that we used from it:
- fTo = fSavedTo; fMaxSize = fSavedMaxSize;
- fFrameSize = 0;
- }
- fPacketLossInFragmentedFrame = False;
- } else if (packetLossPrecededThis)
- {
- // We're in a multi-packet frame, with preceding packet loss
- fPacketLossInFragmentedFrame = True;
- }
- if (fPacketLossInFragmentedFrame)
- {
- // This packet is unusable; reject it:
- fReorderingBuffer->releaseUsedPacket(nextPacket);
- fNeedDelivery = True;
- break;
- }
- // The packet is usable. Deliver all or part of it to our caller:
- unsigned frameSize;
- // Copy the packet taken out above to the address pointed to by fTo
- nextPacket->use(fTo, fMaxSize, frameSize, fNumTruncatedBytes,
- fCurPacketRTPSeqNum, fCurPacketRTPTimestamp,
- fPresentationTime, fCurPacketHasBeenSynchronizedUsingRTCP,
- fCurPacketMarkerBit);
- fFrameSize += frameSize;
- if (!nextPacket->hasUsableData())
- {
- // We're completely done with this packet now
- fReorderingBuffer->releaseUsedPacket(nextPacket);
- }
- if (fCurrentPacketCompletesFrame) // Once a complete frame has been extracted, the function that needs it can be called to process it
- {
- // We have all the data that the client wants.
- if (fNumTruncatedBytes > 0)
- {
- envir() << "MultiFramedRTPSource::doGetNextFrame1(): The total received frame size exceeds the client's buffer size ("
- << fSavedMaxSize << "). "
- << fNumTruncatedBytes << " bytes of trailing data will be dropped!\n";
- }
- // Call our own 'after getting' function, so that the downstream object can consume the data:
- if (fReorderingBuffer->isEmpty())
- {
- // Common case optimization: There are no more queued incoming packets, so this code will not get
- // executed again without having first returned to the event loop. Call our 'after getting' function
- // directly, because there's no risk of a long chain of recursion (and thus stack overflow):
- afterGetting(this); // invoke the function that consumes the extracted frame
- } else
- {
- // Special case: Call our 'after getting' function via the event loop.
- nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
- (TaskFunc*)FramedSource::afterGetting, this);
- }
- }
- else
- {
- // This packet contained fragmented data, and does not complete
- // the data that the client wants. Keep getting data:
- fTo += frameSize; fMaxSize -= frameSize;
- fNeedDelivery = True;
- }
- }
- }
// The following function invokes the callback that consumes the frame data
- void FramedSource::afterGetting(FramedSource* source)
- {
- source->fIsCurrentlyAwaitingData = False;
- // indicates that we can be read again
- // Note that this needs to be done here, in case the "fAfterFunc"
- // called below tries to read another frame (which it usually will)
- if (source->fAfterGettingFunc != NULL)
- {
- (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
- source->fFrameSize, source->fNumTruncatedBytes,
- source->fPresentationTime,
- source->fDurationInMicroseconds);
- }
- }
The fAfterGettingFunc above is a callback we register ourselves. When running the openRTSP example from testProgs, it points to the afterGettingFrame() function registered through the getNextFrame() call in the following code:
- Boolean FileSink::continuePlaying()
- {
- if (fSource == NULL) return False;
- fSource->getNextFrame(fBuffer, fBufferSize,
- afterGettingFrame, this,
- onSourceClosure, this);
- return True;
- }
When running the testRTSPClient example from testProgs instead, it points to the afterGettingFrame() function registered here:
- Boolean DummySink::continuePlaying()
- {
- if (fSource == NULL) return False; // sanity check (should not happen)
- // Request the next frame of data from our input source. "afterGettingFrame()" will get called later, when it arrives:
- fSource->getNextFrame(fReceiveBuffer, DUMMY_SINK_RECEIVE_BUFFER_SIZE,
- afterGettingFrame, this,
- onSourceClosure, this);
- return True;
- }
As the code above shows, the first argument to getNextFrame() is a buffer defined in the respective sink class. Continuing with openRTSP as our example, fBuffer is a pointer declared in the FileSink class: unsigned char* fBuffer;
Let's take a short detour and see what getNextFrame() does:
- void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
- afterGettingFunc* afterGettingFunc,
- void* afterGettingClientData,
- onCloseFunc* onCloseFunc,
- void* onCloseClientData)
- {
- // Make sure we're not already being read:
- if (fIsCurrentlyAwaitingData)
- {
- envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
- envir().internalError();
- }
- fTo = to;
- fMaxSize = maxSize;
- fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
- fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
- fAfterGettingFunc = afterGettingFunc;
- fAfterGettingClientData = afterGettingClientData;
- fOnCloseFunc = onCloseFunc;
- fOnCloseClientData = onCloseClientData;
- fIsCurrentlyAwaitingData = True;
- doGetNextFrame();
- }
The code shows that fBuffer, passed as the first argument to getNextFrame(), is assigned to the pointer fTo. Recall this fragment of void MultiFramedRTPSource::doGetNextFrame1(), which we analyzed in Listing 1.1:
- // Copy the packet taken out of the list to the address pointed to by fTo
- nextPacket->use(fTo, fMaxSize, frameSize, fNumTruncatedBytes,
- fCurPacketRTPSeqNum, fCurPacketRTPTimestamp,
- fPresentationTime, fCurPacketHasBeenSynchronizedUsingRTCP,
- fCurPacketMarkerBit);
It should now be clear: the fBuffer passed into getNextFrame() ends up holding the data taken out of the packet-list object, ready for use once use() returns.
And as the code in void MultiFramedRTPSource::doGetNextFrame1() shows, our registered void FileSink::afterGettingFrame() is invoked right after use() returns, inside afterGetting(this). Let's see what afterGettingFrame() does:
- void FileSink::afterGettingFrame(void* clientData, unsigned frameSize,
- unsigned numTruncatedBytes,
- struct timeval presentationTime,
- unsigned /*durationInMicroseconds*/)
- {
- FileSink* sink = (FileSink*)clientData;
- sink->afterGettingFrame(frameSize, numTruncatedBytes, presentationTime);
- }
- void FileSink::afterGettingFrame(unsigned frameSize,
- unsigned numTruncatedBytes,
- struct timeval presentationTime)
- {
- if (numTruncatedBytes > 0)
- {
- envir() << "FileSink::afterGettingFrame(): The input frame data was too large for our buffer size ("
- << fBufferSize << "). "
- << numTruncatedBytes << " bytes of trailing data was dropped! Correct this by increasing the \"bufferSize\" parameter in the \"createNew()\" call to at least "
- << fBufferSize + numTruncatedBytes << "\n";
- }
- addData(fBuffer, frameSize, presentationTime);
- if (fOutFid == NULL || fflush(fOutFid) == EOF)
- {
- // The output file has closed. Handle this the same way as if the
- // input source had closed:
- onSourceClosure(this);
- stopPlaying();
- return;
- }
- if (fPerFrameFileNameBuffer != NULL)
- {
- if (fOutFid != NULL) { fclose(fOutFid); fOutFid = NULL; }
- }
- // Then try getting the next frame:
- continuePlaying();
- }
As the code above shows, addData() is called to save the data to a file, after which continuePlaying() requests the next frame and processes it in turn, repeating until the loop ends and the call chain unwinds. A look at addData() completes the picture:
- void FileSink::addData(unsigned char const* data, unsigned dataSize,
- struct timeval presentationTime)
- {
- if (fPerFrameFileNameBuffer != NULL)
- {
- // Special case: Open a new file on-the-fly for this frame
- sprintf(fPerFrameFileNameBuffer, "%s-%lu.%06lu", fPerFrameFileNamePrefix,
- presentationTime.tv_sec, presentationTime.tv_usec);
- fOutFid = OpenOutputFile(envir(), fPerFrameFileNameBuffer);
- }
- // Write to our file:
- #ifdef TEST_LOSS
- static unsigned const framesPerPacket = 10;
- static unsigned frameCount = 0;
- static Boolean packetIsLost;
- if ((frameCount++)%framesPerPacket == 0)
- {
- packetIsLost = (our_random()%10 == 0); // simulate 10% packet loss #####
- }
- if (!packetIsLost)
- #endif
- if (fOutFid != NULL && data != NULL)
- {
- fwrite(data, 1, dataSize, fOutFid);
- }
- }
Finally, the library function fwrite() performs the actual write to the file.
Summary: from the analysis above, to obtain the frames received from the RTSP server, we only need to define a class implementing two functions with the signatures below, declare a buffer pointer to receive the frame data, and call getNextFrame(buffer, ...) from continuePlaying().
- typedef void (afterGettingFunc)(void* clientData, unsigned frameSize,
- unsigned numTruncatedBytes,
- struct timeval presentationTime,
- unsigned durationInMicroseconds);
- typedef void (onCloseFunc)(void* clientData);
The buffer can then be consumed inside the afterGettingFunc callback.