Let's start from mediaplayer.cpp again and look at the implementation of the start function:
status_t MediaPlayer::start()
{
    mPlayer->setLooping(mLoop);
    mPlayer->setVolume(mLeftVolume, mRightVolume);
    mPlayer->setAuxEffectSendLevel(mSendLevel);
    mCurrentState = MEDIA_PLAYER_STARTED;
    status_t ret = mPlayer->start();
    return ret;
}
These are the core lines. Note that mPlayer here is of type IMediaPlayer: it is the Bp (proxy) end of the anonymous IMediaPlayer Binder server, so calls on it are transported through this anonymous Binder server to MediaPlayerService. We will skip the other setters for now and follow the final start() call.
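To make the Bp-to-Bn hop concrete, here is roughly what the proxy side of start() looks like (a sketch following the BpMediaPlayer pattern in IMediaPlayer.cpp; quoted from memory, so treat the details as approximate):

status_t BpMediaPlayer::start() {
    Parcel data, reply;
    // identify the interface; start() marshals no other arguments
    data.writeInterfaceToken(IMediaPlayer::getInterfaceDescriptor());
    // cross the Binder boundary; BnMediaPlayer::onTransact() dispatches
    // this to the server-side object living inside MediaPlayerService
    remote()->transact(START, data, &reply);
    return reply.readInt32();   // unmarshal the status_t result
}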
From the Bp end of IMediaPlayer the call crosses to the Bn end and arrives at MediaPlayerService, which has created a Client for this connection, so the function that finally runs is MediaPlayerService::Client::start():
status_t MediaPlayerService::Client::start()
{
    ALOGV("[%d] start", mConnId);
    sp<MediaPlayerBase> p = getPlayer();
    if (p == 0) return UNKNOWN_ERROR;
    p->setLooping(mLoop);
    return p->start();
}
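As a quick reminder of why getPlayer() hands back a NuPlayerDriver: the player was chosen earlier, during setDataSource, through MediaPlayerFactory. A rough sketch of that path (function names are from AOSP, but the parameter lists here are abbreviated assumptions):

// Inside MediaPlayerService::Client::setDataSource() (sketch):
player_type playerType = MediaPlayerFactory::getPlayerType(this, url);  // NU_PLAYER
sp<MediaPlayerBase> p = MediaPlayerFactory::createPlayer(playerType /* , ... */);
// NuPlayerFactory::createPlayer() essentially does:
//     return new NuPlayerDriver(pid);
// and that NuPlayerDriver is what getPlayer() returns in start() above.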
With a NuPlayerDriver in hand, the call therefore lands in NuPlayerDriver::start():
status_t NuPlayerDriver::start() {
    ALOGD("start(%p), state is %d, eos is %d", this, mState, mAtEOS);
    Mutex::Autolock autoLock(mLock);
    switch (mState) {
        case STATE_PREPARED:
        {
            mAtEOS = false;
            mPlayer->start();

            if (mStartupSeekTimeUs >= 0) {
                mPlayer->seekToAsync(mStartupSeekTimeUs);
                mStartupSeekTimeUs = -1;
            }
            break;
        }
        // ... other states elided ...
After the prepare stage covered earlier, the state is already STATE_PREPARED, and mPlayer here is the NuPlayer instance, so the call continues into NuPlayer::start():
void NuPlayer::start() {
    (new AMessage(kWhatStart, this))->post();
}
The call continues on through the message mechanism:
void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatStart:
        {
            ALOGV("kWhatStart");
            if (mStarted) {
                // do not resume yet if the source is still buffering
                if (!mPausedForBuffering) {
                    onResume();
                }
            } else {
                onStart();
            }
            mPausedByClient = false;
            break;
        }
        // ...
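This post-then-handle hop is the standard AMessage/AHandler/ALooper idiom used throughout NuPlayer: the public call returns immediately, and the real work runs serialized on the looper thread. A minimal self-contained sketch of the idiom (illustrative only; MyHandler is hypothetical):

#include <media/stagefright/foundation/AHandler.h>
#include <media/stagefright/foundation/ALooper.h>
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

struct MyHandler : public AHandler {              // hypothetical handler
    enum { kWhatStart = 'strt' };
    void start() {                                // may be called from any thread
        (new AMessage(kWhatStart, this))->post(); // returns immediately
    }
protected:
    void onMessageReceived(const sp<AMessage> &msg) override {
        switch (msg->what()) {
            case kWhatStart:
                // runs on the looper thread, serialized with other messages
                break;
        }
    }
};

NuPlayer::start() above is exactly this shape; onStart() then runs on NuPlayer's own looper.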
We have finally reached the core function; let's examine onStart() carefully:
void NuPlayer::onStart(int64_t startPositionUs) {
    if (!mSourceStarted) {
        mSourceStarted = true;
        mSource->start();
    }
    if (startPositionUs > 0) {
        performSeek(startPositionUs);
        if (mSource->getFormat(false /* audio */) == NULL) {
            return;
        }
    }

    mOffloadAudio = false;
    mAudioEOS = false;
    mVideoEOS = false;
    mStarted = true;

    uint32_t flags = 0;
    if (mSource->isRealTime()) {
        flags |= Renderer::FLAG_REAL_TIME;
    }

    sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);
    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
    if (mAudioSink != NULL) {
        streamType = mAudioSink->getAudioStreamType();
    }

    sp<AMessage> videoFormat = mSource->getFormat(false /* audio */);

    mOffloadAudio =
        canOffloadStream(audioMeta, (videoFormat != NULL), mSource->isStreaming(), streamType);
    if (mOffloadAudio) {
        flags |= Renderer::FLAG_OFFLOAD_AUDIO;
    }

    sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
    ++mRendererGeneration;
    notify->setInt32("generation", mRendererGeneration);
    mRenderer = new Renderer(mAudioSink, notify, flags);
    mRendererLooper = new ALooper;
    mRendererLooper->setName("NuPlayerRenderer");
    mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
    mRendererLooper->registerHandler(mRenderer);

    status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
    if (err != OK) {
        mSource->stop();
        mSourceStarted = false;
        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
        return;
    }

    float rate = getFrameRate();
    if (rate > 0) {
        mRenderer->setVideoFrameRate(rate);
    }

    if (mVideoDecoder != NULL) {
        mVideoDecoder->setRenderer(mRenderer);
    }
    if (mAudioDecoder != NULL) {
        mAudioDecoder->setRenderer(mRenderer);
    }

    if (mVideoDecoder != NULL) {
        scheduleSetVideoDecoderTime();
    }

    postScanSources();
}
Before diving in, here is the overall flow chart of this function (figure not reproduced):
1. First, the mSource->start() call. Back in NuPlayer::setDataSourceAsync(), a GenericSource was created:
sp<GenericSource> source = new GenericSource(notify, mUIDValid, mUID);
and then, in the kWhatSetDataSource case of NuPlayer::onMessageReceived(), NuPlayer's mSource was set to that GenericSource:
void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatSetDataSource:
        {
            ALOGV("kWhatSetDataSource");
            CHECK(mSource == NULL);

            status_t err = OK;
            sp<RefBase> obj;
            CHECK(msg->findObject("source", &obj));
            if (obj != NULL) {
                mSource = static_cast<Source *>(obj.get());
            }
            // ...
So the mSource->start() call here ends up executing in GenericSource.cpp:
void NuPlayer::GenericSource::start() {
    ALOGI("start");

    mStopRead = false;
    if (mAudioTrack.mSource != NULL) {
        postReadBuffer(MEDIA_TRACK_TYPE_AUDIO);
    }

    if (mVideoTrack.mSource != NULL) {
        postReadBuffer(MEDIA_TRACK_TYPE_VIDEO);
    }

    setDrmPlaybackStatusIfNeeded(Playback::START, getLastReadPosition() / 1000);
    mStarted = true;

    (new AMessage(kWhatStart, this))->post();
}
Here postReadBuffer() is invoked for the audio track and the video track respectively to kick off reading their data, and a kWhatStart msg is posted as well.
Let's look at postReadBuffer() first. It doesn't read anything itself: it just posts a kWhatReadBuffer message, deduplicated per track type; the actual type-specific work happens later in readBuffer().
void NuPlayer::GenericSource::postReadBuffer(media_track_type trackType) {
    Mutex::Autolock _l(mReadBufferLock);

    if ((mPendingReadBufferTypes & (1 << trackType)) == 0) {
        mPendingReadBufferTypes |= (1 << trackType);
        sp<AMessage> msg = new AMessage(kWhatReadBuffer, this);
        msg->setInt32("trackType", trackType);
        msg->post();
    }
}
void NuPlayer::GenericSource::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatReadBuffer:
        {
            onReadBuffer(msg);
            break;
        }
        // ...
void NuPlayer::GenericSource::onReadBuffer(sp<AMessage> msg) {
    int32_t tmpType;
    CHECK(msg->findInt32("trackType", &tmpType));
    media_track_type trackType = (media_track_type)tmpType;
    readBuffer(trackType);
    {
        // only protect the variable change, as readBuffer may
        // take considerable time.
        Mutex::Autolock _l(mReadBufferLock);
        mPendingReadBufferTypes &= ~(1 << trackType);
    }
}
After another series of hops we finally arrive at readBuffer(), which does different work depending on the track type. Let's keep tracing:
void NuPlayer::GenericSource::readBuffer(
        media_track_type trackType, int64_t seekTimeUs, int64_t *actualTimeUs, bool formatChange) {
    ALOGV("GenericSource readBuffer BEGIN type=%d", trackType);
    // Do not read data if Widevine source is stopped
    if (mStopRead) {
        return;
    }

    Track *track;
    size_t maxBuffers = 1;
    switch (trackType) {
        case MEDIA_TRACK_TYPE_VIDEO:
            track = &mVideoTrack;
            if (mIsWidevine) {
                maxBuffers = 2;
            } else {
                maxBuffers = 8;  // too large of a number may influence seeks
            }
            break;
        case MEDIA_TRACK_TYPE_AUDIO:
            track = &mAudioTrack;
            if (mIsStreaming) {
                maxBuffers = 8;
            } else if (mVideoTrack.mSource == NULL) {
                maxBuffers = 64;
            } else {
                maxBuffers = 16;
            }
            break;
        case MEDIA_TRACK_TYPE_SUBTITLE:
            track = &mSubtitleTrack;
            break;
        case MEDIA_TRACK_TYPE_TIMEDTEXT:
            track = &mTimedTextTrack;
            break;
        default:
            TRESPASS();
    }

    if (track->mSource == NULL) {
        return;
    }

    if (actualTimeUs) {
        *actualTimeUs = seekTimeUs;
    }

    MediaSource::ReadOptions options;

    bool seeking = false;
    if (seekTimeUs >= 0) {
        options.setSeekTo(seekTimeUs, MediaSource::ReadOptions::SEEK_PREVIOUS_SYNC);
        seeking = true;
    }

    const bool couldReadMultiple = (!mIsWidevine && track->mSource->supportReadMultiple());
    if (mIsWidevine || couldReadMultiple) {
        options.setNonBlocking();
    }

    int64_t videoSeekTimeResultUs = -1;
    int64_t startUs = ALooper::GetNowUs();
    int64_t nowUs = startUs;

    if (mLowLatencyRTPStreaming)
        maxBuffers = 1;

    for (size_t numBuffers = 0; numBuffers < maxBuffers; ) {
        Vector<MediaBuffer *> mediaBuffers;
        status_t err = NO_ERROR;

        if (couldReadMultiple) {
            err = track->mSource->readMultiple(
                    &mediaBuffers, maxBuffers - numBuffers, &options);
        } else {
            MediaBuffer *mbuf = NULL;
            err = track->mSource->read(&mbuf, &options);
            if (err == OK && mbuf != NULL) {
                mediaBuffers.push_back(mbuf);
            }
        }

        options.clearNonPersistent();

        size_t id = 0;
        size_t count = mediaBuffers.size();
        for (; id < count; ++id) {
            int64_t timeUs;
            MediaBuffer *mbuf = mediaBuffers[id];
            if (!mbuf->meta_data()->findInt64(kKeyTime, &timeUs)) {
                mbuf->meta_data()->dumpToLog();
                track->mPackets->signalEOS(ERROR_MALFORMED);
                break;
            }
            if (mLowLatencyRTPStreaming && doDropPacket(trackType, timeUs)) {
                continue;
            }
            if (trackType == MEDIA_TRACK_TYPE_AUDIO) {
                mAudioTimeUs = timeUs;
                mBufferingMonitor->updateQueuedTime(true /* isAudio */, timeUs);
            } else if (trackType == MEDIA_TRACK_TYPE_VIDEO) {
                mVideoTimeUs = timeUs;
                if (seeking == true && numBuffers == 0)
                    videoSeekTimeResultUs = timeUs;  // save the first frame timestamp after seek in order to seek audio.
                mBufferingMonitor->updateQueuedTime(false /* isAudio */, timeUs);
            }

            queueDiscontinuityIfNeeded(seeking, formatChange, trackType, track);

            sp<ABuffer> buffer = mediaBufferToABuffer(
                    mbuf, trackType, seekTimeUs,
                    numBuffers == 0 ? actualTimeUs : NULL);
            track->mPackets->queueAccessUnit(buffer);
            formatChange = false;
            seeking = false;
            ++numBuffers;
        }
        if (id < count) {
            // Error, some mediaBuffer doesn't have kKeyTime.
            for (; id < count; ++id) {
                mediaBuffers[id]->release();
            }
            break;
        }

        if (err == WOULD_BLOCK) {
            break;
        } else if (err == INFO_FORMAT_CHANGED) {
#if 0
            track->mPackets->queueDiscontinuity(
                    ATSParser::DISCONTINUITY_FORMATCHANGE,
                    NULL,
                    false /* discard */);
#endif
        } else if (err != OK) {
            queueDiscontinuityIfNeeded(seeking, formatChange, trackType, track);
            track->mPackets->signalEOS(err);
            break;
        }

        // quit from loop when reading too many audio buffers
        nowUs = ALooper::GetNowUs();
        if (nowUs - startUs > 250000LL)
            break;
    }

    if (videoSeekTimeResultUs > 0)
        *actualTimeUs = videoSeekTimeResultUs;

    if (mLowLatencyRTPStreaming)
        notifyNeedCurrentPosition();

    ALOGV("GenericSource readBuffer END,type=%d", trackType);
}
The heart of it is this call: track->mSource->read(&mbuf, &options); (or readMultiple() when the source supports reading several buffers at once).
Depending on the track type, track points at the corresponding track object; for video, track = &mVideoTrack.
To recap: in NuPlayer::GenericSource::initFromDataSource(), an outer for loop calls sp<MediaSource> track = extractor->getTrack(i); for each track. FslExtractor::getTrack() returns each track wrapped in a newly created FslMediaSource, and GenericSource.cpp stores them in mAudioTrack / mVideoTrack as well as in the Vector<sp<MediaSource> > mSources. A condensed sketch of that loop follows below.
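The discovery loop, simplified from GenericSource::initFromDataSource() (error handling and extra metadata checks omitted, so treat this as a sketch rather than the exact code):

// For each track the extractor reports, wrap it and remember it by type.
for (size_t i = 0; i < extractor->countTracks(); ++i) {
    sp<MetaData> meta = extractor->getTrackMetaData(i);
    const char *mime;
    CHECK(meta->findCString(kKeyMIMEType, &mime));

    sp<MediaSource> track = extractor->getTrack(i);  // an FslMediaSource here
    if (!strncasecmp(mime, "audio/", 6) && mAudioTrack.mSource == NULL) {
        mAudioTrack.mSource = track;   // first audio track wins
    } else if (!strncasecmp(mime, "video/", 6) && mVideoTrack.mSource == NULL) {
        mVideoTrack.mSource = track;   // first video track wins
    }
    mSources.push(track);              // kept for track selection later
}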
So the mVideoTrack used here is ultimately an FslMediaSource from FslExtractor, and track->mSource->read resolves to FslMediaSource::read() (in FslExtractor.cpp):
status_t FslMediaSource::read(MediaBuffer **out, const ReadOptions *options)
{
    status_t ret = OK;
    *out = NULL;
    uint32_t seekFlag = 0;
    //int64_t targetSampleTimeUs = -1ll;
    size_t srcSize = 0;
    size_t srcOffset = 0;
    int32_t i = 0;
    int64_t seekTimeUs;
    ReadOptions::SeekMode mode;
    int64_t outTs = 0;
    const char *containerMime = NULL;
    const char *mime = NULL;

    if (options && options->getSeekTo(&seekTimeUs, &mode)) {
        switch (mode) {
            case ReadOptions::SEEK_PREVIOUS_SYNC:
                seekFlag = SEEK_FLAG_NO_LATER;
                break;
            case ReadOptions::SEEK_NEXT_SYNC:
                seekFlag = SEEK_FLAG_NO_EARLIER;
                break;
            case ReadOptions::SEEK_CLOSEST_SYNC:
            case ReadOptions::SEEK_CLOSEST:
                seekFlag = SEEK_FLAG_NEAREST;
                break;
            default:
                seekFlag = SEEK_FLAG_NEAREST;
                break;
        }
        clearPendingFrames();

        sp<MetaData> meta = mExtractor->getMetaData();
        if (meta != NULL) {
            meta->findCString(kKeyMIMEType, &containerMime);
            mFormat->findCString(kKeyMIMEType, &mime);
            if (mFrameSent < 10 && containerMime && !strcasecmp(containerMime, MEDIA_MIMETYPE_CONTAINER_FLV)
                    && mime && !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_SORENSON))
            {
                ALOGV("read first frame before seeking track, mFrameSent %d", mFrameSent);
                int64_t time = 0;
                int32_t j = 0;
                ret = mExtractor->HandleSeekOperation(mSourceIndex, &time, seekFlag);
                while (mPendingFrames.empty()) {
                    status_t err = mExtractor->GetNextSample(mSourceIndex, false);
                    if (err != OK) {
                        clearPendingFrames();
                        return err;
                    }
                    j++;
                    if (j > 1 && OK != mExtractor->CheckInterleaveEos(mSourceIndex)) {
                        ALOGE("get interleave eos");
                        return ERROR_END_OF_STREAM;
                    }
                }
                MediaBuffer *frame = *mPendingFrames.begin();
                frame->meta_data()->setInt64(kKeyTime, seekTimeUs);
            }
        }
        ret = mExtractor->HandleSeekOperation(mSourceIndex, &seekTimeUs, seekFlag);
    }

    while (mPendingFrames.empty()) {
        status_t err = mExtractor->GetNextSample(mSourceIndex, false);
        if (err != OK) {
            clearPendingFrames();
            return err;
        }
        i++;
        if (i > 1 && OK != mExtractor->CheckInterleaveEos(mSourceIndex)) {
            ALOGE("get interleave eos");
            return ERROR_END_OF_STREAM;
        }
    }

    MediaBuffer *frame = *mPendingFrames.begin();
    mPendingFrames.erase(mPendingFrames.begin());
    *out = frame;
    mBufferSize -= frame->size();
    mFrameSent++;

    //frame->meta_data()->findInt64(kKeyTime, &outTs);
    ALOGV("FslMediaSource::read mSourceIndex=%d size=%d,time %lld", mSourceIndex, frame->size(), outTs);

    if (!mIsAVC && !mIsHEVC) {
        return OK;
    }

    // convert to nal frame
    uint8_t *srcPtr =
        (uint8_t *)frame->data() + frame->range_offset();
    srcSize = frame->range_length();

    if (srcPtr[0] == 0x0 && srcPtr[1] == 0x0 && srcPtr[2] == 0x0 && srcPtr[3] == 0x1) {
        return OK;
    }

    if (0 == mNALLengthSize)
        return OK;

    // replace the 4 bytes when nal length size is 4
    if (4 == mNALLengthSize) {
        while (srcOffset + mNALLengthSize <= srcSize) {
            size_t NALsize = U32_AT(srcPtr + srcOffset);
            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 1;
            //memcpy(&srcPtr[srcOffset], "\x00\x00\x00\x01", 4);
            srcOffset += NALsize;
        }
        if (srcOffset < srcSize) {
            frame->release();
            frame = NULL;
            return ERROR_MALFORMED;
        }
        ALOGV("FslMediaSource::read 2 size=%d", srcSize);
        return OK;
    }

    // create a new MediaBuffer and copy all data from old buffer to new buffer.
    size_t dstSize = 0;
    MediaBuffer *buffer = NULL;
    uint8_t *dstPtr = NULL;

    // got the buffer size when pass is 0, then copy buffer when pass is 1
    for (int32_t pass = 0; pass < 2; pass++) {
        ALOGV("FslMediaSource::read pass=%d,begin", pass);
        size_t srcOffset = 0;
        size_t dstOffset = 0;
        while (srcOffset + mNALLengthSize <= srcSize) {
            size_t NALsize;
            switch (mNALLengthSize) {
                case 1: NALsize = srcPtr[srcOffset]; break;
                case 2: NALsize = U16_AT(srcPtr + srcOffset); break;
                case 3: NALsize = U24_AT(srcPtr + srcOffset); break;
                case 4: NALsize = U32_AT(srcPtr + srcOffset); break;
                default:
                    TRESPASS();
            }

            if (NALsize == 0) {
                frame->release();
                frame = NULL;
                return ERROR_MALFORMED;
            } else if (srcOffset + mNALLengthSize + NALsize > srcSize) {
                break;
            }

            if (pass == 1) {
                memcpy(&dstPtr[dstOffset], "\x00\x00\x00\x01", 4);
                memcpy(&dstPtr[dstOffset + 4],
                       &srcPtr[srcOffset + mNALLengthSize],
                       NALsize);
                ALOGV("FslMediaSource::read 3 copy %d", 4 + NALsize);
            }

            dstOffset += 4;  // 0x00 00 00 01
            dstOffset += NALsize;
            srcOffset += mNALLengthSize + NALsize;
        }

        if (srcOffset < srcSize) {
            // There were trailing bytes or not enough data to complete
            // a fragment.
            frame->release();
            frame = NULL;
            return ERROR_MALFORMED;
        }

        if (pass == 0) {
            dstSize = dstOffset;
            buffer = new MediaBuffer(dstSize);
            int64_t timeUs;
            CHECK(frame->meta_data()->findInt64(kKeyTime, &timeUs));
            int32_t isSync;
            CHECK(frame->meta_data()->findInt32(kKeyIsSyncFrame, &isSync));
            buffer->meta_data()->setInt64(kKeyTime, timeUs);
            buffer->meta_data()->setInt32(kKeyIsSyncFrame, isSync);
            dstPtr = (uint8_t *)buffer->data();
            ALOGV("FslMediaSource::read 3 size=%d,ts=%lld", dstSize, timeUs);
        }
    }

    frame->release();
    frame = NULL;
    *out = buffer;

    return OK;
}
This read function is long, but the gist is: FslMediaSource maintains a List<MediaBuffer *> mPendingFrames. If the list is empty, it calls mExtractor->GetNextSample(mSourceIndex, false) to pull one frame's worth of data from the source, then hands the frame back through *out. The tail of the function, labelled "convert to nal frame", rewrites length-prefixed NAL units (the layout used inside MP4-style containers) into Annex-B form, i.e. each NAL unit preceded by a 00 00 00 01 start code, which is what the decoder path expects; a sketch of that conversion follows below.
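For illustration, the in-place 4-byte case boils down to this (a self-contained sketch, not the vendor code): each NAL unit's big-endian length field is overwritten with the 00 00 00 01 start code, so {00 00 00 19, <25 payload bytes>} becomes {00 00 00 01, <25 payload bytes>}.

#include <stdint.h>
#include <stddef.h>

// Rewrite length-prefixed NAL units (4-byte prefixes) into Annex-B start
// codes, in place. Returns false on malformed input (trailing bytes).
static bool lengthPrefixedToAnnexB4(uint8_t *buf, size_t size) {
    size_t off = 0;
    while (off + 4 <= size) {
        uint32_t nalSize = ((uint32_t)buf[off] << 24) | (buf[off + 1] << 16)
                         | (buf[off + 2] << 8)        |  buf[off + 3];
        buf[off++] = 0; buf[off++] = 0; buf[off++] = 0; buf[off++] = 1;
        off += nalSize;                  // skip over the NAL payload
    }
    return off == size;                  // leftover bytes => malformed frame
}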
Internally, mExtractor->GetNextSample() fetches the frame data by calling IParser->getFileNextSample(), the interface function exposed by the parser library.
NuPlayer::GenericSource::readBuffer() caps how many buffers each track type reads per pass: up to 8 for video (2 for Widevine), and for audio up to 64 when there is no video track, 16 when there is, or 8 when streaming.
Returning from FslMediaSource::read() to NuPlayer::GenericSource::readBuffer(): after the required buffers have been read, if a format change or a seek has occurred, queueDiscontinuityIfNeeded() queues a discontinuity marker into the track's packet queue.
GenericSource::start() also posted a kWhatStart msg; via NuPlayer::GenericSource::BufferingMonitor this drives the buffering logic of the whole GenericSource.
2. Back to NuPlayer::onStart(). The innocent-looking line mSource->start() executed everything above. What remains is Renderer setup: new-ing a Renderer, applying the playback settings (rate), creating and starting its dedicated ALooper, and passing the video frame rate along.
3. The last important call is postScanSources(). Once again a single line hides a lot of work, including creating the decoders and starting them:
void NuPlayer::postScanSources() {
    if (mScanSourcesPending) {
        return;
    }

    sp<AMessage> msg = new AMessage(kWhatScanSources, this);
    msg->setInt32("generation", mScanSourcesGeneration);
    msg->post();

    mScanSourcesPending = true;
}
void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatScanSources:
        {
            int32_t generation;
            CHECK(msg->findInt32("generation", &generation));
            if (generation != mScanSourcesGeneration) {
                // Drop obsolete msg.
                break;
            }

            mScanSourcesPending = false;

            ALOGV("scanning sources haveAudio=%d, haveVideo=%d",
                    mAudioDecoder != NULL, mVideoDecoder != NULL);

            bool mHadAnySourcesBefore =
                (mAudioDecoder != NULL) || (mVideoDecoder != NULL);

            // initialize video before audio because successful initialization of
            // video may change deep buffer mode of audio.
            if (mSurface != NULL) {
                instantiateDecoder(false, &mVideoDecoder);
            }

            // Don't try to re-open audio sink if there's an existing decoder.
            if (mAudioSink != NULL && mAudioDecoder == NULL) {
                instantiateDecoder(true, &mAudioDecoder);
            }

            if (!mHadAnySourcesBefore
                    && (mAudioDecoder != NULL || mVideoDecoder != NULL)) {
                // This is the first time we've found anything playable.
                if (mSourceFlags & Source::FLAG_DYNAMIC_DURATION) {
                    schedulePollDuration();
                }
            }

            status_t err;
            if ((err = mSource->feedMoreTSData()) != OK) {
                if (mAudioDecoder == NULL && mVideoDecoder == NULL) {
                    // We're not currently decoding anything (no audio or
                    // video tracks found) and we just ran out of input data.
                    if (err == ERROR_END_OF_STREAM) {
                        notifyListener(MEDIA_PLAYBACK_COMPLETE, 0, 0);
                    } else {
                        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
                    }
                }
                break;
            }

            if ((mAudioDecoder == NULL && mAudioSink != NULL)
                    || (mVideoDecoder == NULL && mSurface != NULL)) {
                msg->post(100000ll);
                mScanSourcesPending = true;
            }
            break;
        }
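Note the generation check at the top of the case: a standard NuPlayer idiom for invalidating stale messages. Each message is stamped with the current value of a counter; whenever the receiver is reset (seek, reset, decoder or renderer re-creation), the counter is bumped, so late-arriving messages carry an old stamp and get dropped. The shape of the idiom, in isolation:

// Stamp at post time:
sp<AMessage> msg = new AMessage(kWhatScanSources, this);
msg->setInt32("generation", mScanSourcesGeneration);
msg->post(100000ll);                     // may be delivered after a reset

// Invalidate on reset:  ++mScanSourcesGeneration;

// Check at delivery time:
int32_t generation;
CHECK(msg->findInt32("generation", &generation));
if (generation != mScanSourcesGeneration) {
    break;                               // obsolete message, ignore it
}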
CCDecoder is the closed-caption (subtitle) decoder; when the video Decoder is new-ed, the CCDecoder is passed in as one of its constructor arguments.
Whether a video decoder is created depends on whether a Surface has been set, and an audio decoder is created only if mAudioSink exists; both go through instantiateDecoder(), so that's the next function to examine:
status_t NuPlayer::instantiateDecoder(bool audio, sp<DecoderBase> *decoder) {
    // The audio decoder could be cleared by tear down. If still in shut down
    // process, no need to create a new audio decoder.
    if (*decoder != NULL || (audio && mFlushingAudio == SHUT_DOWN)) {
        return OK;
    }

    sp<AMessage> format = mSource->getFormat(audio);
    if (format == NULL) {
        return -EWOULDBLOCK;
    }

    format->setInt32("priority", 0 /* realtime */);

    if (!audio) {
        AString mime;
        CHECK(format->findString("mime", &mime));

        bool bVideoIsAVC = !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime.c_str());
        if (bVideoIsAVC && mSource->isAVCReorderDisabled())
            format->setString("disreorder", "1");
        else
            format->setString("disreorder", "0");

        sp<AMessage> ccNotify = new AMessage(kWhatClosedCaptionNotify, this);
        if (mCCDecoder == NULL) {
            mCCDecoder = new CCDecoder(ccNotify);
        }

        if (mSourceFlags & Source::FLAG_SECURE) {
            format->setInt32("secure", true);
        }

        if (mSourceFlags & Source::FLAG_PROTECTED) {
            format->setInt32("protected", true);
        }

        float rate = getFrameRate();
        if (rate > 0) {
            format->setFloat("operating-rate", rate * mPlaybackSettings.mSpeed);
        }
    }

    if (audio) {
        sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
        ++mAudioDecoderGeneration;
        notify->setInt32("generation", mAudioDecoderGeneration);

        determineAudioModeChange();
        if (mOffloadAudio) {
            const bool hasVideo = (mSource->getFormat(false /*audio */) != NULL);
            format->setInt32("has-video", hasVideo);
            *decoder = new DecoderPassThrough(notify, mSource, mRenderer);
        } else {
            *decoder = new Decoder(notify, mSource, mPID, mRenderer);
        }
    } else {
        sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
        ++mVideoDecoderGeneration;
        notify->setInt32("generation", mVideoDecoderGeneration);

        *decoder = new Decoder(
                notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);

        // enable FRC if high-quality AV sync is requested, even if not
        // directly queuing to display, as this will even improve textureview
        // playback.
        {
            char value[PROPERTY_VALUE_MAX];
            if (property_get("persist.sys.media.avsync", value, NULL) &&
                    (!strcmp("1", value) || !strcasecmp("true", value))) {
                format->setInt32("auto-frc", 1);
            }
        }
    }
    (*decoder)->init();
    (*decoder)->configure(format);

    // allocate buffers to decrypt widevine source buffers
    if (!audio && (mSourceFlags & Source::FLAG_SECURE)) {
        Vector<sp<ABuffer> > inputBufs;
        CHECK_EQ((*decoder)->getInputBuffers(&inputBufs), (status_t)OK);

        Vector<MediaBuffer *> mediaBufs;
        for (size_t i = 0; i < inputBufs.size(); i++) {
            const sp<ABuffer> &buffer = inputBufs[i];
            MediaBuffer *mbuf = new MediaBuffer(buffer->data(), buffer->size());
            mediaBufs.push(mbuf);
        }

        status_t err = mSource->setBuffers(audio, mediaBufs);
        if (err != OK) {
            for (size_t i = 0; i < mediaBufs.size(); ++i) {
                mediaBufs[i]->release();
            }
            mediaBufs.clear();
            ALOGE("Secure source didn't support secure mediaBufs.");
            return err;
        }
    }
    return OK;
}
The function checks a number of conditions, but the core is:
*decoder = new Decoder(notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
        // create the video decoder; the CC (subtitle) decoder is passed in as an argument
(*decoder)->init();             // initialize the decoder
(*decoder)->configure(format);  // configure the decoder
(*decoder)->init() is implemented in NuPlayerDecoderBase.cpp; all it does is register the decoder as a handler on its looper.
(*decoder)->configure(format) does considerably more, so let's take it step by step.
Call flow:
(*decoder)->configure(format); ---> NuPlayer::DecoderBase::configure() ---> NuPlayer::DecoderBase::onMessageReceived(), kWhatConfigure case ---> NuPlayer::Decoder::onConfigure()
That last function looks like this:
void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
    CHECK(mCodec == NULL);

    mFormatChangePending = false;
    mTimeChangePending = false;

    ++mBufferGeneration;

    AString mime;
    CHECK(format->findString("mime", &mime));

    mIsAudio = !strncasecmp("audio/", mime.c_str(), 6);
    mIsVideoAVC = !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime.c_str());

    mComponentName = mime;
    mComponentName.append(" decoder");
    ALOGV("[%s] onConfigure (surface=%p)", mComponentName.c_str(), mSurface.get());

    mCodec = MediaCodec::CreateByType(
            mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);
    int32_t secure = 0;
    if (format->findInt32("secure", &secure) && secure != 0) {
        if (mCodec != NULL) {
            mCodec->getName(&mComponentName);
            mComponentName.append(".secure");
            mCodec->release();
            ALOGI("[%s] creating", mComponentName.c_str());
            mCodec = MediaCodec::CreateByComponentName(
                    mCodecLooper, mComponentName.c_str(), NULL /* err */, mPid);
        }
    }
    if (mCodec == NULL) {
        ALOGE("Failed to create %s%s decoder",
                (secure ? "secure " : ""), mime.c_str());
        handleError(UNKNOWN_ERROR);
        return;
    }
    mIsSecure = secure;

    mCodec->getName(&mComponentName);

    if (mComponentName.startsWith("OMX.Freescale.std.video_decoder")
            && mComponentName.endsWith("hw-based")) {
        format->setInt32("color-format", 21);  // OMX_COLOR_FormatYUV420SemiPlanar
    }

    status_t err;
    if (mSurface != NULL) {
        // disconnect from surface as MediaCodec will reconnect
        err = native_window_api_disconnect(
                mSurface.get(), NATIVE_WINDOW_API_MEDIA);
        // We treat this as a warning, as this is a preparatory step.
        // Codec will try to connect to the surface, which is where
        // any error signaling will occur.
        ALOGW_IF(err != OK, "failed to disconnect from surface: %d", err);
    }
    err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);
    if (err != OK) {
        ALOGE("Failed to configure %s decoder (err=%d)", mComponentName.c_str(), err);
        mCodec->release();
        mCodec.clear();
        handleError(err);
        return;
    }
    rememberCodecSpecificData(format);

    // the following should work in configured state
    CHECK_EQ((status_t)OK, mCodec->getOutputFormat(&mOutputFormat));
    CHECK_EQ((status_t)OK, mCodec->getInputFormat(&mInputFormat));

    mStats->setString("mime", mime.c_str());
    mStats->setString("component-name", mComponentName.c_str());

    if (!mIsAudio) {
        int32_t width, height;
        if (mOutputFormat->findInt32("width", &width)
                && mOutputFormat->findInt32("height", &height)) {
            mStats->setInt32("width", width);
            mStats->setInt32("height", height);
        }
    }

    sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
    mCodec->setCallback(reply);

    err = mCodec->start();
    if (err != OK) {
        ALOGE("Failed to start %s decoder (err=%d)", mComponentName.c_str(), err);
        mCodec->release();
        mCodec.clear();
        handleError(err);
        return;
    }

    releaseAndResetMediaBuffers();

    mPaused = false;
    mResumePending = false;
}
3.1
mCodec = MediaCodec::CreateByType(
        mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const char *mime, bool encoder, status_t *err, pid_t pid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid);

    const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL;  // NULL deallocates codec.
}
This article only goes down as far as the MediaCodec.cpp layer; ACodec and OMX are left for later articles. The goal here is to understand what MediaCodec does: dig too deep and it becomes hard to climb back out with a clear view of the big picture.
First, a MediaCodec is new-ed. Think of it as a wrapper around the decoder; the layer below it is ACodec, and each ACodec corresponds to one codec component. Inside codec->init(), MediaCodec's own mCodec member is assigned:
mCodec = new ACodec; which is the link to ACodec, and an mCodecLooper is also set up for ACodec to run on.
One thing to keep straight: NuPlayer::Decoder has a member named mCodec of type sp<MediaCodec>, and MediaCodec also has a member named mCodec, of type sp<CodecBase> (ACodec's base class). Don't mix them up: using mCodec in NuPlayerDecoder.cpp jumps into MediaCodec.cpp, while using mCodec in MediaCodec.cpp jumps into ACodec.cpp.
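Side by side, the two members look like this (a sketch; the declarations are paraphrased from NuPlayerDecoder.h and MediaCodec.h):

struct NuPlayer::Decoder /* : DecoderBase */ {
    sp<MediaCodec> mCodec;  // calls on this land in MediaCodec.cpp
};

struct MediaCodec /* : AHandler */ {
    sp<CodecBase> mCodec;   // actually an ACodec; calls land in ACodec.cpp
};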
3.2
err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);
Here mCodec is the sp<MediaCodec>, so this is MediaCodec::configure(). It builds a Vector<MediaResource>, then posts the kWhatConfigure AMessage. In the kWhatConfigure case of MediaCodec::onMessageReceived(), handleSetSurface() sets the Surface and setState(CONFIGURING) moves the state machine to CONFIGURING; state matters a great deal here, since the whole OMX side is state-driven. Finally mCodec->initiateConfigureComponent(format) is called, and this mCodec is the ACodec, so execution jumps (asynchronously) into ACodec::initiateConfigureComponent() and ends up in ACodec::LoadedState::onConfigureComponent(), which calls mCodec->configureCodec() to set up the component. That function is important: it configures both audio and video.
Once configuration finishes, the kWhatComponentConfigured notify goes back up to the outer MediaCodec, and in its case CodecBase::kWhatComponentConfigured: handler the state is set to CONFIGURED.
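It's worth noting how MediaCodec makes configure() and start() look synchronous even though everything runs on its looper: the public method posts an AMessage and blocks until the handler replies. Roughly (simplified from MediaCodec.cpp; the parameter list is abbreviated):

status_t MediaCodec::configure(const sp<AMessage> &format /* , surface, crypto, flags */) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);
    msg->setMessage("format", format);
    // ... surface / crypto / flags are attached here as well ...
    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);  // blocks the caller until the
                                                  // kWhatConfigure handler posts
                                                  // its reply on the looper
}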
3.3
mCodec->setCallback(reply);
This installs the callback message and logs: MediaCodec will operate in async mode.
3.4
mCodec->start();
This is MediaCodec::start(). It too builds a Vector<MediaResource> and then posts a kWhatStart AMessage. In the handler the state is still CONFIGURED at this point, so onInputBufferAvailable is not triggered yet; instead setState(STARTING) is called, followed by mCodec->initiateStart(). That mCodec is again the ACodec, so execution continues into ACodec::initiateStart() and finally ACodec::LoadedState::onStart(), where mCodec->mOMX->sendCommand() requests OMX_StateIdle and then mCodec->changeState(mCodec->mLoadedToIdleState).
My guess is that execution then reaches ACodec::LoadedToIdleState::stateEntered(), where allocateBuffers() allocates the port buffers, and from there the OMX state machine drives everything. (That would match the standard OpenMAX IL sequence: all port buffers must be allocated during the Loaded-to-Idle transition, and the subsequent Idle-to-Executing transition starts the data flow.)
3.5 A quick recap
Seen from the layer above, MediaCodec is a black box: we only need to know how to drive it, not how it decodes internally. The box has an input port and an output port, so how buffers circulate through those two ports is what matters most; in other words, the interesting part here is the interaction between NuPlayerDecoder and MediaCodec.
Here is where MediaCodec sits in the overall NuPlayer architecture (figure not reproduced):
As analyzed above, OMX allocates the buffers. Once an empty buffer is available at the input port, MediaCodec fires onInputBufferAvailable to tell NuPlayerDecoder that an input buffer is free, and NuPlayerDecoder fills it with data via handleAnInputBuffer.
Once the data is in, MediaCodec can decode it through OMX; the decoded data shows up at the output port, MediaCodec fires onOutputBufferAvailable to notify NuPlayerDecoder that an output buffer is ready, and NuPlayerDecoder processes it in handleAnOutputBuffer, which ends with mRenderer->queueBuffer(mIsAudio, buffer, reply) handing the decoded frame to the Renderer.
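In code terms, the loop looks roughly like this on the NuPlayerDecoder side (a condensed sketch of the kWhatCodecNotify handling; the CB_* callback IDs are real MediaCodec constants, the rest is abbreviated):

// NuPlayer::Decoder::onMessageReceived(), kWhatCodecNotify case (sketch):
int32_t cbID;
CHECK(msg->findInt32("callbackID", &cbID));

switch (cbID) {
    case MediaCodec::CB_INPUT_AVAILABLE:      // empty buffer at the input port
    {
        int32_t index;
        CHECK(msg->findInt32("index", &index));
        handleAnInputBuffer(index);           // fill it with an access unit
        break;                                // pulled from GenericSource
    }
    case MediaCodec::CB_OUTPUT_AVAILABLE:     // decoded buffer at the output port
    {
        // index / offset / size / timeUs / flags travel in the message;
        // handleAnOutputBuffer(...) ends with
        //     mRenderer->queueBuffer(mIsAudio, buffer, reply);
        break;
    }
    case MediaCodec::CB_ERROR:                // codec reported an error
        // handleError(err);
        break;
}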
MediaCodec's workflow is then as shown below (figure not reproduced):
With that, the whole pipeline is up and running. The logical next step would be the Renderer, but first I want to dig a little deeper into MediaCodec, at minimum to properly explain the functions mentioned in 3.5.