11. Android MultiMedia Framework In-Depth Analysis - The start Flow

As before, we begin in mediaplayer.cpp and look at the implementation of the start function:

status_t MediaPlayer::start()
{
    // abridged: locking and state checks omitted
    status_t ret = NO_ERROR;
    mPlayer->setLooping(mLoop);
    mPlayer->setVolume(mLeftVolume, mRightVolume);
    mPlayer->setAuxEffectSendLevel(mSendLevel);
    mCurrentState = MEDIA_PLAYER_STARTED;
    ret = mPlayer->start();
    return ret;
}

That is the core of it. Note that mPlayer here is of type IMediaPlayer — it is the Bp end of the anonymous IMediaPlayer Binder service — so the calls are ultimately transported through this anonymous Binder service to MediaPlayerService. We will skip the other calls for now and start the analysis from the final start call.
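For reference, the Bp end does little more than marshal the call into a Binder transaction. Paraphrased from IMediaPlayer.cpp (simplified sketch, details vary by version), it looks roughly like this:

status_t BpMediaPlayer::start()
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayer::getInterfaceDescriptor());
    // the transaction crosses the process boundary into MediaPlayerService
    remote()->transact(START, data, &reply);
    return reply.readInt32();
}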

The call travels from the Bp end of IMediaPlayer to the Bn end and finally arrives at MediaPlayerService. MediaPlayerService creates a Client for each connected client, so the function ultimately invoked is MediaPlayerService::Client::start():

status_t MediaPlayerService::Client::start()
{
    ALOGV("[%d] start", mConnId);
    sp<MediaPlayerBase> p = getPlayer();
    if (p == 0) return UNKNOWN_ERROR;
    p->setLooping(mLoop);
    return p->start();
}

 

The player obtained here is the NuPlayerDriver, so the call lands in NuPlayerDriver::start():

status_t NuPlayerDriver::start() {
    ALOGD("start(%p), state is %d, eos is %d", this, mState, mAtEOS);
    Mutex::Autolock autoLock(mLock);

    switch (mState) {
        case STATE_PREPARED:
        {
            mAtEOS = false;
            mPlayer->start();

            if (mStartupSeekTimeUs >= 0) {
                mPlayer->seekToAsync(mStartupSeekTimeUs);
                mStartupSeekTimeUs = -1;
            }
            break;
        }
        // ... other states omitted

After the prepare step covered earlier, the state is already STATE_PREPARED, and mPlayer here is the NuPlayer, so the call continues into NuPlayer::start():

void NuPlayer::start() {
    (new AMessage(kWhatStart, this))->post();
}

The call is then forwarded through the message mechanism...
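As a side note, this is the standard ALooper/AHandler/AMessage pattern used throughout NuPlayer: a message carries a what code plus a target handler, and post() queues it onto the handler's looper thread, which later dispatches it to onMessageReceived. A minimal, illustrative sketch (MyHandler is a hypothetical class, not AOSP code):

struct MyHandler : public AHandler {
    enum { kWhatStart = 'strt' };

protected:
    // invoked on the looper thread that registered this handler
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatStart:
                // handle start here
                break;
        }
    }
};

sp<ALooper> looper = new ALooper;
looper->start();
sp<MyHandler> handler = new MyHandler;
looper->registerHandler(handler);
(new AMessage(MyHandler::kWhatStart, handler))->post();   // same pattern as NuPlayer::start()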

void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatStart:
        {
            ALOGV("kWhatStart");
            if (mStarted) {
                // do not resume yet if the source is still buffering
                if (!mPausedForBuffering) {
                    onResume();
                }
            } else {
                onStart();
            }
            mPausedByClient = false;
            break;
        }
        // ... other cases omitted

At last we reach the core function. Let's examine onStart in detail:

void NuPlayer::onStart(int64_t startPositionUs) {
    if (!mSourceStarted) {
        mSourceStarted = true;
        mSource->start();
    }
    if (startPositionUs > 0) {
        performSeek(startPositionUs);
        if (mSource->getFormat(false /* audio */) == NULL) {
            return;
        }
    }

    mOffloadAudio = false;
    mAudioEOS = false;
    mVideoEOS = false;
    mStarted = true;

    uint32_t flags = 0;

    if (mSource->isRealTime()) {
        flags |= Renderer::FLAG_REAL_TIME;
    }

    sp<MetaData> audioMeta = mSource->getFormatMeta(true /* audio */);
    audio_stream_type_t streamType = AUDIO_STREAM_MUSIC;
    if (mAudioSink != NULL) {
        streamType = mAudioSink->getAudioStreamType();
    }

    sp<AMessage> videoFormat = mSource->getFormat(false /* audio */);

    mOffloadAudio =
        canOffloadStream(audioMeta, (videoFormat != NULL), mSource->isStreaming(), streamType);
    if (mOffloadAudio) {
        flags |= Renderer::FLAG_OFFLOAD_AUDIO;
    }

    sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
    ++mRendererGeneration;
    notify->setInt32("generation", mRendererGeneration);
    mRenderer = new Renderer(mAudioSink, notify, flags);
    mRendererLooper = new ALooper;
    mRendererLooper->setName("NuPlayerRenderer");
    mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
    mRendererLooper->registerHandler(mRenderer);

    status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
    if (err != OK) {
        mSource->stop();
        mSourceStarted = false;
        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
        return;
    }

    float rate = getFrameRate();
    if (rate > 0) {
        mRenderer->setVideoFrameRate(rate);
    }

    if (mVideoDecoder != NULL) {
        mVideoDecoder->setRenderer(mRenderer);
    }
    if (mAudioDecoder != NULL) {
        mAudioDecoder->setRenderer(mRenderer);
    }

    if(mVideoDecoder != NULL){
        scheduleSetVideoDecoderTime();
    }
    postScanSources();
}

The overall flow of this function breaks down into the following numbered steps:

1. First, look at mSource->start(). Back in NuPlayer::setDataSourceAsync, a GenericSource was created:

sp<GenericSource> source = new GenericSource(notify, mUIDValid, mUID);

Then, in the kWhatSetDataSource case of NuPlayer::onMessageReceived, NuPlayer's mSource was set to this GenericSource:

void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatSetDataSource:
        {
            ALOGV("kWhatSetDataSource");

            CHECK(mSource == NULL);

            status_t err = OK;
            sp<RefBase> obj;
            CHECK(msg->findObject("source", &obj));
            if (obj != NULL) {
                mSource = static_cast<Source *>(obj.get());
                // ... remainder of the case omitted

So the mSource->start() call here ends up executing in GenericSource.cpp:

void NuPlayer::GenericSource::start() {
    ALOGI("start");

    mStopRead = false;
    if (mAudioTrack.mSource != NULL) {
        postReadBuffer(MEDIA_TRACK_TYPE_AUDIO);
    }

    if (mVideoTrack.mSource != NULL) {
        postReadBuffer(MEDIA_TRACK_TYPE_VIDEO);
    }

    setDrmPlaybackStatusIfNeeded(Playback::START, getLastReadPosition() / 1000);
    mStarted = true;

    (new AMessage(kWhatStart, this))->post();
}

Here postReadBuffer is called for the video track and the audio track respectively to kick off reading their data, and a kWhatStart msg is posted.

 

Let's look at postReadBuffer first. It doesn't itself branch on the media type — it simply posts a kWhatReadBuffer message, using the mPendingReadBufferTypes bitmask to ensure only one read request per track type is pending at a time.

void NuPlayer::GenericSource::postReadBuffer(media_track_type trackType) {
    Mutex::Autolock _l(mReadBufferLock);

    if ((mPendingReadBufferTypes & (1 << trackType)) == 0) {
        mPendingReadBufferTypes |= (1 << trackType);
        sp<AMessage> msg = new AMessage(kWhatReadBuffer, this);
        msg->setInt32("trackType", trackType);
        msg->post();
    }
}

void NuPlayer::GenericSource::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatReadBuffer:
        {
            onReadBuffer(msg);
            break;
        }
        // ... other cases omitted

void NuPlayer::GenericSource::onReadBuffer(sp<AMessage> msg) {
    int32_t tmpType;
    CHECK(msg->findInt32("trackType", &tmpType));
    media_track_type trackType = (media_track_type)tmpType;
    readBuffer(trackType);
    {
        // only protect the variable change, as readBuffer may
        // take considerable time.
        Mutex::Autolock _l(mReadBufferLock);
        mPendingReadBufferTypes &= ~(1 << trackType);
    }
}

After another chain of message hops we finally reach readBuffer, which is where the per-track-type handling really happens. Let's keep digging:

void NuPlayer::GenericSource::readBuffer(
        media_track_type trackType, int64_t seekTimeUs, int64_t *actualTimeUs, bool formatChange) {
    ALOGV("GenericSource readBuffer BEGIN type=%d",trackType);
    // Do not read data if Widevine source is stopped
    if (mStopRead) {
        return;
    }
    Track *track;
    size_t maxBuffers = 1;
    switch (trackType) {
        case MEDIA_TRACK_TYPE_VIDEO:
            track = &mVideoTrack;
            if (mIsWidevine) {
                maxBuffers = 2;
            } else {
                maxBuffers = 8;  // too large of a number may influence seeks
            }
            break;
        case MEDIA_TRACK_TYPE_AUDIO:
            track = &mAudioTrack;
            if (mIsStreaming) {
                maxBuffers = 8;
            } else if (mVideoTrack.mSource == NULL) {
                maxBuffers = 64;
            } else {
                maxBuffers = 16;
            }
            break;
        case MEDIA_TRACK_TYPE_SUBTITLE:
            track = &mSubtitleTrack;
            break;
        case MEDIA_TRACK_TYPE_TIMEDTEXT:
            track = &mTimedTextTrack;
            break;
        default:
            TRESPASS();
    }
    if (track->mSource == NULL) {
        return;
    }

    if (actualTimeUs) {
        *actualTimeUs = seekTimeUs;
    }

    MediaSource::ReadOptions options;

    bool seeking = false;
    if (seekTimeUs >= 0) {
        options.setSeekTo(seekTimeUs, MediaSource::ReadOptions::SEEK_PREVIOUS_SYNC);
        seeking = true;
    }

    const bool couldReadMultiple = (!mIsWidevine && track->mSource->supportReadMultiple());

    if (mIsWidevine || couldReadMultiple) {
        options.setNonBlocking();
    }

    int64_t videoSeekTimeResultUs = -1;
    int64_t startUs = ALooper::GetNowUs();
    int64_t nowUs = startUs;

    if(mLowLatencyRTPStreaming)
        maxBuffers = 1;

    for (size_t numBuffers = 0; numBuffers < maxBuffers; ) {
        Vector<MediaBuffer *> mediaBuffers;
        status_t err = NO_ERROR;

        if (couldReadMultiple) {
            err = track->mSource->readMultiple(
                    &mediaBuffers, maxBuffers - numBuffers, &options);
        } else {
            MediaBuffer *mbuf = NULL;
            err = track->mSource->read(&mbuf, &options);
            if (err == OK && mbuf != NULL) {
                mediaBuffers.push_back(mbuf);
            }
        }

        options.clearNonPersistent();

        size_t id = 0;
        size_t count = mediaBuffers.size();
        for (; id < count; ++id) {
            int64_t timeUs;
            MediaBuffer *mbuf = mediaBuffers[id];
            if (!mbuf->meta_data()->findInt64(kKeyTime, &timeUs)) {
                mbuf->meta_data()->dumpToLog();
                track->mPackets->signalEOS(ERROR_MALFORMED);
                break;
            }

            if(mLowLatencyRTPStreaming && doDropPacket(trackType,timeUs)){
                continue;
            }

            if (trackType == MEDIA_TRACK_TYPE_AUDIO) {
                mAudioTimeUs = timeUs;
                mBufferingMonitor->updateQueuedTime(true /* isAudio */, timeUs);
            } else if (trackType == MEDIA_TRACK_TYPE_VIDEO) {
                mVideoTimeUs = timeUs;
                if(seeking == true && numBuffers == 0)
                    videoSeekTimeResultUs = timeUs; //save the first frame timestamp after seek in order to seek audio.
                mBufferingMonitor->updateQueuedTime(false /* isAudio */, timeUs);
            }

            queueDiscontinuityIfNeeded(seeking, formatChange, trackType, track);

            sp<ABuffer> buffer = mediaBufferToABuffer(
                    mbuf, trackType, seekTimeUs,
                    numBuffers == 0 ? actualTimeUs : NULL);
            track->mPackets->queueAccessUnit(buffer);
            formatChange = false;
            seeking = false;
            ++numBuffers;
        }
        if (id < count) {
            // Error, some mediaBuffer doesn't have kKeyTime.
            for (; id < count; ++id) {
                mediaBuffers[id]->release();
            }
            break;
        }

        if (err == WOULD_BLOCK) {
            break;
        } else if (err == INFO_FORMAT_CHANGED) {
#if 0
            track->mPackets->queueDiscontinuity(
                    ATSParser::DISCONTINUITY_FORMATCHANGE,
                    NULL,
                    false /* discard */);
#endif
        } else if (err != OK) {
            queueDiscontinuityIfNeeded(seeking, formatChange, trackType, track);
            track->mPackets->signalEOS(err);
            break;
        }
        //quit from loop when reading too many audio buffer
        nowUs = ALooper::GetNowUs();
        if(nowUs - startUs > 250000LL)
            break;
    }

    if(videoSeekTimeResultUs > 0)
        *actualTimeUs = videoSeekTimeResultUs;

    if(mLowLatencyRTPStreaming)
        notifyNeedCurrentPosition();
    ALOGV("GenericSource readBuffer END,type=%d",trackType);
}

The key call is this one: track->mSource->read(&mbuf, &options);

Depending on the track type, track points at the corresponding real track entity; taking the video track as the example, track = &mVideoTrack;

 

To recap: in NuPlayer::GenericSource::initFromDataSource(), an outer for loop calls sp<MediaSource> track = extractor->getTrack(i); for each track. Inside FslExtractor::getTrack, each track is wrapped in a newly created FslMediaSource, and back in GenericSource.cpp the tracks are stored into mAudioTrack / mVideoTrack and into the Vector<sp<MediaSource> > mSources.

So the mVideoTrack used here is ultimately the FslMediaSource from FslExtractor, and track->mSource->read resolves to FslMediaSource::read() (in FslExtractor.cpp):

status_t FslMediaSource::read(MediaBuffer **out, const ReadOptions *options)
{
    status_t ret = OK;
    *out = NULL;
    uint32_t seekFlag = 0;
    //int64_t targetSampleTimeUs = -1ll;
    size_t srcSize = 0;
    size_t srcOffset = 0;
    int32_t i = 0;
    int64_t seekTimeUs;
    ReadOptions::SeekMode mode;
    int64_t outTs = 0;
    const char *containerMime = NULL;
    const char *mime = NULL;

    if (options && options->getSeekTo(&seekTimeUs, &mode)) {
        switch (mode) {
            case ReadOptions::SEEK_PREVIOUS_SYNC:
                seekFlag = SEEK_FLAG_NO_LATER;
                break;
            case ReadOptions::SEEK_NEXT_SYNC:
                seekFlag = SEEK_FLAG_NO_EARLIER;
                break;
            case ReadOptions::SEEK_CLOSEST_SYNC:
            case ReadOptions::SEEK_CLOSEST:
                seekFlag = SEEK_FLAG_NEAREST;
                break;
            default:
                seekFlag = SEEK_FLAG_NEAREST;
                break;
        }

        clearPendingFrames();

        sp<MetaData> meta = mExtractor->getMetaData();
        if(meta != NULL){
            meta->findCString(kKeyMIMEType, &containerMime);
            mFormat->findCString(kKeyMIMEType, &mime);

            if(mFrameSent < 10 && containerMime && !strcasecmp(containerMime, MEDIA_MIMETYPE_CONTAINER_FLV)
                        && mime && !strcasecmp(mime,MEDIA_MIMETYPE_VIDEO_SORENSON))
            {
                ALOGV("read first frame before seeking track, mFrameSent %d", mFrameSent);
                int64_t time = 0;
                int32_t j=0;
                ret = mExtractor->HandleSeekOperation(mSourceIndex,&time,seekFlag);
                while (mPendingFrames.empty()) {
                    status_t err = mExtractor->GetNextSample(mSourceIndex,false);
                    if (err != OK) {
                        clearPendingFrames();
                        return err;
                    }
                    j++;
                    if(j > 1 && OK != mExtractor->CheckInterleaveEos(mSourceIndex)){
                        ALOGE("get interleave eos");
                        return ERROR_END_OF_STREAM;
                    }
                }
                MediaBuffer *frame = *mPendingFrames.begin();
                frame->meta_data()->setInt64(kKeyTime, seekTimeUs);
            }

        }

        ret = mExtractor->HandleSeekOperation(mSourceIndex,&seekTimeUs,seekFlag);
    }

    while (mPendingFrames.empty()) {
        status_t err = mExtractor->GetNextSample(mSourceIndex,false);

        if (err != OK) {
            clearPendingFrames();

            return err;
        }
        i++;
        if(i > 1 && OK != mExtractor->CheckInterleaveEos(mSourceIndex)){
            ALOGE("get interleave eos");
            return ERROR_END_OF_STREAM;
        }
    }

    MediaBuffer *frame = *mPendingFrames.begin();
    mPendingFrames.erase(mPendingFrames.begin());

    *out = frame;
    mBufferSize -= frame->size();

    mFrameSent++;
    //frame->meta_data()->findInt64(kKeyTime, &outTs);
    ALOGV("FslMediaSource::read mSourceIndex=%d size=%d,time %lld",mSourceIndex,frame->size(),outTs);

    if(!mIsAVC && !mIsHEVC){
        return OK;
    }

    //convert to nal frame
    uint8_t *srcPtr =
        (uint8_t *)frame->data() + frame->range_offset();
    srcSize = frame->range_length();

    if(srcPtr[0] == 0x0 && srcPtr[1] == 0x0 && srcPtr[2] == 0x0 && srcPtr[3] == 0x1){
        return OK;
    }

    if(0 == mNALLengthSize)
        return OK;

    //replace the 4 bytes when nal length size is 4
    if(4 == mNALLengthSize){

        while(srcOffset + mNALLengthSize <= srcSize){
            size_t NALsize = U32_AT(srcPtr + srcOffset);

            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 0;
            srcPtr[srcOffset++] = 1;

            //memcpy(&srcPtr[srcOffset], "\x00\x00\x00\x01", 4);
            srcOffset += NALsize;
        }
        if(srcOffset < srcSize){
            frame->release();
            frame = NULL;

            return ERROR_MALFORMED;
        }
        ALOGV("FslMediaSource::read 2 size=%d",srcSize);

        return OK;
    }

    //create a new MediaBuffer and copy all data from old buffer to new buffer.
    size_t dstSize = 0;
    MediaBuffer *buffer = NULL;
    uint8_t *dstPtr = NULL;
    //got the buffer size when pass is 0, then copy buffer when pass is 1
    for (int32_t pass = 0; pass < 2; pass++) {
        ALOGV("FslMediaSource::read pass=%d,begin",pass);
        size_t srcOffset = 0;
        size_t dstOffset = 0;
        while (srcOffset + mNALLengthSize <= srcSize) {
            size_t NALsize;
            switch (mNALLengthSize) {
                case 1: NALsize = srcPtr[srcOffset]; break;
                case 2: NALsize = U16_AT(srcPtr + srcOffset); break;
                case 3: NALsize = U24_AT(srcPtr + srcOffset); break;
                case 4: NALsize = U32_AT(srcPtr + srcOffset); break;
                default:
                    TRESPASS();
            }

            if (NALsize == 0) {
                frame->release();
                frame = NULL;

                return ERROR_MALFORMED;
            } else if (srcOffset + mNALLengthSize + NALsize > srcSize) {
                break;
            }

            if (pass == 1) {
                memcpy(&dstPtr[dstOffset], "\x00\x00\x00\x01", 4);

                memcpy(&dstPtr[dstOffset + 4],
                       &srcPtr[srcOffset + mNALLengthSize],
                       NALsize);
                ALOGV("FslMediaSource::read 3 copy %d",4+NALsize);
            }

            dstOffset += 4;  // 0x00 00 00 01
            dstOffset += NALsize;

            srcOffset += mNALLengthSize + NALsize;
        }

        if (srcOffset < srcSize) {
            // There were trailing bytes or not enough data to complete
            // a fragment.

            frame->release();
            frame = NULL;

            return ERROR_MALFORMED;
        }

        if (pass == 0) {
            dstSize = dstOffset;

            buffer = new MediaBuffer(dstSize);

            int64_t timeUs;
            CHECK(frame->meta_data()->findInt64(kKeyTime, &timeUs));
            int32_t isSync;
            CHECK(frame->meta_data()->findInt32(kKeyIsSyncFrame, &isSync));

            buffer->meta_data()->setInt64(kKeyTime, timeUs);
            buffer->meta_data()->setInt32(kKeyIsSyncFrame, isSync);

            dstPtr = (uint8_t *)buffer->data();
            ALOGV("FslMediaSource::read 3 size=%d,ts=%lld",dstSize,timeUs);
        }

    }
    frame->release();
    frame = NULL;
    *out = buffer;

    return OK;
}

This read function is fairly involved. The gist: the FslMediaSource class maintains a List<MediaBuffer *> mPendingFrames. When that list is empty, it calls mExtractor->GetNextSample(mSourceIndex, false) to pull one frame from the source, then hands the frame back to the caller through *out. The code after that is the "convert to nal frame" step: for AVC/HEVC content it rewrites MP4-style length-prefixed NAL units into Annex-B form, i.e. each NAL unit's length prefix is replaced with the 0x00000001 start code that byte-stream decoders expect. We'll return to the details of this path in a later article.
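As a standalone illustration of that conversion (a hypothetical helper, not part of FslExtractor), the transformation the code above performs can be sketched like this:

#include <stdint.h>
#include <stddef.h>
#include <vector>

// Rewrite MP4-style length-prefixed NAL units into an Annex-B stream
// (each NAL preceded by the 0x00000001 start code).
static bool lengthPrefixedToAnnexB(const uint8_t *src, size_t srcSize,
                                   size_t nalLengthSize, std::vector<uint8_t> *out) {
    static const uint8_t kStartCode[4] = {0x00, 0x00, 0x00, 0x01};
    size_t offset = 0;
    while (offset + nalLengthSize <= srcSize) {
        // read the big-endian length prefix (1..4 bytes)
        size_t nalSize = 0;
        for (size_t i = 0; i < nalLengthSize; ++i) {
            nalSize = (nalSize << 8) | src[offset + i];
        }
        if (nalSize == 0 || offset + nalLengthSize + nalSize > srcSize) {
            return false;  // malformed frame
        }
        out->insert(out->end(), kStartCode, kStartCode + 4);
        out->insert(out->end(), src + offset + nalLengthSize,
                    src + offset + nalLengthSize + nalSize);
        offset += nalLengthSize + nalSize;
    }
    return offset == srcSize;  // reject trailing garbage
}

When the length prefix is itself 4 bytes, the quoted code takes a shortcut and overwrites the prefix with the start code in place, since both are exactly 4 bytes; only the other prefix sizes need the two-pass copy into a new MediaBuffer.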

Internally, mExtractor->GetNextSample fetches the frame data by calling IParser->getFileNextSample, an interface function exported by the parser library.

Back in NuPlayer::GenericSource::readBuffer, each track type gets its own maxBuffers cap per pass: video reads up to 8 buffers (2 for Widevine); audio reads up to 8 when streaming, 64 when there is no video track, and 16 otherwise.

Stepping out of FslMediaSource::read and back into NuPlayer::GenericSource::readBuffer: once the required buffers have been read, if a formatChange / seeking operation took place, queueDiscontinuityIfNeeded is called to queue a discontinuity.

NuPlayer::GenericSource::start() also posted a kWhatStart msg; via NuPlayer::GenericSource::BufferingMonitor, that message drives GenericSource's overall buffering process.

 

 

2. Back in NuPlayer::onStart(): the single innocuous-looking line mSource->start() did all of the above. The rest of the function is Renderer related — creating the Renderer with new, applying the playback settings, starting a dedicated looper for it, handing it to the decoders, and setting the frame rate.

 

3. The last important call is postScanSources(). Again just one line, but a lot happens inside it, including creating the decoders and starting them:

void NuPlayer::postScanSources() {
    if (mScanSourcesPending) {
        return;
    }

    sp<AMessage> msg = new AMessage(kWhatScanSources, this);
    msg->setInt32("generation", mScanSourcesGeneration);
    msg->post();

    mScanSourcesPending = true;
}

void NuPlayer::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatScanSources:
        {
            int32_t generation;
            CHECK(msg->findInt32("generation", &generation));
            if (generation != mScanSourcesGeneration) {
                // Drop obsolete msg.
                break;
            }

            mScanSourcesPending = false;

            ALOGV("scanning sources haveAudio=%d, haveVideo=%d",
                 mAudioDecoder != NULL, mVideoDecoder != NULL);

            bool mHadAnySourcesBefore =
                (mAudioDecoder != NULL) || (mVideoDecoder != NULL);

            // initialize video before audio because successful initialization of
            // video may change deep buffer mode of audio.
            if (mSurface != NULL) {
                instantiateDecoder(false, &mVideoDecoder);
            }

            // Don't try to re-open audio sink if there's an existing decoder.
            if (mAudioSink != NULL && mAudioDecoder == NULL) {
                instantiateDecoder(true, &mAudioDecoder);
            }

            if (!mHadAnySourcesBefore
                    && (mAudioDecoder != NULL || mVideoDecoder != NULL)) {
                // This is the first time we've found anything playable.

                if (mSourceFlags & Source::FLAG_DYNAMIC_DURATION) {
                    schedulePollDuration();
                }
            }

            status_t err;
            if ((err = mSource->feedMoreTSData()) != OK) {
                if (mAudioDecoder == NULL && mVideoDecoder == NULL) {
                    // We're not currently decoding anything (no audio or
                    // video tracks found) and we just ran out of input data.

                    if (err == ERROR_END_OF_STREAM) {
                        notifyListener(MEDIA_PLAYBACK_COMPLETE, 0, 0);
                    } else {
                        notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
                    }
                }
                break;
            }

            if ((mAudioDecoder == NULL && mAudioSink != NULL)
                    || (mVideoDecoder == NULL && mSurface != NULL)) {
                msg->post(100000ll);
                mScanSourcesPending = true;
            }
            break;
        }
        // ... other cases omitted

 

CCDecoder is the closed-caption (subtitle) decoder; when the video Decoder is created with new, this CCDecoder is passed in as one of its parameters.

 

Whether the video decoder gets created depends on whether a Surface was set, and whether the audio decoder gets created depends on whether mAudioSink exists; both go through instantiateDecoder, so let's look at that function next:

 

status_t NuPlayer::instantiateDecoder(bool audio, sp<DecoderBase> *decoder) {
    // The audio decoder could be cleared by tear down. If still in shut down
    // process, no need to create a new audio decoder.
    if (*decoder != NULL || (audio && mFlushingAudio == SHUT_DOWN)) {
        return OK;
    }

    sp<AMessage> format = mSource->getFormat(audio);

    if (format == NULL) {
        return -EWOULDBLOCK;
    }

    format->setInt32("priority", 0 /* realtime */);

    if (!audio) {
        AString mime;
        CHECK(format->findString("mime", &mime));
        bool bVideoIsAVC = !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime.c_str());
        if (bVideoIsAVC && mSource->isAVCReorderDisabled())
            format->setString("disreorder", "1");
        else
            format->setString("disreorder", "0");

        sp<AMessage> ccNotify = new AMessage(kWhatClosedCaptionNotify, this);
        if (mCCDecoder == NULL) {
            mCCDecoder = new CCDecoder(ccNotify);
        }

        if (mSourceFlags & Source::FLAG_SECURE) {
            format->setInt32("secure", true);
        }

        if (mSourceFlags & Source::FLAG_PROTECTED) {
            format->setInt32("protected", true);
        }

        float rate = getFrameRate();
        if (rate > 0) {
            format->setFloat("operating-rate", rate * mPlaybackSettings.mSpeed);
        }
    }

    if (audio) {
        sp<AMessage> notify = new AMessage(kWhatAudioNotify, this);
        ++mAudioDecoderGeneration;
        notify->setInt32("generation", mAudioDecoderGeneration);

        determineAudioModeChange();
        if (mOffloadAudio) {
            const bool hasVideo = (mSource->getFormat(false /*audio */) != NULL);
            format->setInt32("has-video", hasVideo);
            *decoder = new DecoderPassThrough(notify, mSource, mRenderer);
        } else {
            *decoder = new Decoder(notify, mSource, mPID, mRenderer);
        }
    } else {
        sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
        ++mVideoDecoderGeneration;
        notify->setInt32("generation", mVideoDecoderGeneration);

        *decoder = new Decoder(
                notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);

        // enable FRC if high-quality AV sync is requested, even if not
        // directly queuing to display, as this will even improve textureview
        // playback.
        {
            char value[PROPERTY_VALUE_MAX];
            if (property_get("persist.sys.media.avsync", value, NULL) &&
                    (!strcmp("1", value) || !strcasecmp("true", value))) {
                format->setInt32("auto-frc", 1);
            }
        }
    }
    (*decoder)->init();
    (*decoder)->configure(format);

    // allocate buffers to decrypt widevine source buffers
    if (!audio && (mSourceFlags & Source::FLAG_SECURE)) {
        Vector<sp<ABuffer> > inputBufs;
        CHECK_EQ((*decoder)->getInputBuffers(&inputBufs), (status_t)OK);

        Vector<MediaBuffer *> mediaBufs;
        for (size_t i = 0; i < inputBufs.size(); i++) {
            const sp<ABuffer> &buffer = inputBufs[i];
            MediaBuffer *mbuf = new MediaBuffer(buffer->data(), buffer->size());
            mediaBufs.push(mbuf);
        }

        status_t err = mSource->setBuffers(audio, mediaBufs);
        if (err != OK) {
            for (size_t i = 0; i < mediaBufs.size(); ++i) {
                mediaBufs[i]->release();
            }
            mediaBufs.clear();
            ALOGE("Secure source didn't support secure mediaBufs.");
            return err;
        }
    }
    return OK;
}

This function checks a number of conditions, but its core is:

*decoder = new Decoder(notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
// creates the video decoder; the subtitle (CC) decoder is passed in as a parameter

(*decoder)->init();            // initialize the decoder

(*decoder)->configure(format); // configure the decoder

(*decoder)->init() is implemented in NuPlayerDecoderBase.cpp; all it does is register the decoder as a handler on its looper.
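In the version analyzed here it is essentially a one-liner (paraphrased from NuPlayerDecoderBase.cpp):

void NuPlayer::DecoderBase::init() {
    mDecoderLooper->registerHandler(this);
}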

(*decoder)->configure(format), on the other hand, does quite a lot internally. Let's analyze it step by step.

 

Call flow:

(*decoder)->configure(format); ---> NuPlayer::DecoderBase::configure() ---> NuPlayer::DecoderBase::onMessageReceived() kWhatConfigure case ---> NuPlayer::Decoder::onConfigure()
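The first two hops are just the usual AMessage indirection — paraphrased from NuPlayerDecoderBase.cpp:

void NuPlayer::DecoderBase::configure(const sp<AMessage> &format) {
    sp<AMessage> msg = new AMessage(kWhatConfigure, this);
    msg->setMessage("format", format);
    msg->post();   // handled on the decoder's looper thread ...
}

// ... where onMessageReceived dispatches to the real work:
case kWhatConfigure:
{
    sp<AMessage> format;
    CHECK(msg->findMessage("format", &format));
    onConfigure(format);
    break;
}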

 

onConfigure itself looks like this:

void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
    CHECK(mCodec == NULL);

    mFormatChangePending = false;
    mTimeChangePending = false;

    ++mBufferGeneration;

    AString mime;
    CHECK(format->findString("mime", &mime));

    mIsAudio = !strncasecmp("audio/", mime.c_str(), 6);
    mIsVideoAVC = !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime.c_str());

    mComponentName = mime;
    mComponentName.append(" decoder");
    ALOGV("[%s] onConfigure (surface=%p)", mComponentName.c_str(), mSurface.get());

    mCodec = MediaCodec::CreateByType(
            mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);
    int32_t secure = 0;
    if (format->findInt32("secure", &secure) && secure != 0) {
        if (mCodec != NULL) {
            mCodec->getName(&mComponentName);
            mComponentName.append(".secure");
            mCodec->release();
            ALOGI("[%s] creating", mComponentName.c_str());
            mCodec = MediaCodec::CreateByComponentName(
                    mCodecLooper, mComponentName.c_str(), NULL /* err */, mPid);
        }
    }
    if (mCodec == NULL) {
        ALOGE("Failed to create %s%s decoder",
                (secure ? "secure " : ""), mime.c_str());
        handleError(UNKNOWN_ERROR);
        return;
    }
    mIsSecure = secure;

    mCodec->getName(&mComponentName);

    if (mComponentName.startsWith("OMX.Freescale.std.video_decoder") && mComponentName.endsWith("hw-based")){
        format->setInt32("color-format", 21);//OMX_COLOR_FormatYUV420SemiPlanar
    }

    status_t err;
    if (mSurface != NULL) {
        // disconnect from surface as MediaCodec will reconnect
        err = native_window_api_disconnect(
                mSurface.get(), NATIVE_WINDOW_API_MEDIA);
        // We treat this as a warning, as this is a preparatory step.
        // Codec will try to connect to the surface, which is where
        // any error signaling will occur.
        ALOGW_IF(err != OK, "failed to disconnect from surface: %d", err);
    }
    err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);
    if (err != OK) {
        ALOGE("Failed to configure %s decoder (err=%d)", mComponentName.c_str(), err);
        mCodec->release();
        mCodec.clear();
        handleError(err);
        return;
    }
    rememberCodecSpecificData(format);

    // the following should work in configured state
    CHECK_EQ((status_t)OK, mCodec->getOutputFormat(&mOutputFormat));
    CHECK_EQ((status_t)OK, mCodec->getInputFormat(&mInputFormat));

    mStats->setString("mime", mime.c_str());
    mStats->setString("component-name", mComponentName.c_str());

    if (!mIsAudio) {
        int32_t width, height;
        if (mOutputFormat->findInt32("width", &width)
                && mOutputFormat->findInt32("height", &height)) {
            mStats->setInt32("width", width);
            mStats->setInt32("height", height);
        }
    }

    sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
    mCodec->setCallback(reply);

    err = mCodec->start();
    if (err != OK) {
        ALOGE("Failed to start %s decoder (err=%d)", mComponentName.c_str(), err);
        mCodec->release();
        mCodec.clear();
        handleError(err);
        return;
    }

    releaseAndResetMediaBuffers();

    mPaused = false;
    mResumePending = false;
}

3.1 
mCodec = MediaCodec::CreateByType(
            mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const char *mime, bool encoder, status_t *err, pid_t pid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid);

    const status_t ret = codec->init(mime, true /* nameIsType */, encoder);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}

This article only analyzes down to the MediaCodec.cpp layer; ACodec and OMX below it will be analyzed in detail in later articles. The goal here is to understand what MediaCodec does — dig too deep and it becomes hard to climb back out and keep a clear macro-level picture.

First a MediaCodec object is created with new. This class can be understood as a wrapper around the decoder; the layer below it is ACodec, and each ACodec corresponds to one codec. codec->init assigns MediaCodec's mCodec member:

mCodec = new ACodec; — this is what ties it to ACodec. A mCodecLooper is also set up for ACodec to run on.

 

One more thing to note: NuPlayer::Decoder has an mCodec of type sp<MediaCodec>, while MediaCodec itself also has an mCodec, of type sp<CodecBase> (ACodec's base class). Keep the two apart: using mCodec inside NuPlayerDecoder.cpp jumps into MediaCodec.cpp, whereas using mCodec inside MediaCodec.cpp jumps into ACodec.cpp.
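Roughly, the two declarations look like this (paraphrased from the respective headers):

// NuPlayerDecoder.h
sp<MediaCodec> mCodec;   // NuPlayer::Decoder drives a MediaCodec

// MediaCodec.h
sp<CodecBase> mCodec;    // MediaCodec drives an ACodec (CodecBase is its base class)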

 

3.2

err = mCodec->configure(format, mSurface, NULL /* crypto */, 0 /* flags */);

The mCodec here is the sp<MediaCodec>, so this is MediaCodec::configure(). That function fills in a Vector<MediaResource> and then posts a kWhatConfigure AMessage. In the kWhatConfigure case of MediaCodec::onMessageReceived, the Surface is set via handleSetSurface(), and setState(CONFIGURING) moves the state to CONFIGURING — state matters a great deal here, since the whole of OMX is state-driven. Finally mCodec->initiateConfigureComponent(format) is called; that mCodec is now the ACodec, so execution jumps to ACodec::initiateConfigureComponent(), proceeds asynchronously, and lands in ACodec::LoadedState::onConfigureComponent, which calls mCodec->configureCodec to set up the decoder. configureCodec is important — both audio and video are configured there.

When configuration completes, the kWhatComponentConfigured notify informs the outer MediaCodec, and in the case CodecBase::kWhatComponentConfigured: branch the state is set to CONFIGURED.
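Paraphrased from MediaCodec.cpp, the kWhatConfigure handling looks roughly like this (checks and error paths elided; details vary by version):

case kWhatConfigure:
{
    sp<RefBase> obj;
    CHECK(msg->findObject("surface", &obj));
    sp<AMessage> format;
    CHECK(msg->findMessage("format", &format));

    if (obj != NULL) {
        format->setObject("native-window", obj);
        handleSetSurface(static_cast<Surface *>(obj.get()));
    }

    setState(CONFIGURING);
    mCodec->initiateConfigureComponent(format);   // mCodec is the ACodec here
    break;
}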

 

3.3

mCodec->setCallback(reply);

This registers the callback; the log prints: MediaCodec will operate in async mode

 

3.4

mCodec->start();

This maps to MediaCodec::start(). It fills in a Vector<MediaResource> and posts a kWhatStart AMessage. When that message is handled the state is still CONFIGURED, so onInputBufferAvailable is not run yet; instead, setState moves the state to STARTING and mCodec->initiateStart() is called. That mCodec is the ACodec, so we jump to ACodec::initiateStart() and finally land in ACodec::LoadedState::onStart(), where mCodec->mOMX->sendCommand switches the component to OMX_StateIdle, followed by mCodec->changeState(mCodec->mLoadedToIdleState).

My guess is that execution then reaches ACodec::LoadedToIdleState::stateEntered(), where allocateBuffers allocates the buffers, and from there OMX starts driving everything.
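For reference, paraphrased from ACodec.cpp (details vary slightly by Android version), the Loaded → Idle transition looks roughly like this — which also confirms the guess above:

void ACodec::LoadedState::onStart() {
    status_t err = mCodec->mOMX->sendCommand(
            mCodec->mNode, OMX_CommandStateSet, OMX_StateIdle);
    if (err != OK) {
        mCodec->signalError(OMX_ErrorUndefined, makeNoSideEffectStatus(err));
    } else {
        mCodec->changeState(mCodec->mLoadedToIdleState);
    }
}

void ACodec::LoadedToIdleState::stateEntered() {
    // allocateBuffers() calls allocateBuffersOnPort() for both the input
    // and the output port; error handling elided
    status_t err = allocateBuffers();
}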

 

3.5 A quick summary

Seen from the layers above, MediaCodec is a black box: we only need to know how to drive it, not how it decodes internally. The black box has an input port and an output port, so how buffers circulate through those ports matters most. What we focus on here, then, is the interaction between NuPlayerDecoder and MediaCodec.

 

Consider MediaCodec's position in the overall NuPlayer architecture: it sits between NuPlayerDecoder above and ACodec/OMX below.

As analyzed above, OMX allocates the buffers. Once a buffer is available on the input port, MediaCodec::onInputBufferAvailable fires to tell NuPlayerDecoder that there is a usable buffer on MediaCodec's input port, and NuPlayerDecoder calls handleAnInputBuffer to fill it with data.

Once the data is in, MediaCodec decodes it via OMX, and the decoded data arrives at the output port. MediaCodec then calls onOutputBufferAvailable to notify NuPlayerDecoder that a buffer is ready on its output port and can be sent downstream. NuPlayerDecoder handles it in handleAnOutputBuffer, which forwards the decoded data to the Renderer via mRenderer->queueBuffer(mIsAudio, buffer, reply).
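Both notifications arrive at NuPlayerDecoder as a kWhatCodecNotify message; paraphrased from NuPlayerDecoder.cpp, the dispatch looks roughly like this (other callbacks and error paths elided):

int32_t cbID;
CHECK(msg->findInt32("callbackID", &cbID));
switch (cbID) {
    case MediaCodec::CB_INPUT_AVAILABLE:
    {
        int32_t index;
        CHECK(msg->findInt32("index", &index));
        handleAnInputBuffer(index);    // fill this input buffer with source data
        break;
    }
    case MediaCodec::CB_OUTPUT_AVAILABLE:
    {
        int32_t index, offset, size, flags;
        int64_t timeUs;
        CHECK(msg->findInt32("index", &index));
        CHECK(msg->findInt32("offset", &offset));
        CHECK(msg->findInt32("size", &size));
        CHECK(msg->findInt64("timeUs", &timeUs));
        CHECK(msg->findInt32("flags", &flags));
        handleAnOutputBuffer(index, offset, size, timeUs, flags);   // hand off to the Renderer
        break;
    }
}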

 

That, in outline, is how MediaCodec's work cycle flows.

With that, the whole pipeline is up and running. The next step by rights should be the Renderer, but I want to dig further into MediaCodec first — at the very least to explain clearly the functions mentioned in 3.5.
