14. A Complete Analysis of the Android MultiMedia Framework - How NuPlayerDecoder Interacts with MediaCodec

The previous article analyzed MediaCodec in detail, along with everything below it. But MediaCodec itself is wrapped in a layer called NuPlayerDecoder, so this article looks at how the two of them communicate.

Conceptually, since NuPlayerDecoder wraps around MediaCodec, it plays the role of an "app" relative to MediaCodec: it calls MediaCodec's APIs to get its work done. Let's walk through that flow in detail.

1. How Decoding Gets Started

The actual decoding sequence starts from setRenderer, called in NuPlayer::onStart():

mVideoDecoder->setRenderer(mRenderer);

This lands in NuPlayer::DecoderBase::setRenderer(), which posts a kWhatSetRenderer message; the handler for that message then calls onSetRenderer():

void NuPlayer::Decoder::onSetRenderer(const sp<Renderer> &renderer) {
    bool hadNoRenderer = (mRenderer == NULL);
    mRenderer = renderer;
    if (hadNoRenderer && mRenderer != NULL) {
        // this means that the widevine legacy source is ready
        onRequestInputBuffers();
    }
}

onSetRenderer() calls onRequestInputBuffers(), whose control flow is rather unusual. Take a look:

void NuPlayer::DecoderBase::onRequestInputBuffers() {
    if (mRequestInputBuffersPending) {
        return;
    }

    // doRequestBuffers() return true if we should request more data
    if (doRequestBuffers()) {
        mRequestInputBuffersPending = true;

        sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
        msg->post(2 * 1000ll);
    }
}

void NuPlayer::DecoderBase::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        // ...
        case kWhatRequestInputBuffers:
        {
            mRequestInputBuffersPending = false;
            onRequestInputBuffers();
            break;
        }
        // ...
    }
}

 

See the pattern in this code? onRequestInputBuffers() posts a kWhatRequestInputBuffers message, and the async handler for that message turns around and calls onRequestInputBuffers() again, so the two keep feeding each other in a loop. Quite the trick, right?

The only thing that can break the cycle is the if (doRequestBuffers()) check inside onRequestInputBuffers(): the loop ends only once doRequestBuffers() returns false.
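
To make this control flow concrete, here is a minimal, self-contained toy model of the self-posting request loop (my own sketch, not framework code; ToyLooper and ToyDecoder are invented names that mirror the AOSP roles):

#include <functional>
#include <queue>

// Stand-in for ALooper/AMessage: a queue of deferred calls.
struct ToyLooper {
    std::queue<std::function<void()>> pending;
    void post(std::function<void()> f) { pending.push(std::move(f)); }
    void run() {
        while (!pending.empty()) {
            auto f = std::move(pending.front());
            pending.pop();
            f();
        }
    }
};

struct ToyDecoder {
    ToyLooper* looper;
    int accessUnitsLeft = 3;       // pretend the source holds three access units
    bool requestPending = false;

    bool doRequestBuffers() {      // true means "request more data"
        if (accessUnitsLeft == 0) return false;
        --accessUnitsLeft;         // consume one unit
        return true;
    }

    void onRequestInputBuffers() {
        if (requestPending) return;
        if (doRequestBuffers()) {
            requestPending = true;
            looper->post([this] {  // models msg->post() + kWhatRequestInputBuffers
                requestPending = false;
                onRequestInputBuffers();   // the handler re-enters the loop
            });
        }
    }
};

int main() {
    ToyLooper looper;
    ToyDecoder decoder{&looper};
    decoder.onRequestInputBuffers();   // kick-off, as onSetRenderer() does
    looper.run();                      // runs three rounds, then falls silent
}

Once doRequestBuffers() stops returning true, no new message gets posted and the loop simply dies out.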

 

Now look at doRequestBuffers() itself. It runs a while loop internally and returns true when more data should be requested:

/*
 * returns true if we should request more data
 */
bool NuPlayer::Decoder::doRequestBuffers() {
    // mRenderer is only NULL if we have a legacy widevine source that
    // is not yet ready. In this case we must not fetch input.
    if (isDiscontinuityPending() || mRenderer == NULL) {
        return false;
    }
    status_t err = OK;
    while (err == OK && !mDequeuedInputBuffers.empty()) {
        size_t bufferIx = *mDequeuedInputBuffers.begin();
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", bufferIx);
        err = fetchInputData(msg);  // fetch one access unit from the source into this message
        if (err != OK && err != ERROR_END_OF_STREAM) {
            // if EOS, need to queue EOS buffer
            break;
        }
        mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

        if (!mPendingInputMessages.empty()
                || !onInputBufferFetched(msg)) {
            mPendingInputMessages.push_back(msg); // data we fetched but could not queue yet is parked in this pending message queue
        }
    }

    return err == -EWOULDBLOCK
            && mSource->feedMoreTSData() == OK;
}

 

Inside NuPlayer::Decoder::fetchInputData(), mSource->dequeueAccessUnit() is called to talk to GenericSource; that is how the compressed data ends up in the buffer.
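
A compressed, hedged fragment of that fetch step (simplified by me; the real fetchInputData() also deals with discontinuities, CSD and format changes):

// Simplified sketch of the core of fetchInputData() (not the full function).
sp<ABuffer> accessUnit;
status_t err = mSource->dequeueAccessUnit(mIsAudio, &accessUnit);
if (err == -EWOULDBLOCK) {
    // No data available right now: doRequestBuffers() returns true
    // (provided feedMoreTSData() is still OK) and the
    // kWhatRequestInputBuffers message keeps the request loop alive.
}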

NuPlayer::Decoder::onInputBufferFetched() then submits the filled buffer to MediaCodec's input port via mCodec->queueInputBuffer().

2. The Loop Logic

Picture the arrangement like this: NuPlayerDecoder is a layer wrapped around MediaCodec, and "input" and "output" are both named from MediaCodec's point of view. NuPlayerDecoder and MediaCodec interact at these two ports, and MediaCodec also maintains a buffer queue of its own internally. When a buffer becomes available at the input port, MediaCodec::onInputBufferAvailable() runs and sends a CB_INPUT_AVAILABLE message to NuPlayerDecoder, announcing that a buffer is free at MediaCodec's input port. NuPlayerDecoder responds by calling NuPlayer::Decoder::handleAnInputBuffer(). To do what, exactly? MediaCodec's job is decoding; the Decoder's job is to fetch data from the demuxer (MediaExtractor) and hand it to MediaCodec for processing.

Once MediaCodec has processed the data (processed how? by decoding the H.264 bitstream into YUV frames), the data flows internally from the input port to the output port. At that point MediaCodec::onOutputBufferAvailable() fires and sends a CB_OUTPUT_AVAILABLE message to tell NuPlayerDecoder that a buffer is waiting at MediaCodec's output port. On receiving that message, NuPlayerDecoder calls NuPlayer::Decoder::handleAnOutputBuffer(). And what does that handler do?

The stage after the Decoder is the Renderer, so the next step is to hand the data over to the Renderer.

This is a closed loop; let's take it one step at a time.
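
The same two-port handshake is visible in MediaCodec's public async API, which may make the message names feel less abstract. Below is a hedged sketch using the NDK's AMediaCodec interface (async callbacks exist there since API 28; treat the exact signatures as an assumption to verify against NdkMediaCodec.h). NuPlayerDecoder uses the internal AMessage-based callback instead, but the shape of the loop is identical:

#include <media/NdkMediaCodec.h>

// Fires when MediaCodec frees an input buffer: the CB_INPUT_AVAILABLE side.
static void onInputAvailable(AMediaCodec* codec, void* /*userdata*/, int32_t index) {
    size_t capacity = 0;
    uint8_t* buf = AMediaCodec_getInputBuffer(codec, index, &capacity);
    (void)buf; // ...fill `buf` with one compressed access unit from the extractor...
    AMediaCodec_queueInputBuffer(codec, index, /*offset*/ 0, /*size*/ 0,
                                 /*presentationTimeUs*/ 0, /*flags*/ 0);
}

// Fires when a decoded buffer reaches the output port: the CB_OUTPUT_AVAILABLE side.
static void onOutputAvailable(AMediaCodec* codec, void* /*userdata*/,
                              int32_t index, AMediaCodecBufferInfo* info) {
    // Render to the bound surface (true) or drop (false), then return the buffer.
    AMediaCodec_releaseOutputBuffer(codec, index, /*render*/ info->size > 0);
}

static void onFormatChanged(AMediaCodec*, void*, AMediaFormat*) {}
static void onError(AMediaCodec*, void*, media_status_t, int32_t, const char*) {}

static void installCallbacks(AMediaCodec* codec) {
    AMediaCodecOnAsyncNotifyCallback cb;
    cb.onAsyncInputAvailable  = onInputAvailable;
    cb.onAsyncOutputAvailable = onOutputAvailable;
    cb.onAsyncFormatChanged   = onFormatChanged;
    cb.onAsyncError           = onError;
    AMediaCodec_setAsyncNotifyCallback(codec, cb, /*userdata*/ nullptr);
}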

 

2.1 Establishing the Asynchronous Message Link

First question: how does NuPlayerDecoder set up its message link with MediaCodec in the first place?

NuPlayer::Decoder::onConfigure() contains the following code:

sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
mCodec->setCallback(reply);

This registers a message targeted at the Decoder's own queue as MediaCodec's callback, so MediaCodec's notifications are delivered into the Decoder's message queue and handled in onMessageReceived().
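
On the receiving side, the kWhatCodecNotify handler dispatches on a callbackID field carried in the message. An abridged sketch of that dispatch (reconstructed from NuPlayerDecoder.cpp as I remember it, so treat the exact field name as an assumption; the CB_INPUT_AVAILABLE case is quoted in full below):

case kWhatCodecNotify:
{
    int32_t cbID;
    CHECK(msg->findInt32("callbackID", &cbID));
    switch (cbID) {
        case MediaCodec::CB_INPUT_AVAILABLE:       /* handleAnInputBuffer(...)  */ break;
        case MediaCodec::CB_OUTPUT_AVAILABLE:      /* handleAnOutputBuffer(...) */ break;
        case MediaCodec::CB_OUTPUT_FORMAT_CHANGED: /* handleOutputFormatChange  */ break;
        case MediaCodec::CB_ERROR:                 /* handleError(...)          */ break;
    }
    break;
}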

 

2.2 Feeding Data into MediaCodec

It starts inside MediaCodec: MediaCodec::onInputBufferAvailable() sends a CB_INPUT_AVAILABLE message to NuPlayerDecoder, where it is handled like this:

case MediaCodec::CB_INPUT_AVAILABLE:
                {
                    int32_t index;
                    CHECK(msg->findInt32("index", &index));

                    handleAnInputBuffer(index);
                    break;
                }

bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
    if (isDiscontinuityPending()) {
        return false;
    }

    sp<ABuffer> buffer;
    mCodec->getInputBuffer(index, &buffer); // first, fetch the available input buffer for this index from MediaCodec

    if (buffer == NULL) {
        handleError(UNKNOWN_ERROR);
        return false;
    }

    if (index >= mInputBuffers.size()) {
        for (size_t i = mInputBuffers.size(); i <= index; ++i) {
            mInputBuffers.add();
            mMediaBuffers.add();
            mInputBufferIsDequeued.add();
            mMediaBuffers.editItemAt(i) = NULL;
            mInputBufferIsDequeued.editItemAt(i) = false;
        }
    }
    mInputBuffers.editItemAt(index) = buffer;

    //CHECK_LT(bufferIx, mInputBuffers.size());

    if (mMediaBuffers[index] != NULL) {
        mMediaBuffers[index]->release();
        mMediaBuffers.editItemAt(index) = NULL;
    }
    mInputBufferIsDequeued.editItemAt(index) = true;

    if (!mCSDsToSubmit.isEmpty()) {
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", index);

        sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
        ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
        msg->setBuffer("buffer", buffer);
        mCSDsToSubmit.removeAt(0);
        CHECK(onInputBufferFetched(msg));
        return true;
    }

    while (!mPendingInputMessages.empty()) {
        sp<AMessage> msg = *mPendingInputMessages.begin();
        if (!onInputBufferFetched(msg)) { // queues the pending data into MediaCodec's input port;
            // this loop also takes care of EOS: when EOS is hit, we break out here
            break;
        }
        mPendingInputMessages.erase(mPendingInputMessages.begin());
    }

    if (!mInputBufferIsDequeued.editItemAt(index)) {
        return true;
    }

    mDequeuedInputBuffers.push_back(index);

    onRequestInputBuffers();
    return true;
}

The real loop lives in onRequestInputBuffers(), reproduced here:

void NuPlayer::DecoderBase::onRequestInputBuffers() {
    if (mRequestInputBuffersPending) {
        return;
    }

    // doRequestBuffers() return true if we should request more data
    if (doRequestBuffers()) {
        mRequestInputBuffersPending = true;

        sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
        msg->post(2 * 1000ll);
    }
}

Whenever more data is needed, doRequestBuffers() returns true and a kWhatRequestInputBuffers message is posted; the handler for that message calls onRequestInputBuffers() again, and so the cycle repeats. The core of it all is doRequestBuffers():

bool NuPlayer::Decoder::doRequestBuffers() {
    // mRenderer is only NULL if we have a legacy widevine source that
    // is not yet ready. In this case we must not fetch input.
    if (isDiscontinuityPending() || mRenderer == NULL) {
        return false;
    }
    status_t err = OK;
    while (err == OK && !mDequeuedInputBuffers.empty()) {
        size_t bufferIx = *mDequeuedInputBuffers.begin();
        sp<AMessage> msg = new AMessage();
        msg->setSize("buffer-ix", bufferIx);
        err = fetchInputData(msg);
        if (err != OK && err != ERROR_END_OF_STREAM) {
            // if EOS, need to queue EOS buffer
            break;
        }
        mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());

        if (!mPendingInputMessages.empty()
                || !onInputBufferFetched(msg)) {
            mPendingInputMessages.push_back(msg);
        }
    }

    return err == -EWOULDBLOCK
            && mSource->feedMoreTSData() == OK;
}

Two functions matter here: fetchInputData(msg) pulls data from the Source, while onInputBufferFetched() fills the codec buffer and passes the data on to MediaCodec. Let's see how onInputBufferFetched() pulls this off:

bool NuPlayer::Decoder::onInputBufferFetched(const sp<AMessage> &msg) {
    size_t bufferIx;
    CHECK(msg->findSize("buffer-ix", &bufferIx));
    CHECK_LT(bufferIx, mInputBuffers.size());
    sp<ABuffer> codecBuffer = mInputBuffers[bufferIx];

    sp<ABuffer> buffer;
    bool hasBuffer = msg->findBuffer("buffer", &buffer);

    // handle widevine classic source - that fills an arbitrary input buffer
    MediaBuffer *mediaBuffer = NULL;
    if (hasBuffer) {
        mediaBuffer = (MediaBuffer *)(buffer->getMediaBufferBase());
        if (mediaBuffer != NULL) {
            // likely filled another buffer than we requested: adjust buffer index
            size_t ix;
            for (ix = 0; ix < mInputBuffers.size(); ix++) {
                const sp<ABuffer> &buf = mInputBuffers[ix];
                if (buf->data() == mediaBuffer->data()) {
                    // all input buffers are dequeued on start, hence the check
                    if (!mInputBufferIsDequeued[ix]) {
                        ALOGV("[%s] received MediaBuffer for #%zu instead of #%zu",
                                mComponentName.c_str(), ix, bufferIx);
                        mediaBuffer->release();
                        return false;
                    }

                    // TRICKY: need buffer for the metadata, so instead, set
                    // codecBuffer to the same (though incorrect) buffer to
                    // avoid a memcpy into the codecBuffer
                    codecBuffer = buffer;
                    codecBuffer->setRange(
                            mediaBuffer->range_offset(),
                            mediaBuffer->range_length());
                    bufferIx = ix;
                    break;
                }
            }
            CHECK(ix < mInputBuffers.size());
        }
    }

    if (buffer == NULL /* includes !hasBuffer */) {
        int32_t streamErr = ERROR_END_OF_STREAM;
        CHECK(msg->findInt32("err", &streamErr) || !hasBuffer);

        CHECK(streamErr != OK);

        // attempt to queue EOS
        status_t err = mCodec->queueInputBuffer(
                bufferIx,
                0,
                0,
                0,
                MediaCodec::BUFFER_FLAG_EOS);
        if (err == OK) {
            mInputBufferIsDequeued.editItemAt(bufferIx) = false;
        } else if (streamErr == ERROR_END_OF_STREAM) {
            streamErr = err;
            // err will not be ERROR_END_OF_STREAM
        }

        if (streamErr != ERROR_END_OF_STREAM) {
            ALOGE("Stream error for %s (err=%d), EOS %s queued",
                    mComponentName.c_str(),
                    streamErr,
                    err == OK ? "successfully" : "unsuccessfully");
            handleError(streamErr);
        }
    } else {
        sp<AMessage> extra;
        if (buffer->meta()->findMessage("extra", &extra) && extra != NULL) {
            int64_t resumeAtMediaTimeUs;
            if (extra->findInt64(
                        "resume-at-mediaTimeUs", &resumeAtMediaTimeUs)) {
                ALOGI("[%s] suppressing rendering until %lld us",
                        mComponentName.c_str(), (long long)resumeAtMediaTimeUs);
                mSkipRenderingUntilMediaTimeUs = resumeAtMediaTimeUs;
            }
        }

        int64_t timeUs = 0;
        uint32_t flags = 0;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        int32_t eos, csd;
        // we do not expect SYNCFRAME for decoder
        if (buffer->meta()->findInt32("eos", &eos) && eos) {
            flags |= MediaCodec::BUFFER_FLAG_EOS;
        } else if (buffer->meta()->findInt32("csd", &csd) && csd) {
            flags |= MediaCodec::BUFFER_FLAG_CODECCONFIG;
        }

        // copy into codec buffer
        if (buffer != codecBuffer) {
            CHECK_LE(buffer->size(), codecBuffer->capacity());
            codecBuffer->setRange(0, buffer->size());
            memcpy(codecBuffer->data(), buffer->data(), buffer->size());
            // this memcpy is where the data actually gets copied
        }

        status_t err = mCodec->queueInputBuffer( // here the buffer is handed to the codec
                        bufferIx,
                        codecBuffer->offset(),
                        codecBuffer->size(),
                        timeUs,
                        flags);
        if (err != OK) {
            if (mediaBuffer != NULL) {
                mediaBuffer->release();
            }
            ALOGE("Failed to queue input buffer for %s (err=%d)",
                    mComponentName.c_str(), err);
            handleError(err);
        } else {
            mInputBufferIsDequeued.editItemAt(bufferIx) = false;
            if (mediaBuffer != NULL) {
                CHECK(mMediaBuffers[bufferIx] == NULL);
                mMediaBuffers.editItemAt(bufferIx) = mediaBuffer;
            }
        }
    }
    return true;
}

At this point the data has been handed over to MediaCodec: the input side of decoding is taken care of.
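
A side note on the EOS branch above: end-of-stream is signalled by queueing an empty buffer whose only payload is the EOS flag. Through the public NDK API the same idiom would look roughly like this (a sketch; I believe these names match NdkMediaCodec.h, but verify):

#include <media/NdkMediaCodec.h>

// Signal end-of-stream: an empty input buffer carrying only the EOS flag,
// mirroring the mCodec->queueInputBuffer(..., BUFFER_FLAG_EOS) branch above.
static media_status_t queueEos(AMediaCodec* codec, size_t index) {
    return AMediaCodec_queueInputBuffer(codec, index, /*offset*/ 0, /*size*/ 0,
                                        /*presentationTimeUs*/ 0,
                                        AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
}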

 

2.3 Where the Data Goes After MediaCodec Decodes It

When data is ready at MediaCodec's output port, MediaCodec::onOutputBufferAvailable() is invoked; it sends a CB_OUTPUT_AVAILABLE message to NuPlayerDecoder, which again handles it in NuPlayer::Decoder::onMessageReceived():

case MediaCodec::CB_OUTPUT_AVAILABLE:
                {
                    int32_t index;
                    size_t offset;
                    size_t size;
                    int64_t timeUs;
                    int32_t flags;

                    CHECK(msg->findInt32("index", &index));
                    CHECK(msg->findSize("offset", &offset));
                    CHECK(msg->findSize("size", &size));
                    CHECK(msg->findInt64("timeUs", &timeUs));
                    CHECK(msg->findInt32("flags", &flags));

                    handleAnOutputBuffer(index, offset, size, timeUs, flags);
                    break;
                }

 

Continuing into the handler:

bool NuPlayer::Decoder::handleAnOutputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t timeUs,
        int32_t flags) {
//    CHECK_LT(bufferIx, mOutputBuffers.size());
    sp<ABuffer> buffer;
    mCodec->getOutputBuffer(index, &buffer);

    if (index >= mOutputBuffers.size()) {
        for (size_t i = mOutputBuffers.size(); i <= index; ++i) {
            mOutputBuffers.add();
        }
    }

    mOutputBuffers.editItemAt(index) = buffer;

    buffer->setRange(offset, size);
    buffer->meta()->clear();
    buffer->meta()->setInt64("timeUs", timeUs);

    bool eos = flags & MediaCodec::BUFFER_FLAG_EOS;
    // we do not expect CODECCONFIG or SYNCFRAME for decoder

    sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);
    reply->setSize("buffer-ix", index);
    reply->setInt32("generation", mBufferGeneration);

    if (eos) {
        ALOGI("[%s] saw output EOS", mIsAudio ? "audio" : "video");

        buffer->meta()->setInt32("eos", true);
        reply->setInt32("eos", true);
    } else if (mSkipRenderingUntilMediaTimeUs >= 0) {
        if (timeUs < mSkipRenderingUntilMediaTimeUs) {
            ALOGV("[%s] dropping buffer at time %lld as requested.",
                     mComponentName.c_str(), (long long)timeUs);

            reply->post();
            return true;
        }

        mSkipRenderingUntilMediaTimeUs = -1;
    }

    mNumFramesTotal += !mIsAudio;

    // wait until 1st frame comes out to signal resume complete
    notifyResumeCompleteIfNecessary();

    if (mRenderer != NULL) {
        // send the buffer to renderer.
        mRenderer->queueBuffer(mIsAudio, buffer, reply);
        if (eos && !isDiscontinuityPending()) {
            mRenderer->queueEOS(mIsAudio, ERROR_END_OF_STREAM);
        }
    }

    return true;
}

 

The call to mRenderer->queueBuffer() is what ships the decoded buffer off to the Renderer.

Based on the buffer's timestamp, the Renderer decides whether the frame should be rendered or dropped, then feeds that decision back to NuPlayerDecoder through a notify; the onRenderBuffer() handler below uses this notify to decide whether to actually render the frame.
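
What does that notify look like? A sketch of the Renderer's side of the handshake (modeled on NuPlayerRenderer's video drain path; the helper below is hypothetical and simplified):

#include <media/stagefright/foundation/AMessage.h>

using android::sp;
using android::AMessage;

// Hypothetical helper: what the Renderer effectively does with the `reply`
// message once it has decided the frame's fate.
static void notifyDecoder(const sp<AMessage>& reply, bool render, int64_t timestampNs) {
    reply->setInt32("render", render);               // read back in onRenderBuffer()
    if (render) {
        reply->setInt64("timestampNs", timestampNs); // forwarded to renderOutputBufferAndRelease()
    }
    reply->post();                                   // arrives as kWhatRenderBuffer
}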

 

handleAnOutputBuffer() also built a kWhatRenderBuffer reply message; when that message comes back, it is handled as follows:

case kWhatRenderBuffer:
        {
            if (!isStaleReply(msg)) {
                onRenderBuffer(msg);
            }
            break;
        }


void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
    status_t err;
    int32_t render;
    size_t bufferIx;
    int32_t eos;
    CHECK(msg->findSize("buffer-ix", &bufferIx));

    if (!mIsAudio) {
        int64_t timeUs;
        sp<ABuffer> buffer = mOutputBuffers[bufferIx];
        buffer->meta()->findInt64("timeUs", &timeUs);

        if (mCCDecoder != NULL && mCCDecoder->isSelected()) {
            mCCDecoder->display(timeUs);
        }
    }

    if (msg->findInt32("render", &render) && render) {
        int64_t timestampNs;
        CHECK(msg->findInt64("timestampNs", &timestampNs));
        err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);
    } else {
        mNumOutputFramesDropped += !mIsAudio;
        err = mCodec->releaseOutputBuffer(bufferIx);
    }
    if (err != OK) {
        ALOGE("failed to release output buffer for %s (err=%d)",
                mComponentName.c_str(), err);
        handleError(err);
    }
    if (msg->findInt32("eos", &eos) && eos
            && isDiscontinuityPending()) {
        finishHandleDiscontinuity(true /* flushOnTimeChange */);
    }
}

The (msg->findInt32("render", &render) && render) test reads the notify passed back from the Renderer. If the frame should be rendered, the test is true and mCodec->renderOutputBufferAndRelease() renders the frame and then releases the buffer, returning it to MediaCodec.

If the test is false, mCodec->releaseOutputBuffer() is called instead: no rendering, the buffer simply goes straight back to MediaCodec.
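
For completeness, the public-API counterparts of these two internal calls are AMediaCodec_releaseOutputBufferAtTime() and AMediaCodec_releaseOutputBuffer(); a small hedged sketch:

#include <media/NdkMediaCodec.h>

// Render-or-drop, as onRenderBuffer() decides above (a sketch, not framework code).
static void renderOrDrop(AMediaCodec* codec, size_t index,
                         bool shouldRender, int64_t timestampNs) {
    if (shouldRender) {
        // counterpart of mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs)
        AMediaCodec_releaseOutputBufferAtTime(codec, index, timestampNs);
    } else {
        // counterpart of mCodec->releaseOutputBuffer(bufferIx)
        AMediaCodec_releaseOutputBuffer(codec, index, /*render*/ false);
    }
}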

 

That wraps up the analysis of the Decoder. The interaction between MediaCodec and OMX deserves its own analysis when time allows; the next article looks at the Renderer.

 
