Qualcomm Camera Framework -- A Brief Look at the Data Flow, Part 01

    Focus of this article: the call relationships among StagefrightRecorder.cpp, OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp

===============================================================================

     When I first read this code, some things were still unclear to me; in particular, I did not really understand the relationship between encoding and the file writing. I only knew that the data coming up from the lower layers passes through the CameraSource.cpp callback, that the data gets encoded in OMXCodec.cpp, that MPEG4Writer.cpp has a writer thread and track threads, and that StagefrightRecorder.cpp coordinates OMXCodec.cpp, MPEG4Writer.cpp and CameraSource.cpp so that they work together. What I kept getting stuck on was this: if MPEG4Writer.cpp reads CameraSource.cpp's data directly, then how does the encoder fit into the picture?

     Partly it is just that my knowledge is limited, I did not know enough, and my code-reading skills still need work.

     After going through the source this time, I have finally cleared up the doubts above.

    The read() function in OMXCodec.cpp reads data directly from CameraSource.cpp, and the mSource->read() call in MPEG4Writer.cpp's track thread in turn reads data from OMXCodec.cpp. In other words, when data coming up from the lower layers passes through the CameraSource.cpp callback, it is first encoded and only then written into the file.
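
    A minimal sketch of that wiring, condensed from the calls walked through below (error handling and most parameters are omitted, so treat it as illustrative rather than something that compiles as-is):

// Simplified view of how StagefrightRecorder wires the three pieces together.
sp<MediaSource> cameraSource = ...;                  // created in setupMediaSource()
sp<MediaSource> encoder = OMXCodec::Create(
        client.interface(), enc_meta,
        true /* createEncoder */, cameraSource,      // encoder pulls raw frames from the CameraSource
        NULL, encoder_flags);                        // created in setupVideoEncoder()

sp<MediaWriter> writer = new MPEG4Writer(outputFd);  // created in setupMPEG4Recording()
writer->addSource(encoder);                          // the writer's Track keeps `encoder` as its mSource
writer->start(meta.get());                           // the track thread loops on mSource->read(),
                                                     // i.e. OMXCodec::read(), which in turn calls
                                                     // CameraSource::read()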

   >>>>>>>> We start directly from the start() function in StagefrightRecorder.cpp; start() calls startMPEG4Recording().

StagefrightRecorder.cpp

status_t StagefrightRecorder::start() {
      ......
    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
            status = startMPEG4Recording();
           
     ......
}

      >>>>>>>> Inside startMPEG4Recording(), the important calls are setupMPEG4Recording(), setupMPEG4MetaData() and mWriter->start().

status_t StagefrightRecorder::startMPEG4Recording() {
   ......

      status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);
    sp<MetaData> meta = new MetaData;


    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);


    err = mWriter->start(meta.get());
  ......
}


     >>>>>>>> In setupMPEG4Recording() we see sp<MediaWriter> writer = new MPEG4Writer(outputFd); this is where the writer is created, so we now know the writer is an MPEG4Writer, which is quite important. The method then calls setupMediaSource() to set up the source, and that source is the CameraSource; it also calls setupVideoEncoder() to set up the encoder, and that encoder is an OMXCodec. Also note writer->addSource(encoder); this hands the encoder's output over to the writer, and that is what ties MPEG4Writer.cpp and OMXCodec.cpp together.


status_t StagefrightRecorder::setupMPEG4Recording(
      ......
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();

  
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);


    if (mVideoSource < VIDEO_SOURCE_LIST_END) {


        sp<MediaSource> mediaSource;       
        err = setupMediaSource(&mediaSource);
        if (err != OK) {
            return err;
        }


        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder);
        if (err != OK) {
            return err;
        }


        writer->addSource(encoder);

        *totalBitRate += videoBitRate;
    }


    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer);
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }


    if (mInterleaveDurationUs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setInterleaveDuration(mInterleaveDurationUs);
    }
    if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setGeoData(mLatitudex10000, mLongitudex10000);
    }
    if (mMaxFileDurationUs != 0) {
        writer->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        writer->setMaxFileSize(mMaxFileSizeBytes);
    }


    mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
    if (mStartTimeOffsetMs > 0) {
        reinterpret_cast<MPEG4Writer *>(writer.get())->
            setStartTimeOffsetMs(mStartTimeOffsetMs);
    }


    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}

  >>>>>>>> setupMediaSource() is where the CameraSource gets initialized.

status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);

        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}

      >>>>>>> setupVideoEncoder() is where the OMXCodec gets initialized. Note OMXCodec::Create(..., cameraSource, ...); the source passed into Create() is the cameraSource, which is why the mSource->read() calls made later inside OMXCodec.cpp go straight to the read() method in CameraSource.cpp.

status_t StagefrightRecorder::setupVideoEncoder(
        ......
    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);


    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }


    mVideoSourceNode = cameraSource;
    mVideoEncoderOMX = encoder;


    *source = encoder;


    return OK;
}


-----------------------------

    >>>>> As mentioned above, StagefrightRecorder.cpp calls the addSource() method in MPEG4Writer.cpp [writer->addSource(encoder);], and the argument passed into addSource() is the encoder. This is what links MPEG4Writer.cpp and OMXCodec.cpp: what MPEG4Writer.cpp reads and writes is the data encoded in OMXCodec.cpp.

MPEG4Writer.cpp 


    >>>>>> In MPEG4Writer.cpp's addSource() method, pay attention to Track *track = new Track(this, source, 1 + mTracks.size()); the source passed into new Track(..., source, ...) is, from the analysis above, the encoder, i.e. the producer of the encoded data.


status_t MPEG4Writer::addSource(const sp<MediaSource> &source) {
    Mutex::Autolock l(mLock);
    if (mStarted) {
        ALOGE("Attempt to add source AFTER recording is started");
        return UNKNOWN_ERROR;
    }


    // At most 2 tracks can be supported.
    if (mTracks.size() >= 2) {
        ALOGE("Too many tracks (%d) to add", mTracks.size());
        return ERROR_UNSUPPORTED;
    }


    CHECK(source.get() != NULL);


    // A track of type other than video or audio is not supported.
    const char *mime;
    sp<MetaData> meta = source->getFormat();
    CHECK(meta->findCString(kKeyMIMEType, &mime));
    bool isAudio = !strncasecmp(mime, "audio/", 6);
    bool isVideo = !strncasecmp(mime, "video/", 6);
    if (!isAudio && !isVideo) {
        ALOGE("Track (%s) other than video or audio is not supported",
            mime);
        return ERROR_UNSUPPORTED;
    }


    // At this point, we know the track to be added is either
    // video or audio. Thus, we only need to check whether it
    // is an audio track or not (if it is not, then it must be
    // a video track).


    // No more than one video or one audio track is supported.
    for (List<Track*>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        if ((*it)->isAudio() == isAudio) {
            ALOGE("%s track already exists", isAudio? "Audio": "Video");
            return ERROR_UNSUPPORTED;
        }
    }


    // This is the first track of either audio or video.
    // Go ahead to add the track.
    Track *track = new Track(this, source, 1 + mTracks.size());
    mTracks.push_back(track);


    mHFRRatio = ExtendedUtils::HFR::getHFRRatio(meta);


    return OK;
}


MPEG4Writer::Track::Track(
        MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
    : mOwner(owner),
      mMeta(source->getFormat()),
      mSource(source),
      mDone(false),
      mPaused(false),
      mResumed(false),
      mStarted(false),
      mTrackId(trackId),
      mTrackDurationUs(0),
      mEstimatedTrackSizeBytes(0),
      mSamplesHaveSameSize(true),
      mStszTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mStcoTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mCo64TableEntries(new ListTableEntries<off64_t>(1000, 1)),
      mStscTableEntries(new ListTableEntries<uint32_t>(1000, 3)),
      mStssTableEntries(new ListTableEntries<uint32_t>(1000, 1)),
      mSttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCttsTableEntries(new ListTableEntries<uint32_t>(1000, 2)),
      mCodecSpecificData(NULL),
      mCodecSpecificDataSize(0),
      mGotAllCodecSpecificData(false),
      mReachedEOS(false),
      mRotation(0),
      mHFRRatio(1) {
    getCodecSpecificDataFromInputFormatIfPossible();


    const char *mime;
    mMeta->findCString(kKeyMIMEType, &mime);
    mIsAvc = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_AVC);
    mIsAudio = !strncasecmp(mime, "audio/", 6);
    mIsMPEG4 = !strcasecmp(mime, MEDIA_MIMETYPE_VIDEO_MPEG4) ||
               !strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_AAC);


    setTimeScale();
}

  >>>>>> threadEntry() is the method the track thread actually runs. Inside it, mSource->read(&buffer) is called in a loop to keep pulling data. If we want to know where that data is read from, we need to find where mSource is initialized; a quick search shows it is in

    MPEG4Writer::Track::Track(
            MPEG4Writer *owner, const sp<MediaSource> &source, size_t trackId)
        : mOwner(owner),
          ......
          mSource(source),

  i.e. in the Track constructor. From the addSource() analysis above, we already know this source is the encoder, so what the track thread reads is the encoded data.


status_t MPEG4Writer::Track::threadEntry() {

    while (!mDone && (err = mSource->read(&buffer)) == OK) {

       ......

    }
}
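
   >>>>>> For reference, here is a simplified sketch of what roughly happens inside that loop (a lot is omitted: codec-config data, AVC start-code conversion, the stsz/stts/stco bookkeeping, pause/resume handling, and so on), so take it as an outline rather than the actual AOSP code:

status_t MPEG4Writer::Track::threadEntry() {
    status_t err = OK;
    MediaBuffer *buffer;
    // mSource is the OMXCodec encoder, so each read() returns one encoded frame.
    while (!mDone && (err = mSource->read(&buffer)) == OK) {
        if (buffer->range_length() == 0) {
            // Nothing usable in this buffer; return it to the encoder.
            buffer->release();
            buffer = NULL;
            continue;
        }

        // Copy the encoded sample out so the encoder's buffer can be released right away.
        MediaBuffer *copy = new MediaBuffer(buffer->range_length());
        memcpy(copy->data(),
               (const uint8_t *)buffer->data() + buffer->range_offset(),
               buffer->range_length());
        copy->set_range(0, buffer->range_length());

        int64_t timestampUs;
        CHECK(buffer->meta_data()->findInt64(kKeyTime, &timestampUs));
        buffer->release();
        buffer = NULL;

        // Update the sample tables, queue the copy into the current chunk, and once
        // enough data has accumulated hand the chunk over to the writer thread,
        // which performs the actual write to the output fd.
        mChunkSamples.push_back(copy);
        // ... bufferChunk(timestampUs) once the interleave duration is reached ...
    }
    return err;
}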

--------------------

     >>>>>> From the analysis above, we know that StagefrightRecorder.cpp creates the OMXCodec and passes the CameraSource in at construction time. The point here is simply that the source below is the CameraSource; this is how CameraSource.cpp and OMXCodec.cpp get connected.

OMXCodec.cpp

OMXCodec::OMXCodec(
        const sp<IOMX> &omx, IOMX::node_id node,
        uint32_t quirks, uint32_t flags,
        bool isEncoder,
        const char *mime,
        const char *componentName,
        const sp<MediaSource> &source,
        const sp<ANativeWindow> &nativeWindow)
    : mOMX(omx),
      mOMXLivesLocally(omx->livesLocally(node, getpid())),
      mNode(node),
      mQuirks(quirks),
      mFlags(flags),
      mIsEncoder(isEncoder),
      mIsVideo(!strncasecmp("video/", mime, 6)),
      mMIME(strdup(mime)),
      mComponentName(strdup(componentName)),
      mSource(source),
      mCodecSpecificDataIndex(0),
      mState(LOADED),
      mInitialBufferSubmit(true),
      mSignalledEOS(false),
      mNoMoreOutputData(false),
      mOutputPortSettingsHaveChanged(false),
      mSeekTimeUs(-1),
      mSeekMode(ReadOptions::SEEK_CLOSEST_SYNC),
      mTargetTimeUs(-1),
      mOutputPortSettingsChangedPending(false),
      mSkipCutBuffer(NULL),
      mLeftOverBuffer(NULL),
      mPaused(false),
      mNativeWindow(
              (!strncmp(componentName, "OMX.google.", 11))
                        ? NULL : nativeWindow),
      mNumBFrames(0),
      mInSmoothStreamingMode(false),
      mOutputCropChanged(false),
      mSignalledReadTryAgain(false),
      mReturnedRetry(false),
      mLastSeekTimeUs(-1),
      mLastSeekMode(ReadOptions::SEEK_CLOSEST) {
    mPortStatus[kPortIndexInput] = ENABLING;
    mPortStatus[kPortIndexOutput] = ENABLING;


    setComponentRole();
}

    >>>>>> This read() method is what the mSource->read() call in MPEG4Writer.cpp ends up invoking. As for the encoding process itself, I have not looked into it in detail yet.

status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) { 

      .......

}
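
    >>>>>> To make the link to CameraSource concrete, here is a trimmed-down sketch of read() (seek handling, EOS and the various error paths are left out; see the real source for the details). The key point is that drainInputBuffers() is where mSource->read(), i.e. CameraSource::read(), gets called to feed raw frames into the OMX encoder component, while read() itself hands back a filled, already-encoded output buffer:

status_t OMXCodec::read(
        MediaBuffer **buffer, const ReadOptions *options) {
    *buffer = NULL;
    Mutex::Autolock autoLock(mLock);

    if (mInitialBufferSubmit) {
        mInitialBufferSubmit = false;

        // drainInputBuffers() pulls raw frames via mSource->read()
        // (mSource is the CameraSource) and submits them to the component.
        drainInputBuffers();

        if (mState == EXECUTING) {
            // Hand the empty output buffers to the component so it can
            // fill them with encoded data.
            fillOutputBuffers();
        }
    }

    // Wait until the component has filled at least one output buffer.
    while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
        status_t err = waitForBufferFilled_l();
        if (err != OK) {
            return err;
        }
    }

    if (mFilledBuffers.empty()) {
        return mSignalledEOS ? mFinalStatus : ERROR_END_OF_STREAM;
    }

    // Pop one filled (encoded) output buffer and hand it to the caller,
    // i.e. MPEG4Writer's track thread.
    size_t index = *mFilledBuffers.begin();
    mFilledBuffers.erase(mFilledBuffers.begin());

    BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);
    info->mStatus = OWNED_BY_CLIENT;
    info->mMediaBuffer->add_ref();
    *buffer = info->mMediaBuffer;

    return OK;
}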




