Android 7.0 MediaRecorder Source Code Analysis (Part 2)

In the previous part, Android 7.0 MediaRecorder Source Code Analysis (Part 1), we traced the call chain down into StagefrightRecorder.cpp.
Next, let's look at how the pieces fit together:

At this point we can treat the APP/JNI/native side as one process, and the MediaRecorderClient/StagefrightRecorder inside MediaPlayerService as another. The two talk over binder, and we have already obtained both the Bp (proxy) and Bn (native) ends, so from here on we will not carefully distinguish between Bp and Bn.

On the client side
BnMediaRecorderClient
BpMediaRecorder
BpMediaPlayerService

On the service side
BpMediaRecorderClient (the service can obtain this Bp when it needs to notify the client)
BnMediaRecorder
BnMediaPlayerService
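Since this Bp/Bn pairing underlies everything that follows, here is a framework-free toy sketch of the pattern (our own minimal model: `makeProxy` and `transactStart` are invented names, and real binder marshals Parcels through /dev/binder rather than calling a `std::function`):

```cpp
#include <functional>

// The shared interface both sides implement (stands in for IMediaRecorder).
struct IMediaRecorder {
    virtual ~IMediaRecorder() {}
    virtual int start() = 0;
};

// Bn side: the real implementation, living in the MediaPlayerService process.
struct BnMediaRecorder : IMediaRecorder {
    bool started = false;
    int start() override { started = true; return 0; /* OK */ }
};

// Bp side: a proxy in the client process; instead of doing the work itself it
// "transacts" across the (simulated) process boundary to the Bn object.
struct BpMediaRecorder : IMediaRecorder {
    // Real binder would send a START transaction through the kernel;
    // here the hop is modeled as a plain callback.
    std::function<int()> transactStart;
    int start() override { return transactStart(); }
};

BpMediaRecorder makeProxy(BnMediaRecorder &bn) {
    BpMediaRecorder bp;
    bp.transactStart = [&bn] { return bn.start(); };
    return bp;
}
```

The client only ever sees the `IMediaRecorder` interface; whether the call runs locally (Bn) or crosses a process boundary (Bp) is invisible to it, which is why the article can afford to stop distinguishing the two.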


We take starting a recording, i.e. start(), as our example.

Here execution splits into two paths: CameraSource on one side and MPEG4Writer (the sp<MediaWriter> mWriter member) on the other. Both classes live under /path/to/aosp/frameworks/av/media/libstagefright/.

status_t StagefrightRecorder::startMPEG4Recording() {
    int32_t totalBitRate;
    status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);
    if (err != OK) {
        return err;
    }

    int64_t startTimeUs = systemTime() / 1000;
    sp<MetaData> meta = new MetaData;
    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);

    err = mWriter->start(meta.get());
    if (err != OK) {
        return err;
    }

    return OK;
}
status_t StagefrightRecorder::setupMPEG4Recording(
        int outputFd,
        int32_t videoWidth, int32_t videoHeight,
        int32_t videoBitRate,
        int32_t *totalBitRate,
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();
    *totalBitRate = 0;
    status_t err = OK;
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);

    if (mVideoSource < VIDEO_SOURCE_LIST_END) {

        sp<MediaSource> mediaSource;
        err = setupMediaSource(&mediaSource); // very important
        if (err != OK) {
            return err;
        }

        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder); // very important
        if (err != OK) {
            return err;
        }

        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }

    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer); // very important
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }

    ...

    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}
// Set up the appropriate MediaSource depending on the chosen option
status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);
        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}
status_t StagefrightRecorder::setupCameraSource(
        sp<CameraSource> *cameraSource) {
    status_t err = OK;
    if ((err = checkVideoEncoderCapabilities()) != OK) {
        return err;
    }
    Size videoSize;
    videoSize.width = mVideoWidth;
    videoSize.height = mVideoHeight;
    if (mCaptureTimeLapse) {
        if (mTimeBetweenTimeLapseFrameCaptureUs < 0) {
            ALOGE("Invalid mTimeBetweenTimeLapseFrameCaptureUs value: %lld",
                mTimeBetweenTimeLapseFrameCaptureUs);
            return BAD_VALUE;
        }

        mCameraSourceTimeLapse = CameraSourceTimeLapse::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId,
                videoSize, mFrameRate, mPreviewSurface,
                mTimeBetweenTimeLapseFrameCaptureUs);
        *cameraSource = mCameraSourceTimeLapse;
    } else {
        *cameraSource = CameraSource::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId, videoSize, mFrameRate,
                mPreviewSurface, true /*storeMetaDataInVideoBuffers*/);
    }
    mCamera.clear();
    mCameraProxy.clear();
    if (*cameraSource == NULL) {
        return UNKNOWN_ERROR;
    }

    if ((*cameraSource)->initCheck() != OK) {
        (*cameraSource).clear();
        *cameraSource = NULL;
        return NO_INIT;
    }

    // When frame rate is not set, the actual frame rate will be set to
    // the current frame rate being used.
    if (mFrameRate == -1) {
        int32_t frameRate = 0;
        CHECK ((*cameraSource)->getFormat()->findInt32(
                    kKeyFrameRate, &frameRate));
        ALOGI("Frame rate is not explicitly set. Use the current frame "
             "rate (%d fps)", frameRate);
        mFrameRate = frameRate;
    }

    CHECK(mFrameRate != -1);

    mIsMetaDataStoredInVideoBuffers =
        (*cameraSource)->isMetaDataStoredInVideoBuffers();

    return OK;
}
status_t StagefrightRecorder::setupVideoEncoder(
        sp<MediaSource> cameraSource,
        int32_t videoBitRate,
        sp<MediaSource> *source) {
    source->clear();

    sp<MetaData> enc_meta = new MetaData;
    enc_meta->setInt32(kKeyBitRate, videoBitRate);
    enc_meta->setInt32(kKeyFrameRate, mFrameRate);

    switch (mVideoEncoder) {
        case VIDEO_ENCODER_H263:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
            break;

        case VIDEO_ENCODER_MPEG_4_SP:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
            break;

        case VIDEO_ENCODER_H264:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            break;

        default:
            CHECK(!"Should not be here, unsupported video encoding.");
            break;
    }

    sp<MetaData> meta = cameraSource->getFormat();

    int32_t width, height, stride, sliceHeight, colorFormat;
    CHECK(meta->findInt32(kKeyWidth, &width));
    CHECK(meta->findInt32(kKeyHeight, &height));
    CHECK(meta->findInt32(kKeyStride, &stride));
    CHECK(meta->findInt32(kKeySliceHeight, &sliceHeight));
    CHECK(meta->findInt32(kKeyColorFormat, &colorFormat));

    enc_meta->setInt32(kKeyWidth, width);
    enc_meta->setInt32(kKeyHeight, height);
    enc_meta->setInt32(kKeyIFramesInterval, mIFramesIntervalSec);
    enc_meta->setInt32(kKeyStride, stride);
    enc_meta->setInt32(kKeySliceHeight, sliceHeight);
    enc_meta->setInt32(kKeyColorFormat, colorFormat);
    if (mVideoTimeScale > 0) {
        enc_meta->setInt32(kKeyTimeScale, mVideoTimeScale);
    }
    if (mVideoEncoderProfile != -1) {
        enc_meta->setInt32(kKeyVideoProfile, mVideoEncoderProfile);
    }
    if (mVideoEncoderLevel != -1) {
        enc_meta->setInt32(kKeyVideoLevel, mVideoEncoderLevel);
    }

    OMXClient client;
    CHECK_EQ(client.connect(), (status_t)OK);

    uint32_t encoder_flags = 0;
    if (mIsMetaDataStoredInVideoBuffers) {
        encoder_flags |= OMXCodec::kStoreMetaDataInVideoBuffers;
    }

    // Do not wait for all the input buffers to become available.
    // This give timelapse video recording faster response in
    // receiving output from video encoder component.
    if (mCaptureTimeLapse) {
        encoder_flags |= OMXCodec::kOnlySubmitOneInputBufferAtOneTime;
    }

    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);
    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }

    *source = encoder;

    return OK;
}

This is where StagefrightRecorder hooks up with OMXCodec.
A configuration file named media_codecs.xml declares which codecs a device supports.
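For reference, the entries in media_codecs.xml look roughly like this (an illustrative fragment only; the exact codec names, and the quirk/limit attributes a real file carries, vary per device and vendor):

```xml
<MediaCodecs>
    <Encoders>
        <!-- hardware AVC encoder registered by the vendor -->
        <MediaCodec name="OMX.qcom.video.encoder.avc" type="video/avc" />
        <!-- software fallback shipped with AOSP -->
        <MediaCodec name="OMX.google.h264.encoder" type="video/avc" />
    </Encoders>
    <Decoders>
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>
```

When OMXCodec::Create() is asked for an encoder matching the MIME type in enc_meta, candidates are taken from this list, with hardware implementations generally preferred over the OMX.google.* software ones.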

When we record MPEG-4 there is audio as well, so setupAudioEncoder is called afterwards. We won't expand that method here; in short, it adds the audio as another Track to the MPEG4Writer.
As an aside: Google says setupAudioEncoder is deliberately placed last so that the start-recording beep does not end up in the recorded file, but in practice this still has a bug: on some devices the beep gets recorded anyway. Everyone who hits this works around it by having the app play the sound itself.

Also, MPEG4Writer's
start(MetaData*)
kicks off two routines:
a) startWriterThread

starts a thread that performs the actual writing:

void MPEG4Writer::threadFunc() {
    ALOGV("threadFunc");

    prctl(PR_SET_NAME, (unsigned long)"MPEG4Writer", 0, 0, 0);

    Mutex::Autolock autoLock(mLock);
    while (!mDone) {
        Chunk chunk;
        bool chunkFound = false;

        while (!mDone && !(chunkFound = findChunkToWrite(&chunk))) {
            mChunkReadyCondition.wait(mLock);
        }

        // Actual write without holding the lock in order to
        // reduce the blocking time for media track threads.
        if (chunkFound) {
            mLock.unlock();
            writeChunkToFile(&chunk);
            mLock.lock();
        }
    }

    writeAllChunks();
}
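The loop above is a classic condition-variable consumer; it can be reproduced with plain std::thread to make the structure clearer (a simplified model with invented names like ChunkWriter and stopAndCollect; strings stand in for chunks and the "file" is just a vector):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

class ChunkWriter {
public:
    ChunkWriter() : mDone(false), mThread(&ChunkWriter::threadFunc, this) {}

    // Called by "track threads": queue a chunk and wake the writer,
    // mirroring mChunkReadyCondition.signal() in MPEG4Writer.
    void queueChunk(std::string chunk) {
        {
            std::lock_guard<std::mutex> lock(mLock);
            mChunks.push_back(std::move(chunk));
        }
        mChunkReadyCondition.notify_one();
    }

    // Mirrors stop(): raise mDone, join the thread, return what was written.
    std::vector<std::string> stopAndCollect() {
        {
            std::lock_guard<std::mutex> lock(mLock);
            mDone = true;
        }
        mChunkReadyCondition.notify_one();
        mThread.join();
        return mWritten;
    }

private:
    void threadFunc() {
        std::unique_lock<std::mutex> lock(mLock);
        while (!mDone) {
            mChunkReadyCondition.wait(
                    lock, [this] { return mDone || !mChunks.empty(); });
            if (!mChunks.empty()) {
                std::string chunk = std::move(mChunks.front());
                mChunks.pop_front();
                lock.unlock();            // write without holding the lock, so
                writeChunkToFile(chunk);  // producers are not blocked by I/O
                lock.lock();
            }
        }
        while (!mChunks.empty()) {        // the writeAllChunks() equivalent
            writeChunkToFile(mChunks.front());
            mChunks.pop_front();
        }
    }

    // Stands in for the real file write.
    void writeChunkToFile(const std::string &chunk) { mWritten.push_back(chunk); }

    std::mutex mLock;
    std::condition_variable mChunkReadyCondition;
    std::deque<std::string> mChunks;
    std::vector<std::string> mWritten;
    bool mDone;
    std::thread mThread;
};
```

Note how dropping the lock around the write is exactly the optimization the AOSP comment describes: track threads can keep queueing chunks while a slow file write is in flight.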

b) startTracks

starts every track; each Track::start() in turn spawns its own thread:

status_t MPEG4Writer::startTracks(MetaData *params) {
    for (List<Track *>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        status_t err = (*it)->start(params);

        if (err != OK) {
            for (List<Track *>::iterator it2 = mTracks.begin();
                 it2 != it; ++it2) {
                (*it2)->stop();
            }
            return err;
        }
    }
    return OK;
}

status_t MPEG4Writer::Track::start(MetaData *params) {
    ...

    initTrackingProgressStatus(params);

    ...

    status_t err = mSource->start(meta.get()); // this ends up calling CameraSource::start(); the two are tied together

    ...

    pthread_create(&mThread, &attr, ThreadWrapper, this);
    return OK;
}
void *MPEG4Writer::Track::ThreadWrapper(void *me) {
    Track *track = static_cast<Track *>(me);

    status_t err = track->threadEntry();
    return (void *) err;
}

status_t MPEG4Writer::Track::threadEntry() runs on the newly spawned thread. It loops, repeatedly read()ing data from the CameraSource and writing it into the file. That data, of course, comes back from the driver (see CameraSourceListener): CameraSource keeps frames arriving from the driver in a list named mFramesReceived and calls mFrameAvailableCondition.signal() whenever one arrives; frames received before recording has started are simply dropped. Note that MediaWriter calls CameraSource::start() first and only then starts writing tracks.
Note: strictly speaking, MPEG4Writer reads its data from OMXCodec, not from CameraSource directly. Frames first reach CameraSource, the codec encodes them, and only then does MPEG4Writer write them to the file. For how buffers travel between CameraSource/OMXCodec/MPEG4Writer, see the discussion of buffer transport at http://guoh.org/lifelog/2013/06/interaction-between-stagefright-and-codec/
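The producer/consumer handoff described above can be modeled in a few lines (a deliberately single-threaded sketch with invented names like FakeCameraSource; the real CameraSource guards the list with mLock and read() blocks on mFrameAvailableCondition instead of returning false):

```cpp
#include <cstdint>
#include <deque>

class FakeCameraSource {
public:
    void start() { mStarted = true; }

    // Plays the role of CameraSourceListener's dataCallbackTimestamp():
    // the "driver" delivers one frame per call.
    void dataCallbackTimestamp(int64_t timestampUs, int frameId) {
        if (!mStarted) {
            releaseOneRecordingFrame(frameId);  // not recording yet: drop it
            return;
        }
        mFramesReceived.push_back({timestampUs, frameId});
        // real code: mFrameAvailableCondition.signal();
    }

    // Plays the role of CameraSource::read(): hand one frame to the writer.
    bool read(int *frameId) {
        if (mFramesReceived.empty()) return false;  // real code waits here
        *frameId = mFramesReceived.front().id;
        mFramesReceived.pop_front();
        return true;
    }

    int droppedFrames() const { return mDropped; }

private:
    struct Frame { int64_t timestampUs; int id; };

    // Real code returns the buffer to the camera driver for reuse.
    void releaseOneRecordingFrame(int) { ++mDropped; }

    std::deque<Frame> mFramesReceived;
    bool mStarted = false;
    int mDropped = 0;
};
```

This captures the ordering the text insists on: frames that arrive before start() never reach the writer, and after start() the writer drains mFramesReceived in FIFO order.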

Looking back: what does Stagefright actually do? I see it mostly as glue. It sits at the MediaPlayerService layer and binds MediaSource, MediaWriter, the codec, and the upper-layer MediaRecorder together; that is its biggest job. Replacing OpenCORE with it also fits Google's usual engineering-first style (as opposed to the complex academic approach; plenty of Google's work is complex too, but it generally solves problems in the simplest way that works).
What feels a little odd is that MediaRecorder lives inside MediaPlayerService, two things that look like opposites. Maybe one day they will be renamed, or split apart, who knows~~

Of course this is only a rough overview; I will try to cover the codec side in a dedicated analysis later.

Some details were left out above; the points worth noting are listed here:

  1. Time-lapse recording
    CameraSource's counterpart here is CameraSourceTimeLapse.

The actual work happens in
dataCallbackTimestamp
which calls skipCurrentFrame.

It keeps a few variables for the bookkeeping:
mTimeBetweenTimeLapseVideoFramesUs (1E6/videoFrameRate) // interval between two frames in the output video
mLastTimeLapseFrameRealTimestampUs // real arrival time of the previous kept frame
From the frame rate it computes how far apart two kept frames must be, and everything in between is dropped via releaseOneRecordingFrame.
In other words, what the driver delivers is unchanged; the frames are discarded purely in software.
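The skip decision itself is just timestamp arithmetic; a sketch (simplified from the real CameraSourceTimeLapse logic, which additionally rewrites the kept frames' timestamps so the output plays back at the normal frame rate):

```cpp
#include <cstdint>

struct TimeLapseState {
    int64_t lastTimeLapseFrameRealTimestampUs = 0;  // last *kept* frame
    bool firstFrame = true;
};

// Returns true when the incoming frame should be dropped
// (i.e. handed back via releaseOneRecordingFrame).
bool skipCurrentFrame(TimeLapseState &st,
                      int64_t timestampUs,
                      int64_t timeBetweenFrameCaptureUs) {
    if (st.firstFrame) {                 // always keep the very first frame
        st.firstFrame = false;
        st.lastTimeLapseFrameRealTimestampUs = timestampUs;
        return false;
    }
    if (timestampUs < st.lastTimeLapseFrameRealTimestampUs
                          + timeBetweenFrameCaptureUs) {
        return true;                     // too soon after the last kept frame
    }
    st.lastTimeLapseFrameRealTimestampUs = timestampUs;
    return false;
}
```

With a capture interval of one second, a 30 fps driver stream collapses to one kept frame per second, and those frames are then spaced mTimeBetweenTimeLapseVideoFramesUs apart in the file, which is what produces the sped-up playback.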
