Android 6.0 Source Code Analysis: The Preview Flow Under Camera API 2.0

This article analyzes the camera preview flow under Camera API 2.0, based on the Android 6.0 source code. The earlier article "Android 6.0 Source Code Analysis: The Initialization Flow Under Camera API 2.0" covered the open (initialization) flow of the built-in Camera2 app in detail. During that open flow a preview callback was defined but not analyzed at the time: once the camera is opened, the preview is started automatically by calling OneCameraImpl's startPreview method with the preview Surface. That call is where capturing and drawing on-screen preview frames really begins.
Index of the Camera2 analysis articles:
Android 6.0 Source Code Analysis: An Overview of Camera API 2.0
Android 6.0 Source Code Analysis: Camera2 HAL
Android 6.0 Source Code Analysis: The Initialization Flow Under Camera API 2.0
Android 6.0 Source Code Analysis: The Preview Flow Under Camera API 2.0
Android 6.0 Source Code Analysis: The Capture Flow Under Camera API 2.0
Android 6.0 Source Code Analysis: The Video Flow Under Camera API 2.0
Applications of Camera API 2.0


1. Application-Layer Flow of the Camera2 Preview

The preview flow always begins with startPreview, so let's start with the code of the startPreview method:

//OneCameraImpl.java
@Override
public void startPreview(Surface previewSurface, CaptureReadyCallback listener) {
    mPreviewSurface = previewSurface;
    //Set up the preview environment using the Surface and the CaptureReadyCallback
    setupAsync(mPreviewSurface, listener);
}

An important callback here is CaptureReadyCallback; we will come back to it later. First, let's look at setupAsync:

//OneCameraImpl.java
private void setupAsync(final Surface previewSurface, final CaptureReadyCallback listener) {
    mCameraHandler.post(new Runnable() {
        @Override
        public void run() {
            //Set up the preview environment
            setup(previewSurface, listener);
        }
    });
}

Here a Runnable is posted to mCameraHandler. Posting does not create a new thread; the Runnable's run() method simply executes on whatever thread backs the handler's Looper. A minimal sketch of this Handler-posting pattern follows, after which we continue with the setup() method.
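The following is a minimal, self-contained sketch of this pattern (the class name and the thread name "CameraBackground" are illustrative, not taken from OneCameraImpl): posting to a Handler only enqueues work for the thread that owns the Handler's Looper.

import android.os.Handler;
import android.os.HandlerThread;

public class HandlerPostSketch {
    static void demo() {
        // A dedicated worker thread with its own Looper (hypothetical name).
        HandlerThread cameraThread = new HandlerThread("CameraBackground");
        cameraThread.start();

        // The Handler is bound to the worker thread's Looper.
        Handler cameraHandler = new Handler(cameraThread.getLooper());

        // post() only enqueues the Runnable; run() later executes on
        // "CameraBackground", not on the thread that called post().
        cameraHandler.post(new Runnable() {
            @Override
            public void run() {
                // Work that must run on the camera thread goes here.
            }
        });
    }
}

With that pattern in mind, here is the setup() method: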

// OneCameraImpl.java
private void setup(Surface previewSurface, final CaptureReadyCallback listener) {
    try {
        if (mCaptureSession != null) {
            mCaptureSession.abortCaptures();
            mCaptureSession = null;
        }
        List<Surface> outputSurfaces = new ArrayList<Surface>(2);
        outputSurfaces.add(previewSurface);
        outputSurfaces.add(mCaptureImageReader.getSurface());
        //Create a CaptureSession that will be used to send preview requests to the camera device
        mDevice.createCaptureSession(outputSurfaces, new CameraCaptureSession.StateCallback() {

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                //If configuration fails, invoke CaptureReadyCallback.onSetupFailed()
                listener.onSetupFailed();
            }

            @Override
            public void onConfigured(CameraCaptureSession session) {
                mCaptureSession = session;
                mAFRegions = ZERO_WEIGHT_3A_REGION;
                mAERegions = ZERO_WEIGHT_3A_REGION;
                mZoomValue = 1f;
                mCropRegion = cropRegionForZoom(mZoomValue);
                //Call repeatingPreview() to start the preview
                boolean success = repeatingPreview(null);
                if (success) {
                    //On success, invoke CaptureReadyCallback.onReadyForCapture() to signal that capture is ready
                    listener.onReadyForCapture();
                } else {
                    //On failure, invoke CaptureReadyCallback.onSetupFailed() to signal that preview setup failed
                    listener.onSetupFailed();
                }
            }

            @Override
            public void onClosed(CameraCaptureSession session) {
                super.onClosed(session);
            }
        }, mCameraHandler);
    } catch (CameraAccessException ex) {
        Log.e(TAG, "Could not set up capture session", ex);
        listener.onSetupFailed();
    }
}

setup() first calls the device's createCaptureSession method to create a session and supplies a CameraCaptureSession.StateCallback for its state changes. When the session has been configured successfully, onConfigured() is invoked; there, repeatingPreview() is called to start the preview, and depending on the result the previously supplied CaptureReadyCallback is used to notify the caller that capture can proceed (or that setup failed). Let's analyze repeatingPreview() first:

// OneCameraImpl.java
private boolean repeatingPreview(Object tag) {
    try {
        //Create a preview CaptureRequest via the CameraDevice
        CaptureRequest.Builder builder = mDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_PREVIEW);
        //Add the preview Surface as the request target
        builder.addTarget(mPreviewSurface);
        //Set the control mode for preview
        builder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
        addBaselineCaptureKeysToRequest(builder);
        //Send the repeating request through the session; mCaptureCallback receives the per-frame capture results
        mCaptureSession.setRepeatingRequest(builder.build(), mCaptureCallback,mCameraHandler);
        Log.v(TAG, String.format("Sent repeating Preview request, zoom = %.2f", mZoomValue));
        return true;
    } catch (CameraAccessException ex) {
        Log.e(TAG, "Could not access camera setting up preview.", ex);
        return false;
    }
}
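The addBaselineCaptureKeysToRequest(builder) call above fills in the common 3A-related keys before the request is sent. The following is only a hedged sketch of what such a helper might set, using public CaptureRequest keys; the real method in OneCameraImpl may set different or additional keys. The regions and crop rectangle correspond to the mAFRegions, mAERegions and mCropRegion fields assigned in onConfigured() above.

// Illustrative only -- a hedged sketch of a "baseline keys" helper, not the actual
// addBaselineCaptureKeysToRequest implementation in OneCameraImpl.
import android.graphics.Rect;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.params.MeteringRectangle;

public class BaselineKeysSketch {
    static void addBaselineKeys(CaptureRequest.Builder builder,
            MeteringRectangle[] afRegions, MeteringRectangle[] aeRegions, Rect cropRegion) {
        // Continuous autofocus is the usual choice while previewing.
        builder.set(CaptureRequest.CONTROL_AF_MODE,
                CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        // Let the device run auto-exposure.
        builder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
        // 3A metering regions and the zoom crop, as maintained by the caller.
        builder.set(CaptureRequest.CONTROL_AF_REGIONS, afRegions);
        builder.set(CaptureRequest.CONTROL_AE_REGIONS, aeRegions);
        builder.set(CaptureRequest.SCALER_CROP_REGION, cropRegion);
    }
}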

repeatingPreview() first calls CameraDeviceImpl's createCaptureRequest to create a CaptureRequest of type TEMPLATE_PREVIEW, and then calls CameraCaptureSessionImpl's setRepeatingRequest to send that request out:

//CameraCaptureSessionImpl.java
@Override
public synchronized int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    if (request == null) {
        throw new IllegalArgumentException("request must not be null");
    } else if (request.isReprocess()) {
        throw new IllegalArgumentException("repeating reprocess requests are not supported");
    }

    checkNotClosed();
    handler = checkHandler(handler, callback);
    ...
    //Add this request to the pending sequence list
    return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,createCaptureCallbackProxy(
        handler, callback), mDeviceHandler));
}

This concludes the request side of the application-layer preview flow. Next comes the handling of the result: if the preview starts successfully, CaptureReadyCallback.onReadyForCapture() is invoked. Here is the CaptureReadyCallback implementation:

//CaptureModule.java
new CaptureReadyCallback() {
    @Override
    public void onSetupFailed() {
        mCameraOpenCloseLock.release();
        Log.e(TAG, "Could not set up preview.");
        mMainThread.execute(new Runnable() {
            @Override
            public void run() {
                if (mCamera == null) {
                    Log.d(TAG, "Camera closed, aborting.");
                    return;
                }
                mCamera.close();
                mCamera = null;
            }
        });
    }

    @Override
    public void onReadyForCapture() {
        mCameraOpenCloseLock.release();
        mMainThread.execute(new Runnable() {
            @Override
            public void run() {
                Log.d(TAG, "Ready for capture.");
                if (mCamera == null) {
                    Log.d(TAG, "Camera closed, aborting.");
                    return;
                }
                //
                onPreviewStarted();
                onReadyStateChanged(true);
                mCamera.setReadyStateChangedListener(CaptureModule.this);
                mUI.initializeZoom(mCamera.getMaxZoom());
                mCamera.setFocusStateListener(CaptureModule.this);
            }
        });
    }
}

As analyzed above, once the preview has started successfully onReadyForCapture() is invoked. It mainly notifies the main thread of the state change and registers the camera's ReadyStateChangedListener, whose callback looks like this:

//CaptureModule.java
@Override
public void onReadyStateChanged(boolean readyForCapture) {
    if (readyForCapture) {
        mAppController.getCameraAppUI().enableModeOptions();
    }
    mAppController.setShutterEnabled(readyForCapture);
}

As the code shows, once the state becomes ready-for-capture, CameraActivity's setShutterEnabled method is called to enable the shutter button; in other words, the preview is up and the user can press the shutter to take a picture. With that, the application-layer preview flow is essentially complete. The figure below is the sequence diagram of the key application-layer calls:
[Figure: sequence diagram of the key application-layer preview calls]
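To consolidate section 1, here is a minimal, self-contained sketch of the same preview flow expressed directly against the public Camera2 API (android.hardware.camera2) rather than the Camera2 app's OneCameraImpl wrapper. Names such as previewSurface and backgroundHandler are assumptions made for the example.

// Minimal preview sketch against the public Camera2 API (assumed names: cameraDevice,
// previewSurface, backgroundHandler).
import java.util.Arrays;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public class PreviewSketch {
    void startPreview(final CameraDevice cameraDevice, final Surface previewSurface,
            final Handler backgroundHandler) throws CameraAccessException {
        // 1. Create a session with the preview Surface as its only output.
        cameraDevice.createCaptureSession(Arrays.asList(previewSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        try {
                            // 2. Build a TEMPLATE_PREVIEW request targeting the Surface.
                            CaptureRequest.Builder builder = cameraDevice
                                    .createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                            builder.addTarget(previewSurface);
                            builder.set(CaptureRequest.CONTROL_MODE,
                                    CameraMetadata.CONTROL_MODE_AUTO);
                            // 3. Submit it as a repeating request: frames keep coming
                            //    until the request is stopped or replaced.
                            session.setRepeatingRequest(builder.build(),
                                    /*callback*/ null, backgroundHandler);
                        } catch (CameraAccessException e) {
                            // Preview could not be started.
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // Session configuration failed; preview cannot start.
                    }
                }, backgroundHandler);
    }
}

This mirrors exactly the call chain traced above: createCaptureSession → onConfigured → createCaptureRequest(TEMPLATE_PREVIEW) → setRepeatingRequest.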


2. Native-Layer Flow of the Camera2 Preview

Analyzing the native side of the preview took considerable effort, so if anything below is inaccurate, corrections are welcome. At the end of section 1, CameraDeviceImpl's setRepeatingRequest is called to submit the request. As discussed in "Android 6.0 Source Code Analysis: An Overview of Camera API 2.0", the Camera2 framework uses the ICameraDeviceUser Binder interface for Java-to-native IPC, so the request submission is handled in the native BnCameraDeviceUser::onTransact method:

//ICameraDeviceUser.cpp
status_t BnCameraDeviceUser::onTransact(uint32_t code, const Parcel& data, Parcel* reply, 
        uint32_t flags){
    switch(code) {
        …
        //Request submission
        case SUBMIT_REQUEST: {
            CHECK_INTERFACE(ICameraDeviceUser, data, reply);

            // arg0 = request
            sp<CaptureRequest> request;
            if (data.readInt32() != 0) {
                request = new CaptureRequest();
                request->readFromParcel(const_cast<Parcel*>(&data));
            }

            // arg1 = streaming (bool)
            bool repeating = data.readInt32();

            // return code: requestId (int32)
            reply->writeNoException();
            int64_t lastFrameNumber = -1;
            //Call submitRequest() on the object implementing BnCameraDeviceUser and write its return code into the reply Parcel
            reply->writeInt32(submitRequest(request, repeating, &lastFrameNumber));
            reply->writeInt32(1);
            reply->writeInt64(lastFrameNumber);

            return NO_ERROR;
        } break;
        ...
}

CameraDeviceClientBase inherits from BnCameraDeviceUser, i.e. it is the server-side (Bn) implementation of the ICameraDeviceUser interface inside the camera service, so the call above lands in its submitRequest method. The mechanics of Binder IPC itself are not analyzed here; please refer to other material:

//CameraDeviceClient.cpp
status_t CameraDeviceClient::submitRequest(sp<CaptureRequest> request,bool streaming,
        /*out*/int64_t* lastFrameNumber) {
    List<sp<CaptureRequest> > requestList;
    requestList.push_back(request);
    return submitRequestList(requestList, streaming, lastFrameNumber);
}

This is just a thin wrapper; let's continue with submitRequestList:

// CameraDeviceClient
status_t CameraDeviceClient::submitRequestList(List<sp<CaptureRequest> > requests,bool streaming, 
        int64_t* lastFrameNumber) {
    ...
    //List of metadata requests
    List<const CameraMetadata> metadataRequestList;
    ...
    for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end(); ++it) {
        sp<CaptureRequest> request = *it;
        ...
        //Initialize the CameraMetadata from the request
        CameraMetadata metadata(request->mMetadata);
        ...
        //Reserve capacity for the output stream ids
        Vector<int32_t> outputStreamIds;
        outputStreamIds.setCapacity(request->mSurfaceList.size());
        //Map each output Surface to its stream id
        for (size_t i = 0; i < request->mSurfaceList.size(); ++i) {
            sp<Surface> surface = request->mSurfaceList[i];
            if (surface == 0) continue;
            sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
            int idx = mStreamMap.indexOfKey(IInterface::asBinder(gbp));
            ...
            int streamId = mStreamMap.valueAt(idx);
            outputStreamIds.push_back(streamId);
        }
        //Update the request metadata with the output stream ids
        metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS, &outputStreamIds[0],
                        outputStreamIds.size());
        if (request->mIsReprocess) {
            metadata.update(ANDROID_REQUEST_INPUT_STREAMS, &mInputStream.id, 1);
        }
        metadata.update(ANDROID_REQUEST_ID, &requestId, /*size*/1);
        loopCounter++; // loopCounter starts from 1
        //Append to the request list
        metadataRequestList.push_back(metadata);
    }
    mRequestIdCounter++;

    if (streaming) {
        //Preview (repeating) requests take this path
        res = mDevice->setStreamingRequestList(metadataRequestList, lastFrameNumber);
        if (res != OK) {
            ...
        } else {
            mStreamingRequestList.push_back(requestId);
        }
    } else {
        //One-shot requests such as capture take this path
        res = mDevice->captureList(metadataRequestList, lastFrameNumber);
        if (res != OK) {
            ...
        }
    }
    if (res == OK) {
        return requestId;
    }
    return res;
}
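As an aside before descending into Camera3Device: the streaming flag above corresponds one-to-one to the two submission methods the Java session API exposes. The sketch below (class and parameter names are invented for illustration) shows the two paths side by side.

// Illustrative only: how the native "streaming" flag maps onto the Java-level session calls.
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;

public class SubmitPathsSketch {
    void submit(CameraCaptureSession session, CaptureRequest previewRequest,
            CaptureRequest stillRequest, CameraCaptureSession.CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        // streaming == true  -> setStreamingRequestList(): the request keeps repeating (preview).
        session.setRepeatingRequest(previewRequest, callback, handler);

        // streaming == false -> captureList(): a one-shot request (e.g. a still capture).
        session.capture(stillRequest, callback, handler);
    }
}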

Both setStreamingRequestList and captureList call submitRequestsHelper, differing only in the repeating argument: true for the former, false for the latter. Preview, analyzed in this section, goes through setStreamingRequestList, and under API 2.0 the device implementation is Camera3Device, so let's look at its submitRequestsHelper:

// Camera3Device.cpp
status_t Camera3Device::submitRequestsHelper(const List<const CameraMetadata> &requests, 
        bool repeating,/*out*/int64_t *lastFrameNumber) {
    ...
    RequestList requestList;
    //CaptureRequest objects are created in here, and configureStreamsLocked is invoked to configure the streams; this is where the important captureResultCb callback (analyzed later) gets set up
    res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);
    ...
    if (repeating) {
        //Looks familiar: the method name mirrors setRepeatingRequest at the application layer
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        //When repeating is false (no repetition needed), submit the requests through this path
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }
    ...
    return res;
}

As the code shows, Camera3Device owns a RequestThread, and the requests coming from the application layer are handed to it via setRepeatingRequests or queueRequestList. Let's continue with setRepeatingRequests:

// Camera3Device.cpp
status_t Camera3Device::RequestThread::setRepeatingRequests(const RequestList &requests,
        /*out*/int64_t *lastFrameNumber) {
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    mRepeatingRequests.clear();
    //Insert the requests into the mRepeatingRequests list
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}
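The essential behaviour of RequestThread is that, as long as no one-shot requests are queued, it keeps re-issuing the current repeating request list to the HAL, one request per frame. The sketch below is a simplified Java analogy of that loop; it is not the real C++ threadLoop, and names such as sendToHal are invented for illustration.

// Simplified analogy of Camera3Device::RequestThread (illustrative Java, not AOSP code).
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class RequestThreadSketch extends Thread {
    private final Queue<Object> oneShotRequests = new ArrayDeque<>();  // from queueRequestList()
    private final List<Object> repeatingRequests = new ArrayList<>();  // from setRepeatingRequests()
    private int repeatingIndex = 0;

    @Override
    public void run() {
        while (!isInterrupted()) {
            Object next;
            synchronized (this) {
                if (!oneShotRequests.isEmpty()) {
                    // One-shot requests (e.g. still capture) take priority.
                    next = oneShotRequests.poll();
                } else if (!repeatingRequests.isEmpty()) {
                    // Otherwise keep cycling through the repeating (preview) requests.
                    next = repeatingRequests.get(repeatingIndex);
                    repeatingIndex = (repeatingIndex + 1) % repeatingRequests.size();
                } else {
                    next = null; // Nothing to do; the real thread pauses here.
                }
            }
            if (next != null) {
                sendToHal(next); // Hypothetical stand-in for handing the request to the HAL.
            }
        }
    }

    private void sendToHal(Object request) { /* hand the request to the HAL */ }
}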

With that, the native-layer preview flow is essentially complete; the remaining work is handed over to the camera HAL. The sequence diagram of the native-layer calls is shown below:
[Figure: sequence diagram of the native-layer preview calls]


3. Camera HAL-Layer Flow of the Camera2 Preview

This section does not revisit the HAL layer's initialization and configuration; it only follows how frame metadata is handled during the preview flow. For a detailed HAL analysis see "Android 6.0 Source Code Analysis: Camera2 HAL". As mentioned in section 2, when submitRequestsHelper calls convertMetadataListToRequestListLocked, the CaptureRequest objects are created and the streams are configured, which is where the captureResultCb callback is set up. After the request has been submitted, the HAL therefore reports results back through captureResultCb, so let's start there:

// QCamera3HWI.cpp
void QCamera3HardwareInterface::captureResultCb(mm_camera_super_buf_t *metadata_buf,
        camera3_stream_buffer_t *buffer, uint32_t frame_number)
{
    if (metadata_buf) {
        if (mBatchSize) {
            //Batch mode: internally this still loops over handleMetadataWithLock
            handleBatchMetadata(metadata_buf, true /* free_and_bufdone_meta_buf */);
        } else { /* mBatchSize = 0 */
            pthread_mutex_lock(&mMutex);    
            //Handle the metadata
            handleMetadataWithLock(metadata_buf, true /* free_and_bufdone_meta_buf */);
            pthread_mutex_unlock(&mMutex);
        }
    } else {
        pthread_mutex_lock(&mMutex);
        handleBufferWithLock(buffer, frame_number);
        pthread_mutex_unlock(&mMutex);
    }
    return;
}

So there are two paths: batched metadata processing in a loop, or direct processing; either way the metadata ends up in handleMetadataWithLock:

// QCamera3HWI.cpp
void QCamera3HardwareInterface::handleMetadataWithLock(mm_camera_super_buf_t *metadata_buf, 
        bool free_and_bufdone_meta_buf){
    ...
    //Partial result on process_capture_result for timestamp
    if (urgent_frame_number_valid) {
        ...
        for (List<PendingRequestInfo>::iterator i =mPendingRequestsList.begin(); 
                i != mPendingRequestsList.end(); i++) {
            ...
            if (i->frame_number == urgent_frame_number &&i->bUrgentReceived == 0) {
                camera3_capture_result_t result;
                memset(&result, 0, sizeof(camera3_capture_result_t));
                i->partial_result_cnt++;
                i->bUrgentReceived = 1;
                //Extract the 3A data
                result.result = translateCbUrgentMetadataToResultMetadata(metadata);
                ...
                //Hand the capture result to the framework
                mCallbackOps->process_capture_result(mCallbackOps, &result);
                //Free the camera_metadata_t
                free_camera_metadata((camera_metadata_t *)result.result);
                break;
            }
        }
    }
    ...
    for (List<PendingRequestInfo>::iterator i = mPendingRequestsList.begin();
            i != mPendingRequestsList.end() && i->frame_number <= frame_number;) {
        camera3_capture_result_t result;
        memset(&result, 0, sizeof(camera3_capture_result_t));
        ...
        if (i->frame_number < frame_number) {
            //Zero the notification structure
            camera3_notify_msg_t notify_msg;
            memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));
            //Set the message type
            notify_msg.type = CAMERA3_MSG_SHUTTER;
            notify_msg.message.shutter.frame_number = i->frame_number;
            notify_msg.message.shutter.timestamp = (uint64_t)capture_time -
                (urgent_frame_number - i->frame_number) * NSEC_PER_33MSEC;
            //Notify the framework of the CAMERA3_MSG_SHUTTER event via the callback
            mCallbackOps->notify(mCallbackOps, &notify_msg);
            ...
            CameraMetadata dummyMetadata;
            //Update the metadata
            dummyMetadata.update(ANDROID_SENSOR_TIMESTAMP,
                    &i->timestamp, 1);
            dummyMetadata.update(ANDROID_REQUEST_ID,
                    &(i->request_id), 1);
            //Release the metadata into the result
            result.result = dummyMetadata.release();
        } else {
            camera3_notify_msg_t notify_msg;
            memset(&notify_msg, 0, sizeof(camera3_notify_msg_t));

            // Send shutter notify to frameworks
            notify_msg.type = CAMERA3_MSG_SHUTTER;
            ...
            //Translate the HAL metadata into the result metadata
            result.result = translateFromHalMetadata(metadata,
                    i->timestamp, i->request_id, i->jpegMetadata, i->pipeline_depth,
                    i->capture_intent);
            saveExifParams(metadata);
            if (i->blob_request) {
                ...
                if (enabled && metadata->is_tuning_params_valid) {
                    //Dump the metadata to a file
                    dumpMetadataToFile(metadata->tuning_params, mMetaFrameCount, enabled,
                        "Snapshot",frame_number);
                }
                mPictureChannel->queueReprocMetadata(metadata_buf);
            } else {
                // Return metadata buffer
                if (free_and_bufdone_meta_buf) {
                    mMetadataChannel->bufDone(metadata_buf);
                    free(metadata_buf);
                }
            }
        }
        ...
    }
}

In handleMetadataWithLock, the callback's process_capture_result is called first to handle the capture result, and then the callback's notify is called to deliver a CAMERA3_MSG_SHUTTER message. The implementation behind process_capture_result is Camera3Device::processCaptureResult, so let's analyze that first:

//Camera3Device.cpp
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    ...
    //For HAL 3.2+, if the HAL does not support partial results it must set partial_result to 1 whenever metadata is included in the result
    ...
    {
        Mutex::Autolock l(mInFlightLock);
        ssize_t idx = mInFlightMap.indexOfKey(frameNumber);
        ...
        InFlightRequest &request = mInFlightMap.editValueAt(idx);
        if (result->partial_result != 0)
            request.resultExtras.partialResultCount = result->partial_result;
        // Check whether the result contains only partial metadata
        if (mUsePartialResult && result->result != NULL) {
            if (mDeviceVersion >= CAMERA_DEVICE_API_VERSION_3_2) {// HAL version is 3.2 or newer
                if (result->partial_result > mNumPartialResults || result->partial_result < 1) {
                    //Log the error
                    return;
                }
                isPartialResult = (result->partial_result < mNumPartialResults);
                if (isPartialResult) {
                    //Append this result to the request's collected partial result
                    request.partialResult.collectedResult.append(result->result);
                }
            } else {// HAL version below 3.2
                ...
            }
            if (isPartialResult) {
                // Fire off a 3A-only result if possible
                if (!request.partialResult.haveSent3A) {
                    request.partialResult.haveSent3A =processPartial3AResult(frameNumber,
                        request.partialResult.collectedResult,request.resultExtras);
                }
            }
        }
        ...
        if (result->result != NULL && !isPartialResult) {
            if (shutterTimestamp == 0) {
                request.pendingMetadata = result->result;
                request.partialResult.collectedResult = collectedPartialResult;
            } else {
                CameraMetadata metadata;
                metadata = result->result;
                //Send the capture result
                sendCaptureResult(metadata, request.resultExtras, collectedPartialResult, 
                    frameNumber, hasInputBufferInRequest,request.aeTriggerCancelOverride);
            }
        }
        //The result has been handled; remove the in-flight request if it is complete
        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock
    ...
}

As the code shows, processCaptureResult handles both partial and complete metadata; in the end, if the result is non-null and the full data for the request has arrived, sendCaptureResult is called to send the result on:

//Camera3Device.cpp
void Camera3Device::sendCaptureResult(CameraMetadata &pendingMetadata,CaptureResultExtras 
        &resultExtras,CameraMetadata &collectedPartialResult,uint32_t frameNumber,bool reprocess,
        const AeTriggerCancelOverride_t &aeTriggerCancelOverride) {
    if (pendingMetadata.isEmpty())// Return immediately if the metadata is empty
        return;
    ...
    CaptureResult captureResult;
    captureResult.mResultExtras = resultExtras;
    captureResult.mMetadata = pendingMetadata;
    //Update the metadata with the frame number
    if (captureResult.mMetadata.update(ANDROID_REQUEST_FRAME_COUNT, (int32_t*)&frameNumber, 1) 
            != OK) {
        SET_ERR("Failed to set frame# in metadata (%d)",frameNumber);
        return;
    } else {
        ...
    }

    // Append any previous partials to form a complete result
    if (mUsePartialResult && !collectedPartialResult.isEmpty()) {
        captureResult.mMetadata.append(collectedPartialResult);
    }
    //Sort the metadata
    captureResult.mMetadata.sort();

    // Check that there's a timestamp in the result metadata
    camera_metadata_entry entry = captureResult.mMetadata.find(ANDROID_SENSOR_TIMESTAMP);
    ...
    overrideResultForPrecaptureCancel(&captureResult.mMetadata, aeTriggerCancelOverride);

    // A valid result; insert it into the result queue
    List<CaptureResult>::iterator queuedResult =mResultQueue.insert(mResultQueue.end(), 
        CaptureResult(captureResult));
    ...
    mResultSignal.signal();
}

Finally, the CaptureResult is inserted into the result queue and the result signal (mResultSignal) is raised, so at this point the capture result has been handled successfully. Before turning to the notify path that delivers the CAMERA3_MSG_SHUTTER message, the sketch below shows how these partial and complete results ultimately surface at the Java API level.
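This sketch is purely illustrative of the delivery just described, using the public CameraCaptureSession.CaptureCallback (the class name and the key read in onCaptureCompleted are my own choices): partial results arrive through onCaptureProgressed, and the complete, merged result through onCaptureCompleted.

// Illustrative sketch of how results surface at the Java API level.
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;

public class ResultCallbackSketch extends CameraCaptureSession.CaptureCallback {
    @Override
    public void onCaptureProgressed(CameraCaptureSession session, CaptureRequest request,
            CaptureResult partialResult) {
        // Partial results (e.g. early 3A state) delivered before the frame is complete.
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
            TotalCaptureResult result) {
        // The complete, merged result for this frame, assembled from the queue
        // that sendCaptureResult() fills on the native side.
        Long timestamp = result.get(CaptureResult.SENSOR_TIMESTAMP);
    }
}

Now let's analyze how the CAMERA3_MSG_SHUTTER message sent earlier by notify() is handled: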

//Camera3Device.cpp
void Camera3Device::notify(const camera3_notify_msg *msg) {

    NotificationListener *listener;
    {
        Mutex::Autolock l(mOutputLock);
        listener = mListener;
    }
    ...
    switch (msg->type) {
        case CAMERA3_MSG_ERROR: {
            notifyError(msg->message.error, listener);
            break;
        }
        case CAMERA3_MSG_SHUTTER: {
            notifyShutter(msg->message.shutter, listener);
            break;
        }
        default:
            SET_ERR("Unknown notify message from HAL: %d",
                    msg->type);
    }
}

For shutter messages it calls notifyShutter:

// Camera3Device.cpp
void Camera3Device::notifyShutter(const camera3_shutter_msg_t &msg,
        NotificationListener *listener) {

    ...
    // Set timestamp for the request in the in-flight tracking
    // and get the request ID to send upstream
    {
        Mutex::Autolock l(mInFlightLock);
        idx = mInFlightMap.indexOfKey(msg.frame_number);
        if (idx >= 0) {
            InFlightRequest &r = mInFlightMap.editValueAt(idx);
            // Call listener, if any
            if (listener != NULL) {
                //Invoke the listener's notifyShutter method
                listener->notifyShutter(r.resultExtras, msg.timestamp);
            }
            ...
            //Send the pending result to the result queue
            sendCaptureResult(r.pendingMetadata, r.resultExtras,
                r.partialResult.collectedResult, msg.frame_number,
                r.hasInputBuffer, r.aeTriggerCancelOverride);
            returnOutputBuffers(r.pendingOutputBuffers.array(),
                r.pendingOutputBuffers.size(), r.shutterTimestamp);
            r.pendingOutputBuffers.clear();
            removeInFlightRequestIfReadyLocked(idx);
        }
    }
    ...
}

notifyShutter first notifies the listener of the shutter (frame-start) event, and then calls sendCaptureResult to add the pending result to the result queue. The listener here is actually the CameraDeviceClient, so its notifyShutter is what gets invoked:

//CameraDeviceClient.cpp
void CameraDeviceClient::notifyShutter(const CaptureResultExtras& resultExtras,nsecs_t timestamp) {
    // Thread safe. Don't bother locking.
    sp<ICameraDeviceCallbacks> remoteCb = getRemoteCallback();
    if (remoteCb != 0) {
        //Invoke the application-layer callback (eventually CaptureCallback.onCaptureStarted)
        remoteCb->onCaptureStarted(resultExtras, timestamp);
    }
}

The ICameraDeviceCallbacks here corresponds to the inner class CameraDeviceCallbacks of CameraDeviceImpl.java on the Java side, so its onCaptureStarted is called:

//CameraDeviceImpl.java
@Override
public void onCaptureStarted(final CaptureResultExtras resultExtras, final long timestamp) {
    int requestId = resultExtras.getRequestId();
    final long frameNumber = resultExtras.getFrameNumber();
    final CaptureCallbackHolder holder;

    synchronized(mInterfaceLock) {
        if (mRemoteDevice == null) return; // Camera already closed
        // Get the callback for this frame ID, if there is one
        holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);
        ...
        // Dispatch capture start notice
        holder.getHandler().post(new Runnable() {
            @Override
            public void run() {
                if (!CameraDeviceImpl.this.isClosed()) {
                    holder.getCallback().onCaptureStarted(CameraDeviceImpl.this,holder.getRequest(
                        resultExtras.getSubsequenceId()),timestamp, frameNumber);
                }
           }
       });
   }
}

This in turn invokes the onCaptureStarted method of mCaptureCallback in OneCameraImpl.java:

//OneCameraImpl.java
//Common listener for preview frame metadata.  
private final CameraCaptureSession.CaptureCallback mCaptureCallback =
    new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureStarted(CameraCaptureSession session,CaptureRequest request, 
            long timestamp,long frameNumber) {
            if (request.getTag() == RequestTag.CAPTURE&& mLastPictureCallback != null) {
                mLastPictureCallback.onQuickExpose();
            }
        }
        …
}

Note: capture, preview and auto-focus all share this callback. When a capture is issued its RequestTag is CAPTURE, for auto-focus it is TAP_TO_FOCUS, while preview requests set no RequestTag at all, so when onCaptureStarted is reached for a preview frame there is nothing extra to do. At this point, however, the preview has started successfully and frames are being delivered into the buffers, so the preview can be displayed. This concludes the analysis of the whole preview flow; the HAL-layer sequence diagram is shown below, followed by a small sketch of how RequestTag is used.
[Figure: sequence diagram of the HAL-layer preview calls]
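As a closing illustration, here is a hedged sketch of the RequestTag pattern described in the note above: the tag is attached when the request is built and inspected in the shared callback. The RequestTag values match those mentioned in the note; everything else (class, method and field names) is assumed for the example and is not the exact OneCameraImpl code.

// Illustrative sketch of the RequestTag pattern (not the exact OneCameraImpl code).
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;

public class RequestTagSketch {
    enum RequestTag { CAPTURE, TAP_TO_FOCUS }

    // When building a still-capture request, attach the tag:
    static void tagCaptureRequest(CaptureRequest.Builder builder) {
        builder.setTag(RequestTag.CAPTURE);
    }

    // The shared callback checks the tag to decide what to do; untagged
    // (preview) requests fall through without extra handling.
    static final CameraCaptureSession.CaptureCallback CALLBACK =
            new CameraCaptureSession.CaptureCallback() {
                @Override
                public void onCaptureStarted(CameraCaptureSession session,
                        CaptureRequest request, long timestamp, long frameNumber) {
                    if (request.getTag() == RequestTag.CAPTURE) {
                        // A still capture has begun (e.g. trigger shutter feedback).
                    } else if (request.getTag() == RequestTag.TAP_TO_FOCUS) {
                        // An auto-focus request has begun.
                    }
                    // Preview frames carry no tag and need no special handling here.
                }
            };
}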

