[Android P] Camera API1 to HAL3 Preview Flow (Part 4) — Preview Data

Once preview start-up has completed, the camera enters the continuous preview phase.

Under the Camera API2 architecture, the rule is one Request per Result, so during preview, Requests must be issued continuously to obtain preview data; camera apps still using API1 are converted into this form inside the Framework as well.

The thread most closely tied to Requests is Camera3Device::RequestThread, which is responsible for continuously issuing preview Requests.

When a Result comes back from the lower layers, it first arrives at Camera3Device, triggering processCaptureResult, which then notifies the various Processors (such as FrameProcessor and CallbackProcessor) to process the data further and pass it upward.

The flow we are analyzing here is a preview with two streams opened, preview and callback. The app typically takes the callback stream's data for custom processing and then displays the preview, so in the sequence below, the Result part focuses on how the callback data is returned (since this part is also closely related to the jank issue I described at the very start of the series).

(Sequence diagram: previewData)

The main classes involved, and their source locations, are:

  1. Camera-JNI: frameworks/base/core/jni/android_hardware_Camera.cpp
  2. Camera2Client: frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp
  3. FrameProcessor: frameworks/av/services/camera/libcameraservice/api1/client2/FrameProcessor.cpp
  4. CallbackProcessor: frameworks/av/services/camera/libcameraservice/api1/client2/CallbackProcessor.cpp
  5. Camera3Device: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

Next, I will follow the sequence diagram above and walk through the relevant code in more depth.

Code Analysis

We can look at this in two parts:

  1. Downlink - control flow (Request);
  2. Uplink - data flow (Result).

Control Flow

Strictly speaking, the control flow is mainly driven by RequestThread, but since setPreviewCallbackFlag contains logic that affects its cycle, I will first describe the two situations in which Camera2Client's setPreviewCallbackFlag gets called, so that when we analyze RequestThread::threadLoop we can better understand what influences it.

The discussion below covers the red-boxed portion of the diagram.

(Sequence diagram: control)

Camera2Client::setPreviewCallbackFlag

This function is typically invoked in two situations:

  1. The app calls addCallbackBuffer to hand down a buffer for holding callback data; in this case the flag passed in is 0x05;
  2. A callback result has come back up and copyAndPost runs at the JNI layer; if the app-supplied buffers are exhausted after this frame is delivered, the function is called with the flag 0x00.

There is no need to trace the code for either path; both eventually call startPreviewL, which is the part we actually need to focus on. First, though, the sketch below shows how these two flag values decompose.
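
Below is a minimal sketch (my own illustration, not AOSP code) of why 0x05 enables the copy-out preview callbacks while 0x00 disables them, assuming the frame-callback flag constants defined in system/core/include/system/camera.h:

#include <system/camera.h>

// 0x05 == ENABLE_MASK (0x01) | COPY_OUT_MASK (0x04), the combination that
// the framework names CAMERA_FRAME_CALLBACK_FLAG_CAMERA.
static_assert(CAMERA_FRAME_CALLBACK_FLAG_CAMERA ==
              (CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK |
               CAMERA_FRAME_CALLBACK_FLAG_COPY_OUT_MASK),
              "0x05 = enable + copy-out");

// 0x00 is CAMERA_FRAME_CALLBACK_FLAG_NOOP, which switches the callbacks off.
static_assert(CAMERA_FRAME_CALLBACK_FLAG_NOOP == 0x00, "0x00 = no-op");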

Camera2Client::startPreviewL

Let's analyze the case where the flag is 0x05:

  1. Lines 4~14: mostly state transitions and parameter updates, already described in the previous two posts, so I won't repeat them;
  2. Line 19: a key point; since the previewCallbackFlags passed in is 0x05, the value computed here is true;
  3. Lines 23~37: because callbacksEnabled is true, this branch is entered and updateStream is called on the CallbackProcessor instance; what we need to note is line 36, which adds the callback output stream to the stream list;
  4. Line 48: the preview output stream is added to the stream list, bringing its size to 2;
  5. Line 66: note that startStream is called with outputStreams among its arguments; inside that function it is passed into the newly created CaptureRequest instance, which in turn affects how subsequent Requests acquire HAL buffers (see the abbreviated startStream paraphrase after the code below).
status_t Camera2Client::startPreviewL(Parameters &params, bool restart) {
    // NOTE: N Lines are omitted here

    params.state = Parameters::STOPPED;
    int lastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();

    res = mStreamingProcessor->updatePreviewStream(params);
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to update preview stream: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        return res;
    }

    bool previewStreamChanged = mStreamingProcessor->getPreviewStreamId() != lastPreviewStreamId;

    // NOTE: N Lines are omitted here

    Vector<int32_t> outputStreams;
    bool callbacksEnabled = (params.previewCallbackFlags &
            CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) ||
            params.previewCallbackSurface;

    if (callbacksEnabled) {
        // Can't have recording stream hanging around when enabling callbacks,
        // since it exceeds the max stream count on some devices.
        if (mStreamingProcessor->getRecordingStreamId() != NO_STREAM) {
            // NOTE: N Lines are omitted here
        }

        res = mCallbackProcessor->updateStream(params);
        if (res != OK) {
            ALOGE("%s: Camera %d: Unable to update callback stream: %s (%d)",
                    __FUNCTION__, mCameraId, strerror(-res), res);
            return res;
        }
        outputStreams.push(getCallbackStreamId());
    } else if (previewStreamChanged && mCallbackProcessor->getStreamId() != NO_STREAM) {
        // NOTE: N Lines are omitted here
    }

    if (params.useZeroShutterLag() &&
            getRecordingStreamId() == NO_STREAM) {
        // NOTE: N Lines are omitted here
    } else {
        mZslProcessor->deleteStream();
    }

    outputStreams.push(getPreviewStreamId());

    if (params.isDeviceZslSupported) {
        // If device ZSL is supported, resume preview buffers that may be paused
        // during last takePicture().
        mDevice->dropStreamBuffers(false, getPreviewStreamId());
    }

    if (!params.recordingHint) {
        if (!restart) {
            res = mStreamingProcessor->updatePreviewRequest(params);
            if (res != OK) {
                ALOGE("%s: Camera %d: Can't set up preview request: "
                        "%s (%d)", __FUNCTION__, mCameraId,
                        strerror(-res), res);
                return res;
            }
        }
        res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,
                outputStreams);
    } else {
        // NOTE: N Lines are omitted here
    }
    if (res != OK) {
        ALOGE("%s: Camera %d: Unable to start streaming preview: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        return res;
    }

    params.state = Parameters::PREVIEW;
    return OK;
}
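
To make the outputStreams hand-off concrete, here is an abbreviated paraphrase (not verbatim; error handling and the recording path are omitted) of StreamingProcessor::startStream. The stream id list built in startPreviewL is written into the request metadata under ANDROID_REQUEST_OUTPUT_STREAMS, and the request is then installed as the device's repeating streaming request:

status_t StreamingProcessor::startStream(StreamType type,
        const Vector<int32_t> &outputStreams) {
    // NOTE: N Lines are omitted here

    CameraMetadata &request = (type == PREVIEW) ?
            mPreviewRequest : mRecordingRequest;

    // The stream ids gathered in startPreviewL() go into the request metadata.
    res = request.update(ANDROID_REQUEST_OUTPUT_STREAMS, outputStreams);

    // NOTE: N Lines are omitted here

    // Installed as the repeating (streaming) request, which RequestThread
    // will pick up via mRepeatingRequests.
    res = device->setStreamingRequest(request);

    // NOTE: N Lines are omitted here
}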

Camera3Device::RequestThread::threadLoop

Back to the main topic of the control flow: how RequestThread operates.

The RequestThread instance is actually created and started in Camera3Device::initializeCommonLocked, which belongs to the openCamera flow; take a look if you're interested.
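
Roughly, the creation looks like this (paraphrased from Camera3Device::initializeCommonLocked; the exact arguments may differ slightly across branches):

// Paraphrased from Camera3Device::initializeCommonLocked(), not verbatim.
/** Start up request queue thread */
mRequestThread = new RequestThread(this, mStatusTracker, mInterface, sessionParamKeys);
res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
if (res != OK) {
    SET_ERR_L("Unable to start request queue thread: %s (%d)",
            strerror(-res), res);
    // NOTE: N Lines are omitted here
}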

Once the thread is running, it calls threadLoop in a loop. Back to its logic:

  1. Line 6: first check whether the loop needs to pause, and if so skip this pass of threadLoop; in our scenario no pause is needed;
  2. Line 11: wait until this batch of requests to be issued has been fully gathered; the details are covered below;
  3. Line 22: check whether the Session Params carried by this request match the previous ones; we focus on the matching case, where this returns false and the if branch is skipped;
  4. Line 27: prepare the buffers to be sent down to the HAL for this request (note that the buffers handed down by the app stay at the JNI layer);
  5. Line 58: send this request down to the HAL.
bool Camera3Device::RequestThread::threadLoop() {
    ATRACE_CALL();
    status_t res;

    // Handle paused state.
    if (waitIfPaused()) {
        return true;
    }

    // Wait for the next batch of requests.
    waitForNextRequestBatch();
    if (mNextRequests.size() == 0) {
        return true;
    }

    // NOTE: N Lines are omitted here

    // 'mNextRequests' will at this point contain either a set of HFR batched requests
    //  or a single request from streaming or burst. In either case the first element
    //  should contain the latest camera settings that we need to check for any session
    //  parameter updates.
    if (updateSessionParameters(mNextRequests[0].captureRequest->mSettingsList.begin()->metadata)) {
        // NOTE: N Lines are omitted here
    }

    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
   
    // NOTE: N Lines are omitted here

    // Inform waitUntilRequestProcessed thread of a new request ID
    {
        Mutex::Autolock al(mLatestRequestMutex);

        mLatestRequestId = latestRequestId;
        mLatestRequestSignal.signal();
    }

    // Submit a batch of requests to HAL.
    // Use flush lock only when submitting multiple requests in a batch.
    // TODO: The problem with flush lock is flush() will be blocked by process_capture_request()
    // which may take a long time to finish so synchronizing flush() and
    // process_capture_request() defeats the purpose of cancelling requests ASAP with flush().
    // For now, only synchronize for high speed recording and we should figure something out for
    // removing the synchronization.
    bool useFlushLock = mNextRequests.size() > 1;

    if (useFlushLock) {
        mFlushLock.lock();
    }

    ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__,
            mNextRequests.size());

    bool submitRequestSuccess = false;
    nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
    if (mInterface->supportBatchRequest()) {
        submitRequestSuccess = sendRequestsBatch();
    } else {
        submitRequestSuccess = sendRequestsOneByOne();
    }
    nsecs_t tRequestEnd = systemTime(SYSTEM_TIME_MONOTONIC);
    mRequestLatency.add(tRequestStart, tRequestEnd);

    if (useFlushLock) {
        mFlushLock.unlock();
    }

    // Unset as current request
    {
        Mutex::Autolock l(mRequestLock);
        mNextRequests.clear();
    }

    return submitRequestSuccess;
}

Camera3Device::RequestThread::waitForNextRequestBatch

The logic here is as follows:

  1. Line 10: first obtain the initial nextRequest;
  2. Line 17: add this first nextRequest to the mNextRequests queue;
  3. Line 20: from the information carried by the first nextRequest, determine how many requests this batch contains (to explain: during normal preview a batch carries only one request, whereas in a case like 120fps slow motion a batch must carry 4 requests so that preview data still comes up at 30fps; see the arithmetic sketch after the code below);
  4. Lines 22~32: fetch the remaining requests of this batch one by one.
void Camera3Device::RequestThread::waitForNextRequestBatch() {
    ATRACE_CALL();
    // Optimized a bit for the simple steady-state case (single repeating
    // request), to avoid putting that request in the queue temporarily.
    Mutex::Autolock l(mRequestLock);

    assert(mNextRequests.empty());

    NextRequest nextRequest;
    nextRequest.captureRequest = waitForNextRequestLocked();
    if (nextRequest.captureRequest == nullptr) {
        return;
    }

    nextRequest.halRequest = camera3_capture_request_t();
    nextRequest.submitted = false;
    mNextRequests.add(nextRequest);

    // Wait for additional requests
    const size_t batchSize = nextRequest.captureRequest->mBatchSize;

    for (size_t i = 1; i < batchSize; i++) {
        NextRequest additionalRequest;
        additionalRequest.captureRequest = waitForNextRequestLocked();
        if (additionalRequest.captureRequest == nullptr) {
            break;
        }

        additionalRequest.halRequest = camera3_capture_request_t();
        additionalRequest.submitted = false;
        mNextRequests.add(additionalRequest);
    }

    if (mNextRequests.size() < batchSize) {
        ALOGE("RequestThread: only get %zu out of %zu requests. Skipping requests.",
                mNextRequests.size(), batchSize);
        cleanUpFailedRequests(/*sendRequestError*/true);
    }

    return;
}
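
As a side note on the batching arithmetic mentioned in point 3 above, a minimal sketch (my own illustration with example values, not AOSP code):

// Hypothetical illustration of constrained high-speed batching (not AOSP code).
constexpr int kCaptureFps = 120;  // sensor capture rate in slow-motion mode
constexpr int kPreviewFps = 30;   // rate at which preview frames reach the app

// One preview frame per batch, so the batch absorbs the extra captures:
constexpr int kBatchSize = kCaptureFps / kPreviewFps;  // 4 requests per batch
static_assert(kBatchSize == 4, "120fps capture / 30fps preview = batch of 4");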

Camera3Device::RequestThread::waitForNextRequestLocked

During preview, setRepeatingRequest adds the new CaptureRequest to mRepeatingRequests. Here, at lines 7~23, mRepeatingRequests is non-empty, so that branch is entered: its first request is taken as the next one to process, and the rest of the repeating list is appended to mRequestQueue (a paraphrase of setRepeatingRequests follows after the code below).

sp<Camera3Device::CaptureRequest>
        Camera3Device::RequestThread::waitForNextRequestLocked() {
    status_t res;
    sp<CaptureRequest> nextRequest;

    while (mRequestQueue.empty()) {
        if (!mRepeatingRequests.empty()) {
            // Always atomically enqueue all requests in a repeating request
            // list. Guarantees a complete in-sequence set of captures to
            // application.
            const RequestList &requests = mRepeatingRequests;
            RequestList::const_iterator firstRequest =
                    requests.begin();
            nextRequest = *firstRequest;
            mRequestQueue.insert(mRequestQueue.end(),
                    ++firstRequest,
                    requests.end());
            // No need to wait any longer

            mRepeatingLastFrameNumber = mFrameNumber + requests.size() - 1;

            break;
        }

        // NOTE: N Lines are omitted here
    }

    // NOTE: N Lines are omitted here

    if (nextRequest != NULL) {
        nextRequest->mResultExtras.frameNumber = mFrameNumber++;
        nextRequest->mResultExtras.afTriggerId = mCurrentAfTriggerId;
        nextRequest->mResultExtras.precaptureTriggerId = mCurrentPreCaptureTriggerId;

        // NOTE: N Lines are omitted here
    }

    return nextRequest;
}
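
For reference, here is roughly how the repeating list gets populated when setRepeatingRequest is called (paraphrased from Camera3Device::RequestThread::setRepeatingRequests, not verbatim):

status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }

    // Replace the whole repeating list atomically; threadLoop will keep
    // draining it via waitForNextRequestLocked() as shown above.
    mRepeatingRequests.clear();
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}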

Camera3Device::RequestThread::prepareHalRequests

Once the CaptureRequests are in hand, the requests to send to the HAL are built from them. Here we focus on just a few key points:

  1. Line 14: insert any queued triggers (for example an AE trigger);
  2. Line 51: pay special attention to captureRequest->mOutputStreams.size() here; the outputStreams count set up earlier around setPreviewCallbackFlag shows its effect at this point. Normally it should be 2, one preview and one callback stream; if a request with only 1 happens to be picked up, one stream's data will be missing;
  3. Line 56: a buffer is acquired from each output stream and handed to the HAL request (if the callback stream is missing, the result corresponding to this request will carry no callback data).
status_t Camera3Device::RequestThread::prepareHalRequests() {
    ATRACE_CALL();

    for (size_t i = 0; i < mNextRequests.size(); i++) {
        auto& nextRequest = mNextRequests.editItemAt(i);
        sp<CaptureRequest> captureRequest = nextRequest.captureRequest;
        camera3_capture_request_t* halRequest = &nextRequest.halRequest;
        Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;

        // Prepare a request to HAL
        halRequest->frame_number = captureRequest->mResultExtras.frameNumber;

        // Insert any queued triggers (before metadata is locked)
        status_t res = insertTriggers(captureRequest);
        if (res < 0) {
            SET_ERR("RequestThread: Unable to insert triggers "
                    "(capture request %d, HAL device: %s (%d)",
                    halRequest->frame_number, strerror(-res), res);
            return INVALID_OPERATION;
        }

        int triggerCount = res;
        bool triggersMixedIn = (triggerCount > 0 || mPrevTriggers > 0);
        mPrevTriggers = triggerCount;

        // If the request is the same as last, or we had triggers last time
        bool newRequest = mPrevRequest != captureRequest || triggersMixedIn;
        if (newRequest) {
            // NOTE: N Lines are omitted here
        } else {
            // leave request.settings NULL to indicate 'reuse latest given'
            ALOGVV("%s: Request settings are REUSED",
                   __FUNCTION__);
        }

        // NOTE: N Lines are omitted here

        outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
                captureRequest->mOutputStreams.size());
        halRequest->output_buffers = outputBuffers->array();
        std::set<String8> requestedPhysicalCameras;

        sp<Camera3Device> parent = mParent.promote();
        if (parent == NULL) {
            // Should not happen, and nowhere to send errors to, so just log it
            CLOGE("RequestThread: Parent is gone");
            return INVALID_OPERATION;
        }
        nsecs_t waitDuration = kBaseGetBufferWait + parent->getExpectedInFlightDuration();

        for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
            sp<Camera3OutputStreamInterface> outputStream = captureRequest->mOutputStreams.editItemAt(j);

            // NOTE: N Lines are omitted here

            res = outputStream->getBuffer(&outputBuffers->editItemAt(j),
                    waitDuration,
                    captureRequest->mOutputSurfaces[j]);
            if (res != OK) {
                // Can't get output buffer from gralloc queue - this could be due to
                // abandoned queue or other consumer misbehavior, so not a fatal
                // error
                ALOGE("RequestThread: Can't get output buffer, skipping request:"
                        " %s (%d)", strerror(-res), res);

                return TIMED_OUT;
            }

            String8 physicalCameraId = outputStream->getPhysicalCameraId();

            if (!physicalCameraId.isEmpty()) {
                // Physical stream isn't supported for input request.
                if (halRequest->input_buffer) {
                    CLOGE("Physical stream is not supported for input request");
                    return INVALID_OPERATION;
                }
                requestedPhysicalCameras.insert(physicalCameraId);
            }
            halRequest->num_output_buffers++;
        }
        totalNumBuffers += halRequest->num_output_buffers;

        // Log request in the in-flight queue
        // If this request list is for constrained high speed recording (not
        // preview), and the current request is not the last one in the batch,
        // do not send callback to the app.
        bool hasCallback = true;
        if (mNextRequests[0].captureRequest->mBatchSize > 1 && i != mNextRequests.size()-1) {
            hasCallback = false;
        }
        res = parent->registerInFlight(halRequest->frame_number,
                totalNumBuffers, captureRequest->mResultExtras,
                /*hasInput*/halRequest->input_buffer != NULL,
                hasCallback,
                calculateMaxExpectedDuration(halRequest->settings),
                requestedPhysicalCameras);
        ALOGVV("%s: registered in flight requestId = %" PRId32 ", frameNumber = %" PRId64
               ", burstId = %" PRId32 ".",
                __FUNCTION__,
                captureRequest->mResultExtras.requestId, captureRequest->mResultExtras.frameNumber,
                captureRequest->mResultExtras.burstId);
        if (res != OK) {
            SET_ERR("RequestThread: Unable to register new in-flight request:"
                    " %s (%d)", strerror(-res), res);
            return INVALID_OPERATION;
        }
    }

    return OK;
}

Camera3Device::RequestThread::sendRequestsBatch

The main point here is line 12, the call to Camera3Device::HalInterface::processBatchCaptureRequests, which packs the data into the HIDL-specified format and hands it off to the HAL layer. We won't follow the remaining logic, except to note the removeTriggers action at line 45, which pairs with the insertTriggers call seen earlier in prepareHalRequests.

bool Camera3Device::RequestThread::sendRequestsBatch() {
    ATRACE_CALL();
    status_t res;
    size_t batchSize = mNextRequests.size();
    std::vector<camera3_capture_request_t*> requests(batchSize);
    uint32_t numRequestProcessed = 0;
    for (size_t i = 0; i < batchSize; i++) {
        requests[i] = &mNextRequests.editItemAt(i).halRequest;
        ATRACE_ASYNC_BEGIN("frame capture", mNextRequests[i].halRequest.frame_number);
    }

    res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);

    bool triggerRemoveFailed = false;
    NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
    for (size_t i = 0; i < numRequestProcessed; i++) {
        NextRequest& nextRequest = mNextRequests.editItemAt(i);
        nextRequest.submitted = true;


        // Update the latest request sent to HAL
        if (nextRequest.halRequest.settings != NULL) { // Don't update if they were unchanged
            Mutex::Autolock al(mLatestRequestMutex);

            camera_metadata_t* cloned = clone_camera_metadata(nextRequest.halRequest.settings);
            mLatestRequest.acquire(cloned);

            sp<Camera3Device> parent = mParent.promote();
            if (parent != NULL) {
                parent->monitorMetadata(TagMonitor::REQUEST,
                        nextRequest.halRequest.frame_number,
                        0, mLatestRequest);
            }
        }

        if (nextRequest.halRequest.settings != NULL) {
            nextRequest.captureRequest->mSettingsList.begin()->metadata.unlock(
                    nextRequest.halRequest.settings);
        }

        cleanupPhysicalSettings(nextRequest.captureRequest, &nextRequest.halRequest);

        if (!triggerRemoveFailed) {
            // Remove any previously queued triggers (after unlock)
            status_t removeTriggerRes = removeTriggers(mPrevRequest);
            if (removeTriggerRes != OK) {
                triggerRemoveFailed = true;
                triggerFailedRequest = nextRequest;
            }
        }
    }

    // NOTE: N Lines are omitted here
    return true;
}

Data Flow

The Result return path is comparatively simple on the Framework side.

After the HAL finishes processing the data and packs it into a Result, it is returned to the Framework over HIDL; Camera3Device::processCaptureResult receives this Result and then delivers the data it carries up to the APP (or straight to Display) through a series of callbacks.

Note that MTK's AppStreamMgr underneath calls processCaptureResult twice; the first call is the case where request.hasCallback is true, which mainly interfaces with FrameProcessor. We won't examine that in detail below; the focus is the flow that returns the callback buffer.

The discussion below covers the red-boxed portion of the diagram.

(Sequence diagram: data)

Camera3Device::processCaptureResult

The Result arrives at the Framework:

  1. Lines 26~31: obtain the timestamp of this result;
  2. Line 40: return the buffers received this time to the upper layer.
void Camera3Device::processCaptureResult(const camera3_capture_result *result) {
    // NOTE: N Lines are omitted here

    bool isPartialResult = false;
    CameraMetadata collectedPartialResult;
    bool hasInputBufferInRequest = false;

    // Get shutter timestamp and resultExtras from list of in-flight requests,
    // where it was added by the shutter notification for this frame. If the
    // shutter timestamp isn't received yet, append the output buffers to the
    // in-flight request and they will be returned when the shutter timestamp
    // arrives. Update the in-flight status and remove the in-flight entry if
    // all result data and shutter timestamp have been received.
    nsecs_t shutterTimestamp = 0;

    {
        Mutex::Autolock l(mInFlightLock);
        // NOTE: N Lines are omitted here

        shutterTimestamp = request.shutterTimestamp;
        hasInputBufferInRequest = request.hasInputBuffer;

        // Did we get the (final) result metadata for this capture?
        // NOTE: N Lines are omitted here

        camera_metadata_ro_entry_t entry;
        res = find_camera_metadata_ro_entry(result->result,
                ANDROID_SENSOR_TIMESTAMP, &entry);
        if (res == OK && entry.count == 1) {
            request.sensorTimestamp = entry.data.i64[0];
        }

        // If shutter event isn't received yet, append the output buffers to
        // the in-flight request. Otherwise, return the output buffers to
        // streams.
        if (shutterTimestamp == 0) {
            request.pendingOutputBuffers.appendArray(result->output_buffers,
                result->num_output_buffers);
        } else {
            returnOutputBuffers(result->output_buffers,
                result->num_output_buffers, shutterTimestamp);
        }

        if (result->result != NULL && !isPartialResult) {
            for (uint32_t i = 0; i < result->num_physcam_metadata; i++) {
                CameraMetadata physicalMetadata;
                physicalMetadata.append(result->physcam_metadata[i]);
                request.physicalMetadatas.push_back({String16(result->physcam_ids[i]),
                        physicalMetadata});
            }
            // NOTE: N Lines are omitted here
        }

        removeInFlightRequestIfReadyLocked(idx);
    } // scope for mInFlightLock

    // NOTE: N Lines are omitted here
}

Camera3Device::returnOutputBuffers

This function returns each buffer to its respective output stream.

Note line 7, where Camera3Stream's returnBuffer method is called.

I won't go through the details; here is the call chain in brief.

It calls into the Camera3OutputStream instance's returnBufferLocked method, then into returnAnyBufferLocked in its base class Camera3IOStreamBase, then back into Camera3OutputStream's returnBufferCheckedLocked implementation, where queueBufferToConsumer is finally invoked.

queueBufferToConsumer is the crucial step: it invokes the consumer instance's queueBuffer logic. Roughly speaking, this corresponds to Surface's queueBuffer, and one step further (Binder appears to be involved here) it reaches BufferQueueProducer's queueBuffer, which contains the logic that triggers onFrameAvailable; for the callback stream specifically, that is the onFrameAvailable implemented by CallbackProcessor (see the wiring sketch after the code below).

void Camera3Device::returnOutputBuffers(
        const camera3_stream_buffer_t *outputBuffers, size_t numBuffers,
        nsecs_t timestamp) {
    for (size_t i = 0; i < numBuffers; i++)
    {
        Camera3Stream *stream = Camera3Stream::cast(outputBuffers[i].stream);
        status_t res = stream->returnBuffer(outputBuffers[i], timestamp);
        // Note: stream may be deallocated at this point, if this buffer was
        // the last reference to it.
        if (res != OK) {
            ALOGE("Can't return buffer to its stream: %s (%d)",
                strerror(-res), res);
        }
    }
}
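
To visualize the chain described above, here is a minimal, self-contained sketch (my own illustration, not the actual CallbackProcessor code) of how a CPU-side consumer is wired to a BufferQueue so that a queueBuffer on the producer end fires onFrameAvailable on the consumer end:

#include <gui/BufferItem.h>
#include <gui/BufferQueue.h>
#include <gui/CpuConsumer.h>

using namespace android;

// Plays the role CallbackProcessor plays for the callback stream.
class MyFrameListener : public CpuConsumer::FrameAvailableListener {
  public:
    void onFrameAvailable(const BufferItem& /*item*/) override {
        // Signal a worker thread here, as CallbackProcessor::onFrameAvailable does.
    }
};

void setUpCallbackConsumer() {
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);

    // CallbackProcessor uses a CpuConsumer so it can lock each buffer and
    // copy its contents into the callback heap on the CPU.
    sp<CpuConsumer> cpuConsumer = new CpuConsumer(consumer, /*maxLockedBuffers*/ 2);
    sp<MyFrameListener> listener = new MyFrameListener();
    cpuConsumer->setFrameAvailableListener(listener);

    // The producer end would be wrapped in a Surface and handed to the output
    // stream; Camera3OutputStream::queueBufferToConsumer() ends in
    // producer->queueBuffer(), which is what fires onFrameAvailable() above.
}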

CallbackProcessor::onFrameAvailable

Line 5 sends the signal that ends the wait in threadLoop.

void CallbackProcessor::onFrameAvailable(const BufferItem& /*item*/) {
    Mutex::Autolock l(mInputMutex);
    if (!mCallbackAvailable) {
        mCallbackAvailable = true;
        mCallbackAvailableSignal.signal();
    }
}

CallbackProcessor::threadLoop

The logic here is as follows:

  1. Line 7: enter the wait; once onFrameAvailable has been called, execution here can continue;
  2. Line 19: the branch taken in the normal case, which processes the incoming data frame further.
bool CallbackProcessor::threadLoop() {
    status_t res;

    {
        Mutex::Autolock l(mInputMutex);
        while (!mCallbackAvailable) {
            res = mCallbackAvailableSignal.waitRelative(mInputMutex,
                    kWaitDuration);
            if (res == TIMED_OUT) return true;
        }
        mCallbackAvailable = false;
    }

    do {
        sp<Camera2Client> client = mClient.promote();
        if (client == 0) {
            res = discardNewCallback();
        } else {
            res = processNewCallback(client);
        }
    } while (res == OK);

    return true;
}

CallbackProcessor::processNewCallback

This function is actually quite long and I have omitted most of it; the omitted part mainly copies the buffer data into the mBuffers of the callbackHeap declared at line 5, after which line 19 calls dataCallback to send that buffer on to the JNI layer.

The dataCallback here should correspond to the function implemented in Camera.cpp (when the camera is opened and Camera2Client::connect is called, the Camera instance is passed in as the client and maps to the mSharedCameraCallbacks member); it mainly calls the JNI postData function (a paraphrase follows after the code below).

status_t CallbackProcessor::processNewCallback(sp<Camera2Client> &client) {
    ATRACE_CALL();
    status_t res;

    sp<Camera2Heap> callbackHeap;
    bool useFlexibleYuv = false;
    int32_t previewFormat = 0;
    size_t heapIdx;
    
    // NOTE: N Lines are omitted here

    // Call outside parameter lock to allow re-entrancy from notification
    {
        Camera2Client::SharedCameraCallbacks::Lock
            l(client->mSharedCameraCallbacks);
        if (l.mRemoteCallback != 0) {
            ALOGV("%s: Camera %d: Invoking client data callback",
                    __FUNCTION__, mId);
            l.mRemoteCallback->dataCallback(CAMERA_MSG_PREVIEW_FRAME,
                    callbackHeap->mBuffers[heapIdx], NULL);
        }
    }

    // Only increment free if we're still using the same heap
    mCallbackHeapFree++;

    ALOGV("%s: exit", __FUNCTION__);

    return OK;
}
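
For reference, the receiving side in Camera.cpp looks roughly like this (paraphrased, not verbatim):

// Paraphrased from frameworks/av/camera/Camera.cpp (not verbatim).
// Callback from camera service when a frame or image is ready.
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        // On the JNI path the listener is JNICameraContext, whose postData
        // implementation is shown in the next section.
        listener->postData(msgType, dataPtr, metadata);
    }
}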

Camera-JNI::postData

The key point here is line 28, which calls copyAndPost to deliver the data upward.

void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        ALOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            ALOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            ALOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}

Camera-JNI::copyAndPost

This is the final step we need to examine:

  1. Lines 7~11: obtain the address of the buffer passed in;
  2. Line 18: since the app proactively handed buffers down, this branch is taken;
  3. Line 19: note that here the app-supplied buffer is fetched out as a jbyteArray pointer; once fetched, the size of mCallbackBuffers shrinks by 1. If mCallbackBuffers is empty at this point, NULL is returned, which then causes the direct return at line 27, but this does not lead to a logic error (see the getCallbackBuffer paraphrase after the code below);
  4. Line 21: an app generally hands down only one buffer at a time, so after the fetch isEmpty is true and this branch is taken;
  5. Line 23: mCallbackBuffers is now empty, and Google's logic here is to call setPreviewCallbackFlag(0x00) once (the triggered path was covered earlier, so I won't repeat it). This refreshes the CaptureRequest, and the outputStreams in that request will have size 1; if RequestThread happens to pick up this CaptureRequest when issuing a request, subsequent callback buffers will be missing;
  6. Line 47: the fetched callback buffer is posted up to the APP.
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        ALOGV("copyAndPost: off=%zd, size=%zu", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    ALOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                ALOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                ALOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            ALOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
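
For completeness, here is a rough paraphrase (abbreviated, not verbatim) of the getCallbackBuffer helper referenced in point 3 above, which is where mCallbackBuffers shrinks:

// Paraphrased from android_hardware_Camera.cpp (abbreviated, not verbatim).
jbyteArray JNICameraContext::getCallbackBuffer(
        JNIEnv* env, Vector<jbyteArray>* buffers, size_t bufferSize)
{
    jbyteArray obj = NULL;

    // Vector access is protected by the lock taken in postData().
    if (!buffers->isEmpty()) {
        jbyteArray globalBuffer = buffers->itemAt(0);
        buffers->removeAt(0);  // this is why the vector shrinks by one

        obj = (jbyteArray)env->NewLocalRef(globalBuffer);
        env->DeleteGlobalRef(globalBuffer);

        if (obj != NULL) {
            jsize bufferLength = env->GetArrayLength(obj);
            if ((int)bufferLength < (int)bufferSize) {
                // The app-supplied buffer is too small for this frame.
                env->DeleteLocalRef(obj);
                obj = NULL;
            }
        }
    }

    return obj;  // NULL when the vector was empty or the buffer was too small
}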

Conclusion

With this, we now have a general understanding of the preview logic in the API1-to-HAL3 flow. To recap what this series of posts covered:

  1. Overview: set against two jank phenomena caused by Framework logic, encountered while handling third-party app stutter in daily work, it briefly introduced their causes and the corresponding fixes, which led into this examination of the API1-to-HAL3 preview flow;
  2. startPreview: introduced how API1's most basic preview-start interface operates on top of HAL3;
  3. setPreviewCallbackFlag: introduced the sequence of logic triggered when setPreviewCallbackWithBuffer is called after startPreview, for a deeper understanding of the jank caused by the re-configure action;
  4. Request && Result: focused on the request-issuing logic for the callback stream and the corresponding result-return logic, for a deeper understanding of the jank caused by missing callback buffers due to the timing of setPreviewCallbackFlag(0x00).

Since time for this study was limited, I have omitted quite a few details in the walkthrough, but that should not hinder an understanding of the overall flow. If anything is unclear, feel free to point it out; I may not understand everything thoroughly, but I'm always happy to discuss.
