Series Articles
- [Android P] Camera API1 to HAL3 Preview Flow (Part 1) — Background Overview
- [Android P] Camera API1 to HAL3 Preview Flow (Part 2) — startPreview
- [Android P] Camera API1 to HAL3 Preview Flow (Part 3) — setPreviewCallbackFlag
- [Android P] Camera API1 to HAL3 Preview Flow (Part 4) — Preview Data
Overview
When a third-party app calls the Camera API1 startPreview interface, the Framework-side sequence is shown in the diagram below. A few points to note:
- The main API1-to-HAL3 adaptation logic lives in Camera2Client; API1 calls only reach this path when the underlying HAL is HAL3 (the corresponding instantiation logic happens during openCamera);
- Camera3Device is the Framework-side instance that interfaces with the HAL layer in the HAL3 architecture; this part does not distinguish between API1 and API2;
- startPreview opens only one stream, corresponding to StreamingProcessor. The configureStream step in this flow is actually carried out during the startStream call.
The main classes involved and their source locations are:
- Camera2Client:
/frameworks/av/services/camera/libcameraservice/api1/Camera2Client.cpp
- StreamingProcessor:
/frameworks/av/services/camera/libcameraservice/api1/client2/StreamingProcessor.cpp
- Camera3Device && Camera3Device::RequestThread:
/frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
Next, I will follow the sequence diagram above and walk through the relevant code in more depth.
Code Analysis
In this part I only cover what I focused on, not every call path and detail. I recommend walking through the flow yourself when you have time; it will leave a much deeper impression and understanding.
Based on the sequence diagram, the flow can be split into three key parts:
- startPreview in Camera2Client;
- updatePreviewStream in StreamingProcessor;
- setStreamingRequest in Camera3Device.
Before diving in, it is worth noting that the preview stream and the preview callback stream are two separate streams. The preview stream is created and managed mainly by StreamingProcessor, while the preview callback stream corresponds to CallbackProcessor.
startPreview Details
First, note that after the app calls the Camera API1 startPreview interface, the call goes through JNI, reaching android_hardware_Camera_startPreview, and only then arrives at startPreview in Camera2Client.
Camera2Client::startPreview
This function needs little explanation:
- It mainly just calls startPreviewL;
- The first argument passed in is the API1-style Parameters;
- Note that the second argument is the restart flag, set to false here.
status_t Camera2Client::startPreview() {
ATRACE_CALL();
ALOGV("%s: E", __FUNCTION__);
Mutex::Autolock icl(mBinderSerializationLock);
status_t res;
if ( (res = checkPid(__FUNCTION__) ) != OK) return res;
SharedParameters::Lock l(mParameters);
return startPreviewL(l.mParameters, false);
}
Camera2Client::startPreviewL
Pay attention from this function on: there is a lot of content, so I omit some statements and pick out the key points (the logic reflected in the sequence diagram).
NOTE: Omitted sections are replaced with a comment, "N Lines are omitted here".
The flow inside is as follows:
- Line 5: first set the state to STOPPED;
- Line 6: get the preview stream ID from the StreamingProcessor instance; since this is the first time preview is opened, NO_STREAM is returned;
- Line 8: call the StreamingProcessor instance's updatePreviewStream method, analyzed separately later; its main job is to create the preview stream and, correspondingly, generate a stream ID;
- Lines 20~22: previewCallbackFlags is normally only set when a preview callback stream is needed, so on the first preview start callbacksEnabled ends up false;
- Lines 26~31: normally the else branch is taken, calling deleteStream;
- Line 33: the preview stream created via StreamingProcessor is added to outputStreams, a vector of ints that serves as the ID list of the current output streams;
- Lines 38~46: as mentioned at the start, the restart argument is false, so updatePreviewRequest runs first; this function mainly converts the API1-style Parameters into API2-style Metadata;
- Line 47: startStream is called here; its logic is more involved and is analyzed in detail later. Note that it leads into the configureStreams flow and interacts with the HAL;
- Line 53: preview startup is complete, and the state is set to PREVIEW.
status_t Camera2Client::startPreviewL(Parameters &params, bool restart) {
// NOTE: N Lines are omitted here
params.state = Parameters::STOPPED;
int lastPreviewStreamId = mStreamingProcessor->getPreviewStreamId();
res = mStreamingProcessor->updatePreviewStream(params);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to update preview stream: %s (%d)",
__FUNCTION__, mCameraId, strerror(-res), res);
return res;
}
bool previewStreamChanged = mStreamingProcessor->getPreviewStreamId() != lastPreviewStreamId;
// NOTE: N Lines are omitted here
Vector<int32_t> outputStreams;
bool callbacksEnabled = (params.previewCallbackFlags &
CAMERA_FRAME_CALLBACK_FLAG_ENABLE_MASK) ||
params.previewCallbackSurface;
// NOTE: N Lines are omitted here
if (params.useZeroShutterLag() &&
getRecordingStreamId() == NO_STREAM) {
// NOTE: N Lines are omitted here
} else {
mZslProcessor->deleteStream();
}
outputStreams.push(getPreviewStreamId());
// NOTE: N Lines are omitted here
if (!params.recordingHint) {
if (!restart) {
res = mStreamingProcessor->updatePreviewRequest(params);
if (res != OK) {
ALOGE("%s: Camera %d: Can't set up preview request: "
"%s (%d)", __FUNCTION__, mCameraId,
strerror(-res), res);
return res;
}
}
res = mStreamingProcessor->startStream(StreamingProcessor::PREVIEW,
outputStreams);
} else {
// NOTE: N Lines are omitted here
}
params.state = Parameters::PREVIEW;
return OK;
}
updatePreviewStream Details
The main purpose of this part is to create the preview stream.
StreamingProcessor::updatePreviewStream
About this code:
- Lines 5~15: since there is no preview stream yet, the createStream method is called to create an OutputStream; note that device here is the Camera3Device instance;
- Line 17: after the stream is created, its Transform information is set; I won't dig into that function here.
status_t StreamingProcessor::updatePreviewStream(const Parameters &params) {
// NOTE: N Lines are omitted here
if (mPreviewStreamId == NO_STREAM) {
res = device->createStream(mPreviewWindow,
params.previewWidth, params.previewHeight,
CAMERA2_HAL_PIXEL_FORMAT_OPAQUE, HAL_DATASPACE_UNKNOWN,
CAMERA3_STREAM_ROTATION_0, &mPreviewStreamId, String8());
if (res != OK) {
ALOGE("%s: Camera %d: Unable to create preview stream: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
}
res = device->setStreamTransform(mPreviewStreamId,
params.previewTransform);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to set preview stream transform: "
"%s (%d)", __FUNCTION__, mId, strerror(-res), res);
return res;
}
return OK;
}
Camera3Device::createStream
Note that there are two createStream overloads here:
- Lines 1~19: this createStream is the one called first; it forwards to the createStream overload that actually creates the stream;
- Lines 31~54: the STATUS_UNCONFIGURED branch is taken, which simply breaks out of the switch;
- Lines 61~78: the HAL_PIXEL_FORMAT_RAW_OPAQUE branch is taken; the key part is lines 69~71, which create the Camera3OutputStream instance;
- Lines 82 and 84: some basic configuration of the newly created stream, mainly concerning StatusTracker and BufferManager; I won't go deeper into this;
- Line 86: the newly created stream is added to the Camera3Device member mOutputStreams, a set in which stream IDs map one-to-one to stream instances;
- Line 93: the Camera3Device member mNeedConfig is set to true, indicating that a configure step is needed; this variable affects how configureStreamsLocked proceeds during the later startStream call.
status_t Camera3Device::createStream(sp<Surface> consumer,
uint32_t width, uint32_t height, int format,
android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
const String8& physicalCameraId,
std::vector<int> *surfaceIds, int streamSetId, bool isShared, uint64_t consumerUsage) {
ATRACE_CALL();
if (consumer == nullptr) {
ALOGE("%s: consumer must not be null", __FUNCTION__);
return BAD_VALUE;
}
std::vector<sp<Surface>> consumers;
consumers.push_back(consumer);
return createStream(consumers, /*hasDeferredConsumer*/ false, width, height,
format, dataSpace, rotation, id, physicalCameraId, surfaceIds, streamSetId,
isShared, consumerUsage);
}
status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
android_dataspace dataSpace, camera3_stream_rotation_t rotation, int *id,
const String8& physicalCameraId,
std::vector<int> *surfaceIds, int streamSetId, bool isShared, uint64_t consumerUsage) {
// NOTE: N Lines are omitted here
status_t res;
bool wasActive = false;
switch (mStatus) {
case STATUS_ERROR:
CLOGE("Device has encountered a serious error");
return INVALID_OPERATION;
case STATUS_UNINITIALIZED:
CLOGE("Device not initialized");
return INVALID_OPERATION;
case STATUS_UNCONFIGURED:
case STATUS_CONFIGURED:
// OK
break;
case STATUS_ACTIVE:
ALOGV("%s: Stopping activity to reconfigure streams", __FUNCTION__);
res = internalPauseAndWaitLocked(maxExpectedDuration);
if (res != OK) {
SET_ERR_L("Can't pause captures to reconfigure streams!");
return res;
}
wasActive = true;
break;
default:
SET_ERR_L("Unexpected status: %d", mStatus);
return INVALID_OPERATION;
}
assert(mStatus != STATUS_ACTIVE);
sp<Camera3OutputStream> newStream;
// NOTE: N Lines are omitted here
if (format == HAL_PIXEL_FORMAT_BLOB) {
// NOTE: N Lines are omitted here
} else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
ssize_t rawOpaqueBufferSize = getRawOpaqueBufferSize(width, height);
if (rawOpaqueBufferSize <= 0) {
SET_ERR_L("Invalid RAW opaque buffer size %zd", rawOpaqueBufferSize);
return BAD_VALUE;
}
newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
mTimestampOffset, physicalCameraId, streamSetId);
} else if (isShared) {
// NOTE: N Lines are omitted here
} else if (consumers.size() == 0 && hasDeferredConsumer) {
// NOTE: N Lines are omitted here
} else {
// NOTE: N Lines are omitted here
}
// NOTE: N Lines are omitted here
newStream->setStatusTracker(mStatusTracker);
newStream->setBufferManager(mBufferManager);
res = mOutputStreams.add(mNextStreamId, newStream);
if (res < 0) {
SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
return res;
}
*id = mNextStreamId++;
mNeedConfig = true;
// NOTE: N Lines are omitted here
return OK;
}
setStreamingRequest Details
This part is triggered by startStream; it leads into the configureStreams logic and starts continuous preview mode via setRepeatingRequests.
StreamingProcessor::startStream
The logic of this function is fairly simple:
- Line 5: since type is PREVIEW, mPreviewRequest is fetched here;
- Lines 8~10: note the size of the outputStreams passed in, which eventually affects captureRequests->mOutputStreams.size() in Camera3Device::RequestThread::prepareHalRequests; these lines mainly update the set of currently existing output streams;
- Line 14: Camera3Device's setStreamingRequest is called; from here on we enter the main subject of this part.
status_t StreamingProcessor::startStream(StreamType type,
const Vector<int32_t> &outputStreams) {
// NOTE: N Lines are omitted here
CameraMetadata &request = (type == PREVIEW) ?
mPreviewRequest : mRecordingRequest;
res = request.update(
ANDROID_REQUEST_OUTPUT_STREAMS,
outputStreams);
// NOTE: N Lines are omitted here
res = device->setStreamingRequest(request);
if (res != OK) {
ALOGE("%s: Camera %d: Unable to set preview request to start preview: "
"%s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
mActiveRequest = type;
mPaused = false;
mActiveStreamIds = outputStreams;
return OK;
}
Camera3Device::setStreamingRequest
This is just an entry point:
- Line 7: first convert the incoming Metadata into a PhysicalCameraSettingsList;
- Line 9: proceed to the next step, setStreamingRequestList.
status_t Camera3Device::setStreamingRequest(const CameraMetadata &request,
int64_t* /*lastFrameNumber*/) {
ATRACE_CALL();
List<const PhysicalCameraSettingsList> requestsList;
std::list<const SurfaceMap> surfaceMaps;
convertToRequestList(requestsList, surfaceMaps, request);
return setStreamingRequestList(requestsList, /*surfaceMap*/surfaceMaps,
/*lastFrameNumber*/NULL);
}
Camera3Device::setStreamingRequestList
This function is unremarkable; it simply calls down into submitRequestsHelper.
status_t Camera3Device::setStreamingRequestList(
const List<const PhysicalCameraSettingsList> &requestsList,
const std::list<const SurfaceMap> &surfaceMaps, int64_t *lastFrameNumber) {
ATRACE_CALL();
return submitRequestsHelper(requestsList, surfaceMaps, /*repeating*/true, lastFrameNumber);
}
Camera3Device::submitRequestsHelper
There are a few key actions here:
- Line 11: obtain the RequestList, a linked list built on CaptureRequest, a class defined inside Camera3Device; also note that the repeating parameter passed in here is true;
- Lines 18~22: line 19's setRepeatingRequests is the one executed; it passes the RequestList just obtained into the Camera3Device::RequestThread instance. This hand-off is a key point during continuous preview, but we won't focus on it in the current flow;
- Line 25: wait until the state becomes STATUS_ACTIVE.
status_t Camera3Device::submitRequestsHelper(
const List<const PhysicalCameraSettingsList> &requests,
const std::list<const SurfaceMap> &surfaceMaps,
bool repeating,
/*out*/
int64_t *lastFrameNumber) {
// NOTE: N Lines are omitted here
RequestList requestList;
res = convertMetadataListToRequestListLocked(requests, surfaceMaps,
repeating, /*out*/&requestList);
if (res != OK) {
// error logged by previous call
return res;
}
if (repeating) {
res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
if (res == OK) {
waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
if (res != OK) {
SET_ERR_L("Can't transition to active in %f seconds!",
kActiveTimeout/1e9);
}
ALOGV("Camera %s: Capture request %" PRId32 " enqueued", mId.string(),
(*(requestList.begin()))->mResultExtras.requestId);
} else {
CLOGE("Cannot queue request. Impossible.");
return BAD_VALUE;
}
return res;
}
Camera3Device::convertMetadataListToRequestListLocked
The main points here are:
- Line 10: obtain an instantiated CaptureRequest;
- Line 16: the obtained CaptureRequest's member mRepeating is set to true;
- Line 22: the CaptureRequest obtained in this loop iteration is appended to the RequestList.
status_t Camera3Device::convertMetadataListToRequestListLocked(
const List<const PhysicalCameraSettingsList> &metadataList,
const std::list<const SurfaceMap> &surfaceMaps,
bool repeating,
RequestList *requestList) {
// NOTE: N Lines are omitted here
for (; metadataIt != metadataList.end() && surfaceMapIt != surfaceMaps.end();
++metadataIt, ++surfaceMapIt) {
sp<CaptureRequest> newRequest = setUpRequestLocked(*metadataIt, *surfaceMapIt);
if (newRequest == 0) {
CLOGE("Can't create capture request");
return BAD_VALUE;
}
newRequest->mRepeating = repeating;
// Setup burst Id and request Id
// NOTE: N Lines are omitted here
requestList->push_back(newRequest);
ALOGV("%s: requestId = %" PRId32, __FUNCTION__, newRequest->mResultExtras.requestId);
}
// NOTE: N Lines are omitted here
return OK;
}
Camera3Device::setUpRequestLocked
Two things happen here:
- Line 8: since we are currently in the STATUS_UNCONFIGURED state, the configure stream operation has to happen first;
- Line 21: create the CaptureRequest from the parameters passed in.
sp<Camera3Device::CaptureRequest> Camera3Device::setUpRequestLocked(
const PhysicalCameraSettingsList &request, const SurfaceMap &surfaceMap) {
status_t res;
if (mStatus == STATUS_UNCONFIGURED || mNeedConfig) {
// This point should only be reached via API1 (API2 must explicitly call configureStreams)
// so unilaterally select normal operating mode.
res = filterParamsAndConfigureLocked(request.begin()->metadata,
CAMERA3_STREAM_CONFIGURATION_NORMAL_MODE);
// Stream configuration failed. Client might try other configuraitons.
if (res != OK) {
CLOGE("Can't set up streams: %s (%d)", strerror(-res), res);
return NULL;
} else if (mStatus == STATUS_UNCONFIGURED) {
// Stream configuration successfully configure to empty stream configuration.
CLOGE("No streams configured");
return NULL;
}
}
sp<CaptureRequest> newRequest = createCaptureRequest(request, surfaceMap);
return newRequest;
}
Camera3Device::filterParamsAndConfigureLocked
This one is fairly simple:
- Lines 4~20: all in service of preparing filteredParams; as the comment on line 3 indicates, this filters the session params out of the incoming metadata;
- Line 22: enter the main configure stream logic.
status_t Camera3Device::filterParamsAndConfigureLocked(const CameraMetadata& sessionParams,
int operatingMode) {
//Filter out any incoming session parameters
const CameraMetadata params(sessionParams);
camera_metadata_entry_t availableSessionKeys = mDeviceInfo.find(
ANDROID_REQUEST_AVAILABLE_SESSION_KEYS);
CameraMetadata filteredParams(availableSessionKeys.count);
camera_metadata_t *meta = const_cast<camera_metadata_t *>(
filteredParams.getAndLock());
set_camera_metadata_vendor_id(meta, mVendorTagId);
filteredParams.unlock(meta);
if (availableSessionKeys.count > 0) {
for (size_t i = 0; i < availableSessionKeys.count; i++) {
camera_metadata_ro_entry entry = params.find(
availableSessionKeys.data.i32[i]);
if (entry.count > 0) {
filteredParams.update(entry);
}
}
}
return configureStreamsLocked(operatingMode, filteredParams);
}
Camera3Device::configureStreamsLocked
Analyzing this function:
- Line 18: first the prepareThread is paused;
- Lines 22~42: mainly take each output stream and call startConfiguration (line 33); I won't go into the deeper details here;
- Line 50: note that mInterface here is the Camera3Device::HalInterface instance, which is responsible for the HIDL-facing logic; the call here is Camera3Device::HalInterface::configureStreams, which in turn calls mHidlSession_3_4->configureStreams_3_4, entering the HAL-side configure stream action;
- Lines 55~67: after configuration, matching the earlier startConfiguration, finishConfiguration is called on each output stream;
- Line 72: trigger the RequestThread's configurationComplete action;
- Lines 75~89: worth mentioning in passing: this part appears related to thread scheduling priority; the "fifo" in the property name should mean First In First Out scheduling. This feels worth digging into another day;
- Line 93: note that mNeedConfig has now been set back to false;
- Line 95: the state is set to STATUS_CONFIGURED;
- Line 103: finally, the prepareThread's resume function is called so it continues running.
status_t Camera3Device::configureStreamsLocked(int operatingMode,
const CameraMetadata& sessionParams, bool notifyRequestThread) {
// NOTE: N Lines are omitted here
// Workaround for device HALv3.2 or older spec bug - zero streams requires
// adding a dummy stream instead.
// TODO: Bug: 17321404 for fixing the HAL spec and removing this workaround.
if (mOutputStreams.size() == 0) {
addDummyStreamLocked();
} else {
tryRemoveDummyStreamLocked();
}
// Start configuring the streams
ALOGV("%s: Camera %s: Starting stream configuration", __FUNCTION__, mId.string());
mPreparerThread->pause();
// NOTE: N Lines are omitted here
for (size_t i = 0; i < mOutputStreams.size(); i++) {
// Don't configure bidi streams twice, nor add them twice to the list
if (mOutputStreams[i].get() ==
static_cast<Camera3StreamInterface*>(mInputStream.get())) {
config.num_streams--;
continue;
}
camera3_stream_t *outputStream;
outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
if (outputStream == NULL) {
CLOGE("Can't start output stream configuration");
cancelStreamsConfigurationLocked();
return INVALID_OPERATION;
}
streams.add(outputStream);
// NOTE: N Lines are omitted here
}
config.streams = streams.editArray();
// Do the HAL configuration; will potentially touch stream
// max_buffers, usage, priv fields.
const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);
sessionParams.unlock(sessionBuffer);
// NOTE: N Lines are omitted here
for (size_t i = 0; i < mOutputStreams.size(); i++) {
sp<Camera3OutputStreamInterface> outputStream =
mOutputStreams.editValueAt(i);
if (outputStream->isConfiguring() && !outputStream->isConsumerConfigurationDeferred()) {
res = outputStream->finishConfiguration();
if (res != OK) {
CLOGE("Can't finish configuring output stream %d: %s (%d)",
outputStream->getId(), strerror(-res), res);
cancelStreamsConfigurationLocked();
return BAD_VALUE;
}
}
}
// Request thread needs to know to avoid using repeat-last-settings protocol
// across configure_streams() calls
if (notifyRequestThread) {
mRequestThread->configurationComplete(mIsConstrainedHighSpeedConfiguration, sessionParams);
}
char value[PROPERTY_VALUE_MAX];
property_get("camera.fifo.disable", value, "0");
int32_t disableFifo = atoi(value);
if (disableFifo != 1) {
// Boost priority of request thread to SCHED_FIFO.
pid_t requestThreadTid = mRequestThread->getTid();
res = requestPriority(getpid(), requestThreadTid,
kRequestThreadPriority, /*isForApp*/ false, /*asynchronous*/ false);
if (res != OK) {
ALOGW("Can't set realtime priority for request processing thread: %s (%d)",
strerror(-res), res);
} else {
ALOGD("Set real time priority for request queue thread (tid %d)", requestThreadTid);
}
}
// NOTE: N Lines are omitted here
mNeedConfig = false;
internalUpdateStatusLocked((mDummyStreamId == NO_STREAM) ?
STATUS_CONFIGURED : STATUS_UNCONFIGURED);
ALOGV("%s: Camera %s: Stream configuration complete", __FUNCTION__, mId.string());
// tear down the deleted streams after configure streams.
mDeletedStreams.clear();
auto rc = mPreparerThread->resume();
if (rc != OK) {
SET_ERR_L("%s: Camera %s: Preparer thread failed to resume!", __FUNCTION__, mId.string());
return rc;
}
return OK;
}
Camera3Device::createCaptureRequest
Once configureStreams is done, the CaptureRequest instance is created and its stream information set up:
- Line 6: create the CaptureRequest instance;
- Lines 18~45: each output stream is added to the CaptureRequest instance's mOutputStreams member (note line 44).
sp<Camera3Device::CaptureRequest> Camera3Device::createCaptureRequest(
const PhysicalCameraSettingsList &request, const SurfaceMap &surfaceMap) {
ATRACE_CALL();
status_t res;
sp<CaptureRequest> newRequest = new CaptureRequest;
newRequest->mSettingsList = request;
// NOTE: N Lines are omitted here
camera_metadata_entry_t streams =
newRequest->mSettingsList.begin()->metadata.find(ANDROID_REQUEST_OUTPUT_STREAMS);
if (streams.count == 0) {
CLOGE("Zero output streams specified!");
return NULL;
}
for (size_t i = 0; i < streams.count; i++) {
int idx = mOutputStreams.indexOfKey(streams.data.i32[i]);
if (idx == NAME_NOT_FOUND) {
CLOGE("Request references unknown stream %d",
streams.data.u8[i]);
return NULL;
}
sp<Camera3OutputStreamInterface> stream =
mOutputStreams.editValueAt(idx);
// It is illegal to include a deferred consumer output stream into a request
auto iter = surfaceMap.find(streams.data.i32[i]);
if (iter != surfaceMap.end()) {
const std::vector<size_t>& surfaces = iter->second;
for (const auto& surface : surfaces) {
if (stream->isConsumerConfigurationDeferred(surface)) {
CLOGE("Stream %d surface %zu hasn't finished configuration yet "
"due to deferred consumer", stream->getId(), surface);
return NULL;
}
}
newRequest->mOutputSurfaces[i] = surfaces;
}
// NOTE: N Lines are omitted here
newRequest->mOutputStreams.push(stream);
}
newRequest->mSettingsList.begin()->metadata.erase(ANDROID_REQUEST_OUTPUT_STREAMS);
newRequest->mBatchSize = 1;
return newRequest;
}
Camera3Device::RequestThread::setRepeatingRequests
After the CaptureRequest is created, control returns to submitRequestsHelper, which calls this function:
- Line 10: first clear the current contents of mRepeatingRequests;
- Line 11: insert the request just created into mRepeatingRequests;
- Line 14: call unpauseForNewRequests to signal the relevant thread.
status_t Camera3Device::RequestThread::setRepeatingRequests(
const RequestList &requests,
/*out*/
int64_t *lastFrameNumber) {
ATRACE_CALL();
Mutex::Autolock l(mRequestLock);
if (lastFrameNumber != NULL) {
*lastFrameNumber = mRepeatingLastFrameNumber;
}
mRepeatingRequests.clear();
mRepeatingRequests.insert(mRepeatingRequests.begin(),
requests.begin(), requests.end());
unpauseForNewRequests();
mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
return OK;
}
Camera3Device::RequestThread::unpauseForNewRequests
The thing to note here is line 6, which signals mRequestSignal; the matching wait is in Camera3Device::RequestThread::waitForNextRequestLocked.
void Camera3Device::RequestThread::unpauseForNewRequests() {
ATRACE_CALL();
// With work to do, mark thread as unpaused.
// If paused by request (setPaused), don't resume, to avoid
// extra signaling/waiting overhead to waitUntilPaused
mRequestSignal.signal();
Mutex::Autolock p(mPauseLock);
if (!mDoPause) {
ALOGV("%s: RequestThread: Going active", __FUNCTION__);
if (mPaused) {
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentActive(mStatusId);
}
}
mPaused = false;
}
}
Next Up
This article covered how, on Android P, the Framework translates a camera app's Camera API1 startPreview call into HAL3 logic to maintain compatibility.
The next article will cover the sequence that follows when, after startPreview, the app calls setPreviewCallbackWithBuffer (which triggers setPreviewCallbackFlag).