In the previous notes of this series, I worked through the Camera control flow fairly thoroughly. One important part of the Camera pipeline remains: the data flow.
In Camera API 1, data is delivered mainly through function callbacks, returned layer by layer from the bottom up into Applications.
Since the data-flow part is relatively simple, I will combine it with the Camera control flow: starting from the takePicture() method, we trace a fairly complete Camera pipeline, and this series of notes can conclude with this article.
Location: frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h
setCallbacks():
Sets the notify callback, which is used to signal that data has been updated.
Sets the data callback and the dataTimestamp callback, corresponding to the function pointers mDataCb and mDataCbTimestamp.
Note that when registering callbacks on mDevice->ops, what gets passed in are not the function pointers set above, but functions such as __data_cb. These are implemented in this same file and wrap the original callbacks in an extra layer.
/** Set the notification and data callbacks */
void setCallbacks(notify_callback notify_cb,
                  data_callback data_cb,
                  data_callback_timestamp data_cb_timestamp,
                  void* user)
{
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mCbUser = user;
    ALOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->set_callbacks) {
        mDevice->ops->set_callbacks(mDevice,
                                    __notify_cb,
                                    __data_cb,
                                    __data_cb_timestamp,
                                    __get_memory,
                                    this);
    }
}
__data_cb():
A thin wrapper around the original callback, adding a bounds check to guard against an out-of-range buffer index.
static void __data_cb(int32_t msg_type,
                      const camera_memory_t *data, unsigned int index,
                      camera_frame_metadata_t *metadata,
                      void *user)
{
    ALOGV("%s", __FUNCTION__);
    CameraHardwareInterface *__this =
            static_cast<CameraHardwareInterface *>(user);
    sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
    if (index >= mem->mNumBufs) {
        ALOGE("%s: invalid buffer index %d, max allowed is %d", __FUNCTION__,
             index, mem->mNumBufs);
        return;
    }
    __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
}
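The pattern above, a static C-style function that recovers `this` from the `void* user` cookie and forwards to a member function after a bounds check, can be sketched in isolation. All names here (`FrameReceiver`, `frame_cb`, `trampoline`) are hypothetical, not from the AOSP source:

```cpp
#include <cstdio>

// Hypothetical C-style callback type, mirroring the shape of data_callback.
typedef void (*frame_cb)(int msgType, int bufferIndex, void* user);

struct FrameReceiver {
    int lastMsg = -1;
    int lastIndex = -1;
    int numBufs = 4;  // plays the role of mNumBufs: indices past this are rejected

    // The trampoline: static, so it can be stored as a plain C function pointer
    // in an ops table that knows nothing about C++ objects.
    static void trampoline(int msgType, int bufferIndex, void* user) {
        FrameReceiver* self = static_cast<FrameReceiver*>(user);
        if (bufferIndex >= self->numBufs) {  // same guard as __data_cb
            std::fprintf(stderr, "invalid buffer index %d\n", bufferIndex);
            return;                          // drop the callback instead of overrunning
        }
        self->onFrame(msgType, bufferIndex);
    }

    void onFrame(int msgType, int bufferIndex) {
        lastMsg = msgType;
        lastIndex = bufferIndex;
    }
};
```

The C layer would be handed `&FrameReceiver::trampoline` together with the object pointer as `user`, exactly as `setCallbacks()` hands `__data_cb` and `this` to `mDevice->ops->set_callbacks`.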
Location: frameworks/base/core/jni/android_hardware_Camera.cpp
takePicture():
Gets the already-opened camera instance and calls its takePicture() interface.
Note that this function performs some extra handling for RAW_IMAGE:
If a RAW callback is requested, it checks whether a corresponding buffer can be found in the context.
If no buffer is found, the CAMERA_MSG_RAW_IMAGE bit is cleared and replaced with CAMERA_MSG_RAW_IMAGE_NOTIFY.
After the replacement, only a notification message is delivered, without the corresponding image data.
static void android_hardware_Camera_takePicture(JNIEnv *env, jobject thiz, jint msgType)
{
    ALOGV("takePicture");
    JNICameraContext* context;
    // analyzed previously: https://my.oschina.net/u/920274/blog/5034592
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    if (camera == 0) return;

    /*
     * When CAMERA_MSG_RAW_IMAGE is requested, if the raw image callback
     * buffer is available, CAMERA_MSG_RAW_IMAGE is enabled to get the
     * notification _and_ the data; otherwise, CAMERA_MSG_RAW_IMAGE_NOTIFY
     * is enabled to receive the callback notification but no data.
     *
     * Note that CAMERA_MSG_RAW_IMAGE_NOTIFY is not exposed to the
     * Java application.
     */
    if (msgType & CAMERA_MSG_RAW_IMAGE) {
        ALOGV("Enable raw image callback buffer");
        if (!context->isRawImageCallbackBufferAvailable()) {
            ALOGV("Enable raw image notification, since no callback buffer exists");
            msgType &= ~CAMERA_MSG_RAW_IMAGE;
            msgType |= CAMERA_MSG_RAW_IMAGE_NOTIFY;
        }
    }

    if (camera->takePicture(msgType) != NO_ERROR) {
        jniThrowRuntimeException(env, "takePicture failed");
        return;
    }
}
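The RAW-flag substitution can be isolated as a small helper. The flag values below are made up for illustration; the real CAMERA_MSG_* constants are bitmasks defined in the platform headers:

```cpp
// Hypothetical flag values, for illustration only.
enum {
    MSG_RAW_IMAGE        = 0x080,
    MSG_RAW_IMAGE_NOTIFY = 0x400,
};

// Mirrors the JNI logic: when no raw callback buffer is available, downgrade
// the RAW_IMAGE request to a notification-only message, leaving all other
// requested bits untouched.
int adjustMsgType(int msgType, bool rawBufferAvailable) {
    if ((msgType & MSG_RAW_IMAGE) && !rawBufferAvailable) {
        msgType &= ~MSG_RAW_IMAGE;        // drop the request for image data
        msgType |= MSG_RAW_IMAGE_NOTIFY;  // keep only the notification
    }
    return msgType;
}
```

The key property is that the two bits are mutually exclusive afterwards, which is exactly what CameraClient::takePicture() later verifies before rejecting the request with BAD_VALUE.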
After camera->takePicture(msgType) is called, control arrives at the following place.
Location: frameworks/av/camera/Camera.cpp
takePicture():
Gets an ICamera and calls its takePicture interface.
This is a simple direct delegation via return.
// take a picture
status_t Camera::takePicture(int msgType)
{
    ALOGV("takePicture: 0x%x", msgType);
    // analyzed at https://my.oschina.net/u/920274/blog/5034592
    sp<ICamera> c = mCamera;
    if (c == 0) return NO_INIT;
    return c->takePicture(msgType);
}
Then we jump to:
Location: frameworks/av/camera/ICamera.cpp
takePicture():
Sends the corresponding command to the server side via the Binder mechanism.
What actually ends up being invoked is CameraClient::takePicture().
// take a picture - returns an IMemory (ref-counted mmap)
status_t takePicture(int msgType)
{
    ALOGV("takePicture: 0x%x", msgType);
    Parcel data, reply;
    data.writeInterfaceToken(ICamera::getInterfaceDescriptor());
    data.writeInt32(msgType);
    // this call is analyzed below
    remote()->transact(TAKE_PICTURE, data, &reply);
    status_t ret = reply.readInt32();
    return ret;
}
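The proxy pattern here, marshal the arguments into a Parcel, transact, unmarshal the status from the reply, can be sketched with a toy in-memory Parcel. Everything below (`Parcel`, `serviceTransact`, `proxyTakePicture`) is a simplified stand-in, not the real Binder API:

```cpp
#include <cstdint>
#include <vector>

// A toy Parcel: just a FIFO of 32-bit words, enough to show the pattern.
struct Parcel {
    std::vector<int32_t> words;
    std::size_t pos = 0;
    void writeInt32(int32_t v) { words.push_back(v); }
    int32_t readInt32() { return words[pos++]; }
};

enum { TAKE_PICTURE = 7, NO_ERROR = 0 };

// Stand-in for the service side: handles TAKE_PICTURE by unpacking the
// argument and packing a status word into the reply.
void serviceTransact(int code, Parcel& data, Parcel* reply) {
    if (code == TAKE_PICTURE) {
        int32_t msgType = data.readInt32();  // unpack the argument
        (void)msgType;
        reply->writeInt32(NO_ERROR);         // pack the return status
    }
}

// Proxy side, mirroring ICamera's takePicture(): marshal, transact, unmarshal.
int proxyTakePicture(int msgType) {
    Parcel data, reply;
    data.writeInt32(msgType);
    serviceTransact(TAKE_PICTURE, data, &reply);  // stands in for remote()->transact()
    return reply.readInt32();
}
```

In the real code, `remote()->transact()` crosses the process boundary through the Binder driver; the marshalling discipline on both ends is what makes the call look like an ordinary function call to the client.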
remote()->transact(TAKE_PICTURE, data, &reply);
This Binder call is dispatched on the server side, reaching CameraService::onTransact() in CameraService.cpp.
status_t CameraService::onTransact(uint32_t code, const Parcel& data, Parcel* reply,
        uint32_t flags) {

    const int pid = getCallingPid();
    const int selfPid = getpid();

    // Permission checks
    switch (code) {
        case BnCameraService::CONNECT:
        case BnCameraService::CONNECT_DEVICE:
        case BnCameraService::CONNECT_LEGACY: {
            if (pid != selfPid) {
                // we're called from a different process, do the real check
                if (!checkCallingPermission(
                        String16("android.permission.CAMERA"))) {
                    const int uid = getCallingUid();
                    ALOGE("Permission Denial: "
                            "can't use the camera pid=%d, uid=%d", pid, uid);
                    return PERMISSION_DENIED;
                }
            }
            break;
        }
        case BnCameraService::NOTIFY_SYSTEM_EVENT: {
            if (pid != selfPid) {
                // Ensure we're being called by system_server, or similar process with
                // permissions to notify the camera service about system events
                if (!checkCallingPermission(
                        String16("android.permission.CAMERA_SEND_SYSTEM_EVENTS"))) {
                    const int uid = getCallingUid();
                    ALOGE("Permission Denial: cannot send updates to camera service about system"
                            " events from pid=%d, uid=%d", pid, uid);
                    return PERMISSION_DENIED;
                }
            }
            break;
        }
    }

    return BnCameraService::onTransact(code, data, reply, flags); // then this is reached
}
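The permission gate above follows one simple rule: calls originating from the service's own process are trusted, while cross-process calls must hold the relevant permission. A minimal sketch of that rule, with a hypothetical helper name:

```cpp
// Mirrors the gate in onTransact: in-process callers skip the permission
// check entirely; everyone else must actually hold the permission.
bool allowSensitiveCall(int callingPid, int servicePid, bool callerHasPermission) {
    if (callingPid == servicePid) {
        return true;  // same process as the service: trusted implicitly
    }
    return callerHasPermission;  // cross-process: enforce the permission
}
```

In the real service, `callerHasPermission` corresponds to `checkCallingPermission(String16("android.permission.CAMERA"))`, which queries the caller's identity recorded by the Binder driver.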
Continuing down into ICameraService.cpp:
status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        ...
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
The call then reaches ICamera.cpp:
status_t BnCamera::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case TAKE_PICTURE: {
            ALOGV("TAKE_PICTURE");
            CHECK_INTERFACE(ICamera, data, reply);
            int msgType = data.readInt32();
            reply->writeInt32(takePicture(msgType));
            return NO_ERROR;
        } break;
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}
This lands in takePicture() in CameraClient.cpp, because CameraClient inherits from BnCamera.
Continuing down into CameraClient.cpp:
// take a picture - image is returned in callback
status_t CameraClient::takePicture(int msgType) {
    LOG1("takePicture (pid %d): 0x%x", getCallingPid(), msgType);

    Mutex::Autolock lock(mLock);
    status_t result = checkPidAndHardware();
    if (result != NO_ERROR) return result;

    if ((msgType & CAMERA_MSG_RAW_IMAGE) &&
            (msgType & CAMERA_MSG_RAW_IMAGE_NOTIFY)) {
        ALOGE("CAMERA_MSG_RAW_IMAGE and CAMERA_MSG_RAW_IMAGE_NOTIFY"
                " cannot be both enabled");
        return BAD_VALUE;
    }

    // We only accept picture related message types
    // and ignore other types of messages for takePicture().
    int picMsgType = msgType
            & (CAMERA_MSG_SHUTTER |
               CAMERA_MSG_POSTVIEW_FRAME |
               CAMERA_MSG_RAW_IMAGE |
               CAMERA_MSG_RAW_IMAGE_NOTIFY |
               CAMERA_MSG_COMPRESSED_IMAGE);

    enableMsgType(picMsgType);

    return mHardware->takePicture();
}
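The masking step above silently discards any bits that are not picture-related before enabling the message types. A minimal sketch of the same idea, with hypothetical bit values standing in for the real CAMERA_MSG_* constants:

```cpp
// Hypothetical bit values, for illustration only.
enum {
    MSG_SHUTTER          = 0x002,
    MSG_POSTVIEW_FRAME   = 0x040,
    MSG_COMPRESSED_IMAGE = 0x100,
    MSG_PREVIEW_FRAME    = 0x010,  // not picture-related: must be filtered out
};

// Mirrors picMsgType in CameraClient::takePicture(): AND against a whitelist
// of picture-related bits, so stray flags never reach enableMsgType().
int filterPictureMsgType(int msgType) {
    return msgType & (MSG_SHUTTER | MSG_POSTVIEW_FRAME | MSG_COMPRESSED_IMAGE);
}
```

Whitelisting with a single AND is cheaper and safer than rejecting the request outright: the caller's picture-related intent survives even if it passed extra bits by mistake.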
Location: frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h
takePicture():
Through the function pointer set in mDevice, this calls the platform-specific implementation of the takePicture operation in the HAL layer.
From here on, the flow is platform-specific. That part is not my main focus, and the previous note already explored it in some depth, so I will not dig further here.
Once the control flow reaches the HAL layer, it sends control commands down to the Linux drivers, so that the actual camera device executes the commands and captures data.
/**
 * Take a picture.
 */
status_t takePicture()
{
    ALOGV("%s(%s)", __FUNCTION__, mName.string());
    if (mDevice->ops->take_picture)
        return mDevice->ops->take_picture(mDevice);
    return INVALID_OPERATION;
}
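The HAL boundary here is a table of function pointers that may have unset entries, so every dispatch null-checks first. A slimmed-down sketch of that ops-table pattern, with hypothetical type and function names in the spirit of camera_device_ops_t:

```cpp
// Hypothetical ops table: the framework calls the HAL only through function
// pointers, and must tolerate entries the vendor left unset.
struct device;
struct device_ops {
    int (*take_picture)(device* dev);  // may be null if unsupported
};
struct device {
    device_ops* ops;
    int pictures_taken = 0;
};

enum { OK = 0, INVALID_OPERATION = -38 };

// A toy vendor implementation.
int hal_take_picture(device* dev) {
    dev->pictures_taken++;
    return OK;
}

// Mirrors CameraHardwareInterface::takePicture(): null-check, then dispatch.
int takePicture(device* dev) {
    if (dev->ops->take_picture)
        return dev->ops->take_picture(dev);
    return INVALID_OPERATION;
}
```

This indirection is what decouples the framework from any particular vendor: the same CameraHardwareInterface code works against whichever ops table the HAL module filled in at open time.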
What follows next is the data flow.
Since the data flow is implemented through callback functions, I will analyze it from the bottom layer upward.