Android Camera Source Code Analysis

Looking back, nearly every project of the past six months involved the camera: from gesture-recognition air-conditioner control to face-recognition door access, all of them revolved around camera data processing and rendering. "Camera" here is not limited to the local device; it also includes remote RTSP cameras whose streams must be pulled to the local machine for rendering.

Over the past couple of days I read through the Camera source code and got a rough picture of the overall architecture. In summary, there is not that much to it; Android's modules all follow a similar shape. There is a system service; Java wraps one layer and Native wraps another, and the Java-layer and Native-layer objects are bound to each other. A request travels from the Java layer down to the Native layer and is then handed across processes to the system service. Binder is unavoidable here, for passing interfaces, file descriptors and the like.

For large blocks of memory, however, you cannot rely on Binder; that is where shared memory comes in: open a device to get a descriptor, and each process maps it into its own user space. The descriptor itself is passed over Binder. Note that the descriptor's numeric value may change after crossing a process boundary, but that does not matter: in kernel space it still refers to the same file object. Back to the system service: it will typically call further down into the HAL layer, a set of standard device interfaces. Vendors ship their implementations as .so libraries; the service loads them and calls the corresponding device operation functions. When the lower layers obtain data, or a state change needs to be reported upward, it propagates back up layer by layer through callbacks.
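The shared-memory point above can be illustrated with a minimal Linux sketch (using `memfd_create` in place of Android's actual ashmem/IMemory machinery, so this is an analogy, not the real framework code): two different descriptor numbers, like one passed across Binder, still refer to the same kernel file object, so both mappings see the same bytes.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

// Returns 1 if a write through one mapping is visible through the other.
// dup() stands in for the descriptor changing across a Binder transfer:
// the fd number differs, but the underlying kernel file object is shared.
int shared_fd_demo() {
    int fd1 = memfd_create("camera_frame", 0);   // anonymous shared file
    if (fd1 < 0 || ftruncate(fd1, 4096) != 0) return -1;
    int fd2 = dup(fd1);                          // "different" descriptor

    char* a = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd1, 0);
    char* b = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd2, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) return -1;

    strcpy(a, "frame data");                     // write via one mapping
    int ok = (strcmp(b, "frame data") == 0);     // read via the other

    munmap(a, 4096); munmap(b, 4096);
    close(fd1); close(fd2);
    return ok;
}
```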

With this big picture in mind, reading the source becomes much easier. Overall, Android has two cores: inter-process communication and memory management. Everything else, whether system services such as AMS, WMS and PMS or the camera and Bluetooth stacks, can be viewed as business logic layered on top.

Now let's walk through the source, starting from the Java-layer Camera calls.

A typical Camera call sequence looks like this:

Camera camera = Camera.open(1);
camera.setPreviewTexture(surfaceTexture);
camera.setPreviewCallbackWithBuffer(callback);
camera.addCallbackBuffer(buffer);
camera.startPreview();

This article focuses on these functions and how they are implemented underneath.

First, Camera.open, a static function. Camera has a system service that is registered with ServiceManager at system boot. Each call to Camera.open creates a Java Camera object, then calls down to the native layer to initialize: it connects to CameraService, obtains the corresponding Binder handle, creates the native-layer Camera object and its context structure, and binds them to the Java-layer Camera class.

Camera.open calls down into native_setup, passing the cameraId:

// android_hardware_Camera.cpp
static jint android_hardware_Camera_native_setup(JNIEnv *env, ...) {
    sp<Camera> camera = Camera::connect(cameraId, clientName, /* ... */);

    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);

    camera->setListener(context);
    env->SetLongField(thiz, fields.context, (jlong)context.get());
}
// Camera.cpp
sp<Camera> Camera::connect(int cameraId, const String16& clientPackageName, /* ... */) {
    return CameraBaseT::connect(cameraId, clientPackageName, clientUid);
}

Note that the Camera class inherits from the template class CameraBase:

// Camera.h
class Camera : public CameraBase<Camera>, public BnCameraClient

The code above calls CameraBaseT::connect, a static function. CameraBaseT is a typedef defined inside CameraBase:

typedef CameraBase<TCam>         CameraBaseT;

For the Camera class this is simply CameraBase<Camera>. Now look at the implementation of connect:

// CameraBase.cpp
template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId, /* ... */) {
    sp<TCam> c = new TCam(cameraId);
    sp<TCamCallbacks> cl = c;
    const sp<::android::hardware::ICameraService> cs = getCameraService();

    binder::Status ret;
    if (cs != nullptr) {
        TCamConnectService fnConnectService = TCamTraits::fnConnectService;
        ret = (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid, clientPid, &c->mCamera);
    }
    if (ret.isOk() && c->mCamera != nullptr) {
        IInterface::asBinder(c->mCamera)->linkToDeath(c);
        c->mStatus = NO_ERROR;
    }
    return c;
}
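The CRTP arrangement above, where the base class is parameterized on the concrete subclass, can be sketched in isolation. The names below (CameraBaseSketch, CameraSketch) are hypothetical stand-ins, but the shape mirrors how the base's static connect can construct and return the concrete type:

```cpp
#include <memory>

// Minimal CRTP sketch: the base knows the concrete subclass via the
// template parameter, so its static connect() can construct it directly,
// like CameraBase<TCam>::connect does with `new TCam(cameraId)`.
template <typename TCam>
struct CameraBaseSketch {
    typedef CameraBaseSketch<TCam> CameraBaseT;    // mirrors the real typedef
    static std::shared_ptr<TCam> connect(int cameraId) {
        auto c = std::make_shared<TCam>(cameraId); // new TCam(cameraId)
        c->mStatus = 0;                            // NO_ERROR on success
        return c;
    }
    int mStatus = -1;
};

struct CameraSketch : CameraBaseSketch<CameraSketch> {
    explicit CameraSketch(int id) : mId(id) {}
    int mId;
};
```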

This first obtains a handle to CameraService, of type ICameraService, and then calls through fnConnectService. Let's see where that function pointer points:

CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService =
        &::android::hardware::ICameraService::connect;

So the flow is: obtain the Binder handle via getCameraService, then call its connect. When the Binder call finishes, the last parameter is filled in with a handle of type ICamera; all subsequent Camera calls go through it. The first parameter is a callback of type ICameraClient. Here TCamCallbacks is ICameraClient and TCam is Camera, so if a Camera can be assigned to an ICameraClient it must implement that interface. It does: Camera inherits from BnCameraClient.
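The `(cs.get()->*fnConnectService)(...)` expression is plain pointer-to-member-function syntax. A minimal sketch, with an invented interface standing in for the real ICameraService:

```cpp
// Sketch of the fnConnectService call: a pointer to a virtual member
// function on an interface, invoked as (cs->*fn)(...). The names and
// signature here are illustrative, not the real ICameraService API.
struct ICameraServiceSketch {
    virtual ~ICameraServiceSketch() {}
    virtual int connect(int cameraId) = 0;
};

struct FakeService : ICameraServiceSketch {
    int connect(int cameraId) override { return 1000 + cameraId; }
};

typedef int (ICameraServiceSketch::*ConnectFn)(int);

int call_via_member_ptr(ICameraServiceSketch* cs, int id) {
    ConnectFn fn = &ICameraServiceSketch::connect;  // like fnConnectService
    return (cs->*fn)(id);                           // virtual dispatch still applies
}
```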

// ICameraClient.h
class BnCameraClient: public BnInterface<ICameraClient>

ICameraService is implemented by CameraService. First look at the class definition:

class CameraService :
    public BinderService<CameraService>,
    public BnCameraService,
    public IBinder::DeathRecipient,
    public camera_module_callbacks_t

It inherits from BnCameraService, which is defined in ICameraService.h:

class BnCameraService: public BnInterface<ICameraService> {
public:
    virtual status_t    onTransact(uint32_t code, /* ... */);
};

So BnCameraService's onTransact handles the Binder request and then calls CameraService's connect:

// ICameraService.cpp
case CONNECT: {
    CHECK_INTERFACE(ICameraService, data, reply);
    sp<ICameraClient> cameraClient = 
        interface_cast<ICameraClient>(data.readStrongBinder());
    int32_t cameraId = data.readInt32();
    const String16 clientName = data.readString16();
    int32_t clientUid = data.readInt32();
    sp<ICamera> camera;
    status_t status = connect(cameraClient, cameraId, clientName, 
        clientUid, camera);
    reply->writeNoException();
    reply->writeInt32(status);
    if (camera != NULL) {
        reply->writeInt32(1);
        reply->writeStrongBinder(IInterface::asBinder(camera));
    } else {
        reply->writeInt32(0);
    }
    return NO_ERROR;
} break;

Note that after connect is called, camera has been filled in and is returned directly through Binder; this is how the Camera class's mCamera gets assigned. mCamera is defined in CameraBase and has type ICamera; that class is defined in ICamera.h and declares the full set of camera operation interfaces.

status_t CameraService::connect(/* ... */) {
    ret = connectHelper<ICameraClient,Client>(cameraClient, id, CAMERA_HAL_API_VERSION_UNSPECIFIED,
            clientPackageName, clientUid, API_1, false, false, client);

    device = client;
    return NO_ERROR;
}

connectHelper is defined in CameraService.h; it creates a CameraClient, initializes it, and returns it. This class inherits from Client, Client inherits from BnCamera, and BnCamera is the Bn side of ICamera:

// CameraClient.h
class CameraClient : public CameraService::Client

// CameraService.h
class Client : public BnCamera, public BasicClient

// ICamera.h
class BnCamera: public BnInterface<ICamera>

Note ICamera's declaration of connect:

// ICamera.h
virtual status_t        connect(const sp<ICameraClient>& client) = 0;

This ICameraClient should be a callback, and its definition confirms it:

class ICameraClient: public IInterface {
public:
    DECLARE_META_INTERFACE(CameraClient);

    virtual void            notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void            dataCallback(int32_t msgType, const sp<IMemory>& data,
                                         camera_frame_metadata_t *metadata) = 0;
    virtual void            dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& data) = 0;
};

// ----------------------------------------------------------------------------

class BnCameraClient: public BnInterface<ICameraClient>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

To sum up: opening the camera connects to CameraService, and the Binder call returns an ICamera handle whose implementation on the CameraService side is CameraClient.

Next, setPreviewTexture, which calls straight into the native layer:

static void android_hardware_Camera_setPreviewTexture(JNIEnv *env,
        jobject thiz, jobject jSurfaceTexture) {
    sp<Camera> camera = get_native_camera(env, thiz, NULL);

    sp<IGraphicBufferProducer> producer = NULL;
    if (jSurfaceTexture != NULL) {
        producer = SurfaceTexture_getProducer(env, jSurfaceTexture);
    }

    camera->setPreviewTarget(producer);
}

Here the native Camera object is first located from the Java-layer Camera object, which stores a pointer to the native Camera context. Then the SurfaceTexture's producer is obtained and set as the Camera's preview target, literally the destination of the preview output. In other words, the camera's preview output becomes the producer side of the SurfaceTexture; the producer is of type IGraphicBufferProducer and corresponds to a BufferQueue.
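The producer/consumer relationship can be caricatured with a toy queue. The real BufferQueue manages graphic buffers via dequeue/queue/acquire/release with fences and buffer slots; this sketch only models the basic handoff of filled frames, with all names invented:

```cpp
#include <queue>
#include <vector>
#include <cstdint>

// Toy model of the producer/consumer handoff behind a BufferQueue:
// the camera (producer) queues filled frames; the SurfaceTexture
// (consumer) acquires them for rendering.
struct ToyBufferQueue {
    std::queue<std::vector<uint8_t>> mQueued;  // frames filled by the producer

    // Producer side: queue a filled frame.
    void queueBuffer(std::vector<uint8_t> frame) {
        mQueued.push(std::move(frame));
    }

    // Consumer side: acquire the next frame, or an empty vector if none.
    std::vector<uint8_t> acquireBuffer() {
        if (mQueued.empty()) return {};
        auto f = std::move(mQueued.front());
        mQueued.pop();
        return f;
    }
};
```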

// Camera.cpp
status_t Camera::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) {
    sp <ICamera> c = mCamera;
    return c->setPreviewTarget(bufferProducer);
}

Here mCamera is the remote ICamera handle stored in the Camera class; every camera call goes through it. Look directly at the CameraClient implementation:

status_t CameraClient::setPreviewTarget(
        const sp<IGraphicBufferProducer>& bufferProducer) {
    sp<IBinder> binder;
    sp<ANativeWindow> window;
    if (bufferProducer != 0) {
        binder = IInterface::asBinder(bufferProducer);
        window = new Surface(bufferProducer, true);
    }
    return setPreviewWindow(binder, window);
}

A Surface, which inherits from ANativeWindow, is created from the producer. Surface internals deserve a chapter of their own, so we won't go into detail here. Look at setPreviewWindow:

status_t CameraClient::setPreviewWindow(const sp<IBinder>& binder,
        const sp<ANativeWindow>& window) {
    if (window != 0) {
        result = native_window_api_connect(window.get(), NATIVE_WINDOW_API_CAMERA);
    }

    disconnectWindow(mPreviewWindow);
    mSurface = binder;
    mPreviewWindow = window;

    return result;
}

The logic is simple: the parameters are saved. Note the mHardware member,

sp<CameraHardwareInterface>     mHardware; 

which is initialized in CameraClient::initialize:

status_t CameraClient::initialize(CameraModule *module) {
    int callingPid = getCallingPid();
    status_t res = startCameraOps();

    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(module);

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)(uintptr_t)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);
    return OK;
}

This creates and initializes a CameraHardwareInterface and registers callbacks. This interface faces the HAL layer.

Next, the implementation of setPreviewCallbackWithBuffer:

public final void setPreviewCallbackWithBuffer(PreviewCallback cb) {
    mPreviewCallback = cb;
    mOneShot = false;
    mWithBuffer = true;
    if (cb != null) {
        mUsingPreviewAllocation = false;
    }
    setHasPreviewCallback(cb != null, true);
}

setHasPreviewCallback is a native function:

static void android_hardware_Camera_setHasPreviewCallback(JNIEnv *env, jobject thiz, jboolean installed, jboolean manualBuffer) {
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    context->setCallbackMode(env, installed, manualBuffer);
}

setCallbackMode involves no Binder call; it merely sets local parameters.

Now look at addCallbackBuffer:

public final void addCallbackBuffer(byte[] callbackBuffer) {
    _addCallbackBuffer(callbackBuffer, CAMERA_MSG_PREVIEW_FRAME);
}

This carries the CAMERA_MSG_PREVIEW_FRAME flag; _addCallbackBuffer is a native function. Note that Camera.java has an EventHandler:

private static void postEventFromNative(Object camera_ref, /* ... */) {
    Camera c = (Camera)((WeakReference)camera_ref).get();

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}

This is what the native layer calls up into Java. Look at the message handling:

case CAMERA_MSG_PREVIEW_FRAME:
        PreviewCallback pCb = mPreviewCallback;
        if (pCb != null) {
            pCb.onPreviewFrame((byte[])msg.obj, mCamera);
        }
        return;

So this is where onPreviewFrame is invoked. All we need to figure out is when the native layer posts messages to the Java layer. Let's continue with addCallbackBuffer's native implementation:

void JNICameraContext::addCallbackBuffer(
        JNIEnv *env, jbyteArray cbb, int msgType) {
    jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
    mCallbackBuffers.push(callbackBuffer);
}

It simply pushes the buffer onto mCallbackBuffers.
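The recycling scheme, in which the app donates buffers and the native side later picks one large enough to hold an incoming frame (cf. getCallbackBuffer in copyAndPost), can be sketched roughly. The real code stores JNI global references to Java byte arrays; std::vector stands in here:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Rough sketch of the callback-buffer pool: addCallbackBuffer donates a
// buffer, and when a frame arrives the native side removes one of
// sufficient size to copy the frame into. Names mirror the real methods,
// but the storage type is simplified.
struct CallbackBufferPool {
    std::vector<std::vector<uint8_t>> mCallbackBuffers;

    void addCallbackBuffer(std::vector<uint8_t> buf) {
        mCallbackBuffers.push_back(std::move(buf));
    }

    // Remove and return a buffer holding at least `size` bytes;
    // returns an empty vector when none fits (frame would be dropped).
    std::vector<uint8_t> getCallbackBuffer(size_t size) {
        for (auto it = mCallbackBuffers.begin(); it != mCallbackBuffers.end(); ++it) {
            if (it->size() >= size) {
                std::vector<uint8_t> out = std::move(*it);
                mCallbackBuffers.erase(it);
                return out;
            }
        }
        return {};
    }
};
```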

Finally, startPreview, a native function:

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz) {
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    camera->startPreview();
}

This looks like it will go through a Binder call:

status_t Camera::startPreview() {
    sp <ICamera> c = mCamera;
    return c->startPreview();
}

As expected. Look at the CameraClient implementation:

status_t CameraClient::startPreview() {
    return startCameraMode(CAMERA_PREVIEW_MODE);
}

The mode is either preview or recording; preview goes to startPreviewMode:

status_t CameraClient::startPreviewMode() {
    mHardware->setPreviewWindow(mPreviewWindow);
    return mHardware->startPreview();
}

The protagonist here is mHardware: first set the preview window, then call startPreview on it.

Next let's examine this mHardware, the CameraHardwareInterface class, which faces the HAL layer. When the camera is opened, CameraService creates a CameraClient, and its initialize sets up mHardware. In CameraService's onFirstRef, the camera module is loaded via hw_get_module; that module is later used when initializing mHardware.

From then on, all camera requests are handed down to the HAL layer for execution.
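The HAL contract can be caricatured as a table of function pointers supplied by the vendor module. The struct below is illustrative, not the real camera_device_ops_t, and the "vendor" module is statically linked in place of the .so that hw_get_module/dlopen would load:

```cpp
// Sketch of the HAL idea: the framework obtains a table of device
// operations from a vendor module and calls through it. Zero means
// success, matching the usual status_t convention.
struct camera_ops_sketch {
    int (*start_preview)(void* dev);
    int (*stop_preview)(void* dev);
};

// A fake "vendor" implementation standing in for a dlopen-ed module.
static int fake_start(void*) { return 0; }
static int fake_stop(void*)  { return 0; }
static camera_ops_sketch g_fake_module = { fake_start, fake_stop };

// What CameraHardwareInterface conceptually does: forward the framework
// request to the loaded module's ops table.
int hw_start_preview(camera_ops_sketch* ops, void* dev) {
    return ops->start_preview(dev);
}
```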

Finally, the frame data callbacks. As mentioned earlier, CameraClient::initialize installs callbacks into mHardware, which presumably passes them down to the HAL layer.

void CameraClient::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get());

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        ......
    }
}

Note that the dataPtr parameter of this dataCallback is a piece of shared memory, which lets the frame data cross processes efficiently. First look at CameraClient::handlePreviewData; the ICameraClient it uses is a callback interface.

void CameraClient::handlePreviewData(int32_t msgType, /* ... */) {
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);
    sp<ICameraClient> c = mRemoteCallback;
    c->dataCallback(msgType, mem, metadata);
}

When is mRemoteCallback assigned? In the CameraClient constructor, that is, when the camera is opened. The callback is an ICameraClient, also a Binder interface, passed down from the upper layer. As mentioned earlier, CameraBase's connect passes in the Camera object, which, by inheriting BnCameraClient, implements the ICameraClient interface. Look at its implementation:

void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata) {
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}

What is mListener? It is the JNICameraContext created when the camera was opened, installed into Camera in android_hardware_Camera_native_setup. JNICameraContext implements this interface:

class JNICameraContext: public CameraListener

Here postData is called; since msgType is CAMERA_MSG_PREVIEW_FRAME, it ends up in copyAndPost:

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType) {
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
    uint8_t *heapBase = (uint8_t*)heap->base();

    const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
    jbyteArray obj = getCallbackBuffer(env, &mCallbackBuffers, size);
    env->SetByteArrayRegion(obj, 0, size, data);

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
}

fields.post_event corresponds to postEventFromNative on the Java side:

fields.post_event = GetStaticMethodIDOrDie(env, clazz, "postEventFromNative",
    "(Ljava/lang/Object;IIILjava/lang/Object;)V");

So the camera data arrives via shared memory, is copied into a buffer with SetByteArrayRegion, and is delivered back to the Java layer through the callback.
