Android Camera Source Code Analysis

Looking back, the projects I worked on over the past six months were almost all Camera-related: from gesture-recognition air-conditioner control to face-recognition door access, everything centered on processing and rendering camera data. "Camera" here is not limited to local cameras; it also includes remote RTSP cameras, whose streams are pulled to the local device for rendering.

Over the past couple of days I read through the Camera source code and roughly worked out the overall architecture. Summing it up, there is actually not that much to it, and every Android module follows much the same pattern: there is a system service, with one wrapper layer in Java and one in native code, and the Java-layer and native-layer objects are bound to each other. Requests travel from the Java layer down to the native layer and are then handed across processes to the system service. Binder is unavoidable here, carrying interfaces, file descriptors, and the like. For large blocks of memory, however, you cannot rely on Binder; that is where shared memory comes in: open a device to obtain a descriptor, and each process maps it into its own user space. The descriptor itself is passed over Binder; note that its numeric value may change after crossing the process boundary, but that does not matter, because in kernel space it still refers to the same file object.

Back to the system service: it will most likely call further down into the HAL layer, a standardized set of device interfaces. Each vendor ships its implementation as a .so, which is loaded here so the corresponding device operation functions can be invoked. When the lower layers produce data, or a state change needs to be reported upward, it is propagated back up layer by layer through callbacks.
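The shared-memory mechanism described above can be sketched roughly as follows. This is a minimal Linux-only illustration, not Android's actual ashmem/Binder code: memfd_create stands in for the shared-memory device, and dup() stands in for passing the descriptor over Binder (the receiving side may see a different fd number, but it names the same kernel file object).

```cpp
#include <cassert>
#include <cstring>
#include <sys/mman.h>
#include <unistd.h>

// Sketch of the shared-memory path: one "process" writes a frame, the
// other reads it through its own mapping of the same kernel file object.
bool shared_memory_demo() {
    const size_t kSize = 4096;

    // Stand-in for opening the shared-memory device (ashmem on Android).
    int fd = memfd_create("camera_frame", 0);
    if (fd < 0 || ftruncate(fd, kSize) != 0) return false;

    // Stand-in for passing the descriptor over Binder: the receiver may
    // end up with a different fd number, but it refers to the same file.
    int peer_fd = dup(fd);

    // Each side maps the region into its own address space.
    void* writer = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    void* reader = mmap(nullptr, kSize, PROT_READ, MAP_SHARED, peer_fd, 0);
    if (writer == MAP_FAILED || reader == MAP_FAILED) return false;

    // A write through one mapping is visible through the other.
    std::memcpy(writer, "frame-data", 11);
    bool ok = std::memcmp(reader, "frame-data", 11) == 0;

    munmap(writer, kSize);
    munmap(reader, kSize);
    close(fd);
    close(peer_fd);
    return ok;
}
```

The real path adds reference counting and the IMemory/IMemoryHeap abstraction on top, but the core idea, two mappings of one kernel object, is the same.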

With this big picture in mind, reading the source becomes much easier. Overall, Android has two cores: inter-process communication and memory management. Everything else, whether system services such as AMS, WMS, and PMS, or the camera and Bluetooth stacks, can be regarded as business layers built on top of them.

With that said, let's walk through the source code starting from the Java-layer Camera calls.

A typical sequence of Camera calls looks like this:

Camera camera = Camera.open(1);
camera.setPreviewTexture(surfaceTexture);
camera.setPreviewCallbackWithBuffer(callback);
camera.addCallbackBuffer(buffer);
camera.startPreview();

This article focuses on these functions and examines how they are implemented underneath.

Let's start with Camera.open, which is a static method. The camera has a system service that is registered with ServiceManager at system startup. Every call to Camera.open creates a Java Camera object, which calls into the native layer to initialize: it connects to CameraService, obtains the corresponding Binder handle, creates the native-layer Camera object and its context structure, and binds them to the Java-layer Camera instance.

Camera.open calls down into native_setup, passing the cameraId:

// android_hardware_Camera.cpp
static jint android_hardware_Camera_native_setup(JNIEnv *env, ...) {
    sp<Camera> camera = Camera::connect(cameraId, clientName, ...);

    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);

    camera->setListener(context);
    env->SetLongField(thiz, fields.context, (jlong)context.get());
}
// Camera.cpp
sp<Camera> Camera::connect(int cameraId, const String16& clientPackageName, ...) {
    return CameraBaseT::connect(cameraId, clientPackageName, clientUid);
}

Note that the Camera class derives from the class template CameraBase:

// Camera.h
class Camera : public CameraBase<Camera>, public BnCameraClient

The code above calls CameraBaseT::connect, a static function. CameraBaseT is a typedef defined inside CameraBase:

typedef CameraBase<TCam>         CameraBaseT;

For the Camera class, CameraBaseT is simply CameraBase&lt;Camera&gt;. Let's look further at the implementation of connect:

// CameraBase.cpp
template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId, ...) {
    sp<TCam> c = new TCam(cameraId);
    sp<TCamCallbacks> cl = c;
    const sp<::android::hardware::ICameraService> cs = getCameraService();

    binder::Status ret;
    if (cs != nullptr) {
        TCamConnectService fnConnectService = TCamTraits::fnConnectService;
        ret = (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid, clientPid, &c->mCamera);
    }
    if (ret.isOk() && c->mCamera != nullptr) {
        IInterface::asBinder(c->mCamera)->linkToDeath(c);
        c->mStatus = NO_ERROR;
    }
    return c;
}

This first obtains a handle to CameraService, of type ICameraService, and then calls through fnConnectService. Let's see where that function pointer points:

CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService =
        &::android::hardware::ICameraService::connect;

So connect first obtains the Binder handle via getCameraService and then calls the corresponding connect function on it. The last parameter is filled in with a handle when the Binder call completes; all subsequent camera calls go through this handle, which is of type ICamera. The first parameter is the callback, of type ICameraClient. Here TCamCallbacks is ICameraClient and TCam is Camera, so if a Camera can be assigned to an ICameraClient, it must implement that interface. And indeed it does: Camera inherits from BnCameraClient:

// ICameraClient.h
class BnCameraClient: public BnInterface<ICameraClient>
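The CRTP-plus-traits pattern that CameraBase uses above can be sketched in miniature. Everything here (FakeService, Traits, Widget) is an invented stand-in; only the call shape `(obj->*fnConnectService)(...)` mirrors the real code.

```cpp
#include <cassert>

// Stand-in for the remote service interface (ICameraService).
struct FakeService {
    int connect(int id) { return 100 + id; }   // returns a fake "handle"
};

// Traits class in the spirit of CameraTraits<Camera>: it names which
// service member function connect() should call.
template <typename TCam>
struct Traits {
    using ConnectFn = int (FakeService::*)(int);
    static ConnectFn fnConnectService;
};

// CRTP base in the spirit of CameraBase<TCam>: the static connect()
// constructs the derived object and calls through the traits' pointer.
template <typename TCam, typename TTraits = Traits<TCam>>
struct Base {
    int mHandle = -1;
    typedef Base<TCam, TTraits> BaseT;   // mirrors the CameraBaseT typedef
    static TCam connect(FakeService& svc, int id) {
        TCam c;
        typename TTraits::ConnectFn fn = TTraits::fnConnectService;
        c.mHandle = (svc.*fn)(id);       // same shape as (cs.get()->*fn)(...)
        return c;
    }
};

// The "Camera" of this sketch.
struct Widget : Base<Widget> {};

template <>
Traits<Widget>::ConnectFn Traits<Widget>::fnConnectService = &FakeService::connect;
```

The point of the traits indirection is that CameraBase can serve both Camera and ProCamera-style clients while each picks its own connect entry point via the static member-function pointer.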

The implementation of ICameraService lives in CameraService. First, the definition of the CameraService class:

class CameraService :
    public BinderService<CameraService>,
    public BnCameraService,
    public IBinder::DeathRecipient,
    public camera_module_callbacks_t

As you can see, it inherits from BnCameraService, which is declared in ICameraService.h:

class BnCameraService: public BnInterface<ICameraService> {
public:
    virtual status_t    onTransact( uint32_t code, ... );
};

So BnCameraService::onTransact handles the Binder request and then calls CameraService::connect:

// ICameraService.cpp
case CONNECT: {
    CHECK_INTERFACE(ICameraService, data, reply);
    sp<ICameraClient> cameraClient = 
        interface_cast<ICameraClient>(data.readStrongBinder());
    int32_t cameraId = data.readInt32();
    const String16 clientName = data.readString16();
    int32_t clientUid = data.readInt32();
    sp<ICamera> camera;
    status_t status = connect(cameraClient, cameraId, clientName, 
        clientUid, camera);
    reply->writeNoException();
    reply->writeInt32(status);
    if (camera != NULL) {
        reply->writeInt32(1);
        reply->writeStrongBinder(IInterface::asBinder(camera));
    } else {
        reply->writeInt32(0);
    }
    return NO_ERROR;
} break;

Note that after connect is called, camera has been set, and it is written straight back through the Binder reply; that is how the Camera class's mCamera gets assigned. mCamera is defined in CameraBase and has type ICamera, declared in ICamera.h, which defines the full set of camera operation interfaces.

status_t CameraService::connect(...) {
    ret = connectHelper<ICameraClient,Client>(cameraClient, id, CAMERA_HAL_API_VERSION_UNSPECIFIED,
            clientPackageName, clientUid, API_1, false, false, client);

    device = client;
    return NO_ERROR;
}

connectHelper is defined in CameraService.h; it creates a CameraClient, initializes it, and returns it. CameraClient derives from Client, Client derives from BnCamera, and BnCamera is the Bn (server) side of ICamera:

// CameraClient.h
class CameraClient : public CameraService::Client

// CameraService.h
class Client : public BnCamera, public BasicClient

// ICamera.h
class BnCamera: public BnInterface<ICamera>

Note how connect is declared in the ICamera class:

// ICamera.h
virtual status_t        connect(const sp<ICameraClient>& client) = 0;

The ICameraClient here should be the callback, and its definition confirms it:

class ICameraClient: public IInterface {
public:
    DECLARE_META_INTERFACE(CameraClient);

    virtual void            notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2) = 0;
    virtual void            dataCallback(int32_t msgType, const sp<IMemory>& data,
                                         camera_frame_metadata_t *metadata) = 0;
    virtual void            dataCallbackTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& data) = 0;
};

// ----------------------------------------------------------------------------

class BnCameraClient: public BnInterface<ICameraClient>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};

In summary: opening the camera connects to CameraService, and the Binder call returns an ICamera handle whose implementation on the CameraService side is CameraClient.

Next up is setPreviewTexture, which calls directly into the native layer:

static void android_hardware_Camera_setPreviewTexture(JNIEnv *env,
        jobject thiz, jobject jSurfaceTexture) {
    sp<Camera> camera = get_native_camera(env, thiz, NULL);

    sp<IGraphicBufferProducer> producer = NULL;
    if (jSurfaceTexture != NULL) {
        producer = SurfaceTexture_getProducer(env, jSurfaceTexture);
    }

    camera->setPreviewTarget(producer);
}

This first locates the native Camera object from the Java Camera object (the Java object stores a pointer to the native Camera context). It then fetches the SurfaceTexture's producer and installs it as the camera's preview target, i.e. the destination of the preview output. In other words, the camera's preview output becomes the producer side of the SurfaceTexture; the producer is of type IGraphicBufferProducer and corresponds to a BufferQueue.

// Camera.cpp
status_t Camera::setPreviewTarget(const sp<IGraphicBufferProducer>& bufferProducer) {
    sp <ICamera> c = mCamera;
    return c->setPreviewTarget(bufferProducer);
}

Here mCamera is the remote ICamera handle stored in the Camera class; every camera call goes through it. Let's look directly at the implementation in CameraClient:

status_t CameraClient::setPreviewTarget(
        const sp<IGraphicBufferProducer>& bufferProducer) {
    sp<IBinder> binder;
    sp<ANativeWindow> window;
    if (bufferProducer != 0) {
        binder = IInterface::asBinder(bufferProducer);
        window = new Surface(bufferProducer, true);
    }
    return setPreviewWindow(binder, window);
}

Here a Surface, which derives from ANativeWindow, is created from the producer. A source-level walkthrough of Surface deserves a chapter of its own, so we won't go into detail here. Let's look at the implementation of setPreviewWindow:

status_t CameraClient::setPreviewWindow(const sp<IBinder>& binder,
        const sp<ANativeWindow>& window) {
    if (window != 0) {
        result = native_window_api_connect(window.get(), NATIVE_WINDOW_API_CAMERA);
    }

    disconnectWindow(mPreviewWindow);
    mSurface = binder;
    mPreviewWindow = window;

    return result;
}

The logic here is simple: it just stores the arguments. Note that CameraClient has a member mHardware:

sp<CameraHardwareInterface>     mHardware; 

which is initialized in CameraClient::initialize:

status_t CameraClient::initialize(CameraModule *module) {
    int callingPid = getCallingPid();
    status_t res = startCameraOps();

    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(module);

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            (void *)(uintptr_t)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
                  CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);
    return OK;
}

This creates and initializes a CameraHardwareInterface and registers the callbacks. This interface evidently faces the HAL layer.

Now let's look at the implementation of setPreviewCallbackWithBuffer:

public final void setPreviewCallbackWithBuffer(PreviewCallback cb) {
    mPreviewCallback = cb;
    mOneShot = false;
    mWithBuffer = true;
    if (cb != null) {
        mUsingPreviewAllocation = false;
    }
    setHasPreviewCallback(cb != null, true);
}

setHasPreviewCallback is a native function:

static void android_hardware_Camera_setHasPreviewCallback(JNIEnv *env, jobject thiz, jboolean installed, jboolean manualBuffer) {
    JNICameraContext* context;
    sp<Camera> camera = get_native_camera(env, thiz, &context);
    context->setCallbackMode(env, installed, manualBuffer);
}

There is no Binder call in setCallbackMode; it merely sets a few local flags.

Next, the implementation of addCallbackBuffer:

public final void addCallbackBuffer(byte[] callbackBuffer) {
    _addCallbackBuffer(callbackBuffer, CAMERA_MSG_PREVIEW_FRAME);
}

This carries the CAMERA_MSG_PREVIEW_FRAME flag, and _addCallbackBuffer is a native function. Note that Camera.java has an EventHandler:

private static void postEventFromNative(Object camera_ref, ...) {
    Camera c = (Camera)((WeakReference)camera_ref).get();

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}

This is evidently called from the native layer into the Java layer. Let's look at how the message is handled:

case CAMERA_MSG_PREVIEW_FRAME:
        PreviewCallback pCb = mPreviewCallback;
        if (pCb != null) {
            pCb.onPreviewFrame((byte[])msg.obj, mCamera);
        }
        return;

So this is where onPreviewFrame is invoked; all that remains is to find out when the native layer posts this message to Java. Let's continue with the native implementation of addCallbackBuffer:

void JNICameraContext::addCallbackBuffer(
        JNIEnv *env, jbyteArray cbb, int msgType) {
    jbyteArray callbackBuffer = (jbyteArray)env->NewGlobalRef(cbb);
    mCallbackBuffers.push(callbackBuffer);
}

So it simply pushes the buffer onto mCallbackBuffers.

Finally, the implementation of startPreview, another native function:

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz) {
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    camera->startPreview();
}

From here it looks like a Binder call is coming:

status_t Camera::startPreview() {
    sp <ICamera> c = mCamera;
    return c->startPreview();
}

As expected. Here is the implementation in CameraClient:

status_t CameraClient::startPreview() {
    return startCameraMode(CAMERA_PREVIEW_MODE);
}

The mode is either preview or recording; for preview this lands in startPreviewMode:

status_t CameraClient::startPreviewMode() {
    mHardware->setPreviewWindow(mPreviewWindow);
    return mHardware->startPreview();
}

The protagonist here is clearly mHardware: first the preview window is set, then startPreview is called.

Next let's examine mHardware, an instance of CameraHardwareInterface, the class that faces the HAL layer below. It is initialized when CameraService creates the CameraClient and calls its initialize during camera open. In CameraService::onFirstRef, the camera module is loaded via hw_get_module, and that module is later used when initializing mHardware.

From then on, every camera request is handed down to the HAL layer for execution.
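The HAL contract described above, a vendor .so exposing a standard table of function pointers that the service loads and calls, can be sketched like this. The real interface is camera_device_ops_t obtained via hw_get_module; here a statically linked "vendor" table stands in for the dynamically loaded library, and all names are invented for illustration.

```cpp
#include <cassert>

// Standard device-operations table, in the spirit of camera_device_ops_t:
// the framework defines the shape, the vendor fills in the pointers.
struct camera_ops {
    int  (*start_preview)(void* dev);
    void (*stop_preview)(void* dev);
};

// ---- Vendor side (would live in the vendor-supplied .so) ----
struct vendor_device { bool previewing = false; };

static int vendor_start_preview(void* dev) {
    static_cast<vendor_device*>(dev)->previewing = true;
    return 0;
}
static void vendor_stop_preview(void* dev) {
    static_cast<vendor_device*>(dev)->previewing = false;
}

// What loading the module would hand back to the framework.
static const camera_ops kVendorOps = { vendor_start_preview, vendor_stop_preview };

// ---- Framework side (CameraHardwareInterface's role) ----
struct HardwareInterface {
    const camera_ops* ops;   // vendor's table
    void* dev;               // vendor's opaque device handle
    int  startPreview() { return ops->start_preview(dev); }
    void stopPreview()  { ops->stop_preview(dev); }
};
```

The framework never sees the vendor's types; it only calls through the agreed-upon table with an opaque device pointer, which is what lets one service binary drive any vendor's camera.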

Finally, let's look at the camera frame-data callback. As mentioned earlier, CameraClient::initialize installs the callbacks into mHardware, and they are presumably passed further down into the HAL layer.

void CameraClient::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    sp<CameraClient> client = static_cast<CameraClient*>(getClientFromCookie(user).get());

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        ......
    }
}

Note that the dataPtr parameter of this dataCallback is a piece of shared memory (IMemory), so the frame data can be shared across processes efficiently. First, the implementation of CameraClient::handlePreviewData; the ICameraClient it uses is a callback interface:

void CameraClient::handlePreviewData(int32_t msgType, ...) {
    sp<IMemoryHeap> heap = mem->getMemory(&offset, &size);
    sp<ICameraClient> c = mRemoteCallback;
    c->dataCallback(msgType, mem, metadata);
}

When is mRemoteCallback assigned? In the CameraClient constructor, i.e. when the camera is opened. The callback is an ICameraClient, itself a Binder interface passed down from the upper layer. As mentioned earlier, CameraBase::connect passes in the Camera object itself, and since Camera inherits from BnCameraClient, it implements the ICameraClient interface. Here is its implementation:

void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata) {
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}

What is mListener here? It is the JNICameraContext created when the camera was opened, installed into the Camera object in android_hardware_Camera_native_setup. JNICameraContext implements the listener interface:

class JNICameraContext: public CameraListener

postData is called here, and since msgType is CAMERA_MSG_PREVIEW_FRAME, it ends up in copyAndPost:

void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType) {
    ssize_t offset;
    size_t size;
    sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
    uint8_t *heapBase = (uint8_t*)heap->base();

    const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
    jbyteArray obj = getCallbackBuffer(env, &mCallbackBuffers, size);
    env->SetByteArrayRegion(obj, 0, size, data);

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
}

Here fields.post_event corresponds to postEventFromNative on the Java side:

fields.post_event = GetStaticMethodIDOrDie(env, clazz, "postEventFromNative",
    "(Ljava/lang/Object;IIILjava/lang/Object;)V");

So the frame data arrives over shared memory, is copied into the callback buffer via SetByteArrayRegion, and is then posted up to the Java layer.
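Putting the pieces together, the delivery path for one preview frame — HAL data callback, CameraClient::handlePreviewData, the remote ICameraClient::dataCallback, listener postData, and finally the Java-side onPreviewFrame — can be sketched as a chain of interfaces. All names below are simplified stand-ins for the classes discussed above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

using Frame = std::vector<uint8_t>;

// Stand-in for CameraListener (implemented by JNICameraContext): the last
// native hop before the data is posted into the Java layer.
struct Listener {
    Frame last;
    void postData(const Frame& f) { last = f; }  // would copy into a Java byte[]
};

// Stand-in for the client-side Camera object (BnCameraClient): it receives
// the Binder callback and forwards to its listener.
struct ClientCamera {
    Listener* listener = nullptr;
    void dataCallback(const Frame& f) { if (listener) listener->postData(f); }
};

// Stand-in for the service-side CameraClient: the HAL hands it a frame and
// it forwards the frame to the remote callback (mRemoteCallback).
struct ServiceClient {
    ClientCamera* remote = nullptr;              // really an ICameraClient proxy
    void handlePreviewData(const Frame& f) { if (remote) remote->dataCallback(f); }
};
```

In the real system the ServiceClient-to-ClientCamera hop crosses a process boundary over Binder, with the frame payload carried in shared memory rather than copied through the call.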
