Android Camera Source Code Analysis: Device Driver, HAL, Framework


1 Preface

The Android source code analyzed in this article comes from Android-x86, version 5.1, so it may differ somewhat from the Android system found on phones.

2 V4L2

V4L2 is the Linux driver interface for video devices such as cameras: an application opens the camera device with open to get a file descriptor and can then operate on it with read, write, ioctl and so on. The Android HAL does exactly the same; the code that talks to the device directly lives in /hardware/libcamera/V4L2Camera.cpp. My project uses a virtual camera device, v4l2loopback, and some of the ioctls Android relies on do not behave as expected on that device and had to be modified, so this section gives an overview of V4L2, based mainly on the official V4L2 documentation.

V4L2 is a large, all-encompassing device driver API, but a camera needs only a small part of it. The ioctls used in the Android HAL are:

  • VIDIOC_QUERYCAP
  • VIDIOC_ENUM_FMT
  • VIDIOC_ENUM_FRAMESIZES
  • VIDIOC_ENUM_FRAMEINTERVALS
  • VIDIOC_TRY_FMT / VIDIOC_S_FMT / VIDIOC_G_FMT
  • VIDIOC_S_PARM / VIDIOC_G_PARM
  • VIDIOC_S_JPEGCOMP / VIDIOC_G_JPEGCOMP
  • VIDIOC_REQBUFS
  • VIDIOC_QUERYBUF
  • VIDIOC_QBUF / VIDIOC_DQBUF
  • VIDIOC_STREAMON / VIDIOC_STREAMOFF

Each of these ioctls is described below.

2.1 ioctls

VIDIOC_QUERYCAP

Queries the device's capabilities and type; essentially every V4L2 application issues this ioctl right after opening the device to find out what kind of device it has. The caller passes in a v4l2_capability structure to receive the result. For a camera, the V4L2_CAP_VIDEO_CAPTURE and V4L2_CAP_STREAMING bits of v4l2_capability->capabilities must both be set.

V4L2_CAP_VIDEO_CAPTURE means the device supports video capture, the basic function of a camera.

V4L2_CAP_STREAMING means the device supports streaming I/O, a memory-mapped way of moving data directly between the kernel and the application.
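
As a quick illustration (not taken from the HAL source), a minimal check of these two bits might look like this:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

// Open a video device and verify it is a streaming capture device.
int open_and_check(const char* dev)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
        return -1;

    // A camera usable by the HAL must support both capture and streaming I/O
    if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) ||
        !(cap.capabilities & V4L2_CAP_STREAMING)) {
        printf("%s is not a streaming capture device\n", dev);
        return -1;
    }
    return fd;
}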

VIDIOC_ENUM_FMT

Queries the image formats supported by the camera. The caller passes a v4l2_fmtdesc structure as the output parameter. For a device that supports several image formats, set v4l2_fmtdesc->index, call the ioctl, increment the index and repeat until it returns EINVAL. After each successful call, v4l2_fmtdesc->pixelformat names a format the device supports (a minimal enumeration sketch follows the list); pixelformat can be:

  • V4L2_PIX_FMT_MJPEG
  • V4L2_PIX_FMT_JPEG
  • V4L2_PIX_FMT_YUYV
  • V4L2_PIX_FMT_YVYU, etc.
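
A minimal enumeration loop, with names that are illustrative rather than taken from the HAL, might look like:

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

// Print every pixel format the capture device reports.
void list_formats(int fd)
{
    struct v4l2_fmtdesc fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    // Increment index until the driver answers EINVAL
    for (fmt.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++) {
        printf("format %u: %.4s (%s)\n", fmt.index,
               (const char*)&fmt.pixelformat, (const char*)fmt.description);
    }
}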

VIDIOC_ENUM_FRAMESIZES

After obtaining an image format, you still need to query which resolutions the device supports for that format, which is what this ioctl does. The caller passes a v4l2_frmsizeenum structure with v4l2_frmsizeenum->pixel_format set to the format being queried and v4l2_frmsizeenum->index set to 0.

After a successful call, v4l2_frmsizeenum->type can be one of three values:

  1. V4L2_FRMSIZE_TYPE_DISCRETE: keep incrementing v4l2_frmsizeenum->index and calling the ioctl until it returns EINVAL to enumerate every resolution supported for this format. For each entry, v4l2_frmsizeenum->discrete.width and discrete.height give the supported width and height.
  2. V4L2_FRMSIZE_TYPE_STEPWISE: only v4l2_frmsizeenum->stepwise is valid, and the ioctl must not be called again with other index values.
  3. V4L2_FRMSIZE_TYPE_CONTINUOUS: a special case of STEPWISE; again only stepwise is valid, and stepwise.step_width and stepwise.step_height are both 1.

The first case is easy to understand: it is simply the list of supported resolutions. STEPWISE means the device accepts any size between stepwise.min_width/min_height and stepwise.max_width/max_height in increments of stepwise.step_width/step_height, and CONTINUOUS is the special case where that step is 1, i.e. any size inside the range is accepted.

VIDIOC_ENUM_FRAMEINTERVALS

Once the image format and resolution are known, you can also query which frame rates (fps) the camera supports for that combination. The caller passes a v4l2_frmivalenum structure with index=0, pixel_format, width and height filled in.

After the call, v4l2_frmivalenum.type must be checked in the same way, and again there are DISCRETE, STEPWISE and CONTINUOUS cases. Note that what is enumerated is the frame interval (e.g. 1/30 of a second), the reciprocal of the frame rate.
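
A sketch of walking the DISCRETE frame intervals for one format and size (again illustrative, not HAL code):

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

// List the discrete frame intervals (and the equivalent fps) for one format/size.
void list_fps(int fd, unsigned pixfmt, unsigned w, unsigned h)
{
    struct v4l2_frmivalenum fival;
    memset(&fival, 0, sizeof(fival));
    fival.pixel_format = pixfmt;
    fival.width = w;
    fival.height = h;

    for (fival.index = 0; ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &fival) == 0; fival.index++) {
        if (fival.type != V4L2_FRMIVAL_TYPE_DISCRETE)
            break;  // STEPWISE/CONTINUOUS: only the first entry carries the range
        // The interval is a fraction of a second; the frame rate is its reciprocal
        printf("%ux%u: %u/%u s per frame (~%u fps)\n", w, h,
               fival.discrete.numerator, fival.discrete.denominator,
               fival.discrete.denominator / fival.discrete.numerator);
    }
}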

VIDIOC_TRY_FMT/VIDIOC_S_FMT/VIDIOC_G_FMT

These three ioctls set and get the image format. The difference between TRY_FMT and S_FMT is that the former does not change the driver's state.

The usual pattern for setting the image format is to fetch the current format with G_FMT, modify the fields of interest, and then issue S_FMT (or TRY_FMT to probe without committing).
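
For example, a small sketch of the get-modify-set pattern (the 640x480 YUYV values are just placeholders):

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstring>

// Switch the capture format to 640x480 YUYV using G_FMT followed by S_FMT.
int set_yuyv_640x480(int fd)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    // Read the current format first ...
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
        return -1;

    // ... change only the fields we care about ...
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;

    // ... and commit. The driver may adjust width/height, so check the result.
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;
    return (fmt.fmt.pix.width == 640 && fmt.fmt.pix.height == 480) ? 0 : 1;
}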

VIDIOC_S_PARM/VIDIOC_G_PARM

Sets and gets streaming I/O parameters, such as the capture frame rate. The caller passes a v4l2_streamparm structure.

VIDIOC_S_JPEGCOMP/VIDIOC_G_JPEGCOMP

Sets and gets JPEG-related parameters such as the compression quality.

VIDIOC_REQBUFS

To exchange image data between the user program and the kernel device, a block of memory has to be allocated. It can be allocated inside the kernel device and mapped into user space with mmap, or allocated in user space, in which case the device runs in user-pointer I/O mode. Both cases are initialized with this ioctl. The caller passes a v4l2_requestbuffers structure with type, memory and count filled in. The ioctl may be called repeatedly to change the parameters; setting count to 0 frees all the buffers.

VIDIOC_QUERYBUF

After VIDIOC_REQBUFS, this ioctl can be used at any time to query the current state of a buffer. The caller passes a v4l2_buffer structure with type and index set; valid index values are [0, count-1], where count is the value returned by VIDIOC_REQBUFS.

VIDIOC_QBUF/VIDIOC_DQBUF

Even after VIDIOC_REQBUFS has allocated the buffers, the V4L2 driver cannot use them yet: VIDIOC_QBUF pushes one of the allocated buffers onto the driver's incoming queue, and VIDIOC_DQBUF pops a buffer containing one frame of data. For a CAPTURE device such as a camera, what gets queued is an empty buffer; once the camera has filled it with captured data, a buffer holding a valid image frame can be dequeued. If the camera has not finished filling a buffer yet, VIDIOC_DQBUF blocks, unless the device was opened with the O_NONBLOCK flag.

VIDIOC_STREAMON/VIDIOC_STREAMOFF

STREAMON starts the device: only after this ioctl does the camera begin capturing images and filling buffers. STREAMOFF, conversely, stops it, and any frames still inside the driver that have not been fetched with DQBUF are lost.
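
Putting the buffer ioctls together, a minimal memory-mapped capture loop could be sketched as follows (illustrative only, with almost no error recovery):

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>
#include <cstring>

#define N_BUF 4

// REQBUFS -> QUERYBUF/mmap/QBUF -> STREAMON -> DQBUF/QBUF loop -> STREAMOFF
int capture_frames(int fd, int nframes)
{
    // 1. Ask the driver for N_BUF mmap-able buffers
    struct v4l2_requestbuffers rb;
    memset(&rb, 0, sizeof(rb));
    rb.count = N_BUF;
    rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    rb.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &rb) < 0) return -1;

    // 2. Map each buffer into user space and queue it
    void* mem[N_BUF];
    for (unsigned i = 0; i < rb.count; i++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) return -1;
        mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, buf.m.offset);
        if (mem[i] == MAP_FAILED) return -1;
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
    }

    // 3. Start streaming
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) return -1;

    // 4. Dequeue a filled frame, use it, and queue the buffer again
    for (int n = 0; n < nframes; n++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) return -1;  // blocks until a frame is ready
        /* process mem[buf.index], buf.bytesused bytes */
        if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) return -1;
    }

    // 5. Stop streaming
    return ioctl(fd, VIDIOC_STREAMOFF, &type);
}

The V4L2Camera class analyzed later follows exactly this sequence: REQBUFS, QUERYBUF, mmap and QBUF happen in Init, STREAMON in StartStreaming, and the DQBUF/QBUF pair in GrabRawFrame.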

3 Pixel Encoding Formats

Since images are handled at the lowest level, quite a few pixel encoding formats appear in the camera code, so here is a brief look at the formats a camera is likely to involve.

3.1 RGB

This one is simple: a pixel consists of the three colors R, G and B, each taking 8 bits, so one pixel needs 3 bytes of storage. Some RGB encodings store certain colors with fewer bits; in RGB844, for example, green and blue each use 4 bits, so a pixel needs only 2 bytes. There is also RGBA, which adds an alpha channel and therefore needs 32 bits per pixel.

3.2 YUV

YUV likewise has three channels. Y is the luminance, a weighted combination of the R, G and B components; U and V are the chrominance channels, the blue-difference and the red-difference respectively. Y normally takes 8 bits, while U and V can be subsampled, which gives rise to the YUV444, YUV420, YUV411 family of encodings. The three digits in the name describe the sampling ratio of the three channels: YUV444 samples them 1:1:1, so with 8 bits per channel a pixel takes 24 bits. YUV420 does not mean that V is dropped completely; rather, one row is sampled 4:1:0 and the next 4:0:1, so U and V each end up with a quarter of the samples of Y.

Android's camera preview uses YUV420sp by default. YUV420 comes in two flavors, YUV420p (planar) and YUV420sp (semi-planar), which differ only in how the U and V data are laid out:

Android_Hardware_Camera_20160605_142139.png

Figure 1: Image from http://blog.csdn.net/jefry_xdz/article/details/7931018
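
To make the layout difference concrete, the following sketch computes where the chroma samples of pixel (x, y) live in a w x h frame, assuming I420 ordering for YUV420p and the common Android NV21 ordering (V before U) for YUV420sp:

#include <cstddef>

// Byte offsets of the U and V samples belonging to pixel (x, y).
struct ChromaOffsets { size_t u; size_t v; };

// YUV420p (I420): full Y plane, then the whole U plane, then the whole V plane.
ChromaOffsets i420_offsets(int w, int h, int x, int y)
{
    size_t ySize = (size_t)w * h;
    size_t cSize = ySize / 4;                        // each chroma plane is (w/2) x (h/2)
    size_t c = (size_t)(y / 2) * (w / 2) + (x / 2);  // index inside a chroma plane
    return { ySize + c, ySize + cSize + c };
}

// YUV420sp (NV21): full Y plane, then one interleaved plane of V/U pairs.
ChromaOffsets nv21_offsets(int w, int h, int x, int y)
{
    size_t ySize = (size_t)w * h;
    size_t pair = (size_t)(y / 2) * (w / 2) + (x / 2);  // index of the VU pair
    return { ySize + 2 * pair + 1, ySize + 2 * pair };  // V comes first, then U
}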

4 Android Camera

4.1 Hardware

In Android-x86 5.1, the relationships between the camera-related classes in the HAL look roughly like this:

android_camera_uml.png

SurfaceSize wraps the width and height of a surface. SurfaceDesc wraps the width, height and fps of a surface.

The V4L2Camera class wraps the V4L2 device driver and controls the V4L2 device directly through ioctls.

CameraParameters wraps the camera parameters; its flatten and unflatten methods amount to serializing and deserializing a CameraParameters object.
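
The flattened form is just a semicolon-separated string of key=value pairs; a shortened, illustrative example (not an actual dump) looks like:

preview-size=640x480;preview-format=yuv420sp;picture-size=640x480;picture-format=jpeg;jpeg-quality=85;focus-mode=fixed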

camera_device is essentially an abstract-class-like struct that declares the set of interfaces a camera must implement. CameraHardware inherits from camera_device and represents a single camera; it implements the actions Android expects from a camera, such as startPreview, and drives the camera device through a V4L2Camera object underneath. Every CameraHardware object owns one V4L2Camera object, and each instance also holds a CameraParameters object with the camera's parameters.

CameraFactory is the camera manager class. There is exactly one instance of it in the whole Android system, and it creates the individual CameraHardware instances by reading a configuration file.

CameraFactory

The CameraFactory class plays the role of the camera device manager: it determines how many cameras the machine has, their device paths, their rotation angles and their facing (front or back). Android-x86 learns how many cameras the machine has by reading a configuration file:

hardware/libcamera/CameraFactory.cpp

void CameraFactory::parseConfig(const char* configFile)
{
    ALOGD("CameraFactory::parseConfig: configFile = %s", configFile);

    FILE* config = fopen(configFile, "r");
    if (config != NULL) {
        char line[128];
        char arg1[128];
        char arg2[128];
        int  arg3;

        while (fgets(line, sizeof line, config) != NULL) {
            int lineStart = strspn(line, " \t\n\v" );

            if (line[lineStart] == '#')
                continue;

            sscanf(line, "%s %s %d", arg1, arg2, &arg3);
            if (arg3 != 0 && arg3 != 90 && arg3 != 180 && arg3 != 270)
                arg3 = 0;

            if (strcmp(arg1, "front") == 0) {
                newCameraConfig(CAMERA_FACING_FRONT, arg2, arg3);
            } else if (strcmp(arg1, "back") == 0) {
                newCameraConfig(CAMERA_FACING_BACK, arg2, arg3);
            } else {
                ALOGD("CameraFactory::parseConfig: Unrecognized config line '%s'", line);
            }
        }
    } else {
        ALOGD("%s not found, using camera configuration defaults", CONFIG_FILE);
        if (access(DEFAULT_DEVICE_BACK, F_OK) != -1){
            ALOGD("Found device %s", DEFAULT_DEVICE_BACK);
            newCameraConfig(CAMERA_FACING_BACK, DEFAULT_DEVICE_BACK, 0);
        }
        if (access(DEFAULT_DEVICE_FRONT, F_OK) != -1){
            ALOGD("Found device %s", DEFAULT_DEVICE_FRONT);
            newCameraConfig(CAMERA_FACING_FRONT, DEFAULT_DEVICE_FRONT, 0);
        }
    }
}

The configuration file lives at /etc/camera.cfg and has the format "front/back path_to_device orientation", for example "front /dev/video0 0".
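
A hypothetical /etc/camera.cfg that the parseConfig shown above would accept might look like this (lines beginning with '#' are skipped):

# facing  device        orientation
back      /dev/video0   0
front     /dev/video1   0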

Another function worth mentioning is cameraDeviceOpen; when an app opens a camera, the camera is obtained through this function:

 1: int CameraFactory::cameraDeviceOpen(const hw_module_t* module,int camera_id, hw_device_t** device)
 2: {
 3:     ALOGD("CameraFactory::cameraDeviceOpen: id = %d", camera_id);
 4: 
 5:     *device = NULL;
 6: 
 7:     if (!mCamera || camera_id < 0 || camera_id >= getCameraNum()) {
 8:         ALOGE("%s: Camera id %d is out of bounds (%d)",
 9:              __FUNCTION__, camera_id, getCameraNum());
10:         return -EINVAL;
11:     }
12: 
13:     if (!mCamera[camera_id]) {
14:         mCamera[camera_id] = new CameraHardware(module, mCameraDevices[camera_id]);
15:     }
16:     return mCamera[camera_id]->connectCamera(device);
17: }

Line 13 shows that by the time the Android system has booted, the CameraFactory object has already been constructed and has learned from the configuration file how many cameras the machine has and what their device paths are, but the CameraHardware objects are not created then; they are created only when the corresponding camera is opened for the first time.

There is exactly one CameraFactory instance in the whole Android system: gCameraFactory, defined in CameraFactory.cpp. camera_module_t holds several function pointers that point at static member functions of CameraFactory, so calling through these pointers in effect invokes the corresponding methods of the gCameraFactory object:

hardware/libcamera/CameraHal.cpp

camera_module_t HAL_MODULE_INFO_SYM = {
    common: {
         tag:           HARDWARE_MODULE_TAG,
         version_major: 1,
         version_minor: 0,
         id:            CAMERA_HARDWARE_MODULE_ID,
         name:          "Camera Module",
         author:        "The Android Open Source Project",
         methods:       &android::CameraFactory::mCameraModuleMethods,
         dso:           NULL,
         reserved:      {0},
    },
    get_number_of_cameras:  android::CameraFactory::get_number_of_cameras,
    get_camera_info:        android::CameraFactory::get_camera_info,
};

The code above defines a camera_module_t; the corresponding functions are defined as follows:

hardware/libcamera/CameraFactory.cpp

int CameraFactory2::device_open(const hw_module_t* module,
                                       const char* name,
                                       hw_device_t** device)
{
    ALOGD("CameraFactory2::device_open: name = %s", name);

    /*
     * Simply verify the parameters, and dispatch the call inside the
     * CameraFactory instance.
     */

    if (module != &HAL_MODULE_INFO_SYM.common) {
        ALOGE("%s: Invalid module %p expected %p",
                __FUNCTION__, module, &HAL_MODULE_INFO_SYM.common);
        return -EINVAL;
    }
    if (name == NULL) {
        ALOGE("%s: NULL name is not expected here", __FUNCTION__);
        return -EINVAL;
    }

    int camera_id = atoi(name);
    return gCameraFactory.cameraDeviceOpen(module, camera_id, device);
}

int CameraFactory2::get_number_of_cameras(void)
{
    ALOGD("CameraFactory2::get_number_of_cameras");
    return gCameraFactory.getCameraNum();
}

int CameraFactory2::get_camera_info(int camera_id,
                                           struct camera_info* info)
{
    ALOGD("CameraFactory2::get_camera_info");
    return gCameraFactory.getCameraInfo(camera_id, info);
}

camera_device

camera_device is also typedef'ed as camera_device_t, which brings us to the HAL extension convention. The Android HAL defines three data types, struct hw_module_t, struct hw_module_methods_t and struct hw_device_t, representing the module type, the module methods and the device type. To extend the HAL with a new kind of device, these three data structures must be implemented; for a camera this means defining camera_module_t and camera_device_t and filling in the function pointers of hw_module_methods_t, which contains just one function, open, that effectively initializes the module. The HAL further requires that the first member of camera_module_t be a hw_module_t and the first member of camera_device_t be a hw_device_t; the remaining members are up to the module.

A more detailed explanation of the underlying mechanism can be found here.

In Android-x86, camera_device_t is defined in hardware/libhardware/include/hardware/hardware.h:

hardware/libhardware/include/hardware/hardware.h

typedef struct camera_device {
    hw_device_t common;
    camera_device_ops_t *ops;
    void *priv;
} camera_device_t;

camera_device_ops_t is the set of function interfaces defined by the camera module itself; it is declared in the same file and is too long to quote here.

camera_module_t lives in camera_common.h in the same directory:

hardware/libhardware/include/hardware/camera_common.h

typedef struct camera_module {
    hw_module_t common;
    int (*get_number_of_cameras)(void);
    int (*get_camera_info)(int camera_id, struct camera_info *info);
    int (*set_callbacks)(const camera_module_callbacks_t *callbacks);
    void (*get_vendor_tag_ops)(vendor_tag_ops_t* ops);
    int (*open_legacy)(const struct hw_module_t* module, const char* id,
            uint32_t halVersion, struct hw_device_t** device);

    /* reserved for future use */
    void* reserved[7];
} camera_module_t;

When framework code calls into the HAL, it obtains a hw_device_t through hw_module_t->methods->open and casts it to camera_device_t, after which the camera functions in camera_device_t->ops can be called. For the Android-x86 camera, those ops function pointers are filled in by the CameraHardware class, which inherits from camera_device.
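
A hedged sketch of that calling pattern as seen from the caller's side; the camera id string, error handling and the chosen op are illustrative:

#include <hardware/hardware.h>
#include <hardware/camera.h>
#include <cstdio>

// Open camera "0" through the HAL module and invoke one of its ops.
int open_camera0()
{
    const hw_module_t* module = NULL;
    if (hw_get_module(CAMERA_HARDWARE_MODULE_ID, &module) != 0)
        return -1;

    const camera_module_t* cam_module =
        reinterpret_cast<const camera_module_t*>(module);
    printf("%d cameras\n", cam_module->get_number_of_cameras());

    // methods->open() yields a hw_device_t; by the HAL convention it is the
    // first member of camera_device_t, so the cast below is legitimate.
    hw_device_t* device = NULL;
    if (module->methods->open(module, "0", &device) != 0)
        return -1;

    camera_device_t* cam = reinterpret_cast<camera_device_t*>(device);
    cam->ops->start_preview(cam);   // one of the camera_device_ops_t entries
    return 0;
}

In the real system this open call is issued by CameraService rather than by application code, but it relies on the same first-member convention.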

CameraHardware

The many interfaces of CameraHardware exist mainly to implement three actions: preview, recording and still capture; most of the remaining functions are preparation work such as setting parameters. This subsection walks through the code flow using preview as the example.

First, the CameraHardware object's parameters are initialized in initDefaultParameters. This function calls V4L2Camera's getBestPreviewFmt, getBestPictureFmt, getAvailableSizes and getAvailableFps to obtain the default preview format, the default picture format, and the resolutions and frame rates the camera supports:

hardware/libcamera/CameraHardware.cpp

int pw = MIN_WIDTH;
int ph = MIN_HEIGHT;
int pfps = 30;
int fw = MIN_WIDTH;
int fh = MIN_HEIGHT;
SortedVector<SurfaceSize> avSizes;
SortedVector<int> avFps;

if (camera.Open(mVideoDevice) != NO_ERROR) {
    ALOGE("cannot open device.");
} else {

    // Get the default preview format
    pw = camera.getBestPreviewFmt().getWidth();
    ph = camera.getBestPreviewFmt().getHeight();
    pfps = camera.getBestPreviewFmt().getFps();

    // Get the default picture format
    fw = camera.getBestPictureFmt().getWidth();
    fh = camera.getBestPictureFmt().getHeight();

    // Get all the available sizes
    avSizes = camera.getAvailableSizes();

    // Add some sizes that some specific apps expect to find:
    //  GTalk expects 320x200
    //  Fring expects 240x160
    // And also add standard resolutions found in low end cameras, as
    //  android apps could be expecting to find them
    // The V4LCamera handles those resolutions by choosing the next
    //  larger one and cropping the captured frames to the requested size

    avSizes.add(SurfaceSize(480,320)); // HVGA
    avSizes.add(SurfaceSize(432,320)); // 1.35-to-1, for photos. (Rounded up from 1.3333 to 1)
    avSizes.add(SurfaceSize(352,288)); // CIF
    avSizes.add(SurfaceSize(320,240)); // QVGA
    avSizes.add(SurfaceSize(320,200));
    avSizes.add(SurfaceSize(240,160)); // SQVGA
    avSizes.add(SurfaceSize(176,144)); // QCIF

    // Get all the available Fps
    avFps = camera.getAvailableFps();
}

These values are then converted to text and stored in the CameraParameters object:

hardware/libcamera/CameraHardware.cpp

    // Antibanding
    p.set(CameraParameters::KEY_SUPPORTED_ANTIBANDING,"auto");
    p.set(CameraParameters::KEY_ANTIBANDING,"auto");

    // Effects
    p.set(CameraParameters::KEY_SUPPORTED_EFFECTS,"none"); // "none,mono,sepia,negative,solarize"
    p.set(CameraParameters::KEY_EFFECT,"none");

    // Flash modes
    p.set(CameraParameters::KEY_SUPPORTED_FLASH_MODES,"off");
    p.set(CameraParameters::KEY_FLASH_MODE,"off");

    // Focus modes
    p.set(CameraParameters::KEY_SUPPORTED_FOCUS_MODES,"fixed");
    p.set(CameraParameters::KEY_FOCUS_MODE,"fixed");

#if 0
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT,0);
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY,75);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES,"0x0");
    p.set("jpeg-thumbnail-size","0x0");
    p.set(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH,0);
#endif

    // Picture - Only JPEG supported
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_FORMATS,CameraParameters::PIXEL_FORMAT_JPEG); // ONLY jpeg
    p.setPictureFormat(CameraParameters::PIXEL_FORMAT_JPEG);
    p.set(CameraParameters::KEY_SUPPORTED_PICTURE_SIZES, szs);
    p.setPictureSize(fw,fh);
    p.set(CameraParameters::KEY_JPEG_QUALITY, 85);

    // Preview - Supporting yuv422i-yuyv,yuv422sp,yuv420sp, defaulting to yuv420sp, as that is the android Defacto default
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FORMATS,"yuv422i-yuyv,yuv422sp,yuv420sp,yuv420p"); // All supported preview formats
    p.setPreviewFormat(CameraParameters::PIXEL_FORMAT_YUV422SP); // For compatibility sake ... Default to the android standard
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FPS_RANGE, fpsranges);
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_FRAME_RATES, fps);
    p.setPreviewFrameRate( pfps );
    p.set(CameraParameters::KEY_SUPPORTED_PREVIEW_SIZES, szs);
    p.setPreviewSize(pw,ph);

    // Video - Supporting yuv422i-yuyv,yuv422sp,yuv420sp and defaulting to yuv420p
    p.set("video-size-values"/*CameraParameters::KEY_SUPPORTED_VIDEO_SIZES*/, szs);
    p.setVideoSize(pw,ph);
    p.set(CameraParameters::KEY_VIDEO_FRAME_FORMAT, CameraParameters::PIXEL_FORMAT_YUV420P);
    p.set("preferred-preview-size-for-video", "640x480");

    // supported rotations
    p.set("rotation-values","0");
    p.set(CameraParameters::KEY_ROTATION,"0");

    // scenes modes
    p.set(CameraParameters::KEY_SUPPORTED_SCENE_MODES,"auto");
    p.set(CameraParameters::KEY_SCENE_MODE,"auto");

    // white balance
    p.set(CameraParameters::KEY_SUPPORTED_WHITE_BALANCE,"auto");
    p.set(CameraParameters::KEY_WHITE_BALANCE,"auto");

    // zoom
    p.set(CameraParameters::KEY_SMOOTH_ZOOM_SUPPORTED,"false");
    p.set("max-video-continuous-zoom", 0 );
    p.set(CameraParameters::KEY_ZOOM, "0");
    p.set(CameraParameters::KEY_MAX_ZOOM, "100");
    p.set(CameraParameters::KEY_ZOOM_RATIOS, "100");
    p.set(CameraParameters::KEY_ZOOM_SUPPORTED, "false");

    // missing parameters for Camera2
    p.set(CameraParameters::KEY_FOCAL_LENGTH, 4.31);
    p.set(CameraParameters::KEY_HORIZONTAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_VERTICAL_VIEW_ANGLE, 90);
    p.set(CameraParameters::KEY_SUPPORTED_JPEG_THUMBNAIL_SIZES, "640x480,0x0");

Once the preparation is done, CameraHardware::startPreview is called. The function is only three lines long: it takes a lock and then calls startPreviewLocked, which does all of the preview work.

hardware/libcamera/CameraHardware.cpp

status_t CameraHardware::startPreviewLocked()
{
    ALOGD("CameraHardware::startPreviewLocked");

    // Preview runs on a separate thread; these lines check whether it is already running. Normally this branch is not taken.
    if (mPreviewThread != 0) {
        ALOGD("CameraHardware::startPreviewLocked: preview already running");
        return NO_ERROR;
    }

    // Get the preview width and height from CameraParameters.
    int width, height;
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        mParameters.getVideoSize(&width, &height);
    } else {
        mParameters.getPreviewSize(&width, &height);
    }

    // Get the preview fps from CameraParameters
    int fps = mParameters.getPreviewFrameRate();
    ALOGD("CameraHardware::startPreviewLocked: Open, %dx%d", width, height);

    // Open the camera device via V4L2Camera::Open
    status_t ret = camera.Open(mVideoDevice);
    if (ret != NO_ERROR) {
        ALOGE("Failed to initialize Camera");
        return ret;
    }
    ALOGD("CameraHardware::startPreviewLocked: Init");

    // Initialize the camera device via V4L2Camera::Init
    ret = camera.Init(width, height, fps);
    if (ret != NO_ERROR) {
        ALOGE("Failed to setup streaming");
        return ret;
    }

    // The requested preview size may not be supported by the camera; the size actually used is obtained with the call below.
    /* Retrieve the real size being used */
    camera.getSize(width, height);
    ALOGD("CameraHardware::startPreviewLocked: effective size: %dx%d",width, height);

    // Store the size actually in use
    // If we are recording, use the recording video size instead of the preview size
    if (mRecordingEnabled && mMsgEnabled & CAMERA_MSG_VIDEO_FRAME) {
        /* Store it as the video size to use */
        mParameters.setVideoSize(width, height);
    } else {
        /* Store it as the preview size to use */
        mParameters.setPreviewSize(width, height);
    }

    // ???
    /* And reinit the memory heaps to reflect the real used size if needed */
    initHeapLocked();
    ALOGD("CameraHardware::startPreviewLocked: StartStreaming");

    // Tell the camera device to start capturing via V4L2Camera::StartStreaming
    ret = camera.StartStreaming();
    if (ret != NO_ERROR) {
        ALOGE("Failed to start streaming");
        return ret;
    }

    // Initialize the preview window
    // setup the preview window geometry in order to use it to zoom the image
    if (mWin != 0) {
        ALOGD("CameraHardware::setPreviewWindow - Negotiating preview format");
        NegotiatePreviewFormat(mWin);
    }

    ALOGD("CameraHardware::startPreviewLocked: starting PreviewThread");

    // Start a thread to handle the preview work
    mPreviewThread = new PreviewThread(this);

    ALOGD("CameraHardware::startPreviewLocked: O - this:0x%p",this);

    return NO_ERROR;
}

Now for PreviewThread. The class is trivial: it just calls CameraHardware's previewThread method, which computes a wait time from the fps, calls V4L2Camera's GrabRawFrame to grab an image from the camera device, converts it to a supported image format, and finally hands it to the preview window for display.
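
Its rough shape can be sketched as follows; apart from the grab step (which corresponds to V4L2Camera::GrabRawFrame), the names below are illustrative and not the actual CameraHardware implementation:

#include <unistd.h>
#include <vector>
#include <cstdint>

// Illustrative shape of the preview loop: grab a frame, convert/display it, pace to the fps.
void previewLoop(int fps, size_t frameSize,
                 void (*grabFrame)(void* buf, int maxSize),         // stands in for V4L2Camera::GrabRawFrame
                 void (*display)(const uint8_t* yuyv, size_t len),  // convert + hand to the preview window
                 volatile bool& running)
{
    const useconds_t frameInterval = 1000000 / fps;  // time budget per frame
    std::vector<uint8_t> raw(frameSize);             // holds one YUYV frame

    while (running) {
        grabFrame(raw.data(), (int)raw.size());      // DQBUF + conversion to YUYV
        display(raw.data(), raw.size());             // convert to the preview format and show it
        usleep(frameInterval);                       // crude pacing to honor the fps
    }
}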

V4L2Camera

The V4L2Camera class is essentially a wrapper around the V4L2 device. Below we look at its most commonly used interfaces: Open, Init, StartStreaming, GrabRawFrame, EnumFrameIntervals, EnumFrameSizes and EnumFrameFormats.

  • Open

    The logic of Open is straightforward: it obtains the camera device's file descriptor with the open system call and then issues the VIDIOC_QUERYCAP ioctl to query the device's capabilities. Since the device must be a camera, the V4L2_CAP_VIDEO_CAPTURE bit has to be set, hence the check (and likewise for V4L2_CAP_STREAMING). Finally it calls EnumFrameFormats to obtain the image formats the camera supports.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Open (const char *device)
    {
        int ret;
    
        /* Close the previous instance, if any */
        Close();
    
        memset(videoIn, 0, sizeof (struct vdIn));
    
        if ((fd = open(device, O_RDWR)) == -1) {
            ALOGE("ERROR opening V4L interface: %s", strerror(errno));
            return -1;
        }
    
        ret = ioctl (fd, VIDIOC_QUERYCAP, &videoIn->cap);
        if (ret < 0) {
            ALOGE("Error opening device: unable to query device.");
            return -1;
        }
    
        if ((videoIn->cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) == 0) {
            ALOGE("Error opening device: video capture not supported.");
            return -1;
        }
    
        if (!(videoIn->cap.capabilities & V4L2_CAP_STREAMING)) {
            ALOGE("Capture device does not support streaming i/o");
            return -1;
        }
    
        /* Enumerate all available frame formats */
        EnumFrameFormats();
    
        return ret;
    }
    
  • EnumFrameFormats

    This function obtains the camera device's image formats, resolutions and frame rates. As section 2.1 explained, the supported resolutions are only meaningful for a specific image format, and the fps in turn is only meaningful for a specific format and resolution, so once this function has run, everything the camera supports (formats, resolutions and fps) is known. The function also fills in m_BestPreviewFmt and m_BestPictureFmt, which are later used as the default preview and picture formats.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameFormats()
    {
        ALOGD("V4L2Camera::EnumFrameFormats");
        struct v4l2_fmtdesc fmt;
    
        // Start with no modes
        m_AllFmts.clear();
    
        memset(&fmt, 0, sizeof(fmt));
        fmt.index = 0;
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    
        // Enumerate all the image formats the device supports
        while (ioctl(fd,VIDIOC_ENUM_FMT, &fmt) >= 0) {
            fmt.index++;
            ALOGD("{ pixelformat = '%c%c%c%c', description = '%s' }",
                    fmt.pixelformat & 0xFF, (fmt.pixelformat >> 8) & 0xFF,
                    (fmt.pixelformat >> 16) & 0xFF, (fmt.pixelformat >> 24) & 0xFF,
                    fmt.description);
    
            // Query the resolutions and fps the device supports for this format
            //enumerate frame sizes for this pixel format
            if (!EnumFrameSizes(fmt.pixelformat)) {
                ALOGE("  Unable to enumerate frame sizes.");
            }
        };
    
        // At this point all the formats, resolutions and fps are known.
    
        // Now, select the best preview format and the best PictureFormat
        m_BestPreviewFmt = SurfaceDesc();
        m_BestPictureFmt = SurfaceDesc();
    
        unsigned int i;
        for (i=0; i<m_AllFmts.size(); i++) {
            SurfaceDesc s = m_AllFmts[i];
    
            // Choose the best still-capture parameters: for pictures fps hardly matters, a large resolution is what counts,
            // so the SurfaceDesc with the largest size is assigned to m_BestPictureFmt
            // Prioritize size over everything else when taking pictures. use the
            // least fps possible, as that usually means better quality
            if ((s.getSize()  > m_BestPictureFmt.getSize()) ||
                (s.getSize() == m_BestPictureFmt.getSize() && s.getFps() < m_BestPictureFmt.getFps() )
                ) {
                m_BestPictureFmt = s;
            }
    
            // Choose the best preview parameters: for preview, fps carries more weight,
            // so the SurfaceDesc with the highest fps is assigned to m_BestPreviewFmt
            // Prioritize fps, then size when doing preview
            if ((s.getFps()  > m_BestPreviewFmt.getFps()) ||
                (s.getFps() == m_BestPreviewFmt.getFps() && s.getSize() > m_BestPreviewFmt.getSize() )
                ) {
                m_BestPreviewFmt = s;
            }
    
        }
    
        return true;
    }
    
  • EnumFrameSizes

    This function queries, for the given pixfmt, which resolutions the device supports.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameSizes(int pixfmt)
    {
        ALOGD("V4L2Camera::EnumFrameSizes: pixfmt: 0x%08x",pixfmt);
        int ret=0;
        int fsizeind = 0;
        struct v4l2_frmsizeenum fsize;
    
        // Fill in the v4l2_frmsizeenum
        memset(&fsize, 0, sizeof(fsize));
        fsize.index = 0;
        fsize.pixel_format = pixfmt;
        // Call the VIDIOC_ENUM_FRAMESIZES ioctl in a loop to enumerate all supported resolutions
        while (ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fsize) >= 0) {
            fsize.index++;
            // Handle the result according to its type
            if (fsize.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
                ALOGD("{ discrete: width = %u, height = %u }",
                    fsize.discrete.width, fsize.discrete.height);
    
                // Counts how many DISCRETE resolutions the device supports
                fsizeind++;
    
                // Go on to query the fps supported at this resolution
                if (!EnumFrameIntervals(pixfmt,fsize.discrete.width, fsize.discrete.height))
                    ALOGD("  Unable to enumerate frame intervals");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_CONTINUOUS) { // For CONTINUOUS or STEPWISE, nothing is done
                ALOGD("{ continuous: min { width = %u, height = %u } .. "
                    "max { width = %u, height = %u } }",
                    fsize.stepwise.min_width, fsize.stepwise.min_height,
                    fsize.stepwise.max_width, fsize.stepwise.max_height);
                ALOGD("  will not enumerate frame intervals.\n");
            } else if (fsize.type == V4L2_FRMSIZE_TYPE_STEPWISE) {
                ALOGD("{ stepwise: min { width = %u, height = %u } .. "
                    "max { width = %u, height = %u } / "
                    "stepsize { width = %u, height = %u } }",
                    fsize.stepwise.min_width, fsize.stepwise.min_height,
                    fsize.stepwise.max_width, fsize.stepwise.max_height,
                    fsize.stepwise.step_width, fsize.stepwise.step_height);
                ALOGD("  will not enumerate frame intervals.");
            } else {
                ALOGE("  fsize.type not supported: %d\n", fsize.type);
                ALOGE("     (Discrete: %d   Continuous: %d  Stepwise: %d)",
                    V4L2_FRMSIZE_TYPE_DISCRETE,
                    V4L2_FRMSIZE_TYPE_CONTINUOUS,
                    V4L2_FRMSIZE_TYPE_STEPWISE);
            }
        }
    
        // If the device does not report any DISCRETE resolution, try setting resolutions with VIDIOC_TRY_FMT; if a size is
        // accepted, the camera is assumed to support that resolution too
        if (fsizeind == 0) {
            /* ------ gspca doesn't enumerate frame sizes ------ */
            /*       negotiate with VIDIOC_TRY_FMT instead       */
            static const struct {
                int w,h;
            } defMode[] = {
                {800,600},
                {768,576},
                {768,480},
                {720,576},
                {720,480},
                {704,576},
                {704,480},
                {640,480},
                {352,288},
                {320,240}
            };
    
            unsigned int i;
            for (i = 0 ; i < (sizeof(defMode) / sizeof(defMode[0])); i++) {
    
                fsizeind++;
                struct v4l2_format fmt;
                fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                fmt.fmt.pix.width = defMode[i].w;
                fmt.fmt.pix.height = defMode[i].h;
                fmt.fmt.pix.pixelformat = pixfmt;
                fmt.fmt.pix.field = V4L2_FIELD_ANY;
    
                if (ioctl(fd,VIDIOC_TRY_FMT, &fmt) >= 0) {
                    ALOGD("{ ?GSPCA? : width = %u, height = %u }\n", fmt.fmt.pix.width, fmt.fmt.pix.height);
    
                    // Add the mode descriptor
                    m_AllFmts.add( SurfaceDesc( fmt.fmt.pix.width, fmt.fmt.pix.height, 25 ) );
                }
            }
        }
    
        return true;
    }
    

    As you can see, Android only really handles the DISCRETE resolution type; for CONTINUOUS and STEPWISE it just logs a message and does nothing.

  • EnumFrameIntervals

    This function uses the VIDIOC_ENUM_FRAMEINTERVALS ioctl to query the frame rates the device supports for the given image format and resolution.

    hardware/libcamera/V4L2Camera.cpp

    bool V4L2Camera::EnumFrameIntervals(int pixfmt, int width, int height)
    {
        ALOGD("V4L2Camera::EnumFrameIntervals: pixfmt: 0x%08x, w:%d, h:%d",pixfmt,width,height);
    
        struct v4l2_frmivalenum fival;
        int list_fps=0;
        // Fill in the parameters
        memset(&fival, 0, sizeof(fival));
        fival.index = 0;
        fival.pixel_format = pixfmt;
        fival.width = width;
        fival.height = height;
    
        ALOGD("\tTime interval between frame: ");
        // Call the ioctl in a loop to obtain all supported frame rates
        while (ioctl(fd,VIDIOC_ENUM_FRAMEINTERVALS, &fival) >= 0)
        {
            fival.index++;
            // Again, only DISCRETE is really handled
            if (fival.type == V4L2_FRMIVAL_TYPE_DISCRETE) {
                ALOGD("%u/%u", fival.discrete.numerator, fival.discrete.denominator);
                // Create a SurfaceDesc and add it to the member variable m_AllFmts
                m_AllFmts.add( SurfaceDesc( width, height, fival.discrete.denominator ) );
                list_fps++;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_CONTINUOUS) {
                ALOGD("{min { %u/%u } .. max { %u/%u } }",
                    fival.stepwise.min.numerator, fival.stepwise.min.numerator,
                    fival.stepwise.max.denominator, fival.stepwise.max.denominator);
                break;
            } else if (fival.type == V4L2_FRMIVAL_TYPE_STEPWISE) {
                ALOGD("{min { %u/%u } .. max { %u/%u } / "
                    "stepsize { %u/%u } }",
                    fival.stepwise.min.numerator, fival.stepwise.min.denominator,
                    fival.stepwise.max.numerator, fival.stepwise.max.denominator,
                    fival.stepwise.step.numerator, fival.stepwise.step.denominator);
                break;
            }
        }
    
        // Assume at least 1fps
        if (list_fps == 0) {
            m_AllFmts.add( SurfaceDesc( width, height, 1 ) );
        }
    
        return true;
    }
    
  • Init

    Init is the most complex method of the V4L2Camera class.

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::Init(int width, int height, int fps)
    {
        ALOGD("V4L2Camera::Init");
    
        /* Initialize the capture to the specified width and height */
        static const struct {
            int fmt;            /* PixelFormat */
            int bpp;            /* bytes per pixel */
            int isplanar;       /* If format is planar or not */
            int allowscrop;     /* If we support cropping with this pixel format */
        } pixFmtsOrder[] = {
            {V4L2_PIX_FMT_YUYV,     2,0,1},
            {V4L2_PIX_FMT_YVYU,     2,0,1},
            {V4L2_PIX_FMT_UYVY,     2,0,1},
            {V4L2_PIX_FMT_YYUV,     2,0,1},
            {V4L2_PIX_FMT_SPCA501,  2,0,0},
            {V4L2_PIX_FMT_SPCA505,  2,0,0},
            {V4L2_PIX_FMT_SPCA508,  2,0,0},
            {V4L2_PIX_FMT_YUV420,   0,1,0},
            {V4L2_PIX_FMT_YVU420,   0,1,0},
            {V4L2_PIX_FMT_NV12,     0,1,0},
            {V4L2_PIX_FMT_NV21,     0,1,0},
            {V4L2_PIX_FMT_NV16,     0,1,0},
            {V4L2_PIX_FMT_NV61,     0,1,0},
            {V4L2_PIX_FMT_Y41P,     0,0,0},
            {V4L2_PIX_FMT_SGBRG8,   0,0,0},
            {V4L2_PIX_FMT_SGRBG8,   0,0,0},
            {V4L2_PIX_FMT_SBGGR8,   0,0,0},
            {V4L2_PIX_FMT_SRGGB8,   0,0,0},
            {V4L2_PIX_FMT_BGR24,    3,0,1},
            {V4L2_PIX_FMT_RGB24,    3,0,1},
            {V4L2_PIX_FMT_MJPEG,    0,1,0},
            {V4L2_PIX_FMT_JPEG,     0,1,0},
            {V4L2_PIX_FMT_GREY,     1,0,1},
            {V4L2_PIX_FMT_Y16,      2,0,1},
        };
    
        int ret;
    
        // If no formats, break here
        if (m_AllFmts.isEmpty()) {
            ALOGE("No video formats available");
            return -1;
        }
    
        // Try to get the closest match ...
        SurfaceDesc closest;
        int closestDArea = -1;
        int closestDFps = -1;
        unsigned int i;
        int area = width * height;
        for (i = 0; i < m_AllFmts.size(); i++) {
            SurfaceDesc sd = m_AllFmts[i];
    
            // Always choose a bigger or equal surface
            if (sd.getWidth() >= width &&
                sd.getHeight() >= height) {
    
                int difArea = sd.getArea() - area;
                int difFps = my_abs(sd.getFps() - fps);
    
                ALOGD("Trying format: (%d x %d), Fps: %d [difArea:%d, difFps:%d, cDifArea:%d, cDifFps:%d]",sd.getWidth(),sd.getHeight(),sd.getFps(), difArea, difFps, closestDArea, closestDFps);
    
                // Among the resolutions the camera supports, find one whose width and height are both at least the requested
                // ones and whose area differs the least; if several qualify, pick the one whose fps differs the least.
                // The chosen SurfaceDesc is stored in the variable closest
                if (closestDArea < 0 ||
                    difArea < closestDArea ||
                    (difArea == closestDArea && difFps < closestDFps)) {
    
                    // Store approximation
                    closestDArea = difArea;
                    closestDFps = difFps;
    
                    // And the new surface descriptor
                    closest = sd;
                }
            }
        }
    
        // If no available resolution has both width and height >= the requested ones
        if (closestDArea == -1) {
            ALOGE("Size not available: (%d x %d)",width,height);
            return -1;
        }
    
        // At this point closest is the SurfaceDesc nearest to the requested parameters
        ALOGD("Selected format: (%d x %d), Fps: %d",closest.getWidth(),closest.getHeight(),closest.getFps());
    
        // If closest's width/height do not exactly match the requested ones, the captured image will have to be cropped
        // Check if we will have to crop the captured image
        bool crop = width != closest.getWidth() || height != closest.getHeight();
    
        // Walk the pixel format preference table
        // Iterate through pixel formats from best to worst
        ret = -1;
        for (i=0; i < (sizeof(pixFmtsOrder) / sizeof(pixFmtsOrder[0])); i++) {
    
            // If we will need to crop, make sure to only select formats we can crop...
            // Enter this branch only if no cropping is needed, or cropping is needed and this pixel format supports it
            if (!crop || pixFmtsOrder[i].allowscrop) {
    
                memset(&videoIn->format,0,sizeof(videoIn->format));
                videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                videoIn->format.fmt.pix.width = closest.getWidth();
                videoIn->format.fmt.pix.height = closest.getHeight();
                videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
    
                // Probe this pixel format on the camera with VIDIOC_TRY_FMT
                ret = ioctl(fd, VIDIOC_TRY_FMT, &videoIn->format);
                // Check that the call succeeded and that the driver really kept closest's dimensions
                if (ret >= 0 &&
                    videoIn->format.fmt.pix.width ==  (uint)closest.getWidth() &&
                    videoIn->format.fmt.pix.height == (uint)closest.getHeight()) {
                    break;
                }
            }
        }
        if (ret < 0) {
            ALOGE("Open: VIDIOC_TRY_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
        // Now actually set the pixel format
        /* Set the format */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->format.fmt.pix.width = closest.getWidth();
        videoIn->format.fmt.pix.height = closest.getHeight();
        videoIn->format.fmt.pix.pixelformat = pixFmtsOrder[i].fmt;
        ret = ioctl(fd, VIDIOC_S_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_S_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
    
        // Query the image format actually in use
        /* Query for the effective video format used */
        memset(&videoIn->format,0,sizeof(videoIn->format));
        videoIn->format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ret = ioctl(fd, VIDIOC_G_FMT, &videoIn->format);
        if (ret < 0) {
            ALOGE("Open: VIDIOC_G_FMT Failed: %s", strerror(errno));
            return ret;
        }
    
        /* Note VIDIOC_S_FMT may change width and height. */
    
        /* Buggy driver paranoia. */
        // Prepare the values used later for cropping
        unsigned int min = videoIn->format.fmt.pix.width * 2;
        if (videoIn->format.fmt.pix.bytesperline < min)
            videoIn->format.fmt.pix.bytesperline = min;
        min = videoIn->format.fmt.pix.bytesperline * videoIn->format.fmt.pix.height;
        if (videoIn->format.fmt.pix.sizeimage < min)
            videoIn->format.fmt.pix.sizeimage = min;
    
        /* Store the pixel formats we will use */
        videoIn->outWidth           = width;
        videoIn->outHeight          = height;
        videoIn->outFrameSize       = width * height << 1; // Calculate the expected output framesize in YUYV
        videoIn->capBytesPerPixel   = pixFmtsOrder[i].bpp;
    
        // Compute the crop origin
        /* Now calculate cropping margins, if needed, rounding to even */
        int startX = ((closest.getWidth() - width) >> 1) & (-2);
        int startY = ((closest.getHeight() - height) >> 1) & (-2);
    
        /* Avoid crashing if the mode found is smaller than the requested */
        if (startX < 0) {
            videoIn->outWidth += startX;
            startX = 0;
        }
        if (startY < 0) {
            videoIn->outHeight += startY;
            startY = 0;
        }
    
        /* Calculate the starting offset into each captured frame */
        videoIn->capCropOffset = (startX * videoIn->capBytesPerPixel) +
                (videoIn->format.fmt.pix.bytesperline * startY);
    
        ALOGI("Cropping from origin: %dx%d - size: %dx%d  (offset:%d)",
            startX,startY,
            videoIn->outWidth,videoIn->outHeight,
            videoIn->capCropOffset);
    
        /* sets video device frame rate */
        memset(&videoIn->params,0,sizeof(videoIn->params));
        videoIn->params.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->params.parm.capture.timeperframe.numerator = 1;
        videoIn->params.parm.capture.timeperframe.denominator = closest.getFps();
    
        // Set the fps
        /* Set the framerate. If it fails, it wont be fatal */
        if (ioctl(fd,VIDIOC_S_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_S_PARM error: Unable to set %d fps", closest.getFps());
        }
    
        /* Gets video device defined frame rate (not real - consider it a maximum value) */
        if (ioctl(fd,VIDIOC_G_PARM,&videoIn->params) < 0) {
            ALOGE("VIDIOC_G_PARM - Unable to get timeperframe");
        }
    
        ALOGI("Actual format: (%d x %d), Fps: %d, pixfmt: '%c%c%c%c', bytesperline: %d",
            videoIn->format.fmt.pix.width,
            videoIn->format.fmt.pix.height,
            videoIn->params.parm.capture.timeperframe.denominator,
            videoIn->format.fmt.pix.pixelformat & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 8) & 0xFF,
            (videoIn->format.fmt.pix.pixelformat >> 16) & 0xFF, (videoIn->format.fmt.pix.pixelformat >> 24) & 0xFF,
            videoIn->format.fmt.pix.bytesperline);
    
        /* Configure JPEG quality, if dealing with those formats */
        if (videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_JPEG ||
            videoIn->format.fmt.pix.pixelformat == V4L2_PIX_FMT_MJPEG) {
    
            // Set the JPEG quality to 100%
            /* Get the compression format */
            ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp);
    
            /* Set to maximum */
            videoIn->jpegcomp.quality = 100;
    
            /* Try to set it */
            if(ioctl(fd,VIDIOC_S_JPEGCOMP, &videoIn->jpegcomp) >= 0)
            {
                ALOGE("VIDIOC_S_COMP:");
                if(errno == EINVAL)
                {
                    videoIn->jpegcomp.quality = -1; //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
    
            /* gets video stream jpeg compression parameters */
            if(ioctl(fd,VIDIOC_G_JPEGCOMP, &videoIn->jpegcomp) >= 0) {
                ALOGD("VIDIOC_G_COMP:\n");
                ALOGD("    quality:      %i\n", videoIn->jpegcomp.quality);
                ALOGD("    APPn:         %i\n", videoIn->jpegcomp.APPn);
                ALOGD("    APP_len:      %i\n", videoIn->jpegcomp.APP_len);
                ALOGD("    APP_data:     %s\n", videoIn->jpegcomp.APP_data);
                ALOGD("    COM_len:      %i\n", videoIn->jpegcomp.COM_len);
                ALOGD("    COM_data:     %s\n", videoIn->jpegcomp.COM_data);
                ALOGD("    jpeg_markers: 0x%x\n", videoIn->jpegcomp.jpeg_markers);
            } else {
                ALOGE("VIDIOC_G_COMP:");
                if(errno == EINVAL) {
                    videoIn->jpegcomp.quality = -1; //not supported
                    ALOGE("   compression control not supported\n");
                }
            }
        }
    
        /* Check if camera can handle NB_BUFFER buffers */
        memset(&videoIn->rb,0,sizeof(videoIn->rb));
        videoIn->rb.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->rb.memory = V4L2_MEMORY_MMAP;
        videoIn->rb.count = NB_BUFFER; // defined in V4L2Camera.h as 4
    
        // Ask the device to allocate the buffers
        ret = ioctl(fd, VIDIOC_REQBUFS, &videoIn->rb);
        if (ret < 0) {
            ALOGE("Init: VIDIOC_REQBUFS failed: %s", strerror(errno));
            return ret;
        }
    
        for (int i = 0; i < NB_BUFFER; i++) {
    
            memset (&videoIn->buf, 0, sizeof (struct v4l2_buffer));
            videoIn->buf.index = i;
            videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            videoIn->buf.memory = V4L2_MEMORY_MMAP;
    
            ret = ioctl (fd, VIDIOC_QUERYBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: Unable to query buffer (%s)", strerror(errno));
                return ret;
            }
    
            // Map the memory just allocated by the kernel device into user space with mmap
            videoIn->mem[i] = mmap (0,
                                    videoIn->buf.length,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED,
                                    fd,
                                    videoIn->buf.m.offset);
    
            if (videoIn->mem[i] == MAP_FAILED) {
                ALOGE("Init: Unable to map buffer (%s)", strerror(errno));
                return -1;
            }
    
            ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
            if (ret < 0) {
                ALOGE("Init: VIDIOC_QBUF Failed");
                return -1;
            }
    
            nQueued++;
        }
    
        // Reserve temporary buffers, if they will be needed
        size_t tmpbuf_size=0;
        switch (videoIn->format.fmt.pix.pixelformat)
        {
            case V4L2_PIX_FMT_JPEG:
            case V4L2_PIX_FMT_MJPEG:
            case V4L2_PIX_FMT_UYVY:
            case V4L2_PIX_FMT_YVYU:
            case V4L2_PIX_FMT_YYUV:
            case V4L2_PIX_FMT_YUV420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_YVU420: // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_Y41P:   // only needs 3/2 bytes per pixel but we alloc 2 bytes per pixel
            case V4L2_PIX_FMT_NV12:
            case V4L2_PIX_FMT_NV21:
            case V4L2_PIX_FMT_NV16:
            case V4L2_PIX_FMT_NV61:
            case V4L2_PIX_FMT_SPCA501:
            case V4L2_PIX_FMT_SPCA505:
            case V4L2_PIX_FMT_SPCA508:
            case V4L2_PIX_FMT_GREY:
            case V4L2_PIX_FMT_Y16:
    
            case V4L2_PIX_FMT_YUYV:
                //  YUYV doesn't need a temp buffer but we will set it if/when
                //  video processing disable control is checked (bayer processing).
                //            (logitech cameras only)
                break;
    
            case V4L2_PIX_FMT_SGBRG8: //0
            case V4L2_PIX_FMT_SGRBG8: //1
            case V4L2_PIX_FMT_SBGGR8: //2
            case V4L2_PIX_FMT_SRGGB8: //3
                // Raw 8 bit bayer
                // when grabbing use:
                //    bayer_to_rgb24(bayer_data, RGB24_data, width, height, 0..3)
                //    rgb2yuyv(RGB24_data, pFrameBuffer, width, height)
    
                // alloc a temp buffer for converting to YUYV
                // rgb buffer for decoding bayer data
                tmpbuf_size = videoIn->format.fmt.pix.width * videoIn->format.fmt.pix.height * 3;
                if (videoIn->tmpBuffer)
                    free(videoIn->tmpBuffer);
                videoIn->tmpBuffer = (uint8_t*)calloc(1, tmpbuf_size);
                if (!videoIn->tmpBuffer) {
                    ALOGE("couldn't calloc %lu bytes of memory for frame buffer\n",
                        (unsigned long) tmpbuf_size);
                    return -ENOMEM;
                }
    
    
                break;
    
            case V4L2_PIX_FMT_RGB24: //rgb or bgr (8-8-8)
            case V4L2_PIX_FMT_BGR24:
                break;
    
            default:
                ALOGE("Should never arrive (1)- exit fatal !!\n");
                return -1;
        }
    
        return 0;
    }
    

    In summary, Init does the following:

    1. Given the requested width, height and fps, find the supported resolution and fps that are at least as large as requested and closest to it, and configure the camera to use them. Because the resolution actually used can be larger than the one requested, a cropping offset is also computed so that the extra margin can be cut off when the frames are used later.
    2. If the camera uses JPEG or MJPEG compression, set the picture quality to 100%.
    3. Ask the device to allocate buffers, map them into the user-space videoIn->mem array, and enqueue them onto the camera's incoming queue. At this point the camera is ready to capture; it is only waiting for the streamon command.
  • StartStreaming

    StartStreaming is simple: apart from issuing the STREAMON ioctl, it only sets videoIn's isStreaming flag:

    hardware/libcamera/V4L2Camera.cpp

    int V4L2Camera::StartStreaming ()
    {
        enum v4l2_buf_type type;
        int ret;
    
        if (!videoIn->isStreaming) {
            type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    
            ret = ioctl (fd, VIDIOC_STREAMON, &type);
            if (ret < 0) {
                ALOGE("StartStreaming: Unable to start capture: %s", strerror(errno));
                return ret;
            }
    
            videoIn->isStreaming = true;
        }
    
        return 0;
    }
    

    After this call the camera starts capturing.

  • GrabRawFrame

    After StartStreaming, the captured content still has to be fetched from the camera, which is what GrabRawFrame is for.

    hardware/libcamera/V4L2Camera.cpp

    void V4L2Camera::GrabRawFrame (void *frameBuffer, int maxSize)
    {
        LOG_FRAME("V4L2Camera::GrabRawFrame: frameBuffer:%p, len:%d",frameBuffer,maxSize);
        int ret;
    
        /* DQ */
        // Dequeue one frame with DQBUF
        memset(&videoIn->buf,0,sizeof(videoIn->buf));
        videoIn->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        videoIn->buf.memory = V4L2_MEMORY_MMAP;
        ret = ioctl(fd, VIDIOC_DQBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_DQBUF Failed");
            return;
        }
    
        nDequeued++;
    
        // Calculate the stride of the output image (YUYV) in bytes
        int strideOut = videoIn->outWidth << 1;
    
        // And the pointer to the start of the image
        // Init computed the crop offset; adding it here crops the frame to the size the caller passed to Init.
        // The resulting src is the start address of the (cropped) image
        uint8_t* src = (uint8_t*)videoIn->mem[videoIn->buf.index] + videoIn->capCropOffset;
    
        LOG_FRAME("V4L2Camera::GrabRawFrame - Got Raw frame (%dx%d) (buf:%d@0x%p, len:%d)",videoIn->format.fmt.pix.width,videoIn->format.fmt.pix.height,videoIn->buf.index,src,videoIn->buf.bytesused);
    
        /* Avoid crashing! - Make sure there is enough room in the output buffer! */
        if (maxSize < videoIn->outFrameSize) {
    
            ALOGE("V4L2Camera::GrabRawFrame: Insufficient space in output buffer: Required: %d, Got %d - DROPPING FRAME",videoIn->outFrameSize,maxSize);
    
        } else {
    
            // Handle each pixel format separately; in the end the image data ends up in the output parameter frameBuffer.
            switch (videoIn->format.fmt.pix.pixelformat)
            {
                case V4L2_PIX_FMT_JPEG:
                case V4L2_PIX_FMT_MJPEG:
                    if(videoIn->buf.bytesused <= HEADERFRAME1) {
                        // Prevent crash on empty image
                        ALOGE("Ignoring empty buffer ...\n");
                        break;
                    }
    
                    if (jpeg_decode((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight) < 0) {
                        ALOGE("jpeg decode errors\n");
                        break;
                    }
                    break;
    
                case V4L2_PIX_FMT_UYVY:
                    uyvy_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YVYU:
                    yvyu_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YYUV:
                    yyuv_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                 src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YUV420:
                    yuv420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YVU420:
                    yvu420_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV12:
                    nv12_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV21:
                    nv21_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV16:
                    nv16_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_NV61:
                    nv61_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_Y41P:
                    y41p_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_GREY:
                    grey_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_Y16:
                    y16_to_yuyv((uint8_t*)frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA501:
                    s501_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA505:
                    s505_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SPCA508:
                    s508_to_yuyv((uint8_t*)frameBuffer, strideOut, src, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_YUYV:
                    {
                        int h;
                        uint8_t* pdst = (uint8_t*)frameBuffer;
                        uint8_t* psrc = src;
                        int ss = videoIn->outWidth << 1;
                        for (h = 0; h < videoIn->outHeight; h++) {
                            memcpy(pdst,psrc,ss);
                            pdst += strideOut;
                            psrc += videoIn->format.fmt.pix.bytesperline;
                        }
                    }
                    break;
    
                case V4L2_PIX_FMT_SGBRG8: //0
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 0);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SGRBG8: //1
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 1);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SBGGR8: //2
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 2);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_SRGGB8: //3
                    bayer_to_rgb24 (src,(uint8_t*) videoIn->tmpBuffer, videoIn->outWidth, videoIn->outHeight, 3);
                    rgb_to_yuyv ((uint8_t*) frameBuffer, strideOut,
                                (uint8_t*)videoIn->tmpBuffer, videoIn->outWidth*3, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_RGB24:
                    rgb_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                case V4L2_PIX_FMT_BGR24:
                    bgr_to_yuyv((uint8_t*) frameBuffer, strideOut,
                                src, videoIn->format.fmt.pix.bytesperline, videoIn->outWidth, videoIn->outHeight);
                    break;
    
                default:
                    ALOGE("error grabbing: unknown format: %i\n", videoIn->format.fmt.pix.pixelformat);
                    break;
            }
    
            LOG_FRAME("V4L2Camera::GrabRawFrame - Copied frame to destination 0x%p",frameBuffer);
        }
    
        // Queue the buffer again once we are done with it, so it can be reused
        /* And Queue the buffer again */
        ret = ioctl(fd, VIDIOC_QBUF, &videoIn->buf);
        if (ret < 0) {
            ALOGE("GrabPreviewFrame: VIDIOC_QBUF Failed");
            return;
        }
    
        nQueued++;
    
        LOG_FRAME("V4L2Camera::GrabRawFrame - Queued buffer");
    
    }
    

4.2 Framework

Java SDK Layer

The Hardware analysis could be done bottom-up: first V4L2Camera, then CameraHardware, then CameraFactory. Going bottom-up through the Framework would involve far too much material, so instead we start from the camera part of the SDK. The HAL and the Framework discussed so far are C++ and form Android's lower layers, but apps are actually written against the SDK, which is Java. Java can call C++ through JNI, so let us see how the SDK does this.

First, consider a snippet that starts a camera preview:

Camera cam = Camera.open();           // Obtain a camera instance
cam.setPreviewDisplay(surfaceHolder); // Set the preview window
cam.startPreview();                   // Start the preview

The first line opens the default camera; Camera.open(1) could be used to open a different one. These functions are defined in the SDK in Camera.java; open looks like this:

frameworks/base/core/java/android/hardware/Camera.java

public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}

So open without arguments actually opens the first back-facing camera; either way, open returns a Camera object. Here we meet a familiar name, getNumberOfCameras: the HAL's camera_module_t contains, besides the mandatory hw_module_t, the function pointers get_number_of_cameras and get_camera_info, so presumably this getNumberOfCameras eventually calls get_number_of_cameras. Let us look at the function:

/**
 * Returns the number of physical cameras available on this device.
 */
public native static int getNumberOfCameras();

In Camera.java this function is only a declaration marked native, so we have to find the corresponding JNI definition.

frameworks/base/core/jni/android_hardware_Camera.cpp

static jint android_hardware_Camera_getNumberOfCameras(JNIEnv *env, jobject thiz)
{
    return Camera::getNumberOfCameras();
}

Next we look for the C++ Camera class. It already belongs to the Android framework, but getNumberOfCameras is actually defined in its parent class CameraBase:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
int CameraBase<TCam, TCamTraits>::getNumberOfCameras() {
    const sp<ICameraService> cs = getCameraService();

    if (!cs.get()) {
        // as required by the public Java APIs
        return 0;
    }
    return cs->getNumberOfCameras();
}

This simply obtains the CameraService and calls its getNumberOfCameras. Let us look at how getCameraService obtains the service:

frameworks/av/camera/CameraBase.cpp

template <typename TCam, typename TCamTraits>
const sp<ICameraService>& CameraBase<TCam, TCamTraits>::getCameraService()
{
    Mutex::Autolock _l(gLock);
    if (gCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16(kCameraServiceName));
            if (binder != 0) {
                break;
            }
            ALOGW("CameraService not published, waiting...");
            usleep(kCameraServicePollDelay);
        } while(true);
        if (gDeathNotifier == NULL) {
            gDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(gDeathNotifier);
        gCameraService = interface_cast<ICameraService>(binder);
    }
    ALOGE_IF(gCameraService == 0, "no CameraService!?");
    return gCameraService;
}

gCameraService is a singleton of type sp<ICameraService>: the first call to this function initializes it, and every later call just returns it. During initialization, defaultServiceManager is used to obtain a service manager sm, and sm->getService returns the CameraService. defaultServiceManager lives in frameworks/native/lib/binder/IServiceManager.cpp and belongs to Binder IPC, which is beyond the scope of this article; I may cover it in a separate post.

After open, the next calls are setPreviewDisplay and startPreview. Both are also native and are implemented similarly, so we only look at startPreview:

frameworks/base/core/jni/android_hardware_Camera.cpp

static void android_hardware_Camera_startPreview(JNIEnv *env, jobject thiz)
{
    ALOGV("startPreview");
    sp<Camera> camera = get_native_camera(env, thiz, NULL);
    if (camera == 0) return;

    if (camera->startPreview() != NO_ERROR) {
        jniThrowRuntimeException(env, "startPreview failed");
        return;
    }
}

This code first obtains a Camera object and then calls startPreview on it. get_native_camera is implemented as follows:

sp<Camera> get_native_camera(JNIEnv *env, jobject thiz, JNICameraContext** pContext)
{
    sp<Camera> camera;
    Mutex::Autolock _l(sLock);
    JNICameraContext* context = reinterpret_cast<JNICameraContext*>(env->GetLongField(thiz, fields.context));
    if (context != NULL) {
        camera = context->getCamera();
    }
    ALOGV("get_native_camera: context=%p, camera=%p", context, camera.get());
    if (camera == 0) {
        jniThrowRuntimeException(env,
                "Camera is being used after Camera.release() was called");
    }

    if (pContext != NULL) *pContext = context;
    return camera;
}

This function uses env->GetLongField to retrieve a pointer to a JNICameraContext object, from which getCamera yields the Camera object. That JNICameraContext pointer is stored in native_setup:

 1: static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
 2:     jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
 3: {
 4:     // Convert jstring to String16
 5:     const char16_t *rawClientName = env->GetStringChars(clientPackageName, NULL);
 6:     jsize rawClientNameLen = env->GetStringLength(clientPackageName);
 7:     String16 clientName(rawClientName, rawClientNameLen);
 8:     env->ReleaseStringChars(clientPackageName, rawClientName);
 9: 
10:     sp<Camera> camera;
11:     if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
12:         // Default path: hal version is don't care, do normal camera connect.
13:         camera = Camera::connect(cameraId, clientName,
14:                 Camera::USE_CALLING_UID);
15:     } else {
16:         jint status = Camera::connectLegacy(cameraId, halVersion, clientName,
17:                 Camera::USE_CALLING_UID, camera);
18:         if (status != NO_ERROR) {
19:             return status;
20:         }
21:     }
22: 
23:     if (camera == NULL) {
24:         return -EACCES;
25:     }
26: 
27:     // make sure camera hardware is alive
28:     if (camera->getStatus() != NO_ERROR) {
29:         return NO_INIT;
30:     }
31: 
32:     jclass clazz = env->GetObjectClass(thiz);
33:     if (clazz == NULL) {
34:         // This should never happen
35:         jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
36:         return INVALID_OPERATION;
37:     }
38: 
39:     // We use a weak reference so the Camera object can be garbage collected.
40:     // The reference is only used as a proxy for callbacks.
41:     sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
42:     context->incStrong((void*)android_hardware_Camera_native_setup);
43:     camera->setListener(context);
44: 
45:     // save context in opaque field
46:     env->SetLongField(thiz, fields.context, (jlong)context.get());
47:     return NO_ERROR;
48: }

Note line 13: a Camera object is obtained through Camera::connect, and this is where we finally cross from the SDK layer into the Framework layer.

class Camera

Under frameworks/av/camera/ there is a Camera.cpp defining a Camera class. As the previous subsection showed, the SDK layer talks to this class directly through JNI and thereby enters the Framework, so this class can be regarded as the entry point into the Framework layer.
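
As a rough sketch (simplified, not the verbatim header), its declaration looks something like this; the two member functions shown are the ones we have already met:

class Camera : public CameraBase<Camera>,
               public BnCameraClient
{
public:
    static sp<Camera> connect(int cameraId, const String16& clientPackageName,
                              int clientUid);
    status_t startPreview();
    // ... many more members omitted
};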

The Camera class multiply inherits from CameraBase and BnCameraClient, and CameraBase is a template:

frameworks/av/include/camera/CameraBase.h

template <typename TCam>
struct CameraTraits {
};

template <typename TCam, typename TCamTraits = CameraTraits<TCam> >
class CameraBase : public IBinder::DeathRecipient
{
public:
    typedef typename TCamTraits::TCamListener       TCamListener;
    typedef typename TCamTraits::TCamUser           TCamUser;
    typedef typename TCamTraits::TCamCallbacks      TCamCallbacks;
    typedef typename TCamTraits::TCamConnectService TCamConnectService;
};

Besides plain templates this also relies on template specialization. When Camera inherits from CameraBase, TCam is Camera and TCamTraits defaults to CameraTraits<Camera>; however, the specialization that is actually used is not the empty primary CameraTraits template in CameraBase.h but the explicit specialization defined in Camera.h. Without it CameraTraits would be empty and there would be no TCamTraits::TCamListener and friends:

frameworks/av/include/camera/Camera.h

template <>
struct CameraTraits<Camera>
{
    typedef CameraListener        TCamListener;
    typedef ICamera               TCamUser;
    typedef ICameraClient         TCamCallbacks;
    typedef status_t (ICameraService::*TCamConnectService)(const sp<ICameraClient>&,
                                                           int, const String16&, int,
                                                           /*out*/
                                                           sp<ICamera>&);
    static TCamConnectService     fnConnectService;
};
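
How are these traits consumed? fnConnectService, for example, is a pointer to the ICameraService member function that performs the actual connection. Below is a simplified sketch, not the verbatim source, of how a CameraBase::connect built on these traits can call it; the member name mCamera and the exact argument list are assumptions, and error handling is omitted:

template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId,
        const String16& clientPackageName, int clientUid)
{
    sp<TCam> c = new TCam(cameraId);
    sp<TCamCallbacks> cl = c;                      // client-side callback interface
    const sp<ICameraService>& cs = getCameraService();
    if (cs != 0) {
        // Invoke the ICameraService member function selected by the traits,
        // storing the resulting ICamera proxy in the CameraBase object.
        TCamConnectService fnConnectService = TCamTraits::fnConnectService;
        (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid,
                                      /*out*/ c->mCamera);
    }
    return c;
}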

mediaserver

mediaserver is a standalone process with its own main function; it is started as a daemon when the system boots. mediaserver manages the multimedia-related services that Android apps rely on, such as audio, video playback and the camera, and talks to apps through Android's binder mechanism.

frameworks/av/media/mediaserver/main_mediaserver.cpp

int main(int argc __unused, char** argv)
{
    signal(SIGPIPE, SIG_IGN);
    char value[PROPERTY_VALUE_MAX];
    bool doLog = (property_get("ro.test_harness", value, "0") > 0) && (atoi(value) == 1);
    pid_t childPid;
    if (doLog && (childPid = fork()) != 0) {
        // ... omitted (the parent process becomes media.log)
    } else {
        // all other services
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
            setpgid(0, 0);                      // but if I die first, don't kill my parent
        }
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        SoundTriggerHwService::instantiate();
        registerExtensions();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}

As you can see, main calls instantiate on each of the major services. The line we care about is CameraService::instantiate(), which initializes the camera service, so next we look at the CameraService class. Its declaration alone is long, roughly five hundred lines, and it defines inner classes such as BasicClient and Client, yet the instantiate function called in main is nowhere to be found there. Since CameraService multiply inherits from four classes, BinderService<CameraService>, BnCameraService, DeathRecipient and camera_module_callbacks_t, instantiate is presumably defined in one of them, and indeed it turns up in BinderService.h:

frameworks/native/include/binder/BinderService.h

template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false) {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(
                String16(SERVICE::getServiceName()),
                new SERVICE(), allowIsolated);
    }

    static void publishAndJoinThreadPool(bool allowIsolated = false) {
        publish(allowIsolated);
        joinThreadPool();
    }

    static void instantiate() { publish(); }

    static status_t shutdown() { return NO_ERROR; }

private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};

As you can see, instantiate calls publish, and publish first obtains a pointer to the global IServiceManager instance and registers a new service with it. Since CameraService inherits from BinderService<CameraService>, registering the service actually amounts to:

sm->addService(
    String16(CameraService::getServiceName()),
    new CameraService(), allowIsolated);
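
This is the server half of the handshake: the service is added to the service manager under the name returned by CameraService::getServiceName(). The getCameraService() function we saw earlier is the client half, looking up the very same name. A minimal sketch of that lookup, assuming the service name is "media.camera":

// Client side, cf. CameraBase::getCameraService() above
sp<IBinder> binder = defaultServiceManager()->getService(String16("media.camera"));
sp<ICameraService> cs = interface_cast<ICameraService>(binder);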

So far main has merely registered the CameraService; when is it actually put to work? That is the job of the last two lines of main:

ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

From here on we are in binder IPC territory, which is outside the scope of this article; see my other post on the topic.

[Figure: android_camera_framework_uml.png — UML overview of the Android camera framework classes]

CameraHardwareInterface

CameraService

  • BasicClient
  • Client