Qualcomm Platform Camera Driver and HAL Code Architecture

Reposted from: https://www.jianshu.com/p/b14d65f83496

This article examines the Camera driver and HAL-layer code architecture on Qualcomm platforms, with the goal of becoming familiar with the Qualcomm Camera control flow.
Platform: Qcom (Qualcomm)
HAL version: [HAL1]
Topics covered:
From the HAL layer down to the driver layer, we study the following parts of Camera:
1. The open flow
2. The preview flow
3. The takePicture flow

2. Camera Software Architecture

(Figure: camera software architecture)

As the diagram above shows, the Android Camera framework is a client/service architecture.

  • 1. There are two processes:

    The client process: this can be regarded as the AP (application) side; it consists mainly of Java code plus some native C/C++ code.

    The service process: this is the server side, written in native C/C++. It is mainly responsible for interacting with the camera driver in the Linux kernel, collecting the data the camera driver passes up, and handing it to the display system (SurfaceFlinger) for display.

The client process and the service process communicate via the Binder mechanism; the client implements each concrete feature by calling the service-side interfaces.

  • 2. At the very bottom is the kernel-layer driver, where the camera sensor and related drivers are implemented according to the V4L2 framework. It exposes the /dev/video0 node to user space; these device node files are the interface through which user space operates the device.

  • 3. Above that is the HAL layer, where Qualcomm's code implements the basic operations on /dev/video0 and plugs into Android's camera-related interfaces.

2.1 The Camera open flow

2.1.1 HAL layer

The call flow of Camera in Android is essentially Java -> JNI -> Service -> HAL -> driver layer.

frameworks/av/services/camera/libcameraservice/device1/CameraHardwareInterface.h

status_t initialize(CameraModule *module) {
···
    rc = module->open(mName.string(), (hw_device_t **)&mDevice);
···
}

Here module->open is called, which takes us into the HAL layer. So which function does it actually call?
Let's keep reading:

hardware/qcom/camera/QCamera2/HAL/wrapper/QualcommCamera.cpp

static hw_module_methods_t camera_module_methods = {
    open: camera_device_open,
};

So what is actually called is the camera_device_open function. To make the call flow clearer,
I drew a flow chart (drawing tool: ProcessOn):


(Figure: open flow)

The open flow chart is already quite clear, so let's focus on a few key functions:
in the HAL layer, module->open(mName.string(), (hw_device_t **)&mDevice) goes through several layers of calls and eventually reaches the function mm_camera_open(cam_obj);

hardware/qcom/camera/QCamera2/HAL/core/src/QCameraHWI.cpp

QCameraHardwareInterface::QCameraHardwareInterface(int cameraId, int mode)
{
···
/* Open camera stack! */
    mCameraHandle=camera_open(mCameraId, &mem_hooks);
    //Preview
    result = createPreview();
    //Record
    result = createRecord();
    //Snapshot
    result = createSnapshot();
    /* launch jpeg notify thread and raw data proc thread */
    mNotifyTh = new QCameraCmdThread();
    mDataProcTh = new QCameraCmdThread();
···
}

Analysis: new QCameraHardwareInterface() performs initialization; it mainly does the following:

  • 1. Opens the camera
  • 2. Creates the preview stream, record stream, and snapshot stream
  • 3. Creates two threads (the jpeg notify thread and the raw data proc thread)

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera.c

int32_t mm_camera_open(mm_camera_obj_t *my_obj)
{
···
    my_obj->ctrl_fd = open(dev_name, O_RDWR | O_NONBLOCK);
···

}

In the V4L2 framework, the Camera is treated as a video device and is opened with the open function. Note that here the Camera is opened in non-blocking mode (O_NONBLOCK).


1. Opening the camera device in non-blocking mode:

cameraFd = open("/dev/video0", O_RDWR | O_NONBLOCK);

2. If the camera device is opened in blocking mode instead, the code above becomes:

cameraFd = open("/dev/video0", O_RDWR);

P.S. On blocking vs. non-blocking mode:

An application can open a video device in either blocking or non-blocking mode. If the device is opened in non-blocking mode,
the driver returns whatever is already in the buffer queue (DQBUF) to the application even if no new frame has been captured yet.

Next, the call reaches the kernel-layer code.

2.1.2 Kernel layer

kernel/drivers/media/platform/msm/camera_v2/msm.c

static struct v4l2_file_operations msm_fops = {
  .owner  = THIS_MODULE,
  .open   = msm_open,
  .poll   = msm_poll,
  .release = msm_close,
  .ioctl   = video_ioctl2,
#ifdef CONFIG_COMPAT
  .compat_ioctl32 = video_ioctl2,
#endif
};

So what is actually called is the msm_open function. Let's step into it:

static int msm_open(struct file *filep)
{
···
    /* !!! only ONE open is allowed !!! */
    if (atomic_cmpxchg(&pvdev->opened, 0, 1))
        return -EBUSY;

    spin_lock_irqsave(&msm_pid_lock, flags);
    msm_pid = get_pid(task_pid(current));
    spin_unlock_irqrestore(&msm_pid_lock, flags);

    /* create event queue */
    rc = v4l2_fh_open(filep);
    if (rc < 0)
        return rc;

    spin_lock_irqsave(&msm_eventq_lock, flags);
    msm_eventq = filep->private_data;
    spin_unlock_irqrestore(&msm_eventq_lock, flags);

    /* register msm_v4l2_pm_qos_request */
    msm_pm_qos_add_request();
···
}

Analysis:
The camera is opened via v4l2_fh_open, which creates the event queue and performs some other setup.

Next, let's follow the log:
camera open log:

<3>[   12.526811] msm_camera_power_up type 1
<3>[   12.526818] msm_camera_power_up:1303 gpio set val 33
<3>[   12.528873] msm_camera_power_up index 6
<3>[   12.528885] msm_camera_power_up type 1
<3>[   12.528893] msm_camera_power_up:1303 gpio set val 33
<3>[   12.534954] msm_camera_power_up index 7
<3>[   12.534969] msm_camera_power_up type 1
<3>[   12.534977] msm_camera_power_up:1303 gpio set val 28
<3>[   12.540162] msm_camera_power_up index 8
<3>[   12.540177] msm_camera_power_up type 1
<3>[   ·
<3>[   ·
<3>[   ·
<3>[   12.562753] msm_sensor_match_id: read id: 0x5675 expected id 0x5675:
<3>[   12.562763] ov5675_back probe succeeded
<3>[   12.562771] msm_sensor_driver_create_i2c_v4l_subdev camera I2c probe succeeded
<3>[   12.564930] msm_sensor_driver_create_i2c_v4l_subdev rc 0 session_id 1
<3>[   12.565495] msm_sensor_driver_create_i2c_v4l_subdev:120
<3>[   12.565507] msm_camera_power_down:1455
<3>[   12.565514] msm_camera_power_down index 0

Analysis:
In the end, msm_camera_power_up powers the sensor up, msm_sensor_match_id reads and matches the sensor ID (0x5675), the ov5675_back probe function completes matching the device with the driver, and msm_camera_power_down powers the sensor back down.

That concludes the open flow!

2.2 The Camera preview flow

2.2.1 HAL layer

hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp

int QCamera2HardwareInterface::startPreview()
{
···
    int32_t rc = NO_ERROR;
···
    rc = startChannel(QCAMERA_CH_TYPE_PREVIEW);
···
}

Here startChannel(QCAMERA_CH_TYPE_PREVIEW) is called to start the preview stream.
Next, see the flow chart I drew (HAL layer):

(Figure: Preview flow)

Let's focus on a few key functions:
hardware/qcom/camera/QCamera2/HAL/QCameraChannel.cpp

int32_t QCameraChannel::start()
{
···
    mStreams[i]->start();//flow 1
···
    rc = m_camOps->start_channel(m_camHandle, m_handle);//flow 2
···
}

Entering QCameraChannel::start() kicks off two flows:
mStreams[i]->start() and m_camOps->start_channel(m_camHandle, m_handle);

Flow 1: mStreams[i]->start()

1. Launches a new thread via mProcTh.launch(dataProcRoutine, this)
2. The thread executes the CAMERA_CMD_TYPE_DO_NEXT_JOB branch,
3. takes data off the mDataQ queue and passes it to mDataCB, so the data is delivered back to the corresponding stream callback,
4. and finally requests more data from the kernel;

Flow 2: m_camOps->start_channel(m_camHandle, m_handle);

As the flow chart clearly shows, after a series of layered calls,
we finally land in mm_camera_channel.c,
where mm_channel_start(mm_channel_t *my_obj) is called.

Let's see what mm_channel_start does:
hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c

int32_t mm_channel_start(mm_channel_t *my_obj)
{
···
    /* callbacks need to be sent, so launch the threads */
    /* initialize the superbuf queue */
    mm_channel_superbuf_queue_init(&my_obj->bundle.superbuf_queue);
    /* launch the cb thread, which dispatches superbufs through the callback */
    snprintf(my_obj->cb_thread.threadName, THREAD_NAME_SIZE, "CAM_SuperBuf");
    mm_camera_cmd_thread_launch(&my_obj->cb_thread,
                                    mm_channel_dispatch_super_buf,
                                    (void*)my_obj);
    /* launch the cmd thread, which acts as the callback receiving data for the superbuf */
    snprintf(my_obj->cmd_thread.threadName, THREAD_NAME_SIZE, "CAM_SuperBufCB");
    mm_camera_cmd_thread_launch(&my_obj->cmd_thread,
                                mm_channel_process_stream_buf,
                                (void*)my_obj);
    /* allocate buffers for each stream */
    rc = mm_stream_fsm_fn(s_objs[i],
                              MM_STREAM_EVT_GET_BUF,
                              NULL,
                              NULL);
    /* register buffers */
    rc = mm_stream_fsm_fn(s_objs[i],
                              MM_STREAM_EVT_REG_BUF,
                              NULL,
                              NULL);
    /* start the stream */
    rc = mm_stream_fsm_fn(s_objs[i],
                              MM_STREAM_EVT_START,
                              NULL,
                              NULL);
···
}

The process includes:

  • 1. creating the cb thread and cmd thread,
  • 2. allocating buffers for each stream, and
  • 3. starting the stream.

Let's follow what happens after the stream is started:
rc = mm_stream_fsm_fn(s_objs[i], MM_STREAM_EVT_START, NULL, NULL);
calls into
rc = mm_stream_fsm_reg(my_obj, evt, in_val, out_val)

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_stream.c

int32_t mm_stream_fsm_reg(···)
{
···
    case MM_STREAM_EVT_START:
        rc = mm_stream_streamon(my_obj);
···
}

In mm_camera_stream.c this calls mm_stream_streamon(mm_stream_t *my_obj),

which issues a V4L2 request to the kernel and waits for the data callback:

int32_t mm_stream_streamon(mm_stream_t *my_obj)
{
···
    enum v4l2_buf_type buf_type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
···
    rc = ioctl(my_obj->fd, VIDIOC_STREAMON, &buf_type);  
···
}

2.2.2 Kernel layer

kernel/drivers/media/platform/msm/camera_v2/camera/camera.c
Via ioctl, after several layers of calls, we finally reach camera_v4l2_streamon():

static int camera_v4l2_streamon(struct file *filep, void *fh,
    enum v4l2_buf_type buf_type)
{
    struct v4l2_event event;
    int rc; 
    struct camera_v4l2_private *sp = fh_to_private(fh);

    rc = vb2_streamon(&sp->vb2_q, buf_type);
    camera_pack_event(filep, MSM_CAMERA_SET_PARM,
        MSM_CAMERA_PRIV_STREAM_ON, -1, &event);

    rc = msm_post_event(&event, MSM_POST_EVT_TIMEOUT);
···
    rc = camera_check_event_status(&event);
    return rc;
}

Analysis: msm_post_event sends the data request, then we wait for the data to come back.

(Figure: Preview complete flow chart)

And with that, the preview flow is complete.

2.3 The Camera takePicture flow

In fact, the takePicture flow is very similar to the preview flow!

We'll use ZSL (zero shutter lag) mode as the entry point:

2.3.1 HAL layer

hardware/qcom/camera/QCamera2/HAL/QCamera2HWI.cpp

int QCamera2HardwareInterface::takePicture()
{
···
    //flow 1
    mCameraHandle->ops->start_zsl_snapshot(mCameraHandle->camera_handle,    
        pZSLChannel->getMyHandle());
···
    //flow 2
     rc = pZSLChannel->takePicture(numSnapshots);
···
}

Entering QCamera2HardwareInterface::takePicture() triggers two flows:

  • 1. mCameraHandle->ops->start_zsl_snapshot(···);

  • 2. pZSLChannel->takePicture(numSnapshots);

Flow 1:

After several layers of calls, this eventually reaches mm_channel_start_zsl_snapshot.
hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c

int32_t mm_channel_start_zsl_snapshot(mm_channel_t *my_obj)
{
    int32_t rc = 0; 
    mm_camera_cmdcb_t* node = NULL;

    node = (mm_camera_cmdcb_t *)malloc(sizeof(mm_camera_cmdcb_t));
    if (NULL != node) {
        memset(node, 0, sizeof(mm_camera_cmdcb_t));
        node->cmd_type = MM_CAMERA_CMD_TYPE_START_ZSL;

        /* enqueue to cmd thread */
        cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);

        /* wake up cmd thread */
        cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
    } else {
        CDBG_ERROR("%s: No memory for mm_camera_node_t", __func__);
        rc = -1;
    }

    return rc;
}

Analysis:
This function mainly does two things:

  • 1. cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node); enqueues the command node
  • 2. cam_sem_post(&(my_obj->cmd_thread.cmd_sem)); wakes up the cmd thread

Here node->cmd_type = MM_CAMERA_CMD_TYPE_START_ZSL.

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_thread.c

static void *mm_camera_cmd_thread(void *data)
{
···
      case MM_CAMERA_CMD_TYPE_START_ZSL:
         cmd_thread->cb(node, cmd_thread->user_data);
···
}

Here cmd_thread->cb is the callback:
cmd_thread->cb = mm_channel_process_stream_buf. After a chain of callbacks,
we finally reach:
mm_channel_superbuf_skip(ch_obj, &ch_obj->bundle.superbuf_queue);
super_buf = (mm_channel_queue_node_t*)node->data;
The buffer is taken out and the node is removed from the list; finally, the buffer is queued back to the kernel for the next fill.

Flow 2:

Likewise, after several layers of calls, this eventually reaches mm_channel_request_super_buf.

hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_channel.c

int32_t mm_channel_request_super_buf(mm_channel_t *my_obj, uint32_t num_buf_requested)
{
    /* set pending_cnt
     * will trigger dispatching super frames if pending_cnt > 0 */
    /* send cam_sem_post to wake up cmd thread to dispatch super buffer */
    node = (mm_camera_cmdcb_t *)malloc(sizeof(mm_camera_cmdcb_t));
    if (NULL != node) {
        memset(node, 0, sizeof(mm_camera_cmdcb_t));
        node->cmd_type = MM_CAMERA_CMD_TYPE_REQ_DATA_CB;
        node->u.req_buf.num_buf_requested = num_buf_requested;

        /* enqueue to cmd thread */
        cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node);

        /* wake up cmd thread */
        cam_sem_post(&(my_obj->cmd_thread.cmd_sem));
    } else {
        CDBG_ERROR("%s: No memory for mm_camera_node_t", __func__);
        rc = -1;
    }

    return rc;
}

Analysis: this function does the same two things as flow 1:

  • 1. cam_queue_enq(&(my_obj->cmd_thread.cmd_queue), node); enqueues the command node
  • 2. cam_sem_post(&(my_obj->cmd_thread.cmd_sem)); wakes up the cmd thread
static void *mm_camera_cmd_thread(void *data)
{
···
      case MM_CAMERA_CMD_TYPE_START_ZSL:
      case MM_CAMERA_CMD_TYPE_REQ_DATA_CB:
         cmd_thread->cb(node, cmd_thread->user_data);
···
}

This is the same as flow 1, so we won't repeat it here.

2.3.2 Kernel layer

int32_t mm_camera_start_zsl_snapshot(mm_camera_obj_t *my_obj)
{
···
    rc = mm_camera_util_s_ctrl(my_obj->ctrl_fd,
             CAM_PRIV_START_ZSL_SNAPSHOT, &value);
···
}
int32_t mm_camera_util_s_ctrl(int32_t fd,  uint32_t id, int32_t *value)
{
···
    rc = ioctl(fd, VIDIOC_S_CTRL, &control);
···
}
kernel/drivers/media/v4l2-core/v4l2-subdev.c
static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
{
···
    case VIDIOC_S_CTRL:
        return v4l2_s_ctrl(vfh, vfh->ctrl_handler, arg);
···
}

Via ioctl(fd, VIDIOC_S_CTRL, &control), using the V4L2 framework, the call reaches the kernel layer,

and finally the buffer is queued back to the kernel for the next fill.

(Figure: takePicture complete flow chart)

Stay hungry, stay foolish!
