An Introduction to Camera MetaData


By reading this article, you will learn the following:

1. Overview of Camera MetaData
2. MetaData Definitions
3. Camera MetaData Code-Flow Analysis
4. CameraMetadata.cpp Code Analysis

1. Overview of Camera MetaData

In short, camera parameters used to be delivered and read back through the setParameters()/getParameters() calls. In the newer Camera API2 / HAL3 architecture, this has been replaced by Camera MetaData as the vehicle for passing parameters down and retrieving them.

Camera MetaData packs all camera parameters, as an ordered sequence of structs, into one contiguous block of memory that can be shared across processes.

With API2, the Java layer sets parameters directly and packages them into a CaptureRequest. For API1 compatibility, the old setParameters()/getParameters() calls are converted internally and ultimately travel down in MetaData form as well.

Next, let's look at how Camera MetaData is defined, and then at how it is used.

2. MetaData Definitions

The Camera MetaData definitions live mainly under the /system/media/camera/ directory.
As the Android.bp below shows, they are ultimately compiled into libcamera_metadata.so.

# system/media/camera/Android.bp
subdirs = ["tests"]

cc_library_shared {
    name: "libcamera_metadata",
    vendor_available: true,
    vndk: {
        enabled: true,
    },
    srcs: ["src/camera_metadata.c"],

    include_dirs: ["system/media/private/camera/include"],
    local_include_dirs: ["include"],
    export_include_dirs: ["include"],

    shared_libs: [
        "libcutils",
        "liblog",
    ],
}

The Camera MetaData headers and sources are spread across the following files:

Tag hierarchy and tag macro definitions: /system/media/camera/include/system/camera_metadata_tags.h
Type enums and the common C API: /system/media/camera/include/system/camera_metadata.h
Vendor tag operation tables: /system/media/camera/include/system/camera_vendor_tags.h
Tag-macro-to-string bindings: /system/media/camera/src/camera_metadata_tag_info.c
Core implementation: /system/media/camera/src/camera_metadata.c

2.1 Camera MetaData Memory Layout

camera_metadata.c contains a memory-layout diagram which shows that the Camera MetaData structure is one contiguous block of memory.

The block is divided into the following regions:

Region 1: the camera_metadata_t header struct
Region 2: reserved for future use
Region 3: the tag entry structs, TAG[0], TAG[1], ..., TAG[entry_count-1]
Region 4: space reserved for entries not yet used, (entry_capacity - entry_count) entries in size
Region 5: the metadata payloads the tags point into
Region 6: unused space in the data region, (data_capacity - data_count) bytes

# system/media/camera/src/camera_metadata.c

/**
 * A packet of metadata. This is a list of entries, each of which may point to
 * its values stored at an offset in data.
 *
 * It is assumed by the utility functions that the memory layout of the packet
 * is as follows:
 *   |-----------------------------------------------|
 *   | camera_metadata_t                             |  Region 1: the camera_metadata_t header struct
 *   |                                               |
 *   |-----------------------------------------------|
 *   | reserved for future expansion                 |  Region 2: reserved for future use
 *   |-----------------------------------------------|
 *   | camera_metadata_buffer_entry_t #0             |  Region 3: the tag entry structs,
 *   |-----------------------------------------------|            TAG[0], TAG[1], ..., TAG[entry_count-1]
 *   | ....                                          |
 *   |-----------------------------------------------|
 *   | camera_metadata_buffer_entry_t #entry_count-1 |
 *   |-----------------------------------------------|
 *   | free space for                                |  Region 4: room reserved for entries not yet used,
 *   | (entry_capacity-entry_count) entries          |            (entry_capacity - entry_count) of them
 *   |-----------------------------------------------|
 *   | start of camera_metadata.data                 |  Region 5: the metadata payloads the tags point into
 *   |                                               |
 *   |-----------------------------------------------|
 *   | free space for                                |  Region 6: unused space in the data region
 *   | (data_capacity-data_count) bytes              |
 *   |-----------------------------------------------|
 *
 * With the total length of the whole packet being camera_metadata.size bytes.
 *
 * In short, the entries and data are contiguous in memory after the metadata
 * header.
 */
#define METADATA_ALIGNMENT ((size_t) 4)
struct camera_metadata {
    metadata_size_t          size;              // total size of the metadata block in bytes
    uint32_t                 version;           // version
    uint32_t                 flags;
    metadata_size_t          entry_count;       // number of entries already added (tags currently in the block)
    metadata_size_t          entry_capacity;    // maximum number of entries the block can hold
    metadata_uptrdiff_t      entries_start;     // offset of the entry array from the start of camera_metadata
    metadata_size_t          data_count;        // bytes of the data region currently in use
    metadata_size_t          data_capacity;     // total size of the data region in bytes
    metadata_uptrdiff_t      data_start;        // offset of the data region from the start of camera_metadata
    uint32_t                 padding;           // padding to 8 bytes boundary
    metadata_vendor_id_t     vendor_id;         // vendor id
};
typedef struct camera_metadata camera_metadata_t;

Each tag is described by the entry struct below. Its fields (three 4-byte words, a 1-byte type, and 3 reserved bytes) pack to 16 bytes per entry, and entries are kept 4-byte aligned (ENTRY_ALIGNMENT).

/**
 * A datum of metadata. This corresponds to camera_metadata_entry_t::data
 * with the difference that each element is not a pointer. We need to have a
 * non-pointer type description in order to figure out the largest alignment
 * requirement for data (DATA_ALIGNMENT).
 */
#define DATA_ALIGNMENT ((size_t) 8)
typedef union camera_metadata_data {
    uint8_t u8;
    int32_t i32;
    float   f;
    int64_t i64;
    double  d;
    camera_metadata_rational_t r;
} camera_metadata_data_t;

#define ENTRY_ALIGNMENT ((size_t) 4)
typedef struct camera_metadata_buffer_entry {
    uint32_t tag;
    uint32_t count;
    union {
        uint32_t offset;
        uint8_t  value[4];
    } data;
    uint8_t  type;
    uint8_t  reserved[3];
} camera_metadata_buffer_entry_t;
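
Note the 4-byte data union inside the entry: small payloads never touch the data region at all. Below is a small sketch of my own (not from the original article) mirroring what calculate_camera_metadata_entry_data_size() in camera_metadata.c computes:

#include <stddef.h>

// Assumed behavior: a payload of 4 bytes or less is stored inline in
// entry.data.value and uses no data-region space (the real function returns 0
// for it); larger payloads live in the data region, rounded up to
// DATA_ALIGNMENT (8 bytes), and are referenced through entry.data.offset.
static size_t entry_data_size_sketch(size_t type_size, size_t count) {
    size_t payload = type_size * count;
    if (payload <= 4)
        return 0;                        // fits inline in data.value
    return (payload + 7) & ~(size_t)7;   // data-region bytes, 8-byte aligned
}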

2.2 Tag Macro Definitions: camera_metadata_tags.h

All Camera MetaData tags are defined in camera_metadata_tags.h.
As the enum below shows, the system currently defines 26 tag sections:

# system/media/camera/include/system/camera_metadata_tags.h

/* Top level hierarchy definitions for camera metadata. *_INFO sections are for
 * the static metadata that can be retrieved without opening the camera device.
 * New sections must be added right before ANDROID_SECTION_COUNT to maintain
 * existing enumerations. */
typedef enum camera_metadata_section {
    ANDROID_COLOR_CORRECTION,
    ANDROID_CONTROL,            // control data
    ANDROID_DEMOSAIC,
    ANDROID_EDGE,
    ANDROID_FLASH,
    ANDROID_FLASH_INFO,
    ANDROID_HOT_PIXEL,
    ANDROID_JPEG,
    ANDROID_LENS,
    ANDROID_LENS_INFO,
    ANDROID_NOISE_REDUCTION,
    ANDROID_QUIRKS,
    ANDROID_REQUEST,
    ANDROID_SCALER,
    ANDROID_SENSOR,
    ANDROID_SENSOR_INFO,
    ANDROID_SHADING,
    ANDROID_STATISTICS,
    ANDROID_STATISTICS_INFO,
    ANDROID_TONEMAP,
    ANDROID_LED,
    ANDROID_INFO,
    ANDROID_BLACK_LEVEL,
    ANDROID_SYNC,
    ANDROID_REPROCESS,
    ANDROID_DEPTH,
    ANDROID_SECTION_COUNT,

    VENDOR_SECTION = 0x8000
} camera_metadata_section_t;

Since each section's tags must form an ordered, contiguous range, every section is assigned a starting position in enum space: a section's first tag value is the section index shifted left by 16 bits.

/**
 * Hierarchy positions in enum space. All vendor extension tags must be
 * defined with tag >= VENDOR_SECTION_START
 */
typedef enum camera_metadata_section_start {
    ANDROID_COLOR_CORRECTION_START = ANDROID_COLOR_CORRECTION  << 16,
    ANDROID_CONTROL_START          = ANDROID_CONTROL           << 16,
    ANDROID_DEMOSAIC_START         = ANDROID_DEMOSAIC          << 16,
    ANDROID_EDGE_START             = ANDROID_EDGE              << 16,
    ANDROID_FLASH_START            = ANDROID_FLASH             << 16,
    ANDROID_FLASH_INFO_START       = ANDROID_FLASH_INFO        << 16,
    ANDROID_HOT_PIXEL_START        = ANDROID_HOT_PIXEL         << 16,
    ANDROID_JPEG_START             = ANDROID_JPEG              << 16,
    ANDROID_LENS_START             = ANDROID_LENS              << 16,
    ANDROID_LENS_INFO_START        = ANDROID_LENS_INFO         << 16,
    ANDROID_NOISE_REDUCTION_START  = ANDROID_NOISE_REDUCTION   << 16,
    ANDROID_QUIRKS_START           = ANDROID_QUIRKS            << 16,
    ANDROID_REQUEST_START          = ANDROID_REQUEST           << 16,
    ANDROID_SCALER_START           = ANDROID_SCALER            << 16,
    ANDROID_SENSOR_START           = ANDROID_SENSOR            << 16,
    ANDROID_SENSOR_INFO_START      = ANDROID_SENSOR_INFO       << 16,
    ANDROID_SHADING_START          = ANDROID_SHADING           << 16,
    ANDROID_STATISTICS_START       = ANDROID_STATISTICS        << 16,
    ANDROID_STATISTICS_INFO_START  = ANDROID_STATISTICS_INFO   << 16,
    ANDROID_TONEMAP_START          = ANDROID_TONEMAP           << 16,
    ANDROID_LED_START              = ANDROID_LED               << 16,
    ANDROID_INFO_START             = ANDROID_INFO              << 16,
    ANDROID_BLACK_LEVEL_START      = ANDROID_BLACK_LEVEL       << 16,
    ANDROID_SYNC_START             = ANDROID_SYNC              << 16,
    ANDROID_REPROCESS_START        = ANDROID_REPROCESS         << 16,
    ANDROID_DEPTH_START            = ANDROID_DEPTH             << 16,
    VENDOR_SECTION_START           = VENDOR_SECTION            << 16
} camera_metadata_section_start_t;
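
A tag value therefore encodes its section in the upper 16 bits. As a small illustration (not from the original article; ANDROID_FLASH_MODE is the third tag of the flash section, as the enum further below shows):

uint32_t tag = ANDROID_FLASH_MODE;        // == (ANDROID_FLASH << 16) + 2
uint32_t section = tag >> 16;             // == ANDROID_FLASH
uint32_t index_in_section = tag & 0xFFFF; // == 2, offset from ANDROID_FLASH_START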

Next come the tags themselves: each section's tags start at ##SECTION##_START and are terminated by a ##SECTION##_END marker.

/**
 * Main enum for defining camera metadata tags.  New entries must always go
 * before the section _END tag to preserve existing enumeration values.  In
 * addition, the name and type of the tag needs to be added to
 * system/media/camera/src/camera_metadata_tag_info.c
 */
typedef enum camera_metadata_tag {
    ANDROID_COLOR_CORRECTION_MODE =                   // enum         | public
           ANDROID_COLOR_CORRECTION_START,
    ANDROID_COLOR_CORRECTION_TRANSFORM,               // rational[]   | public
    ANDROID_COLOR_CORRECTION_GAINS,                   // float[]      | public
    ANDROID_COLOR_CORRECTION_ABERRATION_MODE,         // enum         | public
    ANDROID_COLOR_CORRECTION_AVAILABLE_ABERRATION_MODES,
                                                      // byte[]       | public
    ANDROID_COLOR_CORRECTION_END,

    ANDROID_CONTROL_AE_ANTIBANDING_MODE =             // enum         | public
            ANDROID_CONTROL_START,
    ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION,         // int32        | public
    ANDROID_CONTROL_AE_LOCK,                          // enum         | public
    ANDROID_CONTROL_AE_MODE,                          // enum         | public
    ......
    ANDROID_CONTROL_END,
    
    ANDROID_FLASH_FIRING_POWER =                      // byte         | system
            ANDROID_FLASH_START,
    ANDROID_FLASH_FIRING_TIME,                        // int64        | system
    ANDROID_FLASH_MODE,                               // enum         | public
    ANDROID_FLASH_COLOR_TEMPERATURE,                  // byte         | system
    ANDROID_FLASH_MAX_ENERGY,                         // byte         | system
    ANDROID_FLASH_STATE,                              // enum         | public
    ANDROID_FLASH_END,

2.3 Common API Definitions: camera_metadata.h

# system/media/camera/include/system/camera_metadata.h

// Two lookup arrays, one entry per section.
#include "camera_metadata_tags.h"
ANDROID_API
extern unsigned int camera_metadata_section_bounds[ANDROID_SECTION_COUNT][2];
ANDROID_API
extern const char *camera_metadata_section_names[ANDROID_SECTION_COUNT];

/**
 * A reference to a metadata entry in a buffer.
 *
 * The data union pointers point to the real data in the buffer, and can be
 * modified in-place if the count does not need to change. The count is the
 * number of entries in data of the entry's type, not a count of bytes.
 */
// The per-entry view used to read a tag's data
typedef struct camera_metadata_entry {
    size_t   index;
    uint32_t tag;
    uint8_t  type;
    size_t   count;
    union {
        uint8_t *u8;
        int32_t *i32;
        float   *f;
        int64_t *i64;
        double  *d;
        camera_metadata_rational_t *r;
    } data;
} camera_metadata_entry_t;

The header then declares the commonly used API functions:

ANDROID_API
camera_metadata_t *allocate_camera_metadata(size_t entry_capacity, size_t data_capacity);

ANDROID_API
camera_metadata_t *place_camera_metadata(void *dst, size_t dst_size, size_t entry_capacity, size_t data_capacity);

ANDROID_API
void free_camera_metadata(camera_metadata_t *metadata);

ANDROID_API
size_t calculate_camera_metadata_size(size_t entry_count, size_t data_count);

ANDROID_API
camera_metadata_t *copy_camera_metadata(void *dst, size_t dst_size, const camera_metadata_t *src);

ANDROID_API
int add_camera_metadata_entry(camera_metadata_t *dst, uint32_t tag, const void *data, size_t data_count);
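
Putting a few of these together, here is a minimal usage sketch (capacities are arbitrary; error handling trimmed):

#include <system/camera_metadata.h>

int metadata_demo() {
    // Room for 8 entries plus 64 bytes of out-of-line data.
    camera_metadata_t *m = allocate_camera_metadata(8, 64);
    if (m == NULL) return -1;

    // A single byte-typed value; payloads this small are stored inline in the entry.
    uint8_t flashMode = ANDROID_FLASH_MODE_TORCH;
    add_camera_metadata_entry(m, ANDROID_FLASH_MODE, &flashMode, 1);

    // Look the entry back up by tag.
    camera_metadata_entry_t entry;
    if (find_camera_metadata_entry(m, ANDROID_FLASH_MODE, &entry) == 0 /* OK */) {
        // entry.data.u8[0] == ANDROID_FLASH_MODE_TORCH
    }

    free_camera_metadata(m);
    return 0;
}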

2.4 Vendor-Defined Tags: camera_vendor_tags.h

This header defines the operation tables through which vendors expose their own custom metadata tags and let the framework query them.

# system/media/camera/include/system/camera_vendor_tags.h

typedef struct vendor_tag_ops vendor_tag_ops_t;
struct vendor_tag_ops {
    int (*get_tag_count)(const vendor_tag_ops_t *v);
    void (*get_all_tags)(const vendor_tag_ops_t *v, uint32_t *tag_array);
    const char *(*get_section_name)(const vendor_tag_ops_t *v, uint32_t tag);
    const char *(*get_tag_name)(const vendor_tag_ops_t *v, uint32_t tag);
    int (*get_tag_type)(const vendor_tag_ops_t *v, uint32_t tag);
    void* reserved[8];
};

struct vendor_tag_cache_ops {
    int (*get_tag_count)(metadata_vendor_id_t id);
    void (*get_all_tags)(uint32_t *tag_array, metadata_vendor_id_t id);
    const char *(*get_section_name)(uint32_t tag, metadata_vendor_id_t id);
    const char *(*get_tag_name)(uint32_t tag, metadata_vendor_id_t id);
    int (*get_tag_type)(uint32_t tag, metadata_vendor_id_t id);
    void* reserved[8];
};
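
As a rough sketch of how a vendor fills in one of these tables (everything here, kMyVendorOps, "com.example.camera", "myFeature", is hypothetical):

#include <system/camera_metadata.h>
#include <system/camera_vendor_tags.h>

// Hypothetical single-tag vendor section; the first vendor tag sits at
// VENDOR_SECTION_START == (VENDOR_SECTION << 16).
static const uint32_t kMyVendorTag = VENDOR_SECTION_START;

static int my_get_tag_count(const vendor_tag_ops_t *v) { (void)v; return 1; }
static void my_get_all_tags(const vendor_tag_ops_t *v, uint32_t *tag_array) {
    (void)v;
    tag_array[0] = kMyVendorTag;
}
static const char *my_get_section_name(const vendor_tag_ops_t *v, uint32_t tag) {
    (void)v; return (tag == kMyVendorTag) ? "com.example.camera" : NULL;
}
static const char *my_get_tag_name(const vendor_tag_ops_t *v, uint32_t tag) {
    (void)v; return (tag == kMyVendorTag) ? "myFeature" : NULL;
}
static int my_get_tag_type(const vendor_tag_ops_t *v, uint32_t tag) {
    (void)v; return (tag == kMyVendorTag) ? TYPE_BYTE : -1;
}

static const vendor_tag_ops_t kMyVendorOps = {
    .get_tag_count    = my_get_tag_count,
    .get_all_tags     = my_get_all_tags,
    .get_section_name = my_get_section_name,
    .get_tag_name     = my_get_tag_name,
    .get_tag_type     = my_get_tag_type,
};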

2.5 Binding Tag Macros to Their String Names: camera_metadata_tag_info.c

# system/media/camera/src/camera_metadata_tag_info.c

const char *camera_metadata_section_names[ANDROID_SECTION_COUNT] = {
    [ANDROID_COLOR_CORRECTION]     = "android.colorCorrection",
    [ANDROID_CONTROL]              = "android.control",
    [ANDROID_DEMOSAIC]             = "android.demosaic",
    [ANDROID_EDGE]                 = "android.edge",
    [ANDROID_FLASH]                = "android.flash",
    [ANDROID_FLASH_INFO]           = "android.flash.info",
    [ANDROID_HOT_PIXEL]            = "android.hotPixel",
    [ANDROID_JPEG]                 = "android.jpeg",
    [ANDROID_LENS]                 = "android.lens",
    [ANDROID_LENS_INFO]            = "android.lens.info",
    [ANDROID_NOISE_REDUCTION]      = "android.noiseReduction",
    [ANDROID_QUIRKS]               = "android.quirks",
    [ANDROID_REQUEST]              = "android.request",
    [ANDROID_SCALER]               = "android.scaler",
    [ANDROID_SENSOR]               = "android.sensor",
    [ANDROID_SENSOR_INFO]          = "android.sensor.info",
    [ANDROID_SHADING]              = "android.shading",
    [ANDROID_STATISTICS]           = "android.statistics",
    [ANDROID_STATISTICS_INFO]      = "android.statistics.info",
    [ANDROID_TONEMAP]              = "android.tonemap",
    [ANDROID_LED]                  = "android.led",
    [ANDROID_INFO]                 = "android.info",
    [ANDROID_BLACK_LEVEL]          = "android.blackLevel",
    [ANDROID_SYNC]                 = "android.sync",
    [ANDROID_REPROCESS]            = "android.reprocess",
    [ANDROID_DEPTH]                = "android.depth",
};

static tag_info_t android_flash[ANDROID_FLASH_END -
        ANDROID_FLASH_START] = {
    [ ANDROID_FLASH_FIRING_POWER - ANDROID_FLASH_START ] =
    { "firingPower",                   TYPE_BYTE   },
    [ ANDROID_FLASH_FIRING_TIME - ANDROID_FLASH_START ] =
    { "firingTime",                    TYPE_INT64  },
    [ ANDROID_FLASH_MODE - ANDROID_FLASH_START ] =
    { "mode",                          TYPE_BYTE   },
    [ ANDROID_FLASH_COLOR_TEMPERATURE - ANDROID_FLASH_START ] =
    { "colorTemperature",              TYPE_BYTE   },
    [ ANDROID_FLASH_MAX_ENERGY - ANDROID_FLASH_START ] =
    { "maxEnergy",                     TYPE_BYTE   },
    [ ANDROID_FLASH_STATE - ANDROID_FLASH_START ] =
    { "state",                         TYPE_BYTE   },
};
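
These tables back the name-lookup helpers declared in camera_metadata.h. A quick sketch of a typical use:

#include <system/camera_metadata.h>
#include <stdio.h>

// Prints, e.g., "android.flash.mode (type 0)" for ANDROID_FLASH_MODE.
void print_tag(uint32_t tag) {
    const char *section = get_camera_metadata_section_name(tag); // "android.flash"
    const char *name    = get_camera_metadata_tag_name(tag);     // "mode"
    int type            = get_camera_metadata_tag_type(tag);     // TYPE_BYTE == 0
    printf("%s.%s (type %d)\n", section ? section : "?", name ? name : "?", type);
}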

2.6 Core Implementation

With the memory layout, macros, and operations clear, let's step into the C code and look at the core implementation.

# system/media/camera/src/camera_metadata.c

#define LOG_TAG "camera_metadata"
#include <system/camera_metadata.h>
#include <camera_metadata_hidden.h>

// Returns a pointer to the entry array
static camera_metadata_buffer_entry_t *get_entries( const camera_metadata_t *metadata) {
    return (camera_metadata_buffer_entry_t*) ((uint8_t*)metadata + metadata->entries_start);
}
// Returns a pointer to the data region
static uint8_t *get_data(const camera_metadata_t *metadata) {
    return (uint8_t*)metadata + metadata->data_start;
}
// Allocates a new camera_metadata block
camera_metadata_t *allocate_camera_metadata(size_t entry_capacity,size_t data_capacity) {

    size_t memory_needed = calculate_camera_metadata_size(entry_capacity,data_capacity);
    void *buffer = calloc(1, memory_needed);
    camera_metadata_t *metadata = place_camera_metadata( buffer, memory_needed, entry_capacity, data_capacity);
    return metadata;
}
// Lays out a camera_metadata header in a caller-provided buffer
camera_metadata_t *place_camera_metadata(void *dst, size_t dst_size,  size_t entry_capacity, size_t data_capacity) {

    size_t memory_needed = calculate_camera_metadata_size(entry_capacity, data_capacity);
    if (memory_needed > dst_size) return NULL;

    camera_metadata_t *metadata = (camera_metadata_t*)dst;
    metadata->version = CURRENT_METADATA_VERSION;
    metadata->flags = 0;
    metadata->entry_count = 0;
    metadata->entry_capacity = entry_capacity;
    metadata->entries_start = ALIGN_TO(sizeof(camera_metadata_t), ENTRY_ALIGNMENT);
    metadata->data_count = 0;
    metadata->data_capacity = data_capacity;
    metadata->size = memory_needed;
    size_t data_unaligned = (uint8_t*)(get_entries(metadata) +  metadata->entry_capacity) - (uint8_t*)metadata;
    metadata->data_start = ALIGN_TO(data_unaligned, DATA_ALIGNMENT);
    metadata->vendor_id = CAMERA_METADATA_INVALID_VENDOR_ID;

    assert(validate_camera_metadata_structure(metadata, NULL) == OK);
    return metadata;
}
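
The size calculation used above follows directly from the layout in section 2.1. As a sketch of what calculate_camera_metadata_size() (in the same file) computes:

// A paraphrase of the real function: header, then the entry array starting
// 4-byte aligned, then the data region starting 8-byte aligned.
size_t calculate_size_sketch(size_t entry_capacity, size_t data_capacity) {
    size_t size = sizeof(camera_metadata_t);                          // region 1
    size = ALIGN_TO(size, ENTRY_ALIGNMENT);                           // entry array start
    size += sizeof(camera_metadata_buffer_entry_t) * entry_capacity;  // regions 3 + 4
    size = ALIGN_TO(size, DATA_ALIGNMENT);                            // data region start
    size += data_capacity;                                            // regions 5 + 6
    return size;
}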

void free_camera_metadata(camera_metadata_t *metadata) {
    free(metadata);
}

// Copies a metadata block into a caller-provided buffer (compacted)
camera_metadata_t* copy_camera_metadata(void *dst, size_t dst_size,const camera_metadata_t *src) {
    size_t memory_needed = get_camera_metadata_compact_size(src);
    if (memory_needed > dst_size) return NULL;

    camera_metadata_t *metadata = place_camera_metadata(dst, dst_size, src->entry_count, src->data_count);

    metadata->flags = src->flags;
    metadata->entry_count = src->entry_count;
    metadata->data_count = src->data_count;
    metadata->vendor_id = src->vendor_id;

    memcpy(get_entries(metadata), get_entries(src),  sizeof(camera_metadata_buffer_entry_t[metadata->entry_count]));
    memcpy(get_data(metadata), get_data(src),  sizeof(uint8_t[metadata->data_count]));

    assert(validate_camera_metadata_structure(metadata, NULL) == OK);
    return metadata;
}

int add_camera_metadata_entry(camera_metadata_t *dst, uint32_t tag, const void *data, size_t data_count) {
    int type = get_local_camera_metadata_tag_type(tag, dst);
    return add_camera_metadata_entry_raw(dst, tag, type, data, data_count);
}


int find_camera_metadata_entry(camera_metadata_t *src, uint32_t tag, camera_metadata_entry_t *entry) {
    if (src == NULL) return ERROR;

    uint32_t index;
    if (src->flags & FLAG_SORTED) {
        // Sorted entries, do a binary search
        camera_metadata_buffer_entry_t *search_entry = NULL;
        camera_metadata_buffer_entry_t key;
        key.tag = tag;
        search_entry = bsearch(&key, get_entries(src),  src->entry_count, 
                        sizeof(camera_metadata_buffer_entry_t), compare_entry_tags);
        if (search_entry == NULL) return NOT_FOUND;
        index = search_entry - get_entries(src);
    } else {
        // Not sorted, linear search
        camera_metadata_buffer_entry_t *search_entry = get_entries(src);
        for (index = 0; index < src->entry_count; index++, search_entry++) {
            if (search_entry->tag == tag) {
                break;
            }
        }
        if (index == src->entry_count) return NOT_FOUND;
    }
    return get_camera_metadata_entry(src, index,  entry);
}
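
Note the FLAG_SORTED check above: the binary search path is only taken once the buffer has been sorted. Callers that perform many lookups typically sort first with sort_camera_metadata(), which is part of the same public API. A short sketch, assuming metadata was allocated and populated as in the section 2.3 example:

// After sorting, entries are ordered by tag and FLAG_SORTED is set, so
// subsequent lookups take the bsearch() path instead of the linear scan.
sort_camera_metadata(metadata);

camera_metadata_entry_t e;
int res = find_camera_metadata_entry(metadata, ANDROID_CONTROL_AE_MODE, &e);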

int delete_camera_metadata_entry(camera_metadata_t *dst, size_t index) {
    camera_metadata_buffer_entry_t *entry = get_entries(dst) + index;
    size_t data_bytes = calculate_camera_metadata_entry_data_size(entry->type, entry->count);

    if (data_bytes > 0) {
        // Shift data buffer to overwrite deleted data
        uint8_t *start = get_data(dst) + entry->data.offset;
        uint8_t *end = start + data_bytes;
        size_t length = dst->data_count - entry->data.offset - data_bytes;
        memmove(start, end, length);

        // Update all entry indices to account for shift
        camera_metadata_buffer_entry_t *e = get_entries(dst);
        size_t i;
        for (i = 0; i < dst->entry_count; i++) {
            if (calculate_camera_metadata_entry_data_size( e->type, e->count) > 0 &&
                e->data.offset > entry->data.offset) {
                e->data.offset -= data_bytes;
            }
            ++e;
        }
        dst->data_count -= data_bytes;
    }
    // Shift entry array
    memmove(entry, entry + 1, sizeof(camera_metadata_buffer_entry_t) *(dst->entry_count - index - 1) );
    dst->entry_count -= 1;

    assert(validate_camera_metadata_structure(dst, NULL) == OK);
    return OK;
}

int update_camera_metadata_entry(camera_metadata_t *dst,size_t index, const void *data,size_t data_count,
        camera_metadata_entry_t *updated_entry) {

    camera_metadata_buffer_entry_t *entry = get_entries(dst) + index;

    size_t data_bytes =calculate_camera_metadata_entry_data_size(entry->type, data_count);
    size_t data_payload_bytes =data_count * camera_metadata_type_size[entry->type];

    size_t entry_bytes = calculate_camera_metadata_entry_data_size(entry->type, entry->count);
    if (data_bytes != entry_bytes) {
        // May need to shift/add to data array
        if (dst->data_capacity < dst->data_count + data_bytes - entry_bytes) {
            // No room
            return ERROR;
        }
        if (entry_bytes != 0) {
            // Remove old data
            uint8_t *start = get_data(dst) + entry->data.offset;
            uint8_t *end = start + entry_bytes;
            size_t length = dst->data_count - entry->data.offset - entry_bytes;
            memmove(start, end, length);
            dst->data_count -= entry_bytes;

            // Update all entry indices to account for shift
            camera_metadata_buffer_entry_t *e = get_entries(dst);
            size_t i;
            for (i = 0; i < dst->entry_count; i++) {
                if (calculate_camera_metadata_entry_data_size( e->type, e->count) > 0 && e->data.offset > entry->data.offset) {
                    e->data.offset -= entry_bytes;
                }
                ++e;
            }
        }
        if (data_bytes != 0) {
            // Append new data
            entry->data.offset = dst->data_count;
            memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
            dst->data_count += data_bytes;
        }
    } else if (data_bytes != 0) {
        // data size unchanged, reuse same data location
        memcpy(get_data(dst) + entry->data.offset, data, data_payload_bytes);
    }

    if (data_bytes == 0) {
        // Data fits into entry
        memcpy(entry->data.value, data, data_payload_bytes);
    }

    entry->count = data_count;

    if (updated_entry != NULL) {
        get_camera_metadata_entry(dst,  index,  updated_entry);
    }

    assert(validate_camera_metadata_structure(dst, NULL) == OK);
    return OK;
}

2.7 Vendor Ops Implementation

Through the vendor ops, vendors can hook up their own metadata tags and the operations that describe them.

The ops tables are registered via set_camera_metadata_vendor_ops() and set_camera_metadata_vendor_cache_ops():

# system/media/camera/src/camera_metadata.c

static const vendor_tag_ops_t *vendor_tag_ops = NULL;
static const struct vendor_tag_cache_ops *vendor_cache_ops = NULL;

// Declared in system/media/private/camera/include/camera_metadata_hidden.h
int set_camera_metadata_vendor_ops(const vendor_tag_ops_t* ops) {
    vendor_tag_ops = ops;
    return OK;
}

// Declared in system/media/private/camera/include/camera_metadata_hidden.h
int set_camera_metadata_vendor_cache_ops( const struct vendor_tag_cache_ops *query_cache_ops) {
    vendor_cache_ops = query_cache_ops;
    return OK;
}
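
Continuing the hypothetical kMyVendorOps table sketched in section 2.4, a vendor HAL would register it once at startup:

// camera_metadata_hidden.h is a private header, as the comments above note.
set_camera_metadata_vendor_ops(&kMyVendorOps);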

static void print_data(int fd, const uint8_t *data_ptr, uint32_t tag, int type, int count, int indentation);

void dump_camera_metadata(const camera_metadata_t *metadata, int fd, int verbosity) {
    dump_indented_camera_metadata(metadata, fd, verbosity, 0);
}

3. Camera MetaData Code-Flow Analysis

The framework-side code lives mainly in frameworks/av/camera/CameraMetadata.cpp.

As the Android.mk below shows, CameraMetadata.cpp is compiled together with the camera client code into libcamera_client.so.

# frameworks/av/camera/Android.mk

LOCAL_SRC_FILES += \
    Camera.cpp \
    CameraMetadata.cpp \
    CameraParameters.cpp \
    CameraParameters2.cpp \
    ICamera.cpp \
    ICameraClient.cpp \

LOCAL_SHARED_LIBRARIES := \
    libcamera_metadata \        // links against libcamera_metadata.so built from system/media/camera
    
LOCAL_MODULE:= libcamera_client 

3.1 CameraMetadata Parameter-Setting Flow

The code in frameworks/av/services/camera/libcameraservice/CameraFlashlight.cpp is a good reference. Using CameraMetadata boils down to four steps:

1. Create the mMetadata object
2. Fetch the default request metadata for the CAMERA3_TEMPLATE_PREVIEW template
3. Call mMetadata->update() to modify entries
4. Call setStreamingRequest() to submit the request

# frameworks/av/services/camera/libcameraservice/CameraFlashlight.cpp

status_t CameraDeviceClientFlashControl::submitTorchEnabledRequest() {
    status_t res;

    if (mMetadata == NULL) {
        // 1. Create the mMetadata object
        mMetadata = new CameraMetadata();
        // 2. Fetch the default metadata for the CAMERA3_TEMPLATE_PREVIEW template
        res = mDevice->createDefaultRequest(  CAMERA3_TEMPLATE_PREVIEW, mMetadata);
    }
    // 3. Call mMetadata->update() to modify entries
    uint8_t torchOn = ANDROID_FLASH_MODE_TORCH;
    mMetadata->update(ANDROID_FLASH_MODE, &torchOn, 1);
    mMetadata->update(ANDROID_REQUEST_OUTPUT_STREAMS, &mStreamId, 1);

    uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
    mMetadata->update(ANDROID_CONTROL_AE_MODE, &aeMode, 1);

    int32_t requestId = 0;
    mMetadata->update(ANDROID_REQUEST_ID, &requestId, 1);

    if (mStreaming) {
        // 4. Call setStreamingRequest() to submit the request
        res = mDevice->setStreamingRequest(*mMetadata);
        ======================>  
        +   @ frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
        +   List<const CameraMetadata> requests;
        +   requests.push_back(request);
        +   return setStreamingRequestList(requests, /*lastFrameNumber*/NULL);
        +       =======>
        +       return submitRequestsHelper(requests, /*repeating*/true, lastFrameNumber);
        <======================
    } else {
        res = mDevice->capture(*mMetadata);
    }
    return res;
}

As we can see, execution ends up in Camera3Device.cpp, where the request is submitted and eventually placed on mRequestQueue,
from which Camera3Device::RequestThread picks it up for processing.

# frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::submitRequestsHelper(
        const List<const CameraMetadata> &requests, bool repeating, /*out*/ int64_t *lastFrameNumber) {

    RequestList requestList;
    res = convertMetadataListToRequestListLocked(requests, /*out*/&requestList);

    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }

    if (res == OK) {
        res = waitUntilStateThenRelock(/*active*/true, kActiveTimeout);
        if (res != OK) {
            SET_ERR_L("Can't transition to active in %f seconds!",  kActiveTimeout/1e9);
        }
        ALOGV("Camera %d: Capture request %" PRId32 " enqueued", mId,
                     (*(requestList.begin()))->mResultExtras.requestId);
    }
    return res;
}

3.1.1 Camera3Device::RequestThread::threadLoop()

Let's look at what Camera3Device::RequestThread::threadLoop() actually does:

1. Wait for the next batch of requests and store them in mNextRequests.
2. Read the latest request's ANDROID_REQUEST_ID entry and prepare the HAL requests.
3. Call the HAL's process_capture_request() to process the request.

# frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp
bool Camera3Device::RequestThread::threadLoop() {
    // 1. Wait for the next batch of requests; they are stored in mNextRequests.
    waitForNextRequestBatch();
    ===========>
    +   additionalRequest.captureRequest = waitForNextRequestLocked();
    +   mNextRequests.add(additionalRequest);
    <===========
    if (mNextRequests.size() == 0) {
        return true;
    }
    // 2. Get the latest request ID (the ANDROID_REQUEST_ID entry), if any
    int latestRequestId;
    camera_metadata_entry_t requestIdEntry = mNextRequests[mNextRequests.size() - 1].
            captureRequest->mSettings.find(ANDROID_REQUEST_ID);
    if (requestIdEntry.count > 0) {
        latestRequestId = requestIdEntry.data.i32[0];
    }
    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
    =============>
    +   status_t res = insertTriggers(captureRequest);
    +   ------------->
    +       mTriggerRemovedMap.add(tag, trigger);
    +       res = metadata.update(tag, &entryValue, /*count*/1);
    +   <-------------
    +   mPrevRequest = captureRequest;
    <=============

    mLatestRequestId = latestRequestId;
    mLatestRequestSignal.signal();

    // 3. Call into the HAL to process the capture request
    ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__, mNextRequests.size());
    for (auto& nextRequest : mNextRequests) {
        // Submit request and block until ready for next one
        ATRACE_ASYNC_BEGIN("frame capture", nextRequest.halRequest.frame_number);
        ATRACE_BEGIN("camera3->process_capture_request");
        res = mHal3Device->ops->process_capture_request(mHal3Device, &nextRequest.halRequest);
        ============>
        +   # hardware/qcom/camera/QCamera2/HAL3/QCamera3HWI.cpp
        +   QCamera3HardwareInterface *hw = reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
        +   int rc = hw->orchestrateRequest(request);
        +   
        <============

        // Mark that the request has been submitted successfully.
        nextRequest.submitted = true;

        // Update the latest request sent to HAL
        if (nextRequest.halRequest.settings != NULL) { // Don't update if they were unchanged
            Mutex::Autolock al(mLatestRequestMutex);

            camera_metadata_t* cloned = clone_camera_metadata(nextRequest.halRequest.settings);
            mLatestRequest.acquire(cloned);

            sp<Camera3Device> parent = mParent.promote();
            if (parent != NULL) {
                parent->monitorMetadata(TagMonitor::REQUEST, nextRequest.halRequest.frame_number,
                        0, mLatestRequest);
            }
        }
        // Remove any previously queued triggers (after unlock)
        res = removeTriggers(mPrevRequest);
    
    }
    mNextRequests.clear();
    return true;
}

3.1.1.1 Handling the Framework's Request in the HAL

# hardware/qcom/camera/QCamera2/HAL3/QCamera3HWI.cpp
/*===========================================================================
 * FUNCTION   : orchestrateRequest
 * DESCRIPTION: Orchestrates a capture request from camera service
 *
 * PARAMETERS :
 *   @request : request from framework to process
 * RETURN     : Error status codes
 *==========================================================================*/
int32_t QCamera3HardwareInterface::orchestrateRequest( camera3_capture_request_t *request)
{

    uint32_t originalFrameNumber = request->frame_number;
    uint32_t originalOutputCount = request->num_output_buffers;
    const camera_metadata_t *original_settings = request->settings;
    List<InternalRequest> internallyRequestedStreams;
    List<InternalRequest> emptyInternalList;

    if (isHdrSnapshotRequest(request) && request->input_buffer == NULL) {
        LOGD("Framework requested:%d buffers in HDR snapshot", request->num_output_buffers);
        uint32_t internalFrameNumber;
        CameraMetadata modified_meta;
        int8_t hdr_exp_values;
        cam_hdr_bracketing_info_t& hdrBracketingSetting = gCamCapability[mCameraId]->hdr_bracketing_setting;
        uint32_t hdrFrameCount = hdrBracketingSetting.num_frames;
        LOGD("HDR values %d, %d frame count: %u",
              (int8_t) hdrBracketingSetting.exp_val.values[0],
              (int8_t) hdrBracketingSetting.exp_val.values[1],  hdrFrameCount);

        cam_exp_bracketing_t aeBracket;
        memset(&aeBracket, 0, sizeof(cam_exp_bracketing_t));
        aeBracket.mode = hdrBracketingSetting.exp_val.mode;

        if (aeBracket.mode == CAM_EXP_BRACKETING_OFF) {
            LOGD(" Bracketing is Off");
        }

        /* Add Blob channel to list of internally requested streams */
        for (uint32_t i = 0; i < request->num_output_buffers; i++) {
            if (request->output_buffers[i].stream->format == HAL_PIXEL_FORMAT_BLOB) {
                InternalRequest streamRequested;
                streamRequested.meteringOnly = 1;
                streamRequested.need_metadata = 0;
                streamRequested.stream = request->output_buffers[i].stream;
                internallyRequestedStreams.push_back(streamRequested);
            }
        }
        request->num_output_buffers = 0;
        auto itr =  internallyRequestedStreams.begin();

        // This is where the request's metadata gets modified
        /* Modify setting to set compensation */
        modified_meta = request->settings;
        hdr_exp_values = hdrBracketingSetting.exp_val.values[0];
        int32_t expCompensation = hdr_exp_values;
        uint8_t aeLock = 1;
        modified_meta.update(ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION, &expCompensation, 1);
        modified_meta.update(ANDROID_CONTROL_AE_LOCK, &aeLock, 1);
        camera_metadata_t *modified_settings = modified_meta.release();
        request->settings = modified_settings;

        /* Capture Settling & -2x frame */
        _orchestrationDb.generateStoreInternalFrameNumber(internalFrameNumber);
        request->frame_number = internalFrameNumber;
        processCaptureRequest(request, internallyRequestedStreams);

        request->num_output_buffers = originalOutputCount;
        _orchestrationDb.allocStoreInternalFrameNumber(originalFrameNumber, internalFrameNumber);
        request->frame_number = internalFrameNumber;
        mHdrFrameNum = internalFrameNumber;
        processCaptureRequest(request, emptyInternalList);
        request->num_output_buffers = 0;

        modified_meta = modified_settings;
        hdr_exp_values = hdrBracketingSetting.exp_val.values[1];
        expCompensation = hdr_exp_values;
        aeLock = 1;
        modified_meta.update(ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION, &expCompensation, 1);
        modified_meta.update(ANDROID_CONTROL_AE_LOCK, &aeLock, 1);
        modified_settings = modified_meta.release();
        request->settings = modified_settings;

        /* Capture Settling & 0X frame */

        itr =  internallyRequestedStreams.begin();
        if (itr == internallyRequestedStreams.end()) {
            LOGE("Error Internally Requested Stream list is empty");
            assert(0);
        } else {
            itr->need_metadata = 0;
            itr->meteringOnly = 1;
        }

        _orchestrationDb.generateStoreInternalFrameNumber(internalFrameNumber);
        request->frame_number = internalFrameNumber;
        processCaptureRequest(request, internallyRequestedStreams);
        ==================>
        +   rc = mCameraHandle->ops->set_parms(mCameraHandle->camera_handle, mParameters);
        +   ==================>
        +   -   # hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera_interface.c
        +   -   /* camera ops v-table */
        +   -   static mm_camera_ops_t mm_camera_ops = {
        +   -       .set_parms = mm_camera_intf_set_parms,
        +   -       .get_parms = mm_camera_intf_get_parms,
        +   -   }
        +   -   ==================>
        +   -   |   mm_camera_set_parms(my_obj, parms);
        +   -   |   ==================>
        +   -   |   +   # hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera.c
        +   -   |   +   c = mm_camera_util_s_ctrl(my_obj, 0, my_obj->ctrl_fd, CAM_PRIV_PARM, &value);
        +   -   |   +   =========>
        +   -   |   +       # hardware/qcom/camera/QCamera2/stack/mm-camera-interface/src/mm_camera.c
        +   -   |   +       control.id = id;
        +   -   |   +       control.value = *value;
        +   -   |   +       rc = ioctl(fd, VIDIOC_S_CTRL, &control);
        +   -   |   <==================
        +   -   <==================
        +   <==================
        <==================

        _orchestrationDb.generateStoreInternalFrameNumber(internalFrameNumber);
        request->frame_number = internalFrameNumber;
        processCaptureRequest(request, internallyRequestedStreams);

        /* Capture 2X frame*/
        modified_meta = modified_settings;
        hdr_exp_values = hdrBracketingSetting.exp_val.values[2];
        expCompensation = hdr_exp_values;
        aeLock = 1;
        modified_meta.update(ANDROID_CONTROL_AE_EXPOSURE_COMPENSATION, &expCompensation, 1);
        modified_meta.update(ANDROID_CONTROL_AE_LOCK, &aeLock, 1);
        modified_settings = modified_meta.release();
        request->settings = modified_settings;

        _orchestrationDb.generateStoreInternalFrameNumber(internalFrameNumber);
        request->frame_number = internalFrameNumber;
        processCaptureRequest(request, internallyRequestedStreams);

        _orchestrationDb.generateStoreInternalFrameNumber(internalFrameNumber);
        request->frame_number = internalFrameNumber;
        mHdrSnapshotRunning = true;
        processCaptureRequest(request, internallyRequestedStreams);

        /* Capture 2X on original streaming config*/
        internallyRequestedStreams.clear();

        /* Restore original settings pointer */
        request->settings = original_settings;
    } else {
        uint32_t internalFrameNumber;
        _orchestrationDb.allocStoreInternalFrameNumber(request->frame_number, internalFrameNumber);
        request->frame_number = internalFrameNumber;
        return processCaptureRequest(request, internallyRequestedStreams);
    }

    return NO_ERROR;
}

3.1.1.2 Sending Parameters Down to V4L2 via ioctl

The trace above bottoms out at ioctl(fd, VIDIOC_S_CTRL, &control): the request's parameters are packed into a v4l2_control and handed to the V4L2 layer.

# kernel/msm-4.4/drivers/media/v4l2-core/v4l2-subdev.c

static long subdev_do_ioctl(struct file *file, unsigned int cmd, void *arg)
{
    switch (cmd) {
    case VIDIOC_G_CTRL:
        return v4l2_g_ctrl(vfh->ctrl_handler, arg);

    case VIDIOC_S_CTRL:
        return v4l2_s_ctrl(vfh, vfh->ctrl_handler, arg);
    }
}

In v4l2-ctrls.c:

# kernel/msm-4.4/drivers/media/v4l2-core/v4l2-ctrls.c

int v4l2_s_ctrl(struct v4l2_fh *fh, struct v4l2_ctrl_handler *hdl, struct v4l2_control *control)
{
    struct v4l2_ctrl *ctrl = v4l2_ctrl_find(hdl, control->id);
    struct v4l2_ext_control c = { control->id };
    int ret;
    
    if (ctrl->flags & V4L2_CTRL_FLAG_READ_ONLY)
        return -EACCES;

    c.value = control->value;
    ret = set_ctrl_lock(fh, ctrl, &c);
    ===============>
        user_to_new(c, ctrl);
        ret = set_ctrl(fh, ctrl, 0);
        ========>
            return try_or_set_cluster(fh, master, true, ch_flags);
    <===============
    control->value = c.value;
    return ret;
}
EXPORT_SYMBOL(v4l2_s_ctrl);

Inside try_or_set_cluster():

# kernel/msm-4.4/drivers/media/v4l2-core/v4l2-ctrls.c
/* Core function that calls try/s_ctrl and ensures that the new value is
   copied to the current value on a set. Must be called with ctrl->handler->lock held. */
static int try_or_set_cluster(struct v4l2_fh *fh, struct v4l2_ctrl *master, bool set, u32 ch_flags)
{
    ret = call_op(master, try_ctrl);

    ret = call_op(master, s_ctrl);

    /* If OK, then make the new values permanent. */
    update_flag = is_cur_manual(master) != is_new_manual(master);
    for (i = 0; i < master->ncontrols; i++)
        new_to_cur(fh, master->cluster[i], ch_flags |
            ((update_flag && i > 0) ? V4L2_EVENT_CTRL_CH_FLAGS : 0));
    return 0;
}

Next, call_op(master, s_ctrl) is invoked to apply the setting. call_op is defined as follows:

# kernel/msm-4.4/drivers/media/v4l2-core/v4l2-ctrls.c
#define call_op(master, op) \
    (has_op(master, op) ? master->ops->op(master) : 0)

Since master is of type struct v4l2_ctrl *:

struct v4l2_ctrl {
    const struct v4l2_ctrl_ops *ops;
    const struct v4l2_ctrl_type_ops *type_ops;
    ......
};

# kernel/msm-4.4/include/media/v4l2-ctrls.h

v4l2_ctrl_ops is defined as:

struct v4l2_ctrl_ops {
    int (*g_volatile_ctrl)(struct v4l2_ctrl *ctrl);
    int (*try_ctrl)(struct v4l2_ctrl *ctrl);
    int (*s_ctrl)(struct v4l2_ctrl *ctrl);
};

The ops pointer (const struct v4l2_ctrl_ops *ops) is initialized in v4l2_ctrl_new():

/* Add a new control */
static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
            const struct v4l2_ctrl_ops *ops,
            const struct v4l2_ctrl_type_ops *type_ops,
            u32 id, const char *name, enum v4l2_ctrl_type type,
            s64 min, s64 max, u64 step, s64 def,
            const u32 dims[V4L2_CTRL_MAX_DIMS], u32 elem_size,
            u32 flags, const char * const *qmenu,
            const s64 *qmenu_int, void *priv)
{
    ctrl->handler = hdl;
    ctrl->ops = ops;
    ctrl->type_ops = type_ops ? type_ops : &std_type_ops;
    ctrl->id = id;
    ctrl->name = name;
    ctrl->type = type;

3.1.1.3 What Does a V4L2 Ctrl Actually Do?

Let's pick an arbitrary driver as a reference, say kernel/msm-4.4/drivers/media/i2c/ov7670.c. This particular driver may never run on our device, but it is a perfectly good place to learn what a ctrl is.

At probe time the driver registers a number of ctrls. Taking V4L2_CID_BRIGHTNESS as an example and following the code below, we eventually arrive at the point where hardware registers are written.

That makes the whole chain clear.

# kernel/msm-4.4/drivers/media/i2c/ov7670.c
static int ov7670_probe(struct i2c_client *client,const struct i2c_device_id *id)
{
    v4l2_ctrl_new_std(&info->hdl, &ov7670_ctrl_ops, V4L2_CID_BRIGHTNESS, 0, 255, 1, 128);
    v4l2_ctrl_new_std(&info->hdl, &ov7670_ctrl_ops,V4L2_CID_CONTRAST, 0, 127, 1, 64);
    v4l2_ctrl_new_std(&info->hdl, &ov7670_ctrl_ops,V4L2_CID_VFLIP, 0, 1, 1, 0);
    v4l2_ctrl_new_std(&info->hdl, &ov7670_ctrl_ops,V4L2_CID_HFLIP, 0, 1, 1, 0);
    info->saturation = v4l2_ctrl_new_std(&info->hdl, &ov7670_ctrl_ops,V4L2_CID_SATURATION, 0, 256, 1, 128);
}

static const struct v4l2_ctrl_ops ov7670_ctrl_ops = {
    .s_ctrl = ov7670_s_ctrl,
    .g_volatile_ctrl = ov7670_g_volatile_ctrl,
};

static int ov7670_s_ctrl(struct v4l2_ctrl *ctrl)
{
    struct v4l2_subdev *sd = to_sd(ctrl);
    struct ov7670_info *info = to_state(sd);

    switch (ctrl->id) {
    case V4L2_CID_BRIGHTNESS:
        return ov7670_s_brightness(sd, ctrl->val);
    case V4L2_CID_CONTRAST:
        return ov7670_s_contrast(sd, ctrl->val);
    case V4L2_CID_SATURATION:
        return ov7670_s_sat_hue(sd, info->saturation->val, info->hue->val);
    }
    return -EINVAL;
}


static int ov7670_s_brightness(struct v4l2_subdev *sd, int value)
{
    unsigned char com8 = 0, v;
    int ret;

    ov7670_read(sd, REG_COM8, &com8);
    com8 &= ~COM8_AEC;
    ov7670_write(sd, REG_COM8, com8);
    v = ov7670_abs_to_sm(value);
    ret = ov7670_write(sd, REG_BRIGHT, v);
    return ret;
}

3.2 Summary: How CameraMetadata Parameters Are Delivered

In section 3.1 we traced the full delivery path of CameraMetadata parameters:

1. Create the mMetadata object and fetch the default metadata for the CAMERA3_TEMPLATE_PREVIEW template
2. Call mMetadata->update() to modify entries, then setStreamingRequest() to submit them
3. In Camera3Device.cpp the request is placed on mRequestQueue
4. Camera3Device::RequestThread::threadLoop() consumes mRequestQueue
5. The HAL's process_capture_request() is called to handle the request
6. In the HAL, QCamera3HardwareInterface::orchestrateRequest() processes the request sent down by the framework
7. The parameters finally reach V4L2 through ioctl(fd, VIDIOC_S_CTRL, &control)
8. Inside V4L2, the ops registered with the ctrl by the specific device driver are invoked to update the actual hardware registers

4. CameraMetadata.cpp Code Analysis

4.1 CameraMetadata Method Definitions

# frameworks/av/include/camera/CameraMetadata.h
class CameraMetadata: public Parcelable {
  public:
    /** Creates an empty object; best used when expecting to acquire contents from elsewhere */
    CameraMetadata();
    /** Creates an object with space for entryCapacity entries, with dataCapacity extra storage */
    CameraMetadata(size_t entryCapacity, size_t dataCapacity = 10);
    /** Takes ownership of passed-in buffer */
    CameraMetadata(camera_metadata_t *buffer);
    /** Clones the metadata */
    CameraMetadata(const CameraMetadata &other);
    
    /* Update metadata entry. Will create entry if it doesn't exist already, and
     * will reallocate the buffer if insufficient space exists. Overloaded for
     * the various types of valid data. */
    status_t update(uint32_t tag, const uint8_t *data, size_t data_count);
    status_t update(uint32_t tag, const int32_t *data, size_t data_count);
    status_t update(uint32_t tag, const float *data, size_t data_count);
    status_t update(uint32_t tag, const int64_t *data, size_t data_count);
    status_t update(uint32_t tag, const double *data, size_t data_count);
    status_t update(uint32_t tag, const camera_metadata_rational_t *data, size_t data_count);
    status_t update(uint32_t tag, const String8 &string);
    status_t update(const camera_metadata_ro_entry &entry);
    template<typename T>
    status_t update(uint32_t tag, Vector<T> data) {
        return update(tag, data.array(), data.size());
    }
    
    // Metadata object is unchanged when reading from parcel fails.
    virtual status_t readFromParcel(const Parcel *parcel) override;
    virtual status_t writeToParcel(Parcel *parcel) const override;

    /* Caller becomes the owner of the new metadata
      * 'const Parcel' doesnt prevent us from calling the read functions.
      *  which is interesting since it changes the internal state
      *
      * NULL can be returned when no metadata was sent, OR if there was an issue
      * unpacking the serialized data (i.e. bad parcel or invalid structure).*/
    static status_t readFromParcel(const Parcel &parcel, camera_metadata_t** out);
    /* Caller retains ownership of metadata
      * - Write 2 (int32 + blob) args in the current position */
    static status_t writeToParcel(Parcel &parcel, const camera_metadata_t* metadata);
  private:
    camera_metadata_t *mBuffer;
};
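
Before diving into update(), here is a minimal usage sketch of the C++ wrapper (values chosen arbitrarily; exists() and find() are further CameraMetadata methods from the same header, not shown in the excerpt):

CameraMetadata meta(/*entryCapacity*/ 10);

uint8_t aeMode = ANDROID_CONTROL_AE_MODE_ON;
meta.update(ANDROID_CONTROL_AE_MODE, &aeMode, 1);

if (meta.exists(ANDROID_CONTROL_AE_MODE)) {
    camera_metadata_entry_t e = meta.find(ANDROID_CONTROL_AE_MODE);
    // e.data.u8[0] == ANDROID_CONTROL_AE_MODE_ON
}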

4.2 Modifying MetaData in Memory: CameraMetadata::update()

When metadata needs to be changed, one of the update() overloads is called:

# frameworks/av/camera/CameraMetadata.cpp
status_t CameraMetadata::update(uint32_t tag, const int32_t *data, size_t data_count) {
    return updateImpl(tag, (const void*)data, data_count);
}

All overloads funnel into CameraMetadata::updateImpl(); let's look at its implementation.
If an entry for the tag already exists, its data is updated in place; otherwise a new entry is added.
Either way, the change only modifies the metadata buffer in memory: nothing has been sent down yet, so the new value cannot take effect until the request is actually submitted.

# frameworks/av/camera/CameraMetadata.cpp
status_t CameraMetadata::updateImpl(uint32_t tag, const void *data, size_t data_count) {
    status_t res;
    int type = get_camera_metadata_tag_type(tag); // look up the tag's type, needed to size its data below

    // Safety check - ensure that data isn't pointing to this metadata, since
    // that would get invalidated if a resize is needed
    size_t bufferSize = get_camera_metadata_size(mBuffer);
    uintptr_t bufAddr = reinterpret_cast<uintptr_t>(mBuffer);
    uintptr_t dataAddr = reinterpret_cast<uintptr_t>(data);

    size_t data_size = calculate_camera_metadata_entry_data_size(type, data_count);

    res = resizeIfNeeded(1, data_size);

    if (res == OK) {
        camera_metadata_entry_t entry;
        res = find_camera_metadata_entry(mBuffer, tag, &entry);
        if (res == NAME_NOT_FOUND) {
            res = add_camera_metadata_entry(mBuffer,tag, data, data_count);
        } else if (res == OK) {
            res = update_camera_metadata_entry(mBuffer, entry.index, data, data_count, NULL);
        }
    }
    return res;
}
update_camera_metadata_entry() itself was already shown in full in section 2.6: when the payload size is unchanged it reuses the existing data slot; otherwise it removes the old data (shifting the offsets of every later entry), appends the new payload at the end of the data region, and finally refreshes the entry's count.

Original article (Chinese): https://blog.csdn.net/Ciellee/article/details/105807436
