MTK Dual-Camera Algorithm Integration


By reading this article, you will learn the following:

1. Dual-camera algorithm overview
2. Choosing a feature and configuring the feature table
3. Hooking up the algorithm
4. Calling the algorithm from the app
5. Conclusion

1. Dual-camera algorithm overview

Dual-camera algorithms are considerably more complex than single-frame and multi-frame algorithms. Whether used for night shots, HDR, or bokeh (depth-of-field/portrait/large-aperture), a dual-camera algorithm generally requires frame synchronization between the main and auxiliary cameras. Moreover, since every pair of camera modules has some unit-to-unit variation, a dedicated calibration program is developed and run on the factory production line. The calibration program writes the calibration parameters (i.e., the calibration results) to a partition that is not easily erased (such as the NV partition). At capture time, the dual-camera algorithm uses the calibration parameters to correct for module variation, computes parameters such as depth and exposure from the main and auxiliary images, and then applies those parameters to the main image to achieve night-shot enhancement, HDR, background blur (depth-of-field/portrait/large-aperture), and similar effects.

For algorithm integration, there are generally two parts:

  • Integrating the calibration program: including the calibration APP and configuring its SELinux permissions, etc.
  • Integrating the dual-camera algorithm itself: as with single-frame and multi-frame algorithms — choose the appropriate feature, implement the corresponding plugin, and hook up the algorithm.

Since preloading the calibration APP is straightforward, this article does not cover it. For configuring the calibration APP's SELinux permissions, see my other article on SELinux permissions.

Since I cannot provide a real dual-camera algorithm, as with the single-frame integration article I provide a mock algorithm library instead. It stitches the main and auxiliary camera images together, pasting the auxiliary image into the middle of the main image for a picture-in-picture effect.

2. Choosing a feature and configuring the feature table

2.1 Choosing a feature

Dual-camera algorithms are common, and MTK already predefines several dual-camera features. In summary, the following features are intended for dual camera:
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/mtk/mtk_feature_type.h:

    MTK_FEATURE_DEPTH       = 1ULL << 8, 
    MTK_FEATURE_BOKEH       = 1ULL << 9,
    MTK_FEATURE_VSDOF       = (MTK_FEATURE_DEPTH|MTK_FEATURE_BOKEH),
    MTK_FEATURE_DUAL_YUV    = 1ULL << 14,
    MTK_FEATURE_DUAL_HWDEPTH  = 1ULL << 15,

Here, MTK_FEATURE_DEPTH and MTK_FEATURE_BOKEH are used for dual-camera bokeh, with depth computation and blur processing performed at two separate mount points.

vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:

    TP_FEATURE_DEPTH        = 1ULL << 37,
    TP_FEATURE_BOKEH        = 1ULL << 38,
    TP_FEATURE_VSDOF        = (TP_FEATURE_DEPTH|TP_FEATURE_BOKEH),
    TP_FEATURE_FUSION       = 1ULL << 39,
    TP_FEATURE_HDR_DC       = 1ULL << 40,
    TP_FEATURE_DUAL_YUV     = 1ULL << 41,
    TP_FEATURE_DUAL_HWDEPTH = 1ULL << 42,
    TP_FEATURE_PUREBOKEH    = 1ULL << 43,

These are the customer-side features. TP_FEATURE_DEPTH and TP_FEATURE_BOKEH are likewise used for dual-camera bokeh, again with depth computation and blur processing performed in two separate nodes. TP_FEATURE_FUSION and TP_FEATURE_PUREBOKEH are also for dual-camera bokeh, but they perform depth computation and blurring at a single mount point. TP_FEATURE_HDR_DC is for dual-camera HDR algorithms.

Judging by MTK's design intent, MTK_FEATURE_DUAL_YUV and TP_FEATURE_DUAL_YUV should also be usable for dual-camera algorithms, but I have not tried them; I generally use TP_FEATURE_FUSION or TP_FEATURE_PUREBOKEH. Interested readers can experiment on their own.

Since MTK has already predefined these, at this step we simply pick the matching entry rather than adding a new feature. Because this is a third-party algorithm, we choose TP_FEATURE_PUREBOKEH.

2.2 Configuring the feature table

Having chosen TP_FEATURE_PUREBOKEH in the previous step, we find that MTK has thoughtfully defined MTK_FEATURE_COMBINATION_TP_PUREBOKEH in vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp already, so the definition step is also taken care of. We only need to add MTK_FEATURE_COMBINATION_TP_PUREBOKEH to the corresponding MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM. Since we have no other dual-camera algorithms here, we comment out the other two lines.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
index 38365e0602..7adc2a76db 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -363,8 +363,9 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>>  gMtkScenarioFe
         CAMERA_SCENARIO_END
         //
         CAMERA_SCENARIO_START(MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM)
-        ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,   MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR)
-        ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_VSDOF)
+        //ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,   MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR)
+        //ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_VSDOF)
+        ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,  MTK_FEATURE_COMBINATION_TP_PUREBOKEH)
         CAMERA_SCENARIO_END
         //
         CAMERA_SCENARIO_START(MTK_CAMERA_SCENARIO_CAPTURE_CSHOT)

Note:
On Android 9.0 the feature table is split by camera id, so you must modify MTK_CAMERA_SCENARIO_CAPTURE_DUALCAM under openId = 4.

As an aside, on a phone with four cameras the logical camera ids are usually assigned as follows:

  • 0: rear main camera
  • 1: front main camera
  • 2: rear auxiliary camera
  • 3: rear wide-angle camera
  • 4: dual-camera mode, with cameras 0 and 2 opened together

Phones on the market already have five or six cameras, and some support multiple dual-camera pairings — for example, main + auxiliary for bokeh, wide-angle + telephoto, and main + macro; some phones even have two front cameras. I have not yet worked on a project with multiple dual-camera pairings or front dual cameras, and every company (even every project) may differ, so the list above may be neither complete nor exact; corrections and additions from knowledgeable readers are welcome.

3. Hooking up the algorithm

3.1 Choosing a plugin for the algorithm

In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 divides the third-party algorithm mount points roughly into the following categories:

  • BokehPlugin: mount point for the blur half of a dual-camera depth-of-field algorithm.
  • DepthPlugin: mount point for the depth-computation half of a dual-camera depth-of-field algorithm.
  • FusionPlugin: depth and bokeh combined in one algorithm, i.e., the merged dual-camera depth-of-field mount point.
  • JoinPlugin: mount point for streaming-related algorithms; preview algorithms hook in here.
  • MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g., MFNR/HDR.
  • RawPlugin: mount point for RAW algorithms, e.g., remosaic.
  • YuvPlugin: mount point for single-frame YUV algorithms, e.g., beautification or wide-angle lens distortion correction.

Match the algorithm you are integrating to the appropriate plugin. Since our mock library handles the dual-camera algorithm at a single mount point, we choose FusionPlugin.

3.2 Adding a global macro switch

To control whether a given project integrates this algorithm, we add a macro in device/mediateksample/[platform]/ProjectConfig.mk that gates compilation of the newly integrated algorithm:

QXT_DUALCAMERA_SUPPORT = yes

When a project does not need this algorithm, simply set QXT_DUALCAMERA_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.

3.3 Writing the algorithm integration files

vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_dualcamera/
├── Android.mk
├── DualCameraCapture.cpp
├── include
│   └── dual_camera.h
└── lib
    ├── arm64-v8a
    │   └── libdualcamera.so
    └── armeabi-v7a
        └── libdualcamera.so

File descriptions:

  • Android.mk configures the algorithm library, the header, and the integration source file DualCameraCapture.cpp, compiling them into the library libmtkcam.plugin.tp_dc for libmtkcam_3rdparty.customer to depend on and call.

  • libdualcamera.so stitches the main and auxiliary images into a single picture-in-picture frame; it stands in for the real third-party dual-camera library to be integrated. dual_camera.h is its header file.

  • DualCameraCapture.cpp is the integration source file.

3.3.1 mtkcam3/3rdparty/customer/cp_dualcamera/Android.mk
ifeq ($(QXT_DUALCAMERA_SUPPORT),yes)

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libdualcamera
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libdualcamera.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libdualcamera.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)
################################################################################
#
################################################################################
include $(CLEAR_VARS)

#-----------------------------------------------------------
-include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

#-----------------------------------------------------------
LOCAL_SRC_FILES += DualCameraCapture.cpp

#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
#
LOCAL_C_INCLUDES += system/media/camera/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_dualcamera/include

#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#

#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
#
LOCAL_WHOLE_STATIC_LIBRARIES +=

#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
#LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty

#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_dc
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)

################################################################################
#
################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))
endif

3.3.2 mtkcam3/3rdparty/customer/cp_dualcamera/include/dual_camera.h
#ifndef QXT_DUAL_CAMERA_H
#define QXT_DUAL_CAMERA_H

typedef unsigned char uchar;

#define CENTER 0
#define LEFT_TOP 1
#define LEFT_BOTTOM 2
#define RIGHT_TOP 3
#define RIGHT_BOTTOM 4

class DualCamera {

public:
    DualCamera();

    ~DualCamera();

    void processI420(uchar *main, int mainWidth, int mainHeight,
                            uchar *sub, int subWidth, int subHeight);

    void processI420(uchar *mainY, uchar *mainU, uchar *mainV, int mainWidth, int mainHeight,
                            uchar *subY, uchar *subU, uchar *subV, int subWidth, int subHeight);

    void processNV21(uchar *main, int mainWidth, int mainHeight,
                            uchar *sub, int subWidth, int subHeight);

    void processNV21(uchar *mainY, uchar *mainUV, int mainWidth, int mainHeight,
                            uchar *subY, uchar *subUV, int subWidth, int subHeight);

private:
    int position;
};

#endif //QXT_DUAL_CAMERA_H

The interface functions in the header:

  • DualCamera: constructor; it simulates reading the calibration parameter file. Here the mock calibration file holds just a single number, which selects the position of the auxiliary image.
  • processI420: stitches the main and auxiliary images into a picture-in-picture; input and output images must be in I420 format.
  • processNV21: stitches the main and auxiliary images into a picture-in-picture; input and output images must be in NV21 format.
  • ~DualCamera(): destructor; does nothing of note.

For interested readers, the implementation file dual_camera.cpp is included as well:

#include <cstring>
#include <cstdio>
#include "dual_camera.h"
#include "logger.h"

using namespace std;

DualCamera::DualCamera() {
    const char * path = "/vendor/persist/camera/calibration.cfg";
    FILE *fp;
    if ((fp = fopen(path, "r")) != nullptr) {
        int buffer = CENTER;
        fread(&buffer, sizeof(int), 1, fp);  // read the stored position
        fclose(fp);                          // don't leak the file handle
        position = buffer;
    } else {
        LOGE("Failed to open: %s", path);
        position = CENTER;
    }
}

DualCamera::~DualCamera() = default;

void DualCamera::processI420(uchar *main, int mainWidth, int mainHeight,
                                 uchar *sub, int subWidth, int subHeight) {
    uchar *mainY = main;
    uchar *mainU = main + mainWidth * mainHeight;
    uchar *mainV = main + mainWidth * mainHeight * 5 / 4;
    uchar *subY = sub;
    uchar *subU = sub + subWidth * subHeight;
    uchar *subV = sub + subWidth * subHeight * 5 / 4;
    processI420(mainY, mainU, mainV, mainWidth, mainHeight, subY, subU, subV, subWidth, subHeight);
}

void
DualCamera::processI420(uchar *mainY, uchar *mainU, uchar *mainV, int mainWidth, int mainHeight,
                            uchar *subY, uchar *subU, uchar *subV, int subWidth, int subHeight) {
    int mainUVHeight = mainHeight / 2;
    int mainUVWidth = mainWidth / 2;

    int subUVHeight = subHeight / 2;
    int subUVWidth = subWidth / 2;

    //merge
    unsigned char *pDstY;
    unsigned char *pSrcY;

    for (int i = 0; i < subHeight; i++) {
        pSrcY = subY + i * subWidth;
        if (position == LEFT_TOP) {
            pDstY = mainY + i * mainWidth;
        } else if (position == LEFT_BOTTOM) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth);
        } else if (position == RIGHT_TOP) {
            pDstY = mainY + i * mainWidth + (mainWidth - subWidth);
        } else if (position == RIGHT_BOTTOM) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth) +
                    (mainWidth - subWidth);
        } else if (position == CENTER) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) / 2 * mainWidth) +
                    (mainWidth - subWidth) / 2;
        } else {
            LOGE("Unsupported position: %d", position);
            return;
        }
        memcpy(pDstY, pSrcY, subWidth);
    }

    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pSrcU;
    unsigned char *pSrcV;
    for (int i = 0; i < subUVHeight; i++) {
        pSrcU = subU + i * subUVWidth;
        pSrcV = subV + i * subUVWidth;
        if (position == LEFT_TOP) {
            pDstU = mainU + i * mainUVWidth;
            pDstV = mainV + i * mainUVWidth;
        } else if (position == LEFT_BOTTOM) {
            pDstU = mainU + ((mainUVHeight - subUVHeight) * mainUVWidth) + i * mainUVWidth;
            pDstV = mainV + ((mainUVHeight - subUVHeight) * mainUVWidth) + i * mainUVWidth;
        } else if (position == RIGHT_TOP) {
            pDstU = mainU + i * mainUVWidth + mainUVWidth - subUVWidth;
            pDstV = mainV + i * mainUVWidth + mainUVWidth - subUVWidth;
        } else if (position == RIGHT_BOTTOM) {
            pDstU = mainU + ((mainUVHeight - subUVHeight) * mainUVWidth) +
                    i * mainUVWidth + (mainUVWidth - subUVWidth);
            pDstV = mainV + ((mainUVHeight - subUVHeight) * mainUVWidth) +
                    i * mainUVWidth + (mainUVWidth - subUVWidth);
        } else if (position == CENTER) {
            pDstU = mainU + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth) +
                    i * mainUVWidth + (mainUVWidth - subUVWidth) / 2;
            pDstV = mainV + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth) +
                    i * mainUVWidth + (mainUVWidth - subUVWidth) / 2;
        } else {
            LOGE("Unsupported position: %d", position);
            return;
        }
        memcpy(pDstU, pSrcU, subUVWidth);
        memcpy(pDstV, pSrcV, subUVWidth);

    }
}

void DualCamera::processNV21(uchar *main, int mainWidth, int mainHeight,
                 uchar *sub, int subWidth, int subHeight) {
    uchar *mainY = main;
    uchar *mainUV = main + mainWidth * mainHeight;
    uchar *subY = sub;
    uchar *subUV = sub + subWidth * subHeight;
    processNV21(mainY, mainUV, mainWidth, mainHeight, subY, subUV, subWidth, subHeight);
}

void DualCamera::processNV21(uchar *mainY, uchar *mainUV, int mainWidth, int mainHeight,
                 uchar *subY, uchar *subUV, int subWidth, int subHeight) {
    LOGD("[processNV21] mainY:%p, mainUV:%p, mainWidth:%d, mainHeight:%d, subY:%p, subUV:%p, subWidth:%d, subHeight:%d, position:%d",
            mainY, mainUV, mainWidth, mainHeight, subY, subUV, subWidth, subHeight, position);
    int mainUVHeight = mainHeight / 2;
    int mainUVWidth = mainWidth / 2;
    unsigned char *pDstY;
    unsigned char *pSrcY;

    for (int i = 0; i < subHeight; i++) {
        pSrcY = subY + i * subWidth;
        if (position == LEFT_TOP) {
            pDstY = mainY + i * mainWidth;
        } else if (position == LEFT_BOTTOM) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth);
        } else if (position == RIGHT_TOP) {
            pDstY = mainY + i * mainWidth + (mainWidth - subWidth);
        } else if (position == RIGHT_BOTTOM) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) * mainWidth) +
                    (mainWidth - subWidth);
        } else if (position == CENTER) {
            pDstY = mainY + i * mainWidth + ((mainHeight - subHeight) / 2 * mainWidth) +
                    (mainWidth - subWidth) / 2;
        } else {
            LOGE("Unsupported position: %d", position);
            return;
        }
        memcpy(pDstY, pSrcY, subWidth);
    }

    int subUVHeight = subHeight / 2;
    int subUVWidth = subWidth / 2;
    unsigned char *pDstUV;
    unsigned char *pSrcUV;
    for (int i = 0; i < subUVHeight; i++) {
        pSrcUV = subUV + i * subUVWidth * 2;
        if (position == LEFT_TOP) {
            pDstUV = mainUV + i * mainUVWidth * 2;
        } else if (position == LEFT_BOTTOM) {
            pDstUV = mainUV + ((mainUVHeight - subUVHeight) * mainUVWidth + i * mainUVWidth) * 2;
        } else if (position == RIGHT_TOP) {
            pDstUV = mainUV + (i * mainUVWidth + mainUVWidth - subUVWidth) * 2;
        } else if (position == RIGHT_BOTTOM)  {
            pDstUV = mainUV + ((mainUVHeight - subUVHeight) * mainUVWidth +
                    i * mainUVWidth + mainUVWidth - subUVWidth) * 2;
        } else if (position == CENTER) {
            pDstUV = mainUV + ((mainUVHeight - subUVHeight) / 2 * mainUVWidth +
                    i * mainUVWidth + (mainUVWidth - subUVWidth) / 2) * 2;
        } else {
            LOGE("Unsupported position: %d", position);
            return;
        }
        memcpy(pDstUV, pSrcUV, subUVWidth * 2);
    }
}

3.3.3 mtkcam3/3rdparty/customer/cp_dualcamera/DualCameraCapture.cpp
#define LOG_TAG "DualCamera"

// Standard C header file
#include <stdlib.h>
#include <cstring>
#include <ctime>
#include <chrono>
#include <random>
#include <thread>
// Android system/core header file
#include <cutils/properties.h>
// mtkcam custom header file

// mtkcam global header file
#include <mtkcam/utils/std/Log.h>
// Module header file
#include <mtkcam/drv/iopipe/SImager/IImageTransform.h>
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
//
#include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
// Local header file
#include <dual_camera.h>

using namespace NSCam;
using namespace android;
using namespace std;
using namespace NSCam::NSPipelinePlugin;
/******************************************************************************
 *
 ******************************************************************************/
#define MY_LOGV(fmt, arg...)        CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...)        CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGI(fmt, arg...)        CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...)        CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...)        CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
//
#define MY_LOGV_IF(cond, ...)       do { if( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
#define MY_LOGD_IF(cond, ...)       do { if( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
#define MY_LOGI_IF(cond, ...)       do { if( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
#define MY_LOGW_IF(cond, ...)       do { if( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
#define MY_LOGE_IF(cond, ...)       do { if( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)

/*******************************************************************************
* MACRO Utilities Define.
********************************************************************************/
namespace { // anonymous namespace for debug MACRO functions
using AutoObject = std::unique_ptr<const char, std::function<void(const char*)>>;
//
auto
createAutoScoper(const char* funcName) -> AutoObject
{
    CAM_LOGD("[%s] +", funcName);
    return AutoObject(funcName, [](const char* p)
    {
        CAM_LOGD("[%s] -", p);
    });
}
#define SCOPED_TRACER() auto scoped_tracer = ::createAutoScoper(__FUNCTION__)
//
auto
createAutoTimer(const char* funcName, const char* text) -> AutoObject
{
    using Timing = std::chrono::time_point<std::chrono::high_resolution_clock>;
    using DurationTime = std::chrono::duration<float, std::milli>;

    Timing startTime = std::chrono::high_resolution_clock::now();
    return AutoObject(text, [funcName, startTime](const char* p)
    {
        Timing endTime = std::chrono::high_resolution_clock::now();
        DurationTime durationTime = endTime - startTime;
        CAM_LOGD("[%s] %s, elapsed(ms):%.4f", funcName, p, durationTime.count());
    });
}
#define AUTO_TIMER(TEXT) auto auto_timer = ::createAutoTimer(__FUNCTION__, TEXT)
//
#define UNREFERENCED_PARAMETER(param) (param)
//
} // end anonymous namespace for debug MACRO functions

/*******************************************************************************
* Alias.
********************************************************************************/
using namespace NSCam;
using namespace NSCam::NSPipelinePlugin;
using namespace NSCam::NSIoPipe::NSSImager;

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  Type Alias..
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
using Property = FusionPlugin::Property;
using Selection = FusionPlugin::Selection;
using RequestPtr = FusionPlugin::Request::Ptr;
using RequestCallbackPtr = FusionPlugin::RequestCallback::Ptr;
//
template<typename T>
using AutoPtr             = std::unique_ptr<T, std::function<void(T*)>>;
//
using ImgPtr              = AutoPtr<IImageBuffer>;
using MetaPtr             = AutoPtr<IMetadata>;
using ImageTransformPtr   = AutoPtr<IImageTransform>;

/*******************************************************************************
* Namespace Start.
********************************************************************************/
namespace { // anonymous namespace

/*******************************************************************************
* Class Definition
********************************************************************************/
/**
 * @brief third party pure bokeh algo. provider
 */
class DualCameraCapture final: public FusionPlugin::IProvider
{
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  Instantiation.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
public:
    DualCameraCapture();

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  FusionPlugin::IProvider Public Operations.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
public:
    void set(MINT32 iOpenId, MINT32 iOpenId2) override;

    const Property& property() override;

    MERROR negotiate(Selection& sel) override;

    void init() override;

    MERROR process(RequestPtr requestPtr, RequestCallbackPtr callbackPtr) override;

    void abort(vector<RequestPtr>& requestPtrs) override;

    void uninit() override;

    ~DualCameraCapture();
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  DualCameraCapture Private Operator.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
private:
    MERROR processDone(const RequestPtr& requestPtr, const RequestCallbackPtr& callbackPtr, MERROR status);

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  DualCameraCapture Private Data Members.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
private:
    MINT32 mEnable;
    //
    MINT32 mOpenId;
    MINT32 mOpenId2;
    MINT32 mDump;
    DualCamera* mDualCamera = NULL;
};
REGISTER_PLUGIN_PROVIDER(Fusion, DualCameraCapture);

/**
 * @brief utility class
 */
class DualCameraUtility final
{
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  Instantiation.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
public:
    DualCameraUtility() = delete;

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  DualCameraUtility Public Operations.
//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
public:
    static inline ImageTransformPtr createImageTransformPtr();

    static inline ImgPtr createImgPtr(BufferHandle::Ptr& handle);

    static inline MetaPtr createMetaPtr(MetadataHandle::Ptr& handle);

    static inline MVOID dump(const IImageBuffer* pImgBuf, const std::string& dumpName);

    static inline MVOID dump(IMetadata* pMetaData, const std::string& dumpName);

    static inline const char * format2String(MINT format);

    static inline MVOID saveImg(NSCam::IImageBuffer* pImgBuf, const std::string& fileName);
};

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  DualCameraUtility implementation.
//+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ImageTransformPtr
DualCameraUtility::
createImageTransformPtr()
{
    return ImageTransformPtr(IImageTransform::createInstance(), [](IImageTransform *p)
    {
        p->destroyInstance();
    });
}

ImgPtr
DualCameraUtility::
createImgPtr(BufferHandle::Ptr& handle)
{
    return ImgPtr(handle->acquire(), [handle](IImageBuffer* p)
    {
        UNREFERENCED_PARAMETER(p);
        handle->release();
    });
};

MetaPtr
DualCameraUtility::
createMetaPtr(MetadataHandle::Ptr& handle)
{
    return MetaPtr(handle->acquire(), [handle](IMetadata* p)
    {
        UNREFERENCED_PARAMETER(p);
        handle->release();
    });
};

MVOID
DualCameraUtility::
dump(const IImageBuffer* pImgBuf, const std::string& dumpName)
{
    MY_LOGD("dump image info, dumpName:%s, info:[a:%p, si:%dx%d, st:%zu, f:0x%x, va:%p]",
        dumpName.c_str(), pImgBuf,
        pImgBuf->getImgSize().w, pImgBuf->getImgSize().h,
        pImgBuf->getBufStridesInBytes(0),
        pImgBuf->getImgFormat(),
        reinterpret_cast<void*>(pImgBuf->getBufVA(0)));
}

MVOID
DualCameraUtility::
dump(IMetadata* pMetaData, const std::string& dumpName)
{
    MY_LOGD("dump meta info, dumpName:%s, addr::%p, count:%u",
        dumpName.c_str(), pMetaData, pMetaData->count());
}

MVOID
DualCameraUtility::
saveImg(NSCam::IImageBuffer* pImgBuf, const std::string& fileName)
{

    char path[256];
    snprintf(path, sizeof(path), "/data/vendor/camera_dump/%s_%zu_%dx%d.%s", fileName.c_str(), pImgBuf->getBufStridesInBytes(0),
             pImgBuf->getImgSize().w, pImgBuf->getImgSize().h, format2String(pImgBuf->getImgFormat()));
    pImgBuf->saveToFile(path);
}

const char* 
DualCameraUtility::
format2String(MINT format) {
    switch(format) {
       case NSCam::eImgFmt_RGBA8888:          return "rgba";
       case NSCam::eImgFmt_RGB888:            return "rgb";
       case NSCam::eImgFmt_RGB565:            return "rgb565";
       case NSCam::eImgFmt_STA_BYTE:          return "byte";
       case NSCam::eImgFmt_YVYU:              return "yvyu";
       case NSCam::eImgFmt_UYVY:              return "uyvy";
       case NSCam::eImgFmt_VYUY:              return "vyuy";
       case NSCam::eImgFmt_YUY2:              return "yuy2";
       case NSCam::eImgFmt_YV12:              return "yv12";
       case NSCam::eImgFmt_YV16:              return "yv16";
       case NSCam::eImgFmt_NV16:              return "nv16";
       case NSCam::eImgFmt_NV61:              return "nv61";
       case NSCam::eImgFmt_NV12:              return "nv12";
       case NSCam::eImgFmt_NV21:              return "nv21";
       case NSCam::eImgFmt_I420:              return "i420";
       case NSCam::eImgFmt_I422:              return "i422";
       case NSCam::eImgFmt_Y800:              return "y800";
       case NSCam::eImgFmt_BAYER8:            return "bayer8";
       case NSCam::eImgFmt_BAYER10:           return "bayer10";
       case NSCam::eImgFmt_BAYER12:           return "bayer12";
       case NSCam::eImgFmt_BAYER14:           return "bayer14";
       case NSCam::eImgFmt_FG_BAYER8:         return "fg_bayer8";
       case NSCam::eImgFmt_FG_BAYER10:        return "fg_bayer10";
       case NSCam::eImgFmt_FG_BAYER12:        return "fg_bayer12";
       case NSCam::eImgFmt_FG_BAYER14:        return "fg_bayer14";
       default:                               return "unknown";
    }
}

//++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
//  DualCameraCapture implementation.
//+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
DualCameraCapture::
DualCameraCapture()
: mEnable(-1)
, mOpenId(-1)
, mOpenId2(-1)
, mDump(-1)
{
    // on:1/off:0/auto:-1
    mEnable = ::property_get_int32("vendor.debug.camera.dualcamera.enable", mEnable);
    mDump = ::property_get_int32("vendor.debug.camera.dualcamera.dump", mDump);
    mDualCamera = new DualCamera();
    MY_LOGD("ctor:%p, mEnable:%d", this, mEnable);
}

void
DualCameraCapture::
set(MINT32 iOpenId, MINT32 iOpenId2)
{
    mOpenId = iOpenId;
    mOpenId2 = iOpenId2;
    MY_LOGD("set openId:%d openId2:%d", mOpenId, mOpenId2);
}

const Property&
DualCameraCapture::
property()
{
    static const Property prop = []() -> const Property
    {
        Property ret;
        ret.mName = "DualCamera";
        ret.mFeatures = TP_FEATURE_PUREBOKEH;
        ret.mFaceData = eFD_Cache;
        ret.mBoost = eBoost_CPU;
        ret.mInitPhase = ePhase_OnPipeInit;
        return ret;
    }();
    return prop;
}

MERROR
DualCameraCapture::
negotiate(Selection& sel)
{
    SCOPED_TRACER();

    if( mEnable == 0 )
    {
        MY_LOGD("force off tp dual camera");
        return BAD_VALUE;
    }
    // INPUT
    {
        sel.mIBufferFull
            .setRequired(MTRUE)
            .addAcceptedFormat(eImgFmt_NV21)
            .addAcceptedSize(eImgSize_Full);

        sel.mIBufferFull2
            .setRequired(MTRUE)
            .addAcceptedFormat(eImgFmt_NV21)
            .addAcceptedSize(eImgSize_Full);

        sel.mIMetadataApp.setRequired(MTRUE);
        sel.mIMetadataHal.setRequired(MTRUE);
        sel.mIMetadataHal2.setRequired(MTRUE);
        sel.mIMetadataDynamic.setRequired(MTRUE);
        sel.mIMetadataDynamic2.setRequired(MTRUE);
    }
    // OUTPUT
    {
        sel.mOBufferFull
            .setRequired(MTRUE)
            .addAcceptedFormat(eImgFmt_NV21)
            .addAcceptedSize(eImgSize_Full);

        sel.mOMetadataApp.setRequired(MTRUE);
        sel.mOMetadataHal.setRequired(MTRUE);
    }
    return OK;
}

void
DualCameraCapture::
init()
{
    SCOPED_TRACER();
    ::srand(time(nullptr));
}

MERROR
DualCameraCapture::
process(RequestPtr requestPtr, RequestCallbackPtr callbackPtr)
{
    SCOPED_TRACER();

    auto isValidInput = [](const RequestPtr& requestPtr) -> MBOOL
    {
        const MBOOL ret = requestPtr->mIBufferFull != nullptr
                    && requestPtr->mIBufferFull2 != nullptr
                    && requestPtr->mIMetadataApp != nullptr
                    && requestPtr->mIMetadataHal != nullptr
                    && requestPtr->mIMetadataHal2 != nullptr;
        if( !ret )
        {
            MY_LOGE("invalid request with input, req:%p, inFullImg:%p, inFullImg2:%p, inAppMeta:%p, inHalMeta:%p, inHalMeta2:%p",
                requestPtr.get(),
                requestPtr->mIBufferFull.get(),
                requestPtr->mIBufferFull2.get(),
                requestPtr->mIMetadataApp.get(),
                requestPtr->mIMetadataHal.get(),
                requestPtr->mIMetadataHal2.get());
        }
        return ret;
    };

    auto isValidOutput = [](const RequestPtr& requestPtr) -> MBOOL
    {
        const MBOOL ret = requestPtr->mOBufferFull != nullptr
                    && requestPtr->mOMetadataApp != nullptr
                    && requestPtr->mOMetadataHal != nullptr;
        if( !ret )
        {
            MY_LOGE("invalid request with input, req:%p, outFullImg:%p, outAppMeta:%p, outHalMeta:%p",
                requestPtr.get(),
                requestPtr->mOBufferFull.get(),
                requestPtr->mOMetadataApp.get(),
                requestPtr->mOMetadataHal.get());
        }
        return ret;
    };

    MY_LOGD("process, reqAdrr:%p", requestPtr.get());

    if( !isValidInput(requestPtr) )
    {
        return processDone(requestPtr, callbackPtr, BAD_VALUE);
    }

    if( !isValidOutput(requestPtr) )
    {
        return processDone(requestPtr, callbackPtr, BAD_VALUE);
    }
    //
    //
    {
        // note: call createXXXXPtr only once for a given handle
        ImgPtr inMainImgPtr = DualCameraUtility::createImgPtr(requestPtr->mIBufferFull);
        ImgPtr inSubImgPtr = DualCameraUtility::createImgPtr(requestPtr->mIBufferFull2);
        ImgPtr outFSImgPtr = DualCameraUtility::createImgPtr(requestPtr->mOBufferFull);
        //
        MetaPtr inAppMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataApp);
        MetaPtr inMainHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataHal);
        MetaPtr inSubHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mIMetadataHal2);
        MetaPtr outAppMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mOMetadataApp);
        MetaPtr outHalMetaPtr = DualCameraUtility::createMetaPtr(requestPtr->mOMetadataHal);
        // dump info
        {
            DualCameraUtility::dump(inMainImgPtr.get(), "inputMainImg");
            DualCameraUtility::dump(inSubImgPtr.get(), "inputSubImg");
            DualCameraUtility::dump(outFSImgPtr.get(), "outFSImg");
            //
            DualCameraUtility::dump(inAppMetaPtr.get(), "inAppMeta");
            DualCameraUtility::dump(inMainHalMetaPtr.get(), "inMainHalMeta");
            DualCameraUtility::dump(inSubHalMetaPtr.get(), "inSubHalMeta");
            DualCameraUtility::dump(outAppMetaPtr.get(), "outAppMeta");
            DualCameraUtility::dump(outHalMetaPtr.get(), "outHalMeta");
        }

        //dual camera algo
        {
            AUTO_TIMER("process dual camera algo.");
            NSCam::IImageBuffer* inMainImgBuf = inMainImgPtr.get();
            NSCam::IImageBuffer* inSubImgBuf = inSubImgPtr.get();
            NSCam::IImageBuffer* outImgBuf = outFSImgPtr.get();
            if (mDump) {
                DualCameraUtility::saveImg(inMainImgBuf, "inputMainImg");
                DualCameraUtility::saveImg(inSubImgBuf, "inputSubImg");
            }
            memcpy(reinterpret_cast<uchar*>(outImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(inMainImgBuf->getBufVA(0)), inMainImgBuf->getBufSizeInBytes(0));
            memcpy(reinterpret_cast<uchar*>(outImgBuf->getBufVA(1)), reinterpret_cast<uchar*>(inMainImgBuf->getBufVA(1)), inMainImgBuf->getBufSizeInBytes(1));
            if (mDualCamera != NULL) {
                mDualCamera->processNV21(reinterpret_cast<uchar*>(outImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(outImgBuf->getBufVA(1)),
                                         outImgBuf->getImgSize().w, outImgBuf->getImgSize().h,
                                         reinterpret_cast<uchar*>(inSubImgBuf->getBufVA(0)), reinterpret_cast<uchar*>(inSubImgBuf->getBufVA(1)),
                                         inSubImgBuf->getImgSize().w, inSubImgBuf->getImgSize().h);
            }
        }
    }
    return processDone(requestPtr, callbackPtr, OK);
}

MERROR
DualCameraCapture::
processDone(const RequestPtr& requestPtr, const RequestCallbackPtr& callbackPtr, MERROR status)
{
    SCOPED_TRACER();

    MY_LOGD("process done, call complete, reqAddr:%p, callbackPtr:%p, status:%d",
        requestPtr.get(), callbackPtr.get(), status);

    if( callbackPtr != nullptr )
    {
        callbackPtr->onCompleted(requestPtr, status);
    }
    return OK;
}

void
DualCameraCapture::
abort(vector<RequestPtr>& requestPtrs)
{
    SCOPED_TRACER();

    for(auto& item : requestPtrs)
    {
        MY_LOGD("abort request, reqAddr:%p", item.get());
    }
}

void
DualCameraCapture::
uninit()
{
    SCOPED_TRACER();
}

DualCameraCapture::
~DualCameraCapture()
{
    MY_LOGD("dtor:%p", this);
    if (mDualCamera != NULL) {
        delete mDualCamera;
        mDualCamera = NULL;
    }
}

}  // anonymous namespace

Key functions:

  • In the property function, set the feature type to TP_FEATURE_PUREBOKEH, along with the name and other attributes.

  • In the negotiate function, configure the formats and sizes of the input and output images the algorithm requires. Note that a dual-camera algorithm has two input buffers but only one output buffer.

  • In the process function, hook up the algorithm by calling its interface function processNV21.

For integration, you can refer to the sample files TPPureBokehImpl.cpp and TPFusionImpl.cpp provided by MTK.

3.3.4 mtkcam3/3rdparty/customer/Android.mk

The target shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so. We therefore also need to modify Android.mk so that the module libmtkcam_3rdparty.customer depends on libmtkcam.plugin.tp_dc.

At the same time, to avoid conflicts and to speed up image output, we also remove MTK's sample libmtkcam.plugin.tp_purebokeh.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
index 5e5dd6524f..bf2f6ffeae 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -65,7 +65,7 @@ LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_bokeh
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_depth
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_fusion
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_dc_hdr
-LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
+#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
 #
 LOCAL_SHARED_LIBRARIES += libcam.iopipe
 LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
@@ -83,6 +83,11 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
 endif

+ifeq ($(QXT_DUALCAMERA_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libdualcamera
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_dc
+endif
+

Since MTK has already defined the relevant metadata, we do not need to define any custom metadata either.

Once the steps above are done, the integration work is essentially complete. Rebuild the system source; to save time, you can build only vendor.img.

IV. Calling the Algorithm from the APP

Since MTK's stock Camera APP already has a dual-camera stereo mode, we do not need to write an APP to verify the algorithm. Flash the full system image or just vendor.img to the device, boot it, and enter the stereo mode of the stock MTK Camera APP. Let's take a shot and see the result:

The colour of the sub camera image looks somewhat off, but in any case the mock algorithm library is running correctly: it has composited the main and sub images into a picture-in-picture effect.

V. Conclusion

Dual-camera algorithms are the most complex of all, involving calibration, main/sub camera synchronization, depth computation, blur tuning, edge handling, and more. The slightest problem in either the algorithm or the integration can make a dramatic difference in the dual-camera result. When integrating a dual-camera algorithm, be careful, careful, and careful again.

This wraps up the three-article series on MTK HAL algorithm integration. The lunar year 2020 is about to end, so this should also be my last article of it. With the holidays approaching, I wish you all a happy holiday in advance!

Original article: https://www.jianshu.com/p/d4a2aacc1760

