和你一起終身學習,這裏是程序員 Android
經典好文推薦,通過閱讀本文,您將收穫以下知識點:
算法概覽
一、算法集成前的準備
二、 爲算法選擇feature
三、 將算法對應的feature添加到scenario配置表
四、掛載算法
五、自定義metadata
六、APP調用算法
七、遇到的問題及解決方法
八、結語
算法概覽
爲了給用戶提供更好的成像效果,現在的手機都會接入一些第三方的圖像處理算法。MTK平臺的HAL3也在P2這一層提供接入的plugin。按圖像處理算法需要的幀數和攝像頭數量,大體可以分爲三類:
單幀算法:
常見的單幀算法有:美顏算法(瘦臉、磨皮、大眼)、廣角鏡頭畸變校正算法、附加表情算法、單攝背景虛化算法(僞雙攝算法)等等,僅需單幀圖像輸入的算法都屬於單幀算法。一般情況下,輸入一幀圖像,算法處理完輸出一幀處理後的圖像。多幀算法:
常見的多幀算法有:MFNR(多幀降噪)、HDR(高動態範圍)等等,需要連續多幀圖像輸入的算法都屬於多幀算法。一般情況下,輸入連續多幀圖像,算法處理完輸出一幀處理後的圖像。雙攝算法:
最常見的雙攝算法是雙攝景深算法或者叫雙攝背景虛化算法,除此之外,也有彩色+黑白用於增強夜拍效果的雙攝算法。單幀算法和多幀算法僅需要獲取一個攝像頭的圖像。而雙攝算法需要獲取主、輔兩個攝像頭的圖像,並且一般還會要求主、輔攝像頭同步。分別獲取主、輔攝像頭的兩幀同步圖像,處理後輸出一幀主攝圖像,用戶也僅能看到主攝圖像。
根據這個大體上的分類,MTK HAL算法集成系列文章共三篇:
- MTK HAL算法集成之單幀算法
- MTK HAL算法集成之多幀算法
- MTK HAL算法集成之雙攝算法
本文是其中的第一篇。這個系列文章均基於Android 9.0,MT6763平臺,HAL版本是HAL3。
一、算法集成前的準備
在開展集成工作之前,首先要對算法有一個基本的評估,並且對於集成也應有一定的要求。
1. 1 算法要求及評估
- 處理效果好,不能比競品差,超過競品更佳。(這條和camera調試的主觀效果一樣,主觀性較強,往往一廂情願,具體看項目要求吧)
- 各個場景及壓力測試下效果穩定。
- 處理後照片無色差、銳度和飽和度無損失,或者損失在可接受範圍。
- 達到可接受的分辨率,最好可達到攝像頭的最大分辨率。
- 處理時間越快越好,不超過競品時間、不超過項目和產品的目標時間。
- 無內存泄露,佔用內存少。
- 提供必要的集成說明文檔,包括算法類型、輸入及輸出圖像要求、輸入參數要求等等。
注意:如果有條件,處理時間、內存佔用、分辨率等等可量化的指標可要求算法提供方給出具體的參考數據,以便集成完後測試驗證。
1.2 算法集成要求
- 編譯時可根據項目控制是否集成算法。
- 運行時可以用參數控制是否啓用算法。
- 集成算法庫正常運行、壓力測試下效果穩定、無內存泄露。
1.3 算法集成的步驟
(1). 根據算法選擇feature類型,如果與MTK提供的feature不能對號入座,則需要添加自定義feature。
(2). 將算法對應的feature類型添加到scenario配置表。
(3). 根據算法選擇plugin類型,編寫CPP文件實現plugin,掛載算法。
(4). 如果算法不能複用Android和MTK提供的metadata,則還需要爲算法配置自定義的metadata以便APP控制是否啓用算法。
首先,我準備了一個libwatermark.so,它僅僅實現了一個添加水印的功能,用它來模擬第三方的單幀算法庫。如果想了解添加水印的實現代碼,可以參考我另外一篇文章:Android 實現圖片加水印或logo。接下來,我們就按照集成步驟,逐步詳細講解。
二、 爲算法選擇feature
2.1 MTK提供的feature
MTK在mtk_feature_type.h和customer_feature_type.h已經提供了一些feature。
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/mtk/mtk_feature_type.h:
NO_FEATURE_NORMAL = 0ULL,
// MTK (bit 0-31)
MTK_FEATURE_MFNR = 1ULL << 0,
MTK_FEATURE_HDR = 1ULL << 1,
MTK_FEATURE_REMOSAIC = 1ULL << 2,
MTK_FEATURE_ABF = 1ULL << 3,
MTK_FEATURE_NR = 1ULL << 4,
MTK_FEATURE_FB = 1ULL << 5,
MTK_FEATURE_CZ = 1ULL << 6,
MTK_FEATURE_DRE = 1ULL << 7,
MTK_FEATURE_DEPTH = 1ULL << 8,
MTK_FEATURE_BOKEH = 1ULL << 9,
MTK_FEATURE_VSDOF = (MTK_FEATURE_DEPTH|MTK_FEATURE_BOKEH),
MTK_FEATURE_FSC = 1ULL << 10,
MTK_FEATURE_3DNR = 1ULL << 11,
MTK_FEATURE_EIS = 1ULL << 12,
MTK_FEATURE_AINR = 1ULL << 13,
MTK_FEATURE_DUAL_YUV = 1ULL << 14,
MTK_FEATURE_DUAL_HWDEPTH = 1ULL << 15,
MTK_FEATURE_AIS = 1ULL << 16,
MTK_FEATURE_HFG = 1ULL << 17,
MTK_FEATURE_DCE = 1ULL << 18,
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:
// ThirdParty (bit 32-63)
TP_FEATURE_HDR = 1ULL << 32,
TP_FEATURE_MFNR = 1ULL << 33,
TP_FEATURE_EIS = 1ULL << 34,
TP_FEATURE_FB = 1ULL << 35,
TP_FEATURE_FILTER = 1ULL << 36,
TP_FEATURE_DEPTH = 1ULL << 37,
TP_FEATURE_BOKEH = 1ULL << 38,
TP_FEATURE_VSDOF = (TP_FEATURE_DEPTH|TP_FEATURE_BOKEH),
TP_FEATURE_FUSION = 1ULL << 39,
TP_FEATURE_HDR_DC = 1ULL << 40, // used by DualCam
TP_FEATURE_DUAL_YUV = 1ULL << 41,
TP_FEATURE_DUAL_HWDEPTH = 1ULL << 42,
TP_FEATURE_PUREBOKEH = 1ULL << 43,
TP_FEATURE_RAW_HDR = 1ULL << 44,
TP_FEATURE_RELIGHTING = 1ULL << 45,
MTK提供的這些feature可以滿足絕大多數算法的集成,在可以對號入座的情況下,我們直接使用已有feature即可。如果不能夠滿足我們的要求,可以參考下節內容添加新的feature。
2.2 添加自定義feature
本來單幀算法對應的feature可以選擇MTK提供的MTK_FEATURE_FB和TP_FEATURE_FB,但是爲了講解如何添加新feature,我們選擇添加一個自定義feature:TP_FEATURE_WATERMARK。
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
old mode 100644
new mode 100755
index a41fd864f5..17bc35eea8
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/customer/customer_feature_type.h
@@ -59,6 +59,7 @@ enum eFeatureIndexCustomer {
TP_FEATURE_PUREBOKEH = 1ULL << 43,
TP_FEATURE_RAW_HDR = 1ULL << 44,
TP_FEATURE_RELIGHTING = 1ULL << 45,
+ TP_FEATURE_WATERMARK = 1ULL << 46,
// TODO: reserve for customer feature index (bit 32-63)
};
vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
old mode 100644
new mode 100755
index e32f80a609..47273b01c7
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/CaptureFeature_Common.cpp
@@ -599,6 +599,7 @@ const char* FeatID2Name(FeatureID_T fid)
case FID_FUSION_3RD_PARTY: return "fusion_3rd_party";
case FID_PUREBOKEH_3RD_PARTY: return "purebokeh_3rd_party";
case FID_RELIGHTING_3RD_PARTY: return "relighting_3rd_party";
+ case FID_WATERMARK_3RD_PARTY: return "watermark_3rd_party";
default: return "unknown";
};
vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
index 8bb794ba02..d4343aaccf 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
@@ -779,7 +779,8 @@ MBOOL YUVNode::onInit()
featId = FID_FB_3RD_PARTY;
else if (rProperty.mFeatures & TP_FEATURE_RELIGHTING)
featId = FID_RELIGHTING_3RD_PARTY;
-
+ else if (rProperty.mFeatures & TP_FEATURE_WATERMARK)
+ featId = FID_WATERMARK_3RD_PARTY;
if (featId != NULL_FEATURE) {
MY_LOGD_IF(mLogLevel, "%s finds plugin:%s, priority:%d",
vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
old mode 100644
new mode 100755
index 2f1ad8a665..ab47aae456
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/feature/featurePipe/ICaptureFeaturePipe.h
@@ -172,6 +172,7 @@ enum CaptureFeatureFeatureID {
FID_FUSION_3RD_PARTY,
FID_PUREBOKEH_3RD_PARTY,
FID_RELIGHTING_3RD_PARTY,
+ FID_WATERMARK_3RD_PARTY,
NUM_OF_FEATURE,
NULL_FEATURE = 0xFF,
};
vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
old mode 100644
new mode 100755
index cc1dc549fd..00559cbc30
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/hwnode/p2/P2_CaptureProcessor.cpp
@@ -428,6 +428,9 @@ MBOOL CaptureProcessor::onEnque(const sp<P2FrameRequest> &pP2Frame)
pCapRequest->addFeature(FID_HFG);
if (feature & MTK_FEATURE_DCE)
pCapRequest->addFeature(FID_DCE);
+ if (feature & TP_FEATURE_WATERMARK)
+ pCapRequest->addFeature(FID_WATERMARK_3RD_PARTY);
+
}
}
三、 將算法對應的feature添加到scenario配置表
在我們打開camera進行預覽和拍照的時候,MTK HAL3會執行vendor/mediatek/proprietary/hardware/mtkcam3/pipeline/policy/FeatureSettingPolicy.cpp的代碼,會分別調用
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/scenario_mgr.cpp的
get_streaming_scenario函數和get_capture_scenario函數。它們會讀取一個scenario的feature配置表,遍歷所有的feature,決定哪些feature會被執行。這個配置表中有許多的scenario,一個scenario可能對應多個feature。因此添加自定義feature後,還需將自定義的feature添加到配置表中。MTK feature 對應的配置表是 gMtkScenarioFeaturesMaps,customer feature 對應的配置表是 gCustomerScenarioFeaturesMaps。
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
old mode 100644
new mode 100755
index f8d081e433..577f85797e
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/customer_scenario_mgr.cpp
@@ -93,30 +93,30 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
// #define <feature combination> (key feature | post-processing features | ...)
//
// single cam capture feature combination
-#define TP_FEATURE_COMBINATION_SINGLE (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_HDR (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_AINR (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_SINGLE (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_HDR (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_AINR (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
#define TP_FEATURE_COMBINATION_CSHOT (NO_FEATURE_NORMAL | MTK_FEATURE_CZ| MTK_FEATURE_HFG)
-#define TP_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
#define TP_FEATURE_COMBINATION_PRO (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE)
-#define TP_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| TP_FEATURE_FB| TP_FEATURE_WATERMARK)
// dual cam capture feature combination
// the VSDOF means the combination of Bokeh feature and Depth feature
-#define TP_FEATURE_COMBINATION_TP_VSDOF (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_VSDOF_HDR (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF)
-#define TP_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_FUSION)
-#define TP_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_PUREBOKEH)
+#define TP_FEATURE_COMBINATION_TP_VSDOF (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_VSDOF_HDR (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE| TP_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
// streaming feature combination (TODO: it should be refined by streaming scenario feature)
-#define TP_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_YUV (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH)
-#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB)
+#define TP_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_YUV (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH|TP_FEATURE_WATERMARK)
+#define TP_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
// ======================================================================================================
//
/******************************************************************************
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
old mode 100644
new mode 100755
index 011f551354..f14ff8a6e2
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -89,29 +89,29 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
// #define <feature combination> (key feature | post-processing features | ...)
//
// single cam capture feature combination
-#define MTK_FEATURE_COMBINATION_SINGLE (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_HDR (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_AINR (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_SINGLE (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_HDR (TP_FEATURE_HDR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_AINR (MTK_FEATURE_AINR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_REMOSAIC (MTK_FEATURE_REMOSAIC| MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
#define MTK_FEATURE_COMBINATION_CSHOT (NO_FEATURE_NORMAL | MTK_FEATURE_CZ| MTK_FEATURE_HFG)
-#define MTK_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_YUV_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_SUPER_NIGHT_RAW_REPROCESS (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_FB| TP_FEATURE_WATERMARK)
// dual cam capture feature combination
// the VSDOF means the combination of Bokeh feature and Depth feature
-#define MTK_FEATURE_COMBINATION_TP_VSDOF (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_VSDOF_HDR (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF)
-#define MTK_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION)
-#define MTK_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF_HDR (TP_FEATURE_HDR_DC | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR (MTK_FEATURE_MFNR | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_FUSION (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_PUREBOKEH (NO_FEATURE_NORMAL | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
// streaming feature combination (TODO: it should be refined by streaming scenario feature)
-#define MTK_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_YUV (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH)
-#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB)
+#define MTK_FEATURE_COMBINATION_VIDEO_NORMAL (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_YUV (MTK_FEATURE_FB|MTK_FEATURE_DUAL_YUV|TP_FEATURE_FB|TP_FEATURE_DUAL_YUV|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWDEPTH (MTK_FEATURE_FB|MTK_FEATURE_DUAL_HWDEPTH|TP_FEATURE_FB|TP_FEATURE_DUAL_HWDEPTH|TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_VIDEO_DUAL_HWVSDOF (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
// ======================================================================================================
//
/******************************************************************************
注意:
MTK在Android Q(10.0)上優化了scenario配置表的客製化,Android Q及更高版本,scenario需要在:
vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp中配置,[platform]是諸如mt6580,mt6763之類的。
將自定義feature添加到scenario配置表時,不可貪多,只要添加到合適的scenario就行,多了可能多個算法會有衝突。如果僅在簡單場景,添加到MTK_FEATURE_COMBINATION_SINGLE和TP_FEATURE_COMBINATION_SINGLE就可以滿足絕大多數需求。(2021-02-02更新)
四、掛載算法
4.1 爲算法選擇plugin
MTK HAL3在vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h 中將三方算法的掛載點大致分爲以下幾類:
- BokehPlugin: Bokeh算法掛載點,雙攝景深算法的虛化部分。
- DepthPlugin: Depth算法掛載點,雙攝景深算法的計算深度部分。
- FusionPlugin: Depth和Bokeh放在1個算法中,即合併的雙攝景深算法掛載點。
- JoinPlugin: Streaming相關算法掛載點,預覽算法都掛載在這裏。
- MultiFramePlugin: 多幀算法掛載點,包括YUV與RAW,例如MFNR/HDR
- RawPlugin: RAW算法掛載點,例如remosaic
- YuvPlugin: Yuv單幀算法掛載點,例如美顏、廣角鏡頭畸變校正等
對號入座,將要集成的算法選擇相應的plugin。這裏是單幀算法,所以預覽我們選擇JoinPlugin,拍照選擇YuvPlugin。
4.2 編寫算法集成文件
參照FBImpl.cpp和sample_streaming_fb.cpp中分別實現拍照和預覽。目錄結構如下:
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/
├── Android.mk
├── include
│ └── watermark.h
├── lib
│ ├── arm64-v8a
│ │ └── libwatermark.so
│ └── armeabi-v7a
│ └── libwatermark.so
├── res
│ └── watermark.rgba
├── WatermarkCapture.cpp
└── WatermarkPreview.cpp
文件說明:
- Android.mk中配置算法庫、頭文件、集成的源代碼CPP文件編譯成庫libmtkcam.plugin.tp_watermark,供libmtkcam_3rdparty.customer依賴調用。
- 集成的源代碼CPP文件,WatermarkCapture.cpp用於拍照,WatermarkPreview.cpp用於預覽。
- libwatermark.so實現了添加水印的功能,libwatermark.so用來模擬需要接入的第三方算法庫。watermark.h是頭文件。
- watermark.rgba是對應的水印文件。
4.2.1 添加全局宏控
爲了能控制某個項目是否集成此算法,我們在device/mediateksample/k63v2_64_bsp/ProjectConfig.mk中添加一個宏,用於控制新接入算法的編譯:
QXT_WATERMARK_SUPPORT = yes
當某個項目不需要新接入的算法時,將device/mediateksample/[platform]/ProjectConfig.mk的QXT_WA_SUPPORT的值設爲 no 就可以了。
4.2.2 mtkcam3/3rdparty/customer/tp_watermark/Android.mk
ifeq ($(QXT_WATERMARK_SUPPORT),yes)
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := libwatermark
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libwatermark.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libwatermark.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)
################################################################################
include $(CLEAR_VARS)
#-----------------------------------------------------------
include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk
#-----------------------------------------------------------
LOCAL_SRC_FILES += WatermarkCapture.cpp
LOCAL_SRC_FILES += WatermarkPreview.cpp
#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/include
#
LOCAL_C_INCLUDES += system/media/camera/include
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/tp_watermark/include
#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#
#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
#
LOCAL_WHOLE_STATIC_LIBRARIES +=
#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
LOCAL_SHARED_LIBRARIES += libyuv.vendor
#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers
#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_watermark
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)
################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))
endif
4.2.3 mtkcam3/3rdparty/customer/tp_watermark/WatermarkCapture.cpp
主要函數介紹:
- 在property函數中feature類型設置我們在第三步中添加的TP_FEATURE_WATERMARK,並設置名稱、優先級等等屬性。
- 在negotiate函數中配置算法需要的輸入、輸出圖像的格式、尺寸。
- 在negotiate函數或者process函數中獲取上層傳下來的metadata參數,根據參數決定算法是否運行,或者將參數傳給算法。
- 在process函數中接入算法。
注意:
MTK原文:
negotiate函數設置格式時,一個掛載點如果掛載多個同類型的plugin,則只有第一個 plugin 中的 negotiate 中的 input buffer 設定有效。
在YUVNode 下掛載單幀 YUV plugin時,一定要確保 MTK 平臺的SWNR plugin 的 negotiate 直接返回不OK,不做任何 accepted format 等的設定。否則,可能會出現因 SWNR plugin和三方plugin negotiate時設定的 accepted format 不一致而導致的三方 plugin 拿不到它想要的 format 的buffer。
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
old mode 100644
new mode 100755
index 0ae951cc83..c4819068f7
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/swnr/SWNRImpl.cpp
@@ -340,7 +340,7 @@ negotiate(Selection& sel)
sel.mOMetadataApp.setRequired(false);
sel.mOMetadataHal.setRequired(true);
- return OK;
+ return -EINVAL;//OK;
}
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/WatermarkCapture.cpp:
#define LOG_TAG "WatermarkCapture"
//
#include <mtkcam/utils/std/Log.h>
//
#include <stdlib.h>
#include <utils/Errors.h>
#include <utils/List.h>
#include <utils/RefBase.h>
#include <sstream>
//
#include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
//
//
#include <mtkcam/utils/imgbuf/IIonImageBufferHeap.h>
//
#include <mtkcam/drv/IHalSensor.h>
#include <mtkcam/utils/std/Format.h>
//
#include <mtkcam3/pipeline/hwnode/NodeId.h>
#include <mtkcam/utils/metastore/ITemplateRequest.h>
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
#include <stdlib.h>
#include <watermark.h>
#include <mtkcam/utils/std/Time.h>
#include <time.h>
#include <libyuv.h>
//
using namespace NSCam;
using namespace android;
using namespace std;
using namespace NSCam::NSPipelinePlugin;
/******************************************************************************
*
******************************************************************************/
#define MY_LOGV(fmt, arg...) CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...) CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGI(fmt, arg...) CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...) CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...) CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
//
#define FUNCTION_IN MY_LOGD("%s +", __FUNCTION__)
#define FUNCTION_OUT MY_LOGD("%s -", __FUNCTION__)
//systrace
#if 1
#ifndef ATRACE_TAG
#define ATRACE_TAG ATRACE_TAG_CAMERA
#endif
#include <utils/Trace.h>
#define WATERMARK_TRACE_CALL() ATRACE_CALL()
#define WATERMARK_TRACE_NAME(name) ATRACE_NAME(name)
#define WATERMARK_TRACE_BEGIN(name) ATRACE_BEGIN(name)
#define WATERMARK_TRACE_END() ATRACE_END()
#else
#define WATERMARK_TRACE_CALL()
#define WATERMARK_TRACE_NAME(name)
#define WATERMARK_TRACE_BEGIN(name)
#define WATERMARK_TRACE_END()
#endif
template <class T>
inline bool
tryGetMetadata(IMetadata const *pMetadata, MUINT32 tag, T& rVal)
{
if(pMetadata == nullptr) return MFALSE;
IMetadata::IEntry entry = pMetadata->entryFor(tag);
if(!entry.isEmpty())
{
rVal = entry.itemAt(0,Type2Type<T>());
return true;
}
else
{
#define var(v) #v
#define type(t) #t
MY_LOGW("no metadata %s in %s", var(tag), type(pMetadata));
#undef type
#undef var
}
return false;
}
/******************************************************************************
*
******************************************************************************/
class WatermarkCapture : public YuvPlugin::IProvider {
public:
typedef YuvPlugin::Property Property;
typedef YuvPlugin::Selection Selection;
typedef YuvPlugin::Request::Ptr RequestPtr;
typedef YuvPlugin::RequestCallback::Ptr RequestCallbackPtr;
private:
int mOpenid;
MBOOL mEnable = 1;
MBOOL mDump = 0;
unsigned char *mSrcRGBA = nullptr;
unsigned char *mWatermarkRGBA = nullptr;
int mWatermarkWidth = 0;
int mWatermarkHeight = 0;
public:
WatermarkCapture();
~WatermarkCapture();
void init();
void uninit();
void abort(vector <RequestPtr> &pRequests);
void set(MINT32 iOpenId, MINT32 iOpenId2);
const Property &property();
MERROR negotiate(Selection &sel);
MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback);
};
WatermarkCapture::WatermarkCapture() : mOpenid(-1) {
FUNCTION_IN;
mEnable = property_get_bool("vendor.debug.camera.watermark.capture.enable", 1);
mDump = property_get_bool("vendor.debug.camera.watermark.capture.dump", 0);
FUNCTION_OUT;
}
WatermarkCapture::~WatermarkCapture() {
FUNCTION_IN;
FUNCTION_OUT;
}
void WatermarkCapture::init() {
FUNCTION_IN;
mWatermarkWidth = 180;
mWatermarkHeight = 640;
int watermarkSize = mWatermarkWidth * mWatermarkHeight * 4;
mWatermarkRGBA = (unsigned char *) malloc(watermarkSize);
FILE *fp;
char path[256];
snprintf(path, sizeof(path), "/vendor/res/images/watermark.rgba");
if ((fp = fopen(path, "r")) == NULL) {
MY_LOGE("Failed to open /vendor/res/images/watermark.rgba");
}
fread(mWatermarkRGBA, 1, watermarkSize, fp);
fclose(fp);
FUNCTION_OUT;
}
void WatermarkCapture::uninit() {
FUNCTION_IN;
free(mWatermarkRGBA);
FUNCTION_OUT;
}
void WatermarkCapture::abort(vector <RequestPtr> &pRequests) {
FUNCTION_IN;
(void)pRequests;
FUNCTION_OUT;
}
void WatermarkCapture::set(MINT32 iOpenId, MINT32 iOpenId2) {
FUNCTION_IN;
MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2);
mOpenid = iOpenId;
FUNCTION_OUT;
}
const WatermarkCapture::Property &WatermarkCapture::property() {
FUNCTION_IN;
static Property prop;
static bool inited;
if (!inited) {
prop.mName = "TP_WATERMARK";
prop.mFeatures = TP_FEATURE_WATERMARK;
prop.mInPlace = MTRUE;
prop.mFaceData = eFD_Current;
prop.mPosition = 0;
inited = true;
}
FUNCTION_OUT;
return prop;
}
MERROR WatermarkCapture::negotiate(Selection &sel) {
FUNCTION_IN;
if (!mEnable) {
MY_LOGD("Force off TP_WATERMARK");
FUNCTION_OUT;
return -EINVAL;
}
sel.mIBufferFull
.setRequired(MTRUE)
.addAcceptedFormat(eImgFmt_I420)
.addAcceptedSize(eImgSize_Full);
sel.mIMetadataDynamic.setRequired(MTRUE);
sel.mIMetadataApp.setRequired(MTRUE);
sel.mIMetadataHal.setRequired(MTRUE);
sel.mOMetadataApp.setRequired(MTRUE);
sel.mOMetadataHal.setRequired(MTRUE);
FUNCTION_OUT;
return OK;
}
MERROR WatermarkCapture::process(RequestPtr pRequest,
RequestCallbackPtr pCallback = nullptr) {
FUNCTION_IN;
WATERMARK_TRACE_CALL();
MBOOL needRun = MFALSE;
if (pRequest->mIBufferFull != nullptr && pRequest->mOBufferFull != nullptr) {
IImageBuffer *pIBufferFull = pRequest->mIBufferFull->acquire();
IImageBuffer *pOBufferFull = pRequest->mOBufferFull->acquire();
if (pRequest->mIMetadataDynamic != nullptr) {
IMetadata *meta = pRequest->mIMetadataDynamic->acquire();
if (meta != NULL)
MY_LOGD("[IN] Dynamic metadata count: %d", meta->count());
else
MY_LOGD("[IN] Dynamic metadata empty");
}
int frameNo = 0, requestNo = 0;
if (pRequest->mIMetadataHal != nullptr) {
IMetadata *pIMetataHAL = pRequest->mIMetadataHal->acquire();
if (pIMetataHAL != NULL) {
MY_LOGD("[IN] HAL metadata count: %d", pIMetataHAL->count());
if (!tryGetMetadata<int>(pIMetataHAL, MTK_PIPELINE_FRAME_NUMBER, frameNo)) {
frameNo = 0;
}
if (!tryGetMetadata<int>(pIMetataHAL, MTK_PIPELINE_REQUEST_NUMBER, requestNo)) {
requestNo = 0;
}
MY_LOGD("frameNo: %d, requestNo: %d", frameNo, requestNo);
} else {
MY_LOGD("[IN] HAL metadata empty");
}
}
if (pRequest->mIMetadataApp != nullptr) {
IMetadata *pIMetadataApp = pRequest->mIMetadataApp->acquire();
MINT32 mode = 0;
if (!tryGetMetadata<MINT32>(pIMetadataApp, QXT_FEATURE_WATERMARK, mode)) {
mode = 0;
}
needRun = mode == 1 ? 1 : 0;
}
MY_LOGD("needRun: %d", needRun);
int width = pIBufferFull->getImgSize().w;
int height = pIBufferFull->getImgSize().h;
MINT inFormat = pIBufferFull->getImgFormat();
if (needRun && inFormat == NSCam::eImgFmt_I420) {
uint32_t currentTime = (NSCam::Utils::TimeTool::getReadableTime()) % 1000;
time_t timep;
time (&timep);
char currentDate[20];
strftime(currentDate, sizeof(currentDate), "%Y%m%d_%H%M%S", localtime(&timep));
//dump input I420
if (mDump) {
char path[256];
snprintf(path, sizeof(path), "/data/vendor/camera_dump/capture_in_frame%d_%dx%d_%s_%d.i420",
frameNo, width, height, currentDate, currentTime);
pIBufferFull->saveToFile(path);
}
nsecs_t t1 = systemTime(CLOCK_MONOTONIC);
if (mSrcRGBA == NULL) {
mSrcRGBA = (unsigned char *) malloc(width * height * 4);
}
//convert I420 to RGBA
libyuv::I420ToABGR((unsigned char *) (pIBufferFull->getBufVA(0)), width,
(unsigned char *) (pIBufferFull->getBufVA(1)), width >> 1,
(unsigned char *) (pIBufferFull->getBufVA(2)), width >> 1,
mSrcRGBA, width * 4,
width, height);
nsecs_t t2 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Prepare src cost %02ld ms", ns2ms(t2 - t1));
Watermark::add(mSrcRGBA, width, height, mWatermarkRGBA, mWatermarkWidth, mWatermarkHeight, (width - mWatermarkWidth) / 2, (height - mWatermarkHeight) / 2);
nsecs_t t3 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Add watermark cost %02ld ms", ns2ms(t3 - t2));
//convert RGBA to I420
libyuv::ABGRToI420(mSrcRGBA, width * 4,
(unsigned char *) (pOBufferFull->getBufVA(0)), width,
(unsigned char *) (pOBufferFull->getBufVA(1)), width >> 1,
(unsigned char *) (pOBufferFull->getBufVA(2)), width >> 1,
width, height);
nsecs_t t4 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Copy in to out cost %02ld ms", ns2ms(t4 - t3));
//dump output I420
if (mDump) {
char path[256];
snprintf(path, sizeof(path), "/data/vendor/camera_dump/capture_out_frame%d_%dx%d_%s_%d.i420",
frameNo, width, height, currentDate, currentTime);
pOBufferFull->saveToFile(path);
}
free(mSrcRGBA);
} else {
if (!needRun) {
MY_LOGE("No need run, skip add watermark for capture.");
} else if (inFormat != NSCam::eImgFmt_YV12) {
MY_LOGE("Unsupported format, skip add watermark for capture.");
} else {
MY_LOGE("Unknown exception, skip add watermark for capture.");
}
memcpy((unsigned char *) (pOBufferFull->getBufVA(0)),
(unsigned char *) (pIBufferFull->getBufVA(0)),
pIBufferFull->getBufSizeInBytes(0));
memcpy((unsigned char *) (pOBufferFull->getBufVA(1)),
(unsigned char *) (pIBufferFull->getBufVA(1)),
pIBufferFull->getBufSizeInBytes(1));
memcpy((unsigned char *) (pOBufferFull->getBufVA(2)),
(unsigned char *) (pIBufferFull->getBufVA(2)),
pIBufferFull->getBufSizeInBytes(2));
}
pRequest->mIBufferFull->release();
pRequest->mOBufferFull->release();
if (pRequest->mIMetadataDynamic != nullptr) {
pRequest->mIMetadataDynamic->release();
}
if (pRequest->mIMetadataHal != nullptr) {
pRequest->mIMetadataHal->release();
}
if (pRequest->mIMetadataApp != nullptr) {
pRequest->mIMetadataApp->release();
}
}
if (pCallback != nullptr) {
MY_LOGD("callback request");
pCallback->onCompleted(pRequest, 0);
}
FUNCTION_OUT;
return OK;
}
REGISTER_PLUGIN_PROVIDER(Yuv, WatermarkCapture);
4.2.4 mtkcam3/3rdparty/customer/tp_watermark/WatermarkPreview.cpp
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
#include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <cutils/properties.h>
#include <watermark.h>
#include <mtkcam/utils/std/Time.h>
#include <time.h>
#include <libyuv.h>
#include <dlfcn.h>
using NSCam::NSPipelinePlugin::Interceptor;
using NSCam::NSPipelinePlugin::PipelinePlugin;
using NSCam::NSPipelinePlugin::PluginRegister;
using NSCam::NSPipelinePlugin::Join;
using NSCam::NSPipelinePlugin::JoinPlugin;
using namespace NSCam::NSPipelinePlugin;
using NSCam::MSize;
using NSCam::MERROR;
using NSCam::IImageBuffer;
using NSCam::IMetadata;
using NSCam::Type2Type;
#ifdef LOG_TAG
#undef LOG_TAG
#endif // LOG_TAG
#define LOG_TAG "WatermarkPreview"
#include <log/log.h>
#include <android/log.h>
#define MY_LOGI(fmt, arg...) ALOGI("[%s] " fmt, __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...) ALOGD("[%s] " fmt, __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...) ALOGW("[%s] " fmt, __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...) ALOGE("[%s] " fmt, __FUNCTION__, ##arg)
#define FUNCTION_IN MY_LOGD("%s +", __FUNCTION__)
#define FUNCTION_OUT MY_LOGD("%s -", __FUNCTION__)
template <class T>
inline bool
tryGetMetadata(IMetadata const *pMetadata, MUINT32 tag, T& rVal)
{
if(pMetadata == nullptr) return MFALSE;
IMetadata::IEntry entry = pMetadata->entryFor(tag);
if(!entry.isEmpty())
{
rVal = entry.itemAt(0,Type2Type<T>());
return true;
}
else
{
#define var(v) #v
#define type(t) #t
MY_LOGW("no metadata %s in %s", var(tag), type(pMetadata));
#undef type
#undef var
}
return false;
}
class WatermarkPreview : public JoinPlugin::IProvider {
public:
typedef JoinPlugin::Property Property;
typedef JoinPlugin::Selection Selection;
typedef JoinPlugin::Request::Ptr RequestPtr;
typedef JoinPlugin::RequestCallback::Ptr RequestCallbackPtr;
private:
bool mDisponly = false;
bool mInplace = false;
int mOpenID1 = 0;
int mOpenID2 = 0;
MBOOL mEnable = 1;
MBOOL mDump = 0;
unsigned char *mSrcRGBA = nullptr;
unsigned char *mWatermarkRGBA = nullptr;
int mWatermarkWidth = 0;
int mWatermarkHeight = 0;
public:
WatermarkPreview();
~WatermarkPreview();
void init();
void uninit();
void abort(std::vector <RequestPtr> &pRequests);
void set(MINT32 openID1, MINT32 openID2);
const Property &property();
MERROR negotiate(Selection &sel);
MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback);
private:
MERROR getConfigSetting(Selection &sel);
MERROR getP1Setting(Selection &sel);
MERROR getP2Setting(Selection &sel);
};
WatermarkPreview::WatermarkPreview() {
FUNCTION_IN;
mEnable = property_get_bool("vendor.debug.camera.watermark.preview.enable", 1);
mDump = property_get_bool("vendor.debug.camera.watermark.preview.dump", 0);
FUNCTION_OUT;
}
WatermarkPreview::~WatermarkPreview() {
FUNCTION_IN;
FUNCTION_OUT;
}
void WatermarkPreview::init() {
FUNCTION_IN;
mWatermarkWidth = 180;
mWatermarkHeight = 640;
int watermarkSize = mWatermarkWidth * mWatermarkHeight * 4;
mWatermarkRGBA = (unsigned char *) malloc(watermarkSize);
FILE *fp;
char path[256];
snprintf(path, sizeof(path), "/vendor/res/images/watermark.rgba");
if ((fp = fopen(path, "r")) == NULL) {
MY_LOGE("Failed to open /vendor/res/images/watermark.rgba");
}
fread(mWatermarkRGBA, 1, watermarkSize, fp);
fclose(fp);
FUNCTION_OUT;
}
void WatermarkPreview::uninit() {
FUNCTION_IN;
free(mSrcRGBA);
free(mWatermarkRGBA);
FUNCTION_OUT;
}
void WatermarkPreview::abort(std::vector <RequestPtr> &pRequests) {
FUNCTION_IN;
(void)pRequests;
FUNCTION_OUT;
}
void WatermarkPreview::set(MINT32 openID1, MINT32 openID2) {
FUNCTION_IN;
MY_LOGD("set openID1:%d openID2:%d", openID1, openID2);
mOpenID1 = openID1;
mOpenID2 = openID2;
FUNCTION_OUT;
}
const WatermarkPreview::Property &WatermarkPreview::property() {
FUNCTION_IN;
static Property prop;
static bool inited;
if (!inited) {
prop.mName = "TP_WATERMARK";
prop.mFeatures = TP_FEATURE_WATERMARK;
//prop.mInPlace = MTRUE;
//prop.mFaceData = eFD_Current;
//prop.mPosition = 0;
inited = true;
}
FUNCTION_OUT;
return prop;
}
MERROR WatermarkPreview::negotiate(Selection &sel) {
FUNCTION_IN;
MERROR ret = OK;
if (sel.mSelStage == eSelStage_CFG) {
ret = getConfigSetting(sel);
} else if (sel.mSelStage == eSelStage_P1) {
ret = getP1Setting(sel);
} else if (sel.mSelStage == eSelStage_P2) {
ret = getP2Setting(sel);
}
FUNCTION_OUT;
return ret;
}
MERROR WatermarkPreview::process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
FUNCTION_IN;
(void) pCallback;
MERROR ret = -EINVAL;
MBOOL needRun = MFALSE;
IImageBuffer *in = NULL, *out = NULL;
if (pRequest->mIBufferMain1 != NULL && pRequest->mOBufferMain1 != NULL) {
in = pRequest->mIBufferMain1->acquire();
out = pRequest->mOBufferMain1->acquire();
int frameNo = 0, requestNo = 0;
if (pRequest->mIMetadataHal1 != nullptr) {
IMetadata *pIMetataHAL1 = pRequest->mIMetadataHal1->acquire();
if (pIMetataHAL1 != NULL) {
if (!tryGetMetadata<int>(pIMetataHAL1, MTK_PIPELINE_FRAME_NUMBER, frameNo)) {
frameNo = 0;
}
if (!tryGetMetadata<int>(pIMetataHAL1, MTK_PIPELINE_REQUEST_NUMBER, requestNo)) {
requestNo = 0;
}
pRequest->mIMetadataHal1->release();
MY_LOGD("frameNo: %d, requestNo: %d", frameNo, requestNo);
} else {
MY_LOGD("HAL metadata empty");
}
}
MY_LOGD("in[%d](%dx%d)=%p out[%d](%dx%d)=%p",
in->getPlaneCount(), in->getImgSize().w, in->getImgSize().h, in,
out->getPlaneCount(), out->getImgSize().w, out->getImgSize().h, out);
if (pRequest->mIMetadataApp != nullptr) {
IMetadata *pIMetadataApp = pRequest->mIMetadataApp->acquire();
MINT32 mode = 0;
if (!tryGetMetadata<MINT32>(pIMetadataApp, QXT_FEATURE_WATERMARK, mode)) {
mode = 0;
}
needRun = mode == 1 ? 1 : 0;
pRequest->mIMetadataApp->release();
}
MY_LOGD("needRun: %d", needRun);
int width = in->getImgSize().w;
int height = in->getImgSize().h;
MINT inFormat = in->getImgFormat();
if (needRun && inFormat == NSCam::eImgFmt_YV12) {
uint32_t currentTime = (NSCam::Utils::TimeTool::getReadableTime()) % 1000;
time_t timep;
time (&timep);
char currentDate[20];
strftime(currentDate, sizeof(currentDate), "%Y%m%d_%H%M%S", localtime(&timep));
//dump input YV12
if (mDump) {
char path[256];
snprintf(path, sizeof(path), "/data/vendor/camera_dump/preview_in_frame%d_%dx%d_%s_%d.yv12",
frameNo, width, height, currentDate, currentTime);
in->saveToFile(path);
}
nsecs_t t1 = systemTime(CLOCK_MONOTONIC);
if (mSrcRGBA == NULL) {
mSrcRGBA = (unsigned char *) malloc(width * height * 4);
}
//convert YV12 to RGBA
libyuv::I420ToABGR((unsigned char *)(in->getBufVA(0)), width,
(unsigned char *)(in->getBufVA(2)), width >> 1,
(unsigned char *)(in->getBufVA(1)), width >> 1,
mSrcRGBA, width * 4,
width, height);
nsecs_t t2 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Prepare src cost %02ld ms", ns2ms(t2 - t1));
Watermark::add(mSrcRGBA, width, height, mWatermarkRGBA, mWatermarkWidth, mWatermarkHeight, (width - mWatermarkWidth) / 2, (height - mWatermarkHeight) / 2);
nsecs_t t3 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Add watermark cost %02ld ms", ns2ms(t3 - t2));
//convert RGBA to YV12
libyuv::ABGRToI420(mSrcRGBA, width * 4,
(unsigned char *)(out->getBufVA(0)), width,
(unsigned char *)(out->getBufVA(2)), width >> 1,
(unsigned char *)(out->getBufVA(1)), width >> 1,
width, height);
nsecs_t t4 = systemTime(CLOCK_MONOTONIC);
MY_LOGD("Copy in to out cost %02ld ms", ns2ms(t4 - t3));
//dump output YV12
if (mDump) {
char path[256];
snprintf(path, sizeof(path), "/data/vendor/camera_dump/preview_out_frame%d_%dx%d_%s_%d.yv12",
frameNo, width, height, currentDate, currentTime);
out->saveToFile(path);
}
} else {
if (!needRun) {
MY_LOGE("No need run, skip add watermark for preview.");
} else if (inFormat != NSCam::eImgFmt_YV12) {
MY_LOGE("Unsupported format, skip add watermark for preview.");
} else {
MY_LOGE("Unknown exception, skip add watermark for preview.");
}
memcpy((unsigned char *) (out->getBufVA(0)),
(unsigned char *)(in->getBufVA(0)),
in->getBufSizeInBytes(0));
memcpy((unsigned char *) (out->getBufVA(1)),
(unsigned char *)(in->getBufVA(1)),
in->getBufSizeInBytes(1));
memcpy((unsigned char *) (out->getBufVA(2)),
(unsigned char *)(in->getBufVA(2)),
in->getBufSizeInBytes(2));
}
pRequest->mIBufferMain1->release();
pRequest->mOBufferMain1->release();
ret = OK;
}
FUNCTION_OUT;
return ret;
}
MERROR WatermarkPreview::getConfigSetting(Selection &sel) {
MY_LOGI("max out size(%dx%d)",
sel.mCfgInfo.mMaxOutSize.w, sel.mCfgInfo.mMaxOutSize.h);
mDisponly = property_get_bool("vendor.debug.tpi.s.fb.disponly", 0);
mInplace = mDisponly || property_get_bool("vendor.debug.tpi.s.fb.inplace", 0);
sel.mCfgOrder = 3;
sel.mCfgJoinEntry = eJoinEntry_S_YUV;
sel.mCfgInplace = mInplace;
sel.mCfgEnableFD = MTRUE;
sel.mCfgRun = mEnable;
sel.mIBufferMain1.setRequired(MTRUE);
if (!mDisponly && property_get_bool("vendor.debug.tpi.s.fb.nv21", 0)) {
sel.mIBufferMain1.addAcceptedFormat(NSCam::eImgFmt_NV21);
}
if (!mDisponly && property_get_bool("vendor.debug.tpi.s.fb.size", 0)) {
sel.mIBufferMain1.setSpecifiedSize(sel.mCfgInfo.mMaxOutSize);
}
sel.mOBufferMain1.setRequired(MTRUE);
sel.mIBufferMain1.addAcceptedFormat(NSCam::eImgFmt_YV12);
sel.mIBufferMain1.addAcceptedSize(eImgSize_Full);
IMetadata *meta = sel.mIMetadataApp.getControl().get();
MY_LOGD("sessionMeta=%p", meta);
return OK;
}
MERROR WatermarkPreview::getP1Setting(Selection &sel) {
(void) sel;
return OK;
}
MERROR WatermarkPreview::getP2Setting(Selection &sel) {
MBOOL run = MTRUE;
sel.mP2Run = run;
return OK;
}
REGISTER_PLUGIN_PROVIDER(Join, WatermarkPreview);
4.2.5 mtkcam3/3rdparty/customer/Android.mk
最終vendor.img需要的目標共享庫是libmtkcam_3rdparty.customer.so。因此,我們還需要修改Android.mk,使模塊libmtkcam_3rdparty.customer依賴libmtkcam.plugin.tp_watermark。vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
old mode 100644
new mode 100755
index ce060c39f9..ff5763d3c2
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -70,6 +70,13 @@ LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_purebokeh
LOCAL_SHARED_LIBRARIES += libcam.iopipe
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
endif
+
+ifeq ($(QXT_WATERMARK_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libwatermark
+LOCAL_SHARED_LIBRARIES += libyuv.vendor
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
+endif
+
# for app super night ev decision (experimental for customer only)
LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
4.2.6 預置水印文件
diff --git a/device/mediateksample/k63v2_64_bsp/device.mk b/device/mediateksample/k63v2_64_bsp/device.mk
index 2619000c72..048c33462e 100644
--- a/device/mediateksample/k63v2_64_bsp/device.mk
+++ b/device/mediateksample/k63v2_64_bsp/device.mk
@@ -98,6 +98,9 @@ PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/re
PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/res/images/lcd_test_01.png:$(TARGET_COPY_OUT_VENDOR)/res/images/lcd_test_01.png:mtk
PRODUCT_COPY_FILES += vendor/mediatek/proprietary/custom/k63v2_64_bsp/factory/res/images/lcd_test_02.png:$(TARGET_COPY_OUT_VENDOR)/res/images/lcd_test_02.png:mtk
+ifeq ($(QXT_WATERMARK_SUPPORT),yes)
+PRODUCT_COPY_FILES += vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/tp_watermark/res/watermark.rgba::$(TARGET_COPY_OUT_VENDOR)/res/images/watermark.rgba
+endif
# overlay has priorities. high <-> low.
camera hal進程爲mtk_camera_hal,它要讀取/vendor/res/images/watermark.rgba,讀取需要vendor_file SELinux權限。這裏爲mtk_camera_hal配置SELinux權限:
diff --git a/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te b/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
index 8de5d0a437..7ebd9a03e5 100644
--- a/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
+++ b/device/mediatek/sepolicy/bsp/non_plat/mtk_hal_camera.te
@@ -92,6 +92,7 @@ allow mtk_hal_camera sysfs_boot_mode:file { read open };
# Purpose: NDD
allow mtk_hal_camera vendor_data_file:dir create_dir_perms;
allow mtk_hal_camera vendor_data_file:file create_file_perms;
+allow mtk_hal_camera vendor_file:file { read getattr open };
五、自定義metadata
添加metadata是爲了讓APP層能夠通過metadata傳遞相應的參數給HAL層。APP層是通過CaptureRequest.Builder.set(@NonNull Key<T> key, T value)來設置參數的。
由於我們是自定義的feature,無法複用MTK提供的metadata,因此,我們需要自定義metadata。
vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
index 22d4aa2bf2..b020352092 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
@@ -89,6 +89,7 @@ typedef enum mtk_camera_metadata_section {
MTK_BGSERVICE_FEATURE = 12,
MTK_CONFIGURE_SETTING = 13,
MTK_FLASH_FEATURE = 14,
+ QXT_FEATURE = 15,
MTK_VENDOR_SECTION_COUNT,
} mtk_camera_metadata_section_t;
@@ -146,6 +147,7 @@ typedef enum mtk_camera_metadata_section_start {
MTK_CONFIGURE_SETTING_START = (MTK_CONFIGURE_SETTING + MTK_VENDOR_TAG_SECTION) << 16,
MTK_FLASH_FEATURE_START = (MTK_FLASH_FEATURE + MTK_VENDOR_TAG_SECTION) << 16,
+ QXT_FEATURE_START = (QXT_FEATURE + MTK_VENDOR_TAG_SECTION) << 16,
} mtk_camera_metadata_section_start_t;
@@ -599,6 +601,8 @@ typedef enum mtk_camera_metadata_tag {
MTK_FLASH_FEATURE_CALIBRATION_RESULT, // flash calibration result
MTK_FLASH_FEATURE_END,
+ QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
+ QXT_FEATURE_END,
} mtk_camera_metadata_tag_t;
/**
vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
index 15449c433d..1b4fc75a0e 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
@@ -91,6 +91,11 @@ _IMP_SECTION_INFO_(MTK_DISTORTION_CORRECTION_INFO, "mtk.distortionCorrection")
_IMP_SECTION_INFO_(MTK_IOPIPE_INFO, "mtk.iopipe.info")
_IMP_SECTION_INFO_(MTK_HAL_INFO, "mtk.hal.info")
+_IMP_SECTION_INFO_(QXT_FEATURE, "com.qxt.camera")
+
+_IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
+ MINT32, "watermark")
+
/******************************************************************************
*
******************************************************************************/
vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
index 2481492f90..33e581adfd 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
@@ -377,6 +377,16 @@ static auto& _FlashFeature_()
}
+static auto& _QxtFeature_()
+{
+ static const std::map<uint32_t, VendorTag_t>
+ sInst = {
+ _TAG_(QXT_FEATURE_WATERMARK,
+ "watermark", TYPE_INT32),
+ };
+ //
+ return sInst;
+}
/******************************************************************************
*
@@ -460,6 +470,10 @@ static auto& getGlobalSections()
MTK_FLASH_FEATURE_END,
_FlashFeature_() ),
+ _SECTION_( "com.qxt.camera",
+ QXT_FEATURE_START,
+ QXT_FEATURE_END,
+ _QxtFeature_() ),
};
// append custom vendor tags sections to mtk sections
vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp:
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
index edd5b5f1b9..591b25b162 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
@@ -578,6 +578,19 @@ updateData(IMetadata &rMetadata)
}
}
#endif
+
+#if 1
+ {
+ IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
+ qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+ rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);
+
+ IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
+ qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+ rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
+ }
+#endif
+
// update multi-cam feature mode to static metadata
// vendor tag
{
前面這些步驟完成之後,集成工作就基本完成了。我們需要重新編譯一下系統源碼,爲節約時間,也可以只編譯vendor.img。趁着編譯的時間,我們可以寫一個demo來驗證算法是否集成成功了。
六、APP調用算法
WatermarkActivity:
public class WatermarkActivity extends BaseActivity {
private static final String TAG = WatermarkActivity.class.getSimpleName();
/*
* 16:9 picture size: 3840x2160 preview size 1280x720
* 4:3 picture size: 3264x2448 preview size 960x720
* Now is 4:3
*/
private static final int PREVIEW_WIDTH = 1280;
private static final int PREVIEW_HEIGHT = 720;
private static final int CAPTURE_WIDTH = 3264;
private static final int CAPTURE_HEIGHT = 2448;
private static final String IMAGE_PATH =
Environment.getExternalStorageDirectory().getAbsolutePath()
+ File.separator + "DCIM" + File.separator + "Camera";
private static final String CAMERA_ID = "0";
private static final String KEY_WATERMARK = "com.qxt.camera.watermark";
private static final String SP_NAME = "watermark";
private static final String SP_STATE_KEY = "state";
private AutoFitTextureView mTextureView;
private ProgressBar mProgressBar;
private Handler mMainHandler;
private Handler mCameraHandler;
private HandlerThread mCameraHandlerThread;
private CameraManager mCameraManager;
private CaptureRequest.Builder mPreviewBuilder;
private CameraDevice mCameraDevice;
private CameraCaptureSession mCameraCaptureSession;
private MediaActionSound mCameraSound;
private String mTakePictureTime;
private SimpleDateFormat mDateFormat = new SimpleDateFormat(
"yyyyMMdd_HHmmss", Locale.getDefault());
private ImageReader mCaptureImageReader;
private Surface mSurface;
public CaptureRequest.Key<int[]> mVendorKey;
private int mVendorKeyEnable;
private SharedPreferences mSharedPref;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
setContentView(R.layout.activity_watermark);
mProgressBar = findViewById(R.id.progressbar);
mTextureView = findViewById(R.id.texture);
mTextureView.setAspectRatio(PREVIEW_HEIGHT, PREVIEW_WIDTH);
mCameraSound = new MediaActionSound();
mCameraSound.load(MediaActionSound.SHUTTER_CLICK);
mCameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
mMainHandler = new Handler();
initVendorTag();
mSharedPref = getSharedPreferences(SP_NAME, Context.MODE_PRIVATE);
mVendorKeyEnable = mSharedPref.getInt(SP_STATE_KEY, 0);
getCameraCharacteristics(CAMERA_ID);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_watermark, menu);
Switch s = menu.findItem(R.id.action_watermark)
.getActionView().findViewById(R.id.switch_watermark);
s.setChecked(mVendorKeyEnable > 0);
s.setOnCheckedChangeListener(new CompoundButton.OnCheckedChangeListener() {
@Override
public void onCheckedChanged(CompoundButton btn, boolean isChecked) {
if (isChecked) {
mVendorKeyEnable = 1;
} else {
mVendorKeyEnable = 0;
}
mSharedPref.edit().putInt(SP_STATE_KEY, mVendorKeyEnable).commit();
if (mPreviewBuilder != null && mCameraCaptureSession != null) {
try {
mCameraCaptureSession.stopRepeating();
setVendorTag(mPreviewBuilder);
mCameraCaptureSession.setRepeatingRequest(mPreviewBuilder.build(),
mSessionCaptureCallback, mCameraHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
LogUtils.d(TAG, "[onCheckedChanged] isChecked=" + isChecked
+ ", mWideAngleEnable=" + mVendorKeyEnable);
}
});
return true;
}
@Override
protected void onResume() {
super.onResume();
initLooper();
if (mTextureView.isAvailable()) {
openCamera();
} else {
mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
}
}
@Override
protected void onPause() {
super.onPause();
closeCamera();
stopLooper();
}
@Override
protected void onDestroy() {
super.onDestroy();
}
public void onClick(View view) {
if (view != null && view.getId() == R.id.btn_capture) {
takePicture();
}
}
private void initLooper() {
mCameraHandlerThread = new HandlerThread("WideAngleCamera");
mCameraHandlerThread.start();
mCameraHandler = new Handler(mCameraHandlerThread.getLooper());
}
private void stopLooper() {
try {
mCameraHandlerThread.quit();
mCameraHandlerThread.join();
mCameraHandlerThread = null;
mCameraHandler = null;
} catch (Exception e) {
e.printStackTrace();
}
}
@SuppressLint("MissingPermission")
private void openCamera() {
try {
mCameraManager.openCamera(CAMERA_ID, new CameraDevice.StateCallback() {
@Override
public void onOpened(@NonNull CameraDevice camera) {
mCameraDevice = camera;
createCameraPreviewSession();
}
@Override
public void onDisconnected(@NonNull CameraDevice camera) {
LogUtils.d(TAG, "onDisconnected");
camera.close();
mCameraDevice = null;
}
@Override
public void onError(@NonNull CameraDevice camera, int error) {
LogUtils.d(TAG, "onError error=" + error);
camera.close();
mCameraDevice = null;
}
}, mCameraHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
private void closeCamera() {
try {
if (null != mCameraCaptureSession) {
mCameraCaptureSession.close();
mCameraCaptureSession = null;
}
if (null != mCameraDevice) {
mCameraDevice.close();
mCameraDevice = null;
}
if (null != mCaptureImageReader) {
mCaptureImageReader.close();
mCaptureImageReader = null;
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void createCameraPreviewSession() {
if (isFinishing() || isDestroyed() || mCameraDevice == null) {
return;
}
try {
mCaptureImageReader = ImageReader.newInstance(CAPTURE_WIDTH,
CAPTURE_HEIGHT, ImageFormat.YUV_420_888, 2);
mCaptureImageReader.setOnImageAvailableListener(
mCaptureOnImageAvailableListener, mCameraHandler);
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
setVendorTag(mPreviewBuilder);
mPreviewBuilder.addTarget(mSurface);
mCameraDevice.createCaptureSession(Arrays.asList(mSurface,
mCaptureImageReader.getSurface()),
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
if (isFinishing() || isDestroyed() || mCameraDevice == null) {
return;
}
try {
mCameraCaptureSession = session;
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_AUTO);
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
mCameraCaptureSession.setRepeatingRequest(mPreviewBuilder.build(),
mSessionCaptureCallback, mCameraHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession session) {
}
}, mCameraHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
private void takePicture() {
try {
mTakePictureTime = mDateFormat.format(System.currentTimeMillis());
final CaptureRequest.Builder captureBuilder =
mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
setVendorTag(captureBuilder);
Surface surface = mCaptureImageReader.getSurface();
captureBuilder.addTarget(surface);
mCameraCaptureSession.capture(captureBuilder.build(),
new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
}
}, mCameraHandler);
if (mCameraSound != null) {
mCameraSound.play(MediaActionSound.SHUTTER_CLICK);
}
mProgressBar.setVisibility(View.VISIBLE);
} catch (Exception e) {
e.printStackTrace();
}
}
private void notifyPictureTaken() {
mProgressBar.setVisibility(View.GONE);
Toast toast = Toast.makeText(WatermarkActivity.this,
getString(R.string.image_saved, IMAGE_PATH), Toast.LENGTH_SHORT);
toast.setGravity(Gravity.CENTER, 0, 0);
toast.show();
}
@SuppressWarnings("unused")
private void getCameraCharacteristics(String cameraId) {
try {
CameraCharacteristics cs = mCameraManager.getCameraCharacteristics(cameraId);
StreamConfigurationMap map = cs.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map != null) {
//獲取圖像輸出的尺寸
Size[] pictureSize = map.getOutputSizes(ImageFormat.JPEG);
Size[] previewSize = map.getOutputSizes(SurfaceTexture.class);
StringBuilder pictureBuilder = new StringBuilder("picture sizes: ");
for (Size size : pictureSize) {
pictureBuilder.append(size);
pictureBuilder.append(", ");
}
LogUtils.d(TAG, pictureBuilder.toString());
StringBuilder previewBuilder = new StringBuilder("preview sizes: ");
for (Size size : previewSize) {
previewBuilder.append(size);
previewBuilder.append(", ");
}
LogUtils.d(TAG, previewBuilder.toString());
}
} catch (Exception e) {
e.printStackTrace();
}
}
TextureView.SurfaceTextureListener mSurfaceTextureListener
= new TextureView.SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int w, int h) {
LogUtils.d(TAG, "onSurfaceAvaliable, width:" + w + ", height:" + h);
surfaceTexture.setDefaultBufferSize(PREVIEW_WIDTH, PREVIEW_HEIGHT);
mSurface = new Surface(surfaceTexture);
openCamera();
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int w, int h) {
LogUtils.d(TAG, "onSurfaceTextureSizeChanged, width:" + w + ", height:" + h);
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
LogUtils.d(TAG, "onSurfaceTextureDestroyed");
mSurface = null;
return false;
}
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
}
};
private final CameraCaptureSession.CaptureCallback mSessionCaptureCallback
= new CameraCaptureSession.CaptureCallback() {
@Override
public void onCaptureCompleted(@NonNull CameraCaptureSession session,
@NonNull CaptureRequest request,
@NonNull TotalCaptureResult result) {
super.onCaptureCompleted(session, request, result);
mCameraCaptureSession = session;
}
};
private final ImageReader.OnImageAvailableListener mCaptureOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(final ImageReader reader) {
LogUtils.d(TAG, "capture onImageAvailable");
Image image = reader.acquireLatestImage();
if (image == null) return;
ImageUtils.saveImage(WatermarkActivity.this, image, IMAGE_PATH,
"WIDE_" + mTakePictureTime, ImageUtils.ROTATE_90);
image.close();
LogUtils.d(TAG, "saved");
mMainHandler.post(new Runnable() {
@Override
public void run() {
notifyPictureTaken();
}
});
}
};
private void initVendorTag() {
try {
CameraCharacteristics c = mCameraManager.getCameraCharacteristics(CAMERA_ID);
mVendorKey = CameraUtils.getSessionKey(c, KEY_WATERMARK);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
private void setVendorTag(CaptureRequest.Builder builder) {
if (mVendorKey != null) {
builder.set(mVendorKey, new int[]{mVendorKeyEnable});
LogUtils.d(TAG, "[setVendorTag] set watermark to " + mVendorKeyEnable);
}
}
}
CameraUtils:
public class CameraUtils {
private static final String TAG = CameraUtils.class.getSimpleName();
@RequiresApi(api = Build.VERSION_CODES.P)
public static CaptureRequest.Key<int[]> getSessionKey(
CameraCharacteristics cs, String key) {
if (cs == null) {
LogUtils.i(TAG, "[getSessionKey] CameraCharacteristics is null");
return null;
}
CaptureRequest.Key<int[]> targetKey = null;
List<CaptureRequest.Key<?>> sessionKeys = cs.getAvailableSessionKeys();
if (sessionKeys == null) {
LogUtils.i(TAG, "[getSessionKey] No keys!");
return null;
}
for (CaptureRequest.Key<?> sessionKey : sessionKeys) {
if (sessionKey.getName().equals(key)) {
LogUtils.i(TAG, "[getSessionKey] key :" + key);
targetKey = (CaptureRequest.Key<int[]>) sessionKey;
break;
}
}
return targetKey;
}
}
爲樣機刷入系統整包或者vendor.img,開機後,安裝demo驗證。我們來拍一張電腦顯示器看看效果:
預覽:
拍照:
七、遇到的問題及解決方法
問題1:
如果process函數中buffer的acquire和release沒有成對出現,也就是buffer沒正常release,那麼就會出現連續拍多張之後,算法未被調用的情況。
問題1解決方法:
YUVNode.cpp中加入一個保險的代碼,萬一集成代碼中忘記release,在YUVNode中release。
diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
index 8bb794ba02..d4343aaccf 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/feature/core/featurePipe/capture/nodes/YUVNode.cpp
@@ -1050,9 +1051,11 @@ MBOOL YUVNode::onRequestProcess(RequestPtr& pRequest)
auto pPlgRequest = mPlugin->createRequest();
- pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : iBufferFullHandle;
+ //pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : iBufferFullHandle;
+ pPlgRequest->mIBufferFull = (iBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, INPUT) : std::move(iBufferFullHandle);
pPlgRequest->mIBufferClean = PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_PURE_YUV, INPUT);
- pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : oBufferFullHandle;
+ //pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : oBufferFullHandle;
+ pPlgRequest->mOBufferFull = (oBufferFullHandle == NULL) ? PluginHelper::CreateBuffer(pNodeReq, TID_MAN_FULL_YUV, OUTPUT) : std::move(oBufferFullHandle);
pPlgRequest->mIMetadataDynamic = PluginHelper::CreateMetadata(pNodeReq, MID_MAN_IN_P1_DYNAMIC);
pPlgRequest->mIMetadataApp = PluginHelper::CreateMetadata(pNodeReq, MID_MAN_IN_APP);
問題2:
算法需要RGB數據,HAL層是YUV數據,使用openGL和各類RGB轉換公式進行YUV和RGB互轉後,最終照片有色差。
問題2解決方法:
使用libyuv進行轉換,libyuv轉換效率非常高,經測試,libyuv比公式法和opencv都要快,並且沒有色差。android源碼本身已集成libyuv,使用起來也非常方便。
Android.mk:
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_SHARED_LIBRARIES += libyuv.vendor
如不清楚libyuv的使用,請參考本人的另外一篇文章:YUV420轉RGBA之使用libyuv
八、結語
現公司是家ODM公司,2018年起公司開始做AI算法,我被招來做android平臺上的算法集成工作。然而我雖有幾年App和Frameworks的工作經驗,之前卻一直是以java作爲主要開發語言,C/C++語言的開發工作甚至是大學時代的事情,而部門當中會android開發的也僅有我一人。在這種情形下,重新學C/C++,並摸索着開始集成算法,真的非常具有挑戰性。
當時我們公司與某公司有合作,我們剛開始寄希望於該公司可以提供技術支援,以幫助我們完成算法集成的工作。然而,該公司不但有手機OS,也有自家的AI算法部門,在技術支援上對方不願意配合。
在一段時間的學習和實踐之後,我也終於基本掌握了算法集成的方法和步驟。很感激部門leader L老師能夠給予我如此多的耐心和支持。回過頭來看,算法集成並不複雜,甚至沒有任何技術原理上的東西,只是一些繁瑣的步驟。於是,在項目不忙的情況下,寫下本文。希望以此記錄自己的心路歷程,也期待可以幫助到有需要的人。本文的主要內容最初於2019年就寫好了,但是由於某些原因,時至今日才整理髮出來,不管怎樣,但願好文不怕晚。
原文鏈接:https://www.jianshu.com/p/bf385ff1dafe
至此,本篇已結束。轉載網絡的文章,小編覺得很優秀,歡迎點擊閱讀原文,支持原創作者,如有侵權,懇請聯繫小編刪除,歡迎您的建議與指正。同時期待您的關注,感謝您的閱讀,謝謝!