title: "Android ARCore: simple face augmentation, face detection, mask overlays, and accurate monocular distance measurement from the screen to the user"
categories:
- Android
tags:
- arCore
- Face augmentation
- Face detection
date: 2020-05-29 10:12:46
Preface

Feeling a bit drowsy this afternoon, I poured a cup of coffee and decided to fill in a hole I dug earlier. Today let's talk about ARCore, an augmented reality service provided by Google for the camera. It includes an Augmented Faces module that we can use for face detection and face augmentation.

ARCore official site

As the picture shows, ARCore detects a face and builds a 3D model, using a point behind the nose as the center of the face, while the camera's position serves as the origin of the model's world coordinate system. It marks the face with 468 points for a precise fit.
Note

The service is powerful and easy to use, but for users in mainland China there are a few requirements:

1. minSdk must be 24, i.e. only Android 7.0 and above is supported.
2. The device must have AR-capable hardware.
3. The ARCore service (Google Play Services for AR) must be installed; it can be downloaded or upgraded from some app stores, though a VPN may be needed for others.

I tested on a Xiaomi Mi 8.

Alright, let's get started.
Usage

In the project-level build.gradle, make sure `google()` is in the repositories:

```groovy
repositories {
    google()
}
```

Then add the ARCore dependencies in the app module's build.gradle (I'm using version 1.15.0):

```groovy
implementation 'com.google.ar:core:1.15.0'
// Provides ArFragment, and other UX resources.
implementation 'com.google.ar.sceneform.ux:sceneform-ux:1.15.0'
// Alternatively, use ArSceneView without the UX dependency.
implementation 'com.google.ar.sceneform:core:1.15.0'
```
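Besides the Gradle dependencies, the app manifest also needs the camera permission and the ARCore meta-data entry. A minimal sketch; `required` assumes an AR-only app, use `optional` if AR is just a bonus feature:

```xml
<!-- ARCore requires camera access. -->
<uses-permission android:name="android.permission.CAMERA" />

<application>
    <!-- "AR Required": Google Play only surfaces the app to ARCore-supported devices. -->
    <meta-data android:name="com.google.ar.core" android:value="required" />
</application>
```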
Extending ArFragment

To implement face augmentation, we need to extend ArFragment to switch the camera and configure the session. The code:
```java
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.FrameLayout;
import android.widget.Toast;

import androidx.annotation.Nullable;

import com.google.ar.core.Config;
import com.google.ar.core.Config.AugmentedFaceMode;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.*;
import com.google.ar.sceneform.ux.ArFragment;

import java.util.EnumSet;
import java.util.Set;

public class FaceArFragment extends ArFragment {

    @Override
    protected Config getSessionConfiguration(Session session) {
        Config config = new Config(session);
        config.setAugmentedFaceMode(AugmentedFaceMode.MESH3D);
        return config;
    }

    @Override
    protected Set<Session.Feature> getSessionFeatures() {
        // Augmented Faces works with the front-facing camera.
        return EnumSet.of(Session.Feature.FRONT_CAMERA);
    }

    @Override
    protected void handleSessionException(UnavailableException sessionException) {
        String message;
        if (sessionException instanceof UnavailableArcoreNotInstalledException) {
            message = "Please install ARCore";
        } else if (sessionException instanceof UnavailableApkTooOldException) {
            message = "Please update ARCore";
        } else if (sessionException instanceof UnavailableSdkTooOldException) {
            message = "Please update this app";
        } else if (sessionException instanceof UnavailableDeviceNotCompatibleException) {
            message = "This device does not support AR";
        } else {
            message = "Failed to create AR session; check device compatibility and the ARCore/OS versions";
        }
        Toast.makeText(getContext(), message, Toast.LENGTH_LONG).show();
    }

    /**
     * Override to turn off planeDiscoveryController. Plane trackables are not supported with the
     * front camera.
     */
    @Override
    public View onCreateView(
            LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        FrameLayout frameLayout =
                (FrameLayout) super.onCreateView(inflater, container, savedInstanceState);
        getPlaneDiscoveryController().hide();
        getPlaneDiscoveryController().setInstructionView(null);
        return frameLayout;
    }
}
```
Then define your layout:

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".AugmentedFacesActivity">

    <fragment
        android:id="@+id/face_fragment"
        android:name="com.google.ar.sceneform.samples.augmentedfaces.FaceArFragment"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/mTv"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/app_name" />
</FrameLayout>
```
Setting up the face mesh

Set up the face mesh's ModelRenderable and faceMeshTexture:
```java
private ModelRenderable faceRegionsRenderable;
private Texture faceMeshTexture;

// Load the face regions renderable (a 3D fox-face model, from the official sample).
ModelRenderable.builder()
    .setSource(this, R.raw.fox_face)
    .build()
    .thenAccept(
        modelRenderable -> {
          faceRegionsRenderable = modelRenderable;
          modelRenderable.setShadowCaster(false);
          modelRenderable.setShadowReceiver(false);
        });

// Load the face mesh texture.
Texture.builder()
    .setSource(this, R.drawable.fox_face_mesh_texture)
    .build()
    .thenAccept(texture -> faceMeshTexture = texture);

ArSceneView sceneView = arFragment.getArSceneView();
```
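Both `ModelRenderable.builder().build()` and `Texture.builder().build()` return a `CompletableFuture`, so the loads above complete asynchronously and `thenAccept` fires only on success. The pattern can be sketched with plain `CompletableFuture` on the JVM; the string values here are stand-ins for the Sceneform objects, not real types:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncLoadSketch {
    // Wait for both asynchronous loads to finish, then combine the results.
    // This mirrors loading the renderable and the texture before using either.
    static String combine() {
        CompletableFuture<String> modelFuture =
            CompletableFuture.supplyAsync(() -> "fox_face");
        CompletableFuture<String> textureFuture =
            CompletableFuture.supplyAsync(() -> "fox_face_mesh_texture");

        return modelFuture
            .thenCombine(textureFuture, (model, texture) -> model + "+" + texture)
            // Surface load errors instead of silently dropping them.
            .exceptionally(throwable -> "load failed")
            .join();
    }

    public static void main(String[] args) {
        System.out.println(combine());  // fox_face+fox_face_mesh_texture
    }
}
```

Using `thenCombine` like this would also let you avoid the per-frame null checks that the update listener below performs.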
Core detection

Now attach an update listener to the ArFragment's scene; this is also where the accurate distance is computed:
```java
sceneView.setCameraStreamRenderPriority(Renderable.RENDER_PRIORITY_FIRST);
Scene scene = sceneView.getScene();

scene.addOnUpdateListener(
    (FrameTime frameTime) -> {
      if (faceRegionsRenderable == null || faceMeshTexture == null) {
        return;
      }

      Collection<AugmentedFace> faceList =
          sceneView.getSession().getAllTrackables(AugmentedFace.class);

      // Make new AugmentedFaceNodes for any new faces.
      for (AugmentedFace face : faceList) {
        if (!faceNodeMap.containsKey(face)) {
          AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
          faceNode.setParent(scene);
          faceNode.setFaceRegionsRenderable(faceRegionsRenderable);
          faceNode.setFaceMeshTexture(faceMeshTexture);
          faceNodeMap.put(face, faceNode);
        }
      }

      // Remove any AugmentedFaceNodes associated with an AugmentedFace that stopped tracking.
      Iterator<Map.Entry<AugmentedFace, AugmentedFaceNode>> iter =
          faceNodeMap.entrySet().iterator();
      while (iter.hasNext()) {
        Map.Entry<AugmentedFace, AugmentedFaceNode> entry = iter.next();
        AugmentedFace face = entry.getKey();

        // The region poses are in world space, whose origin is the camera, so the
        // length of each translation vector is that point's distance from the camera.
        Pose left = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_LEFT);
        Pose right = face.getRegionPose(AugmentedFace.RegionType.FOREHEAD_RIGHT);
        float lx = left.tx();
        float ly = left.ty();
        float lz = left.tz();
        float rx = right.tx();
        float ry = right.ty();
        float rz = right.tz();
        double llength = Math.sqrt(lx * lx + ly * ly + lz * lz);
        double rlength = Math.sqrt(rx * rx + ry * ry + rz * rz);

        // Average the two distances (in meters) and convert to centimeters.
        BigDecimal b1 = new BigDecimal(llength);
        BigDecimal r1 = new BigDecimal(rlength);
        double spec = b1.add(r1).divide(new BigDecimal("2")).multiply(new BigDecimal("100")).floatValue();
        Log.d("wzz", "-----" + llength + "----" + rlength);
        // mTv and decimalFormat are fields of the Activity.
        mTv.setText("Distance to screen: " + decimalFormat.format(spec) + "cm");

        if (face.getTrackingState() == TrackingState.STOPPED) {
          drawLine(face.createAnchor(left), face.createAnchor(right));
          AugmentedFaceNode faceNode = entry.getValue();
          faceNode.setParent(null);
          iter.remove();
        }
      }
    });
```
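The distance math above is just the Euclidean norm of each forehead region's translation (the camera sits at the world origin), averaged over the two regions and converted from meters to centimeters. A standalone sketch with made-up pose translations; the `BigDecimal` arithmetic in the listener adds no real precision over plain `double` here, since the inputs are `float`s:

```java
public class FaceDistanceSketch {
    // Length of a translation vector (tx, ty, tz); with the camera at the
    // world origin, this is the point's distance from the camera in meters.
    static double norm(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }

    // Average the left/right forehead distances and convert meters to centimeters.
    static double distanceCm(double[] left, double[] right) {
        double l = norm(left[0], left[1], left[2]);
        double r = norm(right[0], right[1], right[2]);
        return (l + r) / 2.0 * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical forehead translations, roughly half a meter in front of the camera.
        double[] left = {-0.05, 0.03, -0.5};
        double[] right = {0.05, 0.03, -0.5};
        System.out.printf("Distance to screen: %.1f cm%n", distanceCm(left, right));
    }
}
```

Averaging the two forehead regions smooths out the asymmetry when the head is turned slightly, which is why a single region's norm is not used directly.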
And that's it; pretty simple, right?

With this, the face detection series is basically wrapped up (porting OpenCV 2D to 3D is set aside). Questions and discussion are welcome in the comments.