Notes from developing an experimental (not yet mature) feature
Main APIs used:
- camera2
- FaceDetector
I. Approach
1. Use camera2 to open the front-facing camera and capture preview frames.
2. Use FaceDetector to measure the distance between the user's eyes in the image.
3. Camera imaging follows perspective: near objects appear large, far objects small. So the farther the face is from the screen, the smaller the measured eye distance in pixels, and vice versa.
4. Take a reference object of known size (length or width) whose on-screen image size at a known distance has been measured, and use the eye distance from the previous step to compute the eye-to-screen distance.
[Screenshot of the implemented screen]
II. Implementation details
1. camera2
The camera2 preview requires a TextureView or a SurfaceView:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <TextureView
        android:id="@+id/texture_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="end|bottom"
        android:layout_marginRight="40dp"
        android:layout_marginBottom="90dp"
        android:onClick="switchCamera"
        android:src="@drawable/ic_switch_camera" />

    <TextView
        android:id="@+id/tv_start_calculate"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|left"
        android:layout_margin="10dp"
        android:text="Start measuring"
        android:textColor="@color/c1"
        android:textSize="26sp"
        android:textStyle="bold" />

    <TextView
        android:id="@+id/tv_stop_calculate"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|right"
        android:layout_margin="10dp"
        android:text="Stop measuring"
        android:textColor="@color/c1"
        android:textSize="26sp"
        android:textStyle="bold" />

    <TextView
        android:id="@+id/currentDistance"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|center"
        android:layout_margin="10dp"
        android:text="-1cm"
        android:textColor="@color/c1"
        android:textSize="26sp"
        android:textStyle="bold" />

</FrameLayout>
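Before the camera can be opened, the TextureView's surface has to become available. A minimal sketch of the wiring in the host Activity, using the `openCamera()` and `configureTransform()` methods shown later in this section (the listener body here is my own, not the project's exact code):

```java
mTextureView = (TextureView) findViewById(R.id.texture_preview);
mTextureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
        // The surface is only usable once this callback fires
        openCamera();
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
        configureTransform(width, height);
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture texture) { }
});
```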
2. Open the camera, choosing either the front or the back lens:
// The ids are hard-coded here for brevity: on most devices "0" is the back
// camera and "1" the front, but they should really be discovered via
// CameraManager.getCameraIdList() and CameraCharacteristics.LENS_FACING.
public static final String CAMERA_ID_FRONT = "1";
public static final String CAMERA_ID_BACK = "0";

private void openCamera() {
    CameraManager cameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    setUpCameraOutputs(cameraManager);
    configureTransform(mTextureView.getWidth(), mTextureView.getHeight());
    try {
        // The semaphore guards against opening/closing the camera concurrently
        if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        cameraManager.openCamera(mCameraId, mDeviceStateCallback, mBackgroundHandler);
    } catch (CameraAccessException | InterruptedException e) {
        if (camera2Listener != null) {
            camera2Listener.onCameraError(e);
        }
    }
}
3. CameraDevice.StateCallback covers the callbacks for opening, closing and losing the camera. The framework definition (abridged):
public static abstract class StateCallback {
    @Retention(RetentionPolicy.SOURCE)
    @IntDef(prefix = {"ERROR_"}, value = {
            ERROR_CAMERA_IN_USE,
            ERROR_MAX_CAMERAS_IN_USE,
            ERROR_CAMERA_DISABLED,
            ERROR_CAMERA_DEVICE,
            ERROR_CAMERA_SERVICE})
    public @interface ErrorCode {};

    public abstract void onOpened(@NonNull CameraDevice camera); // Must implement

    public void onClosed(@NonNull CameraDevice camera) {
        // Default empty implementation
    }

    public abstract void onDisconnected(@NonNull CameraDevice camera); // Must implement

    public abstract void onError(@NonNull CameraDevice camera,
            @ErrorCode int error); // Must implement
}
public CameraDevice() {}
……
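A minimal concrete implementation might look like the sketch below. It assumes the `mCameraOpenCloseLock`, `mCameraDevice` and `camera2Listener` fields from the surrounding helper class, plus a hypothetical `createCameraPreviewSession()` method that starts the repeating preview request:

```java
private final CameraDevice.StateCallback mDeviceStateCallback =
        new CameraDevice.StateCallback() {
    @Override
    public void onOpened(@NonNull CameraDevice camera) {
        mCameraOpenCloseLock.release();  // pairs with tryAcquire() in openCamera()
        mCameraDevice = camera;
        createCameraPreviewSession();    // start the repeating preview request
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice camera) {
        mCameraOpenCloseLock.release();
        camera.close();
        mCameraDevice = null;
    }

    @Override
    public void onError(@NonNull CameraDevice camera, int error) {
        mCameraOpenCloseLock.release();
        camera.close();
        mCameraDevice = null;
        if (camera2Listener != null) {
            camera2Listener.onCameraError(
                    new RuntimeException("openCamera failed, error code " + error));
        }
    }
};
```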
4. Get the camera's frame stream. We will later need to turn the frames into a Bitmap; to receive the frames we use the ImageReader class:
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
        mCaptureStateCallback, mBackgroundHandler);
// ...
mImageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(
        new OnImageAvailableListenerImpl(), mBackgroundHandler);

private class OnImageAvailableListenerImpl implements ImageReader.OnImageAvailableListener {
    private byte[] y;
    private byte[] u;
    private byte[] v;
    private ReentrantLock lock = new ReentrantLock();

    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        // YUV_420_888 uses 4:2:0 chroma subsampling: planes[0] is full-resolution Y,
        // planes[1] (U) and planes[2] (V) each hold a quarter of the samples
        if (camera2Listener != null && image.getFormat() == ImageFormat.YUV_420_888) {
            Image.Plane[] planes = image.getPlanes();
            // Lock so that y, u and v all come from the same Image
            lock.lock();
            // Reuse the same byte arrays to reduce GC pressure
            if (y == null) {
                y = new byte[planes[0].getBuffer().limit() - planes[0].getBuffer().position()];
                u = new byte[planes[1].getBuffer().limit() - planes[1].getBuffer().position()];
                v = new byte[planes[2].getBuffer().limit() - planes[2].getBuffer().position()];
            }
            if (planes[0].getBuffer().remaining() == y.length) {
                planes[0].getBuffer().get(y);
                planes[1].getBuffer().get(u);
                planes[2].getBuffer().get(v);
                camera2Listener.onPreview(y, u, v, mPreviewSize, planes[0].getRowStride());
            }
            lock.unlock();
        }
        image.close();
    }
}
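The next section hands the frame to YuvImage as NV21, so the y/u/v planes delivered by onPreview() first have to be repacked. A minimal sketch of that conversion (the `Nv21Converter` class and method name are my own; it assumes the row stride equals the width, while real devices may pad rows and then need an extra row-by-row copy):

```java
public class Nv21Converter {
    // Builds an NV21 buffer (full-resolution Y plane followed by interleaved
    // V/U samples) from the three plane arrays handed to onPreview().
    // uvPixelStride is planes[1].getPixelStride(), which is 1 or 2 on camera2.
    public static byte[] yuv420ToNv21(byte[] y, byte[] u, byte[] v,
                                      int width, int height, int uvPixelStride) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        System.arraycopy(y, 0, nv21, 0, width * height);
        int chromaPixels = width * height / 4; // 4:2:0 -> one U/V pair per 2x2 block
        for (int i = 0; i < chromaPixels; i++) {
            nv21[width * height + 2 * i]     = v[i * uvPixelStride]; // V comes first in NV21
            nv21[width * height + 2 * i + 1] = u[i * uvPixelStride];
        }
        return nv21;
    }
}
```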
III. FaceDetector
With the frame data in hand, we need to turn it into a Bitmap for FaceDetector: first wrap the raw frame in a YuvImage and compress it, then decode it into a Bitmap via BitmapFactory.Options. One thing to note about FaceDetector d = new FaceDetector(_currentFrame.getWidth(), _currentFrame.getHeight(), 1); is that it requires an upright face (head at the top, as in the screenshot above), so depending on orientation the bitmap must be rotated with a Matrix transform.
YuvImage yuvimage = new YuvImage(_data, ImageFormat.NV21,
        _previewSize.getWidth(), _previewSize.getHeight(), null);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
if (!yuvimage.compressToJpeg(new Rect(0, 0, _previewSize.getWidth(),
        _previewSize.getHeight()), 100, baos)) {
    Log.e("Camera", "compressToJpeg failed");
}
Log.i("Timing", "Compression finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

BitmapFactory.Options bfo = new BitmapFactory.Options();
// FaceDetector only accepts RGB_565 bitmaps
bfo.inPreferredConfig = Bitmap.Config.RGB_565;
_currentFrame = BitmapFactory.decodeStream(new ByteArrayInputStream(
        baos.toByteArray()), null, bfo);
Log.i("Timing", "Decode finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

// Rotate the frame so it suits portrait mode: the built-in face detector
// requires the head to be at the top of the image, so in portrait we rotate
// and mirror the front-camera frame with a matrix transform.
// (The original code applied Bitmap.createBitmap() a second time after the
// if block, rotating the frame twice; the duplicate has been removed.)
Matrix matrix = new Matrix();
if (mIsVertical) {
    matrix.postRotate(90);
    matrix.preScale(-1, 1);
}
_currentFrame = Bitmap.createBitmap(_currentFrame, 0, 0,
        _previewSize.getWidth(), _previewSize.getHeight(), matrix, false);
Log.e("run", "previewSize.width=" + _previewSize.getWidth()
        + " previewSize.height=" + _previewSize.getHeight());
Log.i("Timing", "Rotate, create finished: " + (System.currentTimeMillis() - t));
t = System.currentTimeMillis();

if (_currentFrame == null) {
    Log.e(FACEDETECTIONTHREAD_TAG, "Could not decode image");
    return;
}
FaceDetector d = new FaceDetector(_currentFrame.getWidth(),
        _currentFrame.getHeight(), 1);
Face[] faces = new Face[1];
d.findFaces(_currentFrame, faces);
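Once findFaces() returns, the eye distance in pixels can be read from the detected face. A sketch of the follow-up step (the threshold check uses the framework's own FaceDetector.Face.CONFIDENCE_THRESHOLD constant):

```java
// findFaces() returns the number of faces found; unfilled entries stay null
FaceDetector.Face face = faces[0];
if (face != null && face.confidence() >= FaceDetector.Face.CONFIDENCE_THRESHOLD) {
    float eyeDistPx = face.eyesDistance(); // distance between the eyes, in pixels
    PointF midPoint = new PointF();
    face.getMidPoint(midPoint);            // point halfway between the eyes
    // eyeDistPx is what feeds the distance conversion in the next section
}
```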
IV. Distance conversion
The distance conversion mainly follows the GitHub project linked below, which is exactly the approach outlined at the start: convert via a known reference measurement.
Pref and Dref are the reference object's on-screen size in pixels and its known distance from the screen; Psf is the measured on-screen distance between the eyes. Since apparent size is inversely proportional to distance, the current distance follows from D = Dref × Pref / Psf.
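The conversion can be sketched as a small pure function. The calibration constants here are made-up placeholders, not values from the project; they have to be measured once per device and user:

```java
public class DistanceEstimator {
    // Calibration: at D_REF_CM centimetres from the screen, the measured
    // eye distance was P_REF_PX pixels (placeholder values for illustration)
    static final float P_REF_PX = 100f;
    static final float D_REF_CM = 30f;

    // By similar triangles, apparent size is inversely proportional to
    // distance, so eyeDistPx * distance is constant:
    // distance = D_REF_CM * P_REF_PX / eyeDistPx
    public static float estimateDistanceCm(float eyeDistPx) {
        return D_REF_CM * P_REF_PX / eyeDistPx;
    }
}
```

With these placeholder constants, an eye distance of 50 px (half the reference) corresponds to twice the reference distance, i.e. 60 cm.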
That covers the overall approach and the code.
To get familiar with the implementation, I recommend reading the demo directly.
Reference: https://github.com/philiiiiiipp/Android-Screen-to-Face-Distance-Measurement