Let's look at the result first. Here's the preview screenshot again, so you can decide whether this matches your needs before reading on.
Camera Preview
I'm using the Camera2 API here. For an introduction to Camera2, see this post: https://blog.csdn.net/HardWorkingAnt/article/details/72786782
The full Helper class is available here: https://github.com/wangshengyang1996/GLCameraDemo/tree/master/app/src/main/java/com/wsy/glcamerademo/camera2
The CameraHelper class below was built with reference to the blog post and GitHub code linked above.
Once we've confirmed the camera permission has been granted, we can initialize the Helper.
fun initCamera() {
    mTextureView ?: return
    Log.d(TAG, "initCamera")
    mCameraHelper = CameraHelper.Companion.Builder()
        .cameraListener(this)
        .specificCameraId(CAMERA_ID)
        .mContext(mFragment?.context!!)
        .previewOn(mTextureView)
        .previewViewSize(
            Point(
                mTextureView.layoutParams.width,
                mTextureView.layoutParams.height
            )
        )
        .rotation(mFragment?.activity?.windowManager?.defaultDisplay?.rotation ?: 0)
        .build()
    Log.d(TAG, "mCameraHelper = $mCameraHelper is null ? -> ${mCameraHelper == null}")
    mCameraHelper?.start()
    // "Place your face in the viewfinder" / "Tap the button to take a photo"
    switchText("請將人臉放入取景框中", "請點擊按鈕拍照")
}
What the start method actually does is open the camera via the system service:
@Synchronized
fun start() {
    Log.i(TAG, "start")
    if (mCameraDevice != null) return
    startBackgroundThread()
    // When the screen is turned off then turned back on, the SurfaceTexture is already
    // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can
    // open a camera and start the preview from here (otherwise, we wait until the surface
    // is ready in the SurfaceTextureListener).
    if (mTextureView?.isAvailable == true) {
        openCamera()
    } else {
        mTextureView?.surfaceTextureListener = mSurfaceTextureListener
    }
}
/**
 * Opens the camera specified by {@link #mCameraId}.
 */
private fun openCamera() {
    val cameraManager = mContext?.getSystemService(Context.CAMERA_SERVICE) as CameraManager?
    cameraManager ?: return
    Log.e(TAG, "openCamera")
    setUpCameraOutputs(cameraManager)
    mTextureView?.apply {
        configureTransform(width, height)
    }
    try {
        if (!mCameraOpenLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw RuntimeException("Time out waiting to lock camera opening.")
        }
        mContext?.apply {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                == PackageManager.PERMISSION_GRANTED) {
                cameraManager.openCamera(mCameraId, mDeviceStateCallback, mBackgroundHandler)
            }
        }
    } catch (e: CameraAccessException) {
        cameraListener?.onCameraError(e)
    } catch (e: InterruptedException) {
        cameraListener?.onCameraError(e)
    }
}
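setUpCameraOutputs is called above but not shown; its core job is reading the camera's characteristics and choosing a preview size that fits the view. The selection logic might look like the pure-Kotlin sketch below. Note this is my own illustration: the Size class and function name are mine, and the real Helper works with android.util.Size from the stream configuration map.

```kotlin
// Hypothetical sketch of the preview-size selection inside setUpCameraOutputs:
// prefer the smallest supported size that covers the view and matches its
// aspect ratio; fall back to the largest supported size otherwise.
data class Size(val width: Int, val height: Int)

fun chooseOptimalSize(choices: List<Size>, viewWidth: Int, viewHeight: Int): Size {
    val bigEnough = choices.filter {
        it.width >= viewWidth && it.height >= viewHeight &&
            it.height * viewWidth == it.width * viewHeight  // same aspect ratio
    }
    // The smallest size that is still big enough keeps buffer memory down
    return bigEnough.minByOrNull { it.width.toLong() * it.height }
        ?: choices.maxByOrNull { it.width.toLong() * it.height }!!
}
```

For a 1280×720 TextureView and supported sizes 640×480, 1280×720, and 1920×1080, this picks 1280×720.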
As you can see, once all the prerequisites for acquiring the camera are met, cameraManager opens the camera. cameraId specifies which camera to open (front, back, external, and so on), and deviceStateCallback is the device-state callback interface; in its onOpened method we can start the preview session.
private val mDeviceStateCallback = object : CameraDevice.StateCallback() {
    override fun onOpened(camera: CameraDevice) {
        Log.i(TAG, "onOpened: ")
        // This method is called when the camera is opened. We start the camera preview here.
        mCameraOpenLock.release()
        mCameraDevice = camera
        createCameraPreviewSession()
        mPreviewSize?.let {
            cameraListener?.onCameraOpened(camera, mCameraId, it, getCameraOri(rotation, mCameraId), isMirror)
        }
    }
    // Other callbacks omitted...
}
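The getCameraOri used in onOpened above is also not shown. A common way to compute it is the orientation formula from the Camera2 documentation; the sketch below is my own pure-logic version, taking the display rotation in degrees and the SENSOR_ORIENTATION value directly, rather than the Helper's actual (rotation, cameraId) parameters.

```kotlin
// Returns the clockwise rotation (degrees) to apply to camera frames so they
// render upright. rotationDegrees: display rotation (0, 90, 180, 270);
// sensorOrientation: the CameraCharacteristics.SENSOR_ORIENTATION value.
fun getCameraOri(rotationDegrees: Int, sensorOrientation: Int, isFrontFacing: Boolean): Int =
    if (isFrontFacing) {
        // Front-camera frames are mirrored, so the compensation is inverted
        (360 - (sensorOrientation + rotationDegrees) % 360) % 360
    } else {
        (sensorOrientation - rotationDegrees + 360) % 360
    }
```

A typical back sensor mounted at 90° on a portrait (0°) display yields a 90° rotation, and 0° once the device is rotated to landscape.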
/**
 * Creates a new {@link CameraCaptureSession} for camera preview.
 */
private fun createCameraPreviewSession() {
    try {
        val texture = mTextureView?.surfaceTexture
        assert(texture != null)
        // We configure the size of the default buffer to be the size of the camera preview we want.
        mPreviewSize?.let {
            texture?.setDefaultBufferSize(it.width, it.height)
        }
        // This is the output Surface we need to start the preview.
        val surface = Surface(texture)
        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice?.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        // Continuous auto-focus
        mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
        // Set the AE target FPS range (a lower minimum frame rate allows longer exposure)
        mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange())
        mPreviewRequestBuilder?.addTarget(surface)
        // Here, we create a CameraCaptureSession for the camera preview.
        mCameraDevice?.createCaptureSession(listOf(surface, mImageReader?.surface),
            mCaptureStateCallback, mBackgroundHandler)
    } catch (e: CameraAccessException) {
        e.printStackTrace()
    }
}
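The getRange() helper above is not shown in the Helper either. Judging by its use for the AE target FPS range, it likely picks a range that favors exposure; below is a plausible pure-logic sketch of such a picker, with android.util.Range<Int> modeled as a plain Pair(lower, upper). This is an assumption about its behavior, not the actual implementation.

```kotlin
// Hypothetical FPS-range picker: prefer the range with the lowest minimum
// frame rate (a lower floor lets the sensor expose longer in dim light),
// breaking ties by the highest maximum frame rate.
fun pickFpsRange(available: List<Pair<Int, Int>>): Pair<Int, Int>? =
    available.minWithOrNull(compareBy({ it.first }, { -it.second }))
```

Real code would obtain the candidate list from CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES.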
We create the previewRequestBuilder with the TEMPLATE_PREVIEW template, pass the TextureView's Surface through as the preview target, and can also configure properties such as exposure and focus on this preview request. Then we call the cameraDevice's createCaptureSession to create the preview session. Its first parameter is a collection of output Surfaces: each Surface in it is a valid destination for captured image data, which is also how the preview frames reach the screen. The second parameter is a state callback that reports whether the session was configured successfully. In the success callback we submit the request with setRepeatingRequest, so the session keeps capturing frames and delivering them continuously. With that, the camera preview is complete.
private val mCaptureStateCallback = object : CameraCaptureSession.StateCallback() {
    override fun onConfigureFailed(session: CameraCaptureSession) {
        Log.i(TAG, "onConfigureFailed: ")
        cameraListener?.onCameraError(Exception("configuredFailed"))
    }

    override fun onConfigured(session: CameraCaptureSession) {
        Log.i(TAG, "onConfigured: ")
        // The camera is already closed
        mCameraDevice ?: return
        // When the session is ready, we start displaying the preview
        mCaptureSession = session
        try {
            mPreviewRequestBuilder?.let {
                mCaptureSession?.setRepeatingRequest(it.build(),
                    object : CameraCaptureSession.CaptureCallback() {}, mBackgroundHandler)
            }
        } catch (e: CameraAccessException) {
            e.printStackTrace()
        }
    }
}
Taking a Photo
Everything so far has been about the preview. While working on my graduation project, I realized that real-time face detection would mean continuously submitting image data for detection and waiting for results, with thread safety to guarantee on top of that. One option was rate control: pause the captured image for a while after each capture before resuming the preview, but that is essentially what taking a photo does anyway... so I implemented a photo-capture flow instead.
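For reference, the rate-control idea mentioned above could look like the minimal sketch below. This is purely my own illustration (the class, interval, and injectable clock are all mine), not code from the project:

```kotlin
// Lets a frame through for detection only if at least minIntervalMs has
// elapsed since the last accepted frame; the injectable clock is for testing.
class FrameThrottle(
    private val minIntervalMs: Long,
    private val clock: () -> Long = System::currentTimeMillis
) {
    private var lastAcceptedAt = -minIntervalMs

    @Synchronized
    fun tryAccept(): Boolean {
        val now = clock()
        if (now - lastAcceptedAt < minIntervalMs) return false
        lastAcceptedAt = now
        return true
    }
}
```

Each incoming camera frame would call tryAccept() and only be submitted for detection when it returns true.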
fun takePhoto() {
    // The ImageReader surface must be added here, not earlier: adding it during preview
    // setup would make every preview frame go through the capture path, and the callback
    // would never fire properly.
    mImageReader?.let {
        mPreviewRequestBuilder?.addTarget(it.surface)
    }
    // Use sensorOrientation (the camera's mounting orientation) when saving the JPEG
    mPreviewRequestBuilder?.set(CaptureRequest.JPEG_ORIENTATION, mSensorOrientation)
    // Set the auto-focus trigger to idle
    mPreviewRequestBuilder?.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_IDLE)
    mCaptureSession?.stopRepeating()
    // Take the photo
    mPreviewRequestBuilder?.let {
        mCaptureSession?.capture(it.build(), null, mBackgroundHandler)
    }
}
The reason the comment insists on adding the target only here is that any surface added to the PreviewRequestBuilder becomes an output target. If the ImageReader's surface were added while the preview was being set up, every repeating preview request would also go through the capture path, which makes the preview very laggy.
Adding it only at this point pauses the repeating capture, so we can process the captured image data. Once the ImageReader's surface has been passed in as a target and the captured image data is ready, OnImageAvailableListener's onImageAvailable method is invoked, where we can save the picture.
private val mOnImageAvailableListener = object : ImageReader.OnImageAvailableListener {
    private val lock = ReentrantLock()

    override fun onImageAvailable(reader: ImageReader?) {
        // Save the image here
        Log.e(TAG, "onImageAvailable")
        val image = reader?.acquireNextImage()
        if (cameraListener != null && image?.format == ImageFormat.JPEG) {
            val planes = image.planes
            // Lock to make sure all the bytes come from the same Image
            lock.lock()
            val byteBuffer = planes[0].buffer
            val byteArray = ByteArray(byteBuffer.remaining())
            byteBuffer.get(byteArray)
            cameraListener?.onPreview(byteArray)
            lock.unlock()
        }
        image?.close()
    }
}
The cameraListener then hands the bytes back to the external business component, which can save the image or run recognition on it.
OK, that wraps up the whole preview and photo-capture flow. The next post will cover calling an SDK to do the actual face recognition.
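Since plane 0 of a JPEG-format Image is the complete encoded stream, the bytes delivered to onPreview are already a full JPEG file, so saving them is a plain file write. A minimal sketch of such a consumer follows; the directory and naming scheme are my own example choices, not the project's:

```kotlin
import java.io.File

// Writes an already-encoded JPEG byte array to disk and returns the file.
fun saveJpeg(bytes: ByteArray, dir: File, name: String): File {
    dir.mkdirs()
    return File(dir, name).apply { writeBytes(bytes) }
}
```

On Android, dir would typically come from something like context.getExternalFilesDir(Environment.DIRECTORY_PICTURES), and the write should happen off the main thread.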