Android Camera Principles: A ZSL Optimization for the Photo Capture Flow

1. Background

Taking photos is a basic phone feature. Optimizing capture performance mainly means shortening the interval between tapping the shutter and the photo being generated, and finding where time can be shaved off. Below we break down the span from opening the camera to completing a capture.

This span consists of three stages:

  • Capture session configuration: the stage before preview starts.
  • Preview: during this stage the camera continuously produces frames, which are rendered on the TextureView.
  • Capture: from tapping the shutter to the final image being produced.

Note: we treat the preview flow and the capture flow as one combined flow, because that is exactly where the optimization in this article focuses.

2. Core Idea


Preview frames exist to show the user that the camera is running, but normally a preview frame cannot be used directly as the captured photo. Why not? Because preview frames are small while capture frames are large, so they cannot simply be reused. But what if they could be reused, i.e. a preview frame could serve directly as the capture frame?
That is the focus of this article: reusing preview frame data directly.


To reuse preview frames directly, we must first guarantee that the preview frame size equals the actual capture frame size; otherwise the preview frames we grab are useless.
So we define a custom preview-side surface whose size matches that of the capture ImageReader's surface.
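To illustrate the constraint, here is a minimal, hypothetical helper (the class and method names are our own, not from the article's code) that accepts a preview size only if it exactly matches the capture size. In a real app the candidate list would come from `StreamConfigurationMap.getOutputSizes()`.

```java
// Hypothetical helper: ZSL-style reuse requires the preview (YUV) stream
// size to equal the capture size, so we only accept an exact match.
public class ZslSizeMatcher {
    /**
     * Returns a {width, height} pair from supportedSizes that exactly
     * matches the capture size, or null if ZSL-style reuse is impossible.
     */
    public static int[] matchPreviewToCapture(int[][] supportedSizes,
                                              int captureW, int captureH) {
        for (int[] s : supportedSizes) {
            if (s[0] == captureW && s[1] == captureH) {
                return s;
            }
        }
        return null; // no exact match: preview frames cannot be reused
    }
}
```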

2.1 Defining a full-size YUV ImageReader
private ImageReader mYuv1ImageReader;

An instance of this ImageReader is created during initialization:

        mYuv1ImageReader = ImageReader.newInstance(
                mCameraInfoCache.getYuvStream1Size().getWidth(),
                mCameraInfoCache.getYuvStream1Size().getHeight(),
                ImageFormat.YUV_420_888,
                YUV1_IMAGEREADER_SIZE);
        mYuv1ImageReader.setOnImageAvailableListener(mYuv1ImageListener, mOpsHandler);

2.2 The ImageReader availability callback
    ImageReader.OnImageAvailableListener mYuv1ImageListener =
            new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader reader) {
                    Image img = reader.acquireLatestImage();
                    if (img == null) {
                        Log.e(TAG, "Null image returned YUV1");
                        return;
                    }
                    if (mYuv1LastReceivedImage != null) {
                        mYuv1LastReceivedImage.close();
                    }
                    mYuv1LastReceivedImage = img;
                    if (++mYuv1ImageCounter % LOG_NTH_FRAME == 0) {
                        Log.v(TAG, "YUV1 buffer available, Frame #=" + mYuv1ImageCounter + " w=" + img.getWidth() + " h=" + img.getHeight() + " time=" + img.getTimestamp());
                    }

                }
            };

As long as preview is running, the underlying sensor keeps producing frames, so onImageAvailable(ImageReader reader) keeps getting invoked. Notice that inside it the incoming Image is stashed into a member variable.

2.3 Holding the latest preview Image
    // Handle to last received Image: allows ZSL to be implemented.
    private Image mYuv1LastReceivedImage = null;

As its name suggests, mYuv1LastReceivedImage holds the most recent preview frame, and that frame is full resolution, exactly the same size as the final output.

mYuv1LastReceivedImage therefore always stores the latest preview frame locally.
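The listener's close-the-previous, keep-the-latest pattern can be sketched as a tiny generic holder (an illustrative class of our own, not part of the article's code): `offer()` plays the role of onImageAvailable, and `take()` is what the capture path does when it consumes the frame.

```java
// Sketch of the "keep only the latest frame" pattern from the YUV listener:
// each new frame closes the previously held one, and the consumer takes
// ownership of whatever is newest.
public class LatestHolder<T extends AutoCloseable> {
    private T latest;

    /** Producer side (onImageAvailable): replace and close the old frame. */
    public synchronized void offer(T item) {
        if (latest != null) {
            try {
                latest.close(); // release the buffer back to the ImageReader
            } catch (Exception ignored) {
            }
        }
        latest = item;
    }

    /** Consumer side (the ZSL capture path): take ownership of the newest frame. */
    public synchronized T take() {
        T t = latest;
        latest = null;
        return t;
    }
}
```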

2.4 Creating the capture session

When the camera opens and the onOpened callback fires, we start creating the capture session:

    private CameraDevice.StateCallback mCameraStateCallback = new LoggingCallbacks.DeviceStateCallback() {
        @Override
        public void onOpened(CameraDevice camera) {
            super.onOpened(camera);
            startCaptureSession();
        }
    };

    // Create CameraCaptureSession. Callback will start repeating request with current parameters.
    private void startCaptureSession() {
        Log.v(TAG, "Configuring session..");
        List<Surface> outputSurfaces = new ArrayList<Surface>(4);

        outputSurfaces.add(mPreviewSurface);
        Log.v(TAG, "  .. added SurfaceView " + mCameraInfoCache.getPreviewSize().getWidth() +
                " x " + mCameraInfoCache.getPreviewSize().getHeight());

        outputSurfaces.add(mYuv1ImageReader.getSurface());
        Log.v(TAG, "  .. added YUV ImageReader " + mCameraInfoCache.getYuvStream1Size().getWidth() +
                " x " + mCameraInfoCache.getYuvStream1Size().getHeight());

        if (mIsDepthCloudSupported) {
            outputSurfaces.add(mDepthCloudImageReader.getSurface());
            Log.v(TAG, "  .. added Depth cloud ImageReader");
        }

        if (SECOND_YUV_IMAGEREADER_STREAM) {
            outputSurfaces.add(mYuv2ImageReader.getSurface());
            Log.v(TAG, "  .. added YUV ImageReader " + mCameraInfoCache.getYuvStream2Size().getWidth() +
                    " x " + mCameraInfoCache.getYuvStream2Size().getHeight());
        }

        if (SECOND_SURFACE_TEXTURE_STREAM) {
            outputSurfaces.add(mSurfaceTextureSurface);
            Log.v(TAG, "  .. added SurfaceTexture");
        }

        if (RAW_STREAM_ENABLE && mCameraInfoCache.rawAvailable()) {
            outputSurfaces.add(mRawImageReader.getSurface());
            Log.v(TAG, "  .. added Raw ImageReader " + mCameraInfoCache.getRawStreamSize().getWidth() +
                    " x " + mCameraInfoCache.getRawStreamSize().getHeight());
        }

        if (USE_REPROCESSING_IF_AVAIL && mCameraInfoCache.isYuvReprocessingAvailable()) {
            outputSurfaces.add(mJpegImageReader.getSurface());
            Log.v(TAG, "  .. added JPEG ImageReader " + mCameraInfoCache.getJpegStreamSize().getWidth() +
                    " x " + mCameraInfoCache.getJpegStreamSize().getHeight());
        }

        try {
            if (USE_REPROCESSING_IF_AVAIL && mCameraInfoCache.isYuvReprocessingAvailable()) {
                InputConfiguration inputConfig = new InputConfiguration(mCameraInfoCache.getYuvStream1Size().getWidth(),
                        mCameraInfoCache.getYuvStream1Size().getHeight(), ImageFormat.YUV_420_888);
                mCameraDevice.createReprocessableCaptureSession(inputConfig, outputSurfaces,
                        mSessionStateCallback, null);
                Log.v(TAG, "  Call to createReprocessableCaptureSession complete.");
            } else {
                mCameraDevice.createCaptureSession(outputSurfaces, mSessionStateCallback, null);
                Log.v(TAG, "  Call to createCaptureSession complete.");
            }

        } catch (CameraAccessException e) {
            Log.e(TAG, "Error configuring ISP.");
        }
    }

To use the ZSL approach, we must supply an InputConfiguration so that the camera HAL can reuse this frame data; only then is true zero shutter lag achieved.

                InputConfiguration inputConfig = new InputConfiguration(mCameraInfoCache.getYuvStream1Size().getWidth(),
                        mCameraInfoCache.getYuvStream1Size().getHeight(), ImageFormat.YUV_420_888);
                mCameraDevice.createReprocessableCaptureSession(inputConfig, outputSurfaces,
                        mSessionStateCallback, null);

mSessionStateCallback reports the state of the current capture session. In its onReady callback we create the ImageWriter object:

    ImageWriter mImageWriter;

    private CameraCaptureSession.StateCallback mSessionStateCallback = new LoggingCallbacks.SessionStateCallback() {
        @Override
        public void onReady(CameraCaptureSession session) {
            Log.v(TAG, "capture session onReady().  HAL capture session took: (" + (SystemClock.elapsedRealtime() - CameraTimer.t_session_go) + " ms)");
            mCurrentCaptureSession = session;
            issuePreviewCaptureRequest(false);

            if (session.isReprocessable()) {
                mImageWriter = ImageWriter.newInstance(session.getInputSurface(), IMAGEWRITER_SIZE);
                mImageWriter.setOnImageReleasedListener(
                        new ImageWriter.OnImageReleasedListener() {
                            @Override
                            public void onImageReleased(ImageWriter writer) {
                                Log.v(TAG, "ImageWriter.OnImageReleasedListener onImageReleased()");
                            }
                        }, null);
                Log.v(TAG, "Created ImageWriter.");
            }
            super.onReady(session);
        }
    };

session.getInputSurface() is the input surface corresponding to the InputConfiguration supplied earlier; it is bound to the ImageWriter at initialization. From then on, each saved last preview frame is queued into this ImageWriter object and fed straight down to the HAL.
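For reference, isYuvReprocessingAvailable() presumably checks the device's REQUEST_AVAILABLE_CAPABILITIES list for the YUV reprocessing capability. A framework-free sketch of that check follows; the constant value 7 mirrors `CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING` (on-device you would read the array from CameraCharacteristics instead of passing it in).

```java
// Sketch: decide whether YUV reprocessing (and hence this ZSL path) is
// available, given the device's advertised capability list.
public class ReprocessSupport {
    // Mirrors CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_YUV_REPROCESSING.
    static final int CAP_YUV_REPROCESSING = 7;

    public static boolean isYuvReprocessingAvailable(int[] capabilities) {
        for (int cap : capabilities) {
            if (cap == CAP_YUV_REPROCESSING) {
                return true;
            }
        }
        return false;
    }
}
```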

2.5 Setting up the preview
        try {
            CaptureRequest.Builder b1 = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            b1.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_USE_SCENE_MODE);
            b1.set(CaptureRequest.CONTROL_SCENE_MODE, CameraMetadata.CONTROL_SCENE_MODE_FACE_PRIORITY);
            if (AFtrigger) {
                b1.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO);
            } else {
                b1.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
            }

            b1.set(CaptureRequest.NOISE_REDUCTION_MODE, mCaptureNoiseMode);
            b1.set(CaptureRequest.EDGE_MODE, mCaptureEdgeMode);
            b1.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, mCaptureFace ? mCameraInfoCache.bestFaceDetectionMode() : CaptureRequest.STATISTICS_FACE_DETECT_MODE_OFF);

            Log.v(TAG, "  .. NR=" + mCaptureNoiseMode + "  Edge=" + mCaptureEdgeMode + "  Face=" + mCaptureFace);

            if (mCaptureYuv1) {
                b1.addTarget(mYuv1ImageReader.getSurface());
                Log.v(TAG, "  .. YUV1 on");
            }

            if (mCaptureRaw) {
                b1.addTarget(mRawImageReader.getSurface());
            }

            b1.addTarget(mPreviewSurface);

            if (mIsDepthCloudSupported && !mCaptureYuv1 && !mCaptureYuv2 && !mCaptureRaw) {
                b1.addTarget(mDepthCloudImageReader.getSurface());
            }

            if (mCaptureYuv2) {
                if (SECOND_SURFACE_TEXTURE_STREAM) {
                    b1.addTarget(mSurfaceTextureSurface);
                }
                if (SECOND_YUV_IMAGEREADER_STREAM) {
                    b1.addTarget(mYuv2ImageReader.getSurface());
                }
                Log.v(TAG, "  .. YUV2 on");
            }

            if (AFtrigger) {
                b1.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
                mCurrentCaptureSession.capture(b1.build(), mCaptureCallback, mOpsHandler);
                b1.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
            }
            mCurrentCaptureSession.setRepeatingRequest(b1.build(), mCaptureCallback, mOpsHandler);
        } catch (CameraAccessException e) {
            Log.e(TAG, "Could not access camera for issuePreviewCaptureRequest.");
        }

That is a lot of code, but the core is only three lines:

CaptureRequest.Builder b1 = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
b1.addTarget(mYuv1ImageReader.getSurface());
mCurrentCaptureSession.setRepeatingRequest(b1.build(), mCaptureCallback, mOpsHandler);

This adds the surface of the full-size YUV ImageReader defined earlier as a target. In the CaptureCallback we then need to grab the CaptureResult; that data will be needed again at capture time.

2.6 Handling the CaptureCallback
    private CameraCaptureSession.CaptureCallback mCaptureCallback = new LoggingCallbacks.SessionCaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
            if (!mFirstFrameArrived) {
                mFirstFrameArrived = true;
                long now = SystemClock.elapsedRealtime();
                long dt = now - CameraTimer.t0;
                long camera_dt = now - CameraTimer.t_session_go + CameraTimer.t_open_end - CameraTimer.t_open_start;
                long repeating_req_dt = now - CameraTimer.t_burst;
                Log.v(TAG, "App control to first frame: (" + dt + " ms)");
                Log.v(TAG, "HAL request to first frame: (" + repeating_req_dt + " ms) " + " Total HAL wait: (" + camera_dt + " ms)");
                mMyCameraCallback.receivedFirstFrame();
                mMyCameraCallback.performanceDataAvailable((int) dt, (int) camera_dt, null);
            }
            publishFrameData(result);
            // Used for reprocessing.
            mLastTotalCaptureResult = result;
            super.onCaptureCompleted(session, request, result);
        }
    };

mLastTotalCaptureResult is the CaptureResult captured during preview; it is used later during reprocessing:

    // Last total capture result
    TotalCaptureResult mLastTotalCaptureResult;

2.7 Taking the photo

Finally, the key step. Taking the photo here does not simply call the capture session's capture() method as usual, because a plain capture request would have to be re-issued and a fresh frame fetched from the sensor.
We already have frame data, the frame saved from preview, and this is where it becomes essential.

    void runReprocessing() {
        if (mYuv1LastReceivedImage == null) {
            Log.e(TAG, "No YUV Image available.");
            return;
        }
        mImageWriter.queueInputImage(mYuv1LastReceivedImage);
        Log.v(TAG, "  Sent YUV1 image to ImageWriter.queueInputImage()");
        try {
            CaptureRequest.Builder b1 = mCameraDevice.createReprocessCaptureRequest(mLastTotalCaptureResult);
            // Todo: Read current orientation instead of just assuming device is in native orientation
            b1.set(CaptureRequest.JPEG_ORIENTATION, mCameraInfoCache.sensorOrientation());
            b1.set(CaptureRequest.JPEG_QUALITY, (byte) 95);
            b1.set(CaptureRequest.NOISE_REDUCTION_MODE, mReprocessingNoiseMode);
            b1.set(CaptureRequest.EDGE_MODE, mReprocessingEdgeMode);
            b1.addTarget(mJpegImageReader.getSurface());
            mCurrentCaptureSession.capture(b1.build(), mReprocessingCaptureCallback, mOpsHandler);
            mReprocessingRequestNanoTime = System.nanoTime();
        } catch (CameraAccessException e) {
            Log.e(TAG, "Could not access camera for issuePreviewCaptureRequest.");
        }
        mYuv1LastReceivedImage = null;
        Log.v(TAG, "  Reprocessing request submitted.");
    }

mImageWriter.queueInputImage(mYuv1LastReceivedImage) places the last preview frame into the ImageWriter's input queue.

    // Reprocessing capture completed.
    private CameraCaptureSession.CaptureCallback mReprocessingCaptureCallback = new LoggingCallbacks.SessionCaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
            Log.v(TAG, "Reprocessing onCaptureCompleted()");
        }
    };

When reprocessing finishes, onCaptureCompleted(...) is invoked; the encoded JPEG itself is delivered to mJpegImageReader's surface, the target added in the reprocess request.
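The article does not show how the JPEG is read out. In mJpegImageReader's own OnImageAvailableListener you would acquire the Image and copy plane 0's ByteBuffer, since a JPEG-format Image delivers its encoded bytes in a single plane. A small sketch of that copy step (the helper name is ours):

```java
import java.nio.ByteBuffer;

// Sketch: a JPEG-format Image carries its encoded bytes in plane 0's
// ByteBuffer; this helper copies them into a byte[] ready to write to disk.
public class JpegBytes {
    public static byte[] drain(ByteBuffer buf) {
        byte[] out = new byte[buf.remaining()];
        buf.get(out); // bulk-copy the remaining bytes
        return out;
    }
}
```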

3. Summary

How fast is the ZSL approach? A full-size photo is captured in about 150 ms, which is remarkably fast.
(Sample screenshot from the original post omitted.)


(Diagram summarizing the optimized flow omitted.)

Author: 碼上就說
Link: https://www.jianshu.com/p/3beb7403025f
