I spent quite a bit of time analyzing this data flow. I have not done much Android work, so I am recording my own understanding here; corrections for any mistakes are welcome, and I will keep revising this over time.
The analysis starts from the app's onCreate: packages/apps/OMAPCamera/src/com/ti/omap4/android/camera/Camera.java
onCreate does a lot of initialization, but the statements we really care about are the following:
    // Don't set mSurfaceHolder here. We have it set ONLY within
    // surfaceChanged / surfaceDestroyed, other parts of the code
    // assume that when it is set, the surface is also set.
    SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
    SurfaceHolder holder = preview.getHolder();
    holder.addCallback(this);
SurfaceView is defined at: frameworks/base/core/java/android/view/SurfaceView.java
SurfaceHolder is defined at: frameworks/base/core/java/android/view/SurfaceHolder.java
The following article explains this well: http://blog.chinaunix.net/uid-9863638-id-1996383.html
SurfaceFlinger is part of Android multimedia. It is implemented as a service that provides system-wide surface-composer functionality, combining the 2D/3D surfaces of all applications.
Before discussing SurfaceFlinger itself, let's go over some display basics.
Each application may have one or more graphical interfaces, and each interface is called a surface (or window). In the figure above there are four surfaces: the home screen, plus the red, green, and blue surfaces; the two buttons actually belong to the home surface. This illustrates the two problems a graphics display has to solve:
a. Each surface has a position and size on screen, plus content to display. Content, size, and position may all change when we switch applications — how should those changes be handled?
b. Surfaces may overlap. In the simplified figure, green covers blue, while red covers green, blue, and the home screen underneath, possibly with some transparency. How should this layering relationship be described?
Consider the second problem first. Imagine a Z axis perpendicular to the screen plane; every surface gets a Z coordinate that determines its front-to-back position, which fully describes how surfaces cover one another. This ordering along Z is known in graphics as the Z-order.
For the first problem, we need a structure that records the position and size of an application's interface, plus a buffer holding the content to display. That is essentially what a surface is: a container that records the control information of the application interface (size, position, and so on) and owns a buffer dedicated to the content to be displayed.
One problem remains: what to do when surfaces overlap, possibly with transparency. This is exactly what SurfaceFlinger solves: it composes (merges) the individual surfaces into one main surface and sends the main surface's content to the FB/V4L2 output, producing the desired result on screen.
In practice there are two ways to merge surfaces: in software, which is SurfaceFlinger, or in hardware, which is the Overlay.
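To make the Z-order idea concrete, here is a minimal, self-contained sketch — not SurfaceFlinger code; the Surface struct and compose() are invented for illustration — that composes solid-color surfaces back to front with "src over" alpha blending, the way a software composer conceptually works:

```cpp
#include <algorithm>
#include <array>
#include <vector>

// One toy "surface": position, size, z-order and a solid fill color.
struct Surface {
    int x, y, w, h;
    int z;            // larger z is closer to the viewer
    float a, r, g, b; // alpha in [0,1], then RGB
};

// Compose surfaces into a main framebuffer, back to front (painter's
// algorithm), applying "src over dst" alpha blending per pixel.
std::vector<std::array<float, 3>> compose(int fbW, int fbH,
                                          std::vector<Surface> surfaces) {
    std::vector<std::array<float, 3>> fb(fbW * fbH, {0, 0, 0});
    std::sort(surfaces.begin(), surfaces.end(),
              [](const Surface& s1, const Surface& s2) { return s1.z < s2.z; });
    for (const Surface& s : surfaces) {
        for (int py = std::max(s.y, 0); py < s.y + s.h && py < fbH; ++py) {
            for (int px = std::max(s.x, 0); px < s.x + s.w && px < fbW; ++px) {
                auto& dst = fb[py * fbW + px];
                dst[0] = s.a * s.r + (1 - s.a) * dst[0];
                dst[1] = s.a * s.g + (1 - s.a) * dst[1];
                dst[2] = s.a * s.b + (1 - s.a) * dst[2];
            }
        }
    }
    return fb;
}
```

A half-transparent green surface at z=1 over an opaque red one at z=0 yields a 50/50 blend where they overlap, which is the effect described for the colored surfaces in the figure.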
First, subclass SurfaceView and implement the SurfaceHolder.Callback interface.
Why the interface is needed: the rule with SurfaceView is that all drawing must happen after the Surface has been created (a Surface can roughly be thought of as a mapping of video memory — content written into the Surface can be copied directly to video memory for display, which makes display very fast) and must stop before the Surface is destroyed. The surfaceCreated and surfaceDestroyed callbacks therefore bound the drawing code.
Methods that must be overridden:
(1) public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {} // fired when the size of the surface changes
(2) public void surfaceCreated(SurfaceHolder holder) {} // fired on creation; the drawing thread is usually started here
(3) public void surfaceDestroyed(SurfaceHolder holder) {} // fired on destruction; the drawing thread is usually stopped and released here
All of these are reimplemented in the app; the one to focus on is surfaceChanged.
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // Make sure we have a surface in the holder before proceeding.
        if (holder.getSurface() == null) {
            Log.d(TAG, "holder.getSurface() == null");
            return;
        }

        Log.v(TAG, "surfaceChanged. w=" + w + ". h=" + h);

        // We need to save the holder for later use, even when the mCameraDevice
        // is null. This could happen if onResume() is invoked after this
        // function.
        mSurfaceHolder = holder;

        // The mCameraDevice will be null if it fails to connect to the camera
        // hardware. In this case we will show a dialog and then finish the
        // activity, so it's OK to ignore it.
        if (mCameraDevice == null) return;

        // Sometimes surfaceChanged is called after onPause or before onResume.
        // Ignore it.
        if (mPausing || isFinishing()) return;

        setSurfaceLayout();

        // Set preview display if the surface is being created. Preview was
        // already started. Also restart the preview if display rotation has
        // changed. Sometimes this happens when the device is held in portrait
        // and camera app is opened. Rotation animation takes some time and
        // display rotation in onCreate may not be what we want.
        if (mCameraState == PREVIEW_STOPPED) { // Check whether the camera is already started; the first launch and re-entering an already-opened camera take different paths.
            startPreview(true);
            startFaceDetection();
        } else {
            if (Util.getDisplayRotation(this) != mDisplayRotation) {
                setDisplayOrientation();
            }
            if (holder.isCreating()) {
                // Set preview display if the surface is being created and preview
                // was already started. That means preview display was set to null
                // and we need to set it now.
                setPreviewDisplay(holder);
            }
        }

        // If first time initialization is not finished, send a message to do
        // it later. We want to finish surfaceChanged as soon as possible to let
        // user see preview first.
        if (!mFirstTimeInitialized) {
            mHandler.sendEmptyMessage(FIRST_TIME_INIT);
        } else {
            initializeSecondTime();
        }

        SurfaceView preview = (SurfaceView) findViewById(R.id.camera_preview);
        CameraInfo info = CameraHolder.instance().getCameraInfo()[mCameraId];
        boolean mirror = (info.facing == CameraInfo.CAMERA_FACING_FRONT);
        int displayRotation = Util.getDisplayRotation(this);
        int displayOrientation = Util.getDisplayOrientation(displayRotation, mCameraId);

        mTouchManager.initialize(preview.getHeight() / 3, preview.getHeight() / 3,
                preview, this, mirror, displayOrientation);
    }
    private void startPreview(boolean updateAll) {
        if (mPausing || isFinishing()) return;

        mFocusManager.resetTouchFocus();

        mCameraDevice.setErrorCallback(mErrorCallback);

        // If we're previewing already, stop the preview first (this will blank
        // the screen).
        if (mCameraState != PREVIEW_STOPPED) stopPreview();

        setPreviewDisplay(mSurfaceHolder);
        setDisplayOrientation();

        if (!mSnapshotOnIdle) {
            // If the focus mode is continuous autofocus, call cancelAutoFocus to
            // resume it because it may have been paused by autoFocus call.
            if (Parameters.FOCUS_MODE_CONTINUOUS_PICTURE.equals(mFocusManager.getFocusMode())) {
                mCameraDevice.cancelAutoFocus();
            }
            mFocusManager.setAeAwbLock(false); // Unlock AE and AWB.
        }

        if ( updateAll ) {
            Log.v(TAG, "Updating all parameters!");
            setCameraParameters(UPDATE_PARAM_INITIALIZE | UPDATE_PARAM_ZOOM | UPDATE_PARAM_PREFERENCE);
        } else {
            setCameraParameters(UPDATE_PARAM_MODE);
        }

        //setCameraParameters(UPDATE_PARAM_ALL);

        // Inform the main thread to go on the UI initialization.
        if (mCameraPreviewThread != null) {
            synchronized (mCameraPreviewThread) {
                mCameraPreviewThread.notify();
            }
        }

        try {
            Log.v(TAG, "startPreview");
            mCameraDevice.startPreview();
        } catch (Throwable ex) {
            closeCamera();
            throw new RuntimeException("startPreview failed", ex);
        }

        mZoomState = ZOOM_STOPPED;
        setCameraState(IDLE);
        mFocusManager.onPreviewStarted();
        if ( mTempBracketingEnabled ) {
            mFocusManager.setTempBracketingState(FocusManager.TempBracketingStates.ACTIVE);
        }

        if (mSnapshotOnIdle) {
            mHandler.post(mDoSnapRunnable);
        }
    }
Here I have to stress one point. I had long been looking for what decides whether the overlay is used or not, and this setPreviewDisplay method is the "culprit".
The parameter passed into setPreviewDisplay is the SurfaceView. The parameter's form changes on the way down to the HAL layer, but as I understand it, it is like a person changing clothes:
Zhang San in today's outfit is still the same Zhang San as yesterday in different clothes. At the HAL layer this parameter takes the form preview_stream_ops, as you will see below.
In CameraHal's setPreviewDisplay path, whether the overlay is used is decided by checking whether the preview_stream_ops parameter passed down is null — this is important.
This article only mentions it here; the overlay is not covered below. By default the whole data flow is analyzed assuming the overlay is NOT used — be careful not to confuse the two.
The data flow with the overlay will be analyzed in a separate chapter, together with a detailed look at what ultimately decides whether the overlay is used or not.
The flow is: app --> frameworks --> (via JNI) --> camera client --> camera service --> (via the hardware interface) --> hal_module --> HAL
It is well worth looking at the call sequence in the camera service layer:
    // set the Surface that the preview will use
    status_t CameraService::Client::setPreviewDisplay(const sp<Surface>& surface) {
        LOG1("setPreviewDisplay(%p) (pid %d)", surface.get(), getCallingPid());

        sp<IBinder> binder(surface != 0 ? surface->asBinder() : 0);
        sp<ANativeWindow> window(surface);
        return setPreviewWindow(binder, window);
    }

    status_t CameraService::Client::setPreviewWindow(const sp<IBinder>& binder,
            const sp<ANativeWindow>& window) {
        Mutex::Autolock lock(mLock);
        status_t result = checkPidAndHardware();
        if (result != NO_ERROR) return result;

        // return if no change in surface.
        if (binder == mSurface) {
            return NO_ERROR;
        }

        if (window != 0) {
            result = native_window_api_connect(window.get(), NATIVE_WINDOW_API_CAMERA);
            if (result != NO_ERROR) {
                LOGE("native_window_api_connect failed: %s (%d)", strerror(-result),
                        result);
                return result;
            }
        }

        // If preview has been already started, register preview buffers now.
        if (mHardware->previewEnabled()) {
            if (window != 0) {
                native_window_set_scaling_mode(window.get(),
                        NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW);
                native_window_set_buffers_transform(window.get(), mOrientation);
                result = mHardware->setPreviewWindow(window);
            }
        }

        if (result == NO_ERROR) {
            // Everything has succeeded. Disconnect the old window and remember
            // the new window.
            disconnectWindow(mPreviewWindow);
            mSurface = binder;
            mPreviewWindow = window;
        } else {
            // Something went wrong after we connected to the new window, so
            // disconnect here.
            disconnectWindow(window);
        }

        return result;
    }

    status_t setPreviewWindow(const sp<ANativeWindow>& buf)
    {
        LOGV("%s(%s) buf %p", __FUNCTION__, mName.string(), buf.get());

        if (mDevice->ops->set_preview_window) {
            mPreviewWindow = buf;
#ifdef OMAP_ENHANCEMENT_CPCAM
            mHalPreviewWindow.user = mPreviewWindow.get();
#else
            mHalPreviewWindow.user = this;
#endif
            LOGV("%s &mHalPreviewWindow %p mHalPreviewWindow.user %p", __FUNCTION__,
                    &mHalPreviewWindow, mHalPreviewWindow.user);
            return mDevice->ops->set_preview_window(mDevice,
                    buf.get() ? &mHalPreviewWindow.nw : 0);
        }
        return INVALID_OPERATION;
    }
The "essential change" I mentioned is really only a change of form: looked at more closely, preview_stream_ops is just the surface in another guise.
From here the call goes through hardware into the HAL module, and then into the HAL layer itself.
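The "same person, new clothes" idea is the classic C-ABI wrapper pattern: a struct of function pointers plus an opaque user field that points back at the real object. A toy sketch of the pattern follows — FakeWindow, stream_ops, and window_dequeue are invented illustration names, not the real preview_stream_ops definition:

```cpp
#include <string>

// A stand-in for the C++ window object that the service layer holds.
struct FakeWindow {
    std::string name;
    int dequeued = 0;
};

// A C-style ops table, in the spirit of preview_stream_ops: plain function
// pointers plus an opaque user pointer back to the real object.
struct stream_ops {
    void* user;
    int (*dequeue)(stream_ops* ops);
};

// Trampoline: recover the real object from the opaque pointer, then act on it.
static int window_dequeue(stream_ops* ops) {
    return ++static_cast<FakeWindow*>(ops->user)->dequeued;
}
```

Calls made through the ops table still land on the original window object, which is why a null ops pointer at the HAL boundary is equivalent to "no surface was supplied".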
    int camera_set_preview_window(struct camera_device * device,
            struct preview_stream_ops *window)
    {
        int rv = -EINVAL;
        ti_camera_device_t* ti_dev = NULL;

        LOGV("%s", __FUNCTION__);

        if(!device)
            return rv;

        ti_dev = (ti_camera_device_t*) device;

        rv = gCameraHals[ti_dev->cameraid]->setPreviewWindow(window);

        return rv;
    }
    status_t CameraHal::setPreviewWindow(struct preview_stream_ops *window)
    {
        status_t ret = NO_ERROR;
        CameraAdapter::BuffersDescriptor desc;

        LOG_FUNCTION_NAME;
        mSetPreviewWindowCalled = true;

        // If the camera service passes a null window, we destroy the existing
        // window and free the DisplayAdapter
        if(!window)
        {
            if(mDisplayAdapter.get() != NULL)
            {
                ///NULL window passed, destroy the display adapter if present
                CAMHAL_LOGD("NULL window passed, destroying display adapter");
                mDisplayAdapter.clear();
                ///@remarks If there was a window previously existing, we usually expect
                ///another valid window to be passed by the client
                ///@remarks so, we will wait until it passes a valid window to begin
                ///the preview again
                mSetPreviewWindowCalled = false;
            }
            CAMHAL_LOGD("NULL ANativeWindow passed to setPreviewWindow");
            return NO_ERROR;
        }else if(mDisplayAdapter.get() == NULL)
        {
            // Need to create the display adapter since it has not been created
            // Create display adapter
            mDisplayAdapter = new ANativeWindowDisplayAdapter();
            ret = NO_ERROR;
            if(!mDisplayAdapter.get() || ((ret=mDisplayAdapter->initialize())!=NO_ERROR))
            {
                if(ret!=NO_ERROR)
                {
                    mDisplayAdapter.clear();
                    CAMHAL_LOGEA("DisplayAdapter initialize failed");
                    LOG_FUNCTION_NAME_EXIT;
                    return ret;
                }
                else
                {
                    CAMHAL_LOGEA("Couldn't create DisplayAdapter");
                    LOG_FUNCTION_NAME_EXIT;
                    return NO_MEMORY;
                }
            }

            // DisplayAdapter needs to know where to get the CameraFrames from in order to display
            // Since CameraAdapter is the one that provides the frames, set it as the frame provider for DisplayAdapter
            mDisplayAdapter->setFrameProvider(mCameraAdapter);

            // Any dynamic errors that happen during the camera use case have to be propagated back to the application
            // via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
            // Set it as the error handler for the DisplayAdapter
            mDisplayAdapter->setErrorHandler(mAppCallbackNotifier.get());

            // Update the display adapter with the new window that is passed from CameraService
            ret = mDisplayAdapter->setPreviewWindow(window);
            if(ret!=NO_ERROR)
            {
                CAMHAL_LOGEB("DisplayAdapter setPreviewWindow returned error %d", ret);
            }

            if(mPreviewStartInProgress)
            {
                CAMHAL_LOGDA("setPreviewWindow called when preview running");
                // Start the preview since the window is now available
                ret = startPreview();
            }
        } else {
            // Update the display adapter with the new window that is passed from CameraService
            ret = mDisplayAdapter->setPreviewWindow(window);
            if ( (NO_ERROR == ret) && previewEnabled() ) {
                restartPreview();
            } else if (ret == ALREADY_EXISTS) {
                // ALREADY_EXISTS should be treated as a noop in this case
                ret = NO_ERROR;
            }
        }
        LOG_FUNCTION_NAME_EXIT;

        return ret;
    }
    status_t CameraHal::startPreview() {
        LOG_FUNCTION_NAME;

        // When tunneling is enabled during VTC, startPreview happens in 2 steps:
        // When the application sends the command CAMERA_CMD_PREVIEW_INITIALIZATION,
        // cameraPreviewInitialization() is called, which in turn causes the CameraAdapter
        // to move from loaded to idle state. And when the application calls startPreview,
        // the CameraAdapter moves from idle to executing state.
        //
        // If the application calls startPreview() without sending the command
        // CAMERA_CMD_PREVIEW_INITIALIZATION, then the function cameraPreviewInitialization()
        // AND startPreview() are executed. In other words, if the application calls
        // startPreview() without sending the command CAMERA_CMD_PREVIEW_INITIALIZATION,
        // then the CameraAdapter moves from loaded to idle to executing state in one shot.
        status_t ret = cameraPreviewInitialization(); // This call is very important; it is analyzed in detail below.

        // The flag mPreviewInitializationDone is set to true at the end of the function
        // cameraPreviewInitialization(). Therefore, if everything goes alright, then the
        // flag will be set. Sometimes, the function cameraPreviewInitialization() may
        // return prematurely if all the resources are not available for starting preview.
        // For example, if the preview window is not set, then it would return NO_ERROR.
        // Under such circumstances, one should return from startPreview as well and should
        // not continue execution. That is why, we check the flag and not the return value.
        if (!mPreviewInitializationDone) return ret;

        // Once startPreview is called, there is no need to continue to remember whether
        // the function cameraPreviewInitialization() was called earlier or not. And so
        // the flag mPreviewInitializationDone is reset here. Plus, this preserves the
        // current behavior of startPreview under the circumstances where the application
        // calls startPreview twice or more.
        mPreviewInitializationDone = false;

        // Enable the display adapter if present; the actual overlay enable happens when
        // we post the buffer. This is the "overlay happens" spot I had been looking for;
        // it will be discussed in more detail later.
        if(mDisplayAdapter.get() != NULL) {
            CAMHAL_LOGDA("Enabling display");
            int width, height;
            mParameters.getPreviewSize(&width, &height);

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
            ret = mDisplayAdapter->enableDisplay(width, height, &mStartPreview);
#else
            ret = mDisplayAdapter->enableDisplay(width, height, NULL);
#endif

            if ( ret != NO_ERROR ) {
                CAMHAL_LOGEA("Couldn't enable display");

                // FIXME: At this stage mStateSwitchLock is locked and unlock is supposed to be called
                // only from mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW)
                // below. But this will never happen because of goto error. Thus at next
                // startPreview() call CameraHAL will be deadlocked.
                // Need to revisit mStateSwitch lock, for now just abort the process.
                CAMHAL_ASSERT_X(false,
                        "At this stage mCameraAdapter->mStateSwitchLock is still locked, "
                        "deadlock is guaranteed");

                goto error;
            }

        }

        CAMHAL_LOGDA("Starting CameraAdapter preview mode");
        // Send START_PREVIEW command to adapter
        ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_START_PREVIEW); // From here the call reaches BaseCameraAdapter

        if(ret!=NO_ERROR) {
            CAMHAL_LOGEA("Couldn't start preview w/ CameraAdapter");
            goto error;
        }
        CAMHAL_LOGDA("Started preview");

        mPreviewEnabled = true;
        mPreviewStartInProgress = false;
        return ret;

        error:

        CAMHAL_LOGEA("Performing cleanup after error");

        // Do all the cleanup
        freePreviewBufs();
        mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
        if(mDisplayAdapter.get() != NULL) {
            mDisplayAdapter->disableDisplay(false);
        }
        mAppCallbackNotifier->stop();
        mPreviewStartInProgress = false;
        mPreviewEnabled = false;
        LOG_FUNCTION_NAME_EXIT;

        return ret;
    }
    case CameraAdapter::CAMERA_START_PREVIEW:
    {

        CAMHAL_LOGDA("Start Preview");

        if ( ret == NO_ERROR )
        {
            ret = setState(operation);
        }

        if ( ret == NO_ERROR )
        {
            ret = startPreview();
        }

        if ( ret == NO_ERROR )
        {
            ret = commitState();
        }
        else
        {
            ret |= rollbackState();
        }

        break;
    }
    status_t V4LCameraAdapter::startPreview()
    {
        status_t ret = NO_ERROR;

        LOG_FUNCTION_NAME;
        Mutex::Autolock lock(mPreviewBufsLock);

        if(mPreviewing) {
            ret = BAD_VALUE;
            goto EXIT;
        }

        for (int i = 0; i < mPreviewBufferCountQueueable; i++) {

            mVideoInfo->buf.index = i;
            mVideoInfo->buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            mVideoInfo->buf.memory = V4L2_MEMORY_MMAP;

            ret = v4lIoctl(mCameraHandle, VIDIOC_QBUF, &mVideoInfo->buf); // Queue the buffer into the driver
            if (ret < 0) {
                CAMHAL_LOGEA("VIDIOC_QBUF Failed");
                goto EXIT;
            }
            nQueued++;
        }

        ret = v4lStartStreaming();

        // Create and start preview thread for receiving buffers from V4L Camera
        if(!mCapturing) {
            mPreviewThread = new PreviewThread(this); // Start the preview thread
            CAMHAL_LOGDA("Created preview thread");
        }

        // Update the flag to indicate we are previewing
        mPreviewing = true;
        mCapturing = false;

    EXIT:
        LOG_FUNCTION_NAME_EXIT;
        return ret;
    }
    status_t V4LCameraAdapter::v4lStartStreaming () {
        status_t ret = NO_ERROR;
        enum v4l2_buf_type bufType;

        if (!mVideoInfo->isStreaming) {
            bufType = V4L2_BUF_TYPE_VIDEO_CAPTURE;

            ret = v4lIoctl (mCameraHandle, VIDIOC_STREAMON, &bufType); // Start the preview stream
            if (ret < 0) {
                CAMHAL_LOGEB("StartStreaming: Unable to start capture: %s", strerror(errno));
                return ret;
            }
            mVideoInfo->isStreaming = true;
        }
        return ret;
    }
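The QBUF-then-STREAMON order used above follows the V4L2 streaming model: buffers are queued into the driver first, streaming is turned on, and filled buffers come back out in order. A toy driver model illustrates the handshake — ToyDriver and its methods are invented for illustration, not a real V4L2 interface:

```cpp
#include <deque>

// A toy driver that enforces the V4L2 mmap-streaming handshake the adapter
// performs: queue buffers (VIDIOC_QBUF), start streaming (VIDIOC_STREAMON),
// then dequeue filled buffers (VIDIOC_DQBUF) and re-queue them after use.
struct ToyDriver {
    std::deque<int> queued;  // buffer indices currently owned by the "driver"
    bool streaming = false;

    bool qbuf(int index) { queued.push_back(index); return true; }
    bool streamon() {
        if (queued.empty()) return false;  // nothing to capture into
        streaming = true;
        return true;
    }
    // Returns the oldest queued buffer as "filled", or -1 if not possible.
    int dqbuf() {
        if (!streaming || queued.empty()) return -1;
        int index = queued.front();
        queued.pop_front();
        return index;
    }
};
```

The preview thread's job is exactly the dqbuf/re-qbuf half of this loop: pull a filled buffer out, hand its contents upward, and give the buffer back to the driver.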
    int V4LCameraAdapter::previewThread()
    {
        status_t ret = NO_ERROR;
        int width, height;
        CameraFrame frame;
        void *y_uv[2];
        int index = 0;
        int stride = 4096;
        char *fp = NULL;

        mParams.getPreviewSize(&width, &height);

        if (mPreviewing) {

            fp = this->GetFrame(index);
            if(!fp) {
                ret = BAD_VALUE;
                goto EXIT;
            }
            CameraBuffer *buffer = mPreviewBufs.keyAt(index);
            CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(buffer);
            if (!lframe) {
                ret = BAD_VALUE;
                goto EXIT;
            }

            debugShowFPS();

            if ( mFrameSubscribers.size() == 0 ) {
                ret = BAD_VALUE;
                goto EXIT;
            }
            // From here on, as I understand it, the frame data is converted and saved
            y_uv[0] = (void*) lframe->mYuv[0];
            //y_uv[1] = (void*) lframe->mYuv[1];
            //y_uv[1] = (void*) (lframe->mYuv[0] + height*stride);
            convertYUV422ToNV12Tiler ( (unsigned char*)fp, (unsigned char*)y_uv[0], width, height);
            CAMHAL_LOGVB("##...index= %d.;camera buffer= 0x%x; y= 0x%x; UV= 0x%x.",index, buffer, y_uv[0], y_uv[1] );

#ifdef SAVE_RAW_FRAMES
            unsigned char* nv12_buff = (unsigned char*) malloc(width*height*3/2);
            // Convert yuv422i to yuv420sp(NV12) & dump the frame to a file
            convertYUV422ToNV12 ( (unsigned char*)fp, nv12_buff, width, height);
            saveFile( nv12_buff, ((width*height)*3/2) );
            free (nv12_buff);
#endif

            frame.mFrameType = CameraFrame::PREVIEW_FRAME_SYNC;
            frame.mBuffer = buffer;
            frame.mLength = width*height*3/2;
            frame.mAlignment = stride;
            frame.mOffset = 0;
            frame.mTimestamp = systemTime(SYSTEM_TIME_MONOTONIC);
            frame.mFrameMask = (unsigned int)CameraFrame::PREVIEW_FRAME_SYNC;

            if (mRecording)
            {
                frame.mFrameMask |= (unsigned int)CameraFrame::VIDEO_FRAME_SYNC;
                mFramesWithEncoder++;
            }

            ret = setInitFrameRefCount(frame.mBuffer, frame.mFrameMask);
            if (ret != NO_ERROR) {
                CAMHAL_LOGDB("Error in setInitFrameRefCount %d", ret);
            } else {
                ret = sendFrameToSubscribers(&frame);
            }
        }
    EXIT:

        return ret;
    }
What I did not fully understand at first is how the video data in the driver gets associated with mPreviewBufs and index, such that buffer = mPreviewBufs.keyAt(index) can retrieve the CameraBuffer; this is explored in detail shortly.
Continuing on: once the video data is obtained, it is converted and, if needed, saved to a file for later use.
Finally the resulting CameraBuffer is used to fill a CameraFrame. This structure is crucial; as I understand it, the data ultimately flows back up via sendFrameToSubscribers(&frame).
So let's first trace how the driver's video data gets associated with mPreviewBufs and index.
This brings us to a very important method already mentioned above.
It is the first step of startPreview: cameraPreviewInitialization.
    status_t CameraHal::cameraPreviewInitialization()
    {

        status_t ret = NO_ERROR;
        CameraAdapter::BuffersDescriptor desc;
        CameraFrame frame;
        unsigned int required_buffer_count;
        unsigned int max_queueble_buffers;

#if PPM_INSTRUMENTATION || PPM_INSTRUMENTATION_ABS
        gettimeofday(&mStartPreview, NULL);
#endif

        LOG_FUNCTION_NAME;

        if (mPreviewInitializationDone) {
            return NO_ERROR;
        }

        if ( mPreviewEnabled ){
            CAMHAL_LOGDA("Preview already running");
            LOG_FUNCTION_NAME_EXIT;
            return ALREADY_EXISTS;
        }

        if ( NULL != mCameraAdapter ) {
            ret = mCameraAdapter->setParameters(mParameters); // Push the parameters down to the CameraAdapter
        }

        if ((mPreviewStartInProgress == false) && (mDisplayPaused == false)){
            ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_RESOLUTION_PREVIEW,( int ) &frame); // Query the preview resolution into frame via this command
            if ( NO_ERROR != ret ){
                CAMHAL_LOGEB("Error: CAMERA_QUERY_RESOLUTION_PREVIEW %d", ret);
                return ret;
            }

            ///Update the current preview width and height
            mPreviewWidth = frame.mWidth; // Initialize the width and height
            mPreviewHeight = frame.mHeight;
        }

        ///If we don't have the preview callback enabled and display adapter,
        if(!mSetPreviewWindowCalled || (mDisplayAdapter.get() == NULL)){
            CAMHAL_LOGD("Preview not started. Preview in progress flag set");
            mPreviewStartInProgress = true;
            ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_SWITCH_TO_EXECUTING);
            if ( NO_ERROR != ret ){
                CAMHAL_LOGEB("Error: CAMERA_SWITCH_TO_EXECUTING %d", ret);
                return ret;
            }
            return NO_ERROR;
        }

        if( (mDisplayAdapter.get() != NULL) && ( !mPreviewEnabled ) && ( mDisplayPaused ) )
        {
            CAMHAL_LOGDA("Preview is in paused state");

            mDisplayPaused = false;
            mPreviewEnabled = true;
            if ( NO_ERROR == ret )
            {
                ret = mDisplayAdapter->pauseDisplay(mDisplayPaused);

                if ( NO_ERROR != ret )
                {
                    CAMHAL_LOGEB("Display adapter resume failed %x", ret);
                }
            }
            //restart preview callbacks
            if(mMsgEnabled & CAMERA_MSG_PREVIEW_FRAME)
            {
                mAppCallbackNotifier->enableMsgType (CAMERA_MSG_PREVIEW_FRAME);
            }

            signalEndImageCapture();
            return ret;
        }

        required_buffer_count = atoi(mCameraProperties->get(CameraProperties::REQUIRED_PREVIEW_BUFS));

        ///Allocate the preview buffers
        ret = allocPreviewBufs(mPreviewWidth, mPreviewHeight, mParameters.getPreviewFormat(), required_buffer_count, max_queueble_buffers);

        if ( NO_ERROR != ret )
        {
            CAMHAL_LOGEA("Couldn't allocate buffers for Preview");
            goto error;
        }

        if ( mMeasurementEnabled )
        {

            ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_QUERY_BUFFER_SIZE_PREVIEW_DATA,
                    ( int ) &frame,
                    required_buffer_count);
            if ( NO_ERROR != ret )
            {
                return ret;
            }

            ///Allocate the preview data buffers
            ret = allocPreviewDataBufs(frame.mLength, required_buffer_count);
            if ( NO_ERROR != ret ) {
                CAMHAL_LOGEA("Couldn't allocate preview data buffers");
                goto error;
            }

            if ( NO_ERROR == ret )
            {
                desc.mBuffers = mPreviewDataBuffers;
                desc.mOffsets = mPreviewDataOffsets;
                desc.mFd = mPreviewDataFd;
                desc.mLength = mPreviewDataLength;
                desc.mCount = ( size_t ) required_buffer_count;
                desc.mMaxQueueable = (size_t) required_buffer_count;

                mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW_DATA,
                        ( int ) &desc);
            }

        }

        ///Pass the buffers to Camera Adapter
        desc.mBuffers = mPreviewBuffers;
        desc.mOffsets = mPreviewOffsets;
        desc.mFd = mPreviewFd;
        desc.mLength = mPreviewLength;
        desc.mCount = ( size_t ) required_buffer_count;
        desc.mMaxQueueable = (size_t) max_queueble_buffers;

        ret = mCameraAdapter->sendCommand(CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW,( int ) &desc);

        if ( NO_ERROR != ret )
        {
            CAMHAL_LOGEB("Failed to register preview buffers: 0x%x", ret);
            freePreviewBufs();
            return ret;
        }

        mAppCallbackNotifier->startPreviewCallbacks(mParameters, mPreviewBuffers, mPreviewOffsets, mPreviewFd, mPreviewLength, required_buffer_count);

        ///Start the callback notifier
        ret = mAppCallbackNotifier->start();

        if( ALREADY_EXISTS == ret )
        {
            //Already running, do nothing
            CAMHAL_LOGDA("AppCallbackNotifier already running");
            ret = NO_ERROR;
        }
        else if ( NO_ERROR == ret ) {
            CAMHAL_LOGDA("Started AppCallbackNotifier..");
            mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);
        }
        else
        {
            CAMHAL_LOGDA("Couldn't start AppCallbackNotifier");
            goto error;
        }

        if (ret == NO_ERROR) mPreviewInitializationDone = true;
        return ret;

        error:

        CAMHAL_LOGEA("Performing cleanup after error");

        //Do all the cleanup
        freePreviewBufs();
        mCameraAdapter->sendCommand(CameraAdapter::CAMERA_STOP_PREVIEW);
        if(mDisplayAdapter.get() != NULL)
        {
            mDisplayAdapter->disableDisplay(false);
        }
        mAppCallbackNotifier->stop();
        mPreviewStartInProgress = false;
        mPreviewEnabled = false;
        LOG_FUNCTION_NAME_EXIT;

        return ret;
    }
The relevant part of sendCommand is implemented as follows:
    case CameraAdapter::CAMERA_USE_BUFFERS_PREVIEW:
        CAMHAL_LOGDA("Use buffers for preview");
        desc = ( BuffersDescriptor * ) value1;

        if ( NULL == desc )
        {
            CAMHAL_LOGEA("Invalid preview buffers!");
            return -EINVAL;
        }

        if ( ret == NO_ERROR )
        {
            ret = setState(operation);
        }

        if ( ret == NO_ERROR )
        {
            Mutex::Autolock lock(mPreviewBufferLock);
            mPreviewBuffers = desc->mBuffers;
            mPreviewBuffersLength = desc->mLength;
            mPreviewBuffersAvailable.clear();
            mSnapshotBuffersAvailable.clear();
            for ( uint32_t i = 0 ; i < desc->mMaxQueueable ; i++ )
            {
                mPreviewBuffersAvailable.add(&mPreviewBuffers[i], 0); // This associates mPreviewBuffersAvailable with mPreviewBuffers
            }
            // initial ref count for undequeued buffers is 1 since buffer provider
            // is still holding on to it
            for ( uint32_t i = desc->mMaxQueueable ; i < desc->mCount ; i++ )
            {
                mPreviewBuffersAvailable.add(&mPreviewBuffers[i], 1);
            }
        }

        if ( NULL != desc )
        {
            ret = useBuffers(CameraAdapter::CAMERA_PREVIEW,
                    desc->mBuffers,
                    desc->mCount,
                    desc->mLength,
                    desc->mMaxQueueable);
        }

        if ( ret == NO_ERROR )
        {
            ret = commitState();
        }
        else
        {
            ret |= rollbackState();
        }

        break;
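The two loops above seed the per-buffer reference counts: queueable buffers start free at 0, the rest start at 1 because the buffer provider still holds them. A sketch of that bookkeeping, with a plain std::map standing in for the KeyedVector — BufferTable and its method names are invented for illustration, not HAL code:

```cpp
#include <cstddef>
#include <map>

// A stand-in for the KeyedVector<CameraBuffer*, int> refcount table: buffers
// up to maxQueueable start at refcount 0 (free), the remainder start at 1
// because the buffer provider is still holding on to them.
struct BufferTable {
    std::map<int*, int> refs;  // buffer pointer -> outstanding references

    void init(int* bufs, std::size_t count, std::size_t maxQueueable) {
        refs.clear();
        for (std::size_t i = 0; i < count; ++i)
            refs[&bufs[i]] = (i < maxQueueable) ? 0 : 1;
    }
    void acquire(int* b) { ++refs[b]; }  // frame handed to a subscriber
    void release(int* b) { --refs[b]; }  // subscriber returned the frame
    bool available(int* b) { return refs[b] == 0; }
};
```

setInitFrameRefCount() and sendFrameToSubscribers() in previewThread() are the acquire side of this scheme; the count returning to zero is what makes a buffer re-queueable to the driver.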
    status_t V4LCameraAdapter::UseBuffersPreview(CameraBuffer *bufArr, int num)
    {
        int ret = NO_ERROR;
        LOG_FUNCTION_NAME;

        if(NULL == bufArr) {
            ret = BAD_VALUE;
            goto EXIT;
        }

        ret = v4lInitMmap(num);
        if (ret == NO_ERROR) {
            for (int i = 0; i < num; i++) {
                //Associate each Camera internal buffer with the one from Overlay
                mPreviewBufs.add(&bufArr[i], i); // This associates mPreviewBufs with desc->mBuffers
                CAMHAL_LOGDB("Preview- buff [%d] = 0x%x ",i, mPreviewBufs.keyAt(i));
            }

            // Update the preview buffer count
            mPreviewBufferCount = num;
        }
    EXIT:
        LOG_FUNCTION_NAME_EXIT;
        return ret;
    }
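This is the buffer/index association asked about earlier: add() pairs each CameraBuffer pointer with its V4L2 buffer index, and keyAt() later recovers the pointer in previewThread() from the index the driver hands back. A stand-in sketch — PreviewBufMap is invented, and the real KeyedVector sorts by key, so the index-to-position correspondence is simplified here:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// A stand-in for KeyedVector<CameraBuffer*, int>: mPreviewBufs.add(&bufArr[i], i)
// records a (buffer pointer, V4L2 index) pair; keyAt() turns a position back
// into the CameraBuffer pointer used by the rest of the HAL.
struct PreviewBufMap {
    std::vector<std::pair<int*, int>> entries;  // (buffer, v4l2 index)

    void add(int* buf, int index) { entries.push_back({buf, index}); }
    int* keyAt(std::size_t i) { return entries[i].first; }
    int valueAt(std::size_t i) { return entries[i].second; }
};
```

So the driver only ever speaks in small integer indices, while the HAL speaks in CameraBuffer pointers; this table is the translation between the two vocabularies.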
And where does the initialization happen? It is done in the initialize() method of the CameraHal file:
    /**
      @brief Initialize the Camera HAL

      Creates CameraAdapter, AppCallbackNotifier, DisplayAdapter and MemoryManager

      @param None
      @return NO_ERROR - On success
              NO_MEMORY - On failure to allocate memory for any of the objects
      @remarks Camera Hal internal function

    */

    status_t CameraHal::initialize(CameraProperties::Properties* properties)
    {
        LOG_FUNCTION_NAME;

        int sensor_index = 0;
        const char* sensor_name = NULL;

        ///Initialize the event mask used for registering an event provider for AppCallbackNotifier
        ///Currently, registering all events as to be coming from CameraAdapter
        int32_t eventMask = CameraHalEvent::ALL_EVENTS;

        // Get my camera properties
        mCameraProperties = properties;

        if(!mCameraProperties)
        {
            goto fail_loop;
        }

        // Dump the properties of this Camera
        // will only print if DEBUG macro is defined
        mCameraProperties->dump();

        if (strcmp(CameraProperties::DEFAULT_VALUE, mCameraProperties->get(CameraProperties::CAMERA_SENSOR_INDEX)) != 0 )
        {
            sensor_index = atoi(mCameraProperties->get(CameraProperties::CAMERA_SENSOR_INDEX));
        }

        if (strcmp(CameraProperties::DEFAULT_VALUE, mCameraProperties->get(CameraProperties::CAMERA_NAME)) != 0 ) {
            sensor_name = mCameraProperties->get(CameraProperties::CAMERA_NAME);
        }
        CAMHAL_LOGDB("Sensor index= %d; Sensor name= %s", sensor_index, sensor_name);

        if (strcmp(sensor_name, V4L_CAMERA_NAME_USB) == 0) {
#ifdef V4L_CAMERA_ADAPTER
            mCameraAdapter = V4LCameraAdapter_Factory(sensor_index);
#endif
        }
        else {
#ifdef OMX_CAMERA_ADAPTER
            mCameraAdapter = OMXCameraAdapter_Factory(sensor_index);
#endif
        }

        if ( ( NULL == mCameraAdapter ) || (mCameraAdapter->initialize(properties)!=NO_ERROR))
        {
            CAMHAL_LOGEA("Unable to create or initialize CameraAdapter");
            mCameraAdapter = NULL;
            goto fail_loop;
        }

        mCameraAdapter->incStrong(mCameraAdapter);
        mCameraAdapter->registerImageReleaseCallback(releaseImageBuffers, (void *) this);
        mCameraAdapter->registerEndCaptureCallback(endImageCapture, (void *)this);

        if(!mAppCallbackNotifier.get())
        {
            /// Create the callback notifier
            mAppCallbackNotifier = new AppCallbackNotifier();
            if( ( NULL == mAppCallbackNotifier.get() ) || ( mAppCallbackNotifier->initialize() != NO_ERROR))
            {
                CAMHAL_LOGEA("Unable to create or initialize AppCallbackNotifier");
                goto fail_loop;
            }
        }

        if(!mMemoryManager.get())
        {
            /// Create Memory Manager
            mMemoryManager = new MemoryManager();
            if( ( NULL == mMemoryManager.get() ) || ( mMemoryManager->initialize() != NO_ERROR))
            {
                CAMHAL_LOGEA("Unable to create or initialize MemoryManager");
                goto fail_loop;
            }
        }

        ///Setup the class dependencies...

        ///AppCallbackNotifier has to know where to get the Camera frames and the events like auto focus lock etc from.
        ///CameraAdapter is the one which provides those events
        ///Set it as the frame and event providers for AppCallbackNotifier
        ///@remarks setEventProvider API takes in a bit mask of events for registering a provider for the different events
        ///         That way, if events can come from DisplayAdapter in future, we will be able to add it as provider
        ///         for any event
        mAppCallbackNotifier->setEventProvider(eventMask, mCameraAdapter);
        mAppCallbackNotifier->setFrameProvider(mCameraAdapter);

        ///Any dynamic errors that happen during the camera use case has to be propagated back to the application
        ///via CAMERA_MSG_ERROR. AppCallbackNotifier is the class that notifies such errors to the application
        ///Set it as the error handler for CameraAdapter
        mCameraAdapter->setErrorHandler(mAppCallbackNotifier.get());

        ///Start the callback notifier
        if(mAppCallbackNotifier->start() != NO_ERROR)
        {
            CAMHAL_LOGEA("Couldn't start AppCallbackNotifier");
            goto fail_loop;
        }

        CAMHAL_LOGDA("Started AppCallbackNotifier..");
        mAppCallbackNotifier->setMeasurements(mMeasurementEnabled);

        ///Initialize default parameters
        initDefaultParameters();


        if ( setParameters(mParameters) != NO_ERROR )
        {
            CAMHAL_LOGEA("Failed to set default parameters?!");
        }

        // register for sensor events
        mSensorListener = new SensorListener();
        if (mSensorListener.get()) {
            if (mSensorListener->initialize() == NO_ERROR) {
                mSensorListener->setCallbacks(orientation_cb, this);
                mSensorListener->enableSensor(SensorListener::SENSOR_ORIENTATION);
            } else {
                CAMHAL_LOGEA("Error initializing SensorListener. not fatal, continuing");
                mSensorListener.clear();
                mSensorListener = NULL;
            }
        }

        LOG_FUNCTION_NAME_EXIT;

        return NO_ERROR;

        fail_loop:

        ///Free up the resources because we failed somewhere up
        deinitialize();
        LOG_FUNCTION_NAME_EXIT;

        return NO_MEMORY;

    }
Let's look at what the setFrameProvider method does:
    void AppCallbackNotifier::setFrameProvider(FrameNotifier *frameNotifier)
    {
        LOG_FUNCTION_NAME;
        ///@remarks There is no NULL check here. We will check
        ///for NULL when we get the start command from CameraAdapter
        mFrameProvider = new FrameProvider(frameNotifier, this, frameCallbackRelay);
        if ( NULL == mFrameProvider )
        {
            CAMHAL_LOGEA("Error in creating FrameProvider");
        }
        else
        {
            //Register only for captured images and RAW for now
            //TODO: Register for and handle all types of frames
            mFrameProvider->enableFrameNotification(CameraFrame::IMAGE_FRAME);
            mFrameProvider->enableFrameNotification(CameraFrame::RAW_FRAME);
        }

        LOG_FUNCTION_NAME_EXIT;
    }
On the delivery side, BaseCameraAdapter pushes frames to every registered subscriber (including the FrameProvider created above) through the following method:
status_t BaseCameraAdapter::__sendFrameToSubscribers(CameraFrame* frame,
                                                     KeyedVector<int, frame_callback> *subscribers,
                                                     CameraFrame::FrameType frameType)
{
    size_t refCount = 0;
    status_t ret = NO_ERROR;
    frame_callback callback = NULL;

    frame->mFrameType = frameType;

    if ( (frameType == CameraFrame::PREVIEW_FRAME_SYNC) ||
         (frameType == CameraFrame::VIDEO_FRAME_SYNC) ||
         (frameType == CameraFrame::SNAPSHOT_FRAME) ){
        if (mFrameQueue.size() > 0){
            CameraFrame *lframe = (CameraFrame *)mFrameQueue.valueFor(frame->mBuffer);
            frame->mYuv[0] = lframe->mYuv[0];
            frame->mYuv[1] = frame->mYuv[0] + (frame->mLength + frame->mOffset)*2/3;
        }
        else{
            CAMHAL_LOGDA("Empty Frame Queue");
            return -EINVAL;
        }
    }

    if (NULL != subscribers) {
        refCount = getFrameRefCount(frame->mBuffer, frameType);

        if (refCount == 0) {
            CAMHAL_LOGDA("Invalid ref count of 0");
            return -EINVAL;
        }

        if (refCount > subscribers->size()) {
            CAMHAL_LOGEB("Invalid ref count for frame type: 0x%x", frameType);
            return -EINVAL;
        }

        CAMHAL_LOGVB("Type of Frame: 0x%x address: 0x%x refCount start %d",
                     frame->mFrameType,
                     ( uint32_t ) frame->mBuffer,
                     refCount);

        for ( unsigned int i = 0 ; i < refCount; i++ ) {
            frame->mCookie = ( void * ) subscribers->keyAt(i);
            callback = (frame_callback) subscribers->valueAt(i);

            if (!callback) {
                CAMHAL_LOGEB("callback not set for frame type: 0x%x", frameType);
                return -EINVAL;
            }

            callback(frame);
        }
    } else {
        CAMHAL_LOGEA("Subscribers is null??");
        return -EINVAL;
    }

    return ret;
}
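The important detail in the loop above is that the subscriber's key (the cookie, i.e. the receiving object) is stashed into `frame->mCookie` right before each callback fires, so a static relay function can later recover its receiver. An illustrative model of that dispatch (names mirror the C++, but this is a simplified sketch, not the real API):

```python
class Frame:
    def __init__(self, buffer):
        self.buffer = buffer
        self.cookie = None

def send_frame_to_subscribers(frame, subscribers, ref_count):
    # subscribers: list of (cookie, callback); ref_count limits how many
    # subscribers still hold a reference to this buffer.
    if not subscribers or ref_count == 0 or ref_count > len(subscribers):
        return -1  # mirrors the -EINVAL paths in the C++ code
    for cookie, callback in subscribers[:ref_count]:
        frame.cookie = cookie   # receiver recovered later via frame->mCookie
        callback(frame)
    return 0

received = []
subs = [("notifierA", lambda f: received.append((f.cookie, f.buffer)))]
status = send_frame_to_subscribers(Frame("buf0"), subs, ref_count=1)
```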
The callback fetched here is the frameCallbackRelay function that was passed in when setFrameProvider was called. Let's look at its implementation:
void AppCallbackNotifier::frameCallbackRelay(CameraFrame* caFrame)
{
    LOG_FUNCTION_NAME;
    AppCallbackNotifier *appcbn = (AppCallbackNotifier*) (caFrame->mCookie);
    appcbn->frameCallback(caFrame);
    LOG_FUNCTION_NAME_EXIT;
}

void AppCallbackNotifier::frameCallback(CameraFrame* caFrame)
{
    ///Post the event to the event queue of AppCallbackNotifier
    TIUTILS::Message msg;
    CameraFrame *frame;

    LOG_FUNCTION_NAME;

    if ( NULL != caFrame )
    {
        frame = new CameraFrame(*caFrame);
        if ( NULL != frame )
        {
            msg.command = AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME;
            msg.arg1 = frame;
            mFrameQ.put(&msg);
        }
        else
        {
            CAMHAL_LOGEA("Not enough resources to allocate CameraFrame");
        }
    }

    LOG_FUNCTION_NAME_EXIT;
}
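Note that frameCallback does not process the frame inline: it makes a copy of the CameraFrame and posts it to AppCallbackNotifier's own queue, so the adapter thread that delivered the frame returns immediately. A minimal sketch of that copy-then-enqueue pattern:

```python
import copy
import queue

NOTIFIER_CMD_PROCESS_FRAME = 0

class Notifier:
    def __init__(self):
        self.frame_q = queue.Queue()

    def frame_callback(self, ca_frame):
        # Copy first: the caller may reuse or free its frame after we return.
        frame = copy.copy(ca_frame)
        self.frame_q.put({"command": NOTIFIER_CMD_PROCESS_FRAME, "arg1": frame})

n = Notifier()
original = {"type": "PREVIEW", "buffer": "buf0"}
n.frame_callback(original)
msg = n.frame_q.get()
```

The copy is the whole point: the queued message owns its own CameraFrame, decoupling the producer's lifetime from the notification thread's.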
When AppCallbackNotifier is created, initialize() is called to do the initial setup:
/**
 * NotificationHandler class
 */

///Initialization function for AppCallbackNotifier
status_t AppCallbackNotifier::initialize()
{
    LOG_FUNCTION_NAME;

    mPreviewMemory = 0;

    mMeasurementEnabled = false;

    mNotifierState = NOTIFIER_STOPPED;

    ///Create the app notifier thread
    mNotificationThread = new NotificationThread(this);
    if(!mNotificationThread.get())
    {
        CAMHAL_LOGEA("Couldn't create Notification thread");
        return NO_MEMORY;
    }

    ///Start the notification thread
    status_t ret = mNotificationThread->run("NotificationThread", PRIORITY_URGENT_DISPLAY);
    if(ret!=NO_ERROR)
    {
        CAMHAL_LOGEA("Couldn't run NotificationThread");
        mNotificationThread.clear();
        return ret;
    }

    mUseMetaDataBufferMode = true;
    mRawAvailable = false;

    mRecording = false;
    mPreviewing = false;

    LOG_FUNCTION_NAME_EXIT;

    return ret;
}
bool AppCallbackNotifier::notificationThread()
{
    bool shouldLive = true;
    status_t ret;

    LOG_FUNCTION_NAME;

    //CAMHAL_LOGDA("Notification Thread waiting for message");
    ret = TIUTILS::MessageQueue::waitForMsg(&mNotificationThread->msgQ(),
                                            &mEventQ,
                                            &mFrameQ,
                                            AppCallbackNotifier::NOTIFIER_TIMEOUT);

    //CAMHAL_LOGDA("Notification Thread received message");

    if (mNotificationThread->msgQ().hasMsg()) {
        ///Received a message from CameraHal, process it
        CAMHAL_LOGDA("Notification Thread received message from Camera HAL");
        shouldLive = processMessage();
        if(!shouldLive) {
            CAMHAL_LOGDA("Notification Thread exiting.");
            return shouldLive;
        }
    }

    if(mEventQ.hasMsg()) {
        ///Received an event from one of the event providers
        CAMHAL_LOGDA("Notification Thread received an event from event provider (CameraAdapter)");
        notifyEvent();
    }

    if(mFrameQ.hasMsg()) {
        ///Received a frame from one of the frame providers
        //CAMHAL_LOGDA("Notification Thread received a frame from frame provider (CameraAdapter)");
        notifyFrame();
    }

    LOG_FUNCTION_NAME_EXIT;
    return shouldLive;
}
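The thread body above waits on three queues at once (commands from CameraHal, events, frames) and drains whichever has messages, exiting only when a command tells it to. A single-iteration sketch of that dispatch, with Python lists standing in for the message queues (names illustrative):

```python
def run_once(cmd_q, event_q, frame_q, handled):
    # One pass of the notification loop; returns False when the
    # thread should exit, True otherwise.
    if cmd_q:
        cmd = cmd_q.pop(0)
        if cmd == "EXIT":
            return False        # processMessage() said shouldLive = false
        handled.append(("cmd", cmd))
    if event_q:
        handled.append(("event", event_q.pop(0)))   # notifyEvent()
    if frame_q:
        handled.append(("frame", frame_q.pop(0)))   # notifyFrame()
    return True

handled = []
alive = run_once([], ["FOCUS"], ["frame0"], handled)
```

Command messages are checked first so a stop request wins over pending frames, matching the early return in the C++ code.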
void AppCallbackNotifier::notifyFrame()
{
    ///Receive and send the frame notifications to app
    TIUTILS::Message msg;
    CameraFrame *frame;
    MemoryHeapBase *heap;
    MemoryBase *buffer = NULL;
    sp<MemoryBase> memBase;
    void *buf = NULL;

    LOG_FUNCTION_NAME;

    {
        Mutex::Autolock lock(mLock);
        if(!mFrameQ.isEmpty()) {
            mFrameQ.get(&msg);
        } else {
            return;
        }
    }

    bool ret = true;

    frame = NULL;
    switch(msg.command)
    {
        case AppCallbackNotifier::NOTIFIER_CMD_PROCESS_FRAME:

            frame = (CameraFrame *) msg.arg1;
            if(!frame)
            {
                break;
            }

            if ( (CameraFrame::RAW_FRAME == frame->mFrameType )&&
                 ( NULL != mCameraHal ) &&
                 ( NULL != mDataCb) &&
                 ( NULL != mNotifyCb ) )
            {
                if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE) )
                {
#ifdef COPY_IMAGE_BUFFER
                    copyAndSendPictureFrame(frame, CAMERA_MSG_RAW_IMAGE);
#else
                    //TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
#endif
                }
                else {
                    if ( mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE_NOTIFY) ) {
                        mNotifyCb(CAMERA_MSG_RAW_IMAGE_NOTIFY, 0, 0, mCallbackCookie);
                    }
                    mFrameProvider->returnFrame(frame->mBuffer,
                                                (CameraFrame::FrameType) frame->mFrameType);
                }

                mRawAvailable = true;
            }
            else if ( (CameraFrame::IMAGE_FRAME == frame->mFrameType) &&
                      (NULL != mCameraHal) &&
                      (NULL != mDataCb) &&
                      (CameraFrame::ENCODE_RAW_YUV422I_TO_JPEG & frame->mQuirks) )
            {
                int encode_quality = 100, tn_quality = 100;
                int tn_width, tn_height;
                unsigned int current_snapshot = 0;
                Encoder_libjpeg::params *main_jpeg = NULL, *tn_jpeg = NULL;
                void* exif_data = NULL;
                const char *previewFormat = NULL;
                camera_memory_t* raw_picture = mRequestMemory(-1, frame->mLength, 1, NULL);

                if(raw_picture) {
                    buf = raw_picture->data;
                }

                CameraParameters parameters;
                char *params = mCameraHal->getParameters();
                const String8 strParams(params);
                parameters.unflatten(strParams);

                encode_quality = parameters.getInt(CameraParameters::KEY_JPEG_QUALITY);
                if (encode_quality < 0 || encode_quality > 100) {
                    encode_quality = 100;
                }

                tn_quality = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_QUALITY);
                if (tn_quality < 0 || tn_quality > 100) {
                    tn_quality = 100;
                }

                if (CameraFrame::HAS_EXIF_DATA & frame->mQuirks) {
                    exif_data = frame->mCookie2;
                }

                main_jpeg = (Encoder_libjpeg::params*)
                                malloc(sizeof(Encoder_libjpeg::params));

                // Video snapshot with LDCNSF on adds a few bytes start offset
                // and a few bytes on every line. They must be skipped.
                int rightCrop = frame->mAlignment/2 - frame->mWidth;

                CAMHAL_LOGDB("Video snapshot right crop = %d", rightCrop);
                CAMHAL_LOGDB("Video snapshot offset = %d", frame->mOffset);

                if (main_jpeg) {
                    main_jpeg->src = (uint8_t *)frame->mBuffer->mapped;
                    main_jpeg->src_size = frame->mLength;
                    main_jpeg->dst = (uint8_t*) buf;
                    main_jpeg->dst_size = frame->mLength;
                    main_jpeg->quality = encode_quality;
                    main_jpeg->in_width = frame->mAlignment/2; // use stride here
                    main_jpeg->in_height = frame->mHeight;
                    main_jpeg->out_width = frame->mAlignment/2;
                    main_jpeg->out_height = frame->mHeight;
                    main_jpeg->right_crop = rightCrop;
                    main_jpeg->start_offset = frame->mOffset;
                    if ( CameraFrame::FORMAT_YUV422I_UYVY & frame->mQuirks) {
                        main_jpeg->format = TICameraParameters::PIXEL_FORMAT_YUV422I_UYVY;
                    }
                    else { //if ( CameraFrame::FORMAT_YUV422I_YUYV & frame->mQuirks)
                        main_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV422I;
                    }
                }

                tn_width = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_WIDTH);
                tn_height = parameters.getInt(CameraParameters::KEY_JPEG_THUMBNAIL_HEIGHT);
                previewFormat = parameters.getPreviewFormat();

                if ((tn_width > 0) && (tn_height > 0) && ( NULL != previewFormat )) {
                    tn_jpeg = (Encoder_libjpeg::params*)
                                  malloc(sizeof(Encoder_libjpeg::params));
                    // if malloc fails just keep going and encode main jpeg
                    if (!tn_jpeg) {
                        tn_jpeg = NULL;
                    }
                }

                if (tn_jpeg) {
                    int width, height;
                    parameters.getPreviewSize(&width,&height);
                    current_snapshot = (mPreviewBufCount + MAX_BUFFERS - 1) % MAX_BUFFERS;
                    tn_jpeg->src = (uint8_t *)mPreviewBuffers[current_snapshot].mapped;
                    tn_jpeg->src_size = mPreviewMemory->size / MAX_BUFFERS;
                    tn_jpeg->dst_size = calculateBufferSize(tn_width,
                                                            tn_height,
                                                            previewFormat);
                    tn_jpeg->dst = (uint8_t*) malloc(tn_jpeg->dst_size);
                    tn_jpeg->quality = tn_quality;
                    tn_jpeg->in_width = width;
                    tn_jpeg->in_height = height;
                    tn_jpeg->out_width = tn_width;
                    tn_jpeg->out_height = tn_height;
                    tn_jpeg->right_crop = 0;
                    tn_jpeg->start_offset = 0;
                    tn_jpeg->format = CameraParameters::PIXEL_FORMAT_YUV420SP;
                }

                sp<Encoder_libjpeg> encoder = new Encoder_libjpeg(main_jpeg,
                                                  tn_jpeg,
                                                  AppCallbackNotifierEncoderCallback,
                                                  (CameraFrame::FrameType)frame->mFrameType,
                                                  this,
                                                  raw_picture,
                                                  exif_data, frame->mBuffer);
                gEncoderQueue.add(frame->mBuffer->mapped, encoder);
                encoder->run();
                encoder.clear();
                if (params != NULL)
                {
                    mCameraHal->putParameters(params);
                }
            }
            else if ( ( CameraFrame::IMAGE_FRAME == frame->mFrameType ) &&
                      ( NULL != mCameraHal ) &&
                      ( NULL != mDataCb) )
            {
                // CTS, MTS requirements: Every 'takePicture()' call
                // who registers a raw callback should receive one
                // as well. This is not always the case with
                // CameraAdapters though.
                if (!mCameraHal->msgTypeEnabled(CAMERA_MSG_RAW_IMAGE)) {
                    dummyRaw();
                } else {
                    mRawAvailable = false;
                }

#ifdef COPY_IMAGE_BUFFER
                {
                    Mutex::Autolock lock(mBurstLock);
#if defined(OMAP_ENHANCEMENT)
                    if ( mBurst )
                    {
                        copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_BURST_IMAGE);
                    }
                    else
#endif
                    {
                        copyAndSendPictureFrame(frame, CAMERA_MSG_COMPRESSED_IMAGE);
                    }
                }
#else
                //TODO: Find a way to map a Tiler buffer to a MemoryHeapBase
#endif
            }
            else if ( ( CameraFrame::VIDEO_FRAME_SYNC == frame->mFrameType ) &&
                      ( NULL != mCameraHal ) &&
                      ( NULL != mDataCb) &&
                      ( mCameraHal->msgTypeEnabled(CAMERA_MSG_VIDEO_FRAME) ) )
            {
                AutoMutex locker(mRecordingLock);
                if(mRecording)
                {
                    if(mUseMetaDataBufferMode)
                    {
                        camera_memory_t *videoMedatadaBufferMemory =
                            mVideoMetadataBufferMemoryMap.valueFor(frame->mBuffer->opaque);
                        video_metadata_t *videoMetadataBuffer = (video_metadata_t *) videoMedatadaBufferMemory->data;

                        if( (NULL == videoMedatadaBufferMemory) || (NULL == videoMetadataBuffer) || (NULL == frame->mBuffer) )
                        {
                            CAMHAL_LOGEA("Error! One of the video buffers is NULL");
                            break;
                        }

                        if ( mUseVideoBuffers )
                        {
                            CameraBuffer *vBuf = mVideoMap.valueFor(frame->mBuffer->opaque);
                            GraphicBufferMapper &mapper = GraphicBufferMapper::get();
                            Rect bounds;
                            bounds.left = 0;
                            bounds.top = 0;
                            bounds.right = mVideoWidth;
                            bounds.bottom = mVideoHeight;

                            void *y_uv[2];
                            mapper.lock((buffer_handle_t)vBuf, CAMHAL_GRALLOC_USAGE, bounds, y_uv);
                            y_uv[1] = y_uv[0] + mVideoHeight*4096;

                            structConvImage input = {frame->mWidth,
                                                     frame->mHeight,
                                                     4096,
                                                     IC_FORMAT_YCbCr420_lp,
                                                     (mmByte *)frame->mYuv[0],
                                                     (mmByte *)frame->mYuv[1],
                                                     frame->mOffset};

                            structConvImage output = {mVideoWidth,
                                                      mVideoHeight,
                                                      4096,
                                                      IC_FORMAT_YCbCr420_lp,
                                                      (mmByte *)y_uv[0],
                                                      (mmByte *)y_uv[1],
                                                      0};

                            VT_resizeFrame_Video_opt2_lp(&input, &output, NULL, 0);
                            mapper.unlock((buffer_handle_t)vBuf->opaque);
                            videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
                            /* FIXME remove cast */
                            videoMetadataBuffer->handle = (void *)vBuf->opaque;
                            videoMetadataBuffer->offset = 0;
                        }
                        else
                        {
                            videoMetadataBuffer->metadataBufferType = (int) kMetadataBufferTypeCameraSource;
                            videoMetadataBuffer->handle = camera_buffer_get_omx_ptr(frame->mBuffer);
                            videoMetadataBuffer->offset = frame->mOffset;
                        }

                        CAMHAL_LOGVB("mDataCbTimestamp : frame->mBuffer=0x%x, videoMetadataBuffer=0x%x, videoMedatadaBufferMemory=0x%x",
                                     frame->mBuffer->opaque, videoMetadataBuffer, videoMedatadaBufferMemory);

                        mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME,
                                         videoMedatadaBufferMemory, 0, mCallbackCookie);
                    }
                    else
                    {
                        //TODO: Need to revisit this, should ideally be mapping the TILER buffer using mRequestMemory
                        camera_memory_t* fakebuf = mRequestMemory(-1, sizeof(buffer_handle_t), 1, NULL);
                        if( (NULL == fakebuf) || ( NULL == fakebuf->data) || ( NULL == frame->mBuffer))
                        {
                            CAMHAL_LOGEA("Error! One of the video buffers is NULL");
                            break;
                        }

                        *reinterpret_cast<buffer_handle_t*>(fakebuf->data) = reinterpret_cast<buffer_handle_t>(frame->mBuffer->mapped);
                        mDataCbTimestamp(frame->mTimestamp, CAMERA_MSG_VIDEO_FRAME, fakebuf, 0, mCallbackCookie);
                        fakebuf->release(fakebuf);
                    }
                }
            }
            else if(( CameraFrame::SNAPSHOT_FRAME == frame->mFrameType ) &&
                    ( NULL != mCameraHal ) &&
                    ( NULL != mDataCb) &&
                    ( NULL != mNotifyCb)) {
                //When enabled, measurement data is sent instead of video data
                if ( !mMeasurementEnabled ) {
                    copyAndSendPreviewFrame(frame, CAMERA_MSG_POSTVIEW_FRAME);
                } else {
                    mFrameProvider->returnFrame(frame->mBuffer,
                                                (CameraFrame::FrameType) frame->mFrameType);
                }
            }
            else if ( ( CameraFrame::PREVIEW_FRAME_SYNC == frame->mFrameType ) &&
                      ( NULL != mCameraHal ) &&
                      ( NULL != mDataCb) &&
                      ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
                //When enabled, measurement data is sent instead of video data
                if ( !mMeasurementEnabled ) {
                    copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
                } else {
                    mFrameProvider->returnFrame(frame->mBuffer,
                                                (CameraFrame::FrameType) frame->mFrameType);
                }
            }
            else if ( ( CameraFrame::FRAME_DATA_SYNC == frame->mFrameType ) &&
                      ( NULL != mCameraHal ) &&
                      ( NULL != mDataCb) &&
                      ( mCameraHal->msgTypeEnabled(CAMERA_MSG_PREVIEW_FRAME)) ) {
                copyAndSendPreviewFrame(frame, CAMERA_MSG_PREVIEW_FRAME);
            } else {
                mFrameProvider->returnFrame(frame->mBuffer,
                                            ( CameraFrame::FrameType ) frame->mFrameType);
                CAMHAL_LOGDB("Frame type 0x%x is still unsupported!", frame->mFrameType);
            }

            break;

        default:

            break;

    };

    exit:

    if ( NULL != frame )
    {
        delete frame;
    }

    LOG_FUNCTION_NAME_EXIT;
}
void AppCallbackNotifier::copyAndSendPreviewFrame(CameraFrame* frame, int32_t msgType)
{
    camera_memory_t* picture = NULL;
    CameraBuffer *dest = NULL;

    // scope for lock
    {
        Mutex::Autolock lock(mLock);

        if(mNotifierState != AppCallbackNotifier::NOTIFIER_STARTED) {
            goto exit;
        }

        if (!mPreviewMemory || !frame->mBuffer) {
            CAMHAL_LOGDA("Error! One of the buffer is NULL");
            goto exit;
        }

        dest = &mPreviewBuffers[mPreviewBufCount];

        CAMHAL_LOGVB("%d:copy2Dto1D(%p, %p, %d, %d, %d, %d, %d,%s)",
                     __LINE__,
                     dest,
                     frame->mBuffer,
                     mPreviewWidth,
                     mPreviewHeight,
                     mPreviewStride,
                     2,
                     frame->mLength,
                     mPreviewPixelFormat);

        /* FIXME map dest */
        if ( NULL != dest && dest->mapped != NULL ) {
            // data sync frames don't need conversion
            if (CameraFrame::FRAME_DATA_SYNC == frame->mFrameType) {
                if ( (mPreviewMemory->size / MAX_BUFFERS) >= frame->mLength ) {
                    memcpy(dest->mapped, (void*) frame->mBuffer->mapped, frame->mLength);
                } else {
                    memset(dest->mapped, 0, (mPreviewMemory->size / MAX_BUFFERS));
                }
            } else {
                if ((NULL == frame->mYuv[0]) || (NULL == frame->mYuv[1])){
                    CAMHAL_LOGEA("Error! One of the YUV Pointer is NULL");
                    goto exit;
                }
                else{
                    copy2Dto1D(dest->mapped,
                               frame->mYuv,
                               mPreviewWidth,
                               mPreviewHeight,
                               mPreviewStride,
                               frame->mOffset,
                               2,
                               frame->mLength,
                               mPreviewPixelFormat);
                }
            }
        }
    }

    exit:
    mFrameProvider->returnFrame(frame->mBuffer, (CameraFrame::FrameType) frame->mFrameType);

    if((mNotifierState == AppCallbackNotifier::NOTIFIER_STARTED) &&
       mCameraHal->msgTypeEnabled(msgType) &&
       (dest != NULL) && (dest->mapped != NULL)) {
        AutoMutex locker(mLock);
        if ( mPreviewMemory )
            mDataCb(msgType, mPreviewMemory, mPreviewBufCount, NULL, mCallbackCookie);
    }

    // increment for next buffer
    mPreviewBufCount = (mPreviewBufCount + 1) % AppCallbackNotifier::MAX_BUFFERS;
}
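mPreviewBufCount behaves as a ring-buffer index: after each frame is copied out, the count advances modulo MAX_BUFFERS, and the thumbnail path in notifyFrame reads back the most recently filled slot as `(count + MAX_BUFFERS - 1) % MAX_BUFFERS`. The arithmetic in isolation:

```python
MAX_BUFFERS = 4  # matches AppCallbackNotifier::MAX_BUFFERS in spirit

def next_slot(count):
    # the increment at the end of copyAndSendPreviewFrame
    return (count + 1) % MAX_BUFFERS

def last_filled_slot(count):
    # the slot written just before `count` became current (thumbnail source)
    return (count + MAX_BUFFERS - 1) % MAX_BUFFERS

slots = []
count = 0
for _ in range(6):
    slots.append(count)
    count = next_slot(count)
```

Adding MAX_BUFFERS before subtracting 1 keeps the expression non-negative, which matters in C where the count is unsigned.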
camera_data_callback mDataCb;
The implementation behind this callback is the key part: it is actually wired up from CameraService, through the following chain:
1. CameraService calls mHardware->setCallbacks(notifyCallback, dataCallback, dataCallbackTimestamp, (void *)cameraId);
2. CameraHardwareInterface calls mDevice->ops->set_callbacks(mDevice, __notify_cb, __data_cb, __data_cb_timestamp, __get_memory, this);
3. camerahal_module calls gCameraHals[ti_dev->cameraid]->setCallbacks(notify_cb, data_cb, data_cb_timestamp, get_memory, user);
4. CameraHal calls mAppCallbackNotifier->setCallbacks(this, notify_cb, data_cb, data_cb_timestamp, get_memory, user);
5. which brings us to AppCallbackNotifier. Let's look at its setCallbacks implementation:
void AppCallbackNotifier::setCallbacks(CameraHal* cameraHal,
                                       camera_notify_callback notify_cb,
                                       camera_data_callback data_cb,
                                       camera_data_timestamp_callback data_cb_timestamp,
                                       camera_request_memory get_memory,
                                       void *user)
{
    Mutex::Autolock lock(mLock);

    LOG_FUNCTION_NAME;

    mCameraHal = cameraHal;
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mRequestMemory = get_memory;
    mCallbackCookie = user;

    LOG_FUNCTION_NAME_EXIT;
}
One point deserves care here: my earlier statement that mDataCb points directly at the callback defined in CameraService is not quite accurate. More precisely, whatever mDataCb invokes eventually ends up in the callback that CameraService registered. It is worth spelling out this relay: mDataCb actually points to __data_cb defined in CameraHardwareInterface, which is wired up by the following call:

    mDevice->ops->set_callbacks(mDevice,
                                __notify_cb,
                                __data_cb,
                                __data_cb_timestamp,
                                __get_memory,
                                this);

Here is the definition of __data_cb:
static void __data_cb(int32_t msg_type,
                      const camera_memory_t *data, unsigned int index,
                      camera_frame_metadata_t *metadata,
                      void *user)
{
    LOGV("%s", __FUNCTION__);
    CameraHardwareInterface *__this =
            static_cast<CameraHardwareInterface *>(user);
    sp<CameraHeapMemory> mem(static_cast<CameraHeapMemory *>(data->handle));
    if (index >= mem->mNumBufs) {
        LOGE("%s: invalid buffer index %d, max allowed is %d", __FUNCTION__,
             index, mem->mNumBufs);
        return;
    }
    __this->mDataCb(msg_type, mem->mBuffers[index], metadata, __this->mCbUser);
}
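So the data callback is layered: the HAL's mDataCb is really CameraHardwareInterface's __data_cb, which unwraps its `user` cookie back into the interface object and forwards to the callback CameraService installed. A sketch of that relay (names mirror the C++; the buffer handling is simplified away):

```python
calls = []

def service_data_cb(msg_type, data, user):
    # stands in for what CameraService::Client::dataCallback receives
    calls.append((msg_type, data, user))

class HardwareInterface:
    def __init__(self, service_cb, cb_user):
        self.mDataCb = service_cb
        self.mCbUser = cb_user

def __data_cb(msg_type, data, user):
    # `user` is the CameraHardwareInterface instance, as in the C++ cast
    this = user
    this.mDataCb(msg_type, data, this.mCbUser)

hw = HardwareInterface(service_data_cb, "clientCookie")
__data_cb("CAMERA_MSG_PREVIEW_FRAME", b"yuvdata", hw)
```

Each layer passes its own cookie down and restores it on the way back up, which is how a C-style callback with a single `void *user` slot threads object identity through the stack.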
/** Set the notification and data callbacks */
void setCallbacks(notify_callback notify_cb,
                  data_callback data_cb,
                  data_callback_timestamp data_cb_timestamp,
                  void* user)
{
    mNotifyCb = notify_cb;
    mDataCb = data_cb;
    mDataCbTimestamp = data_cb_timestamp;
    mCbUser = user;

    LOGV("%s(%s)", __FUNCTION__, mName.string());

    if (mDevice->ops->set_callbacks) {
        mDevice->ops->set_callbacks(mDevice,
                                    __notify_cb,
                                    __data_cb,
                                    __data_cb_timestamp,
                                    __get_memory,
                                    this);
    }
}
Let's keep following the trail and see how the data actually reaches the app. Here is the dataCallback defined in CameraService:
void CameraService::Client::dataCallback(int32_t msgType,
        const sp<IMemory>& dataPtr, camera_frame_metadata_t *metadata, void* user) {
    LOG2("dataCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) return;
    if (!client->lockIfMessageWanted(msgType)) return;

    if (dataPtr == 0 && metadata == NULL) {
        LOGE("Null data returned in data callback");
        client->handleGenericNotify(CAMERA_MSG_ERROR, UNKNOWN_ERROR, 0);
        return;
    }

    switch (msgType & ~CAMERA_MSG_PREVIEW_METADATA) {
        case CAMERA_MSG_PREVIEW_FRAME:
            client->handlePreviewData(msgType, dataPtr, metadata);
            break;
        case CAMERA_MSG_POSTVIEW_FRAME:
            client->handlePostview(dataPtr);
            break;
        case CAMERA_MSG_RAW_IMAGE:
            client->handleRawPicture(dataPtr);
            break;
        case CAMERA_MSG_COMPRESSED_IMAGE:
            client->handleCompressedPicture(dataPtr);
            break;
#ifdef OMAP_ENHANCEMENT
        case CAMERA_MSG_COMPRESSED_BURST_IMAGE:
            client->handleCompressedBurstPicture(dataPtr);
            break;
#endif
        default:
            client->handleGenericData(msgType, dataPtr, metadata);
            break;
    }
}
Next, the client side (the native Camera class):
// callback from camera service when frame or image is ready
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr, metadata);
    }
}
// connect to camera service
static void android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
    jobject weak_this, jint cameraId)
{
    sp<Camera> camera = Camera::connect(cameraId);

    if (camera == NULL) {
        jniThrowRuntimeException(env, "Fail to connect to camera service");
        return;
    }

    // make sure camera hardware is alive
    if (camera->getStatus() != NO_ERROR) {
        jniThrowRuntimeException(env, "Camera initialization failed");
        return;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        jniThrowRuntimeException(env, "Can't find android/hardware/Camera");
        return;
    }

    // We use a weak reference so the Camera object can be garbage collected.
    // The reference is only used as a proxy for callbacks.
    sp<JNICameraContext> context = new JNICameraContext(env, weak_this, clazz, camera);
    context->incStrong(thiz);
    camera->setListener(context);

    // save context in opaque field
    env->SetIntField(thiz, fields.context, (int)context.get());
}
// provides persistent context for calls from native code to Java
class JNICameraContext: public CameraListener
{
public:
    JNICameraContext(JNIEnv* env, jobject weak_this, jclass clazz, const sp<Camera>& camera);
    ~JNICameraContext() { release(); }
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr,
                          camera_frame_metadata_t *metadata);
    virtual void postDataTimestamp(nsecs_t timestamp, int32_t msgType, const sp<IMemory>& dataPtr);
    void postMetadata(JNIEnv *env, int32_t msgType, camera_frame_metadata_t *metadata);
    void addCallbackBuffer(JNIEnv *env, jbyteArray cbb, int msgType);
    void setCallbackMode(JNIEnv *env, bool installed, bool manualMode);
    sp<Camera> getCamera() { Mutex::Autolock _l(mLock); return mCamera; }
    bool isRawImageCallbackBufferAvailable() const;
    void release();

private:
    void copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType);
    void clearCallbackBuffers_l(JNIEnv *env, Vector<jbyteArray> *buffers);
    void clearCallbackBuffers_l(JNIEnv *env);
    jbyteArray getCallbackBuffer(JNIEnv *env, Vector<jbyteArray> *buffers, size_t bufferSize);

    jobject mCameraJObjectWeak;     // weak reference to java object
    jclass mCameraJClass;           // strong reference to java class
    sp<Camera> mCamera;             // strong reference to native object
    jclass mFaceClass;              // strong reference to Face class
    jclass mRectClass;              // strong reference to Rect class
    Mutex mLock;

    /*
     * Global reference application-managed raw image buffer queue.
     *
     * Manual-only mode is supported for raw image callbacks, which is
     * set whenever method addCallbackBuffer() with msgType =
     * CAMERA_MSG_RAW_IMAGE is called; otherwise, null is returned
     * with raw image callbacks.
     */
    Vector<jbyteArray> mRawImageCallbackBuffers;

    /*
     * Application-managed preview buffer queue and the flags
     * associated with the usage of the preview buffer callback.
     */
    Vector<jbyteArray> mCallbackBuffers; // Global reference application managed byte[]
    bool mManualBufferMode;              // Whether to use application managed buffers.
    bool mManualCameraCallbackSet;       // Whether the callback has been set, used to
                                         // reduce unnecessary calls to set the callback.
};
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr,
                                camera_frame_metadata_t *metadata)
{
    // VM pointer will be NULL if object is released
    Mutex::Autolock _l(mLock);
    JNIEnv *env = AndroidRuntime::getJNIEnv();
    if (mCameraJObjectWeak == NULL) {
        LOGW("callback on dead camera object");
        return;
    }

    int32_t dataMsgType = msgType & ~CAMERA_MSG_PREVIEW_METADATA;

    // return data based on callback type
    switch (dataMsgType) {
        case CAMERA_MSG_VIDEO_FRAME:
            // should never happen
            break;

        // For backward-compatibility purpose, if there is no callback
        // buffer for raw image, the callback returns null.
        case CAMERA_MSG_RAW_IMAGE:
            LOGV("rawCallback");
            if (mRawImageCallbackBuffers.isEmpty()) {
                env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                        mCameraJObjectWeak, dataMsgType, 0, 0, NULL);
            } else {
                copyAndPost(env, dataPtr, dataMsgType);
            }
            break;

        // There is no data.
        case 0:
            break;

        default:
            LOGV("dataCallback(%d, %p)", dataMsgType, dataPtr.get());
            copyAndPost(env, dataPtr, dataMsgType);
            break;
    }

    // post frame metadata to Java
    if (metadata && (msgType & CAMERA_MSG_PREVIEW_METADATA)) {
        postMetadata(env, CAMERA_MSG_PREVIEW_METADATA, metadata);
    }
}
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    jbyteArray obj = NULL;

    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        ssize_t offset;
        size_t size;
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        LOGV("copyAndPost: off=%ld, size=%d", offset, size);
        uint8_t *heapBase = (uint8_t*)heap->base();

        if (heapBase != NULL) {
            const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);

            if (msgType == CAMERA_MSG_RAW_IMAGE) {
                obj = getCallbackBuffer(env, &mRawImageCallbackBuffers, size);
            } else if (msgType == CAMERA_MSG_PREVIEW_FRAME && mManualBufferMode) {
                obj = getCallbackBuffer(env, &mCallbackBuffers, size);

                if (mCallbackBuffers.isEmpty()) {
                    LOGV("Out of buffers, clearing callback!");
                    mCamera->setPreviewCallbackFlags(CAMERA_FRAME_CALLBACK_FLAG_NOOP);
                    mManualCameraCallbackSet = false;

                    if (obj == NULL) {
                        return;
                    }
                }
            } else {
                LOGV("Allocating callback buffer");
                obj = env->NewByteArray(size);
            }

            if (obj == NULL) {
                LOGE("Couldn't allocate byte array for JPEG data");
                env->ExceptionClear();
            } else {
                env->SetByteArrayRegion(obj, 0, size, data);
            }
        } else {
            LOGE("image heap is NULL");
        }
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
    if (obj) {
        env->DeleteLocalRef(obj);
    }
}
From here, the callback enters the camera framework layer:
frameworks/base/core/java/android/hardware/Camera.java
private static void postEventFromNative(Object camera_ref,
                                        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m);
    }
}
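postEventFromNative holds only a weak reference to the Java Camera object, so a Camera that has already been garbage collected simply drops the event instead of being kept alive by native callbacks. The same pattern, sketched in Python:

```python
import weakref

class Camera:
    def __init__(self):
        self.messages = []

def post_event_from_native(camera_ref, what):
    c = camera_ref()            # equivalent of WeakReference.get()
    if c is None:
        return False            # camera was collected: drop the event
    c.messages.append(what)     # stands in for mEventHandler.sendMessage(...)
    return True

cam = Camera()
ref = weakref.ref(cam)
delivered = post_event_from_native(ref, "CAMERA_MSG_SHUTTER")
del cam                         # drop the only strong reference
dropped = post_event_from_native(ref, "CAMERA_MSG_SHUTTER")
```

This is why the comment in native_setup says the weak reference "is only used as a proxy for callbacks": the native side must never extend the Java object's lifetime.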
private class EventHandler extends Handler
{
    private Camera mCamera;

    public EventHandler(Camera c, Looper looper) {
        super(looper);
        mCamera = c;
    }

    @Override
    public void handleMessage(Message msg) {
        switch(msg.what) {
        case CAMERA_MSG_SHUTTER:
            if (mShutterCallback != null) {
                mShutterCallback.onShutter();
            }
            return;

        case CAMERA_MSG_RAW_IMAGE:
            if (mRawImageCallback != null) {
                mRawImageCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_COMPRESSED_IMAGE:
            if (mJpegCallback != null) {
                mJpegCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_FRAME:
            if (mPreviewCallback != null) {
                PreviewCallback cb = mPreviewCallback;
                if (mOneShot) {
                    // Clear the callback variable before the callback
                    // in case the app calls setPreviewCallback from
                    // the callback function
                    mPreviewCallback = null;
                } else if (!mWithBuffer) {
                    // We're faking the camera preview mode to prevent
                    // the app from being flooded with preview frames.
                    // Set to oneshot mode again.
                    setHasPreviewCallback(true, false);
                }
                cb.onPreviewFrame((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_POSTVIEW_FRAME:
            if (mPostviewCallback != null) {
                mPostviewCallback.onPictureTaken((byte[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_FOCUS:
            if (mAutoFocusCallback != null) {
                mAutoFocusCallback.onAutoFocus(msg.arg1 == 0 ? false : true, mCamera);
            }
            return;

        case CAMERA_MSG_ZOOM:
            if (mZoomListener != null) {
                mZoomListener.onZoomChange(msg.arg1, msg.arg2 != 0, mCamera);
            }
            return;

        case CAMERA_MSG_PREVIEW_METADATA:
            if (mFaceListener != null) {
                mFaceListener.onFaceDetection((Face[])msg.obj, mCamera);
            }
            return;

        case CAMERA_MSG_ERROR :
            Log.e(TAG, "Error " + msg.arg1);
            if (mErrorCallback != null) {
                mErrorCallback.onError(msg.arg1, mCamera);
            }
            return;

        default:
            Log.e(TAG, "Unknown message type " + msg.what);
            return;
        }
    }
}
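The CAMERA_MSG_PREVIEW_FRAME case above clears mPreviewCallback before invoking it when in one-shot mode, so the app can safely re-register from inside onPreviewFrame and only one frame is delivered per registration. That ordering, sketched in Python (class and method names are illustrative):

```python
class Handler:
    def __init__(self, callback, one_shot):
        self.preview_callback = callback
        self.one_shot = one_shot
        self.frames = []

    def handle_preview_frame(self, data):
        if self.preview_callback is None:
            return
        cb = self.preview_callback
        if self.one_shot:
            # clear before invoking, in case the app re-registers inside cb
            self.preview_callback = None
        cb(self, data)

def on_preview_frame(handler, data):
    handler.frames.append(data)

h = Handler(on_preview_frame, one_shot=True)
h.handle_preview_frame(b"frame1")
h.handle_preview_frame(b"frame2")   # ignored: callback was already cleared
```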
By default the preview callback does nothing: the preview data can still be propagated upward, but nothing is delivered unless the app has registered a callback via setPreviewCallback. Looking at the CAMERA_MSG_PREVIEW_FRAME case above in more detail: the handler checks whether a PreviewCallback (the interface defined in the framework) has been installed through setPreviewCallback and invokes it if so. Its onPreviewFrame method must be implemented by the app developer; there is no default implementation, so apps with special needs supply their own. That, at least, is my understanding. Here is the definition of the PreviewCallback interface, in frameworks/base/core/java/android/hardware/Camera.java:
/**
 * Callback interface used to deliver copies of preview frames as
 * they are displayed.
 *
 * @see #setPreviewCallback(Camera.PreviewCallback)
 * @see #setOneShotPreviewCallback(Camera.PreviewCallback)
 * @see #setPreviewCallbackWithBuffer(Camera.PreviewCallback)
 * @see #startPreview()
 */
public interface PreviewCallback
{
    /**
     * Called as preview frames are displayed. This callback is invoked
     * on the event thread {@link #open(int)} was called from.
     *
     * @param data the contents of the preview frame in the format defined
     *  by {@link android.graphics.ImageFormat}, which can be queried
     *  with {@link android.hardware.Camera.Parameters#getPreviewFormat()}.
     *  If {@link android.hardware.Camera.Parameters#setPreviewFormat(int)}
     *  is never called, the default will be the YCbCr_420_SP
     *  (NV21) format.
     * @param camera the Camera service object.
     */
    void onPreviewFrame(byte[] data, Camera camera);
};
That covers the whole flow, at least roughly. There are surely gaps and mistaken interpretations along the way; this is just my own study note, to be corrected over time.
To be continued...