Sample source: android/camera-samples/Camera2BasicJava/ - Github
You may want to clone the project and open it in Android Studio while reading along.
Introduction
Google introduced the Camera2 API in Android 5.0. Compared with the previous-generation Camera1 API, Camera2 supports many features Camera1 did not:
- a more modern API architecture
- access to more per-frame metadata, plus manual control over each frame's capture parameters
- more complete control of the camera
- more output formats, plus high-speed burst capture
...
The overall API flow looks like this:
- Obtain a CameraManager via context.getSystemService(Context.CAMERA_SERVICE).
- Call CameraManager.openCamera() and receive a CameraDevice in its callback.
- Call CameraDevice.createCaptureSession() and receive a CameraCaptureSession in its callback.
- Build a CaptureRequest; there are three template types to choose from: preview, still capture, and recording.
- Send the CaptureRequest through the CameraCaptureSession: capture() submits it once, while setRepeatingRequest() submits it continuously.
- Captured image data is delivered in the ImageReader.OnImageAvailableListener callback; the CaptureCallback reports the parameters actually used and the camera's current state.
CameraActivity
This is the main (and only) Activity in the sample. Its code is very simple:
```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera);
    if (null == savedInstanceState) {
        getSupportFragmentManager().beginTransaction()
                .replace(R.id.container, Camera2BasicFragment.newInstance())
                .commit();
    }
}
```
It simply calls Camera2BasicFragment.newInstance() and attaches the resulting fragment. Let's look at that next.
Camera2BasicFragment
Camera2BasicFragment # newInstance()
```java
public static Camera2BasicFragment newInstance() {
    return new Camera2BasicFragment();
}
```
This is just a static factory method that returns a Camera2BasicFragment instance. Camera2BasicFragment itself is a Fragment, so we will walk through it in Fragment-lifecycle order.
Camera2BasicFragment # onCreateView()
```java
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
                         Bundle savedInstanceState) {
    return inflater.inflate(R.layout.fragment_camera2_basic, container, false);
}
```
Inflates the fragment's layout.
Camera2BasicFragment # onViewCreated()
```java
@Override
public void onViewCreated(final View view, Bundle savedInstanceState) {
    view.findViewById(R.id.picture).setOnClickListener(this);
    view.findViewById(R.id.info).setOnClickListener(this);
    mTextureView = view.findViewById(R.id.texture);
}
```
Binds the views and sets the click listeners.
Camera2BasicFragment # onResume()
```java
@Override
public void onResume() {
    super.onResume();
    startBackgroundThread();
    if (mTextureView.isAvailable()) {
        openCamera(mTextureView.getWidth(), mTextureView.getHeight());
    } else {
        mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
    }
}
```
onResume() starts a background thread, then opens the camera right away if the TextureView is already available; otherwise it registers a SurfaceTextureListener so the camera opens once the surface is ready. Let's look at startBackgroundThread():
Camera2BasicFragment # startBackgroundThread()
```java
private void startBackgroundThread() {
    mBackgroundThread = new HandlerThread("CameraBackground");
    mBackgroundThread.start();
    mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
}
```
This just starts a HandlerThread and creates a Handler bound to its Looper, so camera callbacks run off the UI thread. Next, the openCamera() method:
Camera2BasicFragment # openCamera()
```java
private void openCamera(int width, int height) {
    // Check the camera permission first.
    if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED) {
        requestCameraPermission();
        return;
    }
    setUpCameraOutputs(width, height);
    configureTransform(width, height);
    Activity activity = getActivity();
    CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
    try {
        if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
    }
}
```
The TextureView's width and height are passed in; mTextureView is the View that displays the preview.
The method first checks whether the CAMERA permission has been granted and requests it if not. I won't cover runtime permissions in detail here; see the Android 6.0 permission model.
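For completeness, here is a minimal sketch of what the permission request can look like. Treat it as a sketch, not a verbatim quote: the sample's actual requestCameraPermission() also shows a rationale dialog first, and REQUEST_CAMERA_PERMISSION is assumed to be a request-code constant defined in the fragment.

```java
private void requestCameraPermission() {
    // Minimal sketch: the sample first checks shouldShowRequestPermissionRationale()
    // and shows an explanatory dialog before actually requesting.
    requestPermissions(new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION);
}

@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    if (requestCode == REQUEST_CAMERA_PERMISSION) {
        if (grantResults.length != 1 || grantResults[0] != PackageManager.PERMISSION_GRANTED) {
            // Permission denied: the sample shows an error dialog here.
            ErrorDialog.newInstance(getString(R.string.request_permission))
                    .show(getChildFragmentManager(), FRAGMENT_DIALOG);
        }
    } else {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    }
}
```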
Next comes setUpCameraOutputs():
Camera2BasicFragment # setUpCameraOutputs()
```java
private void setUpCameraOutputs(int width, int height) {
    Activity activity = getActivity();
    CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics characteristics
                    = manager.getCameraCharacteristics(cameraId);

            // We don't use a front facing camera in this sample.
            // CameraCharacteristics.LENS_FACING: the direction the camera faces
            // relative to the device screen.
            Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                continue;
            }

            // The stream configurations this camera supports, including the minimum
            // frame duration and stall duration for each format/size combination.
            StreamConfigurationMap map = characteristics.get(
                    CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            if (map == null) {
                continue;
            }

            // For still image captures, we use the largest available size.
            Size largest = Collections.max(
                    Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
                    new CompareSizesByArea());
            // ImageReader lets the application access image data rendered into a Surface.
            mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                    ImageFormat.JPEG, /*maxImages*/2);
            mImageReader.setOnImageAvailableListener(
                    mOnImageAvailableListener, mBackgroundHandler);

            // Find out if we need to swap dimension to get the preview size relative to sensor
            // coordinate.
            int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
            //noinspection ConstantConditions
            mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
            boolean swappedDimensions = false;
            switch (displayRotation) {
                case Surface.ROTATION_0:
                case Surface.ROTATION_180:
                    if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                        swappedDimensions = true;
                    }
                    break;
                case Surface.ROTATION_90:
                case Surface.ROTATION_270:
                    if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                        swappedDimensions = true;
                    }
                    break;
                default:
                    Log.e(TAG, "Display rotation is invalid: " + displayRotation);
            }

            Point displaySize = new Point();
            activity.getWindowManager().getDefaultDisplay().getSize(displaySize);
            int rotatedPreviewWidth = width;
            int rotatedPreviewHeight = height;
            int maxPreviewWidth = displaySize.x;
            int maxPreviewHeight = displaySize.y;
            if (swappedDimensions) {
                rotatedPreviewWidth = height;
                rotatedPreviewHeight = width;
                maxPreviewWidth = displaySize.y;
                maxPreviewHeight = displaySize.x;
            }
            if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                maxPreviewWidth = MAX_PREVIEW_WIDTH;
            }
            if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                maxPreviewHeight = MAX_PREVIEW_HEIGHT;
            }

            // Danger, W.R.! Attempting to use too large a preview size could exceed the camera
            // bus' bandwidth limitation, resulting in gorgeous previews but the storage of
            // garbage capture data.
            mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                    rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                    maxPreviewHeight, largest);

            // We fit the aspect ratio of TextureView to the size of preview we picked.
            int orientation = getResources().getConfiguration().orientation;
            if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                mTextureView.setAspectRatio(
                        mPreviewSize.getWidth(), mPreviewSize.getHeight());
            } else {
                mTextureView.setAspectRatio(
                        mPreviewSize.getHeight(), mPreviewSize.getWidth());
            }

            // Check if the flash is supported.
            Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
            mFlashSupported = available == null ? false : available;

            mCameraId = cameraId;
            return;
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
        // Currently an NPE is thrown when the Camera2API is used but not supported on the
        // device this code runs.
        ErrorDialog.newInstance(getString(R.string.camera_error))
                .show(getChildFragmentManager(), FRAGMENT_DIALOG);
    }
}
```
The method first obtains a CameraManager and iterates over the list of camera IDs. For each camera, it skips front-facing ones; then it fetches the camera's supported stream configurations, moving on to the next camera if there are none; then it finds the largest size the camera supports for JPEG output and configures an ImageReader with it.
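The mOnImageAvailableListener registered here is where the captured JPEG data arrives; it isn't quoted elsewhere in this walkthrough. In the sample it simply hands the Image off to a background runnable (ImageSaver) that writes the bytes to mFile. Sketched from the sample for reference:

```java
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        // Runs on mBackgroundHandler's thread; ImageSaver copies the JPEG buffer
        // out of the Image and writes it to mFile.
        mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
    }
};
```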
Next it uses the sensor orientation and the display rotation to work out the preview orientation and size. The core method there is chooseOptimalSize(); let's look at it:
Camera2BasicFragment # chooseOptimalSize()
```java
private static Size chooseOptimalSize(Size[] choices, int textureViewWidth,
        int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {

    // Collect the supported resolutions that are at least as big as the preview Surface
    List<Size> bigEnough = new ArrayList<>();
    // Collect the supported resolutions that are smaller than the preview Surface
    List<Size> notBigEnough = new ArrayList<>();
    int w = aspectRatio.getWidth();
    int h = aspectRatio.getHeight();
    for (Size option : choices) {
        if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight &&
                option.getHeight() == option.getWidth() * h / w) {
            if (option.getWidth() >= textureViewWidth &&
                    option.getHeight() >= textureViewHeight) {
                bigEnough.add(option);
            } else {
                notBigEnough.add(option);
            }
        }
    }

    // Pick the smallest of those big enough. If there is no one big enough, pick the
    // largest of those not big enough.
    if (bigEnough.size() > 0) {
        return Collections.min(bigEnough, new CompareSizesByArea());
    } else if (notBigEnough.size() > 0) {
        return Collections.max(notBigEnough, new CompareSizesByArea());
    } else {
        Log.e(TAG, "Couldn't find any suitable preview size");
        return choices[0];
    }
}
```
First, the parameters:
- choices: the output sizes the camera supports for the preview SurfaceTexture
- textureViewWidth: the width of the TextureView
- textureViewHeight: the height of the TextureView
- maxWidth: the width of the usable display area
- maxHeight: the height of the usable display area
- aspectRatio: the size whose aspect ratio we want to match (here, the largest available size)

The method builds two lists, bigEnough and notBigEnough. It iterates over choices, keeping only sizes that fit within maxWidth x maxHeight and match the target aspect ratio. Sizes at least as large as the TextureView go into bigEnough; the rest go into notBigEnough. Finally it returns the smallest size in bigEnough, or, if that list is empty, the largest size in notBigEnough.
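To see the selection logic in isolation, here is a plain-JVM sketch of the same algorithm. It uses a minimal `Dim` stand-in instead of android.util.Size so it can run outside Android; `Dim`, `choose`, and `ChooseOptimalSizeDemo` are names invented for this demo, not part of the sample.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Plain-JVM stand-in for android.util.Size, just for this demo.
final class Dim {
    final int w, h;
    Dim(int w, int h) { this.w = w; this.h = h; }
}

public class ChooseOptimalSizeDemo {
    // Same selection logic as the sample's chooseOptimalSize(), on Dim instead of Size.
    static Dim choose(Dim[] choices, int viewW, int viewH, int maxW, int maxH, Dim ratio) {
        List<Dim> bigEnough = new ArrayList<>();
        List<Dim> notBigEnough = new ArrayList<>();
        for (Dim o : choices) {
            // Keep only sizes within the display bounds that match the target ratio.
            if (o.w <= maxW && o.h <= maxH && o.h == o.w * ratio.h / ratio.w) {
                if (o.w >= viewW && o.h >= viewH) bigEnough.add(o);
                else notBigEnough.add(o);
            }
        }
        Comparator<Dim> byArea =
                (a, b) -> Long.signum((long) a.w * a.h - (long) b.w * b.h);
        if (!bigEnough.isEmpty()) return Collections.min(bigEnough, byArea);
        if (!notBigEnough.isEmpty()) return Collections.max(notBigEnough, byArea);
        return choices[0]; // fall back, as the sample does
    }

    public static void main(String[] args) {
        Dim[] choices = {
            new Dim(640, 480), new Dim(1280, 720), new Dim(1920, 1080), new Dim(3840, 2160)
        };
        // 16:9 target ratio, preview surface 1000x600, display cap 1920x1080:
        // 640x480 is filtered out (4:3), 3840x2160 exceeds the cap; 1280x720 and
        // 1920x1080 are both big enough, so the smaller one, 1280x720, wins.
        Dim best = choose(choices, 1000, 600, 1920, 1080, new Dim(16, 9));
        System.out.println(best.w + "x" + best.h);
    }
}
```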
Back in setUpCameraOutputs(): the method then checks whether the device is in landscape or portrait orientation, sets the TextureView's aspect ratio to match the chosen preview size, and finally records mFlashSupported and mCameraId. That completes the method.
Camera2BasicFragment # configureTransform()
Returning to openCamera(), the next call is configureTransform():
```java
private void configureTransform(int viewWidth, int viewHeight) {
    Activity activity = getActivity();
    if (null == mTextureView || null == mPreviewSize || null == activity) {
        return;
    }
    int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    Matrix matrix = new Matrix();
    RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
    RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
    float centerX = viewRect.centerX();
    float centerY = viewRect.centerY();
    if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
        bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
        matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
        float scale = Math.max(
                (float) viewHeight / mPreviewSize.getHeight(),
                (float) viewWidth / mPreviewSize.getWidth());
        matrix.postScale(scale, scale, centerX, centerY);
        matrix.postRotate(90 * (rotation - 2), centerX, centerY);
    } else if (Surface.ROTATION_180 == rotation) {
        matrix.postRotate(180, centerX, centerY);
    }
    mTextureView.setTransform(matrix);
}
```
This applies a Matrix to the TextureView to set the preview's size and rotation; I won't dwell on the details here.
Back in openCamera(), the remaining steps obtain the CameraManager instance and open the camera through it, guarded by mCameraOpenCloseLock with a 2.5-second timeout. That completes the camera-opening flow.
Camera2BasicFragment # onPause()
```java
@Override
public void onPause() {
    closeCamera();
    stopBackgroundThread();
    super.onPause();
}
```
Nothing surprising here: it closes the camera and stops the background thread.
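Neither helper is quoted elsewhere in this walkthrough; in the sample they look roughly like this (a sketch from the sample, not a verbatim quote). Note that closeCamera() acquires the same Semaphore used in openCamera(), so a close can't race an in-flight open:

```java
private void closeCamera() {
    try {
        mCameraOpenCloseLock.acquire();
        // Release the session, device, and ImageReader, in that order.
        if (null != mCaptureSession) {
            mCaptureSession.close();
            mCaptureSession = null;
        }
        if (null != mCameraDevice) {
            mCameraDevice.close();
            mCameraDevice = null;
        }
        if (null != mImageReader) {
            mImageReader.close();
            mImageReader = null;
        }
    } catch (InterruptedException e) {
        throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
    } finally {
        mCameraOpenCloseLock.release();
    }
}

private void stopBackgroundThread() {
    mBackgroundThread.quitSafely();
    try {
        mBackgroundThread.join();
        mBackgroundThread = null;
        mBackgroundHandler = null;
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
```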
## TextureView.SurfaceTextureListener
Now let's look at how the TextureView is configured.
```java
private final TextureView.SurfaceTextureListener mSurfaceTextureListener
        = new TextureView.SurfaceTextureListener() {

    // Invoked when the TextureView's SurfaceTexture is ready for use.
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
        openCamera(width, height);
    }

    // Invoked when the SurfaceTexture's buffer size changes.
    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
        configureTransform(width, height);
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture texture) {
    }
};
```
Both openCamera() and configureTransform() were covered above, so there is nothing new here.
CameraDevice.StateCallback
Next, let's look at this callback:
```java
private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {

    @Override
    public void onOpened(@NonNull CameraDevice cameraDevice) {
        // The camera has been opened.
        mCameraOpenCloseLock.release();
        mCameraDevice = cameraDevice;
        createCameraPreviewSession();
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice cameraDevice) {
        // The camera has been disconnected.
        mCameraOpenCloseLock.release();
        cameraDevice.close();
        mCameraDevice = null;
    }

    @Override
    public void onError(@NonNull CameraDevice cameraDevice, int error) {
        // The camera hit an error.
        mCameraOpenCloseLock.release();
        cameraDevice.close();
        mCameraDevice = null;
        Activity activity = getActivity();
        if (null != activity) {
            activity.finish();
        }
    }
};
```
First, let's look at the createCameraPreviewSession() call made in onOpened():
Camera2BasicFragment # createCameraPreviewSession()
```java
private void createCameraPreviewSession() {
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of the default buffer to be the camera preview size we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start the preview.
        Surface surface = new Surface(texture);

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder
                = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);

        // Here, we create a CameraCaptureSession for the camera preview.
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {

                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        // The camera is already closed.
                        if (null == mCameraDevice) {
                            return;
                        }

                        // When the session is ready, we start displaying the preview.
                        mCaptureSession = cameraCaptureSession;
                        try {
                            // Auto-focus should be continuous for the camera preview.
                            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                            // Flash is automatically enabled when necessary.
                            setAutoFlash(mPreviewRequestBuilder);

                            // Finally, we start displaying the camera preview.
                            mPreviewRequest = mPreviewRequestBuilder.build();
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(
                            @NonNull CameraCaptureSession cameraCaptureSession) {
                        showToast("Failed");
                    }
                }, null
        );
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```
Back in the StateCallback: onDisconnected and onError are similar; the difference is that onError also finishes the Activity on its way out.
CameraCaptureSession.CaptureCallback
```java
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        switch (mState) {
            case STATE_PREVIEW: {
                // We have nothing to do when the camera preview is working normally.
                break;
            }
            case STATE_WAITING_LOCK: {
                Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
                if (afState == null) {
                    captureStillPicture();
                } else if (CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED == afState ||
                        CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED == afState) {
                    // CONTROL_AE_STATE can be null on some devices
                    Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                    if (aeState == null ||
                            aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED) {
                        mState = STATE_PICTURE_TAKEN;
                        captureStillPicture();
                    } else {
                        runPrecaptureSequence();
                    }
                }
                break;
            }
            case STATE_WAITING_PRECAPTURE: {
                // CONTROL_AE_STATE can be null on some devices
                Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                if (aeState == null ||
                        aeState == CaptureResult.CONTROL_AE_STATE_PRECAPTURE ||
                        aeState == CaptureRequest.CONTROL_AE_STATE_FLASH_REQUIRED) {
                    mState = STATE_WAITING_NON_PRECAPTURE;
                }
                break;
            }
            case STATE_WAITING_NON_PRECAPTURE: {
                // CONTROL_AE_STATE can be null on some devices
                Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                if (aeState == null || aeState != CaptureResult.CONTROL_AE_STATE_PRECAPTURE) {
                    mState = STATE_PICTURE_TAKEN;
                    captureStillPicture();
                }
                break;
            }
        }
    }

    @Override
    public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                                    @NonNull CaptureRequest request,
                                    @NonNull CaptureResult partialResult) {
        process(partialResult);
    }

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        process(result);
    }
};
```
This is the heart of the capture state machine: process() receives each capture result and advances mState from waiting for the focus lock, through the precapture metering sequence, to actually taking the picture.
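The state machine is kicked off when the user taps the picture button: its click handler calls lockFocus(), which triggers autofocus and moves mState to STATE_WAITING_LOCK so process() starts reacting. That method isn't quoted above; a sketch of the sample's version:

```java
private void lockFocus() {
    try {
        // Tell the camera to run an autofocus sweep and lock when done.
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                CameraMetadata.CONTROL_AF_TRIGGER_START);
        // mCaptureCallback's process() handles the focus-lock result from here on.
        mState = STATE_WAITING_LOCK;
        mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```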
Now let's look at captureStillPicture():
Camera2BasicFragment # captureStillPicture()
```java
private void captureStillPicture() {
    try {
        final Activity activity = getActivity();
        if (null == activity || null == mCameraDevice) {
            return;
        }
        // This is the CaptureRequest.Builder that we use to take a picture.
        final CaptureRequest.Builder captureBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(mImageReader.getSurface());

        // Use the same AE and AF modes as the preview.
        captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        setAutoFlash(captureBuilder);

        // Orientation
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation));

        CameraCaptureSession.CaptureCallback captureCallback
                = new CameraCaptureSession.CaptureCallback() {

            @Override
            public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                           @NonNull CaptureRequest request,
                                           @NonNull TotalCaptureResult result) {
                showToast("Saved: " + mFile);
                Log.d(TAG, mFile.toString());
                unlockFocus();
            }
        };

        mCaptureSession.stopRepeating();
        mCaptureSession.abortCaptures();
        mCaptureSession.capture(captureBuilder.build(), captureCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```
By this point the code holds no surprises: assemble the still-capture request and submit it through the CameraCaptureSession.
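The onCaptureCompleted() override above calls unlockFocus() to cancel the focus lock and return the camera to normal preview; it isn't quoted in this walkthrough, so here is a sketch of the sample's version:

```java
private void unlockFocus() {
    try {
        // Cancel the autofocus trigger set by lockFocus().
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
                CameraMetadata.CONTROL_AF_TRIGGER_CANCEL);
        setAutoFlash(mPreviewRequestBuilder);
        mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback,
                mBackgroundHandler);
        // Go back to the normal preview state and resume the repeating request.
        mState = STATE_PREVIEW;
        mCaptureSession.setRepeatingRequest(mPreviewRequest, mCaptureCallback,
                mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```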