If you want to record video or take photos with the camera, you can use UIImagePickerController. However, UIImagePickerController presents its own interface, and sometimes that interface is exactly what you don't want. In that case there is another way to get at the data the camera captures.
First, import the framework header with #import <AVFoundation/AVFoundation.h>. Next, your class needs to adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol; implementing a single method of that protocol is enough to receive the frames the camera captures:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Note: this is called on the queue passed to setSampleBufferDelegate:queue:,
    // not on the main thread, so dispatch to the main queue before touching UIKit views.
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // mData is an NSData ivar; 0.5 is the JPEG compression quality (0.0 = smallest, 1.0 = best).
    // Under MRC, retain the autoreleased data so it survives past the autorelease pool.
    [mData release];
    mData = [UIImageJPEGRepresentation(image, 0.5) retain];
}
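For reference, here is a minimal sketch of what the controller's interface might look like. The class name, the base class, and the ivar names are assumptions inferred from how this article uses them, not something it spells out:

// Hypothetical interface; CameraViewController, mData, and session are assumed names
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    NSData *mData;              // latest captured frame, JPEG-encoded
    AVCaptureSession *session;  // retained so capture keeps running
}
- (void)setupCaptureSession;
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end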
Below is the imageFromSampleBuffer: method, which converts the CMSampleBufferRef to a UIImage through a series of steps and returns it:
// Create a UIImage from sample buffer data. Note: this assumes the output pixel
// format is kCVPixelFormatType_32BGRA, as configured in setupCaptureSession below.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data;
    // kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst matches the BGRA layout
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image. UIImageOrientationRight
    // compensates for the camera sensor's landscape orientation in a portrait UI.
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
To actually get the camera running, a bit more setup is required:
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[[AVCaptureSession alloc] init] autorelease];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice. This returns the default (back) camera;
    // see the sketch after this method for how to pick the front camera instead.
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could not create device input: %@", error);
        return;
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output. The delegate callback runs on this serial queue.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format. (On recent iOS versions only the pixel format key
    // is honored here; the frame size is normally controlled by the session preset.)
    output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                            [NSNumber numberWithInt:320], (id)kCVPixelBufferWidthKey,
                            [NSNumber numberWithInt:240], (id)kCVPixelBufferHeightKey,
                            nil];

    // Optional: a preview layer that shows the live camera feed on screen
    AVCaptureVideoPreviewLayer *preLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    preLayer.frame = CGRectMake(0, 0, 320, 240);
    preLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:preLayer];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration. (Deprecated since iOS 5; prefer setting videoMinFrameDuration
    // on the output's AVCaptureConnection on newer systems.)
    output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar. The session is autoreleased above, so it must be
    // retained somewhere (e.g. a retained property), or it will be deallocated and
    // capture will stop.
    //[self setSession:session];
}
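The comment on the device lookup above mentions switching to the front camera. On the iOS versions this code targets, one way to do that is to enumerate the video devices and match on position. A minimal sketch; the helper name frontCamera is my own, not part of the article:

// Hypothetical helper: returns the front camera, or the default (back) camera as a fallback
- (AVCaptureDevice *)frontCamera
{
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == AVCaptureDevicePositionFront) {
            return device;
        }
    }
    return [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}

You would then call this in place of defaultDeviceWithMediaType: inside -setupCaptureSession.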
Returning to -setupCaptureSession: preLayer is a live preview of the camera. Whether to add it is entirely up to you, and its position and size can be set through preLayer.frame. Pay particular attention to output.videoSettings, which configures the output data, such as the pixel format (and, on older iOS versions, the frame width and height). Call -setupCaptureSession during your controller's initialization and the camera will start working. Shutting the session down is not handled here; a sketch follows, and the documentation covers the details.
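As a rough sketch of the start/stop lifecycle under the manual retain/release conventions this article already uses, assuming the session created in -setupCaptureSession was stored in a retained session ivar as the final comment there suggests (the method placement is likewise an assumption):

- (void)viewDidLoad
{
    [super viewDidLoad];
    [self setupCaptureSession]; // the camera starts delivering frames from here on
}

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    [session stopRunning]; // stop the flow of data when leaving the screen
}

- (void)dealloc
{
    [session release];
    [mData release];
    [super dealloc];
}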