Real-Time Face Detection on iOS with AVFoundation

一、AVCaptureSession: coordinates the flow of data from inputs to outputs

  • AVCaptureDeviceInput and AVCaptureVideoDataOutput objects are created and then attached to an AVCaptureSession.
  • The Input and Output objects added to the session represent the input and output data streams; they configure the ports of the abstract hardware device.
// 1. Create the capture session
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    self.session = captureSession;
    // If the 640x480 preset is supported, use it
    if ([captureSession canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    }

二、AVCaptureDevice: represents a hardware device

  • Use this class to get at the phone's hardware: the cameras, the microphone, and other sensors.
  • When an app needs to change a hardware device's properties (switch cameras, change the flash mode, adjust focus), it must first lock the device with lockForConfiguration: and unlock it with unlockForConfiguration: when the change is done. The snippet below is related but distinct: swapping the session's inputs is bracketed by beginConfiguration/commitConfiguration, which reconfigures the *session* rather than locking the device.
    (Supplementary note)
//4. Remove the old input and add the new one
//4.1 Begin the session reconfiguration
session.beginConfiguration()
//4.2 Remove the old input
session.removeInput(deviceIn)
//4.3 Add the new input
session.addInput(newVideoInput)
//4.4 Commit the reconfiguration
session.commitConfiguration()
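Device-level property changes use a different API than the session reconfiguration above. A minimal sketch of locking a device to change its focus mode (standard AVFoundation calls; `captureDevice` is the device obtained below):

```objc
NSError *lockError = nil;
// Lock the hardware device before touching its properties
if ([captureDevice lockForConfiguration:&lockError]) {
    if ([captureDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        captureDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    // Always balance the lock with an unlock
    [captureDevice unlockForConfiguration];
}
```

If the lock cannot be acquired, `lockError` describes why; skipping the lock and assigning the property directly raises an exception.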

// 2. Get the front-facing camera
    AVCaptureDevice *captureDevice = nil;
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if (camera.position == AVCaptureDevicePositionFront) {
            captureDevice = camera;
            break;
        }
    }
    if (!captureDevice) {
        [DLLoading DLToolTipInWindow:@"No front camera available!"];
        return;
    }

三、AVCaptureDeviceInput: manages input data from a device

  • An AVCaptureDeviceInput is created from an AVCaptureDevice.
  • The object is then added to the AVCaptureSession, which manages it. It represents an input device and configures the ports of the abstract hardware device; the microphone and the cameras are the common cases.
// 3. Create the input object
    NSError *error = nil;
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (error) {
        [DLLoading DLToolTipInWindow:@"Failed to create the capture input"];
        return;
    }

四、AVCaptureOutput: the output data

  • The output can be a still image (AVCaptureStillImageOutput), a movie file (AVCaptureMovieFileOutput), or, as here, raw video frames (AVCaptureVideoDataOutput).
// 4. Create the output object
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Drop frames that arrive while the delegate is still busy
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:dispatch_queue_create("cameraQueue", NULL)];
    
    // Ask for BGRA pixel buffers, which CIImage and Core Graphics handle directly
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
    [captureOutput setVideoSettings:videoSettings];

五、Add the input and output objects to the session

// 5. Add the input object and the output object to the session
    if ([captureSession canAddInput:captureInput]) {
        [captureSession addInput:captureInput];
    }
    if ([captureSession canAddOutput:captureOutput]) {
        [captureSession addOutput:captureOutput];
    }

六、AVCaptureVideoPreviewLayer: the live preview layer

  • How do the camera frames end up on the phone's screen? By adding this layer to a UIView's layer.
// 6. Create the live preview layer
    AVCaptureVideoPreviewLayer *previewlayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    [previewlayer connection].videoOrientation = (AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation];
    self.view.layer.masksToBounds = YES;
    previewlayer.frame = CGRectMake((kMainScreenWidth-200)/2, 90, 200, 200);
    previewlayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.scanView insertPreviewLayer:previewlayer];
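One step the listing never shows is actually starting the capture: neither the preview nor the delegate below receives anything until the session runs. A minimal sketch, assuming the session is configured as above:

```objc
// startRunning blocks until capture starts, so keep it off the main thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [captureSession startRunning];
});
```

The matching `stopRunning` appears later in the delegate, once a usable face image has been captured.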

The face detector

#pragma mark - Face detector
- (CIDetector *)detector{
    if (_detector == nil){
        CIContext *context = [CIContext contextWithOptions:nil];
        NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
        _detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:options];
    }
    return _detector;
}
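CIDetectorAccuracyHigh is noticeably slower per frame. For a live camera feed, the creation options can also carry the standard Core Image keys CIDetectorTracking and CIDetectorMinFeatureSize to cut per-frame cost; a sketch of an alternative configuration:

```objc
NSDictionary *options = @{ CIDetectorAccuracy: CIDetectorAccuracyLow,  // faster; often enough to gate a capture
                           CIDetectorTracking: @YES,                   // track faces across frames
                           CIDetectorMinFeatureSize: @(0.1) };         // ignore faces under 10% of image size
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
```

Whether the lower accuracy is acceptable depends on how strict the downstream face-scan service is.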

Detecting a face in a frame

#pragma mark - Detect a face in a sample buffer
- (UIImage *)getFaceImageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    // Note: creating a CIContext on every frame is costly; in production, cache one instance
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage;
    // Crop a centered 480x480 square out of the 640x480 frame
    if ([[UIApplication sharedApplication] statusBarOrientation] == UIInterfaceOrientationPortrait) {
        videoImage = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(0, 80, 480, 480)];
    }else{
        videoImage = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(80, 0, 480, 480)];
    }
    UIImage *resultImg = [[UIImage alloc] initWithCGImage:videoImage];
    CGImageRelease(videoImage);
    
    // Face detection (linq_firstOrNil comes from the LinqToObjectiveC category)
    CIImage *resultCmg = [[CIImage alloc] initWithCGImage:resultImg.CGImage];
    CIFaceFeature * faceFeature = [self.detector featuresInImage:resultCmg].linq_firstOrNil;
    if (faceFeature && faceFeature.hasLeftEyePosition && faceFeature.hasRightEyePosition && faceFeature.hasMouthPosition) {
        return resultImg;
    }
    return nil;
}

The delegate method

  • Once the output sample buffer arrives, run the face detection on it.
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    if (!self.isDetecting) {
        self.isDetecting = YES;
        [connection setVideoOrientation:(AVCaptureVideoOrientation)[[UIApplication sharedApplication] statusBarOrientation]];
        UIImage *img = [self getFaceImageFromSampleBuffer:sampleBuffer];
        if (img && self.timeoutTime > 2) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.session stopRunning];
                self.isDetecting = NO;
                self.timeoutTime = 0;
                [self.scanView startAnimating];
                [self.viewModel faceScanWithImg:img];
            });
        }else{
            self.isDetecting = NO;
        }
    }
}