A while back I had a requirement: scan images in real time, with the camera resolution set to 1080x1920, crop a 512x512 image out of the center of each frame, and pass it to a third-party SDK for further processing, repeating until the SDK returns a valid result.
After a round of Googling, I found that Apple has already built all of this for us: image preview, face detection, and QR code scanning are all implemented on top of the AVFoundation framework.
Enough talk, on to the code!
These are the main classes involved:
//Capture device (the camera hardware)
@property (nonatomic, strong) AVCaptureDevice *device;
//Input
@property (nonatomic, strong) AVCaptureDeviceInput *input;
//Session that coordinates the data flow from inputs to outputs
@property (nonatomic, strong) AVCaptureSession *session;
//Preview layer
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
//Outputs
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput; //captures still images
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoDataOutput; //raw video frames, for live images and video recording
@property (nonatomic, strong) AVCaptureMetadataOutput *metadataOutput; //QR code scanning and face detection
-
First, we need to get the camera image showing on the phone.
1.1 Get the capture device
-(AVCaptureDevice *)device{
    if (_device == nil) {
        _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([_device lockForConfiguration:nil]) {
            //Auto flash
            if ([_device isFlashModeSupported:AVCaptureFlashModeAuto]) {
                [_device setFlashMode:AVCaptureFlashModeAuto];
            }
            //Continuous auto white balance
            if ([_device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance]) {
                [_device setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
            }
            //Continuous auto focus
            if ([_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
                [_device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
            }
            //Continuous auto exposure
            if ([_device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
                [_device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
            }
            [_device unlockForConfiguration];
        }
    }
    return _device;
}
The device has many adjustable properties (note that you must lock the device for configuration before changing any of them, and unlock it when done; see the tap-to-focus sketch after the enums below):
Flash
typedef NS_ENUM(NSInteger, AVCaptureFlashMode) {
    AVCaptureFlashModeOff  = 0,
    AVCaptureFlashModeOn   = 1,
    AVCaptureFlashModeAuto = 2
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
Camera position (front/back)
typedef NS_ENUM(NSInteger, AVCaptureDevicePosition) {
    AVCaptureDevicePositionUnspecified = 0,
    AVCaptureDevicePositionBack        = 1,
    AVCaptureDevicePositionFront       = 2
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
Torch
typedef NS_ENUM(NSInteger, AVCaptureTorchMode) {
    AVCaptureTorchModeOff  = 0,
    AVCaptureTorchModeOn   = 1,
    AVCaptureTorchModeAuto = 2,
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
Focus
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
    AVCaptureFocusModeLocked              = 0,
    AVCaptureFocusModeAutoFocus           = 1,
    AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
Exposure
typedef NS_ENUM(NSInteger, AVCaptureExposureMode) {
    AVCaptureExposureModeLocked                 = 0,
    AVCaptureExposureModeAutoExpose             = 1,
    AVCaptureExposureModeContinuousAutoExposure = 2,
    AVCaptureExposureModeCustom NS_ENUM_AVAILABLE_IOS(8_0) = 3,
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
White balance
typedef NS_ENUM(NSInteger, AVCaptureWhiteBalanceMode) {
    AVCaptureWhiteBalanceModeLocked                        = 0,
    AVCaptureWhiteBalanceModeAutoWhiteBalance              = 1,
    AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance    = 2,
} NS_AVAILABLE(10_7, 4_0) __TVOS_PROHIBITED;
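As an illustration of that lock/adjust/unlock pattern, here's a minimal tap-to-focus sketch (focusAtPoint: is a hypothetical helper, not part of the demo above; the point uses the device's normalized coordinate space, where (0,0) is the top-left and (1,1) the bottom-right of the frame):
-(void)focusAtPoint:(CGPoint)point{
    NSError *error = nil;
    //Always lock before touching device configuration, and unlock afterwards
    if ([self.device lockForConfiguration:&error]) {
        if ([self.device isFocusPointOfInterestSupported] &&
            [self.device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            self.device.focusPointOfInterest = point;
            self.device.focusMode = AVCaptureFocusModeAutoFocus;
        }
        [self.device unlockForConfiguration];
    } else {
        NSLog(@"lockForConfiguration failed: %@", error);
    }
}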
1.2 Create the device input
-(AVCaptureDeviceInput *)input{
    if (_input == nil) {
        _input = [[AVCaptureDeviceInput alloc] initWithDevice:self.device error:nil];
    }
    return _input;
}
The first time the input is created and the camera is accessed, the system pops an alert asking the user for camera permission.
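If you'd rather handle authorization explicitly instead of relying on the implicit alert, a minimal sketch (these APIs have been available since iOS 7):
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
if (status == AVAuthorizationStatusNotDetermined) {
    //First access: ask the user for camera permission
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        //granted == YES means the user allowed camera access
    }];
} else if (status == AVAuthorizationStatusDenied || status == AVAuthorizationStatusRestricted) {
    //Access denied or restricted: guide the user to Settings
}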
1.3 We need a session to coordinate the data between the input and the outputs; add the input to it
-(AVCaptureSession *)session{
    if (_session == nil) {
        _session = [[AVCaptureSession alloc] init];
        if ([_session canAddInput:self.input]) {
            [_session addInput:self.input];
        }
    }
    return _session;
}
1.4 Next, we need a layer to preview the image
-(AVCaptureVideoPreviewLayer *)previewLayer{
    if (_previewLayer == nil) {
        _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
        _previewLayer.frame = self.view.layer.bounds;
    }
    return _previewLayer;
}
1.5 Finally, add the previewLayer to self.view.layer
[self.view.layer addSublayer:self.previewLayer];
1.6 Start the session somewhere appropriate, for example in viewWillAppear
-(void)viewWillAppear:(BOOL)animated{
    [super viewWillAppear:animated];
    [self.session startRunning];
}
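Note that startRunning is a blocking call (Apple's docs suggest invoking it off the main thread in production code), and for symmetry you'll usually want to stop the session when the view goes away. A minimal sketch of the latter:
-(void)viewWillDisappear:(BOOL)animated{
    [super viewWillDisappear:animated];
    //Stop capturing when the view is no longer on screen
    [self.session stopRunning];
}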
-
Add a button to control the torch.
#pragma mark - Torch
-(void)openTorch:(UIButton*)button{
    button.selected = !button.selected;
    if ([self.device hasTorch] && [self.device hasFlash]) {
        if ([self.device lockForConfiguration:nil]) {
            [self.device setTorchMode:button.selected ? AVCaptureTorchModeOn : AVCaptureTorchModeOff];
            [self.device unlockForConfiguration];
        }
    }
}
-
Add another button to switch between the front and back cameras.
#pragma mark - Switch between front and back cameras
-(void)switchCamera{
    NSUInteger cameraCount = [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
    if (cameraCount > 1) {
        AVCaptureDevice *newCamera = nil;
        AVCaptureDeviceInput *newInput = nil;
        AVCaptureDevicePosition position = [[self.input device] position];
        if (position == AVCaptureDevicePositionFront) {
            newCamera = [self cameraWithPosition:AVCaptureDevicePositionBack];
        } else {
            newCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
        }
        newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
        if (newInput != nil) {
            [self.session beginConfiguration];
            [self.session removeInput:self.input];
            if ([self.session canAddInput:newInput]) {
                [self.session addInput:newInput];
                self.input = newInput;
            } else {
                //Fall back to the old input if the new one can't be added
                [self.session addInput:self.input];
            }
            [self.session commitConfiguration];
        }
    }
}
-(AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) return device;
    }
    return nil;
}
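Note that devicesWithMediaType: was later deprecated (iOS 10). If you target newer systems, a sketch of the same helper on top of AVCaptureDeviceDiscoverySession (assuming the built-in wide-angle camera is all we need):
-(AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position{
    //iOS 10+ replacement for devicesWithMediaType:
    AVCaptureDeviceDiscoverySession *discoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                               mediaType:AVMediaTypeVideo
                                                                position:position];
    return discoverySession.devices.firstObject;
}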
-
Capturing still images with AVCaptureStillImageOutput (note that this class was deprecated in iOS 10 in favor of AVCapturePhotoOutput; it's what this project used at the time)
4.1 Create an AVCaptureStillImageOutput object
-(AVCaptureStillImageOutput *)stillImageOutput{
    if (_stillImageOutput == nil) {
        _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    }
    return _stillImageOutput;
}
4.2 Add the stillImageOutput to the session
if ([_session canAddOutput:self.stillImageOutput]) {
    [_session addOutput:self.stillImageOutput];
}
4.3 Add a shutter button that captures a still image
//Capturing a still with AVCaptureStillImageOutput plays the shutter sound
-(void)screenshot{
    AVCaptureConnection *videoConnection = [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    if (!videoConnection) {
        NSLog(@"take photo failed!");
        return;
    }
    [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer == NULL) {
            return;
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
        [self saveImageToPhotoAlbum:image];
    }];
}
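saveImageToPhotoAlbum: is a helper from my demo project; a minimal stand-in using UIKit's built-in function could look like this (the callback selector signature is dictated by UIKit):
-(void)saveImageToPhotoAlbum:(UIImage *)image{
    //Write the image to the system photo album; the selector is called when saving finishes
    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), NULL);
}

-(void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo{
    NSLog(@"save to album: %@", error ? error.localizedDescription : @"success");
}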
-
Using AVCaptureVideoDataOutput to get the live preview frames; this is exactly what my requirement needs.
5.1 Create an AVCaptureVideoDataOutput object
-(AVCaptureVideoDataOutput *)videoDataOutput{
    if (_videoDataOutput == nil) {
        _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        //Apple recommends a dedicated serial queue for the sample buffer delegate; the main queue keeps this demo simple
        [_videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    }
    return _videoDataOutput;
}
5.2 Add the videoDataOutput to the session
if ([_session canAddOutput:self.videoDataOutput]) {
    [_session addOutput:self.videoDataOutput];
}
5.3 Conform to the AVCaptureVideoDataOutputSampleBufferDelegate protocol and implement its delegate method
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
//AVCaptureVideoDataOutput delivers live frames; this callback fires once per captured frame, typically around 30 times per second
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    largeImage = [self imageFromSampleBuffer:sampleBuffer];
}
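If you don't actually need every frame, you can cap the capture frame rate so this callback fires less often. A minimal sketch using activeVideoMinFrameDuration (iOS 7+; assumes the device's active format supports the requested rate):
NSError *error = nil;
if ([self.device lockForConfiguration:&error]) {
    //Deliver at most 10 frames per second
    self.device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
    [self.device unlockForConfiguration];
}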
5.4 Implement imageFromSampleBuffer: to convert the CMSampleBufferRef into a UIImage
//Convert a CMSampleBufferRef to a UIImage
-(UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer{
    //Get the Core Video image buffer backing the sample buffer
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    //Lock the pixel buffer's base address
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    //Base address of the pixel data
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    //Bytes per row
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    //Width and height of the pixel buffer
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    //Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    //Create a bitmap graphics context from the buffer's data (expects BGRA pixels; see the pixel format fix below)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    //Create a Quartz image from the pixel data in the bitmap context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    //Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    //Release the context and the color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    //Create a UIImage from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    //Release the Quartz image
    CGImageRelease(quartzImage);
    return image;
}
It looked like we were done, but on the first run, creating the Core Graphics context threw errors:
CGBitmapContextCreate: invalid data bytes/row: should be at least 7680 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
Another round of Googling turned up plenty of Stack Overflow threads about this. My English isn't great, so after much translating I worked out that everyone was talking about the pixel format and bits per component: by default, the frames don't arrive in a layout that CGBitmapContextCreate can consume. The fix is to set the videoDataOutput's pixel format to 32-bit BGRA:
[_videoDataOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
Ran it again, and there was still a problem: the captured image came out rotated. This one was easy to fix, though: set the video orientation on the connection:
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
//AVCaptureVideoDataOutput delivers live frames; this callback fires once per captured frame
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    //Setting this once, right after adding the output, would also work; per frame is redundant but harmless
    [connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
    largeImage = [self imageFromSampleBuffer:sampleBuffer];
}
5.5 Remember the requirement from the start? Set the camera resolution to 1080x1920, then crop a 512x512 image out of the center and pass it to the third-party SDK:
//With the portrait videoOrientation set above, the 1920x1080 preset delivers 1080x1920 frames
[_session setSessionPreset:AVCaptureSessionPreset1920x1080];
//imageCompressTargetSize: is a UIImage category from the demo project; a center-crop sketch follows below
smallImage = [largeImage imageCompressTargetSize:CGSizeMake(512.0f, 512.0f)];
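For the "crop 512x512 out of the center" part specifically, a minimal Core Graphics sketch could look like this (it works in pixel coordinates and ignores scale/orientation subtleties; centerCropImage:toSize: is an illustrative helper, not the category used above):
//Crop a centered square of the given pixel size out of an image (assumes the source is at least that large)
-(UIImage *)centerCropImage:(UIImage *)image toSize:(CGSize)size{
    CGFloat pixelWidth = CGImageGetWidth(image.CGImage);
    CGFloat pixelHeight = CGImageGetHeight(image.CGImage);
    CGRect cropRect = CGRectMake((pixelWidth - size.width) / 2.0,
                                 (pixelHeight - size.height) / 2.0,
                                 size.width,
                                 size.height);
    CGImageRef croppedImageRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedImageRef];
    CGImageRelease(croppedImageRef);
    return cropped;
}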
And with that, the requirement is done.
-
Scanning QR codes with AVCaptureMetadataOutput
6.1 Create an AVCaptureMetadataOutput object
-(AVCaptureMetadataOutput *)metadataOutput{
    if (_metadataOutput == nil) {
        _metadataOutput = [[AVCaptureMetadataOutput alloc] init];
        [_metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        //Scan area: rectOfInterest defaults to {0,0,1,1} (the whole frame); see the note below before changing it
    }
    return _metadataOutput;
}
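A note on the scan area: rectOfInterest is not in view coordinates. It's normalized to the video frame ({0,0,1,1} by default, meaning the whole frame), so assigning self.view.bounds to it, as I first did, silently breaks scanning. The reliable way is to let the preview layer convert an on-screen rect once the session is running; a sketch (scanRect is a hypothetical on-screen scan area):
//Call after the session has started, so the conversion has a live video connection to work with
CGRect scanRect = CGRectInset(self.view.bounds, 40.0, 120.0); //hypothetical on-screen scan area
self.metadataOutput.rectOfInterest = [self.previewLayer metadataOutputRectOfInterestForRect:scanRect];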
6.2 Add the metadataOutput to the session, then set the scan types (metadataObjectTypes has to be set after the output is added to the session, otherwise the types aren't available yet)
if ([_session canAddOutput:self.metadataOutput]) {
    [_session addOutput:self.metadataOutput];
    //Barcode symbologies to scan
    self.metadataOutput.metadataObjectTypes = @[
        AVMetadataObjectTypeQRCode,
        AVMetadataObjectTypeEAN13Code,
        AVMetadataObjectTypeEAN8Code,
        AVMetadataObjectTypeCode128Code
    ];
}
6.3 Conform to the AVCaptureMetadataOutputObjectsDelegate protocol and implement its delegate method
#pragma mark - AVCaptureMetadataOutputObjectsDelegate
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    if (metadataObjects.count > 0) {
        [self.session stopRunning];
        AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects objectAtIndex:0];
        NSLog(@"QR code content: %@", metadataObject.stringValue);
    }
}
-
On face detection
Face detection is also built on AVCaptureMetadataOutput; the only difference from QR scanning is the metadata object type:
self.metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
#pragma mark - AVCaptureMetadataOutputObjectsDelegate
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    if (metadataObjects.count > 0) {
        [self.session stopRunning];
        //Faces arrive as AVMetadataFaceObject, not AVMetadataMachineReadableCodeObject
        AVMetadataObject *metadataObject = [metadataObjects objectAtIndex:0];
        if (metadataObject.type == AVMetadataObjectTypeFace) {
            //Convert the face's bounds from metadata coordinates into preview layer coordinates
            AVMetadataObject *transformed = [self.previewLayer transformedMetadataObjectForMetadataObject:metadataObject];
            NSLog(@"%@", transformed);
        }
    }
}
As for turning this into a full feature, I'm not too sure myself either; here's a link worth referencing: Face Recognition with OpenCV.
That's all for now. The code is here; my skill is limited, so corrections and suggestions are very welcome.
Reference: Camera Capture on iOS