AVCaptureSession: Capturing Camera Video and Its Buffer (Useful for Building a Custom Camera)

I've realized I'm really lazy and haven't felt like writing for a long time (mainly because I don't know what to write about: the hard topics are beyond me, and the simple ones don't seem worth writing up).

The original motivation for this article was to grab the video buffer captured by the camera so it could be blended with a local video's buffer in OpenGL. It is a companion piece to the previous post: AVPlayer實現播放視頻和AVFoundation獲取視頻的buffer.
As usual, the demo GIF first:

[demo GIF]

1. Create the session

// 1 Create the session
_captureSession = [[AVCaptureSession alloc] init];
// Set the capture quality; pick the preset that suits your needs.
_captureSession.sessionPreset = AVCaptureSessionPreset640x480;
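A quick prerequisite: since iOS 10 the app must declare NSCameraUsageDescription in Info.plist, and it is worth requesting camera access before starting the session. A minimal sketch of the request (my own addition, not part of the original flow):

    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                             completionHandler:^(BOOL granted) {
        // Called on an arbitrary queue; hop to the main queue for UI work.
        if (!granted) {
            NSLog(@"Camera access denied");
        }
        // Proceed with session setup only when granted == YES.
    }];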

2. Get the device's camera
// 2 device
    // The default video device is usually the back camera; the next line
    // replaces it with an explicit lookup by position anyway.
    _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    _device = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
/**
 Get the camera at a given position.

 @param position the camera position (front or back)
 @return the matching camera, or nil if none is found
 */
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    NSArray *cameras = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *camera in cameras) {
        if (camera.position == position) {
            return camera;
        }
    }
    return nil;
}
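Note that devicesWithMediaType: has been deprecated since iOS 10. If you target newer systems, the same lookup can be done with AVCaptureDeviceDiscoverySession; a minimal sketch (the helper name cameraWithPosition: is mine, and it assumes the built-in wide-angle camera is the one you want):

    - (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
        AVCaptureDeviceDiscoverySession *discovery =
            [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                                   mediaType:AVMediaTypeVideo
                                                                    position:position];
        // devices contains only the cameras matching the requested position.
        return discovery.devices.firstObject;
    }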
3. Create and add the input
// 3 input
    NSError *deviceError = nil;
    AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_device error:&deviceError];
    // 4 add input
    if ([_captureSession canAddInput:input]) {
        [_captureSession addInput:input];
    } else {
        NSLog(@"Failed to create the input: %@", deviceError);
        return;
    }
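One edge case worth guarding against (my own addition): on the simulator, or if the position lookup fails, _device will be nil, initWithDevice: returns nil, and the canAddInput: branch silently does nothing. A cheap guard before creating the input:

    if (_device == nil) {
        NSLog(@"No camera found (e.g. running on the simulator)");
        return;
    }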

4. Add the video output
    // 5 video out
    // Frames are delivered to the delegate on this serial queue, so don't
    // touch UIKit from the callback without hopping back to the main queue.
    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
    [videoOut setSampleBufferDelegate:self queue:queue];
    // NO queues late frames instead of dropping them; for real-time
    // processing, YES is the more common choice.
    videoOut.alwaysDiscardsLateVideoFrames = NO;
5. Set the video format

Configure this to suit your needs; the video settings are passed as a dictionary.

    // 6 video format settings
//    [videoOut setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)}];
    // Note: the call below is a no-op as written; the connection only exists
    // after the output has been added to the session (see step 6).
    [videoOut connectionWithMediaType:AVMediaTypeVideo];
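For what it's worth, if the buffers are headed for OpenGL or Core Image, a single-plane BGRA format is often easier to work with than bi-planar YUV. A sketch of that alternative setting (an assumption on my part, not what the original uses):

    [videoOut setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];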
6. Add the output and show the preview

_prevLayer is kept as a property so its frame can be adjusted later. I set the camera up in viewDidLoad, where the view's frame may not be final yet, so the frame is assigned in viewDidLayoutSubviews (shown below).

    // 7 add the output
    if ([_captureSession canAddOutput:videoOut]) {
        [_captureSession addOutput:videoOut];
    }
//    self.mGLView.isFullYUVRange = YES;

    AVCaptureConnection *connection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
    [connection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
    [_captureSession startRunning];
    // 8 the preview layer
    _prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:_captureSession];
//    _prevLayer.frame = self.view.bounds;
    _prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:_prevLayer];
// Don't forget to assign prevLayer's frame in viewDidLayoutSubviews:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    _prevLayer.frame = self.view.bounds;
}
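A side note: startRunning blocks the calling thread until the session is live, and Apple's documentation recommends keeping it off the main thread so the UI doesn't stall during startup. A minimal sketch of that (my own addition):

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        [self->_captureSession startRunning];
    });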

Implement the AVCaptureVideoDataOutputSampleBufferDelegate callback to get the buffer and process it further:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

//    int width = (int)CVPixelBufferGetWidth(pixelBuffer);
//    int height = (int)CVPixelBufferGetHeight(pixelBuffer);
//    NSLog(@"video width: %d  height: %d", width, height);

    // A single frame can be converted to an image and saved, much like taking
    // a photo; you can also collect the frames over a period of time and save
    // them as a video, or run beauty filters and other processing on them.
//    NSLog(@"Got the video sampleBuffer here; process it further (e.g. H.264 encoding)");
//    [self.mGLView displayPixelBuffer:pixelBuffer];
}
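To make the "save a frame as an image" idea concrete, here is a minimal sketch of a helper (mine, not from the original post) that converts the CVPixelBufferRef into a UIImage via Core Image:

    - (UIImage *)imageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer {
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        // Creating a CIContext is expensive; cache and reuse one in real code
        // rather than making a new one per frame.
        CIContext *context = [CIContext contextWithOptions:nil];
        CGRect rect = CGRectMake(0, 0,
                                 CVPixelBufferGetWidth(pixelBuffer),
                                 CVPixelBufferGetHeight(pixelBuffer));
        CGImageRef cgImage = [context createCGImage:ciImage fromRect:rect];
        UIImage *image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return image;
    }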