iOS AVFoundation (Part 1): Video Recording


Apple's official documentation introduces AVFoundation fairly clearly; the framework's structure is roughly as shown in the figure below.

(Figure: AVFoundation framework overview, from Apple's documentation)

Capturing Video with AVFoundation
======

AVCaptureSession: manages the flow of audio and video data from the capture inputs to the outputs.

AVCaptureDevice: a video or audio device (camera, microphone).

AVCaptureDeviceInput: an audio/video input; it must be created from an AVCaptureDevice.

AVCaptureVideoPreviewLayer: a layer that displays what the AVCaptureSession is capturing.

The overall flow: an AVCaptureDevice (camera or microphone) backs an AVCaptureDeviceInput, which is added to the AVCaptureSession; the session delivers data to an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput, while an AVCaptureVideoPreviewLayer attached to the session renders the live preview.

The code follows below.

1. Define the required objects ###


// serial capture queue
// (dispatch queues do not support -copy, so the property must be strong, not copy)
@property (nonatomic, strong) dispatch_queue_t captureQueue;
// the capture session
@property (strong, nonatomic) AVCaptureSession *session;
// preview layer that displays what the session captures
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
// back camera input
@property (strong, nonatomic) AVCaptureDeviceInput *backCameraInput;
// front camera input
@property (strong, nonatomic) AVCaptureDeviceInput *frontCameraInput;
// microphone input
@property (strong, nonatomic) AVCaptureDeviceInput *audioMicInput;
// audio recording connection
@property (strong, nonatomic) AVCaptureConnection *audioConnection;
// video recording connection
@property (strong, nonatomic) AVCaptureConnection *videoConnection;
// video data output
@property (strong, nonatomic) AVCaptureVideoDataOutput *videoOutput;
// audio data output
@property (strong, nonatomic) AVCaptureAudioDataOutput *audioOutput;


2. Instantiation ###


- (void)initSession {
    _firstRun = YES;
    _paused = YES;   // paused by default (not yet recording)
    _isFront = YES;  // front camera by default
    // serial queue for capture callbacks
    _captureQueue = dispatch_queue_create("com.capture", DISPATCH_QUEUE_SERIAL);

    // front camera input (the default)
    NSError *error;
    AVCaptureDevice *frontDevice = [self cameraWithPosition:AVCaptureDevicePositionFront];
    _frontCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:frontDevice error:&error];
    if (!_frontCameraInput) {
        NSLog(@"failed to create front camera input: %@", error);
    }
    // back camera input
    AVCaptureDevice *backDevice = [self cameraWithPosition:AVCaptureDevicePositionBack];
    _backCameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:backDevice error:&error];
    if (!_backCameraInput) {
        NSLog(@"failed to create back camera input: %@", error);
    }

    // microphone input
    NSError *micError;
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    _audioMicInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&micError];
    if (!_audioMicInput) {
        NSLog(@"failed to create microphone input: %@", micError);
    }

    // video data output
    _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [_videoOutput setSampleBufferDelegate:self queue:self.captureQueue];
    // pixel format of the delivered frames
    NSDictionary *setcapSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                          @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    _videoOutput.videoSettings = setcapSettings;
    // audio data output
    _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [_audioOutput setSampleBufferDelegate:self queue:self.captureQueue];

    _session = [[AVCaptureSession alloc] init];
    _session.sessionPreset = AVCaptureSessionPreset1280x720;
    // add inputs
    if ([_session canAddInput:self.frontCameraInput]) {
        [_session addInput:self.frontCameraInput];
    }
    if ([_session canAddInput:self.audioMicInput]) {
        [_session addInput:self.audioMicInput];
    }

    // add outputs
    if ([_session canAddOutput:self.audioOutput]) {
        [_session addOutput:self.audioOutput];
    }
    if ([_session canAddOutput:self.videoOutput]) {
        [_session addOutput:self.videoOutput];
    }

    // preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [_previewLayer setFrame:CGRectMake(0, 0, WIDTH, HEIGHT)];
    [self.showView.layer insertSublayer:_previewLayer atIndex:0];

    // audio and video connections (fetched after the outputs are added to the session)
    _audioConnection = [self.audioOutput connectionWithMediaType:AVMediaTypeAudio];
    _videoConnection = [self.videoOutput connectionWithMediaType:AVMediaTypeVideo];
    self.videoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
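The cameraWithPosition: helper used above is not shown in the original snippet; a minimal sketch (using the classic devicesWithMediaType: API, which was later deprecated in favor of AVCaptureDeviceDiscoverySession on iOS 10+) might look like this:

// Find the capture device at the requested position (front/back).
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}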

3. Start capturing ###


[self.session startRunning];
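One caveat: startRunning is a blocking call, and Apple recommends invoking it off the main thread, for example:

// startRunning blocks until the session is up; keep it off the main thread
dispatch_async(self.captureQueue, ^{
    [self.session startRunning];
});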

With the steps above done, the AVCaptureSession is now capturing audio and video; the class must adopt the delegate protocols

AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate.

For every captured frame, the following delegate method is called:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
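Both outputs share this single callback, so the connection parameter tells audio and video apart. A minimal skeleton:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (connection == self.videoConnection) {
        // a video frame arrived: hand it to the writer (see section 4)
    } else if (connection == self.audioConnection) {
        // an audio frame arrived
    }
}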

4. Writing the video ###


// media writer
@property (nonatomic, strong) AVAssetWriter *writer;
// video writer input
@property (nonatomic, strong) AVAssetWriterInput *videoInput;
// audio writer input
@property (nonatomic, strong) AVAssetWriterInput *audioInput;

Saving the captured frames as-is would produce very large files, so the video must be compressed; configure the writer input with an H.264 encoder:

// video writer input settings: H.264; AVVideoWidthKey/AVVideoHeightKey are
// required for video output settings (here matching the 1280x720 session preset)
NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                            AVVideoWidthKey  : @1280,
                            AVVideoHeightKey : @720 };
// create the video writer input
_videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
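The post does not show creating the AVAssetWriter itself or the audio writer input. A sketch, where outputURL and the AAC settings are illustrative assumptions rather than values from the original:

// create the writer for an .mp4 file (outputURL is your chosen destination)
NSError *writerError;
_writer = [AVAssetWriter assetWriterWithURL:outputURL
                                   fileType:AVFileTypeMPEG4
                                      error:&writerError];
// capture delivers frames in real time, so tell the inputs not to stall
_videoInput.expectsMediaDataInRealTime = YES;

// audio writer input: AAC, mono, 44.1 kHz (illustrative values)
NSDictionary *audioSettings = @{ AVFormatIDKey         : @(kAudioFormatMPEG4AAC),
                                 AVNumberOfChannelsKey : @1,
                                 AVSampleRateKey       : @44100,
                                 AVEncoderBitRateKey   : @64000 };
_audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                 outputSettings:audioSettings];
_audioInput.expectsMediaDataInRealTime = YES;

if ([_writer canAddInput:_videoInput]) { [_writer addInput:_videoInput]; }
if ([_writer canAddInput:_audioInput]) { [_writer addInput:_audioInput]; }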

Start writing:

 [_writer startWriting];
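startWriting alone is not enough: before appending any buffers you must also call startSessionAtSourceTime: with the timestamp of the first sample, and each buffer is then appended from the capture callback. A sketch (sampleBuffer and connection come from the delegate method shown in section 3):

// on the first captured buffer, anchor the writer's timeline
[_writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];

// then, for each buffer delivered to the capture callback:
if (connection == self.videoConnection && self.videoInput.readyForMoreMediaData) {
    [self.videoInput appendSampleBuffer:sampleBuffer];
} else if (connection == self.audioConnection && self.audioInput.readyForMoreMediaData) {
    [self.audioInput appendSampleBuffer:sampleBuffer];
}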

When writing finishes, a completion handler is invoked:

 [_writer finishWritingWithCompletionHandler: handler];
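For example, the handler might hop back to the main queue to report the finished file (the logging here is illustrative):

__weak typeof(self) weakSelf = self;
[_writer finishWritingWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // the finished movie now lives at writer.outputURL
        NSLog(@"finished writing: %@", weakSelf.writer.outputURL);
    });
}];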

Summary ###

Building the AVCaptureSession: it needs AVCaptureDeviceInput objects, each created from an AVCaptureDevice (camera, microphone, and so on).
Understanding the AVCaptureVideoDataOutputSampleBufferDelegate callback methods.
AVAssetWriter: applying it flexibly when writing video.


Source code

Further thoughts ###

1. After recording several times, why does the first frame occasionally show up black?

Audio capture usually starts delivering buffers before video capture does, so the first frame written can be an audio frame, which plays back as a black frame. The fix: in captureOutput:didOutputSampleBuffer:fromConnection:, check the captureOutput/connection parameter and discard audio frames that arrive before the first video frame, as sketched below.
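A sketch of that guard, assuming a hypothetical _gotFirstVideoFrame flag that is reset at the start of each recording:

// inside captureOutput:didOutputSampleBuffer:fromConnection:
BOOL isVideo = (connection == self.videoConnection);
if (!_gotFirstVideoFrame) {
    if (!isVideo) {
        return;  // discard audio buffers that arrive before the first video frame
    }
    _gotFirstVideoFrame = YES;  // first video frame: safe to start writing from here
}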

2. Recording ends by calling [_session stopRunning];, but a common product requirement is recording in multiple segments: start --> pause --> resume --> ... How is that implemented?

1.) Call [_session stopRunning] when pausing and re-instantiate the session when resuming. But stopping the session also freezes the camera preview, which is a poor experience, and re-instantiating the session costs performance, so this approach is not recommended.

2.) Handle it at the file-writing stage. After [self.session startRunning] begins capturing, tapping pause simply stops appending frames to the writer; tapping start resumes appending, and recording finishes as usual.
For example, suppose the recording is: segment A -- 2 s pause -- segment B.
The resulting video will then freeze for 2 s after segment A before segment B plays, because the buffers keep their original capture timestamps. The fix is to accumulate the paused duration and subtract it from each buffer's timestamps before appending, so segment B lines up immediately after segment A, as sketched below.
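A common fix, sketched here: keep the total paused time in a CMTime offset (a hypothetical ivar, updated whenever recording resumes) and rebase every buffer's timestamps with CMSampleBufferCreateCopyWithNewTiming before appending:

// Return a copy of `sample` with all timestamps shifted back by `offset`.
// The caller appends the copy to the writer input and must CFRelease it.
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset {
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, NULL, &count);
    CMSampleTimingInfo *info = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, info, &count);
    for (CMItemCount i = 0; i < count; i++) {
        info[i].decodeTimeStamp = CMTimeSubtract(info[i].decodeTimeStamp, offset);
        info[i].presentationTimeStamp = CMTimeSubtract(info[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef adjusted;
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, count, info, &adjusted);
    free(info);
    return adjusted;
}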


最后編輯于
?著作權歸作者所有,轉載或內容合作請聯系作者
平臺聲明:文章內容(如有圖片或視頻亦包括在內)由作者上傳并發布,文章內容僅代表作者本人觀點,簡書系信息發布平臺,僅提供信息存儲服務。

推薦閱讀更多精彩內容