I. Overview
AVFoundation defines a set of capabilities that covers most of the use cases you will encounter when building media applications. A few things, however, are not supported by the framework's built-in features; for those you need the lower-level facilities provided by the AVAssetReader and AVAssetWriter classes, which let you work directly with media samples.
1. AVAssetReader
AVAssetReader reads media samples from an AVAsset. You typically configure it with one or more AVAssetReaderOutput instances and pull audio samples and video frames through the copyNextSampleBuffer method.
AVAssetReaderOutput is an abstract class, but the framework provides concrete subclasses for reading decoded media samples from a specific AVAssetTrack, reading mixed output from multiple audio tracks, or reading composited output from multiple video tracks:
AVAssetReaderAudioMixOutput
AVAssetReaderTrackOutput
AVAssetReaderVideoCompositionOutput
AVAssetReaderSampleReferenceOutput
Internally, an asset reader's pipeline continuously prefetches the next available sample on its own threads, minimizing latency when samples are requested. Despite this low-latency retrieval, AVAssetReader is not intended for real-time work such as playback.
AVAssetReader only works with the media samples of a single asset. If you need to read samples from multiple file-based assets at the same time, combine them into an AVComposition, a subclass of AVAsset (see the sketch after the following listing).
NSURL *fileUrl;    // URL of the source media file
AVAsset *asset = [AVAsset assetWithURL:fileUrl];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
NSError *serror;
self.assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&serror];
// Read samples from the asset's video track, decompressing the video frames to BGRA.
NSDictionary *readerOutputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey :@(kCVPixelFormatType_32BGRA)};
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:readerOutputSettings];
if ([self.assetReader canAddOutput:trackOutput]) {
[self.assetReader addOutput:trackOutput];
}
[self.assetReader startReading];
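As noted above, reading from several file-based sources at once requires combining them into a composition first. The following is a minimal sketch, assuming urlA and urlB are hypothetical local file URLs, of appending two movie files into a single AVMutableComposition that could then be handed to an AVAssetReader:
AVMutableComposition *composition = [AVMutableComposition composition];
AVAsset *assetA = [AVAsset assetWithURL:urlA];
AVAsset *assetB = [AVAsset assetWithURL:urlB];
NSError *error = nil;
// Append the full duration of each asset back to back (urlA/urlB are assumed to point at valid files).
[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetA.duration) ofAsset:assetA atTime:kCMTimeZero error:&error];
[composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetB.duration) ofAsset:assetB atTime:composition.duration error:&error];
// The composition is itself an AVAsset, so it can be passed to initWithAsset:error: exactly as above.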
2. AVAssetWriter
AVAssetWriter encodes media and writes it to a container file, such as an MPEG-4 file or a QuickTime movie. It is configured with one or more AVAssetWriterInput objects, which are used to append the CMSampleBuffer objects containing the media samples to be written to the container.
An AVAssetWriterInput is configured to handle a specific media type, such as audio or video, and the samples appended to it produce a single AVAssetTrack in the final output. When working with an AVAssetWriterInput configured for video samples, you commonly use a dedicated adapter object, AVAssetWriterInputPixelBufferAdaptor, which offers the best performance when appending video samples wrapped as CVPixelBuffer objects.
Inputs can also be organized into mutually exclusive groups using AVAssetWriterInputGroup, which lets you create assets containing media tracks for specific languages that are selected at playback time through the AVMediaSelectionGroup and AVMediaSelectionOption classes; see the sketch below.
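A minimal sketch of such a group, assuming englishInput and frenchInput are hypothetical AVAssetWriterInput instances configured for audio, and that self.assetWriter is the writer configured in the listing below:
// Tag each input with its language so players can offer a media selection.
englishInput.languageCode = @"en";
frenchInput.languageCode = @"fr";
if ([self.assetWriter canAddInput:englishInput]) {
[self.assetWriter addInput:englishInput];
}
if ([self.assetWriter canAddInput:frenchInput]) {
[self.assetWriter addInput:frenchInput];
}
// Group the inputs so only one of the resulting tracks is enabled at a time; English is the default.
AVAssetWriterInputGroup *group = [AVAssetWriterInputGroup assetWriterInputGroupWithInputs:@[englishInput, frenchInput] defaultInput:englishInput];
if ([self.assetWriter canAddInputGroup:group]) {
[self.assetWriter addInputGroup:group];
}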
AVAssetWriter automatically supports interleaving media samples. AVAssetWriterInput exposes a readyForMoreMediaData property indicating whether the input can accept more data while maintaining the desired interleaving; a new sample should only be appended to the input when this property is YES.
AVAssetWriter can be used in both real-time and offline scenarios, and each calls for a different way of appending sample buffers to the writer's inputs.
- Real-time: When working with a real-time source, such as writing samples captured from an AVCaptureVideoDataOutput, expectsMediaDataInRealTime should be set to YES on the AVAssetWriterInput so that readyForMoreMediaData is computed correctly. This optimizes the writer for real-time sources: writing samples quickly takes a higher priority than maintaining ideal interleaving.
- Offline: When reading media from an offline source, such as sample buffers pulled from an AVAssetReader, you still need to observe the writer input's readyForMoreMediaData property before appending samples, but you can use the requestMediaDataWhenReadyOnQueue:usingBlock: method to control how data is supplied. The block passed to this method is invoked repeatedly whenever the writer input is ready to append more samples; each invocation should pull the next sample from the source and append it (see the example in section 3 below).
NSURL *outputUrl;    // destination URL for the output movie file
NSError *wError;
self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputUrl fileType:AVFileTypeQuickTimeMovie error:&wError];
NSDictionary *writerOutputSettings =
@{
AVVideoCodecKey:AVVideoCodecH264,
AVVideoWidthKey:@1280,
AVVideoHeightKey:@720,
AVVideoCompressionPropertiesKey:@{
AVVideoMaxKeyFrameIntervalKey:@1,
AVVideoAverageBitRateKey:@10500000,
AVVideoProfileLevelKey:AVVideoProfileLevelH264Main31,
}
};
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:writerOutputSettings];
if ([self.assetWriter canAddInput:writerInput]) {
[self.assetWriter addInput:writerInput];
}
[self.assetWriter startWriting];
Compared with AVAssetExportSession, AVAssetWriter's clear advantage is the fine-grained control it offers over the compression settings used for the output. You can specify settings such as the key-frame interval, video bit rate, H.264 profile level, pixel aspect ratio, and clean aperture.
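For contrast, a minimal sketch of the preset-based approach with AVAssetExportSession, assuming the asset and outputUrl variables from the earlier listings:
AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
session.outputURL = outputUrl;
session.outputFileType = AVFileTypeQuickTimeMovie;
// Only a fixed preset can be chosen; there is no per-key control over codec, bit rate, or key-frame interval.
[session exportAsynchronouslyWithCompletionHandler:^{
if (session.status == AVAssetExportSessionStatusCompleted) {
// export finished successfully
}
}];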
3. Example: writing samples from a non-real-time source
dispatch_queue_t dispatchQueue = dispatch_queue_create("com.writerQueue", NULL);
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Start a new writing session, passing the start time of the source samples.
/**
 Called repeatedly whenever the writer input is ready to append more samples.
 Each time the block runs, while the input remains ready for more data, copy the next available
 sample from the track output and append it to the input.
 Once all samples have been copied from the track output, mark the AVAssetWriterInput as
 finished to indicate that appending is complete.
 **/
[writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
BOOL complete = NO;
while ([writerInput isReadyForMoreMediaData] && !complete) {
CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
if (sampleBuffer) {
BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
CFRelease(sampleBuffer);
complete = !result;
} else {
[writerInput markAsFinished];
complete = YES;
}
}
if (complete) {
[self.assetWriter finishWritingWithCompletionHandler:^{
AVAssetWriterStatus status = self.assetWriter.status;
if (status == AVAssetWriterStatusCompleted) {
// Writing finished successfully.
} else {
// Writing failed; self.assetWriter.error describes what went wrong.
}
}];
}
}];
II. Creating an audio waveform view
Drawing a waveform involves three steps:
- 1. Reading: read the audio samples to be rendered; this may require decompressing the audio data.
- 2. Reducing: far more samples are read than can be rendered on screen. The reduction process works on the sample set by dividing the total samples into small bins and, for each bin, finding the maximum sample, the average of all samples, or a min/max pair.
- 3. Rendering: draw the reduced samples on screen, typically with the Quartz framework, though any drawing framework Apple supports can be used. How the data is drawn depends on how it was reduced: with min/max pairs, draw a vertical line for each pair; with a per-bin average or maximum value, draw the waveform with a Quartz Bézier path.
1. Reading the audio samples: extracting the full sample set
- 1. Load the AVAsset's track data.
- 2. Once loading completes, create an AVAssetReader and configure an AVAssetReaderTrackOutput.
- 3. Read the data with the AVAssetReader and append each batch of sample data to an NSData instance.
+ (void)loadAudioSamplesFromAsset:(AVAsset *)asset
completionBlock:(THSampleDataCompletionBlock)completionBlock {
// Listing 8.2
NSString *tracks = @"tracks";
[asset loadValuesAsynchronouslyForKeys:@[tracks] completionHandler:^{
AVKeyValueStatus status = [asset statusOfValueForKey:tracks error:nil];
NSData *sampleData = nil;
if (status == AVKeyValueStatusLoaded) { // the tracks property has finished loading
sampleData = [self readAudioSamplesFromAsset:asset];
}
dispatch_async(dispatch_get_main_queue(), ^{
completionBlock(sampleData);
});
}];
}
+ (NSData *)readAudioSamplesFromAsset:(AVAsset *)asset {
// Listing 8.3
NSError *error = nil;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
// Create an AVAssetReader instance and hand it the asset to read.
if (!assetReader) {
NSLog(@"Error creating asset reader: %@", error);
return nil;
}
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
// Get the first audio track found in the asset, looked up by the desired media type.
NSDictionary *outputSettings =
@{
AVFormatIDKey:@(kAudioFormatLinearPCM),// samples must be read in an uncompressed format
AVLinearPCMIsBigEndianKey:@NO,
AVLinearPCMIsFloatKey:@NO,
AVLinearPCMBitDepthKey:@(16)
};
// NSDictionary of decompression settings used when reading the audio samples from the asset's track.
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:outputSettings];
if ([assetReader canAddOutput:trackOutput]) {
[assetReader addOutput:trackOutput];
}
// Create a new AVAssetReaderTrackOutput instance with the output settings above,
// add it as an output of the AVAssetReader, and call startReading so the reader starts prefetching sample data.
[assetReader startReading];
NSMutableData *sampleData = [NSMutableData data];
while (assetReader.status == AVAssetReaderStatusReading) {
CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
// Iterate by calling the track output's copyNextSampleBuffer; each call returns the next available sample buffer containing audio samples.
if (sampleBuffer) {
CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer);
// The audio samples in a CMSampleBuffer are held in a CMBlockBuffer;
// CMSampleBufferGetDataBuffer provides access to that block buffer.
size_t length = CMBlockBufferGetDataLength(blockBufferRef);
SInt16 sampleBytes[length];
// Determine the length in bytes and create a 16-bit signed integer buffer to hold the audio samples.
CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, sampleBytes);
// Copy the bytes contained in the CMBlockBuffer into the buffer.
[sampleData appendBytes:sampleBytes length:length];
// Append the copied bytes to the NSMutableData instance.
CMSampleBufferInvalidate(sampleBuffer);
// Mark the sample buffer as processed and no longer usable.
CFRelease(sampleBuffer);
// Release the CMSampleBuffer copy to free its memory.
}
}
if (assetReader.status == AVAssetReaderStatusCompleted) {
// Reading succeeded; return the NSData containing the audio sample data.
return sampleData;
} else {
NSLog(@"Failed to read audio samples from asset");
return nil;
}
return nil;
}
2. Reducing the audio samples
Reduce the samples to fit a specified size constraint: split the full sample set into bins, take the maximum value of each bin, and assemble those values into a new, much smaller set of samples.
// Filter the sample data set down to the given size constraint.
- (NSArray *)filteredSamplesForSize:(CGSize)size {
NSMutableArray *filterDataSamples = [[NSMutableArray alloc] init];
NSUInteger sampleCount = self.sampleData.length/sizeof(SInt16);
// total number of samples
NSUInteger binSize = sampleCount/size.width;
// number of samples per bin
SInt16 *bytes = (SInt16 *)self.sampleData.bytes;
SInt16 maxSample = 0;
for (NSUInteger i = 0; i < sampleCount; i += binSize) {
// iterate over the full sample set, one bin at a time
NSUInteger currentBinSize = MIN(binSize, sampleCount - i); // the last bin may be shorter
SInt16 sampleBin[binSize];
for (NSUInteger j = 0; j < currentBinSize; j ++) {
sampleBin[j] = CFSwapInt16LittleToHost(bytes[i+j]);
// CFSwapInt16LittleToHost ensures the samples are in the host's native byte order
}
SInt16 value = [self maxValueInArray:sampleBin ofSize:currentBinSize];
// find the largest absolute value in the bin
[filterDataSamples addObject:@(value)];
if (value > maxSample) {
maxSample = value;
}
}
CGFloat scaleFactor = (size.height/2) / maxSample;
// Use the largest sample value found to compute the scale factor applied to the filtered samples.
for (NSUInteger i = 0; i < filterDataSamples.count; i ++) {
filterDataSamples[i] = @([filterDataSamples[i] integerValue] *scaleFactor);
}
return filterDataSamples;
}
- (SInt16)maxValueInArray:(SInt16[])values ofSize:(NSUInteger)size {
SInt16 maxValue = 0;
for (int i = 0; i < size; i ++) {
if (abs(values[i]) > maxValue) {
maxValue = abs(values[i]);
}
}
return maxValue;
}
3. Rendering the audio samples
Draw the filtered audio sample data as a waveform. Here Quartz Bézier paths are used for the drawing.
- (void)setAsset:(AVAsset *)asset {
if (_asset != asset) {
_asset = asset;
[THSampleDataProvider loadAudioSamplesFromAsset:asset completionBlock:^(NSData *sampleData) {
self.filter = [[THSampleDataFilter alloc] initWithData:sampleData];
[self.loadingView stopAnimating];
[self setNeedsDisplay];
}];
}
}
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
// To render the waveform inside the view, first scale the graphics context based on the defined width and height scaling constants.
CGContextScaleCTM(context, THWidthScaling, THHeightScaling);
// Compute the x and y offsets and translate the context so the drawing is offset appropriately within the scaled context.
CGFloat xOffset = self.bounds.size.width-self.bounds.size.width*THWidthScaling;
CGFloat yOffset = self.bounds.size.height-self.bounds.size.height*THHeightScaling;
CGContextTranslateCTM(context, xOffset/2, yOffset/2);
// Get the filtered samples, passing the size of the view's bounds.
// In practice you would likely perform this retrieval outside of drawRect: so the samples are only filtered when necessary.
NSArray *filteredSamples = [self.filter filteredSamplesForSize:self.bounds.size];
CGFloat midY = CGRectGetMidY(rect);
// Create a new CGMutablePathRef used to draw the top half of the waveform's Bézier path.
CGMutablePathRef halfPath = CGPathCreateMutable();
CGPathMoveToPoint(halfPath, NULL, 0.0f, midY);
for (NSUInteger i = 0; i < filteredSamples.count; i ++) {
float sample = [filteredSamples[i] floatValue];
// On each iteration, add a point to the path using the index i as the x coordinate and the sample value as the y coordinate.
CGPathAddLineToPoint(halfPath, NULL, i, midY-sample);
}
CGPathAddLineToPoint(halfPath, NULL, filteredSamples.count, midY);
// Create a second CGMutablePathRef that holds the Bézier path for the complete waveform.
CGMutablePathRef fullPath = CGPathCreateMutable();
CGPathAddPath(fullPath, NULL, halfPath);
// To draw the bottom half of the waveform, apply translate and scale transforms to the top-half path so it is flipped downward, completing the full waveform.
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0, CGRectGetHeight(rect));
transform = CGAffineTransformScale(transform, 1.0, -1.0);
CGPathAddPath(fullPath, &transform, halfPath);
// Add the full path to the graphics context, set the fill color from the specified waveColor, and draw the path into the context.
CGContextAddPath(context, fullPath);
CGContextSetFillColorWithColor(context, self.waveColor.CGColor);
CGContextDrawPath(context, kCGPathFill);
// Release the Quartz path objects created above now that they are no longer needed.
CGPathRelease(halfPath);
CGPathRelease(fullPath);
}
III. Advanced capture and recording
Presenting the CVPixelBuffer objects captured by an AVCaptureVideoDataOutput as OpenGL ES textures is a powerful capability, but one drawback of using AVCaptureVideoDataOutput is that you lose the convenient recording support of AVCaptureMovieFileOutput.
If AVCaptureVideoDataOutput and AVCaptureAudioDataOutput need to perform more complex processing on their data, each output should be given its own serial queue, as in the sketch below.
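A minimal sketch, using hypothetical queue labels, of giving each data output its own serial queue rather than the single shared self.dispatchQueue used in the listing below:
// Deliver video and audio sample buffers on independent serial queues so that
// heavy per-frame video processing does not delay audio delivery (and vice versa).
dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoDataQueue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t audioQueue = dispatch_queue_create("com.example.audioDataQueue", DISPATCH_QUEUE_SERIAL);
[self.videoDataOutput setSampleBufferDelegate:self queue:videoQueue];
[self.audioDataOutput setSampleBufferDelegate:self queue:audioQueue];
Note that the listing below deliberately shares one queue between both outputs and the THMovieWriter so that samples reach the writer as a single serialized stream; with separate queues you would have to handle that synchronization yourself.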
1. Configuring the capture session
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *videoDevice =
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *videoInput =
[AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
if (videoInput) {
if ([self.captureSession canAddInput:videoInput]) {
[self.captureSession addInput:videoInput];
self.activeVideoInput = videoInput;
} else {
// handle failure to add the video input
}
} else {
// handle failure to create the video device input (inspect the error out-parameter)
}
// Setup default microphone
AVCaptureDevice *audioDevice =
[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput =
[AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
if (audioInput) {
if ([self.captureSession canAddInput:audioInput]) {
[self.captureSession addInput:audioInput];
} else {
// handle failure to add the audio input
}
} else {
// handle failure to create the audio device input (inspect the error out-parameter)
}
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// Set the output format to kCVPixelFormatType_32BGRA, which works well with both OpenGL ES and Core Image.
NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
self.videoDataOutput.videoSettings = outputSettings;
// Because the output will be recorded, we usually want to capture all available frames.
// Setting alwaysDiscardsLateVideoFrames to NO gives the delegate method some extra time to process sample buffers.
self.videoDataOutput.alwaysDiscardsLateVideoFrames = NO;
[self.videoDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue];
if ([self.captureSession canAddOutput:self.videoDataOutput]) {
[self.captureSession addOutput:self.videoDataOutput];
} else {
NSLog(@"add video data output error");
}
// Capture audio samples.
self.audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[self.audioDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue];
if ([self.captureSession canAddOutput:self.audioDataOutput]) {
[self.captureSession addOutput:self.audioDataOutput];
} else {
NSLog(@"add audio data output error");
}
NSString *fileType = AVFileTypeQuickTimeMovie;
NSDictionary *videoSettings = [self.videoDataOutput recommendedVideoSettingsForAssetWriterWithOutputFileType:fileType];
NSDictionary *audioSettings = [self.audioDataOutput recommendedAudioSettingsForAssetWriterWithOutputFileType:fileType];
self.movieWriter = [[THMovieWriter alloc] initWithVideoSettings:videoSettings audioSettings:audioSettings dispatchQueue:self.dispatchQueue];
self.movieWriter.delegate = self;
Saving the video to the Camera Roll
- (void)didWriteMovieAtURL:(NSURL *)outputURL {
// Listing 8.17
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputURL]) {
// verify the video can be written to the Saved Photos album
ALAssetsLibraryWriteVideoCompletionBlock completionBlock;
completionBlock = ^(NSURL *assetURL, NSError *error) {
if (error) {
[self.delegate assetLibraryWriteFailedWithError:error];
} else {
// the video was saved successfully
}
};
[library writeVideoAtPathToSavedPhotosAlbum:outputURL completionBlock:completionBlock];
}
}
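ALAssetsLibrary has since been deprecated; on newer iOS versions the same save can be done with the Photos framework. A minimal sketch, assuming photo library authorization has already been granted:
#import <Photos/Photos.h>
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
// Create a new asset in the user's photo library from the movie file on disk.
[PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:outputURL];
} completionHandler:^(BOOL success, NSError *error) {
if (!success) {
[self.delegate assetLibraryWriteFailedWithError:error];
}
}];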
2. Handling the delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
// Hand the sample buffer to the movie writer for processing and writing.
[self.movieWriter processSampleBuffer:sampleBuffer];
// Listing 8.11
if (captureOutput == self.videoDataOutput) {
// Get the underlying CVPixelBuffer.
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Create a new CIImage from the CVPixelBuffer and pass it to the image target that presents it on screen.
CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil];
// Display the image in the preview; this is also the point where the image could be processed further. Applying filters is covered later.
[self.imageTarget setImage:sourceImage];
}
}
3. Creating the file writer
Create an object that performs the video encoding and file writing through AVAssetWriter, wrapping this functionality in its own class.
THMovieWriter.h
#import <AVFoundation/AVFoundation.h>
@protocol THMovieWriterDelegate <NSObject>
- (void)didWriteMovieAtURL:(NSURL *)outputURL;
@end
@interface THMovieWriter : NSObject
/**
 * Initializer.
 * videoSettings and audioSettings are dictionaries describing the configuration of the underlying AVAssetWriter inputs.
 * dispatchQueue is the dispatch queue on which the writing work is performed.
 */
- (id)initWithVideoSettings:(NSDictionary *)videoSettings
audioSettings:(NSDictionary *)audioSettings
dispatchQueue:(dispatch_queue_t)dispatchQueue;
/**
 * Starts the writing process.
 */
- (void)startWriting;
/**
 * Stops the writing process.
 */
- (void)stopWriting;
/**
 * Indicates whether the writer is currently writing.
 */
@property (nonatomic) BOOL isWriting;
/**
 * Delegate used to report when the movie has been written to disk.
 */
@property (weak, nonatomic) id<THMovieWriterDelegate> delegate;
/**
 * Called for each newly captured sample buffer.
 */
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer;
@end
THMovieWriter.m
#import "THMovieWriter.h"
#import <AVFoundation/AVFoundation.h>
#import "THContextManager.h"
#import "THFunctions.h"
#import "THPhotoFilters.h"
#import "THNotifications.h"
static NSString *const THVideoFilename = @"movie.mov";
@interface THMovieWriter ()
@property (strong, nonatomic) AVAssetWriter *assetWriter;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterVideoInput;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterAudioInput;
@property (strong, nonatomic)
AVAssetWriterInputPixelBufferAdaptor *assetWriterInputPixelBufferAdaptor;
@property (strong, nonatomic) dispatch_queue_t dispatchQueue;
@property (weak, nonatomic) CIContext *ciContext;
@property (nonatomic) CGColorSpaceRef colorSpace;
@property (strong, nonatomic) CIFilter *activeFilter;
@property (strong, nonatomic) NSDictionary *videoSettings;
@property (strong, nonatomic) NSDictionary *audioSettings;
@property (nonatomic) BOOL firstSample;
@end
@implementation THMovieWriter
- (id)initWithVideoSettings:(NSDictionary *)videoSettings
audioSettings:(NSDictionary *)audioSettings
dispatchQueue:(dispatch_queue_t)dispatchQueue {
self = [super init];
if (self) {
// Listing 8.13
_videoSettings = videoSettings;
_audioSettings = audioSettings;
_dispatchQueue = dispatchQueue;
// Get the Core Image context. It is backed by OpenGL ES and is used to filter the incoming video samples,
// ultimately rendering the result into a CVPixelBuffer.
_ciContext = [THContextManager sharedInstance].ciContext;
_colorSpace = CGColorSpaceCreateDeviceRGB();
_activeFilter = [THPhotoFilters defaultFilter];
_firstSample = YES;
NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
// Listen for the notification posted when the user switches filters.
[nc addObserver:self selector:@selector(filterChanged:) name:THFilterSelectionChangedNotification object:nil];
}
return self;
}
- (void)dealloc {
// Listing 8.13
CGColorSpaceRelease(_colorSpace);
[[NSNotificationCenter defaultCenter] removeObserver:self];
}
- (void)filterChanged:(NSNotification *)notification {
// Listing 8.13
self.activeFilter = [notification.object copy];
}
- (void)startWriting {
// Listing 8.14
// To avoid stuttering when recording starts, configure the AVAssetWriter object asynchronously on the dispatchQueue.
dispatch_async(self.dispatchQueue, ^{
NSError *error = nil;
NSString *fileType = AVFileTypeQuickTimeMovie;
// Create a new AVAssetWriter instance.
self.assetWriter = [AVAssetWriter assetWriterWithURL:[self outputURL]
fileType:fileType
error:&error];
if (!self.assetWriter || error) {
NSLog(@"Could not create AVAssetWriter: %@",error);
return ;
}
// Create a new AVAssetWriterInput to which the samples obtained from the AVCaptureVideoDataOutput will be appended.
self.assetWriterVideoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:self.videoSettings];
// Setting this to YES indicates the input should be optimized for real-time data.
self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;
// Determine the current device orientation and give the input an appropriate transform.
// The orientation remains fixed at this value for the duration of the writing session.
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
self.assetWriterVideoInput.transform = THTransformForDeviceOrientation(orientation);
NSDictionary *attributes =
@{
(id)kCVPixelBufferPixelFormatTypeKey:@(kCVPixelFormatType_32BGRA),
(id)kCVPixelBufferWidthKey:self.videoSettings[AVVideoWidthKey],
(id)kCVPixelBufferHeightKey:self.videoSettings[AVVideoHeightKey],
(id)kCVPixelBufferOpenGLESCompatibilityKey:(id)kCFBooleanTrue,
};
// Create an AVAssetWriterInputPixelBufferAdaptor.
// It provides an optimized CVPixelBufferPool from which CVPixelBuffer objects can be created to render the filtered video frames.
self.assetWriterInputPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:self.assetWriterVideoInput sourcePixelBufferAttributes:attributes];
if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {
[self.assetWriter addInput:self.assetWriterVideoInput];
} else {
NSLog(@"Unable to add video input");
return ;
}
// Create an AVAssetWriterInput to which the AVCaptureAudioDataOutput samples will be appended.
self.assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:self.audioSettings];
self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;
if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {
[self.assetWriter addInput:self.assetWriterAudioInput];
} else {
NSLog(@"Unable to add audio input");
return;
}
self.isWriting = YES;
self.firstSample = YES;
});
}
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer {
// Listing 8.15
if (!self.isWriting) {
return ;
}
// This method handles both audio and video samples, so the sample's media type must be determined
// before it can be appended to the correct writer input.
// Get the buffer's CMFormatDescription
CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);
// and use CMFormatDescriptionGetMediaType to determine the media type.
CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc);
if (mediaType == kCMMediaType_Video) {
CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
// If this is the first video sample processed since capture began,
// call the asset writer's startWriting to begin a new writing session, then call
// startSessionAtSourceTime:, passing the sample's presentation time as the source time.
if (self.firstSample) {
if ([self.assetWriter startWriting]) {
[self.assetWriter startSessionAtSourceTime:timestamp];
} else {
NSLog(@"failed to start writing");
}
self.firstSample = NO;
}
// Create an empty CVPixelBuffer from the pixel buffer adaptor's pool;
// it is used to render the output of the filtered video frame.
CVPixelBufferRef outputRenderBuffer = NULL;
CVPixelBufferPoolRef pixelBufferPool = self.assetWriterInputPixelBufferAdaptor.pixelBufferPool;
OSStatus err = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &outputRenderBuffer);
if (err) {
NSLog(@"Unable to obtain a pixel buffer from the pool.");
return ;
}
// Get the CVPixelBuffer of the current video sample.
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Create a new CIImage from the pixel buffer and set it as the filter's kCIInputImageKey value.
CIImage *sourceImage =[CIImage imageWithCVPixelBuffer:imageBuffer options:nil];
[self.activeFilter setValue:sourceImage forKey:kCIInputImageKey];
// Get the filter's output image, a CIImage object that encapsulates the CIFilter operations.
CIImage *filterImage = self.activeFilter.outputImage;
if (!filterImage) {
filterImage = sourceImage;
}
// Render the filtered CIImage's output into outputRenderBuffer.
[self.ciContext render:filterImage toCVPixelBuffer:outputRenderBuffer bounds:filterImage.extent colorSpace:self.colorSpace];
if (self.assetWriterVideoInput.readyForMoreMediaData) {
// If the input's readyForMoreMediaData is YES,
// append the pixel buffer, along with the current sample's presentation time, to the AVAssetWriterInputPixelBufferAdaptor.
if (![self.assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputRenderBuffer withPresentationTime:timestamp]) {
NSLog(@"Error Appending pixel buffer.");
}
}
// Processing of the current video sample is complete; release the pixel buffer.
CVPixelBufferRelease(outputRenderBuffer);
} else if (!self.firstSample && mediaType == kCMMediaType_Audio) {
// If the first video sample has been handled and the current CMSampleBuffer contains audio samples, append it to the audio input.
if (self.assetWriterAudioInput.isReadyForMoreMediaData) {
if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]) {
NSLog(@"Error appending audio sample buffer");
}
}
}
}
- (void)stopWriting {
// Listing 8.16
// Setting this to NO prevents processSampleBuffer: from handling any further samples.
self.isWriting = NO;
dispatch_async(self.dispatchQueue, ^{
// Finish the writing session and close the file on disk.
[self.assetWriter finishWritingWithCompletionHandler:^{
// Check the asset writer's status.
if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
// Dispatch back to the main queue and invoke the delegate method.
dispatch_async(dispatch_get_main_queue(), ^{
NSURL *fileUrl = [self.assetWriter outputURL];
[self.delegate didWriteMovieAtURL:fileUrl];
// the delegate callback saves the movie to the Camera Roll
});
} else {
NSLog(@"Failed to write movie: %@",self.assetWriter.error);
}
}];
});
}
// Build the output URL used to configure the AVAssetWriter instance.
- (NSURL *)outputURL {
NSString *filePath =
[NSTemporaryDirectory() stringByAppendingPathComponent:THVideoFilename];
NSURL *url = [NSURL fileURLWithPath:filePath];
if ([[NSFileManager defaultManager] fileExistsAtPath:url.path]) {
[[NSFileManager defaultManager] removeItemAtURL:url error:nil];
}
return url;
}
@end
AVAssetWriter and AVAssetReader make it possible to read and write video files directly, and they also allow much more extensible processing of the video while it is being recorded.
The book's example implements filtered video recording and uses Core Image to process the frames, which is worth studying in its own right.
This section only covers the basic usage of AVAssetWriter and AVAssetReader; they offer deeper capabilities that will take more time to learn.