Foreword
My company recently asked me to build a screen-recording app. I looked into ReplayKit, the framework introduced in iOS 9: it is genuinely simple to use and performs well, but it gives you no access to the recorded video file, and that alone ruled it out for my purposes. That left the most primitive approach: capture screenshots of the view, compose those screenshots into a video, and finally merge the simultaneously recorded audio into that video. This article covers the first part, capturing the screenshots and composing them into a video.
Capturing a Screenshot
First, the code: grabbing a screenshot of the screen through a layer.
/// view => screenshot image
- (UIImage *)fetchScreenshot {
    UIImage *image = nil;
    if (self.captureLayer) {
        CGSize imageSize = self.captureLayer.bounds.size;
        // Scale 0 uses the device's main screen scale
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.captureLayer renderInContext:context];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return image;
}
The method above renders the layer's contents into a bitmap graphics context (CGContextRef) and reads the result back as a UIImage. The code is straightforward, so I won't dwell on it.
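To record continuously, this method has to be called once per frame. One way to drive it (a sketch of my own, not code from this post's project; the displayLink property and the startCapturing / handleDisplayLink: names are assumptions) is a CADisplayLink that fires on every screen refresh:
- (void)startCapturing {
    // CADisplayLink fires in step with the display's refresh rate
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(handleDisplayLink:)];
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                           forMode:NSRunLoopCommonModes];
}

- (void)handleDisplayLink:(CADisplayLink *)link {
    UIImage *shot = [self fetchScreenshot];
    // hand the frame to the video writer, as described below
}
A plain NSTimer also works if you want a lower, fixed frame rate.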
Converting the CGImage into a CVPixelBufferRef
/// image => pixel buffer
/// Note: the returned buffer is created with a +1 retain count, so the
/// caller is responsible for releasing it with CVPixelBufferRelease().
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    NSDictionary *options = @{(__bridge id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                              (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
    CVPixelBufferRef pxbuffer = NULL;
    size_t frameWidth = CGImageGetWidth(image);
    size_t frameHeight = CGImageGetHeight(image);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    // Lock the buffer so we can write into its backing memory directly
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Create a bitmap context backed by the pixel buffer's memory,
    // then draw the image into it
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
A CGImage can only be written into the video after it has been converted into CVPixelBufferRef data.
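Gluing the two helpers together, the per-frame flow looks like this (my own usage sketch, not from the original post); note the matching CVPixelBufferRelease:
UIImage *shot = [self fetchScreenshot];
CVPixelBufferRef buffer = [self pixelBufferFromCGImage:shot.CGImage];
// ... append `buffer` to the asset writer (next section) ...
CVPixelBufferRelease(buffer); // balances the +1 retain from CVPixelBufferCreate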
Composing the Images into a Video
Composing images into a video relies on the following classes:
AVAssetWriter
AVAssetWriter writes media data to a file. Creating an AVAssetWriter requires the output file URL and the file type; AVFileTypeMPEG4 produces an MP4 file.
NSError *error = nil;
NSURL *fileUrl = [NSURL fileURLWithPath:self.videoPath];
self.videoWriter = [[AVAssetWriter alloc] initWithURL:fileUrl fileType:AVFileTypeMPEG4 error:&error];
AVAssetWriterInput
AVAssetWriterInput receives the video or audio sample data; once created, it must be added to the AVAssetWriter.
// Configure the video codec, dimensions, and bitrate
NSDictionary *videoCompressionProps = @{AVVideoAverageBitRateKey: @(size.width * size.height)};
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                AVVideoWidthKey: @(size.width),
                                AVVideoHeightKey: @(size.height),
                                AVVideoCompressionPropertiesKey: videoCompressionProps};
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSParameterAssert(self.videoWriterInput);
// Setting expectsMediaDataInRealTime to YES tells the writer that media data
// is captured live (camera/microphone input, or screen capture as here)
self.videoWriterInput.expectsMediaDataInRealTime = YES;
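The snippet above stops just short of attaching the input to the writer, so for completeness (a small addition of mine):
if ([self.videoWriter canAddInput:self.videoWriterInput]) {
    [self.videoWriter addInput:self.videoWriterInput];
}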
AVAssetWriterInputPixelBufferAdaptor
AVAssetWriterInputPixelBufferAdaptor appends the CVPixelBufferRef data converted from each image to the AVAssetWriterInput.
NSDictionary *bufferAttributes = @{(__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB)};
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.videoWriterInput sourcePixelBufferAttributes:bufferAttributes];
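The post doesn't show the write loop itself, so here is a minimal sketch of how the pieces fit together. Everything in it is my own assumption rather than the original project's API: the startRecording / appendScreenshot: / finishRecording names, an int64_t frameCount property, and a fixed 30 fps timescale.
- (void)startRecording {
    // Start the writer once, before the first frame is appended
    [self.videoWriter startWriting];
    [self.videoWriter startSessionAtSourceTime:kCMTimeZero];
    self.frameCount = 0;
}

- (void)appendScreenshot:(UIImage *)shot {
    if (!self.videoWriterInput.isReadyForMoreMediaData) {
        return; // drop the frame if the writer can't keep up
    }
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:shot.CGImage];
    // Presentation time = frame index at an assumed 30 frames per second
    CMTime presentationTime = CMTimeMake(self.frameCount, 30);
    [self.adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(buffer);
    self.frameCount += 1;
}

- (void)finishRecording {
    [self.videoWriterInput markAsFinished];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"video saved to %@", self.videoPath);
    }];
}
For a real-time recorder you would probably derive each presentation time from actual capture timestamps (for example, offsets of CACurrentMediaTime()) rather than a fixed frame counter, so that dropped frames don't speed up the video.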
The above covers only the key points of composing images into a video. If anything is unclear, or you would like the full reference code, follow this link: GitHub