Original: AVFoundation Programming Guide
Foreword
This is a quick translation of the AVFoundation documentation. My English is only so-so, so if anything is wrong, please leave a comment and point it out.
This chapter's translation is rough, mainly because I know too little of the specialized vocabulary. You may want to check it against the original; I'll revise it later.
The AVFoundation framework provides a feature-rich set of classes to facilitate the editing of audiovisual media. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks from one or more media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal ordering. Figure 3-1 shows how a new composition is pieced together from existing assets to form a new asset. If all you want to do is merge multiple assets sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.
Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks in your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, of the output video. Through a video composition's instructions (defined by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (defined by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms and opacity ramps to the video tracks in a composition. The video composition also lets you introduce Core Animation effects through its animationTool property.
To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties, respectively.
Creating a Composition
To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Options for Initializing a Composition Track
When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.
Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify the special identifier kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
Adding Audiovisual Data to a Composition
Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object that contains the media data. You can use the mutable composition track interface to place multiple tracks of the same type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks helps to minimize the amount of resources the system uses. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there is a composition track compatible with the asset track you want to insert:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
Note: Placing multiple video segments on the same composition track can potentially lead to dropped frames at the transitions between segments, especially on embedded devices. The number of composition tracks to use for your video segments depends entirely on the design of your app and its target platform.
Generating a Volume Ramp
A single AVMutableAudioMix object can perform custom audio processing on each audio track in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks in your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Performing Custom Video Processing
As with an audio mix, you only need one AVMutableVideoComposition object to perform all of your custom video processing on the video tracks in your composition. Using a video composition, you can directly set the appropriate render size, scale, and frame rate for the output. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.
Changing the Composition's Background Color
All video compositions must have at least one video composition instruction, represented by an AVVideoCompositionInstruction object. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether post processing is needed, and apply layer instructions.
The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition:
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Applying Opacity Ramps
Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms and opacity (including opacity ramps) to a particular video track in a composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from source tracks are layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
// Note: CMTimeRangeMake takes a start time and a duration, so the second range starts where the first track ends and lasts for the second track's duration.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Incorporating Core Animation Effects
A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video and adding titles or animating overlays. Core Animation can be used with video compositions in two different ways: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames of your composition. The following code demonstrates the latter option by adding a watermark to the video:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll
This brief code example illustrates how you can combine two video asset tracks and one audio asset track to create a single video file. It shows how to:
- Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects
- Add time ranges of AVAssetTrack objects to compatible composition tracks
- Check the preferredTransform property of a video asset track to determine the video's orientation
- Use AVMutableVideoCompositionLayerInstruction objects to apply transforms to the video tracks in a composition
- Set appropriate values for the renderSize and frameDuration properties of a video composition
- Use a composition in conjunction with a video composition when exporting to a video file
- Save the video file to the camera roll
Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.
Creating the Composition
To piece together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one audio and one video track:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
Adding the Assets
An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition:
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
Checking the Video Orientations
Once you have added your video and audio tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If a video track was shot in portrait mode, the exported video will not be oriented properly. Likewise, if you try to combine a video shot in portrait mode with one shot in landscape mode, the export session will fail to complete.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
Applying the Video Composition Layer Instructions
Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition:
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track (CMTimeRangeMake takes a start time and a duration).
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
All AVAssetTrack objects have a preferredTransform property that contains the orientation information for the asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, each layer instruction's transform is set to the corresponding asset track's transform so that, once you adjust the render size, the video in the new composition displays properly.
Setting the Render Size and Frame Duration
To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
}
else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
}
else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
Exporting the Composition and Saving It to the Camera Roll
The final step involves exporting the entire composition into a single video file and saving that file to the camera roll. You use an AVAssetExportSession object to create the new video file, passing it the desired URL for the output. You can then use the ALAssetsLibrary class to save the resulting video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];