Editing

The AVFoundation framework provides a feature-rich set of classes to simplify the editing of audiovisual assets. At the heart of AVFoundation's editing API are compositions. A composition is simply a collection of tracks from one or more different media assets. The AVMutableComposition class provides an interface for inserting and removing tracks, as well as managing their temporal ordering. Figure 3-1 shows how a new composition is pieced together from a combination of existing assets to form a new asset. If all you want to do is merge multiple assets together sequentially into a single file, that is as much detail as you need. If you want to perform any custom audio or video processing on the tracks in your composition, you need to incorporate an audio mix or a video composition, respectively.

Figure 3-1  AVMutableComposition assembles assets together


Using the AVMutableAudioMix class, you can perform custom audio processing on the audio tracks of your composition, as shown in Figure 3-2. Currently, you can specify a maximum volume or set a volume ramp for an audio track.

Figure 3-2  AVMutableAudioMix performs audio mixing


You can use the AVMutableVideoComposition class to work directly with the video tracks in your composition for the purposes of editing, as shown in Figure 3-3. With a single video composition, you can specify the desired render size and scale, as well as the frame duration, for the output video. Through a video composition's instructions (represented by the AVMutableVideoCompositionInstruction class), you can modify the background color of your video and apply layer instructions. These layer instructions (represented by the AVMutableVideoCompositionLayerInstruction class) can be used to apply transforms, transform ramps, opacity, and opacity ramps to the video tracks within your composition. The video composition class also gives you the ability to introduce effects from the Core Animation framework into your video using the animationTool property.

Figure 3-3  AVMutableVideoComposition


To combine your composition with an audio mix and a video composition, you use an AVAssetExportSession object, as shown in Figure 3-4. You initialize the export session with your composition and then simply assign your audio mix and video composition to the audioMix and videoComposition properties, respectively.
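Wiring these pieces together takes only a few assignments. The following minimal sketch assumes that `mutableComposition`, `mutableAudioMix`, and `mutableVideoComposition` have already been built as described later in this chapter:

```objc
// Sketch: attach an audio mix and a video composition to an export session.
// Assumes the three mutable objects were created as shown in the sections below.
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                     presetName:AVAssetExportPresetHighestQuality];
exportSession.audioMix = mutableAudioMix;
exportSession.videoComposition = mutableVideoComposition;
```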

Figure 3-4  Use AVAssetExportSession to combine media elements into an output file


Creating a Composition

To create your own composition, you use the AVMutableComposition class. To add media data to your composition, you must add one or more composition tracks, represented by the AVMutableCompositionTrack class. The simplest case is creating a mutable composition with one video track and one audio track:

AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

Options for Initializing a Composition Track

When adding new tracks to a composition, you must provide both a media type and a track ID. Although audio and video are the most commonly used media types, you can specify other media types as well, such as AVMediaTypeSubtitle or AVMediaTypeText.

Every track associated with some audiovisual data has a unique identifier referred to as a track ID. If you specify kCMPersistentTrackID_Invalid as the preferred track ID, a unique identifier is automatically generated for you and associated with the track.
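As an illustration of these options (the media types and the explicit track ID value here are examples only, not requirements), you might add a non-audio/video track and choose how its identifier is assigned:

```objc
// Sketch: adding tracks with different initialization options.
AVMutableComposition *composition = [AVMutableComposition composition];
// Let AVFoundation generate a unique track ID automatically.
AVMutableCompositionTrack *subtitleTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeSubtitle
                             preferredTrackID:kCMPersistentTrackID_Invalid];
// Or supply your own preferred track ID (1 is an arbitrary example value)
// if you need a stable identifier for the track.
AVMutableCompositionTrack *textTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeText
                             preferredTrackID:1];
```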

Adding Audiovisual Data to a Composition

Once you have a composition with one or more tracks, you can begin adding your media data to the appropriate tracks. To add media data to a composition track, you need access to the AVAsset object where the media data is located. You can use the mutable composition track interface to place multiple tracks with the same underlying media type together on the same composition track. The following example illustrates how to add two different video asset tracks in sequence to the same composition track:

// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];

Retrieving Compatible Composition Tracks

Where possible, you should have only one composition track for each media type. This unification of compatible asset tracks leads to a minimal amount of resource usage. When presenting media data serially, you should place any media data of the same type on the same composition track. You can query a mutable composition to find out whether there are any composition tracks compatible with your desired asset track:

AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
    // Implementation continues.
}

Note: Placing multiple video segments on the same composition track can potentially lead to dropped frames at the transitions between video segments, especially on embedded devices. Choosing the number of composition tracks for your video segments depends entirely on the design of your app and its intended platform.

Generating a Volume Ramp

A single AVMutableAudioMix object can perform custom audio processing on all of the audio tracks in your composition individually. You create an audio mix using the audioMix class method, and you use instances of the AVMutableAudioMixInputParameters class to associate the audio mix with specific tracks within your composition. An audio mix can be used to vary the volume of an audio track. The following example shows how to set a volume ramp on a specific audio track to slowly fade the audio out over the duration of the composition:

AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];

Performing Custom Video Processing

As with an audio mix, you need only one AVMutableVideoComposition object to perform all of your custom video processing on your composition's video tracks. Using a video composition, you can directly set the appropriate render size, scale, and frame duration for your composition's video tracks. For a detailed example of setting appropriate values for these properties, see Setting the Render Size and Frame Duration.

Changing the Composition's Background Color

All video compositions must have an array of AVVideoCompositionInstruction objects containing at least one video composition instruction. You use the AVMutableVideoCompositionInstruction class to create your own video composition instructions. Using video composition instructions, you can modify the composition's background color, specify whether postprocessing is needed, and apply layer instructions.

The following example illustrates how to create a video composition instruction that changes the background color to red for the entire composition.

AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];

Applying Opacity Ramps

Video composition instructions can also be used to apply composition layer instructions. An AVMutableVideoCompositionLayerInstruction object can apply transforms, transform ramps, opacity, and opacity ramps to a certain video track within a composition. The order of the layer instructions in a video composition instruction's layerInstructions array determines how video frames from source tracks should be layered and composed for the duration of that composition instruction. The following code fragment shows how to set an opacity ramp to slowly fade out the first video in a composition before transitioning to the second video:

AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];

Incorporating Core Animation Effects

A video composition can add the power of Core Animation to your composition through the animationTool property. Through this animation tool, you can accomplish tasks such as watermarking video, adding titles, or animating overlays. Core Animation can be used in two different ways with video compositions: you can add a Core Animation layer as its own individual composition track, or you can render Core Animation effects (using a Core Animation layer) directly into the video frames in your composition. The following code shows the latter option by adding a watermark to the center of the video:

CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
   

Putting It All Together: Combining Multiple Assets and Saving the Result to the Camera Roll

This brief code example illustrates how you can combine two video asset tracks and an audio asset track to create a single video file. It shows how to create the composition, add the assets, check the video orientations, apply the video composition layer instructions, set the render size and frame duration, and finally export the composition and save it to the Camera Roll.

Note: To focus on the most relevant code, this example omits several aspects of a complete app, such as memory management and error handling. To use AVFoundation, you are expected to have enough experience with Cocoa to infer the missing pieces.

Creating the Composition

To piece together tracks from separate assets, you use an AVMutableComposition object. Create the composition and add one audio track and one video track.

AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
   

Adding the Assets

An empty composition does you no good. Add the two video asset tracks and the audio asset track to the composition.

AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:kCMTimeZero error:nil];
   

Note: This assumes that you have two assets that each contain at least one video track and a third asset that contains at least one audio track. The videos can be retrieved from the Camera Roll, and the audio track can be retrieved from the music library or from the videos themselves.
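For illustration, such assets can be created from file URLs using AVURLAsset. The file paths below are hypothetical placeholders, not part of the original example:

```objc
// Sketch: create the three assets from local file URLs.
// The paths are placeholders -- in practice you might fetch the assets
// from the photo library or the music library instead.
NSURL *firstVideoURL = [NSURL fileURLWithPath:@"/path/to/first.mov"];
NSURL *secondVideoURL = [NSURL fileURLWithPath:@"/path/to/second.mov"];
NSURL *audioURL = [NSURL fileURLWithPath:@"/path/to/audio.m4a"];
AVAsset *firstVideoAsset = [AVURLAsset URLAssetWithURL:firstVideoURL options:nil];
AVAsset *secondVideoAsset = [AVURLAsset URLAssetWithURL:secondVideoURL options:nil];
AVAsset *audioAsset = [AVURLAsset URLAssetWithURL:audioURL options:nil];
```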

Checking the Video Orientations

Once you add your video and audio tracks to the composition, you need to ensure that the orientations of both video tracks are correct. By default, all video tracks are assumed to be in landscape mode. If your video track was taken in portrait mode, the video will not be oriented properly when it is exported. Likewise, if you try to combine a video shot in portrait mode with a video shot in landscape mode, the export session will fail to complete.

BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
    isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
    isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
    UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
    [incompatibleVideoOrientationAlert show];
    return;
}
   

Applying the Video Composition Layer Instructions

Once you know the video segments have compatible orientations, you can apply the necessary layer instructions to each one and add these layer instructions to the video composition.

AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
   

All AVAssetTrack objects have a preferredTransform property that contains the orientation information for that asset track. This transform is applied whenever the asset track is displayed onscreen. In the previous code, the layer instruction's transform is set to the asset track's transform so that the video in the new composition displays properly once you adjust its render size.

Setting the Render Size and Frame Duration

To complete the video orientation fix, you must adjust the renderSize property accordingly. You should also pick a suitable value for the frameDuration property, such as 1/30th of a second (or 30 frames per second). By default, the renderScale property is set to 1.0, which is appropriate for this composition.

CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
    naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
    naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
}
else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
    naturalSizeFirst = firstVideoAssetTrack.naturalSize;
    naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
    renderWidth = naturalSizeFirst.width;
}
else {
    renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
    renderHeight = naturalSizeFirst.height;
}
else {
    renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
   

Exporting the Composition and Saving It to the Camera Roll

The final step in this process involves exporting the entire composition into a single video file and saving that video to the Camera Roll. You use an AVAssetExportSession object to create the new video file, and you pass it your desired URL for the output file. You can then use the ALAssetsLibrary class to save the resulting video file to the Camera Roll.

// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
    kDateFormatter = [[NSDateFormatter alloc] init];
    kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
    kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        if (exporter.status == AVAssetExportSessionStatusCompleted) {
            ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
            if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
                [assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
            }
        }
    });
}];