About AVFoundation

AVFoundation is one of several frameworks that you can use to play and create time-based audiovisual media. It provides an Objective-C interface you use to work on a detailed level with time-based audiovisual data. For example, you can use it to examine, create, edit, or reencode media files. You can also get input streams from devices and manipulate video during real-time capture and playback. Figure I-1 shows the architecture on iOS.

Figure I-2 shows the corresponding media architecture on OS X.


You should typically use the highest-level abstraction available that allows you to perform the tasks you want.

If you simply want to play movies, use the AVKit framework.

On iOS, to record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).

Note, however, that some of the primitive data structures that you use in AV Foundation—including time-related data structures and opaque objects to carry and describe media data—are declared in the Core Media framework.


At a Glance

There are two facets to the AVFoundation framework—APIs related to video and APIs related just to audio. The older audio-related classes provide easy ways to deal with audio. They are described in the Multimedia Programming Guide, not in this document.

To play sound files, you can use AVAudioPlayer.

To record audio, you can use AVAudioRecorder.

You can also configure the audio behavior of your application using AVAudioSession; this is described in the Audio Session Programming Guide.

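As a minimal sketch of the audio classes above (the resource name "clip.mp3" is a placeholder for a sound file bundled with your app):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Play a bundled sound file; "clip" is a placeholder resource name.
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"clip" withExtension:@"mp3"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
if (player) {
    [player prepareToPlay];  // preload buffers so playback starts promptly
    [player play];
} else {
    NSLog(@"Could not create player: %@", error);
}
```

Keep a strong reference to the player for as long as playback should continue; if it is deallocated, the sound stops.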

Representing and Using Media with AVFoundation

The primary class that the AV Foundation framework uses to represent media is AVAsset. The design of the framework is largely guided by this representation. Understanding its structure will help you to understand how the framework works. An AVAsset instance is an aggregated representation of a collection of one or more pieces of media data (audio and video tracks). It provides information about the collection as a whole, such as its title, duration, natural presentation size, and so on. AVAsset is not tied to a particular data format. AVAsset is the superclass of other classes used to create asset instances from media at a URL (see Using Assets) and to create new compositions (see Editing).

Each of the individual pieces of media data in the asset is of a uniform type and called a track. In a typical simple case, one track represents the audio component, and another represents the video component; in a complex composition, however, there may be multiple overlapping tracks of audio and video. Assets may also have metadata.


A vital concept in AV Foundation is that initializing an asset or a track does not necessarily mean that it is ready for use. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you ask for values and get an answer back asynchronously through a callback that you define using a block.

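A rough sketch of this asynchronous loading pattern (assetURL is an assumed NSURL pointing at a local or remote media file):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Create the asset; this does not load or validate the media yet.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

// Ask for the duration asynchronously instead of blocking the current thread.
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"duration" error:&error];
    if (status == AVKeyValueStatusLoaded) {
        // Safe to read the value now; note this block may run on an arbitrary queue.
        CMTime duration = asset.duration;
        NSLog(@"Duration: %f seconds", CMTimeGetSeconds(duration));
    } else {
        NSLog(@"Failed to load duration: %@", error);
    }
}];
```

Always check the status for each key before reading it; a key can fail to load even when others succeed.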

Relevant Chapters: Using Assets, Time and Media Representations

Playback

AVFoundation allows you to manage the playback of assets in sophisticated ways. To support this, it separates the presentation state of an asset from the asset itself. This allows you to, for example, play two different segments of the same asset at the same time rendered at different resolutions. The presentation state for an asset is managed by a player item object; the presentation state for each track within an asset is managed by a player item track object. Using the player item and player item tracks you can, for example, set the size at which the visual portion of the item is presented by the player, set the audio mix parameters and video composition settings to be applied during playback, or disable components of the asset during playback.

You play player items using a player object, and direct the output of a player to the Core Animation layer. You can use a player queue to schedule playback of a collection of player items in sequence.

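A minimal sketch of that stack (assetURL and view are assumed to exist; view is the view hosting the video):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Build the playback stack: asset -> player item -> player -> player layer.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];

// Direct the player's output to a Core Animation layer in the view hierarchy.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = view.layer.bounds;
[view.layer addSublayer:playerLayer];

[player play];
```

For sequential playback of several items, substitute AVQueuePlayer (an AVPlayer subclass) initialized with an array of player items.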

Relevant Chapter: Playback

Reading, Writing, and Reencoding Assets

AVFoundation allows you to create new representations of an asset in several ways. You can simply reencode an existing asset, or—in iOS 4.1 and later—you can perform operations on the contents of an asset and save the result as a new asset.


You use an export session to reencode an existing asset into a format defined by one of a small number of commonly used presets. If you need more control over the transformation, in iOS 4.1 and later you can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.
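A hedged sketch of the preset-based path (asset and outputURL are assumed; the preset and file type shown are two of the standard constants):

```objectivec
#import <AVFoundation/AVFoundation.h>

// Reencode an existing asset using a built-in preset.
AVAssetExportSession *exportSession =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = outputURL;                  // where the new file is written
exportSession.outputFileType = AVFileTypeQuickTimeMovie;

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished");
    } else {
        NSLog(@"Export failed: %@", exportSession.error);
    }
}];
```

The completion handler, like other AVFoundation callbacks, is not guaranteed to run on any particular queue.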

To produce a visual representation of the waveform, you use an asset reader to read the audio track of an asset.

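A sketch of reading the audio track's samples with an asset reader (asset is assumed; the output settings decompress to linear PCM so the samples can be inspected):

```objectivec
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

// Ask for linear PCM output so the raw samples are directly accessible.
NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatLinearPCM) };
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack
                                               outputSettings:settings];
[reader addOutput:output];
[reader startReading];

// Pull sample buffers until the track is exhausted.
CMSampleBufferRef sampleBuffer;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    // Inspect the PCM samples here to accumulate waveform data.
    CFRelease(sampleBuffer);
}
```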

Relevant Chapter: Using Assets

Thumbnails

To create thumbnail images of video presentations, you initialize an instance of AVAssetImageGenerator using the asset from which you want to generate thumbnails. AVAssetImageGenerator uses the default enabled video tracks to generate images.
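A minimal sketch of grabbing a single frame (asset is assumed; on OS X you would wrap the CGImage in an NSImage instead of a UIImage):

```objectivec
#import <AVFoundation/AVFoundation.h>

AVAssetImageGenerator *generator =
    [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;  // respect the track's orientation

// Request the frame at the midpoint of the asset.
CMTime midpoint = CMTimeMultiplyByFloat64(asset.duration, 0.5);
NSError *error = nil;
CMTime actualTime;
CGImageRef image = [generator copyCGImageAtTime:midpoint
                                     actualTime:&actualTime
                                          error:&error];
if (image) {
    UIImage *thumbnail = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    // Use the thumbnail, e.g. assign it to an image view.
}
```

Note the actualTime out-parameter: the generated frame may come from a nearby time rather than exactly the time requested.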

Relevant Chapter: Using Assets

Editing

AVFoundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks, and adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks; and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it’s collapsed to a file.

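A sketch of building a mutable composition (asset is an assumed source asset; this copies its first five seconds of video into a new in-memory composition):

```objectivec
#import <AVFoundation/AVFoundation.h>

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];

// Insert the first five seconds of the source asset's video track.
AVAssetTrack *sourceTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, CMTimeMake(5, 1));
NSError *error = nil;
[videoTrack insertTimeRange:range
                    ofTrack:sourceTrack
                     atTime:kCMTimeZero
                      error:&error];
```

The composition is itself an AVAsset, so it can be played with a player item or collapsed to a file with an export session.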

You can also create an asset from media such as sample buffers or still images using an asset writer.


Relevant Chapter: Editing

Still and Video Media Capture

Recording input from cameras and microphones is managed by a capture session. A capture session coordinates the flow of data from input devices to outputs such as a movie file. You can configure multiple inputs and outputs for a single session, even when the session is running. You send messages to the session to start and stop data flow.

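A minimal sketch of wiring a session from the default camera to a movie-file output (error handling abbreviated):

```objectivec
#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Connect the default video camera as an input.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if ([session canAddInput:input]) {
    [session addInput:input];
}

// Record to a movie file.
AVCaptureMovieFileOutput *output = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:output]) {
    [session addOutput:output];
}

[session startRunning];   // message the session to start the data flow
```

Always check canAddInput: and canAddOutput: before adding; a session may reject a combination it cannot support.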

In addition, you can use an instance of a preview layer to show the user what a camera is recording.


Relevant Chapter: Still and Video Media Capture

Concurrent Programming with AVFoundation

Callbacks from AVFoundation—invocations of blocks, key-value observers, and notification handlers—are not guaranteed to be made on any particular thread or queue. Instead, AVFoundation invokes these handlers on threads or queues on which it performs its internal tasks.


There are two general guidelines regarding notifications and threading:

UI related notifications occur on the main thread.

Classes or methods that require you to create and/or specify a queue will return notifications on that queue.

Beyond those two guidelines (and there are exceptions, which are noted in the reference documentation) you should not assume that a notification will be returned on any specific thread.


If you’re writing a multithreaded application, you can use the NSThread method isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You could also use dispatch_async to “bounce” to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have set up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide; for more about blocks, see Blocks Programming Topics. The AVCam-iOS: Using AVFoundation to Capture Images and Movies sample code is considered the primary example for all AVFoundation functionality and can be consulted for examples of thread and queue usage with AVFoundation.
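A sketch of the dispatch_async "bounce" from an AVFoundation callback to the main queue (asset is assumed; updateDurationLabel: is a hypothetical method standing in for your UI update):

```objectivec
#import <AVFoundation/AVFoundation.h>

// The completion handler may arrive on an arbitrary queue...
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...so hop to the main queue before touching the UI.
        [self updateDurationLabel:asset.duration];
    });
}];
```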

Prerequisites

AVFoundation is an advanced Cocoa framework. To use it effectively, you must have:

A solid understanding of fundamental Cocoa development tools and techniques

A basic grasp of blocks

A basic understanding of key-value coding and key-value observing

For playback, a basic understanding of Core Animation (see Core Animation Programming Guide or, for basic playback, the AVKit Framework Reference).


See Also

There are several AVFoundation examples, including two that are key to understanding and implementing camera capture functionality:

AVCam-iOS: Using AVFoundation to Capture Images and Movies is the canonical sample code for implementing any program that uses the camera functionality. It is a complete sample, well documented, and covers the majority of the functionality showing the best practices.

AVCamManual: Extending AVCam to Use Manual Capture API is the companion application to AVCam. It implements Camera functionality using the manual camera controls. It is also a complete example, well documented, and should be considered the canonical example for creating camera applications that take advantage of manual controls.

RosyWriter is an example that demonstrates real-time frame processing and in particular how to apply filters to video content. This is a very common developer requirement and this example covers that functionality.

AVLocationPlayer: Using AVFoundation Metadata Reading APIs demonstrates using the metadata APIs.

