本篇文章是基于谷歌有關(guān)Graphic的一篇概覽文章的翻譯:http://source.android.com/devices/graphics/architecture.html
文中大量文字及術(shù)語的翻譯基于我自身的理解,可能并不準(zhǔn)確。
This document describes the essential elements of Android's "system-level" graphics architecture, and how it is used by the application framework and multimedia system. The focus is on how buffers of graphical data move through the system. If you've ever wondered why SurfaceView and TextureView behave the way they do, or how Surface and EGLSurface interact, you've come to the right place.
這篇文檔描述了Android"系統(tǒng)級(jí)"Graphic架構(gòu)的基本要素,以及應(yīng)用框架層和多媒體系統(tǒng)如何使用它。本文的重點(diǎn)在于講述graphic buffer數(shù)據(jù)如何在系統(tǒng)內(nèi)部流轉(zhuǎn)。如果你曾對(duì)SurfaceView和TextureView的工作方式感到好奇,或者希望了解Surface和EGLSurface如何交互,那么朋友,你來對(duì)地方了。
Some familiarity with Android devices and application development is assumed. You don't need detailed knowledge of the app framework, and very few API calls will be mentioned, but the material herein doesn't overlap much with other public documentation. The goal here is to provide a sense for the significant events involved in rendering a frame for output, so that you can make informed choices when designing an application. To achieve this, we work from the bottom up, describing how the UI classes work rather than how they can be used.
閱讀這篇文章前,我們假設(shè)你已經(jīng)對(duì)Android設(shè)備和應(yīng)用開發(fā)有了一定的了解。你不需要了解app framework層的大量知識(shí),文中只會(huì)涉及少量API,但這里的材料和其他公開文檔也不會(huì)有很大重疊。這篇文章重點(diǎn)講解一幀渲染輸出過程中的重要步驟,目的在于使你在設(shè)計(jì)應(yīng)用程序時(shí)做出更明智的選擇。為實(shí)現(xiàn)這個(gè)目標(biāo),我們將自下而上地講解相關(guān)UI類是如何工作的,至于如何使用這些類,則不在本文的講解范圍內(nèi)。
We start with an explanation of Android's graphics buffers, describe the composition and display mechanism, and then proceed to the higher-level mechanisms that supply the compositor with data.
我們將從解釋Android的graphic buffer講起,先描述合成與顯示的機(jī)制,然后再講解向合成器(compositor)提供數(shù)據(jù)的那些更高層的機(jī)制。
This document is chiefly concerned with the system as it exists in Android 4.4 ("KitKat"). Earlier versions of the system worked differently, and future versions will likely be different as well. Version-specific features are called out in a few places.
這篇文章主要基于Android 4.4(KitKat)。更早的系統(tǒng)版本與后續(xù)版本在一些細(xì)節(jié)上會(huì)有所不同,文中少數(shù)地方會(huì)特別指出與具體版本相關(guān)的特性。
At various points I will refer to source code from the AOSP sources or from Grafika. Grafika is a Google open-source project for testing; it can be found at https://github.com/google/grafika. It's more "quick hack" than solid example code, but it will suffice.
在文中的多處,我會(huì)引用AOSP或者Grafika的源碼。Grafika是谷歌的一個(gè)用于測(cè)試的開源項(xiàng)目,地址是 https://github.com/google/grafika 。與其說它是嚴(yán)謹(jǐn)?shù)氖纠a,不如說是"quick hack",但足夠我們使用了。
BufferQueue and gralloc
To understand how Android's graphics system works, we have to start behind the scenes. At the heart of everything graphical in Android is a class called BufferQueue. Its role is simple enough: connect something that generates buffers of graphical data (the "producer") to something that accepts the data for display or further processing (the "consumer"). The producer and consumer can live in different processes. Nearly everything that moves buffers of graphical data through the system relies on BufferQueue.
要理解Android的Graphic系統(tǒng)如何運(yùn)作,我們必須從幕后講起。整個(gè)繪制系統(tǒng)的核心是一個(gè)叫做BufferQueue的類。它的作用其實(shí)很簡(jiǎn)單:將生產(chǎn)繪制數(shù)據(jù)(buffers of graphical data)的模塊(生產(chǎn)者)和將繪制數(shù)據(jù)顯示出來或做進(jìn)一步處理的模塊(消費(fèi)者)相連。生產(chǎn)者和消費(fèi)者可以存在于不同的進(jìn)程內(nèi)。系統(tǒng)中幾乎所有搬運(yùn)graphic buffer數(shù)據(jù)的環(huán)節(jié)都依賴于BufferQueue。
The basic usage is straightforward. The producer requests a free buffer (dequeueBuffer()), specifying a set of characteristics including width, height, pixel format, and usage flags. The producer populates the buffer and returns it to the queue (queueBuffer()). Sometime later, the consumer acquires the buffer (acquireBuffer()) and makes use of the buffer contents. When the consumer is done, it returns the buffer to the queue (releaseBuffer()).
這個(gè)類基本的用法很簡(jiǎn)單。生產(chǎn)者申請(qǐng)一塊空閑的buffer(dequeueBuffer()),申請(qǐng)時(shí)指定寬度、高度、像素格式以及usage flags等一系列參數(shù)。生產(chǎn)者填充緩沖區(qū)后,將它送還給隊(duì)列(queueBuffer())。之后,消費(fèi)者取得這塊buffer(acquireBuffer()),使用其中的數(shù)據(jù)。當(dāng)消費(fèi)者使用完成后,它將buffer歸還給隊(duì)列(releaseBuffer())。
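(譯者補(bǔ)充,非原文內(nèi)容)Java層并沒有直接暴露BufferQueue,不過ImageReader和它提供的Surface大致對(duì)應(yīng)這個(gè)隊(duì)列的消費(fèi)者端與生產(chǎn)者端。下面是一段示意性的Java代碼,用來體會(huì)上面說的dequeue/queue/acquire/release循環(huán),僅供參考:

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import android.view.Surface;

public class BufferQueueDemo {
    public void runOnce() {
        // 消費(fèi)者:ImageReader 在內(nèi)部創(chuàng)建并持有一個(gè) BufferQueue,最多 3 個(gè)緩沖區(qū)
        ImageReader reader = ImageReader.newInstance(
                640, 480, PixelFormat.RGBA_8888, /*maxImages=*/ 3);
        // 生產(chǎn)者:getSurface() 返回的就是這個(gè)隊(duì)列的生產(chǎn)者接口
        Surface producer = reader.getSurface();

        // 相當(dāng)于 dequeueBuffer():取出一塊空閑緩沖區(qū)
        Canvas canvas = producer.lockCanvas(null);
        canvas.drawColor(Color.BLUE);
        // 相當(dāng)于 queueBuffer():填充完畢后入隊(duì)
        producer.unlockCanvasAndPost(canvas);

        // 相當(dāng)于 acquireBuffer():消費(fèi)者取得最新入隊(duì)的緩沖區(qū)
        Image image = reader.acquireLatestImage();
        if (image != null) {
            // ……在這里使用緩沖區(qū)內(nèi)容……
            image.close(); // 相當(dāng)于 releaseBuffer():歸還給隊(duì)列
        }
        reader.close();
    }
}
```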
Most recent Android devices support the "sync framework". This allows the system to do some nifty thing when combined with hardware components that can manipulate graphics data asynchronously. For example, a producer can submit a series of OpenGL ES drawing commands and then enqueue the output buffer before rendering completes. The buffer is accompanied by a fence that signals when the contents are ready. A second fence accompanies the buffer when it is returned to the free list, so that the consumer can release the buffer while the contents are still in use. This approach improves latency and throughput as the buffers move through the system.
較新的Android設(shè)備支持一種叫做"sync framework"的機(jī)制。結(jié)合能夠異步處理圖形數(shù)據(jù)的硬件,系統(tǒng)可以用它做一些很漂亮的事情。比如說,生產(chǎn)者可以提交一系列OpenGL ES繪制命令,不等渲染結(jié)束就把輸出緩沖區(qū)入隊(duì)。緩沖區(qū)會(huì)附帶一個(gè)fence,當(dāng)其內(nèi)容就緒時(shí)fence會(huì)發(fā)出信號(hào)。緩沖區(qū)被歸還到free列表時(shí)也會(huì)附帶第二個(gè)fence,這樣消費(fèi)者就可以在內(nèi)容仍被使用時(shí)提前釋放緩沖區(qū)。這種方式降低了緩沖區(qū)在系統(tǒng)中流轉(zhuǎn)的延遲,并提高了吞吐量。
Some characteristics of the queue, such as the maximum number of buffers it can hold, are determined jointly by the producer and the consumer.
隊(duì)列的一些特性,比如它能容納的最大緩沖區(qū)數(shù)量,是由生產(chǎn)者和消費(fèi)者共同決定的。
The BufferQueue is responsible for allocating buffers as it needs them. Buffers are retained unless the characteristics change; for example, if the producer starts requesting buffers with a different size, the old buffers will be freed and new buffers will be allocated on demand.
BufferQueue負(fù)責(zé)在需要時(shí)分配緩沖區(qū)。除非buffer的屬性發(fā)生變化,否則已分配的buffer會(huì)被保留。舉例來說,如果生產(chǎn)者開始申請(qǐng)不同大小的buffer,舊的buffer將被釋放,新的buffer則按需重新分配。
The data structure is currently always created and "owned" by the consumer. In Android 4.3 only the producer side was "binderized", i.e. the producer could be in a remote process but the consumer had to live in the process where the queue was created. This evolved a bit in 4.4, moving toward a more general implementation.
這個(gè)數(shù)據(jù)結(jié)構(gòu)目前一直是由消費(fèi)者創(chuàng)建并"持有"的。在Android 4.3時(shí),只有生產(chǎn)者一端是"binder化"的,也就是說生產(chǎn)者可以在遠(yuǎn)端進(jìn)程(binder的另一側(cè)進(jìn)程)里,而消費(fèi)者必須在隊(duì)列被創(chuàng)建的進(jìn)程里。4.4在這方面有了一定程度的演進(jìn),朝著更通用的實(shí)現(xiàn)前進(jìn)。
Buffer contents are never copied by BufferQueue. Moving that much data around would be very inefficient. Instead, buffers are always passed by handle.
BufferQueue 永遠(yuǎn)不會(huì)拷貝Buffer的數(shù)據(jù),因?yàn)橐苿?dòng)如此多的數(shù)據(jù)效率將十分低下,buffers只會(huì)以句柄的方式被傳遞。
Gralloc HAL
The actual buffer allocations are performed through a memory allocator called "gralloc", which is implemented through a vendor-specific HAL interface (see hardware/libhardware/include/hardware/gralloc.h). The alloc() function takes the arguments you'd expect -- width, height, pixel format -- as well as a set of usage flags. Those flags merit closer attention.
事實(shí)上,緩沖區(qū)的分配是由一個(gè)叫做gralloc的內(nèi)存分配器完成的,它是一個(gè)由具體廠商實(shí)現(xiàn)的HAL接口(參見 hardware/libhardware/include/hardware/gralloc.h)。alloc()函數(shù)接受你能想到的那些參數(shù)(寬度、高度、像素格式),以及一組usage flags。這些flags值得我們仔細(xì)研究一下。
The gralloc allocator is not just another way to allocate memory on the native heap. In some situations, the allocated memory may not be cache-coherent, or could be totally inaccessible from user space. The nature of the allocation is determined by the usage flags, which include attributes like:
? how often the memory will be accessed from software (CPU)
? how often the memory will be accessed from hardware (GPU)
? whether the memory will be used as an OpenGL ES ("GLES") texture
? whether the memory will be used by a video encoder
gralloc分配器并不只是在native heap上分配內(nèi)存的又一種方式。在一些場(chǎng)景中,分配到的內(nèi)存很可能并不具有緩存一致性(所謂緩存一致性,是指保留在高速緩存中的共享資源,保持?jǐn)?shù)據(jù)一致性的機(jī)制),甚至可能完全無法從用戶空間訪問。分配到的內(nèi)存具有哪些特性,取決于創(chuàng)建時(shí)傳入的usage flags:
? 從軟件層次來訪問這段內(nèi)存的頻率(CPU)
? 從硬件層次來訪問這段內(nèi)存的頻率(GPU)
? 這段內(nèi)存是否被用來做OpenGL ES的材質(zhì)(GLES)
? 這段內(nèi)存是否會(huì)被拿來做視頻的編碼
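(譯者補(bǔ)充,非原文內(nèi)容)在Java層,API 26之后的HardwareBuffer類暴露了與gralloc usage flags相對(duì)應(yīng)的常量,可以借它直觀感受這些標(biāo)志的含義。下面是一段示意代碼:

```java
import android.hardware.HardwareBuffer;

public class GrallocFlagsDemo {
    // 分配一塊主要供 GPU 采樣(用作 GLES 材質(zhì))、CPU 偶爾寫入的緩沖區(qū)
    public static HardwareBuffer allocForGpuSampling() {
        long usage = HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE  // 將被用作 GLES 材質(zhì)
                   | HardwareBuffer.USAGE_CPU_WRITE_RARELY;  // CPU 偶爾從軟件層寫入
        return HardwareBuffer.create(
                1280, 720, HardwareBuffer.RGBA_8888, /*layers=*/ 1, usage);
    }
}
```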
For example, if your format specifies RGBA 8888 pixels, and you indicate the buffer will be accessed from software -- meaning your application will touch pixels directly -- then the allocator needs to create a buffer with 4 bytes per pixel in R-G-B-A order. If instead you say the buffer will only be accessed from hardware and as a GLES texture, the allocator can do anything the GLES driver wants -- BGRA ordering, non-linear "swizzled" layouts, alternative color formats, etc. Allowing the hardware to use its preferred format can improve performance. Some values cannot be combined on certain platforms. For example, the "video encoder" flag may require YUV pixels, so adding "software access" and specifying RGBA 8888 would fail. The handle returned by the gralloc allocator can be passed between processes through Binder.
比如說,如果你指定了RGBA 8888的像素格式,并且聲明緩沖區(qū)會(huì)被軟件訪問(這意味著你的程序會(huì)直接讀寫像素),那么allocator必須創(chuàng)建一個(gè)每像素4字節(jié)、按R-G-B-A順序排列的緩沖區(qū)。反之,如果你聲明緩沖區(qū)只會(huì)被硬件訪問、并且用作GLES材質(zhì),allocator就可以按GLES驅(qū)動(dòng)喜歡的任何方式來組織數(shù)據(jù):BGRA順序、非線性的"swizzled"布局、其他顏色格式等等。允許硬件使用它偏好的格式可以提高性能。
一些數(shù)值的組合在某些特定的平臺(tái)是不被允許的。比如說,video encoder對(duì)應(yīng)的可能是YUV的數(shù)據(jù)格式,所以如果我們?nèi)绻尤雜oftware access,并且指定數(shù)據(jù)格式為RGBA 8888就有可能失敗。
gralloc allocator創(chuàng)建的緩沖區(qū)的句柄將通過binder在不同進(jìn)程之間傳輸。
SurfaceFlinger and Hardware Composer
Having buffers of graphical data is wonderful, but life is even better when you get to see them on your device's screen. That's where SurfaceFlinger and the Hardware Composer HAL come in.
擁有Graphic數(shù)據(jù)的緩沖區(qū)很美妙,但是如果你能看到它們顯示在屏幕上才更是讓你覺得人生完美。是時(shí)候讓SurfaceFlinger 和 Hardware Composer HAL登場(chǎng)了。
SurfaceFlinger's role is to accept buffers of data from multiple sources, composite them, and send them to the display. Once upon a time this was done with software blitting to a hardware framebuffer (e.g./dev/graphics/fb0), but those days are long gone.
SurfaceFlinger的工作是接受來自不同來源的緩沖區(qū)數(shù)據(jù),將這些數(shù)據(jù)混合,然后發(fā)送數(shù)據(jù)到顯示設(shè)備上。曾幾何時(shí),這些功能是由軟件直接復(fù)制數(shù)據(jù)到硬件的framebuffer上(e.g./dev/graphics/fb0),但這樣的日子早已一去不復(fù)返。
When an app comes to the foreground, the WindowManager service asks SurfaceFlinger for a drawing surface. SurfaceFlinger creates a "layer" - the primary component of which is a BufferQueue - for which SurfaceFlinger acts as the consumer. A Binder object for the producer side is passed through the WindowManager to the app, which can then start sending frames directly to SurfaceFlinger. (Note: The WindowManager uses the term "window" instead of "layer" for this and uses "layer" to mean something else. We're going to use the SurfaceFlinger terminology. It can be argued that SurfaceFlinger should really be called LayerFlinger.)
當(dāng)一個(gè)app轉(zhuǎn)到前臺(tái)時(shí),WindowManager服務(wù)會(huì)向SurfaceFlinger申請(qǐng)一個(gè)用于繪制的surface。SurfaceFlinger會(huì)創(chuàng)建一個(gè)layer(它的主要組成部分是一個(gè)BufferQueue),并由SurfaceFlinger自己扮演消費(fèi)者的角色。代表生產(chǎn)者一側(cè)的binder對(duì)象經(jīng)由WindowManager傳給app,之后app就可以直接向SurfaceFlinger發(fā)送幀數(shù)據(jù)了(注意:對(duì)于這個(gè)概念,WindowManager使用的術(shù)語是window而不是layer,它的layer另有所指。我們這里將使用SurfaceFlinger體系下的術(shù)語??梢哉f,SurfaceFlinger其實(shí)更應(yīng)該叫LayerFlinger)。
For most apps, there will be three layers on screen at any time: the "status bar" at the top of the screen, the "navigation bar" at the bottom or side, and the application's UI. Some apps will have more or less, e.g. the default home app has a separate layer for the wallpaper, while a full-screen game might hide the status bar. Each layer can be updated independently. The status and navigation bars are rendered by a system process, while the app layers are rendered by the app, with no coordination between the two.
對(duì)于大多數(shù)app來說,屏幕上任何時(shí)刻一般都有三個(gè)layer:屏幕上方的status bar,屏幕下方或側(cè)面的navigation bar(實(shí)際上很多品牌的手機(jī)并沒有navigation bar,比如三星),以及應(yīng)用本身的UI。有些應(yīng)用的layer會(huì)多一些或少一些,比如默認(rèn)的home應(yīng)用會(huì)給壁紙一個(gè)獨(dú)立的layer,而一個(gè)全屏游戲可能會(huì)隱藏status bar。每個(gè)layer都是獨(dú)立更新的。status bar和navigation bar由系統(tǒng)進(jìn)程渲染,app的layer則由app自己渲染,二者之間并沒有協(xié)同。
Device displays refresh at a certain rate, typically 60 frames per second on phones and tablets. If the display contents are updated mid-refresh, "tearing" will be visible; so it's important to update the contents only between cycles. The system receives a signal from the display when it's safe to update the contents. For historical reasons we'll call this the VSYNC signal.
設(shè)備顯示屏以固定的頻率刷新,在手機(jī)和平板上一般是每秒60幀。如果在一次刷新的中途更新顯示內(nèi)容,就會(huì)看到畫面撕裂;所以,只在兩個(gè)刷新周期之間更新內(nèi)容至關(guān)重要。當(dāng)可以安全更新內(nèi)容時(shí),顯示設(shè)備會(huì)給系統(tǒng)發(fā)送一個(gè)信號(hào)。基于某種歷史上的原因,我們將這個(gè)信號(hào)稱之為VSYNC信號(hào)。
The refresh rate may vary over time, e.g. some mobile devices will range from 58 to 62fps depending on current conditions. For an HDMI-attached television, this could theoretically dip to 24 or 48Hz to match a video. Because we can update the screen only once per refresh cycle, submitting buffers for display at 200fps would be a waste of effort as most of the frames would never be seen. Instead of taking action whenever an app submits a buffer, SurfaceFlinger wakes up when the display is ready for something new.
刷新率可能隨時(shí)間變化,比如某些型號(hào)的設(shè)備會(huì)根據(jù)當(dāng)前狀況在58到62fps之間浮動(dòng)。對(duì)于一臺(tái)通過HDMI連接的電視,這個(gè)值理論上可以降到24或48Hz以匹配視頻幀率。因?yàn)槊總€(gè)刷新周期只能更新一次屏幕內(nèi)容,如果以200fps的頻率提交buffer,那么大多數(shù)幀永遠(yuǎn)不會(huì)被看到,純屬浪費(fèi)。所以,SurfaceFlinger并不會(huì)在app每次提交buffer時(shí)都有所動(dòng)作,而是只在顯示設(shè)備準(zhǔn)備好接收新內(nèi)容時(shí)才被喚醒。
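(譯者補(bǔ)充,非原文內(nèi)容)在app一側(cè),與VSYNC對(duì)齊的標(biāo)準(zhǔn)做法是使用Choreographer:每個(gè)VSYNC只渲染并提交一幀,而不是盲目地以更高頻率提交。下面是一個(gè)簡(jiǎn)單示意:

```java
import android.view.Choreographer;

public class VsyncPacedRenderer implements Choreographer.FrameCallback {
    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        // frameTimeNanos 是本次 VSYNC 的時(shí)間戳,在這里渲染并提交一幀
        renderFrame(frameTimeNanos);
        // 繼續(xù)排隊(duì)下一個(gè) VSYNC 回調(diào)
        Choreographer.getInstance().postFrameCallback(this);
    }

    private void renderFrame(long frameTimeNanos) {
        // ……實(shí)際的繪制與提交……
    }
}
```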
When the VSYNC signal arrives, SurfaceFlinger walks through its list of layers looking for new buffers. If it finds a new one, it acquires it; if not, it continues to use the previously-acquired buffer. SurfaceFlinger always wants to have something to display, so it will hang on to one buffer. If no buffers have ever been submitted on a layer, the layer is ignored.
當(dāng)VSYNC信號(hào)到達(dá)時(shí),SurfaceFlinger會(huì)遍歷它的layer列表查找新的buffer。如果找到了新的,就將其acquire;如果沒有,就繼續(xù)使用之前acquire到的那塊buffer。SurfaceFlinger總是需要有東西可以顯示,所以它會(huì)一直抓著一塊buffer不放。如果一個(gè)layer上從來沒有提交過buffer,那么這個(gè)layer將被忽略。
Once SurfaceFlinger has collected all of the buffers for visible layers, it asks the Hardware Composer how composition should be performed.
一旦SurfaceFlinger已經(jīng)收集到了所有可見layer的buffer,它將請(qǐng)求Hardware Composer來執(zhí)行混合的操作。
Hardware Composer
The Hardware Composer HAL ("HWC") was first introduced in Android 3.0 ("Honeycomb") and has evolved steadily over the years. Its primary purpose is to determine the most efficient way to composite buffers with the available hardware. As a HAL, its implementation is device-specific and usually implemented by the display hardware OEM.
HWC最早是在Android 3.0(Honeycomb)引入的,多年來一直在穩(wěn)步演進(jìn)。它的主要作用是利用現(xiàn)有硬件,決定合成緩沖區(qū)的最高效方式。作為一個(gè)HAL層接口,它的實(shí)現(xiàn)是設(shè)備相關(guān)的,通常由顯示硬件的OEM廠商完成。
The value of this approach is easy to recognize when you consider "overlay planes." The purpose of overlay planes is to composite multiple buffers together, but in the display hardware rather than the GPU. For example, suppose you have a typical Android phone in portrait orientation, with the status bar on top and navigation bar at the bottom, and app content everywhere else. The contents for each layer are in separate buffers. You could handle composition by rendering the app content into a scratch buffer, then rendering the status bar over it, then rendering the navigation bar on top of that, and finally passing the scratch buffer to the display hardware. Or, you could pass all three buffers to the display hardware, and tell it to read data from different buffers for different parts of the screen. The latter approach can be significantly more efficient.
如果考慮一下"overlay planes"的場(chǎng)景,這種方式的價(jià)值就很容易理解。overlay planes的作用是在display hardware而不是GPU中同時(shí)合成多個(gè)buffer。比如說,一部典型的Android手機(jī)處于豎屏狀態(tài):屏幕上方是status bar,下方是navigation bar,其余部分是應(yīng)用內(nèi)容,每個(gè)layer都有自己獨(dú)立的buffer。你可以先把a(bǔ)pp的內(nèi)容渲染到一塊臨時(shí)緩沖區(qū)(scratch buffer)中,再把status bar畫在上面,然后把navigation bar畫在更上面,最后把這塊臨時(shí)緩沖區(qū)交給顯示硬件;或者,你也可以把三塊緩沖區(qū)都交給顯示硬件,告訴它在屏幕的不同部分分別從不同的緩沖區(qū)讀取數(shù)據(jù)。后一種方法可以明顯更有效率。
As you might expect, the capabilities of different display processors vary significantly. The number of overlays, whether layers can be rotated or blended, and restrictions on positioning and overlap can be difficult to express through an API. So, the HWC works like this:
- SurfaceFlinger provides the HWC with a full list of layers, and asks, "how do you want to handle this?"
- The HWC responds by marking each layer as "overlay" or "GLES composition."
- SurfaceFlinger takes care of any GLES composition, passing the output buffer to HWC, and lets HWC handle the rest.
如你所料,不同顯示處理器的能力差異巨大。overlay的數(shù)量、layer能否被旋轉(zhuǎn)或混合,以及在位置和重疊上的限制,都很難通過一個(gè)API準(zhǔn)確表達(dá)。因此,HWC模塊是這樣運(yùn)作的:
1.SurfaceFlinger給HWC提供一份完整的layer列表,然后問,“你打算如何處理?”
2.HWC將每個(gè)layer標(biāo)記為overlay或者GLES composition,并以此回復(fù)SurfaceFlinger
3.SurfaceFlinger來處理被標(biāo)記為GLES composition的layer,將處理之后的數(shù)據(jù)傳輸給HWC,并且讓HWC模塊來處理剩下的工作。
Since the decision-making code can be custom tailored by the hardware vendor, it's possible to get the best performance out of every device.
因?yàn)檫@部分決策代碼可以由硬件廠商自己定制,所以在每臺(tái)設(shè)備上都有可能獲得最佳性能。
Overlay planes may be less efficient than GL composition when nothing on the screen is changing. This is particularly true when the overlay contents have transparent pixels, and overlapping layers are being blended together. In such cases, the HWC can choose to request GLES composition for some or all layers and retain the composited buffer. If SurfaceFlinger comes back again asking to composite the same set of buffers, the HWC can just continue to show the previously-composited scratch buffer. This can improve the battery life of an idle device.
當(dāng)屏幕上沒有任何內(nèi)容變化時(shí),overlay planes的效率可能反而不如GL composition。當(dāng)overlay的內(nèi)容中有透明像素,并且相互重疊的layer需要混合在一起時(shí),尤其如此。在這種情況下,HWC可以為部分或全部layer請(qǐng)求GLES composition,并且保留合成后的buffer。如果SurfaceFlinger再次請(qǐng)求合成同一組buffer,HWC可以直接繼續(xù)顯示之前保存好的那塊合成結(jié)果。這樣可以延長(zhǎng)設(shè)備空閑時(shí)的電池續(xù)航。
Devices shipping with Android 4.4 ("KitKat") typically support four overlay planes. Attempting to composite more layers than there are overlays will cause the system to use GLES composition for some of them; so the number of layers used by an application can have a measurable impact on power consumption and performance.
搭載Android 4.4(KitKat)的設(shè)備一般支持四個(gè)overlay planes。如果嘗試合成的layer數(shù)量超過overlay數(shù)量,系統(tǒng)就會(huì)對(duì)其中一部分layer改用GLES composition;所以一個(gè)應(yīng)用使用多少layer,會(huì)對(duì)功耗和性能產(chǎn)生可觀測(cè)的影響。
You can see exactly what SurfaceFlinger is up to with the command adb shell dumpsys SurfaceFlinger. The output is verbose. The part most relevant to our current discussion is the HWC summary that appears near the bottom of the output:
你可以通過adb shell dumpsys SurfaceFlinger命令來查看SurfaceFlinger具體在做什么。這個(gè)命令的輸出十分的長(zhǎng),其中與我們當(dāng)前討論關(guān)聯(lián)最深的是靠近輸出底部的HWC總結(jié)段:
type | source crop | frame name
------------+-----------------------------------+--------------------------------
HWC | [ 0.0, 0.0, 320.0, 240.0] | [ 48, 411, 1032, 1149] SurfaceView
HWC | [ 0.0, 75.0, 1080.0, 1776.0] | [ 0, 75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
HWC | [ 0.0, 0.0, 1080.0, 75.0] | [ 0, 0, 1080, 75] StatusBar
HWC | [ 0.0, 0.0, 1080.0, 144.0] | [ 0, 1776, 1080, 1920] NavigationBar
FB TARGET | [ 0.0, 0.0, 1080.0, 1920.0] | [ 0, 0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
This tells you what layers are on screen, whether they're being handled with overlays ("HWC") or OpenGL ES composition ("GLES"), and gives you a bunch of other facts you probably won't care about ("handle" and "hints" and "flags" and other stuff that we've trimmed out of the snippet above). The "source crop" and "frame" values will be examined more closely later on.
從這里我們可以看到哪些layer顯示在屏幕上,它們是由overlay("HWC")處理,還是由OpenGL ES composition("GLES")處理,另外還有一些我們目前不太關(guān)注的屬性("handle"、"hints"、"flags"等等,上面的片段中已將它們裁掉)。我們會(huì)在后面詳細(xì)解釋"source crop"和"frame"這兩個(gè)值的含義。
The FB_TARGET layer is where GLES composition output goes. Since all layers shown above are using overlays, FB_TARGET isn't being used for this frame. The layer's name is indicative of its original role: On a device with /dev/graphics/fb0 and no overlays, all composition would be done with GLES, and the output would be written to the framebuffer. On recent devices there generally is no simple framebuffer, so the FB_TARGET layer is a scratch buffer. (Note: This is why screen grabbers written for old versions of Android no longer work: They're trying to read from The Framebuffer, but there is no such thing.)
FB_TARGET這個(gè)layer是GLES composition輸出的去處。由于上面列出的所有l(wèi)ayer都使用了overlay,這一幀里FB_TARGET并沒有被使用。這個(gè)layer的名字表明了它最初的角色:在一個(gè)只有/dev/graphics/fb0、沒有overlay的設(shè)備上,所有合成工作都由GLES完成,輸出會(huì)被寫入framebuffer。在當(dāng)前的設(shè)備上一般已經(jīng)沒有一個(gè)單純的framebuffer了,所以FB_TARGET layer實(shí)際上是一塊臨時(shí)緩沖區(qū)(scratch buffer)(注:這就是為什么一些針對(duì)Android早期版本編寫的屏幕截圖工具不再能正常工作:它們?cè)噲D從framebuffer中讀取數(shù)據(jù),但framebuffer已經(jīng)不存在了)。
The overlay planes have another important role: they're the only way to display DRM content. DRM-protected buffers cannot be accessed by SurfaceFlinger or the GLES driver, which means that your video will disappear if HWC switches to GLES composition.
overlay planes還有另一個(gè)重要的作用:這是顯示DRM內(nèi)容的唯一途徑。受DRM保護(hù)的buffer無法被SurfaceFlinger或GLES驅(qū)動(dòng)訪問,這意味著一旦HWC切換到GLES composition,你的視頻畫面就會(huì)消失。
The Need for Triple-Buffering
To avoid tearing on the display, the system needs to be double-buffered: the front buffer is displayed while the back buffer is being prepared. At VSYNC, if the back buffer is ready, you quickly switch them. This works reasonably well in a system where you're drawing directly into the framebuffer, but there's a hitch in the flow when a composition step is added. Because of the way SurfaceFlinger is triggered, our double-buffered pipeline will have a bubble.
為了避免畫面撕裂,系統(tǒng)需要雙重緩沖:前臺(tái)緩沖區(qū)用于顯示,后臺(tái)緩沖區(qū)用于準(zhǔn)備下一幀。在VSYNC到來時(shí),如果后臺(tái)緩沖區(qū)已經(jīng)就緒,你就迅速交換二者。如果你總是直接向framebuffer繪入數(shù)據(jù),這種工作方式已經(jīng)足夠好;但是當(dāng)我們加入一個(gè)合成步驟后,流程里就會(huì)出現(xiàn)問題。由于SurfaceFlinger被觸發(fā)的方式,我們的雙緩沖流水線會(huì)出現(xiàn)一個(gè)氣泡(bubble)。
Suppose frame N is being displayed, and frame N+1 has been acquired by SurfaceFlinger for display on the next VSYNC. (Assume frame N is composited with an overlay, so we can't alter the buffer contents until the display is done with it.) When VSYNC arrives, HWC flips the buffers. While the app is starting to render frame N+2 into the buffer that used to hold frame N, SurfaceFlinger is scanning the layer list, looking for updates. SurfaceFlinger won't find any new buffers, so it prepares to show frame N+1 again after the next VSYNC. A little while later, the app finishes rendering frame N+2 and queues it for SurfaceFlinger, but it's too late. This has effectively cut our maximum frame rate in half.
假設(shè)frame N正在被顯示,而frame N+1已經(jīng)被SurfaceFlinger取得,用于下一次VSYNC時(shí)的顯示(假設(shè)frame N是用overlay合成的,所以在顯示處理完成之前,我們無法修改buffer的內(nèi)容)。當(dāng)VSYNC信號(hào)到來時(shí),HWC翻轉(zhuǎn)了緩沖區(qū)。當(dāng)app開始把frame N+2渲染到frame N用過的那塊緩沖區(qū)里時(shí),SurfaceFlinger開始檢查layer列表,查看是否有更新。此時(shí)SurfaceFlinger并不會(huì)發(fā)現(xiàn)任何新的buffer,所以它會(huì)準(zhǔn)備在下一個(gè)VSYNC到來時(shí)繼續(xù)顯示frame N+1。一段時(shí)間后,app結(jié)束了frame N+2的渲染并將其入隊(duì)給SurfaceFlinger,但為時(shí)已晚。這實(shí)際上把我們的最大幀率砍掉了一半。
We can fix this with triple-buffering. Just before VSYNC, frame N is being displayed, frame N+1 has been composited (or scheduled for an overlay) and is ready to be displayed, and frame N+2 is queued up and ready to be acquired by SurfaceFlinger. When the screen flips, the buffers rotate through the stages with no bubble. The app has just less than a full VSYNC period (16.7ms at 60fps) to do its rendering and queue the buffer. And SurfaceFlinger / HWC has a full VSYNC period to figure out the composition before the next flip. The downside is that it takes at least two VSYNC periods for anything that the app does to appear on the screen. As the latency increases, the device feels less responsive to touch input.
三重緩沖可以解決這個(gè)問題。VSYNC信號(hào)之前,幀N正在被顯示,幀N+1已經(jīng)合成完畢(或者已計(jì)劃用overlay顯示),等待被顯示,而幀N+2已經(jīng)入隊(duì),等著被SurfaceFlinger取走。當(dāng)屏幕翻轉(zhuǎn)時(shí),各個(gè)緩沖區(qū)在流水線的各階段中輪轉(zhuǎn),不會(huì)產(chǎn)生氣泡。App有略少于一個(gè)完整VSYNC周期的時(shí)間(60fps時(shí)是16.7毫秒)去完成渲染并將buffer入隊(duì)。在下一次翻轉(zhuǎn)之前,SurfaceFlinger/HWC有一個(gè)完整的VSYNC周期去完成合成工作。壞消息是,app所做的任何事情要顯示到屏幕上,至少需要兩個(gè)VSYNC周期。由于延遲增加了,設(shè)備會(huì)顯得對(duì)觸摸輸入的響應(yīng)不夠靈敏。
SurfaceFlinger with BufferQueue
Figure 1. SurfaceFlinger + BufferQueue
The diagram above depicts the flow of SurfaceFlinger and BufferQueue. During frame:
上面的圖表描述了SurfaceFlinger and BufferQueue的處理流程,在每一幀中:
1.red buffer fills up, then slides into BufferQueue
2.after red buffer leaves app, blue buffer slides in, replacing it
3.green buffer and systemUI shadow-slide into HWC (showing that SurfaceFlinger still has the buffers, but now HWC has prepared them for display via overlay on the next VSYNC).
The blue buffer is referenced by both the display and the BufferQueue. The app is not allowed to render to it until the associated sync fence signals.
1.紅色的緩沖區(qū)填滿后,進(jìn)入BufferQueue中
2.當(dāng)紅色緩沖區(qū)離開app后,藍(lán)色的緩沖區(qū)進(jìn)入并代替了它
3.綠色緩沖區(qū)和SystemUI的數(shù)據(jù)滑入HWC(圖中虛影表示SurfaceFlinger仍然持有這些緩沖區(qū),但HWC已經(jīng)準(zhǔn)備好在下一個(gè)VSYNC到來時(shí),將它們通過overlay顯示到屏幕上)
藍(lán)色的緩沖區(qū)同時(shí)被顯示和BufferQueue引用,因此在相關(guān)的同步信號(hào)到來前,app是不能在這塊緩沖區(qū)上渲染的。
On VSYNC, all of these happen at once:
當(dāng)VSYNC到來時(shí),以下操作同時(shí)發(fā)生:
1.Red buffer leaps into SurfaceFlinger, replacing green buffer
2.Green buffer leaps into Display, replacing blue buffer, and a dotted-line green twin appears in the BufferQueue
3.The blue buffer’s fence is signaled, and the blue buffer in App empties
4.Display rect changes from <blue + SystemUI> to <green + SystemUI>
1.紅色的緩沖區(qū)進(jìn)入Surfaceflinger,取代了綠色緩沖區(qū)
2.綠色緩沖區(qū)取代了藍(lán)色緩沖區(qū),開始顯示,同時(shí)圖中虛線連接的,綠色緩沖區(qū)的復(fù)制保存在了BufferQueue中
3.藍(lán)色緩沖區(qū)的fence發(fā)出了信號(hào),App中的藍(lán)色緩沖區(qū)被清空
4.顯示內(nèi)容從藍(lán)色緩沖區(qū)+SystemUI變成了綠色緩沖區(qū)+systemUI
The System UI process is providing the status and nav bars, which for our purposes here aren’t changing, so SurfaceFlinger keeps using the previously-acquired buffer. In practice there would be two separate buffers, one for the status bar at the top, one for the navigation bar at the bottom, and they would be sized to fit their contents. Each would arrive on its own BufferQueue.
SystemUI進(jìn)程提供了狀態(tài)欄和導(dǎo)航欄,在我們的場(chǎng)景里它們的內(nèi)容沒有變化,因此SurfaceFlinger繼續(xù)使用之前acquire到的buffer。而實(shí)際上,這里會(huì)是兩個(gè)獨(dú)立的buffer,一個(gè)屬于上面的狀態(tài)欄,一個(gè)屬于下面的導(dǎo)航欄,它們的大小會(huì)按各自的內(nèi)容來定。每一個(gè)都會(huì)通過它自己的BufferQueue送達(dá)。
The buffer doesn’t actually “empty”; if you submit it without drawing on it you’ll get that same blue again. The emptying is the result of clearing the buffer contents, which the app should do before it starts drawing.
這里的buffer并非真的被"清空"了;如果你不在上面繪制就直接提交,你會(huì)再次得到同樣的藍(lán)色畫面。"變空"其實(shí)是清除緩沖區(qū)內(nèi)容的結(jié)果,而清除這件事應(yīng)該由app在開始繪制之前來做。
We can reduce the latency by noting layer composition should not require a full VSYNC period. If composition is performed by overlays, it takes essentially zero CPU and GPU time. But we can't count on that, so we need to allow a little time. If the app starts rendering halfway between VSYNC signals, and SurfaceFlinger defers the HWC setup until a few milliseconds before the signal is due to arrive, we can cut the latency from 2 frames to perhaps 1.5. In theory you could render and composite in a single period, allowing a return to double-buffering; but getting it down that far is difficult on current devices. Minor fluctuations in rendering and composition time, and switching from overlays to GLES composition, can cause us to miss a swap deadline and repeat the previous frame.
通過讓合成不占用一整個(gè)VSYNC時(shí)間的辦法,我們可以降低延遲。如果合成是由overlay來實(shí)現(xiàn)的,那么它幾乎不需要消耗CPU和GPU時(shí)間。但我們不能依賴于此,因此我們需要一點(diǎn)額外的時(shí)間。如果app在兩個(gè)VSYNC信號(hào)中間開始渲染,而surfaceFlinger直到VSYNC到達(dá)前的幾毫秒才進(jìn)行了HWC的設(shè)置(譯者注:setUpHWComposer調(diào)用,也就是把需要顯示的layer數(shù)據(jù)準(zhǔn)備好,報(bào)給HWC模塊來決定使用誰來合成),那么我們可以將延遲從2幀降到1.5幀。理論上來說我們可以讓渲染和合成在一個(gè)周期內(nèi),這樣雙重緩沖區(qū)足矣(譯者注:的確,理論上來說如果這個(gè)過程不消耗時(shí)間的話,app在VSYNC之后dequeue到buffer,開始渲染,然后在這個(gè)VSYNC時(shí)間內(nèi)完成渲染,要求合成,合成如果瞬間完成,的確不需要多一個(gè)VSYNC周期,兩個(gè)周期足矣,但這要求太高了);但這對(duì)當(dāng)前的設(shè)備來說要求太高了,渲染和合成時(shí)一點(diǎn)微小的耗時(shí)變化(使用GLES而不是HWC來合成),都會(huì)導(dǎo)致錯(cuò)過更新時(shí)間,導(dǎo)致重復(fù)顯示上一幀。
SurfaceFlinger's buffer handling demonstrates the fence-based buffer management mentioned earlier. If we're animating at full speed, we need to have an acquired buffer for the display ("front") and an acquired buffer for the next flip ("back"). If we're showing the buffer on an overlay, the contents are being accessed directly by the display and must not be touched. But if you look at an active layer's BufferQueue state in the dumpsys SurfaceFlinger output, you'll see one acquired buffer, one queued buffer, and one free buffer. That's because, when SurfaceFlinger acquires the new "back" buffer, it releases the current "front" buffer to the queue. The "front" buffer is still in use by the display, so anything that dequeues it must wait for the fence to signal before drawing on it. So long as everybody follows the fencing rules, all of the queue-management IPC requests can happen in parallel with the display.
SurfaceFlinger對(duì)buffer的處理展示了前面提過的基于fence的緩沖區(qū)管理。如果動(dòng)畫全速運(yùn)行,我們需要一塊已acquire的緩沖區(qū)用于顯示(front),還需要一塊已acquire的緩沖區(qū)用于下一次翻轉(zhuǎn)(back)。如果buffer是通過overlay顯示的,其內(nèi)容正被顯示設(shè)備直接讀取,因此不能被修改。但如果你在dumpsys SurfaceFlinger的輸出中觀察一個(gè)活躍layer的BufferQueue狀態(tài),你會(huì)看到一塊acquired buffer、一塊queued buffer和一塊free buffer。這是因?yàn)?,?dāng)SurfaceFlinger acquire新的back buffer時(shí),它會(huì)把當(dāng)前的front buffer釋放回隊(duì)列。此時(shí)front buffer仍在被顯示設(shè)備使用,所以任何dequeue到它的一方,都必須等fence發(fā)出信號(hào)后才能在上面繪制。只要每一方都遵守這套fence規(guī)則,所有隊(duì)列管理的IPC請(qǐng)求就都可以與顯示并行進(jìn)行。
Virtual Displays
SurfaceFlinger supports a "primary" display, i.e. what's built into your phone or tablet, and an "external" display, such as a television connected through HDMI. It also supports a number of "virtual" displays, which make composited output available within the system. Virtual displays can be used to record the screen or send it over a network.
SurfaceFlinger支持一個(gè)主顯示(primary display,即手機(jī)或平板內(nèi)置的屏幕)和一個(gè)外接顯示(external display,比如通過HDMI連接的電視)。它還支持若干虛擬顯示(virtual display),虛擬顯示把合成后的輸出提供給系統(tǒng)內(nèi)部使用,可用于錄制屏幕或通過網(wǎng)絡(luò)發(fā)送。
Virtual displays may share the same set of layers as the main display (the "layer stack") or have its own set. There is no VSYNC for a virtual display, so the VSYNC for the primary display is used to trigger composition for all displays.
虛擬顯示可以與主顯示共享同一組layer(即"layer stack"),也可以有自己的一組。虛擬顯示并沒有VSYNC,所以主顯示的VSYNC被用于觸發(fā)所有顯示的合成工作。
In the past, virtual displays were always composited with GLES. The Hardware Composer managed composition for only the primary display. In Android 4.4, the Hardware Composer gained the ability to participate in virtual display composition.
在過去,虛擬顯示的合成一直是由GLES來完成的。HWC僅僅用于主顯示,但是在KK,HWC也可以參與虛擬顯示的合成工作了。
As you might expect, the frames generated for a virtual display are written to a BufferQueue.
正如你所料,虛擬顯示的幀是被寫入了一個(gè)BufferQueue的。
Surface and SurfaceHolder
The Surface class has been part of the public API since 1.0. Its description simply says, "Handle onto a raw buffer that is being managed by the screen compositor." The statement was accurate when initially written but falls well short of the mark on a modern system.
Surface類從1.0開始就是公開API的一部分。它的描述很簡(jiǎn)單:"指向由屏幕合成器(screen compositor)所管理的一塊raw buffer的句柄"。這句話在最初寫下時(shí)是準(zhǔn)確的,但放到現(xiàn)代系統(tǒng)上就遠(yuǎn)遠(yuǎn)偏離事實(shí)了。
The Surface represents the producer side of a buffer queue that is often (but not always!) consumed by SurfaceFlinger. When you render onto a Surface, the result ends up in a buffer that gets shipped to the consumer. A Surface is not simply a raw chunk of memory you can scribble on.
Surface代表了一個(gè)buffer queue的生產(chǎn)者一側(cè),而這個(gè)buffer queue通常(但并不總是)由SurfaceFlinger來消費(fèi)。當(dāng)你向一個(gè)Surface渲染時(shí),結(jié)果會(huì)進(jìn)入一塊緩沖區(qū),并被運(yùn)送到消費(fèi)者那里。Surface并不是一塊可以隨意涂寫的簡(jiǎn)單raw內(nèi)存。
The BufferQueue for a display Surface is typically configured for triple-buffering; but buffers are allocated on demand. So if the producer generates buffers slowly enough -- maybe it's animating at 30fps on a 60fps display -- there might only be two allocated buffers in the queue. This helps minimize memory consumption. You can see a summary of the buffers associated with every layer in the dumpsys SurfaceFlinger output.
一個(gè)用于顯示的Surface的BufferQueue一般被配置為三重緩沖;但緩沖區(qū)是按需分配的。所以如果生產(chǎn)者生成緩沖區(qū)足夠慢(比如在60fps的屏幕上以30fps做動(dòng)畫),隊(duì)列中可能只有兩塊已分配的緩沖區(qū),這樣可以有效地降低內(nèi)存消耗。在dumpsys SurfaceFlinger的輸出中,你可以看到每個(gè)layer所關(guān)聯(lián)緩沖區(qū)的匯總。
Canvas Rendering
Once upon a time, all rendering was done in software, and you can still do this today. The low-level implementation is provided by the Skia graphics library. If you want to draw a rectangle, you make a library call, and it sets bytes in a buffer appropriately. To ensure that a buffer isn't updated by two clients at once, or written to while being displayed, you have to lock the buffer to access it. lockCanvas() locks the buffer and returns a Canvas to use for drawing, and unlockCanvasAndPost() unlocks the buffer and sends it to the compositor.
曾經(jīng),所有的渲染工作都是由軟件完成的,今天你依然可以這么做。底層實(shí)現(xiàn)由Skia圖形庫提供。如果你想繪制一個(gè)矩形,你調(diào)用一個(gè)庫函數(shù),它就會(huì)在緩沖區(qū)中設(shè)置好相應(yīng)的字節(jié)。為了確保一塊buffer不會(huì)同時(shí)被兩個(gè)客戶端更新,也不會(huì)在顯示過程中被寫入,你必須先鎖定它才能訪問。lockCanvas()會(huì)鎖定緩沖區(qū)并返回一個(gè)用于繪制的Canvas,unlockCanvasAndPost()則解鎖緩沖區(qū)并把它發(fā)送給合成器。
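(譯者補(bǔ)充,非原文內(nèi)容)下面的示意代碼演示了這條軟件渲染路徑:在任意一塊Surface上lock出Canvas、繪制、再unlock并提交。

```java
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.Surface;

public class SoftwareDrawer {
    // 在一塊 Surface 上用 Canvas 畫一個(gè)矩形(軟件渲染)
    public static void drawRect(Surface surface) {
        Canvas canvas = surface.lockCanvas(null); // 鎖定緩沖區(qū),null 表示整個(gè)表面都是臟區(qū)域
        try {
            Paint paint = new Paint();
            paint.setColor(0xFF00FF00);
            canvas.drawRect(100, 100, 300, 200, paint);
        } finally {
            surface.unlockCanvasAndPost(canvas); // 解鎖并把緩沖區(qū)發(fā)送給合成器
        }
    }
}
```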
As time went on, and devices with general-purpose 3D engines appeared, Android reoriented itself around OpenGL ES. However, it was important to keep the old API working, for apps as well as app framework code, so an effort was made to hardware-accelerate the Canvas API. As you can see from the charts on the Hardware Acceleration page, this was a bit of a bumpy ride. Note in particular that while the Canvas provided to a View's onDraw() method may be hardware-accelerated, the Canvas obtained when an app locks a Surface directly with lockCanvas() never is.
隨著時(shí)間的推移,帶有通用3D加速引擎的設(shè)備出現(xiàn)了。Android圍繞OpenGL ES做了調(diào)整。然而,保證舊的API可以運(yùn)行同樣重要。因此我們努力使得Canvas的API支持硬件加速。如你在Hardware Acceleration頁面所能看到的圖表一樣,這是一段艱苦的旅程。特別要注意的是,當(dāng)一個(gè)Canvas提供到一個(gè)View的onDraw方法時(shí),它可能是硬件加速的;而你通過lockCanvas方法獲取到的Canvas則絕不可能是硬件加速的。
When you lock a Surface for Canvas access, the "CPU renderer" connects to the producer side of the BufferQueue and does not disconnect until the Surface is destroyed. Most other producers (like GLES) can be disconnected and reconnected to a Surface, but the Canvas-based "CPU renderer" cannot. This means you can't draw on a surface with GLES or send it frames from a video decoder if you've ever locked it for a Canvas.
當(dāng)你為了使用Canvas而鎖定一個(gè)Surface的時(shí)候,"CPU renderer"連接到了BufferQueue的生產(chǎn)者一端,直到Surface被銷毀才會(huì)斷開。大多數(shù)其他的生產(chǎn)者(比如GLES)可以斷開連接,并且重新連接到一個(gè)Surface之上;但是基于CPU渲染的Canvas不行。這意味著,一旦你為了使用一個(gè)Canvas而lock了一個(gè)Surface,你就不能使用GLES繪制這個(gè)Surface,你也不能將視頻解碼器生成的幀發(fā)送給它。
The first time the producer requests a buffer from a BufferQueue, it is allocated and initialized to zeroes. Initialization is necessary to avoid inadvertently sharing data between processes. When you re-use a buffer, however, the previous contents will still be present. If you repeatedly call lockCanvas() and unlockCanvasAndPost() without drawing anything, you'll cycle between previously-rendered frames.
生產(chǎn)者第一次從BufferQueue請(qǐng)求某塊buffer時(shí),這塊buffer會(huì)被分配并初始化為全零。初始化是必要的,以避免進(jìn)程之間不經(jīng)意地共享數(shù)據(jù)。但是,當(dāng)你重用一塊buffer時(shí),之前的內(nèi)容仍然會(huì)在那里。如果你反復(fù)調(diào)用lockCanvas()和unlockCanvasAndPost()而不繪制任何東西,你將會(huì)循環(huán)顯示之前渲染過的幀。
The Surface lock/unlock code keeps a reference to the previously-rendered buffer. If you specify a dirty region when locking the Surface, it will copy the non-dirty pixels from the previous buffer. There's a fair chance the buffer will be handled by SurfaceFlinger or HWC; but since we need to only read from it, there's no need to wait for exclusive access.
Surface的lock/unlock代碼保留著對(duì)上一次渲染的buffer的引用。如果你在lock時(shí)指定了臟區(qū)域,它會(huì)把前一塊緩沖區(qū)中非臟區(qū)域的像素拷貝過來。這塊buffer有相當(dāng)大的可能正被SurfaceFlinger或HWC處理;但因?yàn)槲覀冎恍枰獜闹凶x取,所以不必等待獨(dú)占訪問權(quán)。
The main non-Canvas way for an application to draw directly on a Surface is through OpenGL ES. That's described in the EGLSurface and OpenGL ES section.
除了Canvas之外,應(yīng)用直接在Surface上繪制的主要方式是OpenGL ES。我們將在EGLSurface and OpenGL ES一節(jié)中講到它。
SurfaceHolder
Some things that work with Surfaces want a SurfaceHolder, notably SurfaceView. The original idea was that Surface represented the raw compositor-managed buffer, while SurfaceHolder was managed by the app and kept track of higher-level information like the dimensions and format. The Java-language definition mirrors the underlying native implementation. It's arguably no longer useful to split it this way, but it has long been part of the public API.
一些和Surface打交道的東西需要一個(gè)SurfaceHolder,尤其是SurfaceView。最初的想法是:Surface代表由合成器管理的raw緩沖區(qū),而SurfaceHolder由app管理,負(fù)責(zé)跟蹤大小、格式這類更高層的信息。Java層的定義是底層native實(shí)現(xiàn)的鏡像??梢哉f,如今這樣拆分已經(jīng)沒什么用處了,但它早已是公開API的一部分。
Generally speaking, anything having to do with a View will involve a SurfaceHolder. Some other APIs, such as MediaCodec, will operate on the Surface itself. You can easily get the Surface from the SurfaceHolder, so hang on to the latter when you have it.
一般而言,凡是和View有關(guān)的東西都會(huì)牽涉到SurfaceHolder。其他一些API,比如MediaCodec,則直接操作Surface本身。從SurfaceHolder中獲取Surface很容易,所以拿到SurfaceHolder時(shí)先留著它。
APIs to get and set Surface parameters, such as the size and format, are implemented through SurfaceHolder.
獲取和設(shè)置Surface參數(shù)的一些API,比如大小和格式,都是通過SurfaceHolder實(shí)現(xiàn)。
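(譯者補(bǔ)充,非原文內(nèi)容)下面的示意代碼演示了SurfaceHolder的典型用法:注冊(cè)回調(diào)以得知Surface何時(shí)創(chuàng)建、變化和銷毀,之后才能安全地通過它讀寫Surface的參數(shù):

```java
import android.content.Context;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class MySurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    public MySurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this); // Surface 是異步創(chuàng)建的,必須注冊(cè)回調(diào)
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // 此時(shí)才可以安全地使用 holder.getSurface(),開始渲染
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Surface 的大小或格式發(fā)生了變化
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // Surface 即將銷毀,必須在返回前停止對(duì)它的一切渲染
    }
}
```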
EGLSurface and OpenGL ES
OpenGL ES defines an API for rendering graphics. It does not define a windowing system. To allow GLES to work on a variety of platforms, it is designed to be combined with a library that knows how to create and access windows through the operating system. The library used for Android is called EGL. If you want to draw textured polygons, you use GLES calls; if you want to put your rendering on the screen, you use EGL calls.
OpenGL ES定義了一套渲染圖形的API。它并沒有定義窗口系統(tǒng)。為了讓GLES可以工作在各種平臺(tái)上,它被設(shè)計(jì)為與一個(gè)庫配合使用,由這個(gè)庫負(fù)責(zé)通過操作系統(tǒng)創(chuàng)建和訪問窗口。Android上的這個(gè)庫叫做EGL。如果你想繪制帶紋理的多邊形,使用GLES調(diào)用;如果你想把渲染結(jié)果放上屏幕,則使用EGL調(diào)用。
Before you can do anything with GLES, you need to create a GL context. In EGL, this means creating an EGLContext and an EGLSurface. GLES operations apply to the current context, which is accessed through thread-local storage rather than passed around as an argument. This means you have to be careful about which thread your rendering code executes on, and which context is current on that thread.
在用GLES做任何事之前,你需要先創(chuàng)建一個(gè)GL上下文。具體到EGL,這意味著創(chuàng)建一個(gè)EGLContext和一個(gè)EGLSurface。GLES操作作用于當(dāng)前上下文,而當(dāng)前上下文是通過線程局部存儲(chǔ)訪問的,并不作為參數(shù)傳遞。這意味著你必須注意你的渲染代碼運(yùn)行在哪個(gè)線程上,以及該線程上的當(dāng)前上下文是哪個(gè)。
The EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer") or a window allocated by the operating system. EGL window surfaces are created with the eglCreateWindowSurface() call. It takes a "window object" as an argument, which on Android can be a SurfaceView, a SurfaceTexture, a SurfaceHolder, or a Surface -- all of which have a BufferQueue underneath. When you make this call, EGL creates a new EGLSurface object, and connects it to the producer interface of the window object's BufferQueue. From that point onward, rendering to that EGLSurface results in a buffer being dequeued, rendered into, and queued for use by the consumer. (The term "window" is indicative of the expected use, but bear in mind the output might not be destined to appear on the display.)
EGLSurface可以是一塊由EGL分配的離屏緩沖區(qū)(稱為"pbuffer"),也可以是一個(gè)由操作系統(tǒng)分配的窗口。EGL window surface由eglCreateWindowSurface()函數(shù)創(chuàng)建。它接受一個(gè)"窗口對(duì)象"作為參數(shù),在Android上,這個(gè)對(duì)象可以是SurfaceView、SurfaceTexture、SurfaceHolder或者Surface,它們下面都有一個(gè)BufferQueue。當(dāng)你調(diào)用這個(gè)函數(shù)時(shí),EGL會(huì)創(chuàng)建一個(gè)新的EGLSurface對(duì)象,并把它連接到窗口對(duì)象的BufferQueue的生產(chǎn)者接口上。從這一刻開始,向這個(gè)EGLSurface渲染,就會(huì)使一塊buffer依次經(jīng)歷出隊(duì)、被渲染、入隊(duì)供消費(fèi)者使用的過程。("window"一詞表明了預(yù)期的用途,但要記住,輸出未必會(huì)出現(xiàn)在屏幕上。)
EGL does not provide lock/unlock calls. Instead, you issue drawing commands and then call eglSwapBuffers()to submit the current frame. The method name comes from the traditional swap of front and back buffers, but the actual implementation may be very different.
EGL并沒有提供lock/unlock的調(diào)用。你需要調(diào)用繪制命令,然后調(diào)用eglSwapBuffers()函數(shù)去提交當(dāng)前的幀。這個(gè)方法名字的來源是傳統(tǒng)的交換前后緩沖區(qū),但是目前實(shí)際的實(shí)現(xiàn)可能會(huì)有很大的不同。
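(譯者補(bǔ)充,非原文內(nèi)容)下面用Java層的EGL14寫一個(gè)最小化的示意:創(chuàng)建上下文、把EGLSurface連到一個(gè)Surface的生產(chǎn)者端、繪制并用eglSwapBuffers()提交一幀。錯(cuò)誤處理從略。

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.view.Surface;

public class EglHelper {
    public static void renderOnce(Surface windowSurface) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        // 選擇一個(gè)支持 GLES 2.0 的配置
        int[] attribs = { EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                          EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0);

        int[] ctxAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(
                display, configs[0], EGL14.EGL_NO_CONTEXT, ctxAttribs, 0);

        // 把 EGLSurface 連接到 windowSurface 底層 BufferQueue 的生產(chǎn)者接口
        int[] surfAttribs = { EGL14.EGL_NONE };
        EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
                display, configs[0], windowSurface, surfAttribs, 0);

        EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);
        // ……在這里執(zhí)行 GLES 繪制命令……
        EGL14.eglSwapBuffers(display, eglSurface); // 提交當(dāng)前幀(入隊(duì))

        EGL14.eglDestroySurface(display, eglSurface); // 斷開與 BufferQueue 的連接
        EGL14.eglDestroyContext(display, context);
        EGL14.eglTerminate(display);
    }
}
```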
Only one EGLSurface can be associated with a Surface at a time -- you can have only one producer connected to a BufferQueue -- but if you destroy the EGLSurface it will disconnect from the BufferQueue and allow something else to connect.
一個(gè)EGLSurface一次只能關(guān)連一個(gè)Surface---一次只能有一個(gè)生產(chǎn)者連接到一個(gè)BufferQueue上—但是你可以銷毀這個(gè)EGLSurface,使得它和BufferQueue的連接斷開,這樣就可以用其他的東西連接這個(gè)BufferQueue了。
A given thread can switch between multiple EGLSurfaces by changing what's "current." An EGLSurface must be current on only one thread at a time.
一個(gè)給定的線程可以通過設(shè)置哪個(gè)是Current的方法來在不同的EGLSurfaces間切換,一個(gè)線程同時(shí)只能有一個(gè)EGLSurface作為current。
The most common mistake when thinking about EGLSurface is assuming that it is just another aspect of Surface (like SurfaceHolder). It's a related but independent concept. You can draw on an EGLSurface that isn't backed by a Surface, and you can use a Surface without EGL. EGLSurface just gives GLES a place to draw.
一個(gè)常見的誤解是把EGLSurface當(dāng)做Surface的另一種表現(xiàn)形式(就像SurfaceHolder那樣)。二者相關(guān),但是是相互獨(dú)立的概念。你可以在一個(gè)沒有Surface支持的EGLSurface上繪制,也可以不通過EGL使用一個(gè)Surface。EGLSurface只不過是給GLES提供了一個(gè)繪制的地方而已。
ANativeWindow
The public Surface class is implemented in the Java programming language. The equivalent in C/C++ is the ANativeWindow class, semi-exposed by the Android NDK. You can get the ANativeWindow from a Surface with the ANativeWindow_fromSurface() call. Just like its Java-language cousin, you can lock it, render in software, and unlock-and-post.
公開的Surface類是用Java實(shí)現(xiàn)的。C/C++層與之等價(jià)的是ANativeWindow類,它在Android NDK中半公開地暴露出來。你可以通過ANativeWindow_fromSurface()從一個(gè)Surface得到ANativeWindow。就像它的Java表親一樣,你可以lock它、用軟件渲染,然后unlock-and-post。
To create an EGL window surface from native code, you pass an instance of EGLNativeWindowType to eglCreateWindowSurface(). EGLNativeWindowType is just a synonym for ANativeWindow, so you can freely cast one to the other.
為了從本地代碼中創(chuàng)建一個(gè)EGL window surface,你需要給eglCreateWindowSurface()方法傳遞一個(gè)EGLNativeWindowType實(shí)例。EGLNativeWindowType等同于ANativeWindow,所以你可以在二者之間自由的轉(zhuǎn)換。
The fact that the basic "native window" type just wraps the producer side of a BufferQueue should not come as a surprise.
事實(shí)上,native window的本質(zhì)不過是BufferQueue在生產(chǎn)者一側(cè)的包裝罷了。
SurfaceView and GLSurfaceView
Now that we've explored the lower-level components, it's time to see how they fit into the higher-level components that apps are built from.
現(xiàn)在我們已經(jīng)研究了底層的一些組件,是時(shí)候來看下更高層次上組件是如何工作的了。
The Android app framework UI is based on a hierarchy of objects that start with View. Most of the details don't matter for this discussion, but it's helpful to understand that UI elements go through a complicated measurement and layout process that fits them into a rectangular area. All visible View objects are rendered to a SurfaceFlinger-created Surface that was set up by the WindowManager when the app was brought to the foreground. The layout and rendering is performed on the app's UI thread.
Android app framework的UI基于一個(gè)以View為起點(diǎn)的對(duì)象層次結(jié)構(gòu)。其中大多數(shù)細(xì)節(jié)對(duì)我們的討論來說無關(guān)緊要,但有一點(diǎn)值得了解:UI元素會(huì)經(jīng)歷一個(gè)復(fù)雜的測(cè)量和布局過程,最終被安放進(jìn)一個(gè)矩形區(qū)域。所有可見的View對(duì)象都被渲染到一個(gè)由SurfaceFlinger創(chuàng)建的Surface上,這個(gè)Surface是app轉(zhuǎn)到前臺(tái)時(shí)由WindowManager設(shè)置好的。布局和渲染都在app的UI線程上執(zhí)行。
Regardless of how many Layouts and Views you have, everything gets rendered into a single buffer. This is true whether or not the Views are hardware-accelerated.
無論你有多少個(gè)Layout和View,所有內(nèi)容都會(huì)被渲染進(jìn)同一塊緩沖區(qū)。無論View是否啟用硬件加速,都是如此。
A SurfaceView takes the same sorts of parameters as other views, so you can give it a position and size, and fit other elements around it. When it comes time to render, however, the contents are completely transparent. The View part of a SurfaceView is just a see-through placeholder.
一個(gè)SurfaceView有跟其他view一樣的一些參數(shù),所以你可以設(shè)置它的位置和大小等等。當(dāng)它被渲染時(shí),我們可以認(rèn)為他的所有內(nèi)容都是透明的。SurfaceView的視圖部分只不過是一個(gè)透明的占位區(qū)域。
When the SurfaceView's View component is about to become visible, the framework asks the WindowManager to ask SurfaceFlinger to create a new Surface. (This doesn't happen synchronously, which is why you should provide a callback that notifies you when the Surface creation finishes.) By default, the new Surface is placed behind the app UI Surface, but the default "Z-ordering" can be overridden to put the Surface on top.
當(dāng)SurfaceView的組件即將變?yōu)榭梢姇r(shí),F(xiàn)ramework層要求WindowManager請(qǐng)求Surfaceflinger創(chuàng)建一個(gè)新的Surface(這個(gè)過程是異步發(fā)生的,這就是為什么你應(yīng)該提供一個(gè)回調(diào)函數(shù),這樣當(dāng)Surface創(chuàng)建完成時(shí)你才能得到通知)。缺省情況下,新創(chuàng)建的Surface在app UI Surface的下面,但是Z軸順序可能將這個(gè)Surface放在上面。
Whatever you render onto this Surface will be composited by SurfaceFlinger, not by the app. This is the real power of SurfaceView: the Surface you get can be rendered by a separate thread or a separate process, isolated from any rendering performed by the app UI, and the buffers go directly to SurfaceFlinger. You can't totally ignore the UI thread -- you still have to coordinate with the Activity lifecycle, and you may need to adjust something if the size or position of the View changes -- but you have a whole Surface all to yourself, and blending with the app UI and other layers is handled by the Hardware Composer.
渲染在這個(gè)Surface(SurfaceView的surface)上的內(nèi)容將由Surfaceflinger來混合,而不是由app。這才是SurfaceView的真正作用:這個(gè)surface可以被一個(gè)獨(dú)立的線程或者進(jìn)程來渲染,和app UI上其他的渲染工作分開,這些緩沖區(qū)數(shù)據(jù)將直接傳遞給Surfaceflinger。當(dāng)然你不能完全忽略UI線程---你依然要和activity的生命周期保持一致,并且一旦view的大小或者位置發(fā)生了改變,你可能也需要做些調(diào)整—但是你擁有了一個(gè)完整的Surface,并且這個(gè)Surface和app UI以及其他layer的混合工作將由HWC來完成。
It's worth taking a moment to note that this new Surface is the producer side of a BufferQueue whose consumer is a SurfaceFlinger layer. You can update the Surface with any mechanism that can feed a BufferQueue. You can: use the Surface-supplied Canvas functions, attach an EGLSurface and draw on it with GLES, and configure a MediaCodec video decoder to write to it.
值得一提的是,新創(chuàng)建的surface實(shí)際上是生產(chǎn)者端,而消費(fèi)者端則是一個(gè)Surfaceflinger的layer。你可以通過任何可以填充BufferQueue的途徑來更新這個(gè)surface。你可以:使用Surface提供的Canvas相關(guān)的函數(shù),附加一個(gè)EGLSurface然后使用GLES在上面繪制,配置一個(gè)MediaCodec 視頻解碼器直接在上面寫數(shù)據(jù)。
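(譯者補(bǔ)充,非原文內(nèi)容)以上面第三種途徑為例,下面的示意代碼把一個(gè)MediaCodec解碼器的輸出直接配置到SurfaceView的Surface上,解出的幀不經(jīng)過app UI,直接送往SurfaceFlinger:

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.SurfaceView;

public class DecoderToSurface {
    public static MediaCodec startDecoder(SurfaceView view) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
        MediaCodec decoder = MediaCodec.createDecoderByType(
                MediaFormat.MIMETYPE_VIDEO_AVC);
        // 第二個(gè)參數(shù)就是 BufferQueue 的生產(chǎn)者端:SurfaceView 的 Surface
        decoder.configure(format, view.getHolder().getSurface(), null, 0);
        decoder.start();
        return decoder; // 之后向 decoder 喂入碼流,輸出幀會(huì)直接送顯
    }
}
```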
Composition and the Hardware Scaler
Now that we have a bit more context, it's useful to go back and look at a couple of fields from dumpsys SurfaceFlinger that we skipped over earlier on. Back in the Hardware Composer discussion, we looked at some output like this:
我們現(xiàn)在有了更多的上下文知識(shí),所以讓我們回去看看前面講到dumpsys SurfaceFlinger時(shí)我們忽略的幾個(gè)字段。我們來看看如下的幾個(gè)輸出:
type | source crop | frame name
------------+-----------------------------------+--------------------------------
HWC | [ 0.0, 0.0, 320.0, 240.0] | [ 48, 411, 1032, 1149] SurfaceView
HWC | [ 0.0, 75.0, 1080.0, 1776.0] | [ 0, 75, 1080, 1776] com.android.grafika/com.android.grafika.PlayMovieSurfaceActivity
HWC | [ 0.0, 0.0, 1080.0, 75.0] | [ 0, 0, 1080, 75] StatusBar
HWC | [ 0.0, 0.0, 1080.0, 144.0] | [ 0, 1776, 1080, 1920] NavigationBar
FB TARGET | [ 0.0, 0.0, 1080.0, 1920.0] | [ 0, 0, 1080, 1920] HWC_FRAMEBUFFER_TARGET
This was taken while playing a movie in Grafika's "Play video (SurfaceView)" activity, on a Nexus 5 in portrait orientation. Note that the list is ordered from back to front: the SurfaceView's Surface is in the back, the app UI layer sits on top of that, followed by the status and navigation bars that are above everything else. The video is QVGA (320x240).
這是在Nexus 5豎屏模式下,運(yùn)行Grafika的"Play video (SurfaceView)" activity播放視頻時(shí)抓取的。注意列表是按從后到前的順序排列的:SurfaceView的Surface在最后面,app UI layer在它上面,再往上是壓在所有內(nèi)容之上的狀態(tài)欄和導(dǎo)航欄。視頻是QVGA(320x240)的。
The "source crop" indicates the portion of the Surface's buffer that SurfaceFlinger is going to display. The app UI was given a Surface equal to the full size of the display (1080x1920), but there's no point rendering and compositing pixels that will be obscured by the status and navigation bars, so the source is cropped to a rectangle that starts 75 pixels from the top, and ends 144 pixels from the bottom. The status and navigation bars have smaller Surfaces, and the source crop describes a rectangle that begins at the the top left (0,0) and spans their content.
"source crop"指示了Surface的buffer中要被SurfaceFlinger顯示的部分。App UI的surface大小是整個(gè)顯示的大小(1080*1920),但是由于需要顯示狀態(tài)欄和導(dǎo)航欄,因此從上面裁剪了75個(gè)像素,從下面裁剪了144個(gè)像素。
The "frame" is the rectangle where the pixels end up on the display. For the app UI layer, the frame matches the source crop, because we're copying (or overlaying) a portion of a display-sized layer to the same location in another display-sized layer. For the status and navigation bars, the size of the frame rectangle is the same, but the position is adjusted so that the navigation bar appears at the bottom of the screen.
"frame"一欄是指像素最終落在屏幕上的矩形位置。對(duì)app UI layer來說,frame和source crop一致,因?yàn)槲覀冎皇前岩粔K屏幕大小的layer的一部分,拷貝(或overlay)到另一塊屏幕大小layer的相同位置。對(duì)狀態(tài)欄和導(dǎo)航欄來說,frame矩形的大小不變,但位置做了調(diào)整,使導(dǎo)航欄出現(xiàn)在屏幕底部。
Now consider the layer labeled "SurfaceView", which holds our video content. The source crop matches the video size, which SurfaceFlinger knows because the MediaCodec decoder (the buffer producer) is dequeuing buffers that size. The frame rectangle has a completely different size -- 984x738.
現(xiàn)在來看標(biāo)記為"SurfaceView"的那個(gè)layer,它里面是我們的視頻內(nèi)容。source crop與視頻大小一致,SurfaceFlinger之所以知道這一點(diǎn),是因?yàn)镸ediaCodec解碼器(即buffer的生產(chǎn)者)出隊(duì)的緩沖區(qū)就是這個(gè)大小。而frame矩形是一個(gè)完全不同的大小:984x738。
SurfaceFlinger handles size differences by scaling the buffer contents to fill the frame rectangle, upscaling or downscaling as needed. This particular size was chosen because it has the same aspect ratio as the video (4:3), and is as wide as possible given the constraints of the View layout (which includes some padding at the edges of the screen for aesthetic reasons).
SurfaceFlinger通過縮放緩沖區(qū)內(nèi)容來填滿frame矩形,按需放大或縮小,以此處理大小不一致的情況。之所以選擇984x738這個(gè)大小,是因?yàn)樗鸵曨l有相同的寬高比(4:3),并且在View布局的約束下盡可能寬(出于美觀考慮,屏幕邊緣還留了一些padding)。
If you started playing a different video on the same Surface, the underlying BufferQueue would reallocate buffers to the new size automatically, and SurfaceFlinger would adjust the source crop. If the aspect ratio of the new video is different, the app would need to force a re-layout of the View to match it, which causes the WindowManager to tell SurfaceFlinger to update the frame rectangle.
如果你在同一個(gè)Surface上播放另一個(gè)視頻,底層的BufferQueue會(huì)自動(dòng)按新的大小重新分配緩沖區(qū),SurfaceFlinger也會(huì)調(diào)整source crop。如果新視頻的寬高比不同,app需要強(qiáng)制View重新布局來匹配它,這會(huì)使WindowManager通知SurfaceFlinger更新frame矩形。
If you're rendering on the Surface through some other means, perhaps GLES, you can set the Surface size using the SurfaceHolder#setFixedSize() call. You could, for example, configure a game to always render at 1280x720, which would significantly reduce the number of pixels that must be touched to fill the screen on a 2560x1440 tablet or 4K television. The display processor handles the scaling. If you don't want to letter- or pillar-box your game, you could adjust the game's aspect ratio by setting the size so that the narrow dimension is 720 pixels, but the long dimension is set to maintain the aspect ratio of the physical display (e.g. 1152x720 to match a 2560x1600 display). You can see an example of this approach in Grafika's "Hardware scaler exerciser" activity.
如果你是通過其他方式(比如GLES)在Surface上渲染,可以調(diào)用SurfaceHolder#setFixedSize()來設(shè)置Surface的大小。比如,你可以讓一個(gè)游戲始終以1280x720渲染,這樣在2560x1440的平板或4K電視上,需要填充的像素?cái)?shù)量會(huì)顯著減少,縮放則由顯示處理器來完成。如果你不想讓游戲畫面出現(xiàn)上下黑邊(letterbox)或左右黑邊(pillarbox),可以調(diào)整游戲的寬高比:把窄邊設(shè)為720像素,長(zhǎng)邊則按物理屏幕的寬高比來設(shè)置(例如用1152x720去匹配2560x1600的屏幕)。在Grafika的"Hardware scaler exerciser" activity中可以看到這種方法的示例。
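(譯者補(bǔ)充,非原文內(nèi)容)對(duì)應(yīng)的調(diào)用非常簡(jiǎn)單,示意如下:

```java
import android.view.SurfaceView;

public class ScalerConfig {
    // 讓游戲始終按 1280x720 渲染,由顯示處理器放大到實(shí)際屏幕大小
    public static void useFixedSize(SurfaceView view) {
        view.getHolder().setFixedSize(1280, 720);
    }
}
```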
GLSurfaceView
The GLSurfaceView class provides some helper classes that help manage EGL contexts, inter-thread communication, and interaction with the Activity lifecycle. That's it. You do not need to use a GLSurfaceView to use GLES.
GLSurfaceView類提供了一些輔助類,幫助管理EGL上下文、線程間通信以及與Activity生命周期的交互。僅此而已。你并不需要GLSurfaceView才能使用GLES。
For example, GLSurfaceView creates a thread for rendering and configures an EGL context there. The state is cleaned up automatically when the activity pauses. Most apps won't need to know anything about EGL to use GLES with GLSurfaceView.
舉例來說,GLSurfaceView會(huì)創(chuàng)建一個(gè)渲染線程,并在該線程上配置EGL上下文。當(dāng)activity pause時(shí),相關(guān)狀態(tài)會(huì)被自動(dòng)清理。大多數(shù)應(yīng)用配合GLSurfaceView使用GLES時(shí),不需要了解任何EGL相關(guān)的知識(shí)。
In most cases, GLSurfaceView is very helpful and can make working with GLES easier. In some situations, it can get in the way. Use it if it helps, don't if it doesn't.
在大多數(shù)情況下,GLSurfaceView對(duì)處理GLES來說很有幫助。但是在一些情況下,它可能是一種阻礙。僅僅在你需要的時(shí)候使用它。
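(譯者補(bǔ)充,非原文內(nèi)容)GLSurfaceView的最小用法示意如下:設(shè)置GLES版本、提供一個(gè)Renderer,渲染線程與EGL上下文都由它代管;注意在onResume/onPause中轉(zhuǎn)發(fā)生命周期。

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GlDemoActivity extends Activity {
    private GLSurfaceView glView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        glView = new GLSurfaceView(this);
        glView.setEGLContextClientVersion(2); // 使用 GLES 2.0
        glView.setRenderer(new GLSurfaceView.Renderer() {
            @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {}
            @Override public void onSurfaceChanged(GL10 gl, int w, int h) {
                GLES20.glViewport(0, 0, w, h);
            }
            @Override public void onDrawFrame(GL10 gl) {
                // 每幀在 GLSurfaceView 自己的渲染線程上被調(diào)用
                GLES20.glClearColor(0f, 0f, 0f, 1f);
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            }
        });
        setContentView(glView);
    }

    @Override protected void onResume() { super.onResume(); glView.onResume(); }
    @Override protected void onPause()  { glView.onPause(); super.onPause(); }
}
```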
SurfaceTexture
The SurfaceTexture class is a relative newcomer, added in Android 3.0 ("Honeycomb"). Just as SurfaceView is the combination of a Surface and a View, SurfaceTexture is the combination of a Surface and a GLES texture. Sort of.
SurfaceTexture類是從Android 3.0開始引入的。就像是SurfaceView是一個(gè)Surface和view的組合一樣,SurfaceTexture某種程度上是一個(gè)Surface和GLES材質(zhì)的組合。
When you create a SurfaceTexture, you are creating a BufferQueue for which your app is the consumer. When a new buffer is queued by the producer, your app is notified via callback (onFrameAvailable()). Your app calls updateTexImage(), which releases the previously-held buffer, acquires the new buffer from the queue, and makes some EGL calls to make the buffer available to GLES as an "external" texture.
當(dāng)你創(chuàng)建一個(gè)SurfaceTexture時(shí),你就創(chuàng)建了一個(gè)BufferQueue,而你的app是它的消費(fèi)者。當(dāng)生產(chǎn)者入隊(duì)一塊新的buffer時(shí),你的app會(huì)通過回調(diào)(onFrameAvailable())得到通知。你的app調(diào)用updateTexImage(),它會(huì)釋放先前持有的buffer,從隊(duì)列中acquire新的buffer,并執(zhí)行一些EGL調(diào)用,使這塊buffer可以作為一個(gè)"external"材質(zhì)被GLES使用。
External textures (GL_TEXTURE_EXTERNAL_OES) are not quite the same as textures created by GLES (GL_TEXTURE_2D). You have to configure your renderer a bit differently, and there are things you can't do with them. But the key point is this: You can render textured polygons directly from the data received by your BufferQueue.
External材質(zhì)(GL_TEXTURE_EXTERNAL_OES)和GLES自己創(chuàng)建的材質(zhì)(GL_TEXTURE_2D)不完全一樣:你需要以稍有不同的方式配置渲染器,而且有些操作不能對(duì)它做。但關(guān)鍵在于:你可以直接用從BufferQueue接收到的數(shù)據(jù)來渲染帶紋理的多邊形。
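(譯者補(bǔ)充,非原文內(nèi)容)下面的示意代碼展示了app作為消費(fèi)者的這條路徑:創(chuàng)建external材質(zhì)、用它構(gòu)造SurfaceTexture、在回調(diào)里做標(biāo)記,并在GL線程上調(diào)用updateTexImage()。假設(shè)init()和drawIfReady()都運(yùn)行在持有EGL上下文的GL線程上。

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

public class ExternalTextureConsumer
        implements SurfaceTexture.OnFrameAvailableListener {
    private SurfaceTexture surfaceTexture;
    private int textureId;
    private volatile boolean frameAvailable;

    public void init() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        textureId = tex[0];
        // 注意材質(zhì)目標(biāo)是 GL_TEXTURE_EXTERNAL_OES 而不是 GL_TEXTURE_2D
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);

        surfaceTexture = new SurfaceTexture(textureId);
        surfaceTexture.setOnFrameAvailableListener(this);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        frameAvailable = true; // 回調(diào)可能發(fā)生在任意線程,這里只做標(biāo)記
    }

    // 在 GL 線程的繪制循環(huán)中調(diào)用
    public void drawIfReady() {
        if (frameAvailable) {
            frameAvailable = false;
            surfaceTexture.updateTexImage(); // 釋放舊緩沖區(qū),acquire 新緩沖區(qū)
            // ……用這個(gè) external 材質(zhì)渲染帶紋理的多邊形……
        }
    }
}
```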
You may be wondering how we can guarantee the format of the data in the buffer is something GLES can recognize -- gralloc supports a wide variety of formats. When SurfaceTexture created the BufferQueue, it set the consumer's usage flags to GRALLOC_USAGE_HW_TEXTURE, ensuring that any buffer created by gralloc would be usable by GLES.
你可以好奇,我們?nèi)绾伪WCbuffer中的數(shù)據(jù)格式可以被GLES正確讀取---要知道gralloc可是支持各種各樣的格式。當(dāng)SurfaceTexture創(chuàng)建BufferQueue時(shí),它設(shè)置消費(fèi)者的usage flags是GRALLOC_USAGE_HW_TEXTURE,這保證了任何由gralloc創(chuàng)建的buffer都是可以被GLES使用的。
Because SurfaceTexture interacts with an EGL context, you have to be careful to call its methods from the correct thread. This is spelled out in the class documentation.
因?yàn)镾urfaceTexture要和EGL上下文交互,因此務(wù)必保證調(diào)用的方法來自正確的線程,這個(gè)在類的說明文檔中已經(jīng)指出(SurfaceTexture objects may be created on any thread. updateTexImage() may only be called on the thread with the OpenGL ES context that contains the texture object. The frame-available callback is called on an arbitrary thread, so unless special care is taken updateTexImage() should not be called directly from the callback.)。
If you look deeper into the class documentation, you will see a couple of odd calls. One retrieves a timestamp, the other a transformation matrix, the value of each having been set by the previous call to updateTexImage(). It turns out that BufferQueue passes more than just a buffer handle to the consumer. Each buffer is accompanied by a timestamp and transformation parameters.
如果你更深入地研究這個(gè)類的文檔,你會(huì)發(fā)現(xiàn)兩個(gè)看起來有些古怪的函數(shù):一個(gè)用于獲取時(shí)間戳,另一個(gè)用于獲取變換矩陣,它們的值都是由上一次updateTexImage()調(diào)用設(shè)置的。原來,BufferQueue傳遞給消費(fèi)者的不只是一個(gè)buffer句柄:每個(gè)buffer都附帶一個(gè)時(shí)間戳和變換參數(shù)。
The transformation is provided for efficiency. In some cases, the source data might be in the "wrong" orientation for the consumer; but instead of rotating the data before sending it, we can send the data in its current orientation with a transform that corrects it. The transformation matrix can be merged with other transformations at the point the data is used, minimizing overhead.
提供變換參數(shù)是為了效率。某些情況下,源數(shù)據(jù)的方向?qū)οM(fèi)者來說可能是“錯(cuò)的”;但與其在發(fā)送前先旋轉(zhuǎn)數(shù)據(jù),不如按當(dāng)前方向直接發(fā)送,并附帶一個(gè)用于糾正方向的變換。在數(shù)據(jù)被真正使用的那一刻,這個(gè)變換矩陣還可以和其他變換合并,從而把開銷降到最低。
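In GLES terms, "merging at the point the data is used" usually means feeding the matrix to the shader as a uniform, as in this sketch (the class name is invented; the uniform handle is assumed to have been looked up elsewhere):
對(duì)GLES而言,“在使用數(shù)據(jù)時(shí)合并變換”通常就是把矩陣作為uniform傳給著色器,如下面的示意(類名為虛構(gòu);uniform句柄假定已在別處查到):

    import android.graphics.SurfaceTexture;
    import android.opengl.GLES20;

    public class TransformHelper {
        private final float[] mTexMatrix = new float[16];

        // Acquire the newest buffer and push its transform straight into the
        // shader, so the correction merges with our own texture-coordinate math.
        public void updateAndSetUniform(SurfaceTexture st, int uTexMatrixLoc) {
            st.updateTexImage();
            st.getTransformMatrix(mTexMatrix);  // valid for the buffer just acquired
            GLES20.glUniformMatrix4fv(uTexMatrixLoc, 1, false, mTexMatrix, 0);
            // Vertex shader side: vTexCoord = (uTexMatrix * vec4(aTexCoord, 0.0, 1.0)).xy;
        }
    }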
The timestamp is useful for certain buffer sources. For example, suppose you connect the producer interface to the output of the camera (with setPreviewTexture()). If you want to create a video, you need to set the presentation time stamp for each frame; but you want to base that on the time when the frame was captured, not the time when the buffer was received by your app. The timestamp provided with the buffer is set by the camera code, resulting in a more consistent series of timestamps.
時(shí)間戳對(duì)某些buffer來源很有用。舉例來說,假設(shè)你把生產(chǎn)者接口連接到camera的輸出上(通過setPreviewTexture())。如果你想生成一段視頻,就需要為每一幀設(shè)置presentation time stamp;而這個(gè)時(shí)間應(yīng)該基于幀被采集的時(shí)刻,而不是app收到buffer的時(shí)刻。buffer附帶的時(shí)間戳是由camera的代碼設(shè)置的,因此得到的時(shí)間戳序列會(huì)更加一致。
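For instance, when the GLES output feeds a video encoder's input surface, the buffer's timestamp can be forwarded as the frame's presentation time. A sketch (class name invented), assuming API 18's EGLExt is available:
例如,當(dāng)GLES的輸出接到視頻編碼器的輸入surface時(shí),可以把buffer附帶的時(shí)間戳轉(zhuǎn)發(fā)為該幀的presentation time。下面是一個(gè)示意(類名為虛構(gòu)),假設(shè)可以使用API 18的EGLExt:

    import android.graphics.SurfaceTexture;
    import android.opengl.EGL14;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLExt;
    import android.opengl.EGLSurface;

    public class EncoderTimestamps {
        public static void swapWithTimestamp(EGLDisplay display, EGLSurface encoderSurface,
                                             SurfaceTexture st) {
            long ns = st.getTimestamp();   // capture time, not "time we got the buffer"
            EGLExt.eglPresentationTimeANDROID(display, encoderSurface, ns);
            EGL14.eglSwapBuffers(display, encoderSurface);
        }
    }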
SurfaceTexture and Surface
If you look closely at the API you'll see the only way for an application to create a plain Surface is through a constructor that takes a SurfaceTexture as the sole argument. (Prior to API 11, there was no public constructor for Surface at all.) This might seem a bit backward if you view SurfaceTexture as a combination of a Surface and a texture.
如果你仔細(xì)看API,你會(huì)發(fā)現(xiàn)應(yīng)用程序創(chuàng)建一個(gè)普通Surface的唯一方式,是通過一個(gè)以SurfaceTexture作為唯一參數(shù)的構(gòu)造函數(shù)(在API 11之前,Surface根本沒有公有的構(gòu)造函數(shù))。如果你把SurfaceTexture理解為“Surface加上texture的組合”,這看起來就有點(diǎn)本末倒置了。
Under the hood, SurfaceTexture is called GLConsumer, which more accurately reflects its role as the owner and consumer of a BufferQueue. When you create a Surface from a SurfaceTexture, what you're doing is creating an object that represents the producer side of the SurfaceTexture's BufferQueue.
在底層,SurfaceTexture被稱作GLConsumer,這個(gè)名字更準(zhǔn)確地反映了它作為BufferQueue的持有者和消費(fèi)者的角色。當(dāng)你用SurfaceTexture創(chuàng)建一個(gè)Surface時(shí),你實(shí)際上創(chuàng)建的,是一個(gè)代表該SurfaceTexture所持有的BufferQueue的生產(chǎn)者一側(cè)的對(duì)象。
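The producer side is tiny in code. A sketch (class name invented): wrap the SurfaceTexture in a Surface, or hand it to the camera directly with setPreviewTexture().
生產(chǎn)者一側(cè)在代碼上非常簡(jiǎn)單。下面是一個(gè)示意(類名為虛構(gòu)):既可以用SurfaceTexture構(gòu)造一個(gè)Surface,也可以通過setPreviewTexture()把它直接交給camera。

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.view.Surface;

    public class ProducerSide {
        public static Surface producerFor(SurfaceTexture st) {
            return new Surface(st);          // the producer end of st's BufferQueue
        }

        public static void connectCamera(Camera camera, SurfaceTexture st)
                throws java.io.IOException {
            camera.setPreviewTexture(st);    // the camera becomes the producer
            camera.startPreview();
        }
    }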
Figure 2. Grafika's continuous capture activity(圖2:Grafika的連續(xù)采集activity)
In the diagram above, the arrows show the propagation of the data from the camera. BufferQueues are in color (purple producer, cyan consumer). Note “Camera” actually lives in the mediaserver process.
Encoded H.264 video goes to a circular buffer in RAM in the app process, and is written to an MP4 file on disk using the MediaMuxer class when the “capture” button is hit.
如上圖所示,箭頭表示數(shù)據(jù)從camera開始的流向,帶顏色的是BufferQueue(紫色為生產(chǎn)者,青色為消費(fèi)者)。注意Camera實(shí)際上運(yùn)行在mediaserver進(jìn)程中。編碼后的H.264視頻被存入app進(jìn)程內(nèi)存里的一個(gè)環(huán)形緩沖區(qū),當(dāng)“capture”按鈕被按下時(shí),再通過MediaMuxer類寫入磁盤上的MP4文件。
All three of the BufferQueues are handled with a single EGL context in the app, and the GLES operations are performed on the UI thread. Doing the SurfaceView rendering on the UI thread is generally discouraged, but since we're doing simple operations that are handled asynchronously by the GLES driver we should be fine. (If the video encoder locks up and we block trying to dequeue a buffer, the app will become unresponsive. But at that point, we're probably failing anyway.) The handling of the encoded data -- managing the circular buffer and writing it to disk -- is performed on a separate thread.
(上圖中)三個(gè)BufferQueue都由app中的同一個(gè)EGL上下文處理,GLES操作是在UI線程上執(zhí)行的。一般來說不推薦在UI線程上做SurfaceView的渲染,但鑒于我們只做一些由GLES驅(qū)動(dòng)異步處理的簡(jiǎn)單操作,應(yīng)該問題不大(如果視頻編碼器卡死,我們?cè)噲Ddequeue buffer時(shí)就會(huì)被阻塞,app會(huì)變得無響應(yīng)。不過到了那個(gè)地步,反正整件事多半已經(jīng)失敗了)。對(duì)已編碼數(shù)據(jù)的處理---管理環(huán)形緩沖區(qū)并把數(shù)據(jù)寫到磁盤---則是在一個(gè)單獨(dú)的線程上進(jìn)行的。
The bulk of the configuration happens in the SurfaceView's surfaceCreated() callback. The EGLContext is created, and EGLSurfaces are created for the display and for the video encoder. When a new frame arrives, we tell SurfaceTexture to acquire it and make it available as a GLES texture, then render it with GLES commands on each EGLSurface (forwarding the transform and timestamp from SurfaceTexture). The encoder thread pulls the encoded output from MediaCodec and stashes it in memory.
大部分配置工作發(fā)生在SurfaceView的surfaceCreated()回調(diào)中:EGLContext被創(chuàng)建,用于顯示和視頻編碼器的EGLSurface也分別被創(chuàng)建。當(dāng)新的一幀到來時(shí),我們讓SurfaceTexture去acquire它并使之成為GLES紋理,然后用GLES命令在每個(gè)EGLSurface上渲染(并把來自SurfaceTexture的transform和timestamp一并轉(zhuǎn)發(fā)過去)。編碼器線程從MediaCodec取出編碼后的輸出,并把它暫存在內(nèi)存里。
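A rough sketch of that surfaceCreated() setup, modeled on my reading of Grafika's continuous capture; EglCore and WindowSurface are Grafika's helper classes (wrapping the EGLContext and eglCreateWindowSurface()), and the remaining names here are invented.
下面按照我對(duì)Grafika連續(xù)采集示例的理解,粗略示意surfaceCreated()中的配置過程;EglCore和WindowSurface是Grafika中的輔助類(封裝了EGLContext和eglCreateWindowSurface()),其余名字為這里虛構(gòu)。

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.view.Surface;
    import android.view.SurfaceHolder;

    public class CaptureSetup implements SurfaceTexture.OnFrameAvailableListener {
        private EglCore mEglCore;               // Grafika helper: wraps EGLDisplay/EGLContext
        private WindowSurface mDisplaySurface;  // Grafika helper: wraps an EGLSurface
        private WindowSurface mEncoderSurface;
        private SurfaceTexture mCameraTexture;
        private Camera mCamera;                 // assumed already opened elsewhere

        public void surfaceCreated(SurfaceHolder holder, Surface encoderInputSurface)
                throws java.io.IOException {
            mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
            mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
            mDisplaySurface.makeCurrent();      // EGL context is now current on this thread
            // Reuse the external-texture helper sketched earlier.
            mCameraTexture = ExternalTextureSetup.createInputTexture();
            mCameraTexture.setOnFrameAvailableListener(this);
            mCamera.setPreviewTexture(mCameraTexture);  // camera is the producer
            mCamera.startPreview();
            // encoderInputSurface would come from MediaCodec.createInputSurface().
            mEncoderSurface = new WindowSurface(mEglCore, encoderInputSurface, true);
        }

        @Override public void onFrameAvailable(SurfaceTexture st) {
            // Post to the UI thread; there we call updateTexImage() and render the
            // external texture to mDisplaySurface and mEncoderSurface in turn,
            // forwarding the transform and timestamp as shown earlier.
        }
    }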
TextureView
The TextureView class was introduced in Android 4.0 ("Ice Cream Sandwich"). It's the most complex of the View objects discussed here, combining a View with a SurfaceTexture.
TextureView類是在Android 4.0(Ice Cream Sandwich)中引入的。它是這里討論的View對(duì)象中最復(fù)雜的一個(gè):把一個(gè)View和一個(gè)SurfaceTexture結(jié)合在了一起。
Recall that the SurfaceTexture is a "GL consumer", consuming buffers of graphics data and making them available as textures. TextureView wraps a SurfaceTexture, taking over the responsibility of responding to the callbacks and acquiring new buffers. The arrival of new buffers causes TextureView to issue a View invalidate request. When asked to draw, the TextureView uses the contents of the most recently received buffer as its data source, rendering wherever and however the View state indicates it should.
回憶一下,SurfaceTexture是一個(gè)“GL consumer”,它消費(fèi)graphics buffer并使其可以作為紋理使用。TextureView包裝了一個(gè)SurfaceTexture,接管了響應(yīng)回調(diào)和acquire新buffer的職責(zé)。新buffer的到來會(huì)讓TextureView發(fā)出一個(gè)View的invalidate(重繪)請(qǐng)求。當(dāng)被要求繪制時(shí),TextureView把最近收到的buffer內(nèi)容作為數(shù)據(jù)源,按照View狀態(tài)所指示的位置和方式進(jìn)行渲染。
You can render on a TextureView with GLES just as you would SurfaceView. Just pass the SurfaceTexture to the EGL window creation call. However, doing so exposes a potential problem.
你可以像在SurfaceView上一樣,用GLES在TextureView上渲染:只需在創(chuàng)建EGL window surface時(shí)把SurfaceTexture傳進(jìn)去。不過,這樣做會(huì)暴露一個(gè)潛在的問題。
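"Pass the SurfaceTexture to the EGL window creation call" can look like the following sketch (class name invented): EGL14.eglCreateWindowSurface() accepts a SurfaceTexture as its native window argument.
“在創(chuàng)建EGL window surface時(shí)把SurfaceTexture傳進(jìn)去”大致如下面的示意(類名為虛構(gòu)):EGL14.eglCreateWindowSurface()的native window參數(shù)可以直接接受一個(gè)SurfaceTexture。

    import android.graphics.SurfaceTexture;
    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLSurface;
    import android.view.TextureView;

    public class TextureViewEgl implements TextureView.SurfaceTextureListener {
        private final EGLDisplay mDisplay;
        private final EGLConfig mConfig;
        private EGLSurface mWindowSurface;

        public TextureViewEgl(EGLDisplay display, EGLConfig config) {
            mDisplay = display;
            mConfig = config;
        }

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
            int[] attribs = { EGL14.EGL_NONE };
            // The TextureView's SurfaceTexture is the consumer; this creates the producer.
            mWindowSurface = EGL14.eglCreateWindowSurface(mDisplay, mConfig, st, attribs, 0);
            // eglMakeCurrent(...) and draw from here on.
        }

        @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
        @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
        @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
    }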
In most of what we've looked at, the BufferQueues have passed buffers between different processes. When rendering to a TextureView with GLES, both producer and consumer are in the same process, and they might even be handled on a single thread. Suppose we submit several buffers in quick succession from the UI thread. The EGL buffer swap call will need to dequeue a buffer from the BufferQueue, and it will stall until one is available. There won't be any available until the consumer acquires one for rendering, but that also happens on the UI thread… so we're stuck.
在前面討論的大多數(shù)場(chǎng)景里,BufferQueue是在不同進(jìn)程之間傳遞buffer的。而用GLES向TextureView渲染時(shí),生產(chǎn)者和消費(fèi)者在同一個(gè)進(jìn)程里,甚至很可能在同一個(gè)線程上被處理。假設(shè)我們?cè)赨I線程上快速連續(xù)地提交若干buffer:EGL的buffer交換調(diào)用需要從BufferQueue中dequeue一個(gè)buffer,并且會(huì)一直阻塞到有buffer可用為止;而只有當(dāng)消費(fèi)者acquire一個(gè)buffer用于渲染后才會(huì)有可用的buffer,但這件事同樣發(fā)生在UI線程上……于是我們就卡死了。
The solution is to have BufferQueue ensure there is always a buffer available to be dequeued, so the buffer swap never stalls. One way to guarantee this is to have BufferQueue discard the contents of the previously-queued buffer when a new buffer is queued, and to place restrictions on minimum buffer counts and maximum acquired buffer counts. (If your queue has three buffers, and all three buffers are acquired by the consumer, then there's nothing to dequeue and the buffer swap call must hang or fail. So we need to prevent the consumer from acquiring more than two buffers at once.) Dropping buffers is usually undesirable, so it's only enabled in specific situations, such as when the producer and consumer are in the same process.
解決辦法是讓BufferQueue保證始終有一個(gè)buffer可以被dequeue,這樣buffer交換就永遠(yuǎn)不會(huì)阻塞。一種保證方式是:當(dāng)新buffer入隊(duì)時(shí),讓BufferQueue丟棄之前入隊(duì)的buffer的內(nèi)容,并對(duì)最小buffer數(shù)量和最大acquired buffer數(shù)量加以限制(如果隊(duì)列里只有三個(gè)buffer,而三個(gè)都被消費(fèi)者acquire了,那就沒有東西可以dequeue,buffer交換調(diào)用只能掛起或失敗。所以必須阻止消費(fèi)者同時(shí)acquire超過兩個(gè)buffer)。丟棄buffer通常并不可取,所以這種做法只在特定場(chǎng)景下才會(huì)啟用,比如生產(chǎn)者和消費(fèi)者處于同一個(gè)進(jìn)程時(shí)。
Conclusion
We hope this page has provided useful insights into the way Android handles graphics at the system level.
我們希望這篇文檔為你理解Android如何在系統(tǒng)層面處理圖形數(shù)據(jù)提供了有用的視角。