1: Preface
I have always been curious about Android's drawing mechanism. Books and blog posts gave me a rough idea of SurfaceFlinger, but I never dug deeper. A recent project involved OpenGL-related requirements, which finally pushed me to study Android's drawing mechanism (based on the 7.0 source).
2: Surface, the App's Drawing Board
Think back to art class: the teacher would hand each of us a sheet of paper to draw on. Likewise, if an app wants to draw anything, it needs a drawing board of its own; in Android, that board is the Surface.
So when is this board handed to us? In other words, when is an app's Surface created? Recall the basics: of Android's four component types, the one responsible for UI is the Activity, and an Activity's UI is built from Views. All the Views of an Activity form a tree, and every tree has a root; the root of an Activity's View hierarchy is ViewRootImpl.
ViewRootImpl holds an mSurface field, which plays the role of the Java-layer Surface. Note the emphasis on "Java layer": the upper-layer Surface is only a puppet. The real Surface lives in C++; the Java Surface merely stores a pointer to the C++ Surface. So how does the Java Surface get wired up to its C++ counterpart?
In ViewRootImpl.relayoutWindow(), mWindowSession.relayout() is called with ViewRootImpl's own mSurface. That call reaches WindowManagerService.relayoutWindow(), which eventually ends up in WindowManagerService.createSurfaceControl():
private int createSurfaceControl(Surface outSurface, int result, WindowState win,
        WindowStateAnimator winAnimator) {
    ......
    WindowSurfaceController surfaceController = winAnimator.createSurfaceLocked();
    if (surfaceController != null) {
        surfaceController.getSurface(outSurface);
    } else {
        outSurface.release();
    }
    return result;
}
This first creates a WindowSurfaceController and then calls WindowSurfaceController.getSurface(outSurface):
void getSurface(Surface outSurface) {
    outSurface.copyFrom(mSurfaceControl);
}
The outSurface here is the mSurface from ViewRootImpl; the interesting part is Surface.copyFrom():
public void copyFrom(SurfaceControl other) {
    ......
    long surfaceControlPtr = other.mNativeObject;
    ......
    long newNativeObject = nativeCreateFromSurfaceControl(surfaceControlPtr);
    synchronized (mLock) {
        ......
        setNativeObjectLocked(newNativeObject);
    }
}
This calls nativeCreateFromSurfaceControl(surfaceControlPtr) to obtain a pointer to the C++ Surface:
//frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeCreateFromSurfaceControl(JNIEnv* env, jclass clazz,
        jlong surfaceControlNativeObj) {
    sp<SurfaceControl> ctrl(reinterpret_cast<SurfaceControl *>(surfaceControlNativeObj));
    sp<Surface> surface(ctrl->getSurface());
    if (surface != NULL) {
        surface->incStrong(&sRefBaseOwner);
    }
    return reinterpret_cast<jlong>(surface.get());
}
So the C++ side calls SurfaceControl::getSurface() to obtain the Surface object:
//frameworks/native/libs/gui/SurfaceControl.cpp
sp<Surface> SurfaceControl::getSurface() const
{
    Mutex::Autolock _l(mLock);
    if (mSurfaceData == 0) {
        mSurfaceData = new Surface(mGraphicBufferProducer, false);
    }
    return mSurfaceData;
}
At this point the origin of an Activity's Surface is finally settled: we have a drawing board we can draw data into.
3: Surface and Buffers
We compared app drawing to painting on a board, but unlike painting, drawing UI really means writing pixel data. That data ultimately gets rendered to the screen, where each pixel displays the value at its coordinate. So how do we write data to the screen? Remember that Android is built on Linux, and Linux has a framebuffer: write data into the framebuffer and, at a suitable moment, the system reads it back out and renders it to the screen. So could an app simply write its UI data straight into the framebuffer?
In practice that would be unworkable. A device runs many apps, each app can own many Surfaces, yet there is only one screen; if everyone wrote to it directly at once, chaos would ensue. So Android introduced SurfaceFlinger to take charge of this. Think of the screen as the school gallery and SurfaceFlinger as the teacher running it: the teacher hands each student a sheet of paper, tells them to paint on their own sheet and hand it back, then arranges everyone's paintings and pins them to the gallery (the screen) in a certain order.
Following the analogy, since the teacher gives every student a sheet, SurfaceFlinger likewise allocates each Activity a buffer to draw its UI into. These are called graphic buffers, while the Framebuffer above is the frame buffer. But the school has many students (Surfaces): each time a student finishes a sheet and hands it in, SurfaceFlinger must hand out another. If the gallery teacher (SurfaceFlinger) collected and distributed sheets (buffers) for every student (Surface) personally, it would be exhausted. So each student (Surface) gets an assistant dedicated to handing out and collecting sheets, and that assistant is the BufferQueue.
The BufferQueue manages all of a Surface's buffers. Internally it allocates an array of 64 BufferSlots:
//frameworks/native/include/gui/BufferQueueDefs.h
enum { NUM_BUFFER_SLOTS = 64 };
typedef BufferSlot SlotsType[NUM_BUFFER_SLOTS];
Each BufferSlot holds a smart pointer to a buffer, a GraphicBuffer. In other words, one Surface can own at most 64 graphic buffers.
The BufferQueue follows the producer-consumer pattern. The producer is BufferQueueProducer: it calls dequeueBuffer() to obtain a free buffer, fills it with the graphics data to draw, then calls queueBuffer() to hand the buffer back to the BufferQueue. The consumer is BufferQueueConsumer, which calls acquireBuffer() to take a filled buffer from the BufferQueue and consume it.
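The dequeue/queue/acquire handshake can be sketched with a toy queue. All names below are illustrative, not the real API; the real BufferQueueProducer/BufferQueueConsumer add fences, slot state machines, and Binder plumbing:

```cpp
#include <deque>
#include <vector>

// Toy model of BufferQueue's slot handshake: a fixed pool of slots
// cycles between free (available to the producer), dequeued (owned by
// the producer) and queued (ready for the consumer).
struct ToyBufferQueue {
    static const int kSlots = 4;              // the real code uses 64 (NUM_BUFFER_SLOTS)
    std::vector<std::vector<char>> buffers;   // stand-in for GraphicBuffer storage
    std::deque<int> freeSlots, queuedSlots;

    ToyBufferQueue() : buffers(kSlots, std::vector<char>(16)) {
        for (int i = 0; i < kSlots; ++i) freeSlots.push_back(i);
    }
    // producer side: grab a free slot to draw into (cf. dequeueBuffer)
    int dequeueBuffer() {
        if (freeSlots.empty()) return -1;
        int slot = freeSlots.front();
        freeSlots.pop_front();
        return slot;
    }
    // producer side: hand a filled slot back (cf. queueBuffer)
    void queueBuffer(int slot) { queuedSlots.push_back(slot); }
    // consumer side: take the oldest filled slot (cf. acquireBuffer)
    int acquireBuffer() {
        if (queuedSlots.empty()) return -1;
        int slot = queuedSlots.front();
        queuedSlots.pop_front();
        return slot;
    }
    // consumer side: return the slot to the free list (cf. releaseBuffer)
    void releaseBuffer(int slot) { freeSlots.push_back(slot); }
};

// One frame's round trip: app draws, SurfaceFlinger consumes.
int roundTrip() {
    ToyBufferQueue q;
    int slot = q.dequeueBuffer();   // app: get an empty buffer
    q.buffers[slot][0] = 42;        // app: "draw" into it
    q.queueBuffer(slot);            // app: submit the frame
    int got = q.acquireBuffer();    // SurfaceFlinger: consume it
    int value = q.buffers[got][0];
    q.releaseBuffer(got);           // SurfaceFlinger: recycle the slot
    return value;
}
```

One detail the toy model does capture: a slot, once dequeued, is owned exclusively by the producer until queued, so the two sides never write the same buffer concurrently.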
The Surface does not care about the consumer; that is not its business. Its job is to obtain a free buffer and draw its UI data into it, so what it needs is the producer of free buffers, namely the BufferQueueProducer. So when does the Surface get hold of this BufferQueueProducer? Recall the C++ Surface creation we saw above:
//frameworks/native/libs/gui/SurfaceControl.cpp
mSurfaceData = new Surface(mGraphicBufferProducer, false);
A producer is handed to the Surface the moment it is created. But where does this mGraphicBufferProducer come from? The Surface is created inside SurfaceControl, and mGraphicBufferProducer is a smart pointer member of SurfaceControl, assigned in SurfaceControl's own constructor:
//frameworks/native/libs/gui/SurfaceControl.cpp
SurfaceControl::SurfaceControl(
        const sp<SurfaceComposerClient>& client,
        const sp<IBinder>& handle,
        const sp<IGraphicBufferProducer>& gbp)
    : mClient(client), mHandle(handle), mGraphicBufferProducer(gbp)
{
}
So the question becomes: when is this SurfaceControl created? Remember that the Java layer created a WindowSurfaceController before creating the Surface? WindowSurfaceController's constructor creates a Java SurfaceControl object, and when that SurfaceControl is created it in turn creates a C++ SurfaceControl:
mNativeObject = nativeCreate(session, name, w, h, format, flags);
Let's see what this native method does:
//frameworks/base/core/jni/android_view_SurfaceControl.cpp
static jlong nativeCreate(JNIEnv* env, jclass clazz, jobject sessionObj,
        jstring nameStr, jint w, jint h, jint format, jint flags) {
    ScopedUtfChars name(env, nameStr);
    sp<SurfaceComposerClient> client(android_view_SurfaceSession_getClient(env, sessionObj));
    sp<SurfaceControl> surface = client->createSurface(
            String8(name.c_str()), w, h, format, flags);
    surface->incStrong((void *)nativeCreate);
    return reinterpret_cast<jlong>(surface.get());
}
The C++ SurfaceControl object is created by a SurfaceComposerClient via its createSurface() method:
//frameworks/native/libs/gui/SurfaceComposerClient.cpp
sp<SurfaceControl> SurfaceComposerClient::createSurface(
        const String8& name,
        uint32_t w,
        uint32_t h,
        PixelFormat format,
        uint32_t flags)
{
    sp<SurfaceControl> sur;
    if (mStatus == NO_ERROR) {
        sp<IBinder> handle;
        sp<IGraphicBufferProducer> gbp;
        status_t err = mClient->createSurface(name, w, h, format, flags,
                &handle, &gbp);
        ALOGE_IF(err, "SurfaceComposerClient::createSurface error %s", strerror(-err));
        if (err == NO_ERROR) {
            sur = new SurfaceControl(this, handle, gbp);
        }
    }
    return sur;
}
Here mClient->createSurface(name, w, h, format, flags, &handle, &gbp) is called. mClient is a smart pointer member of SurfaceComposerClient, but what does it point to?
SurfaceComposerClient inherits from RefBase, so the first time it is strongly referenced its onFirstRef() method is triggered:
//frameworks/native/libs/gui/SurfaceComposerClient.cpp
void SurfaceComposerClient::onFirstRef() {
    sp<ISurfaceComposer> sm(ComposerService::getComposerService());
    if (sm != 0) {
        sp<ISurfaceComposerClient> conn = sm->createConnection();
        if (conn != 0) {
            mClient = conn;
            mStatus = NO_ERROR;
        }
    }
}
So mClient points at the result of sm->createConnection(), and sm points at ComposerService::getComposerService():
//frameworks/native/libs/gui/SurfaceComposerClient.cpp
sp<ISurfaceComposer> ComposerService::getComposerService() {
    ComposerService& instance = ComposerService::getInstance();
    Mutex::Autolock _l(instance.mLock);
    if (instance.mComposerService == NULL) {
        ComposerService::getInstance().connectLocked();
        assert(instance.mComposerService != NULL);
        ALOGD("ComposerService reconnected");
    }
    return instance.mComposerService;
}
This first calls ComposerService::getInstance().connectLocked():
//frameworks/native/libs/gui/SurfaceComposerClient.cpp
void ComposerService::connectLocked() {
    const String16 name("SurfaceFlinger");
    while (getService(name, &mComposerService) != NO_ERROR) {
        usleep(250000);
    }
}
connectLocked() fetches SurfaceFlinger's Binder proxy through the ServiceManager and stores it in ComposerService's mComposerService field. So ComposerService::getComposerService() returns SurfaceFlinger's proxy in the client process.
Back in SurfaceComposerClient::onFirstRef(), sm->createConnection() is in fact a cross-process call to SurfaceFlinger::createConnection():
//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
sp<ISurfaceComposerClient> SurfaceFlinger::createConnection()
{
    sp<ISurfaceComposerClient> bclient;
    sp<Client> client(new Client(this));
    status_t err = client->initCheck();
    if (err == NO_ERROR) {
        bclient = client;
    }
    return bclient;
}
This returns a Client object, a helper of SurfaceFlinger. SurfaceComposerClient::onFirstRef() stores this Client in mClient, so now we know where mClient comes from.
Back to where we left off, mClient->createSurface(name, w, h, format, flags, &handle, &gbp):
//frameworks/native/services/surfaceflinger/Client.cpp
status_t Client::createSurface(
        const String8& name,
        uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
        sp<IBinder>* handle,
        sp<IGraphicBufferProducer>* gbp)
{
    /*
     * createSurface must be called from the GL thread so that it can
     * have access to the GL context.
     */
    class MessageCreateLayer : public MessageBase {
        SurfaceFlinger* flinger;
        Client* client;
        sp<IBinder>* handle;
        sp<IGraphicBufferProducer>* gbp;
        status_t result;
        const String8& name;
        uint32_t w, h;
        PixelFormat format;
        uint32_t flags;
    public:
        MessageCreateLayer(SurfaceFlinger* flinger,
                const String8& name, Client* client,
                uint32_t w, uint32_t h, PixelFormat format, uint32_t flags,
                sp<IBinder>* handle,
                sp<IGraphicBufferProducer>* gbp)
            : flinger(flinger), client(client),
              handle(handle), gbp(gbp), result(NO_ERROR),
              name(name), w(w), h(h), format(format), flags(flags) {
        }
        status_t getResult() const { return result; }
        virtual bool handler() {
            result = flinger->createLayer(name, client, w, h, format, flags,
                    handle, gbp);
            return true;
        }
    };
    sp<MessageBase> msg = new MessageCreateLayer(mFlinger.get(),
            name, this, w, h, format, flags, handle, gbp);
    mFlinger->postMessageSync(msg);
    return static_cast<MessageCreateLayer*>( msg.get() )->getResult();
}
This calls mFlinger->postMessageSync(msg) so that SurfaceFlinger handles the request. The key is MessageCreateLayer's handler(), which calls SurfaceFlinger::createLayer(); that eventually reaches SurfaceFlinger::createNormalLayer():
//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
status_t SurfaceFlinger::createNormalLayer(const sp<Client>& client,
        const String8& name, uint32_t w, uint32_t h, uint32_t flags, PixelFormat& format,
        sp<IBinder>* handle, sp<IGraphicBufferProducer>* gbp, sp<Layer>* outLayer)
{
    ......
    *outLayer = new Layer(this, client, name, w, h, flags);
    status_t err = (*outLayer)->setBuffers(w, h, format, flags);
    if (err == NO_ERROR) {
        *handle = (*outLayer)->getHandle();
        *gbp = (*outLayer)->getProducer();
    }
    return err;
}
Here, at long last, gbp is assigned. If you have not been lost along the way, remember that this gbp is the Surface's mGraphicBufferProducer.
Layer::getProducer() simply returns its mProducer member, whose type is MonitoredProducer. But we said above that the BufferQueue's producer is BufferQueueProducer; how did it become a MonitoredProducer? Bear with me and keep reading.
When Layer's onFirstRef() is triggered, the MonitoredProducer is created:
//frameworks/native/services/surfaceflinger/Layer.cpp
void Layer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer);
    mProducer = new MonitoredProducer(producer, mFlinger);
    mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(consumer, mTextureName);
    ......
}
BufferQueue::createBufferQueue() creates the familiar producer, a BufferQueueProducer, and passes it to MonitoredProducer(). So MonitoredProducer is merely a proxy; the real producer is still the BufferQueueProducer.
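The proxy relationship can be sketched in a few lines. The names below are illustrative stand-ins, not the real IGraphicBufferProducer API; the real MonitoredProducer also notifies SurfaceFlinger so that producers of dead processes can be cleaned up:

```cpp
#include <memory>

// stands in for IGraphicBufferProducer: the interface both the real
// producer and the proxy implement
struct IToyProducer {
    virtual ~IToyProducer() {}
    virtual int dequeueBuffer() = 0;
};

// stands in for BufferQueueProducer: the object doing the real work
struct ToyQueueProducer : IToyProducer {
    int next = 0;
    int dequeueBuffer() override { return next++; }
};

// stands in for MonitoredProducer: implements the same interface by
// forwarding every call to the wrapped real producer
struct ToyMonitoredProducer : IToyProducer {
    std::shared_ptr<IToyProducer> real;
    explicit ToyMonitoredProducer(std::shared_ptr<IToyProducer> p) : real(p) {}
    int dequeueBuffer() override {
        // (the real class would also record activity for SurfaceFlinger here)
        return real->dequeueBuffer();  // pure forwarding
    }
};
```

Because the proxy implements the same interface, the Surface never needs to know whether it is talking to the real producer or the wrapper.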
4: The Drawing Process
Once the Surface holds mGraphicBufferProducer it can take a free buffer from the BufferQueue and write its UI data into it. As for how UI becomes an array of bytes, that is the job of OpenGL or the Skia library. Roughly speaking, when the upper layer asks the lower layer to draw a rectangle, OpenGL or Skia converts that rectangle into the color value of every pixel that makes it up; the collection of those color values is exactly an array of bytes, and the device can later render each pixel of its screen from that array.
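The claim that "a rectangle becomes per-pixel color values" can be made concrete with a toy software rasterizer. This is not Skia's API, just a minimal sketch of what ends up in the buffer:

```cpp
#include <cstdint>
#include <vector>

// Toy illustration of what "drawing a rect" means at the buffer level:
// a graphics library ultimately writes a color value into every pixel
// of the rectangle inside a width*height pixel array. Skia and OpenGL
// do vastly more (anti-aliasing, blending, GPU paths), but the end
// product is still bytes in a buffer.
std::vector<uint32_t> drawRect(int width, int height,
                               int left, int top, int right, int bottom,
                               uint32_t color) {
    std::vector<uint32_t> pixels(width * height, 0);  // the "graphic buffer"
    for (int y = top; y < bottom; ++y)
        for (int x = left; x < right; ++x)
            pixels[y * width + x] = color;            // one pixel's color value
    return pixels;
}
```

The screen (or compositor) then only has to copy these values out; it has no idea a "rectangle" was ever involved.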
In Android, a View's draw() ultimately paints through a Canvas. To draw a rectangle, for example, we call Canvas.drawRect(), which simply calls native_drawRect():
//frameworks/base/core/jni/android_graphics_Canvas.cpp
static void drawRect(JNIEnv* env, jobject, jlong canvasHandle, jfloat left, jfloat top,
        jfloat right, jfloat bottom, jlong paintHandle) {
    const Paint* paint = reinterpret_cast<Paint*>(paintHandle);
    get_canvas(canvasHandle)->drawRect(left, top, right, bottom, *paint);
}
This calls SkiaCanvas::drawRect():
//frameworks/base/libs/hwui/SkiaCanvas.cpp
void SkiaCanvas::drawRect(float left, float top, float right, float bottom,
        const SkPaint& paint) {
    mCanvas->drawRectCoords(left, top, right, bottom, paint);
}
The mCanvas here is an SkCanvas. SkCanvas holds an SkBitmap, and that SkBitmap in turn holds a buffer dequeued from the BufferQueue, which is how SkCanvas can write content into the buffer. So when was the buffer held by SkBitmap allocated?
Anyone who has drawn with SurfaceView and Canvas knows that to get a Canvas you must go through SurfaceHolder.lockCanvas(), which ends up in the Java-layer Surface.lockCanvas(). As we said, the Java Surface is only a puppet; this method calls the JNI method nativeLockCanvas():
//frameworks/base/core/jni/android_view_Surface.cpp
static jlong nativeLockCanvas(JNIEnv* env, jclass clazz,
        jlong nativeObject, jobject canvasObj, jobject dirtyRectObj) {
    sp<Surface> surface(reinterpret_cast<Surface *>(nativeObject));
    ......
    ANativeWindow_Buffer outBuffer;
    status_t err = surface->lock(&outBuffer, dirtyRectPtr);
    ......
    SkBitmap bitmap;
    ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
    bitmap.setInfo(info, bpr);
    if (outBuffer.width > 0 && outBuffer.height > 0) {
        bitmap.setPixels(outBuffer.bits);
    } else {
        // be safe with an empty bitmap.
        bitmap.setPixels(NULL);
    }
    Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
    nativeCanvas->setBitmap(bitmap);
    ......
    return (jlong) lockedSurface.get();
}
Here surface->lock(&outBuffer, dirtyRectPtr) obtains an ANativeWindow_Buffer, and bitmap.setPixels(outBuffer.bits) finally hands the buffer's bits to the SkBitmap. bits is the start address of the graphic buffer, so from then on SkCanvas can write UI data through that address. The key question now is: where does outBuffer.bits come from? Let's first trace the ANativeWindow_Buffer back through surface->lock(&outBuffer, dirtyRectPtr):
//frameworks/native/libs/gui/Surface.cpp
status_t Surface::lock(
        ANativeWindow_Buffer* outBuffer, ARect* inOutDirtyBounds)
{
    ......
    ANativeWindowBuffer* out;
    int fenceFd = -1;
    // dequeue a buffer
    status_t err = dequeueBuffer(&out, &fenceFd);
    if (err == NO_ERROR) {
        sp<GraphicBuffer> backBuffer(GraphicBuffer::getSelf(out));
        ......
        void* vaddr;
        status_t res = backBuffer->lockAsync(
                GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                newDirtyRegion.bounds(), &vaddr, fenceFd);
        if (res != 0) {
            err = INVALID_OPERATION;
        } else {
            mLockedBuffer = backBuffer;
            outBuffer->width = backBuffer->width;
            outBuffer->height = backBuffer->height;
            outBuffer->stride = backBuffer->stride;
            outBuffer->format = backBuffer->format;
            outBuffer->bits = vaddr;
        }
    }
    return err;
}
So lock() also obtains its buffer via dequeueBuffer(), which ultimately calls BufferQueueProducer::dequeueBuffer(). The pointer vaddr, the buffer's start address, is saved into outBuffer's bits. As for how that start address is produced: on the surface it comes from GraphicBuffer::lockAsync(), which ultimately calls GraphicBufferMapper::lock():
//frameworks/native/libs/ui/GraphicBufferMapper.cpp
status_t GraphicBufferMapper::lock(buffer_handle_t handle,
        uint32_t usage, const Rect& bounds, void** vaddr)
{
    status_t err;
    err = mAllocMod->lock(mAllocMod, handle, static_cast<int>(usage),
            bounds.left, bounds.top, bounds.width(), bounds.height(),
            vaddr);
    return err;
}
mAllocMod is the gralloc_module_t, and a handle is passed in. mAllocMod->lock() is ultimately implemented in /hardware/libhardware/modules/gralloc/mapper.cpp:
//hardware/libhardware/modules/gralloc/mapper.cpp
int gralloc_lock(gralloc_module_t const* /*module*/,
        buffer_handle_t handle, int /*usage*/,
        int /*l*/, int /*t*/, int /*w*/, int /*h*/,
        void** vaddr)
{
    if (private_handle_t::validate(handle) < 0)
        return -EINVAL;
    private_handle_t* hnd = (private_handle_t*)handle;
    *vaddr = (void*)hnd->base;
    return 0;
}
This directly returns handle->base as the buffer's start address. In fact, when we asked the gralloc_module_t to allocate the buffer there was also a handle, and that handle has been kept inside the GraphicBuffer ever since; with it, gralloc_module_t can find the buffer it allocated earlier. The handle is, literally, a handle here, much like the handles used in Binder transactions.
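The handle idea can be condensed into a toy analogue of private_handle_t and gralloc_lock() (the struct, magic value, and function names below are invented for illustration):

```cpp
#include <cstdint>

// Toy analogue of private_handle_t: the "handle" is a small struct
// carrying just enough information (here a magic number for validation
// and the mapped base address) for the module to find the buffer it
// allocated earlier.
struct ToyHandle {
    static constexpr int kMagic = 0x6772;  // cf. private_handle_t's validate() magic
    int magic;
    uintptr_t base;                        // virtual address of the mapped buffer
};

// cf. gralloc_lock(): validate the handle, then hand back handle->base
int toyLock(const ToyHandle* h, void** vaddr) {
    if (h == nullptr || h->magic != ToyHandle::kMagic)
        return -1;                         // -EINVAL in the real code
    *vaddr = reinterpret_cast<void*>(h->base);
    return 0;
}
```

Note that lock() allocates nothing: it merely looks up an address recorded at allocation time, which is exactly the question the next section answers.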
At this point SkCanvas finally holds the graphic buffer's start address and can begin drawing.
When drawing finishes, SurfaceHolder.unlockCanvasAndPost(canvas) is called, which ends up in Surface::unlockAndPost():
//frameworks/native/libs/gui/Surface.cpp
status_t Surface::unlockAndPost()
{
    ......
    int fd = -1;
    status_t err = mLockedBuffer->unlockAsync(&fd);
    err = queueBuffer(mLockedBuffer.get(), fd);
    mPostedBuffer = mLockedBuffer;
    mLockedBuffer = 0;
    return err;
}
Drawing is done at this point, so queueBuffer() is called to put the buffer back into the BufferQueue. This queueBuffer() ultimately calls BufferQueueProducer::queueBuffer(), and after enqueuing the buffer that method calls:
frameAvailableListener->onFrameAvailable(item);
The real identity behind frameAvailableListener is the Layer. Remember that when SurfaceFlinger created the Surface it also created a Layer? It turns out the Layer also carries the duty of listening for queueBuffer(). In onFrameAvailable() the Layer calls SurfaceFlinger::signalLayerUpdate():
//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::signalLayerUpdate() {
    mEventQueue.invalidate();
}
mEventQueue is a MessageQueue:
//frameworks/native/services/surfaceflinger/MessageQueue.cpp
void MessageQueue::invalidate() {
#if INVALIDATE_ON_VSYNC
    mEvents->requestNextVsync();
#else
    mHandler->dispatchInvalidate();
#endif
}
Depending on INVALIDATE_ON_VSYNC, either requestNextVsync() or dispatchInvalidate() is used; either way, SurfaceFlinger is notified to refresh.
5: Allocating the Graphic Buffer
Earlier we saw that SkCanvas obtains the buffer's start address through the handle inside GraphicBuffer. But when was that start address assigned to handle->base? We said there was also a handle when we requested the buffer's allocation; was it assigned at allocation time?
Graphic buffer allocation starts in BufferQueueProducer::allocateBuffers(). The function is long, so here is just the core:
//frameworks/native/libs/gui/BufferQueueProducer.cpp
void BufferQueueProducer::allocateBuffers(uint32_t width, uint32_t height,
        PixelFormat format, uint32_t usage) {
    size_t newBufferCount = 0;
    newBufferCount = mCore->mFreeSlots.size();
    Vector<sp<GraphicBuffer>> buffers;
    for (size_t i = 0; i < newBufferCount; ++i) {
        status_t result = NO_ERROR;
        sp<GraphicBuffer> graphicBuffer(mCore->mAllocator->createGraphicBuffer(
                allocWidth, allocHeight, allocFormat, allocUsage, &result));
        buffers.push_back(graphicBuffer);
    }
}
This first computes how many free slots remain in the BufferQueueProducer to hold buffers, then the loop creates new buffers to fill those free slots. Each buffer is created by:
mCore->mAllocator->createGraphicBuffer(allocWidth, allocHeight, allocFormat, allocUsage, &result)
The mAllocator here is a GraphicBufferAlloc:
//frameworks/native/libs/gui/GraphicBufferAlloc.cpp
sp<GraphicBuffer> GraphicBufferAlloc::createGraphicBuffer(uint32_t width,
        uint32_t height, PixelFormat format, uint32_t usage, status_t* error) {
    sp<GraphicBuffer> graphicBuffer(
            new GraphicBuffer(width, height, format, usage));
    return graphicBuffer;
}
This simply news up a GraphicBuffer:
//frameworks/native/libs/ui/GraphicBuffer.cpp
GraphicBuffer::GraphicBuffer(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage)
    : BASE(), mOwner(ownData), mBufferMapper(GraphicBufferMapper::get()),
      mInitCheck(NO_ERROR), mId(getUniqueId()), mGenerationNumber(0)
{
    width =
    height =
    stride =
    format =
    usage = 0;
    handle = NULL;
    mInitCheck = initSize(inWidth, inHeight, inFormat, inUsage);
}
So although the GraphicBuffer is created, its fields are at this point merely zeroed out (note the chained assignment), and handle is still NULL; the real work happens in the final call to initSize():
//frameworks/native/libs/ui/GraphicBuffer.cpp
status_t GraphicBuffer::initSize(uint32_t inWidth, uint32_t inHeight,
        PixelFormat inFormat, uint32_t inUsage)
{
    GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
    uint32_t outStride = 0;
    status_t err = allocator.alloc(inWidth, inHeight, inFormat, inUsage,
            &handle, &outStride);
    if (err == NO_ERROR) {
        width = static_cast<int>(inWidth);
        height = static_cast<int>(inHeight);
        format = inFormat;
        usage = static_cast<int>(inUsage);
        stride = static_cast<int>(outStride);
    }
    return err;
}
This is where the fields are really initialized and the buffer really allocated; the key line is:
status_t err = allocator.alloc(inWidth, inHeight, inFormat, inUsage, &handle, &outStride);
GraphicBufferAllocator::alloc() performs the allocation, receiving several arguments including the GraphicBuffer's handle. GraphicBufferAllocator ultimately calls into the gralloc module's alloc(), which decides from the usage flags which kind of buffer to allocate:
//hardware/libhardware/modules/gralloc/gralloc.cpp
if (usage & GRALLOC_USAGE_HW_FB) {
    err = gralloc_alloc_framebuffer(dev, size, usage, pHandle);
} else {
    err = gralloc_alloc_buffer(dev, size, usage, pHandle);
}
Note that we are discussing the allocation of graphic buffers, not the frame buffer: the frame buffer is allocated by gralloc_alloc_framebuffer(), while graphic buffers are allocated by gralloc_alloc_buffer():
//hardware/libhardware/modules/gralloc/gralloc.cpp
static int gralloc_alloc_buffer(alloc_device_t* dev,
        size_t size, int /*usage*/, buffer_handle_t* pHandle)
{
    int err = 0;
    int fd = -1;
    size = roundUpToPageSize(size);
    fd = ashmem_create_region("gralloc-buffer", size);
    if (err == 0) {
        private_handle_t* hnd = new private_handle_t(fd, size, 0);
        gralloc_module_t* module = reinterpret_cast<gralloc_module_t*>(
                dev->common.module);
        err = mapBuffer(module, hnd);
        if (err == 0) {
            *pHandle = hnd;
        }
    }
    return err;
}
This creates a region of anonymous shared memory and gets back a file descriptor fd, then creates a private_handle_t carrying that region's fd and size, and finally assigns it to pHandle, which is the handle inside GraphicBuffer. The key point here is mapBuffer():
//hardware/libhardware/modules/gralloc/mapper.cpp
int mapBuffer(gralloc_module_t const* module,
        private_handle_t* hnd)
{
    void* vaddr;
    return gralloc_map(module, hnd, &vaddr);
}
which directly calls gralloc_map():
//hardware/libhardware/modules/gralloc/mapper.cpp
static int gralloc_map(gralloc_module_t const* /*module*/,
        buffer_handle_t handle,
        void** vaddr)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    if (!(hnd->flags & private_handle_t::PRIV_FLAGS_FRAMEBUFFER)) {
        size_t size = hnd->size;
        void* mappedAddress = mmap(0, size,
                PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
        hnd->base = uintptr_t(mappedAddress) + hnd->offset;
    }
    *vaddr = (void*)hnd->base;
    return 0;
}
Here mmap() maps the newly created anonymous shared memory into the address space of the SurfaceFlinger process at mappedAddress, which is cast to uintptr_t and assigned to hnd->base (hnd->offset is 0 at this point). So, indeed, the address is assigned to handle->base at allocation time.
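The effect of gralloc_map() can be reproduced with plain POSIX mmap(). The sketch below uses Linux's memfd_create() as a stand-in for the Android-specific ashmem_create_region() (both hand back an fd backed by anonymous shared memory; memfd_create needs glibc 2.27+). Mapping the same fd twice models mapping it in two different processes, which is the key property the whole design relies on:

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

// Sketch of what gralloc_map() achieves. Two independent mmap()s of the
// same shared-memory fd produce two different virtual addresses, but
// writes through one mapping are visible through the other.
int sharedMappingDemo() {
    const size_t size = 4096;
    int fd = memfd_create("toy-gralloc-buffer", 0);  // stand-in for ashmem fd
    if (fd < 0 || ftruncate(fd, size) != 0) return -1;

    // "SurfaceFlinger side": map the fd (cf. gralloc_map storing hnd->base)
    char* flingerBase = static_cast<char*>(
            mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    // "app side": a second, independent mapping of the same fd
    char* appBase = static_cast<char*>(
            mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (flingerBase == MAP_FAILED || appBase == MAP_FAILED) return -1;

    std::strcpy(flingerBase, "pixels");                    // write via one mapping
    int ok = (std::strcmp(appBase, "pixels") == 0) ? 0 : -1;  // read via the other

    munmap(flingerBase, size);
    munmap(appBase, size);
    close(fd);
    return ok;
}
```

The two base addresses generally differ, which is why the app cannot reuse SurfaceFlinger's address and must perform its own mapping, as the next section describes.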
6: Mapping the Buffer
We saw above that after the SurfaceFlinger process asks Gralloc to allocate a buffer, it calls mmap() to map the buffer into its own address space. But OpenGL or Skia writes into the buffer from the app process, and since every process has its own address space, the address at which the anonymous shared memory is mapped in SurfaceFlinger cannot be used directly on the app side. Yet above, in the app process, we handed the GraphicBuffer handle's base (the buffer's start address) straight to the Skia graphics library. How can that work?
SurfaceFlinger is no deity and gets no special treatment here. What we need to know is that the GraphicBuffer has to be transferred across processes, from SurfaceFlinger to the app, and for a C++ object to support cross-process transfer it must implement the Flattenable interface so it can be serialized.
When the GraphicBuffer is transferred to the app process, its unflatten() method is called. unflatten() calls GraphicBufferMapper::registerBuffer(handle), which ultimately calls gralloc_register_buffer():
//hardware/libhardware/modules/gralloc/mapper.cpp
int gralloc_register_buffer(gralloc_module_t const* module,
        buffer_handle_t handle)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    void *vaddr;
    return gralloc_map(module, handle, &vaddr);
}
Here is our old friend gralloc_map() again. Remember how, while allocating the graphic buffer, it mapped the buffer into the SurfaceFlinger process? It is now invoked once more, this time using the buffer's file descriptor fd to map the buffer into the app process.
Summary:
To sum up, the main steps are:
1. The app asks the BufferQueue in the SurfaceFlinger process to allocate a graphic buffer.
2. The BufferQueue creates a region of anonymous shared memory through Gralloc and passes its file descriptor fd to the app process via the GraphicBuffer.
3. The app process maps the buffer into its own address space through the fd and hands the start address to the graphics library.
4. The graphics library draws through that start address.
5. When drawing completes, the app hands the buffer back to the BufferQueue.
6. On queueBuffer(), the Layer notifies SurfaceFlinger to refresh.
There is much more to Android's drawing mechanism, and plenty of material exists on the subject, but the most important thing is still to read the source yourself and work through the logic.