I. Hardware Acceleration Initialization
The Canvas API is used to draw an application's UI elements. In a hardware-accelerated rendering environment, these Canvas API calls are ultimately translated into OpenGL API calls (the translation is transparent to the application). It is therefore essential to initialize the OpenGL environment (also called the OpenGL rendering context) when a new Activity starts.
The overall hwui flow (originally shown as a diagram):
In the OpenGL environment, an Activity corresponds to an ANativeWindow. The ANativeWindow dequeues a GraphicBuffer from SurfaceFlinger via dequeueBuffer, draws into it with OpenGL, and then queues it back to SurfaceFlinger via queueBuffer for composition and display.
1) An OpenGL rendering context can only be bound to one thread, which avoids multi-thread conflicts (the same idea as "UI may only be updated on the UI thread"). So the first initialization task is to create a Render Thread.
2) An Android application may contain multiple Activity components. When the Main Thread sends rendering commands to the Render Thread, the Render Thread must know which window is currently being rendered. So the second initialization task is to tell the Render Thread which window that is.
The following sections walk through hwui initialization from these two angles:
1. RenderThread initialization
1.1 Java-layer analysis
We start from ViewRootImpl's setView. Inside this function, some windows are excluded from hwui: for example, windows whose Canvas API calls cannot be translated into OpenGL calls, and windows that simply don't need hwui drawing (hwui adds memory overhead).
frameworks/base/core/java/android/view/ViewRootImpl.java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
...
// If the application owns the surface, don't enable hardware acceleration
if (mSurfaceHolder == null) {
// While this is supposed to enable only, it can effectively disable
// the acceleration too.
enableHardwareAcceleration(attrs);
A SurfaceView controls its own rendering entirely from the application side, so hardware acceleration does not need to be enabled for it.
frameworks/base/core/java/android/view/ViewRootImpl.java
private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
...
// Don't enable hardware acceleration when the application is in compatibility mode
if (mTranslator != null) return;
// Try to enable hardware acceleration if requested
final boolean hardwareAccelerated =
(attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;
if (hardwareAccelerated) {
if (!ThreadedRenderer.isAvailable()) {
return;
}
// Persistent processes (including the system) should not do
// accelerated rendering on low-end devices. In that case,
// sRendererDisabled will be set. In addition, the system process
// itself should never do accelerated rendering. In that case, both
// sRendererDisabled and sSystemRendererDisabled are set. When
// sSystemRendererDisabled is set, PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED
// can be used by code on the system process to escape that and enable
// HW accelerated drawing. (This is basically for the lock screen.)
final boolean fakeHwAccelerated = (attrs.privateFlags &
WindowManager.LayoutParams.PRIVATE_FLAG_FAKE_HARDWARE_ACCELERATED) != 0;
final boolean forceHwAccelerated = (attrs.privateFlags &
WindowManager.LayoutParams.PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED) != 0;
if (fakeHwAccelerated) {
// This is exclusively for the preview windows the window manager
// shows for launching applications, so they will look more like
// the app being launched.
mAttachInfo.mHardwareAccelerationRequested = true;
} else if (!ThreadedRenderer.sRendererDisabled
|| (ThreadedRenderer.sSystemRendererDisabled && forceHwAccelerated)) {
...
mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent,
attrs.getTitle().toString());
Compatibility mode does not go through hwui.
The hardware must support hwui, which is reflected by isAvailable().
fakeHwAccelerated == true corresponds to the "Starting Window xxx" layer.
sRendererDisabled == true corresponds to persistent processes (system-level apps can set the persistent attribute in their manifest); sSystemRendererDisabled && forceHwAccelerated corresponds to the lock-screen scenario. In other words, hwui is used when the process is not a persistent process, or when it is the system process (which hosts many threads that display UI, though usually fairly simple UI) but we are in the lock-screen scenario.
ThreadedRenderer(Context context, boolean translucent, String name) {
...
long rootNodePtr = nCreateRootRenderNode();
mRootNode = RenderNode.adopt(rootNodePtr);
mRootNode.setClipToBounds(false);
mNativeProxy = nCreateProxy(translucent, rootNodePtr);
nSetName(mNativeProxy, name);
ProcessInitializer.sInstance.init(context, mNativeProxy);
loadSystemProperties();
}
Initialization of the Java-layer ThreadedRenderer mainly creates the native-layer renderthread and a RenderProxy; the latter is used to post messages to the renderthread. The process is as follows:
1.2 native層分析
From the figure above, the renderthread is already running at this point. Before going further, note the WorkQueue added in Android P: a WorkQueue mechanism now sits between RenderProxy, RenderThread, and CanvasContext. It works as follows:
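The idea can be shown with a minimal sketch: callers post closures, and a single render thread drains them in order. This is a simplified stand-in, not the real hwui WorkQueue (names like `post`/`loop` only mirror the concept):

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal single-consumer work queue: other threads post closures,
// the render thread blocks in loop() and runs them in order.
class WorkQueue {
    std::mutex mLock;
    std::condition_variable mCv;
    std::queue<std::function<void()>> mTasks;
    bool mQuit = false;

public:
    void post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> g(mLock);
            mTasks.push(std::move(task));
        }
        mCv.notify_one();  // wake the render thread (compare waitForWork())
    }

    void quit() {
        {
            std::lock_guard<std::mutex> g(mLock);
            mQuit = true;
        }
        mCv.notify_one();
    }

    // Body of the render thread: wait for work, drain the queue, repeat.
    void loop() {
        std::unique_lock<std::mutex> lk(mLock);
        while (true) {
            mCv.wait(lk, [&] { return mQuit || !mTasks.empty(); });
            while (!mTasks.empty()) {
                auto task = std::move(mTasks.front());
                mTasks.pop();
                lk.unlock();  // never hold the lock while running a task
                task();
                lk.lock();
            }
            if (mQuit) return;  // queued work is always drained before quitting
        }
    }
};

// Demo: post three tasks from the "UI thread", sum them on the "render thread".
int workqueue_demo() {
    WorkQueue q;
    std::atomic<int> sum{0};
    std::thread render([&] { q.loop(); });
    for (int i = 1; i <= 3; ++i) q.post([&, i] { sum += i; });
    q.quit();
    render.join();
    return sum.load();
}
```

Because `loop()` drains the queue before honoring `mQuit`, everything posted before `quit()` is guaranteed to run.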
After the RenderThread starts, it first sets its thread priority and then initializes a number of objects. Let's look at what initThreadLocals sets up:
bool RenderThread::threadLoop() {
    setpriority(PRIO_PROCESS, 0, PRIORITY_DISPLAY);
    if (gOnStartHook) {
        gOnStartHook();
    }
    initThreadLocals();

    while (true) {
        waitForWork();
        processQueue();

        if (mPendingRegistrationFrameCallbacks.size() && !mFrameCallbackTaskPending) {
            drainDisplayEventQueue();
            mFrameCallbacks.insert(mPendingRegistrationFrameCallbacks.begin(),
                                   mPendingRegistrationFrameCallbacks.end());
            mPendingRegistrationFrameCallbacks.clear();
            requestVsync();
        }

        if (!mFrameCallbackTaskPending && !mVsyncRequested && mFrameCallbacks.size()) {
            // TODO: Clean this up. This is working around an issue where a combination
            // of bad timing and slow drawing can result in dropping a stale vsync
            // on the floor (correct!) but fails to schedule to listen for the
            // next vsync (oops), so none of the callbacks are run.
            requestVsync();
        }
    }

    return false;
}
initThreadLocals performs some animation-related setup and initializes the EglManager, RenderState, VulkanManager, and CacheManager.
Let's go through initThreadLocals in detail:
1) It creates a DisplayEventReceiver (via std::make_unique<DisplayEventReceiver>) used to request and receive vsync; it serves the same purpose as the Java-layer DisplayEventReceiver discussed for Choreographer.
2) The file descriptor associated with that DisplayEventReceiver is registered with the Render Thread's message loop via addFd.
Benefit: when SurfaceFlinger dispatches a vsync, it wakes the renderthread through this fd, which then invokes displayEventReceiverCallback.
3) RenderThread::drainDisplayEventQueue then processes the vsync: it fetches the latest vsync timestamp through the DisplayEventReceiverWrapper; a value > 0 indicates a valid vsync, after which mVsyncRequested is set to false, meaning the previously requested vsync has been received. Next it checks whether the DispatchFrameCallbacks task has already been queued (before 9.0 there were many tasks, e.g. drawFrameTask; in 9.0 they are gone — roughly, the WorkQueue replaced the TaskQueue). If it has, mFrameCallbackTaskPending is true and RenderThread::dispatchFrameCallbacks is not executed again.
So what is dispatchFrameCallbacks for? Answer: driving animations.
Let's examine the dispatchFrameCallbacks function.
mPendingRegistrationFrameCallbacks is a set of IFrameCallback pointers. postFrameCallback inserts into it; pushBackFrameCallback also inserts into it, the only difference being that pushBack first removes the callback from mFrameCallbacks. Think of mPendingRegistrationFrameCallbacks as the "back buffer" and mFrameCallbacks as the "front buffer". removeFrameCallback erases from both. The swap between the two is analyzed below.
std::set<IFrameCallback*> mPendingRegistrationFrameCallbacks;
frameworks/base/libs/hwui/renderthread/RenderThread.cpp
void RenderThread::postFrameCallback(IFrameCallback* callback) {
    mPendingRegistrationFrameCallbacks.insert(callback);
}

bool RenderThread::removeFrameCallback(IFrameCallback* callback) {
    size_t erased;
    erased = mFrameCallbacks.erase(callback);
    erased |= mPendingRegistrationFrameCallbacks.erase(callback);
    return erased;
}

void RenderThread::pushBackFrameCallback(IFrameCallback* callback) {
    if (mFrameCallbacks.erase(callback)) {
        mPendingRegistrationFrameCallbacks.insert(callback);
    }
}
Once the RenderThread has tasks to process, it handles the callbacks: all entries in mPendingRegistrationFrameCallbacks are copied into mFrameCallbacks, and mPendingRegistrationFrameCallbacks is then cleared.
if (mPendingRegistrationFrameCallbacks.size() && !mFrameCallbackTaskPending) {
    drainDisplayEventQueue();
    mFrameCallbacks.insert(mPendingRegistrationFrameCallbacks.begin(),
                           mPendingRegistrationFrameCallbacks.end());
    mPendingRegistrationFrameCallbacks.clear();
    requestVsync();
}
So what is mFrameCallbacks for?
Its contents are swapped into a temporary variable, callbacks; if it is non-empty, each entry is taken out and doFrame is called on it. So what does mFrameCallbacks hold?
A code search shows that only CanvasContext implements IFrameCallback — so let's go back and see when post and pushBack happen.
void RenderThread::dispatchFrameCallbacks() {
    ATRACE_CALL();
    mFrameCallbackTaskPending = false;

    std::set<IFrameCallback*> callbacks;
    mFrameCallbacks.swap(callbacks);

    if (callbacks.size()) {
        // Assume one of them will probably animate again so preemptively
        // request the next vsync in case it occurs mid-frame
        requestVsync();
        for (std::set<IFrameCallback*>::iterator it = callbacks.begin(); it != callbacks.end();
             it++) {
            (*it)->doFrame();
        }
    }
}
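The back-buffer/front-buffer behaviour of the two sets can be sketched in isolation. This is a simplified model, not the real RenderThread; `FrameCallback` and the method names are stand-ins mirroring the source:

```cpp
#include <cstddef>
#include <set>

// Hypothetical stand-in for IFrameCallback.
struct FrameCallback { int id; };

struct CallbackSets {
    std::set<FrameCallback*> pending;  // mPendingRegistrationFrameCallbacks ("back buffer")
    std::set<FrameCallback*> active;   // mFrameCallbacks ("front buffer")

    // postFrameCallback(): register into the back buffer.
    void post(FrameCallback* cb) { pending.insert(cb); }

    // pushBackFrameCallback(): only re-queues a callback currently in the front buffer.
    void pushBack(FrameCallback* cb) {
        if (active.erase(cb)) pending.insert(cb);
    }

    // removeFrameCallback(): erase from both buffers.
    bool remove(FrameCallback* cb) {
        size_t erased = active.erase(cb);
        erased |= pending.erase(cb);
        return erased != 0;
    }

    // threadLoop(): promote pending callbacks before requesting the next vsync.
    void promote() {
        active.insert(pending.begin(), pending.end());
        pending.clear();
    }

    // dispatchFrameCallbacks(): swap out the front buffer and "run" each callback once.
    int dispatch() {
        std::set<FrameCallback*> callbacks;
        active.swap(callbacks);
        return static_cast<int>(callbacks.size());  // number of doFrame() calls
    }
};
```

Because `dispatch()` swaps the front buffer out before iterating, callbacks that re-post themselves during `doFrame()` land in the back buffer and only run on a later frame.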
In short, everything posted goes through CanvasContext:
postFrameCallback is called during prepareTree: when the upper (Java) layer registers an animated RenderNode with the Render Thread, an IFrameCallback is registered through RenderThread::postFrameCallback;
pushBackFrameCallback is called from notifyFramePending, triggered from the upper layer in scheduleTraversals.
To summarize:
1) displayEventReceiverCallback mainly drives animations, synchronizing each animation frame to the vsync signal;
2) what the renderthread renders here is the next frame, i.e. one not yet displayed;
3) after receiving a local vsync it runs doFrame and then requests the next vsync.
Moving on: what else does RenderProxy do?
After the renderthread is created, a CanvasContext — the window's canvas — is created; how it gets attached to a window is analyzed later. Its main job at this point is to choose the pipeline.
Have you noticed that CanvasContext is only created after `new RenderThread`, yet the earlier analysis already used CanvasContext? How does that work?
2. Binding a window to the RenderThread
Once the Render Thread knows which window is to be rendered, it can bind that window to the OpenGL rendering context, so that subsequent rendering operations target the bound window.
2.1 Java-layer analysis
The analysis above was based on ViewRootImpl's setView; now we reach the actual drawing stage, ViewRootImpl's performTraversals, which performs the measure, layout, and draw steps. Before drawing, a Surface must be obtained; once obtained, it is bound to the corresponding RenderThread.
frameworks/base/core/java/android/view/ViewRootImpl.java
public final Surface mSurface = new Surface();
...
private void performTraversals() {
...
    if (!hadSurface) {
        if (mSurface.isValid()) {
...
            newSurface = true;
            mFullRedrawNeeded = true;
            mPreviousTransparentRegion.setEmpty();

            // Only initialize up-front if transparent regions are not
            // requested, otherwise defer to see if the entire window
            // will be transparent
            if (mAttachInfo.mThreadedRenderer != null) {
                try {
                    hwInitialized = mAttachInfo.mThreadedRenderer.initialize(
                            mSurface);
...
    performMeasure(childWidthMeasureSpec, childHeightMeasureSpec);
...
    performLayout(lp, mWidth, mHeight);
...
    if (!cancelDraw && !newSurface) {
...
        performDraw();
If the Surface is newly created, it (mSurface) is bound to the Render Thread via initialize; the measure, layout, and draw steps only run after the binding completes.
2.2 Native-layer analysis
Now let's look at the window-binding process from the C++ side:
frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}
frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
void RenderProxy::initialize(const sp<Surface>& surface) {
    mRenderThread.queue().post(
            [ this, surf = surface ]() mutable { mContext->setSurface(std::move(surf)); });
}
The upper-layer surface is passed straight through the workQueue into CanvasContext via setSurface; on success it returns true, meaning a new surface is now held.
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::setSurface(sp<Surface>&& surface) {
    ATRACE_CALL();

    mNativeSurface = std::move(surface);

    ColorMode colorMode = mWideColorGamut ? ColorMode::WideColorGamut : ColorMode::Srgb;
    bool hasSurface = mRenderPipeline->setSurface(mNativeSurface.get(), mSwapBehavior, colorMode);

    mFrameNumber = -1;

    if (hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
CanvasContext::setSurface has a systrace tag, so the process can be observed in a systrace capture.
Next, setSurface is called on the concrete pipeline; the default pipeline is used as the example here:
It first checks whether an EglSurface already exists; if so, it is destroyed first, and then a new EglSurface is created. mEglSurface represents a drawing surface: with it, OpenGL commands know which window they act on.
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::setSurface(Surface* surface, SwapBehavior swapBehavior, ColorMode colorMode) {
    if (mEglSurface != EGL_NO_SURFACE) {
        mEglManager.destroySurface(mEglSurface);
        mEglSurface = EGL_NO_SURFACE;
    }

    if (surface) {
        const bool wideColorGamut = colorMode == ColorMode::WideColorGamut;
        mEglSurface = mEglManager.createSurface(surface, wideColorGamut);
    }

    if (mEglSurface != EGL_NO_SURFACE) {
        const bool preserveBuffer = (swapBehavior != SwapBehavior::kSwap_discardBuffer);
        mBufferPreserved = mEglManager.setPreserveBuffer(mEglSurface, preserveBuffer);
        return true;
    }

    return false;
}
EglManager::createSurface first calls EglManager::initialize (important!), which performs the EGL initialization steps (eglGetDisplay, eglInitialize, eglChooseConfig, eglCreateContext, eglCreatePbufferSurface, eglMakeCurrent, eglSwapInterval). makeCurrent first checks whether the surface is unchanged; if so, no new eglMakeCurrent is needed. At this point, the surface made current is mPBufferSurface. Only then is the real window surface created with eglCreateWindowSurface, followed by eglSurfaceAttrib.
EglManager::setPreserveBuffer simply returns false if the swap behavior is not SwapBehavior::Preserved; most platforms today should be SwapBehavior::BufferAge.
So here is the question: when does the surface created above get attached to the rendering context?
Looking at the code:
DrawFrameTask::syncFrameState -> CanvasContext::makeCurrent -> OpenGLPipeline::makeCurrent
bool haveNewSurface = mEglManager.makeCurrent(mEglSurface, &error); // mEglSurface is the surface bound earlier in setSurface
That binds the context to the surface created earlier.
Looking further into EglManager::makeCurrent:
1) OpenGLPipeline::onStop and EglManager::destroySurface set the context's surface to EGL_NO_SURFACE;
2) EglManager::initialize binds the PBufferSurface (introduced above);
3) EglManager::beginFrame binds the surface passed in — worth tracing:
mEglManager.beginFrame(mEglSurface) // mEglSurface is the surface bound earlier in setSurface
CanvasContext::draw -> OpenGLPipeline::getFrame -> EglManager::beginFrame -> makeCurrent(surface)
So the current surface is set into the OpenGL rendering context twice in total: once in syncFrameState and once in CanvasContext::draw. The makeCurrent implementation contains `if (isCurrent(surface)) return false;`, which means the second makeCurrent may return false immediately.
To summarize: during initialization, eglCreateWindowSurface creates the low-level surface (EglSurface) corresponding to the upper-layer Surface, and the current PBSurface is made current; then syncFrameState makes the previously created low-level surface (EglSurface) current; finally, draw also calls makeCurrent once, but it returns immediately without performing a real eglMakeCurrent.
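The isCurrent short-circuit can be modeled with a tiny cache. This is a hypothetical sketch (handle type and counter are stand-ins), not the real EglManager:

```cpp
#include <cstdint>

// Hypothetical handle type standing in for EGLSurface.
using SurfaceHandle = std::intptr_t;
constexpr SurfaceHandle kNoSurface = 0;  // stands in for EGL_NO_SURFACE

class CurrentSurfaceCache {
    SurfaceHandle mCurrent = kNoSurface;
    int mRealMakeCurrentCalls = 0;  // how many times a "real" eglMakeCurrent ran

public:
    // Returns true only when the surface actually changed (i.e. a real
    // eglMakeCurrent was issued), mirroring EglManager::makeCurrent.
    bool makeCurrent(SurfaceHandle surface) {
        if (surface == mCurrent) return false;  // if (isCurrent(surface)) return false;
        mRealMakeCurrentCalls++;                // the real eglMakeCurrent would go here
        mCurrent = surface;
        return true;
    }

    int realCalls() const { return mRealMakeCurrentCalls; }
};
```

This is why the second makeCurrent of a frame (from CanvasContext::draw) costs nothing: the expensive context switch only happens when the surface actually changes.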
At this point, the RenderThread is running, the OpenGL/EGL environment is ready, and the upper-layer Surface has been created and successfully bound to mEglSurface in the hwui pipeline.
II. The Asset Atlas Service
Everything below describes the 7.0 platform; starting with Android O, the Asset Atlas Service no longer exists. I'm not sure what (if anything) replaced it in Android O — if you know, please leave a comment!
When Android boots, it preloads some resources so that applications can access them quickly later and share them. hwui optimizes this further: the preloaded resources are composed into a single texture that is uploaded to the GPU and shared across all applications.
Resource preloading happens in the Zygote process, and Zygote forks the application processes, which guarantees sharing. In hwui, however, if every application used the preloaded resources directly, each of them would have to upload the resources to the GPU as textures — a big waste of GPU memory. Whether that needs optimizing is exactly the focus of this section.
The Zygote process passes the preloaded resources as textures to the system process, which runs an Asset Atlas Service; that service is what composes the preloaded resources into one texture and uploads it to the GPU. An app's renderthread can then simply request the texture from the Asset Atlas Service instead of uploading it to the GPU itself.
1. Zygote
Load resources
Start system_server
Create application processes on request from AMS
Here we focus on the resource-loading step — preloadClasses, preloadResources, nativePreloadAppProcessHALs, preloadOpenGL, preloadSharedLibraries, preloadTextResources — mainly preloadResources.
preloadResources calls preloadDrawables (backed by R.array.preloaded_drawables) and preloadColorStateLists (backed by R.array.preloaded_color_state_lists); we follow preloadDrawables.
mResources.getDrawable loads every drawable.
The Drawables preloaded by Zygote are composed into an atlas by the Asset Atlas Service running in the System process and finally uploaded to the GPU as a texture, so next we analyze the Asset Atlas Service implementation.
2. system_server
The Zygote process starts the System process, which loads the system services, among them the Asset Atlas Service.
Before the system process starts services, it sets some properties; startOtherServices then starts the Asset Atlas Service. It is a non-core system service and is not enabled in factory mode.
Note: AssetAtlasService still exists on 7.0/7.1; starting with 8.0, it has been removed.
AssetAtlasService computes the minimum width and height needed to compose all preloaded Drawable resources into one image, allocates a GraphicBuffer of that size, renders the Drawables into that buffer, and finally uploads it to the GPU.
III. hwui on Android 4.4
This chapter is a detour into the older hwui implementation, which is comparatively simple. The main flow is as follows:
3.1 Hardware draw
The hardware draw process is outlined below (diagram borrowed from another author); a brief look at the key steps:
1) beginFrame mainly sets up the EGLDisplay (for display) and an EGLSurface (the surface OpenGL draws on); eglBeginFrame mainly validates the parameters;
2) buildDisplayList performs the recording step, building the native-layer DisplayList;
3) prepareFrame builds the dirty region;
4) onPostDraw runs OpenGLRenderer's finish step;
5) swapBuffer submits the buffer to SurfaceFlinger — note that it is called from the Java layer.
IV. Building the DisplayList
During the initialization described above, performTraversals calls initialize to finish setup, and then calls performMeasure, performLayout, and performDraw. This section covers the performDraw process:
frameworks/base/core/java/android/view/ViewRootImpl.java
private boolean draw(boolean fullRedrawNeeded) {
...
if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
...
mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this, callback);
} else {
...
if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset,
scalingRequired, dirty, surfaceInsets)) {
return false;
}
...
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, HardwareDrawCallbacks callbacks) {
......
updateRootDisplayList(view, callbacks);
......
if (attachInfo.mPendingAnimatingRenderNodes != null) {
final int count = attachInfo.mPendingAnimatingRenderNodes.size();
for (int i = 0; i < count; i++) {
registerAnimatingRenderNode(
attachInfo.mPendingAnimatingRenderNodes.get(i));
}
attachInfo.mPendingAnimatingRenderNodes.clear();
// We don't need this anymore as subsequent calls to
// ViewRootImpl#attachRenderNodeAnimator will go directly to us.
attachInfo.mPendingAnimatingRenderNodes = null;
}
...
int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
...
attachInfo.mViewRootImpl.invalidate();
}
}
...
}
Now for updateRootDisplayList:
This function carries the "Record View#draw()" trace tag. It first runs updateViewTreeDisplayList (analyzed shortly), then checks whether the rootNode needs an update or is invalid.
So when is isValid true? It is set to true when mRootNode.end finishes; the details are analyzed below.
frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Record View#draw()");
updateViewTreeDisplayList(view);
if (mRootNodeNeedsUpdate || !mRootNode.isValid()) {
DisplayListCanvas canvas = mRootNode.start(mSurfaceWidth, mSurfaceHeight);
try {
final int saveCount = canvas.save();
canvas.translate(mInsetLeft, mInsetTop);
callbacks.onPreDraw(canvas);
canvas.insertReorderBarrier();
canvas.drawRenderNode(view.updateDisplayListIfDirty());
canvas.insertInorderBarrier();
callbacks.onPostDraw(canvas);
canvas.restoreToCount(saveCount);
mRootNodeNeedsUpdate = false;
} finally {
mRootNode.end(canvas);
}
}
Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}
The mRootNode.isValid value comes from the native layer: the first time a display list is set, RenderNode's displayList is non-null, so isValid is true; when the renderNode is destroyed, it becomes false.
frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
long displayList = canvas.finishRecording();
nSetDisplayList(mNativeRenderNode, displayList);
canvas.recycle();
}
frameworks/base/core/jni/android_view_RenderNode.cpp
static void android_view_RenderNode_setDisplayList(JNIEnv* env,
jobject clazz, jlong renderNodePtr, jlong displayListPtr) {
RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
DisplayList* newData = reinterpret_cast<DisplayList*>(displayListPtr);
renderNode->setStagingDisplayList(newData);
}
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::destroyHardwareResources(TreeInfo* info) {
...
setStagingDisplayList(nullptr);
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::setStagingDisplayList(DisplayList* displayList) {
mValid = (displayList != nullptr);
mNeedsDisplayListSync = true;
delete mStagingDisplayList;
mStagingDisplayList = displayList;
}
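The staging/active double-buffering of the display list can be sketched as follows. This is a simplified model under assumed names; the real RenderNode also tracks damage and property syncing:

```cpp
#include <memory>
#include <string>

// Hypothetical stand-in for hwui's DisplayList.
struct DisplayList { std::string ops; };

class RenderNode {
    std::unique_ptr<DisplayList> mStagingDisplayList;  // written by the UI thread
    std::unique_ptr<DisplayList> mDisplayList;         // read by the render thread
    bool mValid = false;
    bool mNeedsDisplayListSync = false;

public:
    // UI thread: end of recording (nSetDisplayList) lands here.
    void setStagingDisplayList(std::unique_ptr<DisplayList> dl) {
        mValid = (dl != nullptr);
        mNeedsDisplayListSync = true;
        mStagingDisplayList = std::move(dl);  // the old staging list is deleted
    }

    // Render thread: prepareTree -> syncDisplayList takes the staging list over.
    void syncDisplayList() {
        if (mNeedsDisplayListSync) {
            mNeedsDisplayListSync = false;
            mDisplayList = std::move(mStagingDisplayList);
        }
    }

    bool isValid() const { return mValid; }
    const DisplayList* displayList() const { return mDisplayList.get(); }
};
```

The staging slot lets the UI thread finish recording at any time; the render thread only observes the new list at a well-defined sync point.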
Now let's walk through the important steps in building the DisplayList:
1. updateViewTreeDisplayList
view.updateDisplayListIfDirty();
2. start
start creates the RecordingCanvas and DisplayList in the native layer; since 9.0 defaults to skiagl, the native classes are SkiaRecordingCanvas and SkiaDisplayList.
frameworks/base/core/java/android/view/RenderNode.java
public DisplayListCanvas start(int width, int height) {
return DisplayListCanvas.obtain(this, width, height);
}
Next comes the canvas's save operation: the RecordingCanvas path creates a corresponding Snapshot, whereas skiagl goes through SkiaCanvas::save, which ends up in SkCanvas.
The subsequent translate call behaves much like save above.
3. drawRenderNode
insertBarrier is called before and after drawRenderNode to start a new chunk, and a RenderNodeOp is appended to the displayList; skiagl goes through SkiaRecordingCanvas::drawRenderNode instead.
4. end
This obtains the native DisplayList object's address, displayList, and finally stores it into mStagingDisplayList.
frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
long displayList = canvas.finishRecording();
nSetDisplayList(mNativeRenderNode, displayList);
canvas.recycle();
}
5. addOp
addOp deserves its own discussion, because drawRenderNode above, and drawBitmap inside updateViewTreeDisplayList, both write their Op into the DisplayList.
When the upper layer issues drawColor, drawRect, and similar operations, the native-layer addOp is invoked with the corresponding op (see the earlier post: HWUI繪制系列——從java到C++). Let's analyze addOp in detail, since it lays the groundwork for rendering later.
Think of an op as one unit of work: first check whether its clip rect is empty, then take the index past the last entry in the current DisplayList's ops, and append the op to ops.
Then mDeferredBarrierType is checked. Worth studying: initially it is DeferredBarrierType::None; resetRecording resets it to DeferredBarrierType::InOrder; and insertReorderBarrier reassigns it. Why does this matter?
1) When the renderthread is first initialized, a RecordingCanvas is created, which calls resetRecording — meaning a new Chunk is about to start; obtaining the canvas again also forces a resetRecording;
2) After updateViewTreeDisplayList, insertReorderBarrier(true) and insertInorderBarrier(false) are called around drawRenderNode to reset mDeferredBarrierType — in other words, drawRenderNode starts a new chunk;
3) Containment: ops > chunk > children > op.
frameworks/base/libs/hwui/RecordingCanvas.cpp
int RecordingCanvas::addOp(RecordedOp* op) {
// skip op with empty clip
if (op->localClip && op->localClip->rect.isEmpty()) {
// NOTE: this rejection happens after op construction/content ref-ing, so content ref'd
// and held by renderthread isn't affected by clip rejection.
// Could rewind alloc here if desired, but callers would have to not touch op afterwards.
return -1;
}
int insertIndex = mDisplayList->ops.size();
mDisplayList->ops.push_back(op);
if (mDeferredBarrierType != DeferredBarrierType::None) {
// op is first in new chunk
mDisplayList->chunks.emplace_back();
DisplayList::Chunk& newChunk = mDisplayList->chunks.back();
newChunk.beginOpIndex = insertIndex;
newChunk.endOpIndex = insertIndex + 1;
newChunk.reorderChildren = (mDeferredBarrierType == DeferredBarrierType::OutOfOrder);
newChunk.reorderClip = mDeferredBarrierClip;
int nextChildIndex = mDisplayList->children.size();
newChunk.beginChildIndex = newChunk.endChildIndex = nextChildIndex;
mDeferredBarrierType = DeferredBarrierType::None;
} else {
// standard case - append to existing chunk
mDisplayList->chunks.back().endOpIndex = insertIndex + 1;
}
return insertIndex;
}
void RecordingCanvas::insertReorderBarrier(bool enableReorder) {
if (enableReorder) {
mDeferredBarrierType = DeferredBarrierType::OutOfOrder;
mDeferredBarrierClip = getRecordedClip();
} else {
mDeferredBarrierType = DeferredBarrierType::InOrder;
mDeferredBarrierClip = nullptr;
}
}
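The chunk bookkeeping in addOp can be reproduced in miniature. This is a hypothetical model: real RecordedOps carry clips and children are RenderNodeOps, while here an op is just an int:

```cpp
#include <cstddef>
#include <vector>

// Minimal model of DisplayList op/chunk recording.
struct Chunk {
    size_t beginOpIndex = 0;
    size_t endOpIndex = 0;
    bool reorderChildren = false;
};

enum class Barrier { None, InOrder, OutOfOrder };

class Recorder {
    std::vector<int> mOps;                 // op payload is just an int here
    std::vector<Chunk> mChunks;
    Barrier mBarrier = Barrier::InOrder;   // state after resetRecording()

public:
    void insertReorderBarrier(bool enableReorder) {
        mBarrier = enableReorder ? Barrier::OutOfOrder : Barrier::InOrder;
    }

    size_t addOp(int op) {
        size_t insertIndex = mOps.size();
        mOps.push_back(op);
        if (mBarrier != Barrier::None) {
            // op is first in a new chunk
            Chunk c;
            c.beginOpIndex = insertIndex;
            c.endOpIndex = insertIndex + 1;
            c.reorderChildren = (mBarrier == Barrier::OutOfOrder);
            mChunks.push_back(c);
            mBarrier = Barrier::None;
        } else {
            // standard case: extend the current chunk
            mChunks.back().endOpIndex = insertIndex + 1;
        }
        return insertIndex;
    }

    size_t chunkCount() const { return mChunks.size(); }
    const Chunk& chunk(size_t i) const { return mChunks[i]; }
};
```

Note how a barrier is deferred: setting it costs nothing, and the new chunk only materializes when the next op is actually recorded.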
frameworks/base/core/java/android/view/DisplayListCanvas.java
static DisplayListCanvas obtain(@NonNull RenderNode node, int width, int height) {
if (node == null) throw new IllegalArgumentException("node cannot be null");
DisplayListCanvas canvas = sPool.acquire();
if (canvas == null) {
canvas = new DisplayListCanvas(node, width, height);
} else {
nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
width, height);
}
V. The drawing process
1. syncFrameState
The application's main thread posts a message to the RT thread's workQueue and waits for the message to be processed before being woken; when RT executes the message, it calls back into run — but the UI thread is not woken immediately at that point.
DrawFrameTask takes the lock in only two places; below we analyze when unblockUiThread is called — that call is what wakes the UI thread so it can continue.
Spoiler: the UI thread is woken once syncFrameState completes; it can also be woken only after the whole draw finishes (but that case is undesirable and shouldn't normally happen).
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
void DrawFrameTask::postAndWait() {
AutoMutex _lock(mLock);
mRenderThread->queue().post([this]() { run(); });
mSignal.wait(mLock);
}
void DrawFrameTask::unblockUiThread() {
AutoMutex _lock(mLock);
mSignal.signal();
}
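The postAndWait/unblockUiThread handshake can be sketched like this, with a plain std::thread standing in for the RenderThread's queue. A predicate flag is added to guard against spurious wakeups; the real DrawFrameTask relies on its own queue ordering instead:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Models DrawFrameTask: the UI thread blocks until the render thread
// signals it (normally right after syncFrameState completes).
class FrameHandshake {
    std::mutex mLock;
    std::condition_variable mSignal;
    bool mSynced = false;

public:
    // UI thread side of postAndWait(): block until unblockUiThread() runs.
    void wait() {
        std::unique_lock<std::mutex> lock(mLock);
        mSignal.wait(lock, [this] { return mSynced; });
    }

    // Render thread: called once frame state has been synced.
    void unblockUiThread() {
        std::lock_guard<std::mutex> lock(mLock);
        mSynced = true;
        mSignal.notify_one();
    }
};

// Demo: the UI thread blocks while the "render thread" syncs frame state.
int handshake_demo() {
    FrameHandshake hs;
    int syncedValue = 0;
    std::thread renderThread([&] {
        syncedValue = 42;       // stands in for syncFrameState()
        hs.unblockUiThread();   // wake the UI thread; drawing continues async
    });
    hs.wait();                  // UI thread: the wait half of postAndWait
    renderThread.join();
    return syncedValue;         // safe to read: happens-before via the handshake
}
```

The point of waking the UI thread right after sync (rather than after draw) is that the UI thread can start recording the next frame while the render thread is still drawing the current one.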
Now the syncFrameState process:
1) First, the current vsync is synced into TimeLord's mFrameTimeNanos, i.e. the last vsync time is updated;
2) Then makeCurrent: when the VRI issues setStop, makeCurrent stops (and so does rendering); otherwise EglManager's makeCurrent is always attempted. It checks whether the current surface has already been made current — if so, eglMakeCurrent is not called; if there is no surface, the pbSurface is made current instead. So where are the makeCurrent call sites?
a. EglManager::initialize does makeCurrent(mPBufferSurface);
b. EglManager::beginFrame does makeCurrent(surface), and beginFrame is called from CanvasContext::draw;
c. EglManager::destroySurface does makeCurrent(EGL_NO_SURFACE).
So under normal circumstances, OpenGLPipeline::setSurface supplies the surface to be rendered during RT initialization, and syncFrameState then sets that surface into the OpenGL context.
3) unpinImages serves hwui caching: each object is cached and the cache is then unpinned via caches.textureCache.resetMarkInUse(this); the previous step presumably involves this as well;
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
std::vector<sp<DeferredLayerUpdater> > mLayers;
Rect mContentDrawBounds;
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
ATRACE_CALL();
int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
mRenderThread->timeLord().vsyncReceived(vsync);
bool canDraw = mContext->makeCurrent();
mContext->unpinImages();
for (size_t i = 0; i < mLayers.size(); i++) {
mLayers[i]->apply();
}
mLayers.clear();
mContext->setContentDrawBounds(mContentDrawBounds);
mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
...
if (info.out.hasAnimations) {
if (info.out.requiresUiRedraw) {
mSyncResult |= SyncResult::UIRedrawRequired;
}
}
if (!info.out.canDrawThisFrame) {
mSyncResult |= SyncResult::FrameDropped;
}
// If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures; // assigned true when the TreeInfo is constructed
}
4) Layer handling: in TextureLayer, the layer is passed through ThreadedRenderer down to the native-layer DrawFrameTask, which stores it in mLayers.
Curious whether the DeferredLayerUpdater class stores the layer's name? Unfortunately not — only getWidth() and getHeight().
frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
void DrawFrameTask::pushLayerUpdate(DeferredLayerUpdater* layer) {
LOG_ALWAYS_FATAL_IF(!mContext,
"Lifecycle violation, there's no context to pushLayerUpdate with!");
for (size_t i = 0; i < mLayers.size(); i++) {
if (mLayers[i].get() == layer) {
return;
}
}
mLayers.push_back(layer);
}
void DrawFrameTask::removeLayerUpdate(DeferredLayerUpdater* layer) {
for (size_t i = 0; i < mLayers.size(); i++) {
if (mLayers[i].get() == layer) {
mLayers.erase(mLayers.begin() + i);
return;
}
}
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
void pushLayerUpdate(TextureLayer layer) {
nPushLayerUpdate(mNativeProxy, layer.getDeferredLayerUpdater());
}
5) Next, the layer's apply process:
a. mCreateLayerFn first creates a layer; it is a function pointer supplied by the pipeline, so OpenGLPipeline's createLayer method runs and generates the texture from the supplied parameters (glActiveTexture, glGenTextures).
b. setRenderTarget then calls glBindTexture(target, texture) along with glTexParameteri.
At this point, the layer texture binding is done.
frameworks/base/libs/hwui/DeferredLayerUpdater.cpp
Layer* mLayer;
CreateLayerFn mCreateLayerFn;
void DeferredLayerUpdater::apply() {
if (!mLayer) {
mLayer = mCreateLayerFn(mRenderState, mWidth, mHeight, mColorFilter, mAlpha, mMode, mBlend);
}
mLayer->setColorFilter(mColorFilter);
mLayer->setAlpha(mAlpha, mMode);
if (mSurfaceTexture.get()) {
if (mLayer->getApi() == Layer::Api::Vulkan) {
if (mUpdateTexImage) {
mUpdateTexImage = false;
doUpdateVkTexImage();
}
} else {
LOG_ALWAYS_FATAL_IF(mLayer->getApi() != Layer::Api::OpenGL,
"apply surfaceTexture with non GL backend %x, GL %x, VK %x",
mLayer->getApi(), Layer::Api::OpenGL, Layer::Api::Vulkan);
if (!mGLContextAttached) {
mGLContextAttached = true;
mUpdateTexImage = true;
mSurfaceTexture->attachToContext(static_cast<GlLayer*>(mLayer)->getTextureId());
}
if (mUpdateTexImage) {
mUpdateTexImage = false;
doUpdateTexImage();
}
GLenum renderTarget = mSurfaceTexture->getCurrentTextureTarget();
static_cast<GlLayer*>(mLayer)->setRenderTarget(renderTarget);
}
if (mTransform) {
mLayer->getTransform().load(*mTransform);
setTransform(nullptr);
}
}
}
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
DeferredLayerUpdater* OpenGLPipeline::createTextureLayer() {
mEglManager.initialize();
return new DeferredLayerUpdater(mRenderThread.renderState(), createLayer, Layer::Api::OpenGL);
}
static Layer* createLayer(RenderState& renderState, uint32_t layerWidth, uint32_t layerHeight,
sk_sp<SkColorFilter> colorFilter, int alpha, SkBlendMode mode,
bool blend) {
GlLayer* layer =
new GlLayer(renderState, layerWidth, layerHeight, colorFilter, alpha, mode, blend);
Caches::getInstance().textureState().activateTexture(0);
layer->generateTexture();
return layer;
}
frameworks/base/libs/hwui/renderstate/TextureState.cpp
void TextureState::activateTexture(GLuint textureUnit) {
LOG_ALWAYS_FATAL_IF(textureUnit >= kTextureUnitsCount,
"Tried to use texture unit index %d, only %d exist", textureUnit,
kTextureUnitsCount);
if (mTextureUnit != textureUnit) {
glActiveTexture(kTextureUnits[textureUnit]);
mTextureUnit = textureUnit;
}
}
frameworks/base/libs/hwui/GlLayer.cpp
void GlLayer::generateTexture() {
if (!texture.mId) {
glGenTextures(1, &texture.mId);
}
}
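The lazy, pipeline-injected creation in apply() boils down to a factory function pointer plus a null check. A hypothetical sketch with heavily simplified types (the real CreateLayerFn also receives RenderState, color filter, alpha, blend mode):

```cpp
// Simplified stand-in for hwui's Layer.
struct Layer { int width, height; };

// Mirrors the shape of hwui's CreateLayerFn: the pipeline decides which
// concrete Layer to build; the updater only knows the signature.
using CreateLayerFn = Layer* (*)(int width, int height);

class LayerUpdater {
    Layer* mLayer = nullptr;
    CreateLayerFn mCreateLayerFn;
    int mWidth, mHeight;

public:
    LayerUpdater(CreateLayerFn fn, int w, int h)
            : mCreateLayerFn(fn), mWidth(w), mHeight(h) {}
    ~LayerUpdater() { delete mLayer; }

    // apply(): create the layer lazily, on first use, via the injected factory.
    Layer* apply() {
        if (!mLayer) {
            mLayer = mCreateLayerFn(mWidth, mHeight);
        }
        return mLayer;
    }
};

// What OpenGLPipeline::createLayer would supply; a real pipeline would also
// allocate a GL texture here (glActiveTexture/glGenTextures).
static Layer* createGlLayer(int w, int h) { return new Layer{w, h}; }
```

This inversion keeps DeferredLayerUpdater backend-agnostic: the same updater works for GL or Vulkan, because only the factory knows which concrete layer type to build.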
6) Next, setContentDrawBounds sets the size of the drawing area; mContentDrawBounds is initialized to (0, 0, 0, 0), and the VRI sets the bounds in updateContentDrawBounds.
frameworks/base/libs/hwui/renderthread/CanvasContext.h
void setContentDrawBounds(const Rect& bounds) { mContentDrawBounds = bounds; }
frameworks/base/libs/hwui/renderthread/DrawFrameTask.h
void setContentDrawBounds(int left, int top, int right, int bottom) {
mContentDrawBounds.set(left, top, right, bottom);
}
frameworks/base/core/java/android/view/ThreadedRenderer.java
public void setContentDrawBounds(int left, int top, int right, int bottom) {
nSetContentDrawBounds(mNativeProxy, left, top, right, bottom);
}
7) The final step is the prepareTree process, which mainly runs prepareTree on every renderNode:
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
RenderNode* target) {
mRenderThread.removeFrameCallback(this);
for (const sp<RenderNode>& node : mRenderNodes) {
// Only the primary target node will be drawn full - all other nodes would get drawn in
// real time mode. In case of a window, the primary node is the window content and the other
// node(s) are non client / filler nodes.
info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
node->prepareTree(info);
GL_CHECKPOINT(MODERATE);
}
...
freePrefetchedLayers();
...
} else {
info.out.canDrawThisFrame = true;
}
...
}
a. For the prepareTree process, first see how mRenderNodes is built: the rootRenderNode is added when the CanvasContext is constructed; later nodes are added via addRenderNode and removed via removeRenderNode.
// add the rootRenderNode
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
CanvasContext::CanvasContext(...RenderNode* rootRenderNode,...){
...
mRenderNodes.emplace_back(rootRenderNode);
...
}
frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
RootRenderNode* node = new RootRenderNode(env);
node->incStrong(0);
node->setName("RootRenderNode");
return reinterpret_cast<jlong>(node);
}
void CanvasContext::addRenderNode(RenderNode* node, bool placeFront) {
int pos = placeFront ? 0 : static_cast<int>(mRenderNodes.size());
node->makeRoot();
mRenderNodes.emplace(mRenderNodes.begin() + pos, node);
}
void CanvasContext::removeRenderNode(RenderNode* node) {
node->clearRoot();
mRenderNodes.erase(std::remove(mRenderNodes.begin(), mRenderNodes.end(), node),
mRenderNodes.end());
}
pushStagingDisplayListChanges calls syncDisplayList, which takes over mStagingDisplayList (mDisplayList = mStagingDisplayList;) — the display list that was assigned in setDisplayList.
With that, the displayList has been handed over.
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
...
if (info.mode == TreeInfo::MODE_FULL) {
pushStagingPropertiesChanges(info);
}
...
if (info.mode == TreeInfo::MODE_FULL) {
pushStagingDisplayListChanges(observer, info);
}
...
}
b. The RenderNodes saved in mPrefetchedLayers are cleared. So when are they inserted? Answer: in CanvasContext::buildLayer(RenderNode* node), triggered from the Java layer.
frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
std::set<RenderNode*> mPrefetchedLayers;
void CanvasContext::freePrefetchedLayers() {
if (mPrefetchedLayers.size()) {
for (auto& node : mPrefetchedLayers) {
ALOGW("Incorrectly called buildLayer on View: %s, destroying layer...",
node->getName());
node->destroyLayers();
node->decStrong(nullptr);
}
mPrefetchedLayers.clear();
}
}
2. Now let's look at deferLayers, which runs before draw:
frameworks/base/libs/hwui/FrameBuilder.cpp
void FrameBuilder::deferLayers(const LayerUpdateQueue& layers) {
// Render all layers to be updated, in order. Defer in reverse order, so that they'll be
// updated in the order they're passed in (mLayerBuilders are issued to Renderer in reverse)
for (int i = layers.entries().size() - 1; i >= 0; i--) {
RenderNode* layerNode = layers.entries()[i].renderNode.get();
// only schedule repaint if node still on layer - possible it may have been
// removed during a dropped frame, but layers may still remain scheduled so
// as not to lose info on what portion is damaged
OffscreenBuffer* layer = layerNode->getLayer();
if (CC_LIKELY(layer)) {
ATRACE_FORMAT("Optimize HW Layer DisplayList %s %ux%u", layerNode->getName(),
layerNode->getWidth(), layerNode->getHeight());
Rect layerDamage = layers.entries()[i].damage;
// TODO: ensure layer damage can't be larger than layer
layerDamage.doIntersect(0, 0, layer->viewportWidth, layer->viewportHeight);
layerNode->computeOrdering();
// map current light center into RenderNode's coordinate space
Vector3 lightCenter = mCanvasState.currentSnapshot()->getRelativeLightCenter();
layer->inverseTransformInWindow.mapPoint3d(lightCenter);
saveForLayer(layerNode->getWidth(), layerNode->getHeight(), 0, 0, layerDamage,
lightCenter, nullptr, layerNode);
if (layerNode->getDisplayList()) {
deferNodeOps(*layerNode);
}
restoreForLayer();
}
}
}
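The comment at the top of deferLayers says the layers are deferred in reverse because the resulting builders are later issued to the renderer in reverse again. That double reversal restores the original order, which a tiny sketch (with ints standing in for layers) can verify:

```cpp
#include <cassert>
#include <vector>

// Sketch of the ordering trick in deferLayers: defer in reverse order,
// then issue the builder list in reverse again, so the updates land in
// the order the layers were originally passed in.
std::vector<int> deferThenIssue(const std::vector<int>& layers) {
    std::vector<int> builders;
    for (int i = static_cast<int>(layers.size()) - 1; i >= 0; i--)
        builders.push_back(layers[i]);          // defer in reverse
    std::vector<int> issued;
    for (int i = static_cast<int>(builders.size()) - 1; i >= 0; i--)
        issued.push_back(builders[i]);          // issue in reverse again
    return issued;
}
```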
1) Where does the const LayerUpdateQueue& layers parameter come from?
Look at the LayerUpdateQueue class first: its member mEntries holds the information for every layer (the RenderNode and its damage rect).
The parameter is filled in by RenderNode::pushLayerUpdate (reached from prepareTree), which passes the RenderNode object and its dirty region into the queue, appending them to mEntries.
//LayerUpdateQueue definition:
class LayerUpdateQueue {
public:
struct Entry {
Entry(RenderNode* renderNode, const Rect& damage)
: renderNode(renderNode), damage(damage) {}
sp<RenderNode> renderNode;
Rect damage;
};
LayerUpdateQueue() {}
void enqueueLayerWithDamage(RenderNode* renderNode, Rect dirty);
void clear();
const std::vector<Entry>& entries() const { return mEntries; }
private:
std::vector<Entry> mEntries;
};
//Where the parameter comes from
void RenderNode::pushLayerUpdate(TreeInfo& info) {
...
info.layerUpdateQueue->enqueueLayerWithDamage(this, dirty); //many RenderNodes, but only one info.layerUpdateQueue
...
}
frameworks/base/libs/hwui/LayerUpdateQueue.cpp
void LayerUpdateQueue::enqueueLayerWithDamage(RenderNode* renderNode, Rect damage) {
...
if (!damage.isEmpty()) {
for (Entry& entry : mEntries) {
if (CC_UNLIKELY(entry.renderNode == renderNode)) {
entry.damage.unionWith(damage);
return;
}
}
mEntries.emplace_back(renderNode, damage);
}
}
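The coalescing behavior of enqueueLayerWithDamage, one entry per node with repeated damage unioned in and empty damage ignored, can be sketched standalone. Rect, Entry, and Queue below are simplified stand-ins for the hwui types:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for hwui's Rect.
struct Rect {
    float left = 0, top = 0, right = 0, bottom = 0;
    bool isEmpty() const { return left >= right || top >= bottom; }
    void unionWith(const Rect& r) {
        if (r.left < left) left = r.left;
        if (r.top < top) top = r.top;
        if (r.right > right) right = r.right;
        if (r.bottom > bottom) bottom = r.bottom;
    }
};

// One entry per node; an int id stands in for the RenderNode pointer.
struct Entry { int node; Rect damage; };

// Sketch of LayerUpdateQueue::enqueueLayerWithDamage: re-enqueueing the
// same node unions the new damage into the existing entry.
struct Queue {
    std::vector<Entry> entries;
    void enqueue(int node, const Rect& damage) {
        if (damage.isEmpty()) return;
        for (Entry& e : entries) {
            if (e.node == node) { e.damage.unionWith(damage); return; }
        }
        entries.push_back({node, damage});
    }
};
```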
2) Next, where does a RenderNode's OffscreenBuffer come from? It is also created in RenderNode::pushLayerUpdate (reached from prepareTree), before step 1): the OpenGLPipeline builds the OffscreenBuffer and attaches it to the RenderNode via setLayer, so deferLayers can later fetch the node's mLayer member through getLayer. Part of RenderNode's members are listed below:
class RenderNode : public VirtualLightRefBase {
String8 mName;
DisplayList* mDisplayList;
DisplayList* mStagingDisplayList;
OffscreenBuffer* mLayer = nullptr;
RenderProperties mProperties;
RenderProperties mStagingProperties;
};
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::pushLayerUpdate(TreeInfo& info) {
LayerType layerType = properties().effectiveLayerType(); //software layers are not handled here; the function returns early
...
if (info.canvasContext.createOrUpdateLayer(this, *info.damageAccumulator, info.errorHandler)) {
damageSelf(info);
}
if (!hasLayer()) {
return;
}
SkRect dirty;
info.damageAccumulator->peekAtDirty(&dirty);
info.layerUpdateQueue->enqueueLayerWithDamage(this, dirty); //fills in the mEntries member of the LayerUpdateQueue
// There might be prefetched layers that need to be accounted for.
// That might be us, so tell CanvasContext that this layer is in the
// tree and should not be destroyed.
info.canvasContext.markLayerInUse(this);
}
frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::createOrUpdateLayer(RenderNode* node,
const DamageAccumulator& damageAccumulator,
bool wideColorGamut,
ErrorHandler* errorHandler) {
RenderState& renderState = mRenderThread.renderState();
OffscreenBufferPool& layerPool = renderState.layerPool();
bool transformUpdateNeeded = false;
if (node->getLayer() == nullptr) {
node->setLayer(
layerPool.get(renderState, node->getWidth(), node->getHeight(), wideColorGamut));
transformUpdateNeeded = true;
}
...
}
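The lazy-attach logic above, where a node grabs a layer from the pool only when it has none and reports that a transform update is needed, can be sketched with hypothetical types (FakeLayer and FakeNode are illustrative, and a plain unique_ptr stands in for the pool):

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-ins for OffscreenBuffer and RenderNode.
struct FakeLayer { int width; int height; };

struct FakeNode {
    int width = 0, height = 0;
    std::unique_ptr<FakeLayer> layer;   // stands in for mLayer
};

// Sketch of the first-use attachment: the return value mirrors the
// transformUpdateNeeded flag in OpenGLPipeline::createOrUpdateLayer.
bool createOrUpdateLayer(FakeNode& node) {
    if (!node.layer) {
        node.layer = std::make_unique<FakeLayer>(FakeLayer{node.width, node.height});
        return true;   // new layer attached; transform needs updating
    }
    return false;      // layer already present
}
```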
3) Continuing on: layerDamage is intersected with the layer bounds, guaranteeing the damage stays inside the layer. Then each RenderNode's drawing order is computed via computeOrdering, and the current light center is mapped into the RenderNode's coordinate space.
Next comes saveForLayer. mCanvasState.save builds a Snapshot, which writableSnapshot then returns; each RenderNode gets its own snapshot, and the layer parameters are written into it.
The current size of mLayerBuilders is pushed onto mLayerStack (each entry is an index: 0 for fbo0, or whatever the size was at push time). So where was mLayerBuilders first populated? The FrameBuilder constructor creates the LayerBuilder for fbo0.
Continuing: saveForLayer builds a new LayerBuilder and appends it to mLayerBuilders, so mLayerBuilders ends up holding the fbo0 builder plus one LayerBuilder per layered RenderNode, while mLayerStack holds the indices into that vector.
frameworks/base/libs/hwui/FrameBuilder.cpp
void FrameBuilder::saveForLayer(uint32_t layerWidth, uint32_t layerHeight, float contentTranslateX,
float contentTranslateY, const Rect& repaintRect,
const Vector3& lightCenter, const BeginLayerOp* beginLayerOp,
RenderNode* renderNode) {
mCanvasState.save(SaveFlags::MatrixClip);
mCanvasState.writableSnapshot()->initializeViewport(layerWidth, layerHeight);
mCanvasState.writableSnapshot()->roundRectClipState = nullptr;
mCanvasState.writableSnapshot()->setRelativeLightCenter(lightCenter);
mCanvasState.writableSnapshot()->transform->loadTranslate(contentTranslateX, contentTranslateY,
0);
mCanvasState.writableSnapshot()->setClip(repaintRect.left, repaintRect.top, repaintRect.right,
repaintRect.bottom);
// create a new layer repaint, and push its index on the stack
mLayerStack.push_back(mLayerBuilders.size());
auto newFbo = mAllocator.create<LayerBuilder>(layerWidth, layerHeight, repaintRect,
beginLayerOp, renderNode);
mLayerBuilders.push_back(newFbo);
}
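The interplay of mLayerStack and mLayerBuilders can be sketched standalone; LayerTracker and the int builder ids below are illustrative. Note that restoreForLayer only pops the index, so the builders themselves survive for the later issue pass:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the mLayerBuilders / mLayerStack pairing: index 0 is the
// fbo0 builder created in the FrameBuilder constructor; saveForLayer
// appends a builder and pushes its index, restoreForLayer pops the
// index but keeps the builder in the list.
struct LayerTracker {
    std::vector<int> builders{0};        // id 0 stands for the fbo0 builder
    std::vector<std::size_t> stack{0};   // indices into builders

    void saveForLayer(int builderId) {
        stack.push_back(builders.size());
        builders.push_back(builderId);
    }
    void restoreForLayer() { stack.pop_back(); }
    int currentBuilder() const { return builders[stack.back()]; }
};
```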
//1. The save() step amounts to building a Snapshot:
frameworks/base/libs/hwui/CanvasState.cpp
int CanvasState::save(int flags) {
return saveSnapshot(flags);
}
int CanvasState::saveSnapshot(int flags) {
mSnapshot = allocSnapshot(mSnapshot, flags);
return mSaveCount++;
}
Snapshot* CanvasState::allocSnapshot(Snapshot* previous, int savecount) {
void* memory;
if (mSnapshotPool) {
memory = mSnapshotPool;
mSnapshotPool = mSnapshotPool->previous;
mSnapshotPoolCount--;
} else {
memory = malloc(sizeof(Snapshot));
}
return new (memory) Snapshot(previous, savecount);
}
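The free-list in allocSnapshot, reusing freed snapshot storage via placement new and falling back to malloc only when the pool is empty, can be sketched standalone. Snap and SnapPool are simplified stand-ins; the real code additionally caps the pool size:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Trivially destructible stand-in for Snapshot; freed instances are
// chained through their own 'previous' pointer.
struct Snap {
    Snap* previous = nullptr;
    int saveCount = 0;
};

struct SnapPool {
    Snap* freeList = nullptr;
    int mallocCalls = 0;   // counts real allocations, for illustration

    Snap* alloc(Snap* previous, int saveCount) {
        void* memory;
        if (freeList) {
            memory = freeList;             // reuse pooled storage
            freeList = freeList->previous;
        } else {
            memory = std::malloc(sizeof(Snap));
            mallocCalls++;
        }
        Snap* s = new (memory) Snap;       // placement new into the slot
        s->previous = previous;
        s->saveCount = saveCount;
        return s;
    }
    void release(Snap* s) {
        // Snap is trivially destructible, so just relink the storage.
        s->previous = freeList;
        freeList = s;
    }
};
```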
frameworks/base/libs/hwui/CanvasState.h
inline Snapshot* writableSnapshot() { return mSnapshot; }
frameworks/base/libs/hwui/FrameBuilder.h
LinearStdAllocator<void*> mStdAllocator;
LinearAllocator mAllocator;
LsaVector<LayerBuilder*> mLayerBuilders;
LsaVector<size_t> mLayerStack;
FrameBuilder::FrameBuilder(const SkRect& clip, uint32_t viewportWidth, uint32_t viewportHeight,
const LightGeometry& lightGeometry, Caches& caches)
: mStdAllocator(mAllocator)
, mLayerBuilders(mStdAllocator)
, mLayerStack(mStdAllocator)
, mCanvasState(*this)
, mCaches(caches)
, mLightRadius(lightGeometry.radius)
, mDrawFbo0(true) {
// Prepare to defer Fbo0
auto fbo0 = mAllocator.create<LayerBuilder>(viewportWidth, viewportHeight, Rect(clip));
mLayerBuilders.push_back(fbo0);
mLayerStack.push_back(0);
mCanvasState.initializeSaveStack(viewportWidth, viewportHeight, clip.fLeft, clip.fTop,
clip.fRight, clip.fBottom, lightGeometry.center);
}
//If no viewportWidth, viewportHeight and clip are specified for the FrameBuilder, 1 is used instead:
auto fbo0 = mAllocator.create<LayerBuilder>(1, 1, Rect(1, 1));
4) Finally, look at getDisplayList. Recall that at end time the previously recorded DisplayList object is stored into mStagingDisplayList (see the earlier post in this series: HWUI drawing, from Java to C++); what getDisplayList returns here is that same object, once the sync step has moved it over.
frameworks/base/libs/hwui/RenderNode.h
DisplayList* mDisplayList;
DisplayList* mStagingDisplayList;
const DisplayList* getDisplayList() const { return mDisplayList; }
frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
...
pushStagingDisplayListChanges(observer, info);
...
pushLayerUpdate(info);
...
}
void RenderNode::pushStagingDisplayListChanges(TreeObserver& observer, TreeInfo& info) {
if (mNeedsDisplayListSync) {
mNeedsDisplayListSync = false;
// Damage with the old display list first then the new one to catch any
// changes in isRenderable or, in the future, bounds
damageSelf(info);
syncDisplayList(observer, &info);
damageSelf(info);
}
}
void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
// Make sure we inc first so that we don't fluctuate between 0 and 1,
// which would thrash the layer cache
if (mStagingDisplayList) {
mStagingDisplayList->updateChildren([](RenderNode* child) { child->incParentRefCount(); });
}
deleteDisplayList(observer, info);
mDisplayList = mStagingDisplayList;
mStagingDisplayList = nullptr;
if (mDisplayList) {
mDisplayList->syncContents();
}
}
void RenderNode::setStagingDisplayList(DisplayList* displayList) {
mValid = (displayList != nullptr);
mNeedsDisplayListSync = true;
delete mStagingDisplayList;
mStagingDisplayList = displayList;
}
Some excellent related blogs:
http://blog.csdn.net/guoqifa29/article/details/45131099
http://blog.csdn.net/wind_hzx?viewmode=contents
http://www.tuicool.com/articles/bEjYbqN (Android 5.0; Jianshu mirror: http://www.lxweimin.com/p/bc1c1d2fadd1)
http://blog.csdn.net/jinzhuojun/article/details/54234354 (Android 7.0)