SurfaceFlinger and Choreographer are the two main pillars of the Android graphics system, and both are subscribers to the VSYNC signal. SurfaceFlinger composites the buffers it receives from different sources and updates the framebuffer for display; Choreographer ultimately posts to ViewRootImpl to drive the measure and draw passes of the view hierarchy.
SurfaceFlinger
SurfaceFlinger's role is to accept buffers of data from multiple sources, composite them, and send them to the display. Once upon a time this was done with software blitting to a hardware framebuffer (e.g. /dev/graphics/fb0), but those days are long gone.
Google's official documentation states SurfaceFlinger's responsibility very clearly. So how does it actually fulfill it?
Startup
When Android starts the SystemServer process, it calls the system_init function in frameworks/base/cmds/system_server/library/system_init.cpp:
extern "C" status_t system_init()
{
    ALOGI("Entered system_init()");

    sp<ProcessState> proc(ProcessState::self());

    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p\n", sm.get());

    sp<GrimReaper> grim = new GrimReaper();
    sm->asBinder()->linkToDeath(grim, grim.get(), 0);

    char propBuf[PROPERTY_VALUE_MAX];
    property_get("system_init.startsurfaceflinger", propBuf, "1");
    if (strcmp(propBuf, "1") == 0) {
        // Start the SurfaceFlinger
        SurfaceFlinger::instantiate();
    }
    ...
}
//If the system_init.startsurfaceflinger property is set to 1, SurfaceFlinger starts as a thread inside the SystemServer process
If the system_init.startsurfaceflinger property is 0, a surfaceflinger service is configured in init.rc instead, and Linux's first process, init, starts SurfaceFlinger as a standalone process:
# Set this property so surfaceflinger is not started by system_init
setprop system_init.startsurfaceflinger 0

service surfaceflinger /system/bin/surfaceflinger
    class main
    user system
    group graphics
    onrestart restart zygote
shell@android:/ # ps | grep surface
system 110 1 54448 10732 ffffffff 40076710 S /system/bin/surfaceflinger
//surfaceflinger's parent PID is indeed 1, the init process
VSYNC generation and dispatch
Regardless of how SurfaceFlinger is started, the init method is called right after the SurfaceFlinger instance is created:
void SurfaceFlinger::init() {
    ALOGI( "SurfaceFlinger's main thread ready to run. "
            "Initializing graphics H/W...");

    Mutex::Autolock _l(mStateLock);

    // initialize EGL for the default display
    mEGLDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(mEGLDisplay, NULL, NULL);

    // start the EventThread
    sp<VSyncSource> vsyncSrc = new DispSyncSource(&mPrimaryDispSync,
            vsyncPhaseOffsetNs, true, "app");
    mEventThread = new EventThread(vsyncSrc);
    sp<VSyncSource> sfVsyncSrc = new DispSyncSource(&mPrimaryDispSync,
            sfVsyncPhaseOffsetNs, true, "sf");
    mSFEventThread = new EventThread(sfVsyncSrc);
    mEventQueue.setEventThread(mSFEventThread);

    // Initialize the H/W composer object. There may or may not be an
    // actual hardware composer underneath.
    mHwc = new HWComposer(this,
            *static_cast<HWComposer::EventHandler *>(this));

    // get a RenderEngine for the given display / config (can't fail)
    mRenderEngine = RenderEngine::create(mEGLDisplay, mHwc->getVisualID());

    // retrieve the EGL context that was selected/created
    mEGLContext = mRenderEngine->getEGLContext();

    LOG_ALWAYS_FATAL_IF(mEGLContext == EGL_NO_CONTEXT,
            "couldn't create EGLContext");

    // initialize our non-virtual displays
    for (size_t i=0 ; i<DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES ; i++) {
        DisplayDevice::DisplayType type((DisplayDevice::DisplayType)i);
        // set-up the displays that are already connected
        if (mHwc->isConnected(i) || type==DisplayDevice::DISPLAY_PRIMARY) {
            // All non-virtual displays are currently considered secure.
            bool isSecure = true;
            createBuiltinDisplayLocked(type);
            wp<IBinder> token = mBuiltinDisplays[i];

            sp<IGraphicBufferProducer> producer;
            sp<IGraphicBufferConsumer> consumer;
            BufferQueue::createBufferQueue(&producer, &consumer,
                    new GraphicBufferAlloc());

            sp<FramebufferSurface> fbs = new FramebufferSurface(*mHwc, i,
                    consumer);
            int32_t hwcId = allocateHwcDisplayId(type);
            sp<DisplayDevice> hw = new DisplayDevice(this,
                    type, hwcId, mHwc->getFormat(hwcId), isSecure, token,
                    fbs, producer,
                    mRenderEngine->getEGLConfig());
            if (i > DisplayDevice::DISPLAY_PRIMARY) {
                // FIXME: currently we don't get blank/unblank requests
                // for displays other than the main display, so we always
                // assume a connected display is unblanked.
                ALOGD("marking display %zu as acquired/unblanked", i);
                hw->setPowerMode(HWC_POWER_MODE_NORMAL);
            }
            mDisplays.add(token, hw);
        }
    }

    // make the GLContext current so that we can create textures when creating
    // Layers (which may happens before we render something)
    getDefaultDisplayDevice()->makeCurrent(mEGLDisplay, mEGLContext);

    mEventControlThread = new EventControlThread(this);
    mEventControlThread->run("EventControl", PRIORITY_URGENT_DISPLAY);

    // set a fake vsync period if there is no HWComposer
    if (mHwc->initCheck() != NO_ERROR) {
        mPrimaryDispSync.setPeriod(16666667);
    }

    // initialize our drawing state
    mDrawingState = mCurrentState;

    // set initial conditions (e.g. unblank default device)
    initializeDisplays();

    // start boot animation
    startBootAnim();
}
The init method first initializes EGL for the default display, then instantiates the two EventThreads and the HWComposer.
Since Android 4.4, SurfaceFlinger has been the hub for VSYNC distribution: when it receives a VSYNC generated by the HWComposer hardware (or simulated in software), its onVSyncReceived method is invoked. init also starts the EventControlThread, through which SurfaceFlinger turns the hardware-generated VSYNC on and off.
mEventThread is the socket-based event server for the app side represented by Choreographer, while mSFEventThread handles SurfaceFlinger's own VSYNC. Each holds mPrimaryDispSync through its own DispSyncSource.
mPrimaryDispSync is an instance of DispSync and internally owns a DispSyncThread. Once a client successfully subscribes to VSYNC, EventThread calls DispSync's addEventListener through DispSyncSource's setVSyncEnabled method, handing DispSync its mPhaseOffset and a callback. Using that offset together with the display refresh period, DispSync relies on the DispSyncThread to control when the callback fires; the callback travels through DispSyncSource's onDispSyncEvent and finally reaches EventThread's onVSyncEvent.
void EventThread::onVSyncEvent(nsecs_t timestamp) {
    Mutex::Autolock _l(mLock);
    mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
    mVSyncEvent[0].header.id = 0;
    mVSyncEvent[0].header.timestamp = timestamp;
    mVSyncEvent[0].vsync.count++;
    mCondition.broadcast();
}
//EventThread itself extends Thread; the Condition broadcast wakes up its loop
bool EventThread::threadLoop() {
    DisplayEventReceiver::Event event;
    Vector< sp<EventThread::Connection> > signalConnections;
    signalConnections = waitForEvent(&event);

    // dispatch events to listeners...
    const size_t count = signalConnections.size();
    for (size_t i=0 ; i<count ; i++) {
        const sp<Connection>& conn(signalConnections[i]);
        // now see if we still need to report this event
        status_t err = conn->postEvent(event);
        ...
//Driven by the mVSyncEvent filled in by onVSyncEvent, EventThread calls Connection's postEvent
status_t EventThread::Connection::postEvent(
        const DisplayEventReceiver::Event& event) {
    ssize_t size = DisplayEventReceiver::sendEvents(mChannel, &event, 1);
    return size < 0 ? status_t(size) : status_t(NO_ERROR);
}
ssize_t DisplayEventReceiver::sendEvents(const sp<BitTube>& dataChannel,
        Event const* events, size_t count)
{
    return BitTube::sendObjects(dataChannel, events, count);
}

ssize_t BitTube::sendObjects(const sp<BitTube>& tube,
        void const* events, size_t count, size_t objSize)
{
    const char* vaddr = reinterpret_cast<const char*>(events);
    ssize_t size = tube->write(vaddr, count*objSize);
    ...
}

ssize_t BitTube::write(void const* vaddr, size_t size)
{
    ssize_t err, len;
    do {
        len = ::send(mSendFd, vaddr, size, MSG_DONTWAIT | MSG_NOSIGNAL);
        // cannot return less than size, since we're using SOCK_SEQPACKET
        err = len < 0 ? errno : 0;
    } while (err == EINTR);
    return err == 0 ? len : -err;
}
//The call chain ends in BitTube's write method, which send()s the payload to the mSendFd descriptor; we will come back to this BitTube shortly
init then sets up the display information; both setPowerMode and the explicit setPeriod call update mPrimaryDispSync's refresh period. After initializeDisplays the display pipeline is ready, and startBootAnim starts the boot animation.
Receiving VSYNC
In SurfaceFlinger, onFirstRef is called when the instance is first referenced:
void SurfaceFlinger::onFirstRef()
{
    mEventQueue.init(this);
}
//mEventQueue is an instance of MessageQueue
SurfaceFlinger receives VSYNC through this MessageQueue. Its init method is as follows:
void MessageQueue::init(const sp<SurfaceFlinger>& flinger)
{
    mFlinger = flinger;
    mLooper = new Looper(true);
    mHandler = new Handler(*this);
}
//Creates a Looper and a Handler
As mentioned above, mSFEventThread handles SurfaceFlinger's own VSYNC precisely because SurfaceFlinger's init method calls mEventQueue.setEventThread(mSFEventThread):
void MessageQueue::setEventThread(const sp<EventThread>& eventThread)
{
    mEventThread = eventThread;
    mEvents = eventThread->createEventConnection();
    mEventTube = mEvents->getDataChannel();
    mLooper->addFd(mEventTube->getFd(), 0, Looper::EVENT_INPUT,
            MessageQueue::cb_eventReceiver, this);
}
//getDataChannel returns the BitTube object mentioned above
void BitTube::init(size_t rcvbuf, size_t sndbuf) {
    int sockets[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
        size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        // since we don't use the "return channel", we keep it small...
        setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
        setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
        mReceiveFd = sockets[0];
        mSendFd = sockets[1];
    } else {
        mReceiveFd = -errno;
        ALOGE("BitTube: pipe creation failed (%s)", strerror(-mReceiveFd));
    }
}
BitTube creates a pair of connected socket descriptors via socketpair, and its getFd method:
int BitTube::getFd() const
{
    return mReceiveFd;
}
returns mReceiveFd. MessageQueue adds this descriptor to the Looper with addFd. When the VSYNC event described above is dispatched and send() writes to mSendFd, the Looper's epoll wakes up and invokes MessageQueue::cb_eventReceiver. If Looper's internals are unfamiliar, see the article "Android中的Looper與epoll".
MessageQueue's cb_eventReceiver ultimately delivers the message to SurfaceFlinger's onMessageReceived through the Handler and Looper::sendMessage. For a VSYNC the message type is MessageQueue::INVALIDATE; handleMessageInvalidate latches the new buffers and, if needed, signalRefresh posts a MessageQueue::REFRESH:
void SurfaceFlinger::onMessageReceived(int32_t what) {
    ATRACE_CALL();
    switch (what) {
        case MessageQueue::TRANSACTION: {
            handleMessageTransaction();
            break;
        }
        case MessageQueue::INVALIDATE: {
            bool refreshNeeded = handleMessageTransaction();
            refreshNeeded |= handleMessageInvalidate();
            refreshNeeded |= mRepaintEverything;
            if (refreshNeeded) {
                // Signal a refresh if a transaction modified the window state,
                // a new buffer was latched, or if HWC has requested a full
                // repaint
                signalRefresh();
            }
            break;
        }
        case MessageQueue::REFRESH: {
            handleMessageRefresh();
            break;
        }
    }
}
Finally, SurfaceFlinger's handleMessageRefresh computes the geometry of the latest Layers the apps have filled and composites them into the framebuffer for the display hardware to show.
Choreographer
Choreographer, true to its name, is held directly by ViewRootImpl and paces the drawing of the app's UI.
Choreographer's constructor checks the debug.choreographer.vsync system property to decide whether the VSYNC mechanism is enabled. If it is, Choreographer creates a FrameDisplayEventReceiver, which invokes the constructor of its parent class, DisplayEventReceiver.java:
public DisplayEventReceiver(Looper looper) {
    if (looper == null) {
        throw new IllegalArgumentException("looper must not be null");
    }

    mMessageQueue = looper.getQueue();
    mReceiverPtr = nativeInit(new WeakReference<DisplayEventReceiver>(this), mMessageQueue);

    mCloseGuard.open("dispose");
}
nativeInit then calls, via JNI, the nativeInit function in android_view_DisplayEventReceiver.cpp. It creates a NativeDisplayEventReceiver, reaching its parent class DisplayEventDispatcher; from the MessageQueue passed through nativeInit it obtains the app main thread's Looper, hands it to DisplayEventDispatcher, and then calls DisplayEventDispatcher's initialize method:
status_t DisplayEventDispatcher::initialize() {
    status_t result = mReceiver.initCheck();
    if (result) {
        ALOGW("Failed to initialize display event receiver, status=%d", result);
        return result;
    }

    int rc = mLooper->addFd(mReceiver.getFd(), 0, Looper::EVENT_INPUT,
            this, NULL);
    if (rc < 0) {
        return UNKNOWN_ERROR;
    }
    return OK;
}
As you can see, it too listens for incoming VSYNC via addFd. mReceiver here is a DisplayEventReceiver, which can be understood as a local proxy for a remote, cross-process object:
DisplayEventReceiver::DisplayEventReceiver() {
    sp<ISurfaceComposer> sf(ComposerService::getComposerService());
    if (sf != NULL) {
        mEventConnection = sf->createDisplayEventConnection();
        if (mEventConnection != NULL) {
            mDataChannel = mEventConnection->getDataChannel();
        }
    }
}

int DisplayEventReceiver::getFd() const {
    if (mDataChannel == NULL)
        return NO_INIT;

    return mDataChannel->getFd();
}
DisplayEventReceiver first obtains a local proxy of SurfaceFlinger through ComposerService, then calls createDisplayEventConnection to get a Connection held by EventThread. That Connection holds a BitTube, and the subscription works exactly the same way as SurfaceFlinger's own, described above.
The createDisplayEventConnection method lives in SurfaceFlinger:
sp<IDisplayEventConnection> SurfaceFlinger::createDisplayEventConnection() {
    return mEventThread->createEventConnection();
}
As you can see, the connection is indeed established through mEventThread, not mSFEventThread. With that, Choreographer has successfully subscribed to the VSYNC signal.
When a VSYNC arrives, DisplayEventDispatcher, which implements LooperCallback, receives the handleEvent callback, which in turn calls NativeDisplayEventReceiver's dispatchVsync method:
void NativeDisplayEventReceiver::dispatchVsync(nsecs_t timestamp, int32_t id, uint32_t count) {
    JNIEnv* env = AndroidRuntime::getJNIEnv();

    ScopedLocalRef<jobject> receiverObj(env, jniGetReferent(env, mReceiverWeakGlobal));
    if (receiverObj.get()) {
        ALOGV("receiver %p ~ Invoking vsync handler.", this);
        env->CallVoidMethod(receiverObj.get(),
                gDisplayEventReceiverClassInfo.dispatchVsync, timestamp, id, count);
        ALOGV("receiver %p ~ Returned from vsync handler.", this);
    }

    mMessageQueue->raiseAndClearException(env, "dispatchVsync");
}
On the JNI side, CallVoidMethod invokes the Java DisplayEventReceiver's dispatchVsync method, which calls FrameDisplayEventReceiver's onVsync, and finally Choreographer's doFrame kicks off the app-side UI drawing.
Summary
Before Android 4.4, Choreographer and SurfaceFlinger were called back simultaneously when VSYNC arrived, with no offset mechanism. As a result, once Choreographer had a new frame ready, SurfaceFlinger would not composite it into the framebuffer until the next VSYNC, and the display could not show it until the VSYNC after that. This not only wasted time (two full VSYNC periods) but also caused contention for the CPU and other resources, since both started at once. So in 4.4 Google introduced the offset mechanism to mitigate the problem, as the official documentation explains:
Application and SurfaceFlinger render loops should be synchronized to the hardware VSYNC. On a VSYNC event, the display begins showing frame N while SurfaceFlinger begins compositing windows for frame N+1. The app handles pending input and generates frame N+2.
Synchronizing with VSYNC delivers consistent latency. It reduces errors in apps and SurfaceFlinger and the drifting of displays in and out of phase with each other. This, however, does assume application and SurfaceFlinger per-frame times don’t vary widely. Nevertheless, the latency is at least two frames.
To remedy this, you may employ VSYNC offsets to reduce the input-to-display latency by making application and composition signal relative to hardware VSYNC. This is possible because application plus composition usually takes less than 33 ms.