Android P Graphics Display System (4): Android VirtualDisplay Analysis

[TOC]

Android VirtualDisplay Analysis

Android supports multiple displays: the primary display, external displays, and virtual displays. The virtual display is the VirtualDisplay we discuss here. VirtualDisplay has many use cases, such as screen recording and WFD (Wi-Fi Display); its job is to capture the content shown on screen. There are several ways to read what a VirtualDisplay captures; the API provides ImageReader for reading the VirtualDisplay's content.

Below, we combine ImageReader with VirtualDisplay to walk through the related flow.

Example: Using ImageReader with VirtualDisplay

We use VirtualDisplayTest as the example:

1. In the test's setUp, initialize the DisplayManager, ImageReader and ImageListener:

* frameworks/base/core/tests/coretests/src/android/hardware/display/VirtualDisplayTest.java

    protected void setUp() throws Exception {
        super.setUp();

        mDisplayManager = (DisplayManager)mContext.getSystemService(Context.DISPLAY_SERVICE);
        mHandler = new Handler(Looper.getMainLooper());
        mImageListener = new ImageListener();

        mImageReaderLock.lock();
        try {
            mImageReader = ImageReader.newInstance(WIDTH, HEIGHT, PixelFormat.RGBA_8888, 2);
            mImageReader.setOnImageAvailableListener(mImageListener, mHandler);
            mSurface = mImageReader.getSurface();
        } finally {
            mImageReaderLock.unlock();
        }
    }

  • DisplayManager manages the system's displays; it is backed by DisplayManagerService.
  • ImageListener implements the OnImageAvailableListener interface.
  • ImageReader is an image reader; it is what triggers the OnImageAvailableListener callback.
  • Also, take note of the mSurface obtained here.

2. Take the test case testPrivateVirtualDisplay as an example:

    public void testPrivateVirtualDisplay() throws Exception {
        VirtualDisplay virtualDisplay = mDisplayManager.createVirtualDisplay(NAME,
                WIDTH, HEIGHT, DENSITY, mSurface, 0);
        assertNotNull("virtual display must not be null", virtualDisplay);

        Display display = virtualDisplay.getDisplay();
        try {
            assertDisplayRegistered(display, Display.FLAG_PRIVATE);

            // Show a private presentation on the display.
            assertDisplayCanShowPresentation("private presentation window",
                    display, BLUEISH,
                    WindowManager.LayoutParams.TYPE_PRIVATE_PRESENTATION, 0);
        } finally {
            virtualDisplay.release();
        }
        assertDisplayUnregistered(display);
    }
  • The test first creates a virtual display through mDisplayManager.
  • assertDisplayRegistered checks that the virtual display has been registered.
  • assertDisplayCanShowPresentation checks that a private Presentation can be shown on it.
  • After the virtual display is released, assertDisplayUnregistered checks that it has been unregistered.

Here, Presentation is an Android UI component that shows its content on a designated display.

That's all for the sample code; now let's look at the actual flow.

An Introduction to ImageReader

In short, ImageReader lets an application read the content drawn into a Surface as image data, where each frame is described by an Image.

1. Creating an ImageReader
ImageReader instances are created via the newInstance factory:

* frameworks/base/media/java/android/media/ImageReader.java

    public static ImageReader newInstance(int width, int height, int format, int maxImages) {
        return new ImageReader(width, height, format, maxImages, BUFFER_USAGE_UNKNOWN);
    }

The maxImages parameter is the number of Images that can be acquired simultaneously; conceptually it is similar to the maximum buffer count of a BufferQueue.
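As a toy illustration of that contract (plain Java, hypothetical class, not the Android API): at most maxImages images can be outstanding at once, and acquiring past the cap fails until one is released.

```java
// Minimal sketch of ImageReader's maxImages contract (not the real API):
// at most maxImages acquired images may be outstanding at any time.
public class AcquireLimitModel {
    private final int maxImages;
    private int acquired = 0;

    public AcquireLimitModel(int maxImages) { this.maxImages = maxImages; }

    // Returns true if an image could be acquired, false once the cap is hit
    // (the real ImageReader throws IllegalStateException in that case).
    public boolean tryAcquire() {
        if (acquired >= maxImages) return false;
        acquired++;
        return true;
    }

    // Releasing an image frees a slot, like Image.close().
    public void release() {
        if (acquired > 0) acquired--;
    }

    public static void main(String[] args) {
        AcquireLimitModel reader = new AcquireLimitModel(2);
        System.out.println(reader.tryAcquire()); // first slot
        System.out.println(reader.tryAcquire()); // second slot
        System.out.println(reader.tryAcquire()); // cap of 2 reached
        reader.release();
        System.out.println(reader.tryAcquire()); // a slot is free again
    }
}
```

This is why a listener that never closes its Images eventually stalls the reader.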

The key part of the ImageReader constructor:

    protected ImageReader(int width, int height, int format, int maxImages, long usage) {
        mWidth = width;
        mHeight = height;
        mFormat = format;
        mMaxImages = maxImages;

        ... ...

        mNumPlanes = ImageUtils.getNumPlanesForFormat(mFormat);

        nativeInit(new WeakReference<>(this), width, height, format, maxImages, usage);

        mSurface = nativeGetSurface();

        mIsReaderValid = true;
        // Estimate the native buffer allocation size and register it so it gets accounted for
        // during GC. Note that this doesn't include the buffers required by the buffer queue
        // itself and the buffers requested by the producer.
        // Only include memory for 1 buffer, since actually accounting for the memory used is
        // complex, and 1 buffer is enough for the VM to treat the ImageReader as being of some
        // size.
        mEstimatedNativeAllocBytes = ImageUtils.getEstimatedNativeAllocBytes(
                width, height, format, /*buffer count*/ 1);
        VMRuntime.getRuntime().registerNativeAllocation(mEstimatedNativeAllocBytes);
    }
  • Our format is PixelFormat.RGBA_8888, so mNumPlanes here is 1.
  • nativeInit is a native method that creates the native ImageReader context.
  • nativeGetSurface is a native method that returns the Surface of the native instance — note where our Surface comes from.
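The plane count can be sketched as follows (constants inlined for illustration; the real logic lives in ImageUtils.getNumPlanesForFormat):

```java
// Sketch of the per-format plane count used by ImageReader.
// Constants are inlined for illustration; see android.graphics.PixelFormat
// and android.graphics.ImageFormat for the authoritative values.
public class PlaneCount {
    public static final int RGBA_8888   = 0x1;  // PixelFormat.RGBA_8888
    public static final int YUV_420_888 = 0x23; // ImageFormat.YUV_420_888

    public static int numPlanes(int format) {
        switch (format) {
            case RGBA_8888:   return 1; // single interleaved plane
            case YUV_420_888: return 3; // separate Y, U and V planes
            default: throw new IllegalArgumentException("unknown format: " + format);
        }
    }

    public static void main(String[] args) {
        System.out.println(numPlanes(RGBA_8888)); // RGBA data arrives as one plane
    }
}
```

So for our RGBA_8888 reader, each Image exposes exactly one Plane.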

2. ImageReader's JNI Implementation
The JNI layer registers both the ImageReader methods and the SurfaceImage methods:

* frameworks/base/media/jni/android_media_ImageReader.cpp

static const JNINativeMethod gImageReaderMethods[] = {
    {"nativeClassInit",        "()V",                        (void*)ImageReader_classInit },
    {"nativeInit",             "(Ljava/lang/Object;IIIIJ)V",  (void*)ImageReader_init },
    {"nativeClose",            "()V",                        (void*)ImageReader_close },
    {"nativeReleaseImage",     "(Landroid/media/Image;)V",   (void*)ImageReader_imageRelease },
    {"nativeImageSetup",       "(Landroid/media/Image;)I",   (void*)ImageReader_imageSetup },
    {"nativeGetSurface",       "()Landroid/view/Surface;",   (void*)ImageReader_getSurface },
    {"nativeDetachImage",      "(Landroid/media/Image;)I",   (void*)ImageReader_detachImage },
    {"nativeDiscardFreeBuffers", "()V",                      (void*)ImageReader_discardFreeBuffers }
};

static const JNINativeMethod gImageMethods[] = {
    {"nativeCreatePlanes",      "(II)[Landroid/media/ImageReader$SurfaceImage$SurfacePlane;",
                                                              (void*)Image_createSurfacePlanes },
    {"nativeGetWidth",         "()I",                        (void*)Image_getWidth },
    {"nativeGetHeight",        "()I",                        (void*)Image_getHeight },
    {"nativeGetFormat",        "(I)I",                        (void*)Image_getFormat },
};

nativeInit maps to ImageReader_init:

static void ImageReader_init(JNIEnv* env, jobject thiz, jobject weakThiz, jint width, jint height,
                             jint format, jint maxImages, jlong ndkUsage)
{
    ... ...
    sp<JNIImageReaderContext> ctx(new JNIImageReaderContext(env, weakThiz, clazz, maxImages));

    sp<IGraphicBufferProducer> gbProducer;
    sp<IGraphicBufferConsumer> gbConsumer;
    BufferQueue::createBufferQueue(&gbProducer, &gbConsumer);
    sp<BufferItemConsumer> bufferConsumer;
    String8 consumerName = String8::format("ImageReader-%dx%df%xm%d-%d-%d",
            width, height, format, maxImages, getpid(),
            createProcessUniqueId());
    ... ...
    bufferConsumer = new BufferItemConsumer(gbConsumer, consumerUsage, maxImages,
            /*controlledByApp*/true);
    if (bufferConsumer == nullptr) {
        jniThrowExceptionFmt(env, "java/lang/RuntimeException",
                "Failed to allocate native buffer consumer for format 0x%x and usage 0x%x",
                nativeFormat, consumerUsage);
        return;
    }
    ctx->setBufferConsumer(bufferConsumer);
    bufferConsumer->setName(consumerName);

    ctx->setProducer(gbProducer);
    bufferConsumer->setFrameAvailableListener(ctx);
    ImageReader_setNativeContext(env, thiz, ctx);
    ctx->setBufferFormat(nativeFormat);
    ctx->setBufferDataspace(nativeDataspace);
    ctx->setBufferWidth(width);
    ctx->setBufferHeight(height);

    // Set the width/height/format/dataspace to the bufferConsumer.
    res = bufferConsumer->setDefaultBufferSize(width, height);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default size (%dx%d) for format 0x%x",
                          width, height, nativeFormat);
        return;
    }
    res = bufferConsumer->setDefaultBufferFormat(nativeFormat);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default format 0x%x", nativeFormat);
    }
    res = bufferConsumer->setDefaultBufferDataSpace(nativeDataspace);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/IllegalStateException",
                          "Failed to set buffer consumer default dataSpace 0x%x", nativeDataspace);
    }
}
  • A JNIImageReaderContext instance is created; this is the native counterpart of the Java ImageReader.
JNIImageReaderContext::JNIImageReaderContext(JNIEnv* env,
        jobject weakThiz, jclass clazz, int maxImages) :
    mWeakThiz(env->NewGlobalRef(weakThiz)),
    mClazz((jclass)env->NewGlobalRef(clazz)),
    mFormat(0),
    mDataSpace(HAL_DATASPACE_UNKNOWN),
    mWidth(-1),
    mHeight(-1) {
    for (int i = 0; i < maxImages; i++) {
        BufferItem* buffer = new BufferItem;
        mBuffers.push_back(buffer);
    }
}

Here mDataSpace is the dataspace, which further describes the pixel format. Native buffers are described by BufferItem and kept in mBuffers.

  • A BufferQueue is created, with producer gbProducer and consumer gbConsumer.
    It is still an ordinary BufferQueue; the consumer end is wrapped in a BufferItemConsumer. Remember what the consumer is in Android's normal display path? Right — BufferLayerConsumer. Note the difference here. The BufferItemConsumer holds the gbConsumer object.

  • After the BufferQueue is created, it is attached to the JNIImageReaderContext. Note that the BufferItemConsumer's FrameAvailableListener is the one implemented by JNIImageReaderContext.

  • Finally, ImageReader_setNativeContext associates the native object with the Java object.
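The wiring above boils down to a producer/consumer callback pattern. Here is a minimal plain-Java sketch (hypothetical classes, not the Android ones) in which queuing a frame on the producer side fires the consumer's frame-available listener, just as BufferQueue notifies JNIImageReaderContext:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the BufferQueue wiring in ImageReader_init (not the real classes):
// JNIImageReaderContext plays the FrameAvailableListener role here.
public class FrameQueueModel {
    public interface FrameAvailableListener { void onFrameAvailable(); }

    private final Deque<int[]> queued = new ArrayDeque<>();
    private FrameAvailableListener listener;

    public void setFrameAvailableListener(FrameAvailableListener l) { listener = l; }

    // Producer side: queue a frame and notify the consumer, like
    // BufferQueueProducer::queueBuffer notifying the consumer listener.
    public void queueFrame(int[] pixels) {
        queued.addLast(pixels);
        if (listener != null) listener.onFrameAvailable();
    }

    // Consumer side: acquire the oldest queued frame, or null if none is pending.
    public int[] acquireFrame() { return queued.pollFirst(); }

    public static void main(String[] args) {
        FrameQueueModel q = new FrameQueueModel();
        q.setFrameAvailableListener(() -> System.out.println("frame available"));
        q.queueFrame(new int[]{0xFF0000FF});
    }
}
```

In the real code the callback crosses from native back into Java, where it ends up invoking the app's OnImageAvailableListener.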

(Figure: JNIImageReaderContext class diagram)

Creating a VirtualDisplay

A VirtualDisplay is created through DisplayManager:

* frameworks/base/core/java/android/hardware/display/DisplayManager.java

    public VirtualDisplay createVirtualDisplay(@Nullable MediaProjection projection,
            @NonNull String name, int width, int height, int densityDpi, @Nullable Surface surface,
            int flags, @Nullable VirtualDisplay.Callback callback, @Nullable Handler handler,
            @Nullable String uniqueId) {
        return mGlobal.createVirtualDisplay(mContext, projection,
                name, width, height, densityDpi, surface, flags, callback, handler, uniqueId);
    }

DisplayManagerGlobal is a singleton — there is only one per process:

    public DisplayManager(Context context) {
        mContext = context;
        mGlobal = DisplayManagerGlobal.getInstance();
    }

DisplayManagerGlobal's createVirtualDisplay is implemented as follows:

* frameworks/base/core/java/android/hardware/display/DisplayManagerGlobal.java

    public VirtualDisplay createVirtualDisplay(Context context, MediaProjection projection,
            String name, int width, int height, int densityDpi, Surface surface, int flags,
            VirtualDisplay.Callback callback, Handler handler, String uniqueId) {
        ... ...
        int displayId;
        try {
            displayId = mDm.createVirtualDisplay(callbackWrapper, projectionToken,
                    context.getPackageName(), name, width, height, densityDpi, surface, flags,
                    uniqueId);
        } catch (RemoteException ex) {
            throw ex.rethrowFromSystemServer();
        }
        if (displayId < 0) {
            Log.e(TAG, "Could not create virtual display: " + name);
            return null;
        }
        Display display = getRealDisplay(displayId);
        if (display == null) {
            Log.wtf(TAG, "Could not obtain display info for newly created "
                    + "virtual display: " + name);
            try {
                mDm.releaseVirtualDisplay(callbackWrapper);
            } catch (RemoteException ex) {
                throw ex.rethrowFromSystemServer();
            }
            return null;
        }
        return new VirtualDisplay(this, display, callbackWrapper, surface);
    }

mDm is the Binder proxy of DisplayManagerService (DMS), so mDm.createVirtualDisplay leads straight to the DMS implementation:

* frameworks/base/services/core/java/com/android/server/display/DisplayManagerService.java

        @Override // Binder call
        public int createVirtualDisplay(IVirtualDisplayCallback callback,
                IMediaProjection projection, String packageName, String name,
                int width, int height, int densityDpi, Surface surface, int flags,
                String uniqueId) {
            ... ...

            if (projection != null) {
                try {
                    if (!getProjectionService().isValidMediaProjection(projection)) {
                        throw new SecurityException("Invalid media projection");
                    }
                    flags = projection.applyVirtualDisplayFlags(flags);
                } catch (RemoteException e) {
                    throw new SecurityException("unable to validate media projection or flags");
                }
            }

            if (callingUid != Process.SYSTEM_UID &&
                    (flags & VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR) != 0) {
                if (!canProjectVideo(projection)) {
                    throw new SecurityException("Requires CAPTURE_VIDEO_OUTPUT or "
                            + "CAPTURE_SECURE_VIDEO_OUTPUT permission, or an appropriate "
                            + "MediaProjection token in order to create a screen sharing virtual "
                            + "display.");
                }
            }
            if ((flags & VIRTUAL_DISPLAY_FLAG_SECURE) != 0) {
                if (!canProjectSecureVideo(projection)) {
                    throw new SecurityException("Requires CAPTURE_SECURE_VIDEO_OUTPUT "
                            + "or an appropriate MediaProjection token to create a "
                            + "secure virtual display.");
                }
            }

            final long token = Binder.clearCallingIdentity();
            try {
                return createVirtualDisplayInternal(callback, projection, callingUid, packageName,
                        name, width, height, densityDpi, surface, flags, uniqueId);
            } finally {
                Binder.restoreCallingIdentity(token);
            }
        }

DMS's createVirtualDisplay does some parameter setup and handles the projection and secure cases, then delegates the actual creation to createVirtualDisplayInternal.
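The gating can be condensed into a small sketch. The VIRTUAL_DISPLAY_FLAG_* values match DisplayManager's constants, but the method shape is hypothetical and much simplified:

```java
// Simplified sketch of the security gating in DMS.createVirtualDisplay.
// Flag values match DisplayManager's VIRTUAL_DISPLAY_FLAG_* constants;
// the helper method itself is hypothetical.
public class VirtualDisplayGating {
    static final int VIRTUAL_DISPLAY_FLAG_SECURE      = 1 << 2;
    static final int VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR = 1 << 4;

    static boolean allowed(boolean isSystemUid, int flags,
                           boolean canProjectVideo, boolean canProjectSecureVideo) {
        // Non-system callers need capture permission or a MediaProjection
        // token before they may create an AUTO_MIRROR display.
        if (!isSystemUid && (flags & VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR) != 0
                && !canProjectVideo) {
            return false;
        }
        // SECURE displays additionally require secure-capture rights.
        if ((flags & VIRTUAL_DISPLAY_FLAG_SECURE) != 0 && !canProjectSecureVideo) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Non-system caller asking for AUTO_MIRROR without projection rights:
        System.out.println(allowed(false, VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, false, false));
    }
}
```

The real code throws SecurityException instead of returning false, but the decision logic is the same shape.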

The createVirtualDisplayInternal method:

    private int createVirtualDisplayInternal(IVirtualDisplayCallback callback,
            IMediaProjection projection, int callingUid, String packageName, String name, int width,
            int height, int densityDpi, Surface surface, int flags, String uniqueId) {
        synchronized (mSyncRoot) {
            if (mVirtualDisplayAdapter == null) {
                Slog.w(TAG, "Rejecting request to create private virtual display "
                        + "because the virtual display adapter is not available.");
                return -1;
            }

            DisplayDevice device = mVirtualDisplayAdapter.createVirtualDisplayLocked(
                    callback, projection, callingUid, packageName, name, width, height, densityDpi,
                    surface, flags, uniqueId);
            if (device == null) {
                return -1;
            }

            handleDisplayDeviceAddedLocked(device);
            LogicalDisplay display = findLogicalDisplayForDeviceLocked(device);
            if (display != null) {
                return display.getDisplayIdLocked();
            }

            // Something weird happened and the logical display was not created.
            Slog.w(TAG, "Rejecting request to create virtual display "
                    + "because the logical display was not created.");
            mVirtualDisplayAdapter.releaseVirtualDisplayLocked(callback.asBinder());
            handleDisplayDeviceRemovedLocked(device);
        }
        return -1;
    }
  • mVirtualDisplayAdapter is initialized when DMS starts,
    registered via the MSG_REGISTER_DEFAULT_DISPLAY_ADAPTERS message.
    private void registerDefaultDisplayAdapters() {
        // Register default display adapters.
        synchronized (mSyncRoot) {
            // main display adapter
            registerDisplayAdapterLocked(new LocalDisplayAdapter(
                    mSyncRoot, mContext, mHandler, mDisplayAdapterListener));

            mVirtualDisplayAdapter = mInjector.getVirtualDisplayAdapter(mSyncRoot, mContext,
                    mHandler, mDisplayAdapterListener);
            if (mVirtualDisplayAdapter != null) {
                registerDisplayAdapterLocked(mVirtualDisplayAdapter);
            }
        }
    }
  • After the display device is created, it is handled by handleDisplayDeviceAddedLocked,
    which tells the upper layers of Android that a new display has been added.
    private void handleDisplayDeviceAddedLocked(DisplayDevice device) {
        DisplayDeviceInfo info = device.getDisplayDeviceInfoLocked();
        if (mDisplayDevices.contains(device)) {
            Slog.w(TAG, "Attempted to add already added display device: " + info);
            return;
        }

        Slog.i(TAG, "Display device added: " + info);
        device.mDebugLastLoggedDeviceInfo = info;

        mDisplayDevices.add(device);
        LogicalDisplay display = addLogicalDisplayLocked(device);
        Runnable work = updateDisplayStateLocked(device);
        if (work != null) {
            work.run();
        }
        scheduleTraversalLocked(false);
    }

When a display is added, its DisplayDeviceInfo is fetched first, and the device is added to mDisplayDevices.
Then addLogicalDisplayLocked creates a corresponding logical display, and updateDisplayStateLocked updates the display's state so it stays in sync with the native VirtualDisplay.

VirtualDisplayAdapter's createVirtualDisplayLocked method:

* frameworks/base/services/core/java/com/android/server/display/VirtualDisplayAdapter.java

    public DisplayDevice createVirtualDisplayLocked(IVirtualDisplayCallback callback,
            IMediaProjection projection, int ownerUid, String ownerPackageName, String name,
            int width, int height, int densityDpi, Surface surface, int flags, String uniqueId) {
        boolean secure = (flags & VIRTUAL_DISPLAY_FLAG_SECURE) != 0;
        IBinder appToken = callback.asBinder();
        IBinder displayToken = mSurfaceControlDisplayFactory.createDisplay(name, secure);
        final String baseUniqueId =
                UNIQUE_ID_PREFIX + ownerPackageName + "," + ownerUid + "," + name + ",";
        final int uniqueIndex = getNextUniqueIndex(baseUniqueId);
        if (uniqueId == null) {
            uniqueId = baseUniqueId + uniqueIndex;
        } else {
            uniqueId = UNIQUE_ID_PREFIX + ownerPackageName + ":" + uniqueId;
        }
        VirtualDisplayDevice device = new VirtualDisplayDevice(displayToken, appToken,
                ownerUid, ownerPackageName, name, width, height, densityDpi, surface, flags,
                new Callback(callback, mHandler), uniqueId, uniqueIndex);

        mVirtualDisplayDevices.put(appToken, device);

        try {
            if (projection != null) {
                projection.registerCallback(new MediaProjectionCallback(appToken));
            }
            appToken.linkToDeath(device, 0);
        } catch (RemoteException ex) {
            mVirtualDisplayDevices.remove(appToken);
            device.destroyLocked(false);
            return null;
        }

        // Return the display device without actually sending the event indicating
        // that it was added.  The caller will handle it.
        return device;
    }
  • First, mSurfaceControlDisplayFactory creates a displayToken; this is actually the token of the native virtual display.
  • In the end, what VirtualDisplayAdapter creates is a VirtualDisplayDevice.
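The unique-ID construction above can be captured in a few lines (UNIQUE_ID_PREFIX is "virtual:" in VirtualDisplayAdapter; the helper shape is ours):

```java
// Sketch of VirtualDisplayAdapter's unique-ID construction.
// UNIQUE_ID_PREFIX matches the real constant; the helper method is hypothetical.
public class UniqueIds {
    static final String UNIQUE_ID_PREFIX = "virtual:";

    // With no requested ID, the adapter builds one from package, uid, name and
    // a per-base index; otherwise it namespaces the caller-supplied ID.
    static String uniqueId(String pkg, int uid, String name,
                           String requested, int nextIndex) {
        if (requested == null) {
            return UNIQUE_ID_PREFIX + pkg + "," + uid + "," + name + "," + nextIndex;
        }
        return UNIQUE_ID_PREFIX + pkg + ":" + requested;
    }

    public static void main(String[] args) {
        System.out.println(uniqueId("com.example", 1000, "cap", null, 0));
    }
}
```

Namespacing by package keeps one app's requested ID from colliding with another's.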

mSurfaceControlDisplayFactory here is just a thin wrapper around a SurfaceControl call:

    public VirtualDisplayAdapter(DisplayManagerService.SyncRoot syncRoot,
            Context context, Handler handler, Listener listener) {
        this(syncRoot, context, handler, listener,
                (String name, boolean secure) -> SurfaceControl.createDisplay(name, secure));
    }

SurfaceControl's createDisplay mainly calls into native code to create the native virtual display:

* frameworks/base/core/java/android/view/SurfaceControl.java

    public static IBinder createDisplay(String name, boolean secure) {
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        return nativeCreateDisplay(name, secure);
    }

This completes the Java-side VirtualDisplay creation flow.

The Java-side class relationships of VirtualDisplay:

(Figure: VirtualDisplay class diagram)

A quick recap:

  • Android provides DMS to manage the system's displays.
  • DisplayManagerGlobal is the one and only client-side proxy of DMS.
  • Applications talk to DMS through DisplayManager.
  • Each display is described by a corresponding LogicalDisplay.
  • A concrete display is described by a DisplayDevice; the system has several kinds, and VirtualDisplayDevice is just one of them.
  • Each kind has its own adapter; VirtualDisplayAdapter corresponds to VirtualDisplayDevice.

With the Java-side flow done, let's look at the native side. The main question we care about is still how ImageReader gets the display's frame data — clearly we don't have the answer yet.

Creating the VirtualDisplay in Native Code

The JNI implementation of nativeCreateDisplay:

* android_view_SurfaceControl.cpp

static jobject nativeCreateDisplay(JNIEnv* env, jclass clazz, jstring nameObj,
        jboolean secure) {
    ScopedUtfChars name(env, nameObj);
    sp<IBinder> token(SurfaceComposerClient::createDisplay(
            String8(name.c_str()), bool(secure)));
    return javaObjectForIBinder(env, token);
}

Ultimately it is created through SurfaceComposerClient:

sp<IBinder> SurfaceComposerClient::createDisplay(const String8& displayName, bool secure) {
    return ComposerService::getComposerService()->createDisplay(displayName,
            secure);
}

The server side of ComposerService is SurfaceFlinger:

sp<IBinder> SurfaceFlinger::createDisplay(const String8& displayName,
        bool secure)
{
    class DisplayToken : public BBinder {
        sp<SurfaceFlinger> flinger;
        virtual ~DisplayToken() {
             // no more references, this display must be terminated
             Mutex::Autolock _l(flinger->mStateLock);
             flinger->mCurrentState.displays.removeItem(this);
             flinger->setTransactionFlags(eDisplayTransactionNeeded);
         }
     public:
        explicit DisplayToken(const sp<SurfaceFlinger>& flinger)
            : flinger(flinger) {
        }
    };

    sp<BBinder> token = new DisplayToken(this);

    Mutex::Autolock _l(mStateLock);
    DisplayDeviceState info(DisplayDevice::DISPLAY_VIRTUAL, secure);
    info.displayName = displayName;
    mCurrentState.displays.add(token, info);
    mInterceptor.saveDisplayCreation(info);
    return token;
}

When creating the display, SurfaceFlinger creates a DisplayToken — the token we talked about on the Java side — and then adds it to mCurrentState.displays. The created display lives in displays.

The native flow is simple, but we still haven't seen how the data flows. Be patient — let's see where our Surface went.

Data Flow Analysis

When DisplayManager creates the display there is an mSurface, which came from ImageReader:

static jobject ImageReader_getSurface(JNIEnv* env, jobject thiz)
{
    ALOGV("%s: ", __FUNCTION__);

    IGraphicBufferProducer* gbp = ImageReader_getProducer(env, thiz);
    if (gbp == NULL) {
        jniThrowRuntimeException(env, "Buffer consumer is uninitialized");
        return NULL;
    }

    // Wrap the IGBP in a Java-language Surface.
    return android_view_Surface_createFromIGraphicBufferProducer(env, gbp);
}

So the Surface wraps an IGraphicBufferProducer. Now walk the VirtualDisplay creation flow again, tracking only where the Surface goes: it ends up in VirtualDisplayDevice's mSurface. So when does it actually get pushed down?

Look again at DMS's handleDisplayDeviceAddedLocked — notice the call to scheduleTraversalLocked?

This traversal notifies WMS, which then loops back into DMS via performTraversalInTransactionFromWindowManager. Finally, in performTraversalInTransactionLocked, each device's performTraversalInTransactionLocked method is called.

    private void performTraversalInTransactionLocked() {
        // Clear all viewports before configuring displays so that we can keep
        // track of which ones we have configured.
        clearViewportsLocked();

        // Configure each display device.
        final int count = mDisplayDevices.size();
        for (int i = 0; i < count; i++) {
            DisplayDevice device = mDisplayDevices.get(i);
            configureDisplayInTransactionLocked(device);
            device.performTraversalInTransactionLocked();
        }

        // Tell the input system about these new viewports.
        if (mInputManagerInternal != null) {
            mHandler.sendEmptyMessage(MSG_UPDATE_VIEWPORT);
        }
    }

VirtualDisplayDevice's performTraversalInTransactionLocked:

        public void performTraversalInTransactionLocked() {
            if ((mPendingChanges & PENDING_RESIZE) != 0) {
                SurfaceControl.setDisplaySize(getDisplayTokenLocked(), mWidth, mHeight);
            }
            if ((mPendingChanges & PENDING_SURFACE_CHANGE) != 0) {
                setSurfaceInTransactionLocked(mSurface);
            }
            mPendingChanges = 0;
        }

The PENDING_SURFACE_CHANGE seed was planted back when the VirtualDisplayDevice was constructed:

        public VirtualDisplayDevice(IBinder displayToken, IBinder appToken,
                int ownerUid, String ownerPackageName,
                String name, int width, int height, int densityDpi, Surface surface, int flags,
                Callback callback, String uniqueId, int uniqueIndex) {
            super(VirtualDisplayAdapter.this, displayToken, uniqueId);
            mAppToken = appToken;
            mOwnerUid = ownerUid;
            mOwnerPackageName = ownerPackageName;
            mName = name;
            mWidth = width;
            mHeight = height;
            mMode = createMode(width, height, REFRESH_RATE);
            mDensityDpi = densityDpi;
            mSurface = surface;
            mFlags = flags;
            mCallback = callback;
            mDisplayState = Display.STATE_UNKNOWN;
            mPendingChanges |= PENDING_SURFACE_CHANGE;
            mUniqueIndex = uniqueIndex;
        }

Checks out. setSurfaceInTransactionLocked then passes the Surface, via SurfaceControl, down to the native virtual display:

    public final void setSurfaceInTransactionLocked(Surface surface) {
        if (mCurrentSurface != surface) {
            mCurrentSurface = surface;
            SurfaceControl.setDisplaySurface(mDisplayToken, surface);
        }
    }
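The pending-change handshake above reduces to a classic dirty-flag pattern; a minimal sketch (hypothetical names):

```java
// Sketch of VirtualDisplayDevice's pending-change bitmask (names hypothetical).
// Flags accumulate via |= and are consumed in a single traversal pass.
public class PendingChanges {
    static final int PENDING_SURFACE_CHANGE = 1 << 0;
    static final int PENDING_RESIZE         = 1 << 1;

    private int pending;
    int surfacePushCount = 0; // stands in for setSurfaceInTransactionLocked

    PendingChanges() {
        // Set in the constructor, exactly like VirtualDisplayDevice.
        pending |= PENDING_SURFACE_CHANGE;
    }

    // Mirrors performTraversalInTransactionLocked: act on set flags, then clear.
    void performTraversal() {
        if ((pending & PENDING_SURFACE_CHANGE) != 0) surfacePushCount++;
        pending = 0;
    }

    public static void main(String[] args) {
        PendingChanges d = new PendingChanges();
        d.performTraversal(); // pushes the surface once
        d.performTraversal(); // no-op: flag already cleared
        System.out.println(d.surfacePushCount);
    }
}
```

Clearing all flags at the end of the pass is what makes later traversals cheap no-ops until something changes again.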

SurfaceControl has an sGlobalTransaction; the Surface is temporarily staged in it:

    public static void setDisplaySurface(IBinder displayToken, Surface surface) {
        synchronized (SurfaceControl.class) {
            sGlobalTransaction.setDisplaySurface(displayToken, surface);
        }
    }

sGlobalTransaction takes effect at closeTransaction, which here is invoked by WMS. openTransaction and closeTransaction come in pairs — one opens, one closes — and the staged state is applied on close:

    private static void closeTransaction(boolean sync) {
        synchronized(SurfaceControl.class) {
            if (sTransactionNestCount == 0) {
                Log.e(TAG, "Call to SurfaceControl.closeTransaction without matching openTransaction");
            } else if (--sTransactionNestCount > 0) {
                return;
            }
            sGlobalTransaction.apply(sync);
        }
    }
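The nesting rule can be modeled with a counter (hypothetical class): only the outermost closeTransaction applies the batched state.

```java
// Sketch of SurfaceControl's open/closeTransaction nesting (names hypothetical):
// state batched between the calls is applied only when the nest count
// returns to zero.
public class TransactionNesting {
    private int nestCount = 0;
    int applyCount = 0; // stands in for sGlobalTransaction.apply()

    void openTransaction() { nestCount++; }

    void closeTransaction() {
        if (nestCount == 0) return;  // unmatched close: only logged in the real code
        if (--nestCount > 0) return; // inner close: keep batching
        applyCount++;                // outermost close: apply the transaction
    }

    public static void main(String[] args) {
        TransactionNesting t = new TransactionNesting();
        t.openTransaction();
        t.openTransaction();  // nested open
        t.closeTransaction(); // inner close: nothing applied yet
        t.closeTransaction(); // outer close: apply fires once
        System.out.println(t.applyCount);
    }
}
```

This batching is why our setDisplaySurface doesn't reach SurfaceFlinger until WMS closes its transaction.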

The apply method:

        public void apply(boolean sync) {
            applyResizedSurfaces();
            nativeApplyTransaction(mNativeObject, sync);
        }
  • applyResizedSurfaces first flushes any pending surface resizes.
  • nativeApplyTransaction then hands the transaction down to native code.

The JNI layer converts the Java Transaction into our native Transaction:

static void nativeApplyTransaction(JNIEnv* env, jclass clazz, jlong transactionObj, jboolean sync) {
    auto transaction = reinterpret_cast<SurfaceComposerClient::Transaction*>(transactionObj);
    transaction->apply(sync);
}

And our setDisplaySurface ultimately ends up here:

status_t SurfaceComposerClient::Transaction::setDisplaySurface(const sp<IBinder>& token,
        const sp<IGraphicBufferProducer>& bufferProducer) {
    if (bufferProducer.get() != nullptr) {
        // Make sure that composition can never be stalled by a virtual display
        // consumer that isn't processing buffers fast enough.
        status_t err = bufferProducer->setAsyncMode(true);
        if (err != NO_ERROR) {
            ALOGE("Composer::setDisplaySurface Failed to enable async mode on the "
                    "BufferQueue. This BufferQueue cannot be used for virtual "
                    "display. (%d)", err);
            return err;
        }
    }
    DisplayState& s(getDisplayStateLocked(token));
    s.surface = bufferProducer;
    s.what |= DisplayState::eSurfaceChanged;
    return NO_ERROR;
}

Let's look straight at the handling in SurfaceFlinger; note that our what flag is DisplayState::eSurfaceChanged.

void SurfaceFlinger::setTransactionState(
        const Vector<ComposerState>& state,
        const Vector<DisplayState>& displays,
        uint32_t flags)
{
    ... ...

    size_t count = displays.size();
    for (size_t i=0 ; i<count ; i++) {
        const DisplayState& s(displays[i]);
        transactionFlags |= setDisplayStateLocked(s);
    }
    ... ...
}

In setTransactionState, SurfaceFlinger calls setDisplayStateLocked for each display state:

uint32_t SurfaceFlinger::setDisplayStateLocked(const DisplayState& s)
{
    ssize_t dpyIdx = mCurrentState.displays.indexOfKey(s.token);
    if (dpyIdx < 0)
        return 0;

    uint32_t flags = 0;
    DisplayDeviceState& disp(mCurrentState.displays.editValueAt(dpyIdx));
    if (disp.isValid()) {
        const uint32_t what = s.what;
        if (what & DisplayState::eSurfaceChanged) {
            if (IInterface::asBinder(disp.surface) != IInterface::asBinder(s.surface)) {
                disp.surface = s.surface;
                flags |= eDisplayTransactionNeeded;
            }
        }

Remember that the token of the VirtualDisplay we created earlier was added to mCurrentState.displays? Now we edit that entry, handing the Surface from ImageReader over to disp.surface.

Oops~ Remember: our Surface is now held by disp.surface inside mCurrentState.displays.

After setTransactionState completes, setTransactionFlags triggers SurfaceFlinger to run. SurfaceFlinger then processes the transaction, which brings us to handleTransaction.

We only follow the display-related part of the flow — and this is where the heavy lifting happens.

void SurfaceFlinger::handleTransactionLocked(uint32_t transactionFlags)
{
            ... ...
            // find displays that were added
            // (ie: in current state but not in drawing state)
            for (size_t i=0 ; i<cc ; i++) {
                if (draw.indexOfKey(curr.keyAt(i)) < 0) {
                    const DisplayDeviceState& state(curr[i]);

                    sp<DisplaySurface> dispSurface;
                    sp<IGraphicBufferProducer> producer;
                    sp<IGraphicBufferProducer> bqProducer;
                    sp<IGraphicBufferConsumer> bqConsumer;
                    BufferQueue::createBufferQueue(&bqProducer, &bqConsumer);

                    int32_t hwcId = -1;
                    if (state.isVirtualDisplay()) {
                        // Virtual displays without a surface are dormant:
                        // they have external state (layer stack, projection,
                        // etc.) but no internal state (i.e. a DisplayDevice).
                        if (state.surface != NULL) {

                            // Allow VR composer to use virtual displays.
                            if (mUseHwcVirtualDisplays || getBE().mHwc->isUsingVrComposer()) {
                                ... ... // this path isn't taken in our case; skip it for now
                            }

                            sp<VirtualDisplaySurface> vds =
                                    new VirtualDisplaySurface(*getBE().mHwc,
                                            hwcId, state.surface, bqProducer,
                                            bqConsumer, state.displayName);

                            dispSurface = vds;
                            producer = vds;
                        }
                    } else {
                        ... ... // primary display path; not our concern here
                    }

                    const wp<IBinder>& display(curr.keyAt(i));
                    if (dispSurface != NULL) {
                        sp<DisplayDevice> hw =
                                new DisplayDevice(this, state.type, hwcId, state.isSecure, display,
                                                  dispSurface, producer, hasWideColorDisplay);
                        hw->setLayerStack(state.layerStack);
                        hw->setProjection(state.orientation,
                                state.viewport, state.frame);
                        hw->setDisplayName(state.displayName);
                        mDisplays.add(display, hw);
                        if (!state.isVirtualDisplay()) {
                            mEventThread->onHotplugReceived(state.type, true);
                        }
                    }
                }
            }
  • For each newly added entry in the current state, its DisplayDeviceState is taken out.
  • A BufferQueue is created via createBufferQueue; be careful to distinguish producer from bqProducer here.
  • If it is a virtual display and state.surface is non-null, a VirtualDisplaySurface is created; note that both dispSurface and producer are the newly created VirtualDisplaySurface object vds.
  • Finally, the DisplayDevice object hw is created, initialized, and added to mDisplays.
  • Also note that hwcId here is -1.

This introduces two important heavyweight classes: DisplayDevice and VirtualDisplaySurface.

Let's look at VirtualDisplaySurface first:

class VirtualDisplaySurface : public DisplaySurface,
                              public BnGraphicBufferProducer,
                              private ConsumerBase {

Impressive, our VirtualDisplaySurface: it inherits both BnGraphicBufferProducer and ConsumerBase, acting as producer and consumer at the same time.

First, the VirtualDisplaySurface constructor:

* frameworks/native/services/surfaceflinger/DisplayHardware/VirtualDisplaySurface.cpp

VirtualDisplaySurface::VirtualDisplaySurface(HWComposer& hwc, int32_t dispId,
        const sp<IGraphicBufferProducer>& sink,
        const sp<IGraphicBufferProducer>& bqProducer,
        const sp<IGraphicBufferConsumer>& bqConsumer,
        const String8& name)
:   ConsumerBase(bqConsumer),
    mHwc(hwc),
    mDisplayId(dispId),
    mDisplayName(name),
    mSource{},
    mDefaultOutputFormat(HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED),
    mOutputFormat(HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED),
    mOutputUsage(GRALLOC_USAGE_HW_COMPOSER),
    mProducerSlotSource(0),
    mProducerBuffers(),
    mQueueBufferOutput(),
    mSinkBufferWidth(0),
    mSinkBufferHeight(0),
    mCompositionType(COMPOSITION_UNKNOWN),
    mFbFence(Fence::NO_FENCE),
    mOutputFence(Fence::NO_FENCE),
    mFbProducerSlot(BufferQueue::INVALID_BUFFER_SLOT),
    mOutputProducerSlot(BufferQueue::INVALID_BUFFER_SLOT),
    mDbgState(DBG_STATE_IDLE),
    mDbgLastCompositionType(COMPOSITION_UNKNOWN),
    mMustRecompose(false),
    mForceHwcCopy(SurfaceFlinger::useHwcForRgbToYuv)
{
    mSource[SOURCE_SINK] = sink;
    mSource[SOURCE_SCRATCH] = bqProducer;

    resetPerFrameState();

    int sinkWidth, sinkHeight;
    sink->query(NATIVE_WINDOW_WIDTH, &sinkWidth);
    sink->query(NATIVE_WINDOW_HEIGHT, &sinkHeight);
    mSinkBufferWidth = sinkWidth;
    mSinkBufferHeight = sinkHeight;

    // Pick the buffer format to request from the sink when not rendering to it
    // with GLES. If the consumer needs CPU access, use the default format
    // set by the consumer. Otherwise allow gralloc to decide the format based
    // on usage bits.
    int sinkUsage;
    sink->query(NATIVE_WINDOW_CONSUMER_USAGE_BITS, &sinkUsage);
    if (sinkUsage & (GRALLOC_USAGE_SW_READ_MASK | GRALLOC_USAGE_SW_WRITE_MASK)) {
        int sinkFormat;
        sink->query(NATIVE_WINDOW_FORMAT, &sinkFormat);
        mDefaultOutputFormat = sinkFormat;
    } else {
        mDefaultOutputFormat = HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED;
    }
    mOutputFormat = mDefaultOutputFormat;

    ConsumerBase::mName = String8::format("VDS: %s", mDisplayName.string());
    mConsumer->setConsumerName(ConsumerBase::mName);
    mConsumer->setConsumerUsageBits(GRALLOC_USAGE_HW_COMPOSER);
    mConsumer->setDefaultBufferSize(sinkWidth, sinkHeight);
    sink->setAsyncMode(true);
    IGraphicBufferProducer::QueueBufferOutput output;
    mSource[SOURCE_SCRATCH]->connect(NULL, NATIVE_WINDOW_API_EGL, false, &output);
}
  • The Surface coming over from ImageReader is saved in mSource[SOURCE_SINK].
  • The producer of the newly created BufferQueue is saved in mSource[SOURCE_SCRATCH].
  • The consumer of the new BufferQueue goes to mConsumer.
  • mDisplayId is -1.

See it? The key point here is the two BufferQueues. One belongs to the ImageReader, the other to the VirtualDisplay, that is, to the DisplayDevice.

The DisplayDevice's BufferQueue, which we will informally call the DisplayBufferQueue, is mainly used for GLES composition; the composited data is queued into it.

The ImageReader's BufferQueue, which we will informally call the ReaderBufferQueue, is used for reading the data out: composited data is queued into the DisplayBufferQueue and then queued on to the ReaderBufferQueue.

In fact, the DisplayBufferQueue's buffers are themselves dequeued from the ReaderBufferQueue. This becomes clear once we look at VirtualDisplaySurface's dequeueBuffer and queueBuffer methods.

* frameworks/native/services/surfaceflinger/DisplayHardware/VirtualDisplaySurface.cpp

status_t VirtualDisplaySurface::dequeueBuffer(int* pslot, sp<Fence>* fence, uint32_t w, uint32_t h,
                                              PixelFormat format, uint64_t usage,
                                              uint64_t* outBufferAge,
                                              FrameEventHistoryDelta* outTimestamps) {
    if (mDisplayId < 0) {
        return mSource[SOURCE_SINK]->dequeueBuffer(pslot, fence, w, h, format, usage, outBufferAge,
                                                   outTimestamps);
    }
    ... ...
}


status_t VirtualDisplaySurface::queueBuffer(int pslot,
        const QueueBufferInput& input, QueueBufferOutput* output) {
    if (mDisplayId < 0)
        return mSource[SOURCE_SINK]->queueBuffer(pslot, input, output);
    ... ...
}

See? Because mDisplayId is -1 when ImageReader is in use, both calls go straight to mSource[SOURCE_SINK], that is, the ImageReader's BufferQueue. Let's dump a stack in dequeueBuffer and queueBuffer to confirm.

VirtualDisplaySurface's dequeueBuffer stack:

01-03 13:53:16.709   265   265 D VirtualDisplaySurface_queueBuffer1: #00 pc 0006f6db  /system/lib/libsurfaceflinger.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #01 pc 00054cb3  /system/lib/libgui.so (android::Surface::dequeueBuffer(ANativeWindowBuffer**, int*)+346)
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #02 pc 0069e648  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #03 pc 00345970  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #04 pc 0034582c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #05 pc 00633650  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #06 pc 0062b30c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #07 pc 00633474  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #08 pc 00627d3c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #09 pc 0062a820  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #10 pc 000737c9  /system/lib/libsurfaceflinger.so
01-03 13:53:16.710   265   265 D VirtualDisplaySurface_queueBuffer1: #11 pc 000721b1  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #12 pc 00079fad  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #13 pc 0007ab59  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #14 pc 000797cf  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #15 pc 00078629  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #16 pc 00078411  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #17 pc 000100a3  /system/lib/libutils.so (android::Looper::pollInner(int)+294)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #18 pc 0000fee5  /system/lib/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+32)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #19 pc 00061ba7  /system/lib/libsurfaceflinger.so
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #20 pc 000773d1  /system/lib/libsurfaceflinger.so (android::SurfaceFlinger::run()+8)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #21 pc 00002141  /system/bin/surfaceflinger
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #22 pc 000774a9  /system/lib/libc.so (__libc_init+48)
01-03 13:53:16.711   265   265 D VirtualDisplaySurface_queueBuffer1: #23 pc 00001df4  /system/bin/surfaceflinger

VirtualDisplaySurface's queueBuffer stack:

01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #01 pc 00055423  /system/lib/libgui.so (android::Surface::queueBuffer(ANativeWindowBuffer*, int)+594)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #02 pc 0069eb38  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #03 pc 0034628c  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #04 pc 00346f60  /vendor/lib/egl/libGLES_mali.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #05 pc 00346930  /vendor/lib/egl/libGLES_mali.so (eglp_swap_buffers+740)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #06 pc 0000ca29  /system/lib/libEGL.so (eglSwapBuffersWithDamageKHR+236)
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #07 pc 0005135d  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #08 pc 0007ab71  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #09 pc 000797cf  /system/lib/libsurfaceflinger.so
01-03 13:53:16.774   265   265 D VirtualDisplaySurface_queueBuffer: #10 pc 00078629  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #11 pc 00078411  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #12 pc 000100a3  /system/lib/libutils.so (android::Looper::pollInner(int)+294)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #13 pc 0000fee5  /system/lib/libutils.so (android::Looper::pollOnce(int, int*, int*, void**)+32)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #14 pc 00061ba7  /system/lib/libsurfaceflinger.so
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #15 pc 000773d1  /system/lib/libsurfaceflinger.so (android::SurfaceFlinger::run()+8)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #16 pc 00002141  /system/bin/surfaceflinger
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #17 pc 000774a9  /system/lib/libc.so (__libc_init+48)
01-03 13:53:16.775   265   265 D VirtualDisplaySurface_queueBuffer: #18 pc 00001df4  /system/bin/surfaceflinger

We won't go through the composition flow here; take some time to trace this data flow carefully.

How ImageReader gets the data

Now back to the test, to see how ImageReader actually obtains the data.

After the composited data is queued over, the JNIImageReaderContext listener's onFrameAvailable is called back:

void JNIImageReaderContext::onFrameAvailable(const BufferItem& /*item*/)
{
    ALOGV("%s: frame available", __FUNCTION__);
    bool needsDetach = false;
    JNIEnv* env = getJNIEnv(&needsDetach);
    if (env != NULL) {
        env->CallStaticVoidMethod(mClazz, gImageReaderClassInfo.postEventFromNative, mWeakThiz);
    } else {
        ALOGW("onFrameAvailable event will not posted");
    }
    if (needsDetach) {
        detachJNI();
    }
}

postEventFromNative is a Java method:

    private static void postEventFromNative(Object selfRef) {
        @SuppressWarnings("unchecked")
        WeakReference<ImageReader> weakSelf = (WeakReference<ImageReader>)selfRef;
        final ImageReader ir = weakSelf.get();
        if (ir == null) {
            return;
        }

        final Handler handler;
        synchronized (ir.mListenerLock) {
            handler = ir.mListenerHandler;
        }
        if (handler != null) {
            handler.sendEmptyMessage(0);
        }
    }

The handler here is a ListenerHandler:

    private final class ListenerHandler extends Handler {
        public ListenerHandler(Looper looper) {
            super(looper, null, true /*async*/);
        }

        @Override
        public void handleMessage(Message msg) {
            OnImageAvailableListener listener;
            synchronized (mListenerLock) {
                listener = mListener;
            }

            // It's dangerous to fire onImageAvailable() callback when the ImageReader is being
            // closed, as application could acquire next image in the onImageAvailable() callback.
            boolean isReaderValid = false;
            synchronized (mCloseLock) {
                isReaderValid = mIsReaderValid;
            }
            if (listener != null && isReaderValid) {
                listener.onImageAvailable(ImageReader.this);
            }
        }
    }

This finally calls back into onImageAvailable in the test code:

        public void onImageAvailable(ImageReader reader) {
            mImageReaderLock.lock();
            try {
                if (reader != mImageReader) {
                    return;
                }

                Log.d(TAG, "New image available from virtual display.");

                // Get the latest buffer.
                Image image = reader.acquireLatestImage();
                if (image != null) {
                    try {
                        // Scan for colors.
                        int color = scanImage(image);
                        synchronized (this) {
                            if (mColor != color) {
                                mColor = color;
                                notifyAll();
                            }
                        }
                    } finally {
                        image.close();
                    }
                }
            } finally {
                mImageReaderLock.unlock();
            }
        }

See? Isn't this similar to how Layers are handled in SurfaceFlinger? The image is pulled through ImageReader's acquireLatestImage:

    public Image acquireLatestImage() {
        Image image = acquireNextImage();
        if (image == null) {
            return null;
        }
        try {
            for (;;) {
                Image next = acquireNextImageNoThrowISE();
                if (next == null) {
                    Image result = image;
                    image = null;
                    return result;
                }
                image.close();
                image = next;
            }
        } finally {
            if (image != null) {
                image.close();
            }
        }
    }

The loop here exists to obtain the very latest frame. acquireNextImage and acquireNextImageNoThrowISE are similar; the only difference is that one throws exceptions and the other does not.

    public Image acquireNextImage() {
        // Initialize with reader format, but can be overwritten by native if the image
        // format is different from the reader format.
        SurfaceImage si = new SurfaceImage(mFormat);
        int status = acquireNextSurfaceImage(si);

        switch (status) {
            case ACQUIRE_SUCCESS:
                return si;
            case ACQUIRE_NO_BUFS:
                return null;
            case ACQUIRE_MAX_IMAGES:
                throw new IllegalStateException(
                        String.format(
                                "maxImages (%d) has already been acquired, " +
                                "call #close before acquiring more.", mMaxImages));
            default:
                throw new AssertionError("Unknown nativeImageSetup return code " + status);
        }
    }

On failure, acquireNextImage throws the corresponding exception.

    private int acquireNextSurfaceImage(SurfaceImage si) {
        synchronized (mCloseLock) {
            // A null image will eventually be returned if ImageReader is already closed.
            int status = ACQUIRE_NO_BUFS;
            if (mIsReaderValid) {
                status = nativeImageSetup(si);
            }

            switch (status) {
                case ACQUIRE_SUCCESS:
                    si.mIsImageValid = true;
                case ACQUIRE_NO_BUFS:
                case ACQUIRE_MAX_IMAGES:
                    break;
                default:
                    throw new AssertionError("Unknown nativeImageSetup return code " + status);
            }

            // Only keep track the successfully acquired image, as the native buffer is only mapped
            // for such case.
            if (status == ACQUIRE_SUCCESS) {
                mAcquiredImages.add(si);
            }
            return status;
        }
    }

Here we finally reach the key part: the nativeImageSetup function. Its JNI counterpart is ImageReader_imageSetup.

The ImageReader_imageSetup function:

static jint ImageReader_imageSetup(JNIEnv* env, jobject thiz, jobject image) {
    ALOGV("%s:", __FUNCTION__);
    JNIImageReaderContext* ctx = ImageReader_getContext(env, thiz);
    if (ctx == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
                "ImageReader is not initialized or was already closed");
        return -1;
    }

    BufferItemConsumer* bufferConsumer = ctx->getBufferConsumer();
    BufferItem* buffer = ctx->getBufferItem();
    if (buffer == NULL) {
        ALOGW("Unable to acquire a buffer item, very likely client tried to acquire more than"
            " maxImages buffers");
        return ACQUIRE_MAX_IMAGES;
    }

    status_t res = bufferConsumer->acquireBuffer(buffer, 0);
    if (res != OK) {
        ... ...
    }

    // Add some extra checks for non-opaque formats.
    if (!isFormatOpaque(ctx->getBufferFormat())) {
        ... ...
    }

    // Set SurfaceImage instance member variables
    Image_setBufferItem(env, image, buffer);
    env->SetLongField(image, gSurfaceImageClassInfo.mTimestamp,
            static_cast<jlong>(buffer->mTimestamp));

    return ACQUIRE_SUCCESS;
}
  • Get the JNIImageReaderContext object ctx.
  • Get the corresponding consumer, a BufferItemConsumer, from ctx.
  • Request a buffer, a BufferItem, through the BufferItemConsumer's acquireBuffer interface.
  • Associate the BufferItem with the SurfaceImage:
static void Image_setBufferItem(JNIEnv* env, jobject thiz,
        const BufferItem* buffer)
{
    env->SetLongField(thiz, gSurfaceImageClassInfo.mNativeBuffer, reinterpret_cast<jlong>(buffer));
}
  • Set the SurfaceImage's timestamp.

BufferItemConsumer's acquireBuffer function is as follows:

status_t BufferItemConsumer::acquireBuffer(BufferItem *item,
        nsecs_t presentWhen, bool waitForFence) {
    status_t err;

    if (!item) return BAD_VALUE;

    Mutex::Autolock _l(mMutex);

    err = acquireBufferLocked(item, presentWhen);
    if (err != OK) {
        if (err != NO_BUFFER_AVAILABLE) {
            BI_LOGE("Error acquiring buffer: %s (%d)", strerror(err), err);
        }
        return err;
    }

    if (waitForFence) {
        err = item->mFence->waitForever("BufferItemConsumer::acquireBuffer");
        if (err != OK) {
            BI_LOGE("Failed to wait for fence of acquired buffer: %s (%d)",
                    strerror(-err), err);
            return err;
        }
    }

    item->mGraphicBuffer = mSlots[item->mSlot].mGraphicBuffer;

    return OK;
}

waitForFence is false here, so the fence is not waited on.

The buffer is ultimately obtained through ConsumerBase's acquireBufferLocked:

status_t ConsumerBase::acquireBufferLocked(BufferItem *item,
        nsecs_t presentWhen, uint64_t maxFrameNumber) {
    if (mAbandoned) {
        CB_LOGE("acquireBufferLocked: ConsumerBase is abandoned!");
        return NO_INIT;
    }

    status_t err = mConsumer->acquireBuffer(item, presentWhen, maxFrameNumber);
    if (err != NO_ERROR) {
        return err;
    }

    if (item->mGraphicBuffer != NULL) {
        if (mSlots[item->mSlot].mGraphicBuffer != NULL) {
            freeBufferLocked(item->mSlot);
        }
        mSlots[item->mSlot].mGraphicBuffer = item->mGraphicBuffer;
    }

    mSlots[item->mSlot].mFrameNumber = item->mFrameNumber;
    mSlots[item->mSlot].mFence = item->mFence;

    CB_LOGV("acquireBufferLocked: -> slot=%d/%" PRIu64,
            item->mSlot, item->mFrameNumber);

    return OK;
}

mConsumer is the consumer of ImageReader's BufferQueue; if you don't remember, go back and look at ImageReader_init. What mConsumer's acquireBuffer returns is precisely the virtual display's composited data.

Two BufferQueues are in play here; take care not to confuse them.

Finally, the test code scans the image in scanImage:

        private int scanImage(Image image) {
            final Image.Plane plane = image.getPlanes()[0];
            final ByteBuffer buffer = plane.getBuffer();

The getPlanes function is as follows:

        public Plane[] getPlanes() {
            throwISEIfImageIsInvalid();

            if (mPlanes == null) {
                mPlanes = nativeCreatePlanes(ImageReader.this.mNumPlanes, ImageReader.this.mFormat);
            }
            // Shallow copy is fine.
            return mPlanes.clone();
        }

The corresponding JNI function is Image_createSurfacePlanes:

static jobjectArray Image_createSurfacePlanes(JNIEnv* env, jobject thiz,
        int numPlanes, int readerFormat)
{
    ... ...

    jobjectArray surfacePlanes = env->NewObjectArray(numPlanes, gSurfacePlaneClassInfo.clazz,
            /*initial_element*/NULL);
    ... ...

    LockedImage lockedImg = LockedImage();
    Image_getLockedImage(env, thiz, &lockedImg);
    if (env->ExceptionCheck()) {
        return NULL;
    }
    // Create all SurfacePlanes
    for (int i = 0; i < numPlanes; i++) {
        Image_getLockedImageInfo(env, &lockedImg, i, halReaderFormat,
                &pData, &dataSize, &pixelStride, &rowStride);
        byteBuffer = env->NewDirectByteBuffer(pData, dataSize);
        if ((byteBuffer == NULL) && (env->ExceptionCheck() == false)) {
            jniThrowException(env, "java/lang/IllegalStateException",
                    "Failed to allocate ByteBuffer");
            return NULL;
        }

        // Finally, create this SurfacePlane.
        jobject surfacePlane = env->NewObject(gSurfacePlaneClassInfo.clazz,
                    gSurfacePlaneClassInfo.ctor, thiz, rowStride, pixelStride, byteBuffer);
        env->SetObjectArrayElement(surfacePlanes, i, surfacePlane);
    }

    return surfacePlanes;
}
  • First, Image_getLockedImage is called to produce a LockedImage.
  • Then Image_getLockedImageInfo retrieves the LockedImage's data and stores it in a byteBuffer object.
  • A SurfacePlane is created from the byteBuffer.
    This way the data reaches the Java layer, inside the SurfacePlane, as its mBuffer.

The LockedImage is produced by the Image_getLockedImage function:

static void Image_getLockedImage(JNIEnv* env, jobject thiz, LockedImage *image) {
    ALOGV("%s", __FUNCTION__);
    BufferItem* buffer = Image_getBufferItem(env, thiz);
    if (buffer == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException",
                "Image is not initialized");
        return;
    }

    status_t res = lockImageFromBuffer(buffer,
            GRALLOC_USAGE_SW_READ_OFTEN, buffer->mFence->dup(), image);
    if (res != OK) {
        jniThrowExceptionFmt(env, "java/lang/RuntimeException",
                "lock buffer failed for format 0x%x",
                buffer->mGraphicBuffer->getPixelFormat());
        return;
    }

    // Carry over some fields from BufferItem.
    image->crop        = buffer->mCrop;
    image->transform   = buffer->mTransform;
    image->scalingMode = buffer->mScalingMode;
    image->timestamp   = buffer->mTimestamp;
    image->dataSpace   = buffer->mDataSpace;
    image->frameNumber = buffer->mFrameNumber;

    ALOGV("%s: Successfully locked the image", __FUNCTION__);
    // crop, transform, scalingMode, timestamp, and frameNumber should be set by producer,
    // and we don't set them here.
}
  • First get the BufferItem, via Image_getBufferItem.
  • Then lock the image out of the BufferItem, via lockImageFromBuffer.

The lockImageFromBuffer function is as follows:

status_t lockImageFromBuffer(BufferItem* bufferItem, uint32_t inUsage,
        int fenceFd, LockedImage* outputImage) {
    ALOGV("%s: Try to lock the BufferItem", __FUNCTION__);
    if (bufferItem == nullptr || outputImage == nullptr) {
        ALOGE("Input BufferItem or output LockedImage is NULL!");
        return BAD_VALUE;
    }

    status_t res = lockImageFromBuffer(bufferItem->mGraphicBuffer, inUsage, bufferItem->mCrop,
            fenceFd, outputImage);
    if (res != OK) {
        ALOGE("%s: lock graphic buffer failed", __FUNCTION__);
        return res;
    }

    outputImage->crop        = bufferItem->mCrop;
    outputImage->transform   = bufferItem->mTransform;
    outputImage->scalingMode = bufferItem->mScalingMode;
    outputImage->timestamp   = bufferItem->mTimestamp;
    outputImage->dataSpace   = bufferItem->mDataSpace;
    outputImage->frameNumber = bufferItem->mFrameNumber;
    ALOGV("%s: Successfully locked the image from the BufferItem", __FUNCTION__);
    return OK;
}
  • The GraphicBuffer overload of lockImageFromBuffer produces the outputImage we need.
  • The corresponding descriptive fields are then copied over.

The GraphicBuffer overload of lockImageFromBuffer:

status_t lockImageFromBuffer(sp<GraphicBuffer> buffer, uint32_t inUsage,
        const Rect& rect, int fenceFd, LockedImage* outputImage) {
    ... ...

    void* pData = NULL;
    android_ycbcr ycbcr = android_ycbcr();
    status_t res;
    int format = buffer->getPixelFormat();
    int flexFormat = format;
    if (isPossiblyYUV(format)) {
        res = buffer->lockAsyncYCbCr(inUsage, rect, &ycbcr, fenceFd);
        pData = ycbcr.y;
        flexFormat = HAL_PIXEL_FORMAT_YCbCr_420_888;
    }

    // lockAsyncYCbCr for YUV is unsuccessful.
    if (pData == NULL) {
        res = buffer->lockAsync(inUsage, rect, &pData, fenceFd);
        if (res != OK) {
            ALOGE("Lock buffer failed!");
            return res;
        }
    }

    outputImage->data = reinterpret_cast<uint8_t*>(pData);
    outputImage->width = buffer->getWidth();
    outputImage->height = buffer->getHeight();
    outputImage->format = format;
    outputImage->flexFormat = flexFormat;
    outputImage->stride =
            (ycbcr.y != NULL) ? static_cast<uint32_t>(ycbcr.ystride) : buffer->getStride();

    outputImage->dataCb = reinterpret_cast<uint8_t*>(ycbcr.cb);
    outputImage->dataCr = reinterpret_cast<uint8_t*>(ycbcr.cr);
    outputImage->chromaStride = static_cast<uint32_t>(ycbcr.cstride);
    outputImage->chromaStep = static_cast<uint32_t>(ycbcr.chroma_step);
    ALOGV("%s: Successfully locked the image from the GraphicBuffer", __FUNCTION__);
    // Crop, transform, scalingMode, timestamp, and frameNumber should be set by caller,
    // and cann't be set them here.
    return OK;
}
  • The GraphicBuffer object buffer here is the VirtualDisplay's composited data; outputImage is the data we need to produce.
  • The YUV format is tried first, obtaining the data addresses through GraphicBuffer's lockAsyncYCbCr interface.
  • If the YUV lock fails, the RGB path, lockAsync, is used instead.
  • outputImage->data is the start address of the data.

After Image_getLockedImageInfo does its work, our data is ready. It ends up in Image.Plane's mBuffer, a ByteBuffer, and at that point we can process it however we need.
