Android P Graphics Display System (Part 3): The Android HWUI Drawing Flow

Android has many drawing APIs: Skia for 2D drawing, and OpenGL ES, Vulkan, and others for 3D. In the early Android View system, most View widgets were drawn in 2D mode, for example drawing a Bitmap or showing a button. As the system evolved and users demanded richer visual effects, that old 2D View system could no longer keep up, and its rendering was slow. So Android both rounded out its 3D API support and reworked how View widgets are rendered.

With this reworked rendering mechanism, Android introduced hardware acceleration: 2D drawing operations are converted into the corresponding 3D drawing operations, and this conversion process is called recording. When the content needs to be shown, it is rendered on the GPU through OpenGL ES. When the UI is first created, everything is recorded; after that, if only some widgets in the UI change, only those widgets are re-recorded. The recorded drawing operations are kept in a display list, the DisplayList, and when it is time to actually put something on screen, the drawing operations in the DisplayList are replayed. This is faster on two counts: rendering on the GPU beats Skia's software path, and with a DisplayList only the updated regions are re-recorded, reusing the previous frame's data as much as possible. That is where hardware acceleration comes from.
Words are cheap and practice comes first, so let's walk through a test example and see what hardware acceleration is really about.

An example of an app drawing with the hardware (GPU)

This is the stock Android test app for hardware-accelerated drawing:

* frameworks/base/tests/HwAccelerationTest/src/com/android/test/hwui/HardwareCanvasSurfaceViewActivity.java

    private static class RenderingThread extends Thread {
        private final SurfaceHolder mSurface;
        private volatile boolean mRunning = true;
        private int mWidth, mHeight;

        public RenderingThread(SurfaceHolder surface) {
            mSurface = surface;
        }

        void setSize(int width, int height) {
            mWidth = width;
            mHeight = height;
        }

        @Override
        public void run() {
            float x = 0.0f;
            float y = 0.0f;
            float speedX = 5.0f;
            float speedY = 3.0f;

            Paint paint = new Paint();
            paint.setColor(0xff00ff00);

            while (mRunning && !Thread.interrupted()) {
                final Canvas canvas = mSurface.lockHardwareCanvas();
                try {
                    canvas.drawColor(0x00000000, PorterDuff.Mode.CLEAR);
                    canvas.drawRect(x, y, x + 20.0f, y + 20.0f, paint);
                } finally {
                    mSurface.unlockCanvasAndPost(canvas);
                }

                ... ...

                try {
                    Thread.sleep(15);
                } catch (InterruptedException e) {
                    // Interrupted
                }
            }
        }

        void stopRendering() {
            interrupt();
            mRunning = false;
        }
    }

The app gets hold of a Surface, locks a hardware Canvas from it, and draws with that Canvas, so the drawing is done by the hardware (GPU). The loop repeats roughly every 15 milliseconds, drawing a small square that keeps moving around the screen, while the background is cleared with 0x00000000 in CLEAR mode (which shows as black on this opaque surface).

Hardware drawing: the Java-layer flow

From the code above, the key call is lockHardwareCanvas.

The code for lockHardwareCanvas is as follows:

* frameworks/base/core/java/android/view/SurfaceView.java

        @Override
        public Canvas lockHardwareCanvas() {
            return internalLockCanvas(null, true);
        }

        private Canvas internalLockCanvas(Rect dirty, boolean hardware) {
            mSurfaceLock.lock();

            if (DEBUG) Log.i(TAG, System.identityHashCode(this) + " " + "Locking canvas... stopped="
                    + mDrawingStopped + ", surfaceControl=" + mSurfaceControl);

            Canvas c = null;
            if (!mDrawingStopped && mSurfaceControl != null) {
                try {
                    if (hardware) {
                        c = mSurface.lockHardwareCanvas();
                    } else {
                        c = mSurface.lockCanvas(dirty);
                    }
                } catch (Exception e) {
                    Log.e(LOG_TAG, "Exception locking surface", e);
                }
            }

            if (DEBUG) Log.i(TAG, System.identityHashCode(this) + " " + "Returned canvas: " + c);
            if (c != null) {
                mLastLockTime = SystemClock.uptimeMillis();
                return c;
            }

            ... ...

            return null;
        }

The Canvas here is requested through mSurface.

* frameworks/base/core/java/android/view/Surface.java

    public Canvas lockHardwareCanvas() {
        synchronized (mLock) {
            checkNotReleasedLocked();
            if (mHwuiContext == null) {
                mHwuiContext = new HwuiContext();
            }
            return mHwuiContext.lockCanvas(
                    nativeGetWidth(mNativeObject),
                    nativeGetHeight(mNativeObject));
        }
    }

Surface wraps an HwuiContext, whose constructor is as follows:

        HwuiContext() {
            mRenderNode = RenderNode.create("HwuiCanvas", null);
            mRenderNode.setClipToBounds(false);
            mHwuiRenderer = nHwuiCreate(mRenderNode.mNativeRenderNode, mNativeObject);
        }

In HwuiContext's constructor, a RenderNode is created along with an HwuiRenderer; nHwuiCreate creates the native renderer.

This HwuiContext is what talks to HWUI.

HwuiContext's lockCanvas is implemented as follows:

        Canvas lockCanvas(int width, int height) {
            if (mCanvas != null) {
                throw new IllegalStateException("Surface was already locked!");
            }
            mCanvas = mRenderNode.start(width, height);
            return mCanvas;
        }

RenderNode's start method:

    public DisplayListCanvas start(int width, int height) {
        return DisplayListCanvas.obtain(this, width, height);
    }

    static DisplayListCanvas obtain(@NonNull RenderNode node, int width, int height) {
        if (node == null) throw new IllegalArgumentException("node cannot be null");
        DisplayListCanvas canvas = sPool.acquire();
        if (canvas == null) {
            canvas = new DisplayListCanvas(node, width, height);
        } else {
            nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
                    width, height);
        }
        canvas.mNode = node;
        canvas.mWidth = width;
        canvas.mHeight = height;
        return canvas;
    }

When RenderNode.start is called, a DisplayListCanvas is created; it is the Canvas that records into a display list. If no pooled instance is available, constructing the DisplayListCanvas calls nCreateDisplayListCanvas to create the native DisplayListCanvas.

    private DisplayListCanvas(@NonNull RenderNode node, int width, int height) {
        super(nCreateDisplayListCanvas(node.mNativeRenderNode, width, height));
        mDensity = 0; // disable bitmap density scaling
    }

The DisplayListCanvas and RecordingCanvas constructors are both fairly simple, but take note of the Canvas constructor:

    public Canvas(long nativeCanvas) {
        if (nativeCanvas == 0) {
            throw new IllegalStateException();
        }
        mNativeCanvasWrapper = nativeCanvas;
        mFinalizer = NoImagePreloadHolder.sRegistry.registerNativeAllocation(
                this, mNativeCanvasWrapper);
        mDensity = Bitmap.getDefaultDensity();
    }

The mNativeCanvasWrapper here is the native Canvas that was created by nCreateDisplayListCanvas. From then on, JNI uses mNativeCanvasWrapper to locate the corresponding native Canvas.
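This is the common JNI "long handle" pattern: the Java object keeps the native pointer in a long field, and each JNI entry point casts it back before use. Here is a minimal, hypothetical sketch of the idea (the class and function names below are illustrative, not hwui's real ones):

#include <cstdint>

class NativeCanvas {
public:
    void drawColor(int color) { /* record the op ... */ }
};

// What a create call conceptually returns to Java (kept in mNativeCanvasWrapper):
int64_t createNativeCanvas() {
    return reinterpret_cast<int64_t>(new NativeCanvas());
}

// What a drawing entry point conceptually does with the handle it is given:
void drawColorViaHandle(int64_t canvasHandle, int color) {
    reinterpret_cast<NativeCanvas*>(canvasHandle)->drawColor(color);
}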

Let's first look at how these classes relate to one another:
(Figure: HwuiContext and related classes)

Among them, RenderNode, DisplayListCanvas, and HwuiRenderer are the key pieces of hardware drawing.

Back to our test code: we issue two drawing operations here:

  • drawColor
  • drawRect

drawColor is implemented in RecordingCanvas, the parent class of DisplayListCanvas:

    public final void drawColor(@ColorInt int color, @NonNull PorterDuff.Mode mode) {
        nDrawColor(mNativeCanvasWrapper, color, mode.nativeInt);
    }

It calls the native nDrawColor method.

drawRect is likewise implemented in RecordingCanvas, the parent class of DisplayListCanvas:

    @Override
    public final void drawRect(float left, float top, float right, float bottom,
            @NonNull Paint paint) {
        nDrawRect(mNativeCanvasWrapper, left, top, right, bottom, paint.getNativeInstance());
    }

It calls the native nDrawRect method.

The native flow

Creating the native Canvas

The JNI implementation for DisplayListCanvas is as follows:

* frameworks/base/core/jni/android_view_DisplayListCanvas.cpp

const char* const kClassPathName = "android/view/DisplayListCanvas";

static JNINativeMethod gMethods[] = {

    // ------------ @FastNative ------------------

    { "nCallDrawGLFunction", "(JJLjava/lang/Runnable;)V",
            (void*) android_view_DisplayListCanvas_callDrawGLFunction },

    // ------------ @CriticalNative --------------
    { "nCreateDisplayListCanvas", "(JII)J",     (void*) android_view_DisplayListCanvas_createDisplayListCanvas },
    { "nResetDisplayListCanvas",  "(JJII)V",    (void*) android_view_DisplayListCanvas_resetDisplayListCanvas },
    { "nGetMaximumTextureWidth",  "()I",        (void*) android_view_DisplayListCanvas_getMaxTextureWidth },
    { "nGetMaximumTextureHeight", "()I",        (void*) android_view_DisplayListCanvas_getMaxTextureHeight },
    { "nInsertReorderBarrier",    "(JZ)V",      (void*) android_view_DisplayListCanvas_insertReorderBarrier },
    { "nFinishRecording",         "(J)J",       (void*) android_view_DisplayListCanvas_finishRecording },
    { "nDrawRenderNode",          "(JJ)V",      (void*) android_view_DisplayListCanvas_drawRenderNode },
    { "nDrawLayer",               "(JJ)V",      (void*) android_view_DisplayListCanvas_drawLayer },
    { "nDrawCircle",              "(JJJJJ)V",   (void*) android_view_DisplayListCanvas_drawCircleProps },
    { "nDrawRoundRect",           "(JJJJJJJJ)V",(void*) android_view_DisplayListCanvas_drawRoundRectProps },
};

nCreateDisplayListCanvas is implemented by android_view_DisplayListCanvas_createDisplayListCanvas:

static jlong android_view_DisplayListCanvas_createDisplayListCanvas(jlong renderNodePtr,
        jint width, jint height) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    return reinterpret_cast<jlong>(Canvas::create_recording_canvas(width, height, renderNode));
}

Note the renderNodePtr here: it is the native-layer RenderNode object (its address).

Canvas::create_recording_canvas looks like this:

Canvas* Canvas::create_recording_canvas(int width, int height, uirenderer::RenderNode* renderNode) {
    if (uirenderer::Properties::isSkiaEnabled()) {
        return new uirenderer::skiapipeline::SkiaRecordingCanvas(renderNode, width, height);
    }
    return new uirenderer::RecordingCanvas(width, height);
}

Here isSkiaEnabled is not enabled, so what gets created is the native RecordingCanvas. Starting with Android 8.0, HWUI was restructured and the notion of a RenderPipeline was introduced. There are currently three pipeline types, each corresponding to a different renderer.

enum class RenderPipelineType {
	OpenGL = 0,
	SkiaGL,
	SkiaVulkan,
	NotInitialized = 128
};

The default is still the OpenGL type.

The native RecordingCanvas:

* frameworks/base/libs/hwui/RecordingCanvas.cpp

RecordingCanvas::RecordingCanvas(size_t width, size_t height)
        : mState(*this), mResourceCache(ResourceCache::getInstance()) {
    resetRecording(width, height);
}

When the RecordingCanvas is created, its CanvasState and ResourceCache are created with it. CanvasState holds the Canvas state: it manages the stack of Snapshots and implements the Renderer interfaces for matrix transforms, save/restore, clipping, and so on. ResourceCache caches resources such as nine-patches.
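The save/restore behavior of that snapshot stack can be sketched as follows. This is a simplified, hypothetical SimpleCanvasState; the real Snapshot also carries the full transform matrix, clip region, flags, and more. save() pushes a copy of the current state, and restore() pops back, so matrix and clip changes made after a save disappear after the matching restore.

#include <vector>

struct Snapshot {
    float translateX = 0, translateY = 0;           // stand-in for the full transform
    int clipL = 0, clipT = 0, clipR = 0, clipB = 0; // stand-in for the clip
};

class SimpleCanvasState {
public:
    SimpleCanvasState() { mStack.push_back(Snapshot{}); }  // base snapshot
    // save() copies the current snapshot; later changes only affect the copy.
    int save() {
        mStack.push_back(mStack.back());
        return static_cast<int>(mStack.size()) - 1;
    }
    // restore() pops back to the previous snapshot, undoing matrix/clip changes.
    void restore() { if (mStack.size() > 1) mStack.pop_back(); }
    void restoreToCount(int count) {
        while (static_cast<int>(mStack.size()) > count && mStack.size() > 1) mStack.pop_back();
    }
    Snapshot& current() { return mStack.back(); }
private:
    std::vector<Snapshot> mStack;
};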

resetRecording then does a fair amount of initialization as well.

void RecordingCanvas::resetRecording(int width, int height, RenderNode* node) {
    LOG_ALWAYS_FATAL_IF(mDisplayList, "prepareDirty called a second time during a recording!");
    mDisplayList = new DisplayList();

    mState.initializeRecordingSaveStack(width, height);

    mDeferredBarrierType = DeferredBarrierType::InOrder;
}
  • It creates the display list mDisplayList. This is important, and we will come back to it; it is what stores the recorded drawing commands.
  • It initializes the CanvasState.

With that, the native Canvas is fully created.

Recording the draw operations

Our test code issues exactly two drawing operations, and we will use them to explain how draw operations are recorded.
nDrawColor and nDrawRect

* frameworks/base/core/jni/android_graphics_Canvas.cpp

static const JNINativeMethod gDrawMethods[] = {
    {"nDrawColor","(JII)V", (void*) CanvasJNI::drawColor},
    {"nDrawPaint","(JJ)V", (void*) CanvasJNI::drawPaint},
    {"nDrawPoint", "(JFFJ)V", (void*) CanvasJNI::drawPoint},
    {"nDrawPoints", "(J[FIIJ)V", (void*) CanvasJNI::drawPoints},
    {"nDrawLine", "(JFFFFJ)V", (void*) CanvasJNI::drawLine},
    {"nDrawLines", "(J[FIIJ)V", (void*) CanvasJNI::drawLines},
    {"nDrawRect","(JFFFFJ)V", (void*) CanvasJNI::drawRect},

The drawColor function:

static void drawColor(JNIEnv* env, jobject, jlong canvasHandle, jint color, jint modeHandle) {
    SkBlendMode mode = static_cast<SkBlendMode>(modeHandle);
    get_canvas(canvasHandle)->drawColor(color, mode);
}

canvasHandle is the handle of the native RecordingCanvas, so get_canvas returns that RecordingCanvas.

RecordingCanvas::drawColor looks like this:

* frameworks/base/libs/hwui/RecordingCanvas.cpp

void RecordingCanvas::drawColor(int color, SkBlendMode mode) {
    addOp(alloc().create_trivial<ColorOp>(getRecordedClip(), color, mode));
}
  • alloc() returns the DisplayList's allocator
  • create_trivial is a template function:
    template <class T, typename... Params>
    T* create_trivial(Params&&... params) {
        static_assert(std::is_trivially_destructible<T>::value,
                      "Error, called create_trivial on a non-trivial type");
        return new (allocImpl(sizeof(T))) T(std::forward<Params>(params)...);
    }

The type T here is ColorOp and params is (getRecordedClip(), color, mode); its job is to construct a ColorOp.

  • allocImpl allocates the memory (a minimal sketch of this allocation pattern follows below)
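To make the allocation trick concrete, here is a minimal sketch of the idea behind create_trivial: raw bytes are carved out of a bump (linear) allocator and the op is constructed in place with placement new, so a whole DisplayList's ops can be thrown away in one go. The BumpAllocator below is an illustration only, not hwui's actual LinearAllocator, and bounds checking is omitted.

#include <cstddef>
#include <new>
#include <utility>

class BumpAllocator {
public:
    // Hand out the next `size` bytes from a fixed buffer (no per-object free).
    void* allocImpl(size_t size) {
        void* p = mBuffer + mOffset;
        mOffset += (size + 7) & ~size_t(7);  // keep 8-byte alignment
        return p;
    }

    // Same shape as hwui's create_trivial: placement-new a T inside allocator memory.
    template <class T, typename... Params>
    T* create_trivial(Params&&... params) {
        return new (allocImpl(sizeof(T))) T(std::forward<Params>(params)...);
    }

private:
    alignas(8) char mBuffer[4096];
    size_t mOffset = 0;
};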

ColorOp is defined in the header:

* frameworks/base/libs/hwui/RecordedOp.h

struct ColorOp : RecordedOp {
    // Note: unbounded op that will fillclip, so no bounds/matrix needed
    ColorOp(const ClipBase* localClip, int color, SkBlendMode mode)
            : RecordedOp(RecordedOpId::ColorOp, Rect(), Matrix4::identity(), localClip, nullptr)
            , color(color)
            , mode(mode) {}
    const int color;
    const SkBlendMode mode;
};

RecordedOp.h defines all of the drawing operations.

For example, the op corresponding to nDrawRect is RectOp:

void RecordingCanvas::drawRect(float left, float top, float right, float bottom,
                               const SkPaint& paint) {
    if (CC_UNLIKELY(paint.nothingToDraw())) return;

    addOp(alloc().create_trivial<RectOp>(Rect(left, top, right, bottom),
                                         *(mState.currentSnapshot()->transform), getRecordedClip(),
                                         refPaint(&paint)));
}

struct RectOp : RecordedOp {
    RectOp(BASE_PARAMS) : SUPER(RectOp) {}
};

All drawing operations inherit from RecordedOp.

RecordedOp is defined as follows:

struct RecordedOp {
    /* ID from RecordedOpId - generally used for jumping into function tables */
    const int opId;

    /* bounds in *local* space, without accounting for DisplayList transformation, or stroke */
    const Rect unmappedBounds;

    /* transform in recording space (vs DisplayList origin) */
    const Matrix4 localMatrix;

    /* clip in recording space - nullptr if not clipped */
    const ClipBase* localClip;

    /* optional paint, stored in base object to simplify merging logic */
    const SkPaint* paint;

protected:
    RecordedOp(unsigned int opId, BASE_PARAMS)
            : opId(opId)
            , unmappedBounds(unmappedBounds)
            , localMatrix(localMatrix)
            , localClip(localClip)
            , paint(paint) {}
};
  • opId: the ID from RecordedOpId, used to jump to the corresponding handler function
  • unmappedBounds: the bounds of the drawing, in local space
  • localMatrix: the transform in recording space
  • localClip: the clip in recording space
  • paint: the paint

Once a drawing op has been created, addOp adds it to the DisplayList:

int RecordingCanvas::addOp(RecordedOp* op) {
    // skip op with empty clip
    if (op->localClip && op->localClip->rect.isEmpty()) {
        // NOTE: this rejection happens after op construction/content ref-ing, so content ref'd
        // and held by renderthread isn't affected by clip rejection.
        // Could rewind alloc here if desired, but callers would have to not touch op afterwards.
        return -1;
    }

    int insertIndex = mDisplayList->ops.size();
    mDisplayList->ops.push_back(op);
    if (mDeferredBarrierType != DeferredBarrierType::None) {
        // op is first in new chunk
        mDisplayList->chunks.emplace_back();
        DisplayList::Chunk& newChunk = mDisplayList->chunks.back();
        newChunk.beginOpIndex = insertIndex;
        newChunk.endOpIndex = insertIndex + 1;
        newChunk.reorderChildren = (mDeferredBarrierType == DeferredBarrierType::OutOfOrder);
        newChunk.reorderClip = mDeferredBarrierClip;

        int nextChildIndex = mDisplayList->children.size();
        newChunk.beginChildIndex = newChunk.endChildIndex = nextChildIndex;
        mDeferredBarrierType = DeferredBarrierType::None;
    } else {
        // standard case - append to existing chunk
        mDisplayList->chunks.back().endOpIndex = insertIndex + 1;
    }
    return insertIndex;
}

Admittedly this part is a bit involved, but it is also quite clever.

  • All drawing operations (let's call them Ops) are stored in ops. Think of ops as a company and each Op as an employee: every Op gets a sequence number, insertIndex, assigned in insertion order, much like an employee ID.
  • When there is no chunk yet, mDeferredBarrierType is DeferredBarrierType::InOrder, so a new Chunk is added. Unless a new barrier is inserted (insertReorderBarrier), all subsequently added Ops stay in the same Chunk. A Chunk is like a department in the company: it declares which range of employee IDs belongs to it. beginOpIndex is the starting index and endOpIndex the ending index; everything in between belongs to the same Chunk, and every newly added Op bumps endOpIndex by one.
  • How should we understand children? Following the analogy, they are the teams inside a department: the entries between beginChildIndex and endChildIndex belong to the same chunk's children.

In fact, these Ops, chunks, and children are an abstraction of the Android View system: the Chunk corresponds to the root view, children correspond to ViewGroups, and the Ops correspond to drawing a color, a rect, and so on. Pretty neat. A minimal sketch of the resulting structure is shown below.
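The sketch reduces DisplayList to the fields discussed above (the types are stand-ins, not the real hwui definitions): ops holds every recorded op in insertion order, and each Chunk only remembers which half-open range of ops (and of children) belongs to it.

#include <cstddef>
#include <vector>

struct RecordedOp;    // one recorded drawing command
struct RenderNodeOp;  // a recorded reference to a child RenderNode

struct Chunk {
    size_t beginOpIndex, endOpIndex;        // ops[beginOpIndex, endOpIndex) belong to this chunk
    size_t beginChildIndex, endChildIndex;  // children[...) belonging to this chunk
    bool reorderChildren;                   // set after an out-of-order barrier
};

struct MiniDisplayList {
    std::vector<RecordedOp*> ops;         // the "employees"
    std::vector<RenderNodeOp*> children;  // the "teams"
    std::vector<Chunk> chunks;            // the "departments"
};

// Replaying is then just walking chunk by chunk, op by op:
// for (const Chunk& c : list.chunks)
//     for (size_t i = c.beginOpIndex; i < c.endOpIndex; ++i) handle(*list.ops[i]);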

Here is the relationship between the DisplayList and its Ops:
(Figure: the DisplayList and its Ops)

Once recording is done, all drawing operations and their parameters live in the DisplayList. So when do they actually get displayed? Let's keep going.

Creating the RenderNode

A RenderNode records a batch of drawing operations that can be stored and later applied when drawing. On the Java side, the RenderNode corresponds to the ViewGroup we talked about earlier: just as there is a root View, there is also a root RenderNode.

Let's first look at how a RenderNode is created:

    public static RenderNode create(String name, @Nullable View owningView) {
        return new RenderNode(name, owningView);
    }

    private RenderNode(String name, View owningView) {
        mNativeRenderNode = nCreate(name);
        NoImagePreloadHolder.sRegistry.registerNativeAllocation(this, mNativeRenderNode);
        mOwningView = owningView;
    }

nCreate is a JNI method.

The JNI implementation for RenderNode is as follows:

const char* const kClassPathName = "android/view/RenderNode";

static const JNINativeMethod gMethods[] = {
// ----------------------------------------------------------------------------
// Regular JNI
// ----------------------------------------------------------------------------
    { "nCreate",               "(Ljava/lang/String;)J", (void*) android_view_RenderNode_create },
    { "nGetNativeFinalizer",   "()J",    (void*) android_view_RenderNode_getNativeFinalizer },
    { "nOutput",               "(J)V",    (void*) android_view_RenderNode_output },
    { "nGetDebugSize",         "(J)I",    (void*) android_view_RenderNode_getDebugSize },
    { "nAddAnimator",              "(JJ)V", (void*) android_view_RenderNode_addAnimator },
    { "nEndAllAnimators",          "(J)V", (void*) android_view_RenderNode_endAllAnimators },
    { "nRequestPositionUpdates",   "(JLandroid/view/SurfaceView;)V", (void*) android_view_RenderNode_requestPositionUpdates },
    { "nSetDisplayList",       "(JJ)V",   (void*) android_view_RenderNode_setDisplayList },

nCreate is implemented by android_view_RenderNode_create:

static jlong android_view_RenderNode_create(JNIEnv* env, jobject, jstring name) {
    RenderNode* renderNode = new RenderNode();
    renderNode->incStrong(0);
    if (name != NULL) {
        const char* textArray = env->GetStringUTFChars(name, NULL);
        renderNode->setName(textArray);
        env->ReleaseStringUTFChars(name, textArray);
    }
    return reinterpret_cast<jlong>(renderNode);
}

The JNI layer simply creates a native RenderNode:

* frameworks/base/libs/hwui/RenderNode.cpp

RenderNode::RenderNode()
        : mDirtyPropertyFields(0)
        , mNeedsDisplayListSync(false)
        , mDisplayList(nullptr)
        , mStagingDisplayList(nullptr)
        , mAnimatorManager(*this)
        , mParentCount(0) {}

The newly created RenderNode is handed to the DisplayListCanvas.

HwuiContext and HwuiRenderer

nHwuiCreate creates the HwuiRenderer:

* frameworks/base/core/jni/android_view_Surface.cpp
static const JNINativeMethod gSurfaceMethods[] = {
    ... ...

    // HWUI context
    {"nHwuiCreate", "(JJ)J", (void*) hwui::create },
    {"nHwuiSetSurface", "(JJ)V", (void*) hwui::setSurface },
    {"nHwuiDraw", "(J)V", (void*) hwui::draw },
    {"nHwuiDestroy", "(J)V", (void*) hwui::destroy },
};

nHwuiCreate is implemented as follows:

static jlong create(JNIEnv* env, jclass clazz, jlong rootNodePtr, jlong surfacePtr) {
    RenderNode* rootNode = reinterpret_cast<RenderNode*>(rootNodePtr);
    sp<Surface> surface(reinterpret_cast<Surface*>(surfacePtr));
    ContextFactory factory;
    RenderProxy* proxy = new RenderProxy(false, rootNode, &factory);
    proxy->loadSystemProperties();
    proxy->setSwapBehavior(SwapBehavior::kSwap_discardBuffer);
    proxy->initialize(surface);
    // Shadows can't be used via this interface, so just set the light source
    // to all 0s.
    proxy->setup(0, 0, 0);
    proxy->setLightCenter((Vector3){0, 0, 0});
    return (jlong) proxy;
}

A RenderProxy is created here; what nHwuiCreate returns is a RenderProxy instance.

RenderProxy's constructor is as follows:

* frameworks/base/libs/hwui/renderthread/RenderProxy.cpp

RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode,
                         IContextFactory* contextFactory)
        : mRenderThread(RenderThread::getInstance()), mContext(nullptr) {
    mContext = mRenderThread.queue().runSync([&]() -> CanvasContext* {
        return CanvasContext::create(mRenderThread, translucent, rootRenderNode, contextFactory);
    });
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}

Quite a few things come into being here:

  • RenderProxy is a proxy and is strictly single-threaded: all of its methods must be called from its owning thread.
  • RenderThread is the render thread. It is a singleton, meaning there is only one per process, and all drawing must be done on it; many operations from the app side are posted to the RenderThread as render tasks.
  • CanvasContext is the rendering context. Because OpenGL is single-threaded, the drawing commands we hand to the GPU are wrapped in their own context; it corresponds to the HwuiRenderer in the Java layer.
  • DrawFrameTask is a rather special, reusable RenderTask used to draw a frame.

Let's first get a feel for the HWUI thread.

RenderThread

hwui uses a lot of modern C++ features, which makes the code harder to follow.

* frameworks/base/libs/hwui/renderthread/RenderThread.h

class RenderThread : private ThreadBase {
    PREVENT_COPY_AND_ASSIGN(RenderThread);
    
  • PREVENT_COPY_AND_ASSIGN removes the copy constructor and the assignment operator (a minimal sketch of the macro follows below)
  • It inherits from ThreadBase, and ThreadBase inherits from Android's basic Thread class
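As an aside, a macro like PREVENT_COPY_AND_ASSIGN typically expands to deleting the copy constructor and copy assignment operator. A minimal sketch of that idea (not hwui's exact macro) looks like this:

#define PREVENT_COPY_AND_ASSIGN(Type)     \
    Type(const Type&) = delete;           \
    Type& operator=(const Type&) = delete

class NonCopyable {
    PREVENT_COPY_AND_ASSIGN(NonCopyable);
public:
    NonCopyable() = default;
};

// NonCopyable a;
// NonCopyable b = a;  // error: the copy constructor is deleted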

The RenderThread thread is started right in the RenderThread constructor.

RenderThread::RenderThread()
        : ThreadBase()
        , mDisplayEventReceiver(nullptr)
        , mVsyncRequested(false)
        , mFrameCallbackTaskPending(false)
        , mRenderState(nullptr)
        , mEglManager(nullptr)
        , mVkManager(nullptr) {
    Properties::load();
    start("RenderThread");
}

ThreadBase's constructor is worth a look:

    ThreadBase()
            : Thread(false)
            , mLooper(new Looper(false))
            , mQueue([this]() { mLooper->wake(); }, mLock) {}

The way mQueue is initialized uses modern C++: it constructs a queue whose first argument is a function. The function body is:

{ mLooper->wake(); }

When this function executes, it wakes mLooper and the thread starts working.

WorkQueue's constructor is as follows:

    WorkQueue(std::function<void()>&& wakeFunc, std::mutex& lock)
            : mWakeFunc(move(wakeFunc)), mLock(lock) {}

Now let's see how the RenderThread does its work. Once the thread is up, RenderThread's threadLoop runs.

threadLoop looks like this:

bool RenderThread::threadLoop() {
    setpriority(PRIO_PROCESS, 0, PRIORITY_DISPLAY);
    if (gOnStartHook) {
        gOnStartHook();
    }
    initThreadLocals();

    while (true) {
        waitForWork();
        processQueue();

        if (mPendingRegistrationFrameCallbacks.size() && !mFrameCallbackTaskPending) {
            drainDisplayEventQueue();
            mFrameCallbacks.insert(mPendingRegistrationFrameCallbacks.begin(),
                                   mPendingRegistrationFrameCallbacks.end());
            mPendingRegistrationFrameCallbacks.clear();
            requestVsync();
        }

        if (!mFrameCallbackTaskPending && !mVsyncRequested && mFrameCallbacks.size()) {
            // TODO: Clean this up. This is working around an issue where a combination
            // of bad timing and slow drawing can result in dropping a stale vsync
            // on the floor (correct!) but fails to schedule to listen for the
            // next vsync (oops), so none of the callbacks are run.
            requestVsync();
        }
    }

    return false;
}
  • initThreadLocals initializes the thread-local state.
  • The while loop in threadLoop keeps processing requests; when there is nothing to do, it waits in waitForWork.

Earlier, after the RenderProxy was created, a few more parameters were set:

    RenderProxy* proxy = new RenderProxy(false, rootNode, &factory);
    proxy->loadSystemProperties();
    proxy->setSwapBehavior(SwapBehavior::kSwap_discardBuffer);
    proxy->initialize(surface);
    // Shadows can't be used via this interface, so just set the light source
    // to all 0s.
    proxy->setup(0, 0, 0);
    proxy->setLightCenter((Vector3){0, 0, 0});

Let's take initialize as an example.

void RenderProxy::initialize(const sp<Surface>& surface) {
    mRenderThread.queue().post(
            [ this, surf = surface ]() mutable { mContext->setSurface(std::move(surf)); });
}

initialize posts something onto mRenderThread's queue. At this point we do not yet know what that something is; let's find out below.

post is a template function:

* frameworks/base/libs/hwui/thread/WorkQueue.h

    template <class F>
    void post(F&& func) {
        postAt(0, std::forward<F>(func));
    }
    
    template <class F>
    void postAt(nsecs_t time, F&& func) {
        enqueue(WorkItem{time, std::function<void()>(std::forward<F>(func))});
    }

When post is called, the incoming callable is wrapped into a WorkItem and enqueued onto the message queue mWorkQueue:

    void enqueue(WorkItem&& item) {
        bool needsWakeup;
        {
            std::unique_lock _lock{mLock};
            auto insertAt = std::find_if(
                    std::begin(mWorkQueue), std::end(mWorkQueue),
                    [time = item.runAt](WorkItem & item) { return item.runAt > time; });
            needsWakeup = std::begin(mWorkQueue) == insertAt;
            mWorkQueue.emplace(insertAt, std::move(item));
        }
        if (needsWakeup) {
            mWakeFunc();
        }
    }

If a wake-up is needed, the mWakeFunc function is invoked to wake mLooper. Remember, mWakeFunc is the anonymous function passed down when ThreadBase constructed the WorkQueue.

WorkItem is defined as follows:

    struct WorkItem {
        WorkItem() = delete;
        WorkItem(const WorkItem& other) = delete;
        WorkItem& operator=(const WorkItem& other) = delete;
        WorkItem(WorkItem&& other) = default;
        WorkItem& operator=(WorkItem&& other) = default;

        WorkItem(nsecs_t runAt, std::function<void()>&& work)
                : runAt(runAt), work(std::move(work)) {}

        nsecs_t runAt;
        std::function<void()> work;
    };

For our initialize call, is the work inside the WorkItem simply mContext->setSurface? It certainly is.

Now let's see how the RenderThread handles new work once it arrives.

It first drains the queue with processQueue:

void processQueue() { mQueue.process(); }

which ultimately comes back to the WorkQueue:

    void process() {
        auto now = clock::now();
        std::vector<WorkItem> toProcess;
        {
            std::unique_lock _lock{mLock};
            if (mWorkQueue.empty()) return;
            toProcess = std::move(mWorkQueue);
            auto moveBack = find_if(std::begin(toProcess), std::end(toProcess),
                                    [&now](WorkItem& item) { return item.runAt > now; });
            if (moveBack != std::end(toProcess)) {
                mWorkQueue.reserve(std::distance(moveBack, std::end(toProcess)) + 5);
                std::move(moveBack, std::end(toProcess), std::back_inserter(mWorkQueue));
                toProcess.erase(moveBack, std::end(toProcess));
            }
        }
        for (auto& item : toProcess) {
            item.work();
        }
    }

This picks out the WorkItems in mWorkQueue that are due, moves them into toProcess, and then calls each item's work function.
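To see the whole post/wake/process loop in one place, here is a minimal sketch of the pattern under simplified, hypothetical names (MiniWorkQueue), using a condition variable instead of a Looper and leaving out the timed postAt items:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <vector>

class MiniWorkQueue {
public:
    // Producer side: enqueue a work item and wake the worker (plays the role of
    // mWakeFunc -> mLooper->wake() in the real WorkQueue).
    void post(std::function<void()> work) {
        {
            std::lock_guard<std::mutex> lock(mLock);
            mItems.push_back(std::move(work));
        }
        mWake.notify_one();
    }

    // Worker side (the render thread): wait until there is work, then run it all.
    void processOnce() {
        std::vector<std::function<void()>> toRun;
        {
            std::unique_lock<std::mutex> lock(mLock);
            mWake.wait(lock, [this] { return !mItems.empty(); });
            toRun.swap(mItems);
        }
        for (auto& work : toRun) work();
    }

private:
    std::mutex mLock;
    std::condition_variable mWake;
    std::vector<std::function<void()>> mItems;
};

RenderProxy's runSync (used in its constructor earlier) follows the same idea, except that the calling thread additionally blocks on a signal until the posted item has run and produced its result.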

For our initialize call, this is exactly mContext->setSurface, that is, CanvasContext's setSurface method:

void CanvasContext::setSurface(sp<Surface>&& surface) {
    ATRACE_CALL();

    mNativeSurface = std::move(surface);

    ColorMode colorMode = mWideColorGamut ? ColorMode::WideColorGamut : ColorMode::Srgb;
    bool hasSurface = mRenderPipeline->setSurface(mNativeSurface.get(), mSwapBehavior, colorMode);

    mFrameNumber = -1;

    if (hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
    } else {
        mRenderThread.removeFrameCallback(this);
    }
}

Neat, right?

Many of RenderProxy's operations are posted over to the CanvasContext in this way and run on the RenderThread.

Now let's look at a special task, the DrawFrameTask.

The DrawFrameTask is created when the RenderProxy is created:

* frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp

DrawFrameTask::DrawFrameTask()
        : mRenderThread(nullptr)
        , mContext(nullptr)
        , mContentDrawBounds(0, 0, 0, 0)
        , mSyncResult(SyncResult::OK) {}

DrawFrameTask::~DrawFrameTask() {}

void DrawFrameTask::setContext(RenderThread* thread, CanvasContext* context,
                               RenderNode* targetNode) {
    mRenderThread = thread;
    mContext = context;
    mTargetNode = targetNode;
}

So far we have the DisplayList and we have the RenderThread, but where does the drawing happen? Let's cut straight to the answer and skip the surrounding flow, looking only at the hwui part of the logic.

When it is time to display, the upper layer calls syncAndDrawFrame:

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}
int DrawFrameTask::drawFrame() {
    LOG_ALWAYS_FATAL_IF(!mContext, "Cannot drawFrame with no CanvasContext!");

    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(CLOCK_MONOTONIC);
    postAndWait();

    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    mSignal.wait(mLock);
}

So here, drawFrame likewise posts a WorkItem onto the RenderThread's queue and it is executed on the RenderThread.

When the RenderThread processes its queue, what actually runs is the run function here:

void DrawFrameTask::run() {
    ATRACE_NAME("DrawFrame");

    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;
    }

    // Grab a copy of everything we need
    CanvasContext* context = mContext;

    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw();
    } else {
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }

    if (!canUnblockUiThread) {
        unblockUiThread();
    }
}
  • It first calls syncFrameState to synchronize the frame state.
  • It then draws through CanvasContext's draw method.

OK, the main flow has now reached CanvasContext, so let's take a look at it.

CanvasContext

The rendering context.

* frameworks/base/libs/hwui/renderthread/CanvasContext.cpp

CanvasContext* CanvasContext::create(RenderThread& thread, bool translucent,
                                     RenderNode* rootRenderNode, IContextFactory* contextFactory) {
    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::OpenGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<OpenGLPipeline>(thread));
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t)renderType);
            break;
    }
    return nullptr;
}

As mentioned earlier, there are several types of render pipeline, all described by IRenderPipeline. When the CanvasContext is created, the matching pipeline is created according to the configured type; their relationship is shown below:
(Figure: the HWUI render pipelines)

IRenderPipeline is the common interface. The default type is OpenGLPipeline, which uses the OpenGL implementation; the pipeline can be selected through the debug.hwui.renderer property. The corresponding logic is as follows:

* frameworks/base/libs/hwui/Properties.cpp
#define PROPERTY_RENDERER "debug.hwui.renderer"

RenderPipelineType Properties::getRenderPipelineType() {
    if (sRenderPipelineType != RenderPipelineType::NotInitialized) {
        return sRenderPipelineType;
    }
    char prop[PROPERTY_VALUE_MAX];
    property_get(PROPERTY_RENDERER, prop, "skiagl");
    if (!strcmp(prop, "skiagl")) {
        ALOGD("Skia GL Pipeline");
        sRenderPipelineType = RenderPipelineType::SkiaGL;
    } else if (!strcmp(prop, "skiavk")) {
        ALOGD("Skia Vulkan Pipeline");
        sRenderPipelineType = RenderPipelineType::SkiaVulkan;
    } else {  //"opengl"
        ALOGD("HWUI GL Pipeline");
        sRenderPipelineType = RenderPipelineType::OpenGL;
    }
    return sRenderPipelineType;
}

SkiaOpenGLPipeline and SkiaVulkanPipeline both use Skia for the Ops; in other words, the recording of Ops is done with Skia, and OpenGL or Vulkan only comes in for the final display.

Let's also look at CanvasContext's constructor:

CanvasContext::CanvasContext(RenderThread& thread, bool translucent, RenderNode* rootRenderNode,
                             IContextFactory* contextFactory,
                             std::unique_ptr<IRenderPipeline> renderPipeline)
        : mRenderThread(thread)
        , mOpaque(!translucent)
        , mAnimationContext(contextFactory->createAnimationContext(mRenderThread.timeLord()))
        , mJankTracker(&thread.globalProfileData(), thread.mainDisplayInfo())
        , mProfiler(mJankTracker.frames())
        , mContentDrawBounds(0, 0, 0, 0)
        , mRenderPipeline(std::move(renderPipeline)) {
    rootRenderNode->makeRoot();
    mRenderNodes.emplace_back(rootRenderNode);
    mRenderThread.renderState().registerCanvasContext(this);
    mProfiler.setDensity(mRenderThread.mainDisplayInfo().density);
}
  • contextFactory
    contextFactory is passed in when the RenderProxy is created in Surface's JNI code. It is mainly used to create the AnimationContext, which handles animations.
* frameworks/base/core/jni/android_view_Surface.cpp

class ContextFactory : public IContextFactory {
public:
    virtual AnimationContext* createAnimationContext(renderthread::TimeLord& clock) {
        return new AnimationContext(clock);
    }
};
  • rootRenderNode: this is the RenderNode we used earlier when recording the Ops. makeRoot marks it as the root RenderNode, and it becomes the first entry in mRenderNodes.

  • CanvasContext implements the IFrameCallback interface, so it can receive callbacks from the Choreographer and handle real-time animations.

Now back to DrawFrameTask's run. The first step is syncFrameState, which synchronizes the frame state:

bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    mRenderThread->timeLord().vsyncReceived(vsync);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();

    for (size_t i = 0; i < mLayers.size(); i++) {
        mLayers[i]->apply();
    }
    mLayers.clear();
    mContext->setContentDrawBounds(mContentDrawBounds);
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);

    // This is after the prepareTree so that any pending operations
    // (RenderNode tree state, prefetched layers, etc...) will be flushed.
    if (CC_UNLIKELY(!mContext->hasSurface() || !canDraw)) {
        if (!mContext->hasSurface()) {
            mSyncResult |= SyncResult::LostSurfaceRewardIfFound;
        } else {
            // If we have a surface but can't draw we must be stopped
            mSyncResult |= SyncResult::ContextIsStopped;
        }
        info.out.canDrawThisFrame = false;
    }

    if (info.out.hasAnimations) {
        if (info.out.requiresUiRedraw) {
            mSyncResult |= SyncResult::UIRedrawRequired;
        }
    }
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}
  • makeCurrent has existed since the early versions, when only the OpenGL pipeline existed and OpenGL was single-threaded; we first call makeCurrent to tell the GPU which context to work with (a minimal EGL sketch follows this list).
  • unpinImages: to speed things up, hwui caches all kinds of objects; the unpin here tells those caches to let go of what was pinned for the previous pass.
  • setContentDrawBounds sets the bounds of the content to draw.
  • prepareTree: as mentioned before, Android Views form a tree; before drawing, this prepares the drawing Ops of every node in the tree. This step is itself quite involved.
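For reference, what makeCurrent amounts to at the EGL level is roughly the following minimal sketch; display, surface, and context creation as well as error handling are omitted, and the helper name is illustrative:

#include <EGL/egl.h>

// Bind `context` to the calling thread, using `surface` for both draw and read.
// GL commands issued on this thread afterwards target this context and surface.
bool makeCurrentSketch(EGLDisplay display, EGLSurface surface, EGLContext context) {
    return eglMakeCurrent(display, surface, surface, context) == EGL_TRUE;
}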

Back in run: after syncFrameState, if the frame can be drawn, that is, there is something to update, the CanvasContext is simply asked to draw.

CanvasContext's draw is carried out inside the RenderPipeline, and the Ops are rendered through the BakedOpRenderer. The default is the OpenGLPipeline; at a glance the flow looks like this:
(Figure: DrawFrame sequence diagram)

There are really just two main phases: PrepareTree and Draw. The sequence diagram only marks them roughly. Next, let's see what each one does and how our UI actually gets drawn.

Preparing the node tree

We have been away from our test app's code for a while, so let's come back to it. At this point the RenderThread, DrawFrameTask, CanvasContext, and so on are all in place, and the drawing operations have been added to the DisplayList.

So how does the DisplayList get into the CanvasContext to be drawn?

Continuing with the test code, the next step is the Surface unlock-and-post:

mSurface.unlockCanvasAndPost(canvas);

The SurfaceHolder calls straight through to Surface's unlockCanvasAndPost:

        @Override
        public void unlockCanvasAndPost(Canvas canvas) {
            mSurface.unlockCanvasAndPost(canvas);
            mSurfaceLock.unlock();
        }

Since we are using the hardware canvas, the HwuiContext branch is taken:

    public void unlockCanvasAndPost(Canvas canvas) {
        synchronized (mLock) {
            checkNotReleasedLocked();

            if (mHwuiContext != null) {
                mHwuiContext.unlockAndPost(canvas);
            } else {
                unlockSwCanvasAndPost(canvas);
            }
        }
    }

HwuiContext's unlockAndPost is as follows:

        void unlockAndPost(Canvas canvas) {
            if (canvas != mCanvas) {
                throw new IllegalArgumentException("canvas object must be the same instance that "
                        + "was previously returned by lockCanvas");
            }
            mRenderNode.end(mCanvas);
            mCanvas = null;
            nHwuiDraw(mHwuiRenderer);
        }

When we called lockCanvas we went through mRenderNode.start; on unlock, mRenderNode.end is called.

When the node ends, the Canvas recording is finished first, and the recorded list is handed to the RenderNode:

    public void end(DisplayListCanvas canvas) {
        long displayList = canvas.finishRecording();
        nSetDisplayList(mNativeRenderNode, displayList);
        canvas.recycle();
    }

Remember this: the list the Canvas recorded is handed to the RenderNode. It matters.

For finishRecording, let's go straight to the final native implementation:

DisplayList* RecordingCanvas::finishRecording() {
    restoreToCount(1);
    mPaintMap.clear();
    mRegionMap.clear();
    mPathMap.clear();
    DisplayList* displayList = mDisplayList;
    mDisplayList = nullptr;
    mSkiaCanvasProxy.reset(nullptr);
    return displayList;
}

What it returns is exactly the mDisplayList we recorded earlier.

So where does the recorded DisplayList end up?

The JNI implementation of nSetDisplayList is as follows:

static void android_view_RenderNode_setDisplayList(JNIEnv* env,
        jobject clazz, jlong renderNodePtr, jlong displayListPtr) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    DisplayList* newData = reinterpret_cast<DisplayList*>(displayListPtr);
    renderNode->setStagingDisplayList(newData);
}

JNI then calls setStagingDisplayList, which hands it to the RenderNode's mStagingDisplayList:

void RenderNode::setStagingDisplayList(DisplayList* displayList) {
    mValid = (displayList != nullptr);
    mNeedsDisplayListSync = true;
    delete mStagingDisplayList;
    mStagingDisplayList = displayList;
}

With that, the recorded Ops have all been handed over to the RenderNode's mStagingDisplayList.

Now we can look at CanvasContext's prepareTree.

* frameworks/base/libs/hwui/renderthread/CanvasContext.cpp

void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
                                RenderNode* target) {
    mRenderThread.removeFrameCallback(this);

    ... ... // handle frame info

    info.damageAccumulator = &mDamageAccumulator;
    info.layerUpdateQueue = &mLayerUpdateQueue;

    mAnimationContext->startFrame(info.mode);
    mRenderPipeline->onPrepareTree();
    for (const sp<RenderNode>& node : mRenderNodes) {
        // only the primary (target) node uses MODE_FULL; the others are RT-only
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    mAnimationContext->runRemainingAnimations(info);
    GL_CHECKPOINT(MODERATE);

    freePrefetchedLayers();
    GL_CHECKPOINT(MODERATE);

    mIsDirty = true;

    // if the window no longer has a native Surface, drop this frame
    if (CC_UNLIKELY(!mNativeSurface.get())) {
        mCurrentFrameInfo->addFlag(FrameInfoFlags::SkippedFrame);
        info.out.canDrawThisFrame = false;
        return;
    }

    ... ...
}

First question: what is info, and where does it come from? It comes from the DrawFrameTask:

void DrawFrameTask::run() {
    ATRACE_NAME("DrawFrame");

    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;
    }

TreeInfo, as the name suggests, describes the view tree, that is, the RenderNode tree.

    TreeInfo(TraversalMode mode, renderthread::CanvasContext& canvasContext)
            : mode(mode), prepareTextures(mode == MODE_FULL), canvasContext(canvasContext) {}

Note that the mode here is TreeInfo::MODE_FULL. Only the primary (target) node uses FULL; the others are RT-only.

The context may hold several nodes, and each of them gets prepared:

* frameworks/base/libs/hwui/RenderNode.cpp

void RenderNode::prepareTree(TreeInfo& info) {
    ATRACE_CALL();
    LOG_ALWAYS_FATAL_IF(!info.damageAccumulator, "DamageAccumulator missing");
    MarkAndSweepRemoved observer(&info);

    // The OpenGL renderer reserves the stencil buffer for overdraw debugging.  Functors
    // will need to be drawn in a layer.
    bool functorsNeedLayer = Properties::debugOverdraw && !Properties::isSkiaEnabled();

    prepareTreeImpl(observer, info, functorsNeedLayer);
}

When a RenderNode is prepared, the TreeInfo is first wrapped in a MarkAndSweepRemoved observer, which marks nodes that may have been removed and deletes them. MarkAndSweepRemoved looks like this:

class MarkAndSweepRemoved : public TreeObserver {
    PREVENT_COPY_AND_ASSIGN(MarkAndSweepRemoved);

public:
    explicit MarkAndSweepRemoved(TreeInfo* info) : mTreeInfo(info) {}

    void onMaybeRemovedFromTree(RenderNode* node) override { mMarked.emplace_back(node); }

    ~MarkAndSweepRemoved() {
        for (auto& node : mMarked) {
            if (!node->hasParents()) {
                node->onRemovedFromTree(mTreeInfo);
            }
        }
    }

private:
    FatVector<sp<RenderNode>, 10> mMarked;
    TreeInfo* mTreeInfo;
};

Nodes that can be removed from the tree are added to mMarked; in the destructor, the marked nodes are then removed from the tree.

prepareTreeImpl is where the RenderNode actually does the preparation:

void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
    info.damageAccumulator->pushTransform(this);

    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingPropertiesChanges(info);
    }
    uint32_t animatorDirtyMask = 0;
    if (CC_LIKELY(info.runAnimations)) {
        animatorDirtyMask = mAnimatorManager.animate(info);
    }

    bool willHaveFunctor = false;
    if (info.mode == TreeInfo::MODE_FULL && mStagingDisplayList) {
        willHaveFunctor = mStagingDisplayList->hasFunctor();
    } else if (mDisplayList) {
        willHaveFunctor = mDisplayList->hasFunctor();
    }
    bool childFunctorsNeedLayer =
            mProperties.prepareForFunctorPresence(willHaveFunctor, functorsNeedLayer);

    if (CC_UNLIKELY(mPositionListener.get())) {
        mPositionListener->onPositionUpdated(*this, info);
    }

    prepareLayer(info, animatorDirtyMask);
    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingDisplayListChanges(observer, info);
    }

    if (mDisplayList) {
        info.out.hasFunctors |= mDisplayList->hasFunctor();
        bool isDirty = mDisplayList->prepareListAndChildren(
                observer, info, childFunctorsNeedLayer,
                [](RenderNode* child, TreeObserver& observer, TreeInfo& info,
                   bool functorsNeedLayer) {
                    child->prepareTreeImpl(observer, info, functorsNeedLayer);
                });
        if (isDirty) {
            damageSelf(info);
        }
    }
    pushLayerUpdate(info);

    info.damageAccumulator->popTransform();
}

damageAccumulator is passed in from the CanvasContext (it is a CanvasContext member) and accumulates damage: it marks which regions of the screen have been invalidated and need redrawing, and adding up the contributions from all RenderNodes gives the total.

Let's take a quick look at pushTransform:

void DamageAccumulator::pushCommon() {
    if (!mHead->next) {
        DirtyStack* nextFrame = mAllocator.create_trivial<DirtyStack>();
        nextFrame->next = nullptr;
        nextFrame->prev = mHead;
        mHead->next = nextFrame;
    }
    mHead = mHead->next;
    mHead->pendingDirty.setEmpty();
}

void DamageAccumulator::pushTransform(const RenderNode* transform) {
    pushCommon();
    mHead->type = TransformRenderNode;
    mHead->renderNode = transform;
}

In the damage accumulator, every element is described by a DirtyStack and comes in two flavors: TransformMatrix4 and TransformRenderNode. They are managed through the doubly linked list headed by mHead.
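The push/dirty/pop idea can be sketched as follows (greatly simplified, with hypothetical types; the real accumulator keeps full Matrix4 transforms and the DirtyStack node types above): each node pushes a frame, damage is recorded into that frame, and on pop the damage is mapped by the node's transform and unioned into the parent frame.

#include <algorithm>
#include <vector>

struct DirtyRect {
    float l = 0, t = 0, r = 0, b = 0;
    bool empty() const { return l >= r || t >= b; }
    void unionWith(const DirtyRect& o) {
        if (o.empty()) return;
        if (empty()) { *this = o; return; }
        l = std::min(l, o.l); t = std::min(t, o.t);
        r = std::max(r, o.r); b = std::max(b, o.b);
    }
};

struct DirtyFrame {
    float translateX = 0, translateY = 0;  // stand-in for the node's transform
    DirtyRect pendingDirty;
};

class MiniDamageAccumulator {
public:
    MiniDamageAccumulator() { mStack.push_back(DirtyFrame{}); }  // root frame
    void pushTransform(float tx, float ty) { mStack.push_back(DirtyFrame{tx, ty, {}}); }
    void dirty(const DirtyRect& r) { mStack.back().pendingDirty.unionWith(r); }
    void popTransform() {
        DirtyFrame f = mStack.back();
        mStack.pop_back();
        // Map the child's damage by its transform, then fold it into the parent.
        DirtyRect mapped{f.pendingDirty.l + f.translateX, f.pendingDirty.t + f.translateY,
                         f.pendingDirty.r + f.translateX, f.pendingDirty.b + f.translateY};
        mStack.back().pendingDirty.unionWith(mapped);
    }
    DirtyRect totalDamage() const { return mStack.front().pendingDirty; }
private:
    std::vector<DirtyFrame> mStack;
};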

pushStagingPropertiesChanges: the properties describe the RenderNode, which is to say the View, for example its size and position. There are two copies: the one currently in use and the pending mStagingProperties; syncProperties copies mStagingProperties over the active properties. A lot of state in hwui is synchronized in exactly this way.

pushStagingDisplayListChanges follows the same pattern as the properties, only here it is syncDisplayList: the Ops recorded earlier are handed over from mStagingDisplayList to mDisplayList. A minimal sketch of this staging/sync pattern follows.
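The sketch below illustrates the pattern under simplified, hypothetical names (MiniRenderNode): the UI thread writes into the staging slot, and during a MODE_FULL sync on the render thread the staging data is moved into the slot the renderer reads. The real code additionally frees the old list and handles per-frame bookkeeping.

struct DisplayList;  // the recorded ops, as built by RecordingCanvas

class MiniRenderNode {
public:
    // UI-thread side: what setStagingDisplayList does conceptually.
    void setStagingDisplayList(DisplayList* list) {
        mStagingDisplayList = list;
        mNeedsSync = true;
    }
    // Render-thread side: what pushStagingDisplayListChanges does conceptually
    // during prepareTree with TreeInfo::MODE_FULL.
    void syncDisplayList() {
        if (!mNeedsSync) return;
        mDisplayList = mStagingDisplayList;  // the renderer now sees the new recording
        mStagingDisplayList = nullptr;
        mNeedsSync = false;
    }
private:
    DisplayList* mStagingDisplayList = nullptr;  // written by the UI thread
    DisplayList* mDisplayList = nullptr;         // read by the render thread
    bool mNeedsSync = false;
};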

The Ops to be drawn all live in mDisplayList, and from here prepareTreeImpl is called recursively on every child RenderNode.

pushLayerUpdate adds the RenderNodes that need updating, together with their damage areas, to the TreeInfo's layerUpdateQueue.

The accumulator's popTransform is what makes this node's DirtyStack take effect.

That completes the prepare phase. It is a lot of code, but what we mainly care about is the data flow: the DisplayList data has now been pushed into the context's mLayerUpdateQueue.

Drawing

Once the CanvasContext has finished preparing, the data for one frame is ready. Drawing happens inside the respective pipeline; for OpenGLPipeline the draw flow is as follows:

bool OpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
                          const FrameBuilder::LightGeometry& lightGeometry,
                          LayerUpdateQueue* layerUpdateQueue, const Rect& contentDrawBounds,
                          bool opaque, bool wideColorGamut,
                          const BakedOpRenderer::LightInfo& lightInfo,
                          const std::vector<sp<RenderNode>>& renderNodes,
                          FrameInfoVisualizer* profiler) {
    mEglManager.damageFrame(frame, dirty);

    bool drew = false;

    auto& caches = Caches::getInstance();
    FrameBuilder frameBuilder(dirty, frame.width(), frame.height(), lightGeometry, caches);

    frameBuilder.deferLayers(*layerUpdateQueue);
    layerUpdateQueue->clear();

    frameBuilder.deferRenderNodeScene(renderNodes, contentDrawBounds);

    BakedOpRenderer renderer(caches, mRenderThread.renderState(), opaque, wideColorGamut,
                             lightInfo);
    frameBuilder.replayBakedOps<BakedOpDispatcher>(renderer);
    ProfileRenderer profileRenderer(renderer);
    profiler->draw(profileRenderer);
    drew = renderer.didDraw();

    // post frame cleanup
    caches.clearGarbage();
    caches.pathCache.trim();
    caches.tessellationCache.trim();

#if DEBUG_MEMORY_USAGE
    caches.dumpMemoryUsage();
#else
    if (CC_UNLIKELY(Properties::debugLevel & kDebugMemory)) {
        caches.dumpMemoryUsage();
    }
#endif

    return drew;
}

Frame describes one frame of data: essentially its width, height, bufferAge, and Surface. At the start of drawing it is built by the EglManager from the Surface's properties:

Frame EglManager::beginFrame(EGLSurface surface) {
    LOG_ALWAYS_FATAL_IF(surface == EGL_NO_SURFACE, "Tried to beginFrame on EGL_NO_SURFACE!");
    makeCurrent(surface);
    Frame frame;
    frame.mSurface = surface;
    eglQuerySurface(mEglDisplay, surface, EGL_WIDTH, &frame.mWidth);
    eglQuerySurface(mEglDisplay, surface, EGL_HEIGHT, &frame.mHeight);
    frame.mBufferAge = queryBufferAge(surface);
    eglBeginFrame(mEglDisplay, surface);
    return frame;
}

damageFrame mainly sets up the parameters for partial updates; the damaged region is exactly what the accumulator added up earlier during the prepare phase.

FrameBuilder is used to build one frame and inherits from CanvasStateClient:

FrameBuilder::FrameBuilder(const SkRect& clip, uint32_t viewportWidth, uint32_t viewportHeight,
                           const LightGeometry& lightGeometry, Caches& caches)
        : mStdAllocator(mAllocator)
        , mLayerBuilders(mStdAllocator)
        , mLayerStack(mStdAllocator)
        , mCanvasState(*this)
        , mCaches(caches)
        , mLightRadius(lightGeometry.radius)
        , mDrawFbo0(true) {
    // Prepare to defer Fbo0
    auto fbo0 = mAllocator.create<LayerBuilder>(viewportWidth, viewportHeight, Rect(clip));
    mLayerBuilders.push_back(fbo0);
    mLayerStack.push_back(0);
    mCanvasState.initializeSaveStack(viewportWidth, viewportHeight, clip.fLeft, clip.fTop,
                                     clip.fRight, clip.fBottom, lightGeometry.center);
}

FrameBuilder creates a list of LayerBuilders to record the drawing state of the RenderNodes, and then replays the recorded RenderNodes in reverse order.

deferLayers essentially does that reversal: the RenderNodes are deferred in reverse order, and so are each RenderNode's Ops.

void FrameBuilder::deferLayers(const LayerUpdateQueue& layers) {
    // Render all layers to be updated, in order. Defer in reverse order, so that they'll be
    // updated in the order they're passed in (mLayerBuilders are issued to Renderer in reverse)
    for (int i = layers.entries().size() - 1; i >= 0; i--) {
        RenderNode* layerNode = layers.entries()[i].renderNode.get();
        // only schedule repaint if node still on layer - possible it may have been
        // removed during a dropped frame, but layers may still remain scheduled so
        // as not to lose info on what portion is damaged
        OffscreenBuffer* layer = layerNode->getLayer();
        if (CC_LIKELY(layer)) {
            ATRACE_FORMAT("Optimize HW Layer DisplayList %s %ux%u", layerNode->getName(),
                          layerNode->getWidth(), layerNode->getHeight());

            Rect layerDamage = layers.entries()[i].damage;
            // TODO: ensure layer damage can't be larger than layer
            layerDamage.doIntersect(0, 0, layer->viewportWidth, layer->viewportHeight);
            layerNode->computeOrdering();

            // map current light center into RenderNode's coordinate space
            Vector3 lightCenter = mCanvasState.currentSnapshot()->getRelativeLightCenter();
            layer->inverseTransformInWindow.mapPoint3d(lightCenter);

            saveForLayer(layerNode->getWidth(), layerNode->getHeight(), 0, 0, layerDamage,
                         lightCenter, nullptr, layerNode);

            if (layerNode->getDisplayList()) {
                deferNodeOps(*layerNode);
            }
            restoreForLayer();
        }
    }
}

The point of the reverse order is to settle who gets drawn first and who later. The nodes form a tree; if a subtree were drawn first and its parent afterwards, whatever is drawn later would cover up what was drawn before. Keep an eye on the data flow: the reversed layers are stored in mLayerBuilders.

BakedOpRenderer is the renderer. It is the main manager of a set of rendering tasks, such as one frame of data and the FBOs it contains: it manages their lifetimes and binds the framebuffers, and it is the only place where FBOs are created and destroyed. All rendering operations reach it through the dispatcher.

    BakedOpRenderer(Caches& caches, RenderState& renderState, bool opaque, bool wideColorGamut,
                    const LightInfo& lightInfo)
            : mGlopReceiver(DefaultGlopReceiver)
            , mRenderState(renderState)
            , mCaches(caches)
            , mOpaque(opaque)
            , mWideColorGamut(wideColorGamut)
            , mLightInfo(lightInfo) {}

mGlopReceiver is a function pointer, defaulting to DefaultGlopReceiver:

    static void DefaultGlopReceiver(BakedOpRenderer& renderer, const Rect* dirtyBounds,
                                    const ClipBase* clip, const Glop& glop) {
        renderer.renderGlopImpl(dirtyBounds, clip, glop);
    }

replayBakedOps is a template function, which leaves it free to decide where the recorded Ops get replayed. It builds an array of lambdas; during replay, a recorded BakedOpState can find its receiver through state->op->opId and be replayed there.

replayBakedOps is implemented as follows:

    template <typename StaticDispatcher, typename Renderer>
    void replayBakedOps(Renderer& renderer) {
        std::vector<OffscreenBuffer*> temporaryLayers;
        finishDefer();

#define X(Type)                                                                   \
    [](void* renderer, const BakedOpState& state) {                               \
        StaticDispatcher::on##Type(*(static_cast<Renderer*>(renderer)),           \
                                   static_cast<const Type&>(*(state.op)), state); \
    },
        static BakedOpReceiver unmergedReceivers[] = BUILD_RENDERABLE_OP_LUT(X);
#undef X

#define X(Type)                                                                           \
    [](void* renderer, const MergedBakedOpList& opList) {                                 \
        StaticDispatcher::onMerged##Type##s(*(static_cast<Renderer*>(renderer)), opList); \
    },
        static MergedOpReceiver mergedReceivers[] = BUILD_MERGEABLE_OP_LUT(X);
#undef X

        // Relay through layers in reverse order, since layers
        // later in the list will be drawn by earlier ones
        for (int i = mLayerBuilders.size() - 1; i >= 1; i--) {
            GL_CHECKPOINT(MODERATE);
            LayerBuilder& layer = *(mLayerBuilders[i]);
            if (layer.renderNode) {
                // cached HW layer - can't skip layer if empty
                renderer.startRepaintLayer(layer.offscreenBuffer, layer.repaintRect);
                GL_CHECKPOINT(MODERATE);
                layer.replayBakedOpsImpl((void*)&renderer, unmergedReceivers, mergedReceivers);
                GL_CHECKPOINT(MODERATE);
                renderer.endLayer();
            } else if (!layer.empty()) {
                // save layer - skip entire layer if empty (in which case, LayerOp has null layer).
                layer.offscreenBuffer = renderer.startTemporaryLayer(layer.width, layer.height);
                temporaryLayers.push_back(layer.offscreenBuffer);
                GL_CHECKPOINT(MODERATE);
                layer.replayBakedOpsImpl((void*)&renderer, unmergedReceivers, mergedReceivers);
                GL_CHECKPOINT(MODERATE);
                renderer.endLayer();
            }
        }

        GL_CHECKPOINT(MODERATE);
        if (CC_LIKELY(mDrawFbo0)) {
            const LayerBuilder& fbo0 = *(mLayerBuilders[0]);
            renderer.startFrame(fbo0.width, fbo0.height, fbo0.repaintRect);
            GL_CHECKPOINT(MODERATE);
            fbo0.replayBakedOpsImpl((void*)&renderer, unmergedReceivers, mergedReceivers);
            GL_CHECKPOINT(MODERATE);
            renderer.endFrame(fbo0.repaintRect);
        }

        for (auto& temporaryLayer : temporaryLayers) {
            renderer.recycleTemporaryLayer(temporaryLayer);
        }
    }

This table corresponds to the lookup table (LUT) we touched on during the recording flow: unmergedReceivers and mergedReceivers each map to their respective LUT. Our ColorOp, for instance, ends up calling BakedOpDispatcher::onColorOp. Also note that our drawColor is issued from fbo0.
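Stripped of the X-macros, the dispatch boils down to an array of handlers indexed by opId, so replay is just "table[op->opId](op)". A minimal sketch with illustrative names (not the real hwui macros or types):

#include <cstdio>

struct MiniOp { int opId; };
struct MiniColorOp : MiniOp { int color; };
struct MiniRectOp  : MiniOp { float l, t, r, b; };

enum MiniOpId { ColorOpId = 0, RectOpId = 1, OpCount };

using OpReceiver = void (*)(const MiniOp&);

static void onColorOp(const MiniOp& op) {
    std::printf("color 0x%x\n", static_cast<const MiniColorOp&>(op).color);
}
static void onRectOp(const MiniOp& op) {
    std::printf("rect\n");
}

// The LUT: one entry per op type, indexed by opId.
static const OpReceiver receivers[OpCount] = { onColorOp, onRectOp };

void replay(const MiniOp& op) {
    receivers[op.opId](op);  // jump straight to the handler for this op type
}

// Usage (illustrative): build a MiniColorOp with opId = ColorOpId and color 0x00ff00,
// then call replay(op); it dispatches to onColorOp through the table.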

void BakedOpDispatcher::onColorOp(BakedOpRenderer& renderer, const ColorOp& op,
                                  const BakedOpState& state) {
    SkPaint paint;
    paint.setColor(op.color);
    paint.setBlendMode(op.mode);

    Glop glop;
    GlopBuilder(renderer.renderState(), renderer.caches(), &glop)
            .setRoundRectClipState(state.roundRectClipState)
            .setMeshUnitQuad()
            .setFillPaint(paint, state.alpha)
            .setTransform(Matrix4::identity(), TransformFlags::None)
            .setModelViewMapUnitToRect(state.computedState.clipState->rect)
            .build();
    renderer.renderGlop(state, glop);
}

The color value we want to draw is set directly on the paint, and so is the blend mode.

The dispatching logic itself lives in LayerBuilder's replayBakedOpsImpl:

void LayerBuilder::replayBakedOpsImpl(void* arg, BakedOpReceiver* unmergedReceivers,
                                      MergedOpReceiver* mergedReceivers) const {
    if (renderNode) {
        ATRACE_FORMAT_BEGIN("Issue HW Layer DisplayList %s %ux%u", renderNode->getName(), width,
                            height);
    } else {
        ATRACE_BEGIN("flush drawing commands");
    }

    for (const BatchBase* batch : mBatches) {
        size_t size = batch->getOps().size();
        if (size > 1 && batch->isMerging()) {
            int opId = batch->getOps()[0]->op->opId;
            const MergingOpBatch* mergingBatch = static_cast<const MergingOpBatch*>(batch);
            MergedBakedOpList data = {batch->getOps().data(), size,
                                      mergingBatch->getClipSideFlags(),
                                      mergingBatch->getClipRect()};
            mergedReceivers[opId](arg, data);
        } else {
            for (const BakedOpState* op : batch->getOps()) {
                unmergedReceivers[op->op->opId](arg, *op);
            }
        }
    }
    ATRACE_END();
}

Our drawColor is invoked through unmergedReceivers!

The code really is complex and takes time to read, but once it makes sense you can skip this plumbing in the future and go straight to where the Ops are drawn.

When the Ops are rendered, they are wrapped one more time, into Glops. Glops are built uniformly by the GlopBuilder and, once built, rendered by renderGlop:

    void renderGlop(const BakedOpState& state, const Glop& glop) {
        renderGlop(&state.computedState.clippedBounds, state.computedState.getClipIfNeeded(), glop);
    }

    void renderGlop(const Rect* dirtyBounds, const ClipBase* clip, const Glop& glop) {
        mGlopReceiver(*this, dirtyBounds, clip, glop);
    }

mGlopReceiver is the function pointer pointing to DefaultGlopReceiver, which after one more hop ends up in BakedOpRenderer's renderGlopImpl.

renderGlopImpl is as follows:

void BakedOpRenderer::renderGlopImpl(const Rect* dirtyBounds, const ClipBase* clip,
                                     const Glop& glop) {
    prepareRender(dirtyBounds, clip);
    // Disable blending if this is the first draw to the main framebuffer, in case app has defined
    // transparency where it doesn't make sense - as first draw in opaque window. Note that we only
    // apply this improvement when the blend mode is SRC_OVER - other modes (e.g. CLEAR) can be
    // valid draws that affect other content (e.g. draw CLEAR, then draw DST_OVER)
    bool overrideDisableBlending = !mHasDrawn && mOpaque && !mRenderTarget.frameBufferId &&
                                   glop.blend.src == GL_ONE &&
                                   glop.blend.dst == GL_ONE_MINUS_SRC_ALPHA;
    mRenderState.render(glop, mRenderTarget.orthoMatrix, overrideDisableBlending);
    if (!mRenderTarget.frameBufferId) mHasDrawn = true;
}

In renderGlopImpl the render is prepared, and the final rendering goes through mRenderState's render method, which calls the OpenGL ES interfaces directly to draw our Ops. How exactly they are drawn is OpenGL's business, so we will stop here and leave that to OpenGL.

waitOnFences waits until all tasks have finished drawing (these fences are not the same concept as the fences on the BufferQueue side). Once drawing is complete, swapBuffers swaps the buffers and sends the finished data off for display.

In addition, hwui does a lot of jank tracking, which helps with debugging performance.

Summary

The test code is only a few lines, yet the layers underneath go through all of this. Let's sum it up:

  • Hardware drawing, or hardware acceleration, uses hwui to convert 2D drawing operations into 3D drawing.
  • Every drawing operation is described by a RecordedOp; complex drawings are broken down into simple primitives and recorded with the RecordingCanvas.
  • Each View corresponds to a RenderNode, and each UI surface has a DisplayList that stores the recorded Ops.
  • There is only one RenderThread per process and all drawing happens on it, so operations from other threads are posted to it as tasks or WorkItems. DrawFrameTask is a special task on the RenderThread that draws the whole UI and is triggered by Vsync.
  • OpenGL is single-threaded, so each RenderThread has its own context, the CanvasContext; prepareTree synchronizes the Ops in the DisplayList into the CanvasContext's layerUpdateQueue, getting the frame's data ready to draw.
  • The drawing itself is done by a concrete pipeline; there are currently three pipeline types, with OpenGLPipeline as the default.
  • When the OpenGLPipeline draws, the DisplayList data is wrapped once more by the FrameBuilder and LayerBuilder. During replayBakedOps the Ops are converted into concrete drawing calls and dispatched through BakedOpDispatcher to the BakedOpRenderer for rendering; the real rendering happens in mRenderState, which calls the OpenGL APIs directly.

Through all of this, if you hold on to the main line of the data flow, namely the Ops and the DisplayList, it becomes much easier to follow. Overall it breaks down into the parts below, summarized in one overall diagram:
(Figure: overview of the HWUI classes)

  • The recording part: the 2D-to-3D conversion, recording the drawing Ops.
  • The draw-control part: synchronizing with the app above and with the display system, driving the drawing, including animation handling.
  • The draw-execution part: interacting with the concrete acceleration backend, using its API to draw the UI.

That, walked through alongside the test code, is what hwui actually does.
