SurfaceFlinger Composition Flow (Part 3)
Configuring Hardware Composition: setUpHWComposer
Back in handleMessageRefresh, let's continue with the handling of the Refresh message. By this point, the data to be composed and displayed has already been updated, during rebuildLayerStacks, into each display's own layersSortedByZ. With the layer stacks built, HWC composition is set up.
setUpHWComposer is fairly long, so we'll walk through it in sections. It mainly does the following:
1. DisplayDevice beginFrame
void SurfaceFlinger::setUpHWComposer() {
... ...
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
bool dirty = !mDisplays[dpy]->getDirtyRegion(false).isEmpty();
bool empty = mDisplays[dpy]->getVisibleLayersSortedByZ().size() == 0;
bool wasEmpty = !mDisplays[dpy]->lastCompositionHadVisibleLayers;
// decide whether this display must be recomposed
bool mustRecompose = dirty && !(empty && wasEmpty);
... ...
mDisplays[dpy]->beginFrame(mustRecompose);
if (mustRecompose) {
mDisplays[dpy]->lastCompositionHadVisibleLayers = !empty;
}
}
Android handles each display independently. The main work here is calling the display's beginFrame function.
status_t DisplayDevice::beginFrame(bool mustRecompose) const {
return mDisplaySurface->beginFrame(mustRecompose);
}
mDisplaySurface differs by display type: the primary and external displays use FramebufferSurface, while virtual displays use VirtualDisplaySurface. We'll leave virtual displays aside for now.
status_t FramebufferSurface::beginFrame(bool /*mustRecompose*/) {
return NO_ERROR;
}
FramebufferSurface's beginFrame does essentially nothing.
Back to setUpHWComposer.
2. Building the work list
void SurfaceFlinger::setUpHWComposer() {
... ...
// build the h/w work list
if (CC_UNLIKELY(mGeometryInvalid)) {
mGeometryInvalid = false;
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
sp<const DisplayDevice> displayDevice(mDisplays[dpy]);
const auto hwcId = displayDevice->getHwcDisplayId();
if (hwcId >= 0) {
const Vector<sp<Layer>>& currentLayers(
displayDevice->getVisibleLayersSortedByZ());
for (size_t i = 0; i < currentLayers.size(); i++) {
const auto& layer = currentLayers[i];
if (!layer->hasHwcLayer(hwcId)) {
if (!layer->createHwcLayer(getBE().mHwc.get(), hwcId)) {
layer->forceClientComposition(hwcId);
continue;
}
}
layer->setGeometry(displayDevice, i);
if (mDebugDisableHWC || mDebugRegion) {
layer->forceClientComposition(hwcId);
}
}
}
}
}
For every layer on every display, a corresponding HWC layer is created. Note the hwcId check: HWC layers are created only for displays whose hwcId is >= 0. Layer's createHwcLayer function is as follows:
bool Layer::createHwcLayer(HWComposer* hwc, int32_t hwcId) {
LOG_ALWAYS_FATAL_IF(getBE().mHwcLayers.count(hwcId) != 0,
"Already have a layer for hwcId %d", hwcId);
HWC2::Layer* layer = hwc->createLayer(hwcId);
if (!layer) {
return false;
}
LayerBE::HWCInfo& hwcInfo = getBE().mHwcLayers[hwcId];
hwcInfo.hwc = hwc;
hwcInfo.layer = layer;
layer->setLayerDestroyedListener(
[this, hwcId](HWC2::Layer* /*layer*/) { getBE().mHwcLayers.erase(hwcId); });
return true;
}
Creation is keyed by hwcId; that is, each layer on the SurfaceFlinger side creates one HWC layer per display. HWComposer finds the HWC2 display by hwcId, and the concrete HWC2::Display then creates its own HWC layer. The layer goes through HWComposer:
HWC2::Layer* HWComposer::createLayer(int32_t displayId) {
if (!isValidDisplay(displayId)) {
ALOGE("Failed to create layer on invalid display %d", displayId);
return nullptr;
}
auto display = mDisplayData[displayId].hwcDisplay;
HWC2::Layer* layer;
auto error = display->createLayer(&layer);
if (error != HWC2::Error::None) {
ALOGE("Failed to create layer on display %d: %s (%d)", displayId,
to_string(error).c_str(), static_cast<int32_t>(error));
return nullptr;
}
return layer;
}
The actual creation of the HWC layer is done in the vendor-implemented HAL; the corresponding HWC2 command is HWC2_FUNCTION_CREATE_LAYER.
Error HwcHal::createLayer(Display display, Layer* outLayer)
{
int32_t err = mDispatch.createLayer(mDevice, display, outLayer);
return static_cast<Error>(err);
}
HWC2::Display also keeps a reference to the created HWC layer:
* frameworks/native/services/surfaceflinger/DisplayHardware/HWC2.cpp
Error Display::createLayer(Layer** outLayer)
{
if (!outLayer) {
return Error::BadParameter;
}
hwc2_layer_t layerId = 0;
auto intError = mComposer.createLayer(mId, &layerId);
auto error = static_cast<Error>(intError);
if (error != Error::None) {
return error;
}
auto layer = std::make_unique<Layer>(
mComposer, mCapabilities, mId, layerId);
*outLayer = layer.get();
mLayers.emplace(layerId, std::move(layer));
return Error::None;
}
If the HWC layer could not be created, the layer is forced to be composed on the client side via forceClientComposition.
void Layer::forceClientComposition(int32_t hwcId) {
if (getBE().mHwcLayers.count(hwcId) == 0) {
ALOGE("forceClientComposition: no HWC layer found (%d)", hwcId);
return;
}
getBE().mHwcLayers[hwcId].forceClientComposition = true;
}
In addition, if HWC composition is disabled, or Region debugging is enabled, client composition is also forced:
mDebugDisableHWC || mDebugRegion
Both of these debug switches can be toggled in Settings, under Developer options.
Once the HWC layer exists, the layer's geometry is set:
void Layer::setGeometry(const sp<const DisplayDevice>& displayDevice, uint32_t z)
{
... ... // note: all data here comes from the DrawingState
const State& s(getDrawingState());
... ...
if (!isOpaque(s) || getAlpha() != 1.0f) {
blendMode =
mPremultipliedAlpha ? HWC2::BlendMode::Premultiplied : HWC2::BlendMode::Coverage;
}
auto error = hwcLayer->setBlendMode(blendMode);
// compute the displayFrame
Rect frame{t.transform(computeBounds(activeTransparentRegion))};
... ...
const Transform& tr(displayDevice->getTransform());
Rect transformedFrame = tr.transform(frame);
error = hwcLayer->setDisplayFrame(transformedFrame);
... ...
// compute the sourceCrop
FloatRect sourceCrop = computeCrop(displayDevice);
error = hwcLayer->setSourceCrop(sourceCrop);
... ...
// set the alpha
float alpha = static_cast<float>(getAlpha());
error = hwcLayer->setPlaneAlpha(alpha);
... ...
// set the z-order
error = hwcLayer->setZOrder(z);
... ...
int type = s.type;
int appId = s.appId;
sp<Layer> parent = mDrawingParent.promote();
if (parent.get()) {
auto& parentState = parent->getDrawingState();
type = parentState.type;
appId = parentState.appId;
}
// set the layer info
error = hwcLayer->setInfo(type, appId);
ALOGE_IF(error != HWC2::Error::None, "[%s] Failed to set info (%d)", mName.string(),
static_cast<int32_t>(error));
// set the transform
const uint32_t orientation = transform.getOrientation();
if (orientation & Transform::ROT_INVALID) {
// we can only handle simple transformation
hwcInfo.forceClientComposition = true;
} else {
auto transform = static_cast<HWC2::Transform>(orientation);
auto error = hwcLayer->setTransform(transform);
... ...
}
}
Note that all of the data here comes from the DrawingState. setGeometry mainly does the following:
- Determine the HWC layer's blend mode
The blend mode defines how two layers are blended together. The main modes are:
/* Blend modes, settable per layer */
typedef enum {
HWC2_BLEND_MODE_INVALID = 0,
/* colorOut = colorSrc */
HWC2_BLEND_MODE_NONE = 1,
/* colorOut = colorSrc + colorDst * (1 - alphaSrc) */
HWC2_BLEND_MODE_PREMULTIPLIED = 2,
/* colorOut = colorSrc * alphaSrc + colorDst * (1 - alphaSrc) */
HWC2_BLEND_MODE_COVERAGE = 3,
} hwc2_blend_mode_t;
HWC2_BLEND_MODE_NONE: no blending; the output is exactly the source.
HWC2_BLEND_MODE_PREMULTIPLIED: premultiplied alpha; the source channels are assumed to be premultiplied already, so only the destination is attenuated by the source alpha.
HWC2_BLEND_MODE_COVERAGE: coverage; both the source and the destination are attenuated by the source alpha.
- Compute the displayFrame and set it on the hwcLayer
The displayFrame is transformed by the display's transform.
- Compute the sourceCrop and set it on the hwcLayer
The sourceCrop comes down from the upper layers, combined with the display's and the other layers' properties.
- Set the alpha value
- Set the z-order
- Set the layer info
type and appId are set when the Android framework creates the SurfaceControl (search for "new SurfaceControl" and you'll find it). type is the surface kind, e.g. ScreenshotSurface or Background; appId identifies the owning application.
- Set the transform
Layer properties are set by reading and writing a command buffer; for example, setting the blend mode ultimately calls HwcHal's setLayerBlendMode:
Error HwcHal::setLayerBlendMode(Display display, Layer layer, int32_t mode)
{
int32_t err = mDispatch.setLayerBlendMode(mDevice, display, layer, mode);
return static_cast<Error>(err);
}
Back to setUpHWComposer.
3. Setting each layer's per-frame data
void SurfaceFlinger::setUpHWComposer() {
... ...
mat4 colorMatrix = mColorMatrix * computeSaturationMatrix() * mDaltonizer();
// Set the per-frame data
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
const auto hwcId = displayDevice->getHwcDisplayId();
... ... // set each display's color matrix
if (colorMatrix != mPreviousColorMatrix) {
status_t result = getBE().mHwc->setColorTransform(hwcId, colorMatrix);
... ...
}
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
if (layer->getForceClientComposition(hwcId)) {
ALOGV("[%s] Requesting Client composition", layer->getName().string());
layer->setCompositionType(hwcId, HWC2::Composition::Client);
continue;
}
layer->setPerFrameData(displayDevice);
}
if (hasWideColorDisplay) {
android_color_mode newColorMode;
android_dataspace newDataSpace = HAL_DATASPACE_V0_SRGB;
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
newDataSpace = bestTargetDataSpace(layer->getDataSpace(), newDataSpace);
ALOGV("layer: %s, dataspace: %s (%#x), newDataSpace: %s (%#x)",
layer->getName().string(), dataspaceDetails(layer->getDataSpace()).c_str(),
layer->getDataSpace(), dataspaceDetails(newDataSpace).c_str(), newDataSpace);
}
newColorMode = pickColorMode(newDataSpace);
setActiveColorModeInternal(displayDevice, newColorMode);
}
}
mPreviousColorMatrix = colorMatrix;
Setting the per-frame data mainly involves:
- Setting each display's color matrix
This can be configured in Developer options to simulate color spaces. The supported transforms are:
typedef enum {
HAL_COLOR_TRANSFORM_IDENTITY = 0,
HAL_COLOR_TRANSFORM_ARBITRARY_MATRIX = 1,
HAL_COLOR_TRANSFORM_VALUE_INVERSE = 2,
HAL_COLOR_TRANSFORM_GRAYSCALE = 3,
HAL_COLOR_TRANSFORM_CORRECT_PROTANOPIA = 4,
HAL_COLOR_TRANSFORM_CORRECT_DEUTERANOPIA = 5,
HAL_COLOR_TRANSFORM_CORRECT_TRITANOPIA = 6,
} android_color_transform_t;
The colorTransform is mainly used for color conversion, for example to simulate color blindness or to assist color-blind users.
- Setting each layer's per-frame display data
setPerFrameData differs between BufferLayer and ColorLayer. ColorLayer's logic is fairly simple:
void ColorLayer::setPerFrameData(const sp<const DisplayDevice>& displayDevice) {
... ...
// set the visible region
auto error = hwcLayer->setVisibleRegion(visible);
// specify the composition type
setCompositionType(hwcId, HWC2::Composition::SolidColor);
half4 color = getColor();
// set the color
error = hwcLayer->setColor({static_cast<uint8_t>(std::round(255.0f * color.r)),
static_cast<uint8_t>(std::round(255.0f * color.g)),
static_cast<uint8_t>(std::round(255.0f * color.b)), 255});
... ... // drop the transform
error = hwcLayer->setTransform(HWC2::Transform::None);
... ...
}
A ColorLayer involves four operations: set the visible region, which was computed earlier but must be clamped to the display's viewport here; specify the composition type, SolidColor by default; set the color, the layer's color in RGBA format with alpha fixed at 255 (fully opaque); and finally clear the transform, since a ColorLayer needs none.
BufferLayer's setPerFrameData looks like this:
void BufferLayer::setPerFrameData(const sp<const DisplayDevice>& displayDevice) {
// set the visible region
auto& hwcLayer = hwcInfo.layer;
auto error = hwcLayer->setVisibleRegion(visible);
... ... // set the surface damage region
error = hwcLayer->setSurfaceDamage(surfaceDamageRegion);
... ...
// sideband layers: the composition type defaults to Sideband
if (getBE().compositionInfo.hwc.sidebandStream.get()) {
setCompositionType(hwcId, HWC2::Composition::Sideband);
// attach the sideband stream
error = hwcLayer->setSidebandStream(getBE().compositionInfo.hwc.sidebandStream->handle());
... ...
return; // sideband layers return here directly
}
// cursor layers use Cursor composition; everything else uses Device
if (mPotentialCursor) {
ALOGV("[%s] Requesting Cursor composition", mName.string());
setCompositionType(hwcId, HWC2::Composition::Cursor);
} else {
ALOGV("[%s] Requesting Device composition", mName.string());
setCompositionType(hwcId, HWC2::Composition::Device);
}
// set the dataspace
error = hwcLayer->setDataspace(mCurrentState.dataSpace);
if (error != HWC2::Error::None) {
ALOGE("[%s] Failed to set dataspace %d: %s (%d)", mName.string(), mCurrentState.dataSpace,
to_string(error).c_str(), static_cast<int32_t>(error));
}
// get the GraphicBuffer
uint32_t hwcSlot = 0;
sp<GraphicBuffer> hwcBuffer;
hwcInfo.bufferCache.getHwcBuffer(getBE().compositionInfo.mBufferSlot,
getBE().compositionInfo.mBuffer, &hwcSlot, &hwcBuffer);
// get the acquire fence
auto acquireFence = mConsumer->getCurrentFence();
// set the buffer
error = hwcLayer->setBuffer(hwcSlot, hwcBuffer, acquireFence);
if (error != HWC2::Error::None) {
ALOGE("[%s] Failed to set buffer %p: %s (%d)", mName.string(),
getBE().compositionInfo.mBuffer->handle, to_string(error).c_str(),
static_cast<int32_t>(error));
}
}
BufferLayer does more than ColorLayer: sideband, cursor, and ordinary UI layers are all BufferLayers, and each kind is handled slightly differently. The hard part here is the fence handling. Note also that only the buffer's handle is passed down to the HWC, not the buffer's contents:
Error Composer::setLayerBuffer(Display display, Layer layer,
uint32_t slot, const sp<GraphicBuffer>& buffer, int acquireFence)
{
mWriter.selectDisplay(display);
mWriter.selectLayer(layer);
if (mIsUsingVrComposer && buffer.get()) {
... ...//VR
}
const native_handle_t* handle = nullptr;
if (buffer.get()) {
handle = buffer->getNativeBuffer()->handle;
}
mWriter.setLayerBuffer(slot, handle, acquireFence);
return Error::NONE;
}
So each layer's data is either a buffer or a solid color. While processing each layer, wide color must also be handled: bestTargetDataSpace first finds the best dataspace across the layers, pickColorMode then picks a color mode, and setActiveColorModeInternal applies it.
void SurfaceFlinger::setActiveColorModeInternal(const sp<DisplayDevice>& hw,
android_color_mode_t mode) {
int32_t type = hw->getDisplayType();
android_color_mode_t currentMode = hw->getActiveColorMode();
... ...
hw->setActiveColorMode(mode);
getHwComposer().setActiveColorMode(type, mode);
}
Back to setUpHWComposer.
4. prepareFrame: preparing the frame
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
if (!displayDevice->isDisplayOn()) {
continue;
}
status_t result = displayDevice->prepareFrame(*getBE().mHwc);
ALOGE_IF(result != NO_ERROR, "prepareFrame for display %zd failed:"
" %d (%s)", displayId, result, strerror(-result));
}
}
The prepare flow is written a bit obliquely nowadays; it used to be called directly from SurfaceFlinger, but now goes through each DisplayDevice's prepareFrame.
The prepareFrame function:
status_t DisplayDevice::prepareFrame(HWComposer& hwc) {
status_t error = hwc.prepare(*this);
if (error != NO_ERROR) {
return error;
}
DisplaySurface::CompositionType compositionType;
bool hasClient = hwc.hasClientComposition(mHwcDisplayId);
bool hasDevice = hwc.hasDeviceComposition(mHwcDisplayId);
if (hasClient && hasDevice) {
compositionType = DisplaySurface::COMPOSITION_MIXED;
} else if (hasClient) {
compositionType = DisplaySurface::COMPOSITION_GLES;
} else if (hasDevice) {
compositionType = DisplaySurface::COMPOSITION_HWC;
} else {
// Nothing to do -- when turning the screen off we get a frame like
// this. Call it a HWC frame since we won't be doing any GLES work but
// will do a prepare/set cycle.
compositionType = DisplaySurface::COMPOSITION_HWC;
}
return mDisplaySurface->prepareFrame(compositionType);
}
When the layer data was set, each layer's composition type was already specified, but that was only SurfaceFlinger's wishful thinking; the HWC still has to accept it. HWComposer's prepare function:
status_t HWComposer::prepare(DisplayDevice& displayDevice) {
ATRACE_CALL();
Mutex::Autolock _l(mDisplayLock);
auto displayId = displayDevice.getHwcDisplayId();
... ...
auto& displayData = mDisplayData[displayId];
auto& hwcDisplay = displayData.hwcDisplay;
if (!hwcDisplay->isConnected()) {
return NO_ERROR;
}
... ...
if (!displayData.hasClientComposition) {
sp<android::Fence> outPresentFence;
uint32_t state = UINT32_MAX;
error = hwcDisplay->presentOrValidate(&numTypes, &numRequests, &outPresentFence , &state);
if (error != HWC2::Error::None && error != HWC2::Error::HasChanges) {
ALOGV("skipValidate: Failed to Present or Validate");
return UNKNOWN_ERROR;
}
if (state == 1) { //Present Succeeded.
std::unordered_map<HWC2::Layer*, sp<Fence>> releaseFences;
error = hwcDisplay->getReleaseFences(&releaseFences);
displayData.releaseFences = std::move(releaseFences);
displayData.lastPresentFence = outPresentFence;
displayData.validateWasSkipped = true;
displayData.presentError = error;
return NO_ERROR;
}
// Present failed but Validate ran.
} else {
error = hwcDisplay->validate(&numTypes, &numRequests);
}
... ...
std::unordered_map<HWC2::Layer*, HWC2::Composition> changedTypes;
changedTypes.reserve(numTypes);
error = hwcDisplay->getChangedCompositionTypes(&changedTypes);
... ...
displayData.displayRequests = static_cast<HWC2::DisplayRequest>(0);
std::unordered_map<HWC2::Layer*, HWC2::LayerRequest> layerRequests;
layerRequests.reserve(numRequests);
error = hwcDisplay->getRequests(&displayData.displayRequests,
&layerRequests);
if (error != HWC2::Error::None) {
ALOGE("prepare: getRequests failed on display %d: %s (%d)", displayId,
to_string(error).c_str(), static_cast<int32_t>(error));
return BAD_INDEX;
}
displayData.hasClientComposition = false;
displayData.hasDeviceComposition = false;
for (auto& layer : displayDevice.getVisibleLayersSortedByZ()) {
auto hwcLayer = layer->getHwcLayer(displayId);
if (changedTypes.count(hwcLayer) != 0) {
// We pass false so we only update our state and don't call back
// into the HWC device
validateChange(layer->getCompositionType(displayId),
changedTypes[hwcLayer]);
layer->setCompositionType(displayId, changedTypes[hwcLayer], false);
}
switch (layer->getCompositionType(displayId)) {
... ...
}
if (layerRequests.count(hwcLayer) != 0 &&
layerRequests[hwcLayer] ==
HWC2::LayerRequest::ClearClientTarget) {
layer->setClearClientTarget(displayId, true);
} else {
if (layerRequests.count(hwcLayer) != 0) {
ALOGE("prepare: Unknown layer request: %s",
to_string(layerRequests[hwcLayer]).c_str());
}
layer->setClearClientTarget(displayId, false);
}
}
error = hwcDisplay->acceptChanges();
... ...
return NO_ERROR;
}
The prepare flow:
- First, try to finish prepare and present in one pass
If SurfaceFlinger requested no client composition (hasClientComposition is false), presentOrValidate first tries to present directly. If the HWC cannot present right away, a validate runs instead, and the flow is then the same as a plain validate. If the present succeeds, the frame is already on screen and none of the later steps are needed.
- Validate the display
Error Display::validate(uint32_t* outNumTypes, uint32_t* outNumRequests)
{
uint32_t numTypes = 0;
uint32_t numRequests = 0;
auto intError = mComposer.validateDisplay(mId, &numTypes, &numRequests);
auto error = static_cast<Error>(intError);
if (error != Error::None && error != Error::HasChanges) {
return error;
}
*outNumTypes = numTypes;
*outNumRequests = numRequests;
return error;
}
Note how Composer's validateDisplay differs from the other Composer calls:
Error Composer::validateDisplay(Display display, uint32_t* outNumTypes,
uint32_t* outNumRequests)
{
mWriter.selectDisplay(display);
mWriter.validateDisplay();
Error error = execute();
if (error != Error::NONE) {
return error;
}
mReader.hasChanges(display, outNumTypes, outNumRequests);
return Error::NONE;
}
validateDisplay is also issued to the HWC by writing commands into a buffer via the CommandWriter, but here there is an extra execute call. In fact, none of the buffered commands issued before validateDisplay have actually reached the HWC yet; they were only written into the buffer. execute is the real call: it triggers the HWC service side to parse the buffered commands and dispatch each one to its implementation in the HWC.
For example, setting the z-order is parsed like this:
bool ComposerClient::CommandReader::parseSetLayerZOrder(uint16_t length)
{
if (length != CommandWriterBase::kSetLayerZOrderLength) {
return false;
}
auto err = mHal.setLayerZOrder(mDisplay, mLayer, read());
if (err != Error::NONE) {
mWriter.setError(getCommandLoc(), err);
}
return true;
}
- Fetch the HWC's validate results
If the HWC cannot handle a composition type SurfaceFlinger requested, getChangedCompositionTypes returns the HWC's modifications, stored in changedTypes. The layer requests are fetched into layerRequests. Both layerRequests and changedTypes are maps keyed by HWC2::Layer.
- Apply the composition-type changes
If the HWC rejects a composition type, SurfaceFlinger updates its state according to the HWC's feedback, i.e. for each layer in changedTypes. The update goes through setCompositionType; note that the callIntoHwc parameter is false here.
- Respond to layerRequests
layerRequests mainly decides whether the client target, i.e. the client-side composition result, needs to be cleared. Keep an eye on clearClientTarget and how the later flow handles it.
void Layer::setClearClientTarget(int32_t hwcId, bool clear) {
if (getBE().mHwcLayers.count(hwcId) == 0) {
ALOGE("setClearClientTarget called without a valid HWC layer");
return;
}
getBE().mHwcLayers[hwcId].clearClientTarget = clear;
}
- Finally, accept the changes
SurfaceFlinger accepts the changes through the HWC:
Error Display::acceptChanges()
{
auto intError = mComposer.acceptDisplayChanges(mId);
return static_cast<Error>(intError);
}
This completes setUpHWComposer. The data to display has been handed to the HWC, and every layer's composition type is settled. If the HWC was able to update and present in one pass, the frame is already being shown.
Back in handleMessageRefresh, the next step is doDebugFlashRegions. It is purely a debug feature that makes the updated regions flash continuously, controlled by mDebugRegion.
void SurfaceFlinger::doDebugFlashRegions()
{
// is debugging enabled
if (CC_LIKELY(!mDebugRegion))
return;
}
The doTracing that follows is also a debug helper:
void SurfaceFlinger::doTracing(const char* where) {
ATRACE_CALL();
ATRACE_NAME(where);
if (CC_UNLIKELY(mTracing.isEnabled())) {
mTracing.traceLayers(where, dumpProtoInfo(LayerVector::StateSet::Drawing));
}
}
Composition: doComposition
If present and validate did not complete together, the display data has by now been sent to the HWC and each layer's composition type is settled. The actual composition happens in doComposition.
The doComposition function:
void SurfaceFlinger::doComposition() {
ATRACE_CALL();
ALOGV("doComposition");
const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
const sp<DisplayDevice>& hw(mDisplays[dpy]);
if (hw->isDisplayOn()) {
// transform the dirty region into this screen's coordinate space
const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));
// repaint the framebuffer (if needed)
doDisplayComposition(hw, dirtyRegion);
hw->dirtyRegion.clear();
hw->flip();
}
}
postFramebuffer();
}
Composition is again performed per display. The main steps:
- Get the dirty region
The display's dirty region was computed earlier when the layer stacks were rebuilt. If everything must be repainted (mRepaintEverything is true), the dirty region is the whole screen.
Region DisplayDevice::getDirtyRegion(bool repaintEverything) const {
Region dirty;
if (repaintEverything) {
dirty.set(getBounds());
} else {
const Transform& planeTransform(mGlobalTransform);
dirty = planeTransform.transform(this->dirtyRegion);
dirty.andSelf(getBounds());
}
return dirty;
}
2. Per-display composition
The doDisplayComposition function:
void SurfaceFlinger::doDisplayComposition(
const sp<const DisplayDevice>& displayDevice,
const Region& inDirtyRegion)
{
// skip only when this is not an HWC display and nothing is dirty
bool isHwcDisplay = displayDevice->getHwcDisplayId() >= 0;
if (!isHwcDisplay && inDirtyRegion.isEmpty()) {
ALOGV("Skipping display composition");
return;
}
ALOGV("doDisplayComposition");
if (!doComposeSurfaces(displayDevice)) return;
// swap buffers (presentation)
displayDevice->swapBuffers(getHwComposer());
}
The composition work is mainly done in doComposeSurfaces. There are only two composition paths: client-side composition on the GPU, and device-side composition in the HWC hardware. doComposeSurfaces mainly handles the client side, compositing through RenderEngine on the GPU.
The doComposeSurfaces function, in sections:
- RenderEngine initialization
bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
{
... ...
const Region bounds(displayDevice->bounds());
const DisplayRenderArea renderArea(displayDevice);
const auto hwcId = displayDevice->getHwcDisplayId();
mat4 oldColorMatrix;
const bool applyColorMatrix = !getBE().mHwc->hasDeviceComposition(hwcId) &&
!getBE().mHwc->hasCapability(HWC2::Capability::SkipClientColorTransform);
if (applyColorMatrix) {
mat4 colorMatrix = mColorMatrix * mDaltonizer();
oldColorMatrix = getRenderEngine().setupColorTransform(colorMatrix);
}
bool hasClientComposition = getBE().mHwc->hasClientComposition(hwcId);
if (hasClientComposition) {
ALOGV("hasClientComposition");
getBE().mRenderEngine->setWideColor(
displayDevice->getWideColorSupport() && !mForceNativeColorMode);
getBE().mRenderEngine->setColorMode(mForceNativeColorMode ?
HAL_COLOR_MODE_NATIVE : displayDevice->getActiveColorMode());
if (!displayDevice->makeCurrent()) {
... ...
}
const bool hasDeviceComposition = getBE().mHwc->hasDeviceComposition(hwcId);
if (hasDeviceComposition) {
getBE().mRenderEngine->clearWithColor(0, 0, 0, 0);
} else {
const Region letterbox(bounds.subtract(displayDevice->getScissor()));
// compute the area to clear
Region region(displayDevice->undefinedRegion.merge(letterbox));
// screen is already cleared here
if (!region.isEmpty()) {
// can happen with SurfaceView
drawWormhole(displayDevice, region);
}
}
if (displayDevice->getDisplayType() != DisplayDevice::DISPLAY_PRIMARY) {
const Rect& bounds(displayDevice->getBounds());
const Rect& scissor(displayDevice->getScissor());
if (scissor != bounds) {
const uint32_t height = displayDevice->getHeight();
getBE().mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
scissor.getWidth(), scissor.getHeight());
}
}
}
RenderEngine initialization includes:
- setting the color matrix via setupColorTransform
- choosing whether to use wide color via setWideColor
- setting the color mode via setColorMode
- binding the FBTarget surface and setting the viewport and projection matrix
This last step is done in DisplayDevice's makeCurrent:
bool DisplayDevice::makeCurrent() const {
bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
setViewportAndProjection();
return success;
}
- FBTarget background handling
In mixed mode, i.e. both hasClientComposition and hasDeviceComposition, the FBTarget background is cleared first. Composition rarely takes this path; usually drawWormhole fills the screen with RGBA_0000 instead.
- Setting the scissor via setScissor
For non-primary displays, setScissor sets the display's clip region.
That completes the initialization.
- Rendering the client-composited layers into the FBTarget
bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
{
... ...
const Transform& displayTransform = displayDevice->getTransform();
if (hwcId >= 0) {
// hwcId >= 0 means we are using the HWC
bool firstLayer = true;
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
const Region clip(bounds.intersect(
displayTransform.transform(layer->visibleRegion)));
if (!clip.isEmpty()) {
switch (layer->getCompositionType(hwcId)) {
case HWC2::Composition::Cursor:
case HWC2::Composition::Device:
case HWC2::Composition::Sideband:
case HWC2::Composition::SolidColor: {
const Layer::State& state(layer->getDrawingState());
if (layer->getClearClientTarget(hwcId) && !firstLayer &&
layer->isOpaque(state) && (state.color.a == 1.0f)
&& hasClientComposition) {
// never clear the very first layer since we're
// guaranteed the FB is already cleared
layer->clearWithOpenGL(renderArea);
}
break;
}
case HWC2::Composition::Client: {
layer->draw(renderArea, clip);
break;
}
default:
break;
}
} else {
ALOGV(" Skipping for empty clip");
}
firstLayer = false;
}
} else {
// we're not using h/w composer
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
const Region clip(bounds.intersect(
displayTransform.transform(layer->visibleRegion)));
if (!clip.isEmpty()) {
layer->draw(renderArea, clip);
}
}
}
if (applyColorMatrix) {
getRenderEngine().setupColorTransform(oldColorMatrix);
}
// disable scissor at the end of the frame
getBE().mRenderEngine->disableScissor();
return true;
}
hwcId >= 0 means the HWC is in use, which is the common case; for many VirtualDisplay cases, hwcId is -1.
When the HWC is used, each layer's visible region is first intersected with the display to get clip. If a layer is client-composited, layer->draw is called directly to render the layer's clip region into the FBTarget. If it is not client-composited but some other layer is, the layer's corresponding region in the FBTarget must be cleared with clearWithOpenGL; the cleared region is filled by HWC composition. In the end, the FBTarget's contents and the HWC's layers are combined into the final frame.
When the HWC is not in use, layer->draw is simply called to render each layer's clip region into the FBTarget.
That completes doComposeSurfaces, which mainly covers the client-side composition flow.
3. Swapping the display's buffers
void DisplayDevice::swapBuffers(HWComposer& hwc) const {
if (hwc.hasClientComposition(mHwcDisplayId)) {
mSurface.swapBuffers();
}
status_t result = mDisplaySurface->advanceFrame();
if (result != NO_ERROR) {
ALOGE("[%s] failed pushing new frame to HWC: %d",
mDisplayName.string(), result);
}
}
If there is client composition, eglSwapBuffers is called to swap buffers:
void Surface::swapBuffers() const {
if (!eglSwapBuffers(mEGLDisplay, mEGLSurface)) {
... ...
}
}
mDisplaySurface's advanceFrame differs by display: virtual displays use VirtualDisplaySurface, non-virtual displays use FramebufferSurface. advanceFrame fetches the FBTarget's data; here is the non-virtual case:
status_t FramebufferSurface::advanceFrame() {
uint32_t slot = 0;
sp<GraphicBuffer> buf;
sp<Fence> acquireFence(Fence::NO_FENCE);
android_dataspace_t dataspace = HAL_DATASPACE_UNKNOWN;
status_t result = nextBuffer(slot, buf, acquireFence, dataspace);
mDataSpace = dataspace;
if (result != NO_ERROR) {
ALOGE("error latching next FramebufferSurface buffer: %s (%d)",
strerror(-result), result);
}
return result;
}
Most of the work happens in nextBuffer:
status_t FramebufferSurface::nextBuffer(uint32_t& outSlot,
sp<GraphicBuffer>& outBuffer, sp<Fence>& outFence,
android_dataspace_t& outDataspace) {
Mutex::Autolock lock(mMutex);
BufferItem item;
status_t err = acquireBufferLocked(&item, 0);
... ...
if (mCurrentBufferSlot != BufferQueue::INVALID_BUFFER_SLOT &&
item.mSlot != mCurrentBufferSlot) {
mHasPendingRelease = true;
mPreviousBufferSlot = mCurrentBufferSlot;
mPreviousBuffer = mCurrentBuffer;
}
mCurrentBufferSlot = item.mSlot;
mCurrentBuffer = mSlots[mCurrentBufferSlot].mGraphicBuffer;
mCurrentFence = item.mFence;
outFence = item.mFence;
mHwcBufferCache.getHwcBuffer(mCurrentBufferSlot, mCurrentBuffer,
&outSlot, &outBuffer);
outDataspace = item.mDataSpace;
status_t result =
mHwc.setClientTarget(mDisplayType, outSlot, outFence, outBuffer, outDataspace);
... ...
}
In nextBuffer:
- Acquire a buffer
With client composition, swapBuffers calls queueBuffer, queueing the buffer into FramebufferSurface's BufferQueue; acquireBufferLocked here acquires a buffer back from that queue.
- Swap in the new buffer
mCurrentBufferSlot, mCurrentBuffer and mCurrentFence track the current buffer; if the newly acquired buffer is different, the old one is marked for release. Buffers are cached in mHwcBufferCache.
- Hand the FBTarget to the HWC
The key call is mHwc.setClientTarget:
status_t HWComposer::setClientTarget(int32_t displayId, uint32_t slot,
const sp<Fence>& acquireFence, const sp<GraphicBuffer>& target,
android_dataspace_t dataspace) {
... ...
auto& hwcDisplay = mDisplayData[displayId].hwcDisplay;
auto error = hwcDisplay->setClientTarget(slot, target, acquireFence, dataspace);
if (error != HWC2::Error::None) {
ALOGE("Failed to set client target for display %d: %s (%d)", displayId,
to_string(error).c_str(), static_cast<int32_t>(error));
return BAD_VALUE;
}
return NO_ERROR;
}
The FBTarget also travels to the HWC through the command buffer. In the HWC 1.x versions, a layer was created for the FBTarget when building the work list; HWC2 passes the FBTarget directly.
Back in doComposition, DisplayDevice's flip function just counts the flips in mPageFlipCount:
void DisplayDevice::flip() const
{
mFlinger->getRenderEngine().checkErrors();
mPageFlipCount++;
}
4. Posting the framebuffer
What state is our frame in now? Everything that needed client composition has been composed, and the resulting FBTarget has been handed to the HWC; the data for device composition was also submitted to the HWC earlier. But nothing has been composed onto the screen yet. postFramebuffer is what tells the HWC to do the final composition.
The postFramebuffer function:
void SurfaceFlinger::postFramebuffer()
{
... ...
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
if (!displayDevice->isDisplayOn()) {
continue;
}
const auto hwcId = displayDevice->getHwcDisplayId();
if (hwcId >= 0) {
getBE().mHwc->presentAndGetReleaseFences(hwcId);
}
displayDevice->onSwapBuffersCompleted();
displayDevice->makeCurrent();
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
auto hwcLayer = layer->getHwcLayer(hwcId);
sp<Fence> releaseFence = getBE().mHwc->getLayerReleaseFence(hwcId, hwcLayer);
if (layer->getCompositionType(hwcId) == HWC2::Composition::Client) {
releaseFence = Fence::merge("LayerRelease", releaseFence,
displayDevice->getClientTargetAcquireFence());
}
layer->onLayerDisplayed(releaseFence);
}
if (!displayDevice->getLayersNeedingFences().isEmpty()) {
sp<Fence> presentFence = getBE().mHwc->getPresentFence(hwcId);
for (auto& layer : displayDevice->getLayersNeedingFences()) {
layer->onLayerDisplayed(presentFence);
}
}
if (hwcId >= 0) {
getBE().mHwc->clearReleaseFences(hwcId);
}
}
mLastSwapBufferTime = systemTime() - now;
mDebugInSwapBuffers = 0;
// |mStateLock| not needed as we are on the main thread
uint32_t flipCount = getDefaultDisplayDeviceLocked()->getPageFlipCount();
if (flipCount % LOG_FRAME_STATS_PERIOD == 0) {
logFrameStats();
}
}
The postFramebuffer flow:
- Present, and fetch the release fences, via presentAndGetReleaseFences
status_t HWComposer::presentAndGetReleaseFences(int32_t displayId) {
... ...
auto& displayData = mDisplayData[displayId];
auto& hwcDisplay = displayData.hwcDisplay;
if (displayData.validateWasSkipped) {
// explicitly flush all pending commands
auto error = mHwcDevice->flushCommands();
... ...
}
auto error = hwcDisplay->present(&displayData.lastPresentFence);
if (error != HWC2::Error::None) {
... ...
}
std::unordered_map<HWC2::Layer*, sp<Fence>> releaseFences;
error = hwcDisplay->getReleaseFences(&releaseFences);
if (error != HWC2::Error::None) {
... ...
}
displayData.releaseFences = std::move(releaseFences);
return NO_ERROR;
}
If presentOrValidate succeeded during the earlier prepare, validateWasSkipped is true, so the commands pending in the command buffer are simply flushed so that they execute, and nothing further is needed. A successful presentOrValidate implies there was no client composition, and hence no FBTarget to deal with.
If presentOrValidate did not succeed, client composition is most likely needed, meaning present has not happened yet, so the present flow runs here.
The present operation is likewise first written into the command buffer, with execute called at the end:
Error Composer::presentDisplay(Display display, int* outPresentFence)
{
mWriter.selectDisplay(display);
mWriter.presentDisplay();
Error error = execute();
if (error != Error::NONE) {
return error;
}
mReader.takePresentFence(display, outPresentFence);
return Error::NONE;
}
Setting the FBTarget only writes into the buffer; it takes effect on the server side, together with everything else, when execute runs, and the server then performs the final composition.
After present completes, getReleaseFences fetches the release fences and stores them in displayData. Note that these are per-layer release fences, a flow that did not exist before Android 8.0; earlier versions had only a single release fence shared by the layers. The lastPresentFence obtained at present time is effectively the FBTarget's release fence.
Back to postFramebuffer.
- DisplayDevice handles the FBTarget
Release the previous frame's buffer:
void DisplayDevice::onSwapBuffersCompleted() const {
mDisplaySurface->onFrameCommitted();
}
Most of it happens in onFrameCommitted:
void FramebufferSurface::onFrameCommitted() {
if (mHasPendingRelease) {
sp<Fence> fence = mHwc.getPresentFence(mDisplayType);
if (fence->isValid()) {
status_t result = addReleaseFence(mPreviousBufferSlot,
mPreviousBuffer, fence);
... ...
}
status_t result = releaseBufferLocked(mPreviousBufferSlot, mPreviousBuffer);
... ...
mPreviousBuffer.clear();
mHasPendingRelease = false;
}
}
makeCurrent prepares for the next composition:
bool DisplayDevice::makeCurrent() const {
bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
setViewportAndProjection();
return success;
}
- Set the releaseFence on each Layer
void BufferLayer::onLayerDisplayed(const sp<Fence>& releaseFence) {
mConsumer->setReleaseFence(releaseFence);
}
If a Layer still needs a fence, it is given the presentFence, i.e. the FBTarget's fence. Finally, the releaseFences stored in HWComposer's mDisplayData are cleared, since they have been handed over to the Layers.
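A hypothetical sketch of this per-layer fence handout (types and names are illustrative, not the AOSP API): each layer looks up its own release fence, falling back to the present fence when the HWC did not report one for it:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative: a real Fence wraps a sync fd, here it is just an int.
using Fence = int;

// Per-layer fences come from getReleaseFences(); layers without an entry
// (e.g. client-composited ones) get the FBTarget's present fence instead.
Fence fenceForLayer(const std::map<std::string, Fence>& releaseFences,
                    const std::string& layer, Fence presentFence) {
    auto it = releaseFences.find(layer);
    return it != releaseFences.end() ? it->second : presentFence;
}
```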
This concludes the composition handling.
Post-composition handling: postComposition
void SurfaceFlinger::postComposition(nsecs_t refreshStartTime)
{
// Release any buffers which were replaced this frame
nsecs_t dequeueReadyTime = systemTime();
for (auto& layer : mLayersWithQueuedFrames) {
layer->releasePendingBuffer(dequeueReadyTime);
}
// Handle the timeline
... ...
// Record buffer statistics
mDrawingState.traverseInZOrder([&](Layer* layer) {
bool frameLatched = layer->onPostComposition(glCompositionDoneFenceTime,
presentFenceTime, compositorTiming);
if (frameLatched) {
recordBufferingStats(layer->getName().string(),
layer->getOccupancyHistory(false));
}
});
// Vsync synchronization
if (presentFenceTime->isValid()) {
if (mPrimaryDispSync.addPresentFence(presentFenceTime)) {
enableHardwareVsync();
} else {
disableHardwareVsync(false);
}
}
if (!hasSyncFramework) {
if (hw->isDisplayOn()) {
enableHardwareVsync();
}
}
// Animation composition handling
if (mAnimCompositionPending) {
mAnimCompositionPending = false;
if (presentFenceTime->isValid()) {
mAnimFrameTracker.setActualPresentFence(
std::move(presentFenceTime));
} else {
// The HWC doesn't support present fences, so use the refresh
// timestamp instead.
nsecs_t presentTime =
getBE().mHwc->getRefreshTimestamp(HWC_DISPLAY_PRIMARY);
mAnimFrameTracker.setActualPresentTime(presentTime);
}
mAnimFrameTracker.advanceFrame();
}
// Timing records
}
postComposition mainly does the following:
- Release pending buffers
After this frame's composition completes, the buffers that have been replaced are released:
void BufferLayer::releasePendingBuffer(nsecs_t dequeueReadyTime) {
if (!mConsumer->releasePendingBuffer()) {
return;
}
auto releaseFenceTime =
std::make_shared<FenceTime>(mConsumer->getPrevFinalReleaseFence());
mReleaseTimeline.updateSignalTimes();
mReleaseTimeline.push(releaseFenceTime);
Mutex::Autolock lock(mFrameEventHistoryMutex);
if (mPreviousFrameNumber != 0) {
mFrameEventHistory.addRelease(mPreviousFrameNumber, dequeueReadyTime,
std::move(releaseFenceTime));
}
}
- Handle the timeline
- Record buffer statistics
void SurfaceFlinger::recordBufferingStats(const char* layerName,
std::vector<OccupancyTracker::Segment>&& history) {
Mutex::Autolock lock(getBE().mBufferingStatsMutex);
auto& stats = getBE().mBufferingStats[layerName];
for (const auto& segment : history) {
if (!segment.usedThirdBuffer) {
stats.twoBufferTime += segment.totalTime;
}
if (segment.occupancyAverage < 1.0f) {
stats.doubleBufferedTime += segment.totalTime;
} else if (segment.occupancyAverage < 2.0f) {
stats.tripleBufferedTime += segment.totalTime;
}
++stats.numSegments;
stats.totalTime += segment.totalTime;
}
}
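The bucketing above can be exercised standalone with a simplified copy of the logic (the structs below are illustrative stand-ins for the AOSP types):

```cpp
#include <cassert>

// Simplified stand-ins for OccupancyTracker::Segment and the stats entry.
struct Segment { float occupancyAverage; long totalTime; bool usedThirdBuffer; };
struct Stats {
    long twoBufferTime = 0, doubleBufferedTime = 0,
         tripleBufferedTime = 0, totalTime = 0;
    int numSegments = 0;
};

// Mirrors recordBufferingStats: occupancy < 1 counts as double-buffered
// time, < 2 as triple-buffered time.
Stats aggregate(const Segment* segs, int n) {
    Stats s;
    for (int i = 0; i < n; i++) {
        if (!segs[i].usedThirdBuffer) s.twoBufferTime += segs[i].totalTime;
        if (segs[i].occupancyAverage < 1.0f)
            s.doubleBufferedTime += segs[i].totalTime;
        else if (segs[i].occupancyAverage < 2.0f)
            s.tripleBufferedTime += segs[i].totalTime;
        ++s.numSegments;
        s.totalTime += segs[i].totalTime;
    }
    return s;
}
```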
- Vsync synchronization
The Vsync events we normally consume are distributed by mPrimaryDispSync rather than delivered by the underlying hardware every time, so mPrimaryDispSync must stay synchronized with the hardware Vsync.
- Animation composition handling
- Record timing information
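As a rough illustration of the synchronization idea (not the real DispSync algorithm), a model can compare each present timestamp against its predicted phase and request hardware Vsync only while the error is too large:

```cpp
#include <cassert>
#include <cstdlib>

// Toy model of the addPresentFence() decision: return true (resync via
// hardware Vsync) when the present time drifts past the error threshold.
// All fields and numbers are illustrative.
struct DispSyncModel {
    long period;          // nanoseconds per frame
    long phase;           // model phase offset
    long errorThreshold;  // tolerated drift in nanoseconds

    bool addPresentTime(long t) {
        long offset = (t - phase) % period;
        if (offset > period / 2) offset -= period;  // wrap to nearest edge
        return std::labs(offset) > errorThreshold;
    }
};
```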
At this point the composition pass is complete and the REFRESH message has been fully handled. When the next Vsync arrives, a new composition cycle begins.
Client composition
Hardware HWC composition is implemented by each vendor and differs from vendor to vendor. Client composition, however, ships with Android itself; let's look at Android's Client-side composition next.
Client composition essentially performs the compositing on the GPU. SurfaceFlinger wraps the concrete implementation in RenderEngine; the code lives in:
frameworks/native/services/surfaceflinger/RenderEngine
Let's look at the related classes first:
- RenderEngine wraps the GPU rendering, holding the EGLDisplay, EGLContext, EGLConfig and EGLSurface. Note that the Displays do not share one EGLSurface; each Display has its own.
- GLES20RenderEngine extends RenderEngine and is the GLES 2.0 implementation. Its Programs are cached through ProgramCache, and rendering state is described by Description.
- Each BufferLayer has its own Texture describing its texture. GLES20RenderEngine supports texture mapping: during composition, each GraphicBuffer is turned into a texture and blended.
Now for the concrete flow. Client-side GPU composition proceeds as follows:
1. Create the RenderEngine
RenderEngine is created when SurfaceFlinger initializes.
void SurfaceFlinger::init() {
... ...
getBE().mRenderEngine = RenderEngine::create(HAL_PIXEL_FORMAT_RGBA_8888,
hasWideColorDisplay ? RenderEngine::WIDE_COLOR_SUPPORT : 0);
The create function looks like this:
std::unique_ptr<RenderEngine> RenderEngine::create(int hwcFormat, uint32_t featureFlags) {
// Initialize the EGLDisplay
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
if (!eglInitialize(display, NULL, NULL)) {
LOG_ALWAYS_FATAL("failed to initialize EGL");
}
// GLExtensions handling
EGLint renderableType = 0;
if (config == EGL_NO_CONFIG) {
renderableType = EGL_OPENGL_ES2_BIT;
} else if (!eglGetConfigAttrib(display, config, EGL_RENDERABLE_TYPE, &renderableType)) {
LOG_ALWAYS_FATAL("can't query EGLConfig RENDERABLE_TYPE");
}
EGLint contextClientVersion = 0;
if (renderableType & EGL_OPENGL_ES2_BIT) {
contextClientVersion = 2;
} else if (renderableType & EGL_OPENGL_ES_BIT) {
contextClientVersion = 1;
} else {
LOG_ALWAYS_FATAL("no supported EGL_RENDERABLE_TYPEs");
}
// Initialize the context attributes
std::vector<EGLint> contextAttributes;
... ...
// Create the EGLContext
EGLContext ctxt = eglCreateContext(display, config, NULL, contextAttributes.data());
... ...
// Create a pbuffer surface
EGLint attribs[] = {EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE, EGL_NONE};
EGLSurface dummy = eglCreatePbufferSurface(display, dummyConfig, attribs);
// Make the dummy pbuffer current
EGLBoolean success = eglMakeCurrent(display, dummy, dummy, ctxt);
LOG_ALWAYS_FATAL_IF(!success, "can't make dummy pbuffer current");
... ...
std::unique_ptr<RenderEngine> engine;
switch (version) {
... ...
case GLES_VERSION_3_0:
engine = std::make_unique<GLES20RenderEngine>(featureFlags);
break;
}
// Store the EGL handles
engine->setEGLHandles(display, config, ctxt);
... ...
eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
eglDestroySurface(display, dummy);
return engine;
}
The RenderEngine initialization is just standard GPU-rendering setup; anyone who has done OpenGL programming will find it familiar. Roughly, the flow is:
- Create the EGLDisplay: eglGetDisplay
- Initialize the EGLDisplay: eglInitialize
- Choose the EGLConfig: chooseEglConfig
- Query the renderableType: eglGetConfigAttrib
- Initialize the context attributes: contextAttributes
- Create the EGLContext: eglCreateContext
- Create a pbuffer surface: eglCreatePbufferSurface
- MakeCurrent: eglMakeCurrent, which makes the dummy pbuffer current so the EGL state can be checked
- Create the RenderEngine: the implementation used is GLES20RenderEngine
- Store the EGL handles: the EGL objects created above are saved into the GLES20RenderEngine.
void RenderEngine::setEGLHandles(EGLDisplay display, EGLConfig config, EGLContext ctxt) {
mEGLDisplay = display;
mEGLConfig = config;
mEGLContext = ctxt;
}
2. Create the Surface (FBTarget)
When the RenderEngine was created, the EGLDisplay, EGLConfig and EGLContext were initialized. These are shared by all Displays, but each Display has a Surface of its own.
The corresponding Surface is created when the DisplayDevice is constructed:
DisplayDevice::DisplayDevice(
... ...
mSurface{flinger->getRenderEngine()},
... ...
{
// clang-format on
Surface* surface;
mNativeWindow = surface = new Surface(producer, false);
ANativeWindow* const window = mNativeWindow.get();
... ...
mSurface.setCritical(mType == DisplayDevice::DISPLAY_PRIMARY);
mSurface.setAsync(mType >= DisplayDevice::DISPLAY_VIRTUAL);
mSurface.setNativeWindow(window);
mDisplayWidth = mSurface.queryWidth();
mDisplayHeight = mSurface.queryHeight();
... ...
if (useTripleFramebuffer) {
surface->allocateBuffers();
}
}
Note mSurface.setNativeWindow: through the ANativeWindow, the Surface is linked to the DisplayDevice's BufferQueue.
void Surface::setNativeWindow(ANativeWindow* window) {
if (mEGLSurface != EGL_NO_SURFACE) {
eglDestroySurface(mEGLDisplay, mEGLSurface);
mEGLSurface = EGL_NO_SURFACE;
}
mWindow = window;
if (mWindow) {
mEGLSurface = eglCreateWindowSurface(mEGLDisplay, mEGLConfig, mWindow, nullptr);
}
}
The created EGLSurface mEGLSurface is associated with the native window mWindow. The GPU can then dequeue buffers from the BufferQueue through the native window for rendering, and on swapBuffers they are queued back into the BufferQueue. This ANativeWindow is essentially the FBTarget.
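The buffer cycle described here can be sketched with a toy BufferQueue model (illustrative, not the real BufferQueue API):

```cpp
#include <cassert>
#include <deque>

// Toy model of the FBTarget path: GPU rendering dequeues a buffer via the
// native window, swapBuffers queues it back, and the display side acquires
// it as the FBTarget.
struct BufferQueueModel {
    std::deque<int> freeSlots{0, 1, 2};
    std::deque<int> queued;

    int dequeue() { int s = freeSlots.front(); freeSlots.pop_front(); return s; }
    void queue(int s) { queued.push_back(s); }
    int acquire() { int s = queued.front(); queued.pop_front(); return s; }
};
```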
3. Create the Texture
The Texture is created when the BufferLayer is constructed:
BufferLayer::BufferLayer(SurfaceFlinger* flinger, const sp<Client>& client, const String8& name,
uint32_t w, uint32_t h, uint32_t flags)
: Layer(flinger, client, name, w, h, flags),
... ...
mFlinger->getRenderEngine().genTextures(1, &mTextureName);
mTexture.init(Texture::TEXTURE_EXTERNAL, mTextureName);
}
通過glGenTextures函數創建Texture。
void RenderEngine::genTextures(size_t count, uint32_t* names) {
glGenTextures(count, names);
}
The texture name is also passed into the BufferLayerConsumer when it is created, stored there as mTexName. Within BufferLayer, the texture generated by glGenTextures is kept in mTexture.
4. Start compositing: doComposeSurfaces
Composition happens in SurfaceFlinger's doComposeSurfaces, which starts with makeCurrent. Every Display has its own Surface, so before compositing for a Display, the RenderEngine must be given that Surface, the viewport and the projection matrix, telling it where the composition goes:
bool DisplayDevice::makeCurrent() const {
bool success = mFlinger->getRenderEngine().setCurrentSurface(mSurface);
setViewportAndProjection();
return success;
}
setCurrentSurface is implemented as follows:
bool RenderEngine::setCurrentSurface(const RE::Surface& surface) {
bool success = true;
EGLSurface eglSurface = surface.getEGLSurface();
if (eglSurface != eglGetCurrentSurface(EGL_DRAW)) {
success = eglMakeCurrent(mEGLDisplay, eglSurface, eglSurface, mEGLContext) == EGL_TRUE;
if (success && surface.getAsync()) {
eglSwapInterval(mEGLDisplay, 0);
}
}
return success;
}
An OpenGL context can only be current on one thread at a time, so eglMakeCurrent is used to switch it; after eglMakeCurrent, the GPU processes the OpenGL drawing operations issued from the current thread.
5. Compositing the Layers
During composition, every Layer of a Display is composited onto that Display's Surface. The work is done mainly in Layer's draw method:
void Layer::draw(const RenderArea& renderArea, const Region& clip) const {
onDraw(renderArea, clip, false);
}
BufferLayer and ColorLayer each implement their own onDraw. We start with BufferLayer, the more complex of the two. Its onDraw flow is as follows:
- Bind the texture
void BufferLayer::onDraw(const RenderArea& renderArea, const Region& clip,
bool useIdentityTransform) const {
ATRACE_CALL();
if (CC_UNLIKELY(getBE().compositionInfo.mBuffer == 0)) {
... ...
return;
}
// Bind the texture
status_t err = mConsumer->bindTextureImage();
... ...
status_t BufferLayerConsumer::bindTextureImage() {
Mutex::Autolock lock(mMutex);
return bindTextureImageLocked();
}
The binding is mostly done in bindTextureImageLocked:
status_t BufferLayerConsumer::bindTextureImageLocked() {
mRE.checkErrors();
if (mCurrentTexture == BufferQueue::INVALID_BUFFER_SLOT && mCurrentTextureImage == NULL) {
... ...
return NO_INIT;
}
const Rect& imageCrop = canUseImageCrop(mCurrentCrop) ? mCurrentCrop : Rect::EMPTY_RECT;
status_t err = mCurrentTextureImage->createIfNeeded(imageCrop);
if (err != NO_ERROR) {
... ...
return UNKNOWN_ERROR;
}
mRE.bindExternalTextureImage(mTexName, mCurrentTextureImage->image());
return doFenceWaitLocked();
}
mCurrentTextureImage was updated by acquireBuffer when composition started. createIfNeeded creates the backing image if needed:
status_t BufferLayerConsumer::Image::createIfNeeded(const Rect& imageCrop) {
const int32_t cropWidth = imageCrop.width();
const int32_t cropHeight = imageCrop.height();
if (mCreated && mCropWidth == cropWidth && mCropHeight == cropHeight) {
return OK;
}
mCreated = mImage.setNativeWindowBuffer(mGraphicBuffer->getNativeBuffer(),
mGraphicBuffer->getUsage() & GRALLOC_USAGE_PROTECTED,
cropWidth, cropHeight);
if (mCreated) {
... ...
}
return mCreated ? OK : UNKNOWN_ERROR;
}
The image itself is created in setNativeWindowBuffer:
bool Image::setNativeWindowBuffer(ANativeWindowBuffer* buffer, bool isProtected, int32_t cropWidth,
int32_t cropHeight) {
if (mEGLImage != EGL_NO_IMAGE_KHR) {
... // release the previous mEGLImage
}
if (buffer) {
std::vector<EGLint> attrs = buildAttributeList(isProtected, cropWidth, cropHeight);
mEGLImage = eglCreateImageKHR(mEGLDisplay, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID,
static_cast<EGLClientBuffer>(buffer), attrs.data());
if (mEGLImage == EGL_NO_IMAGE_KHR) {
ALOGE("failed to create EGLImage: %#x", eglGetError());
return false;
}
}
return true;
}
setNativeWindowBuffer first releases the old mEGLImage, then creates a new one. Note the arguments of eglCreateImageKHR: buffer is the GraphicBuffer obtained earlier from acquireBuffer, and eglCreateImageKHR wraps that GraphicBuffer in an EGLImage.
Back in bindTextureImageLocked, the EGLImage just created is bound via bindExternalTextureImage:
void RenderEngine::bindExternalTextureImage(uint32_t texName, const RE::Image& image) {
const GLenum target = GL_TEXTURE_EXTERNAL_OES;
glBindTexture(target, texName);
if (image.getEGLImage() != EGL_NO_IMAGE_KHR) {
glEGLImageTargetTexture2DOES(target, static_cast<GLeglImageOES>(image.getEGLImage()));
}
}
In the end, glEGLImageTargetTexture2DOES binds the EGLImage to the texture mTexName. That is how the Layer's data reaches the GPU for processing.
Back to onDraw:
- DRM handling
Protected content, or secure content targeted at a non-secure Display, must not be shown. In that case the affected region is rendered black:
void GLES20RenderEngine::setupLayerBlackedOut() {
glBindTexture(GL_TEXTURE_2D, mProtectedTexName);
Texture texture(Texture::TEXTURE_2D, mProtectedTexName);
texture.setDimensions(1, 1); // FIXME: we should get that from somewhere
mState.setTexture(texture);
}
- Get the textureMatrix
void BufferLayer::onDraw(const RenderArea& renderArea, const Region& clip,
bool useIdentityTransform) const {
... ...
bool blackOutLayer = isProtected() || (isSecure() && !renderArea.isSecure());
RenderEngine& engine(mFlinger->getRenderEngine());
if (!blackOutLayer) {
const bool useFiltering = getFiltering() || needsFiltering(renderArea) || isFixedSize();
// Query the texture matrix given our current filtering mode.
float textureMatrix[16];
mConsumer->setFilteringEnabled(useFiltering);
mConsumer->getTransformMatrix(textureMatrix);
if (getTransformToDisplayInverse()) {
// handle the inverse transform
}
// Set things up for texturing.
mTexture.setDimensions(getBE().compositionInfo.mBuffer->getWidth(),
getBE().compositionInfo.mBuffer->getHeight());
mTexture.setFiltering(useFiltering);
mTexture.setMatrix(textureMatrix);
engine.setupLayerTexturing(mTexture);
} else {
engine.setupLayerBlackedOut();
}
drawWithOpenGL(renderArea, useIdentityTransform);
engine.disableTexturing();
}
The textureMatrix is computed in GLConsumer::computeTransformMatrix; interested readers can look it up.
- Draw with OpenGL
This is done mainly by drawWithOpenGL:
void BufferLayer::drawWithOpenGL(const RenderArea& renderArea, bool useIdentityTransform) const {
const State& s(getDrawingState());
// Compute the region bounds and fill the Mesh
computeGeometry(renderArea, getBE().mMesh, useIdentityTransform);
const Rect bounds{computeBounds()}; // Rounds from FloatRect
Transform t = getTransform();
Rect win = bounds;
if (!s.finalCrop.isEmpty()) {
... ... // handle finalCrop
}
float left = float(win.left) / float(s.active.w);
float top = float(win.top) / float(s.active.h);
float right = float(win.right) / float(s.active.w);
float bottom = float(win.bottom) / float(s.active.h);
// Compute the texture coordinate vertices
Mesh::VertexArray<vec2> texCoords(getBE().mMesh.getTexCoordArray<vec2>());
texCoords[0] = vec2(left, 1.0f - top);
texCoords[1] = vec2(left, 1.0f - bottom);
texCoords[2] = vec2(right, 1.0f - bottom);
texCoords[3] = vec2(right, 1.0f - top);
RenderEngine& engine(mFlinger->getRenderEngine());
engine.setupLayerBlending(mPremultipliedAlpha, isOpaque(s), false /* disableTexture */,
getColor());
engine.setSourceDataSpace(mCurrentState.dataSpace);
engine.drawMesh(getBE().mMesh);
engine.disableBlending();
}
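The texture-coordinate computation above can be checked with a standalone helper. Note the V axis is flipped (1.0f - v) because GL texture coordinates have their origin at the bottom-left, while window coordinates start at the top-left (the helper below is illustrative):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// win*: crop rectangle in pixels; w/h: active buffer size.
// Mirrors the texCoords[] setup in drawWithOpenGL.
void computeTexCoords(int winLeft, int winTop, int winRight, int winBottom,
                      float w, float h, Vec2 out[4]) {
    float left = winLeft / w, top = winTop / h;
    float right = winRight / w, bottom = winBottom / h;
    out[0] = {left, 1.0f - top};      // top-left
    out[1] = {left, 1.0f - bottom};   // bottom-left
    out[2] = {right, 1.0f - bottom};  // bottom-right
    out[3] = {right, 1.0f - top};     // top-right
}
```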
setupLayerBlending configures the alpha blending:
void GLES20RenderEngine::setupLayerBlending(bool premultipliedAlpha, bool opaque,
bool disableTexture, const half4& color) {
mState.setPremultipliedAlpha(premultipliedAlpha);
mState.setOpaque(opaque);
mState.setColor(color);
if (disableTexture) {
mState.disableTexture();
}
if (color.a < 1.0f || !opaque) {
glEnable(GL_BLEND);
glBlendFunc(premultipliedAlpha ? GL_ONE : GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
} else {
glDisable(GL_BLEND);
}
}
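The two blend configurations can be verified per channel with a scalar sketch: with premultiplied alpha the source factor is GL_ONE, because the color is already scaled by its alpha; otherwise it is GL_SRC_ALPHA:

```cpp
#include <cassert>

// One color channel of glBlendFunc(srcFactor, GL_ONE_MINUS_SRC_ALPHA):
// srcFactor is GL_ONE for premultiplied alpha, GL_SRC_ALPHA otherwise.
float blendChannel(float src, float srcAlpha, float dst, bool premultiplied) {
    float srcFactor = premultiplied ? 1.0f : srcAlpha;
    return src * srcFactor + dst * (1.0f - srcAlpha);
}
```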
drawMesh draws the content:
void GLES20RenderEngine::drawMesh(const Mesh& mesh) {
if (mesh.getTexCoordsSize()) {
glEnableVertexAttribArray(Program::texCoords);
glVertexAttribPointer(Program::texCoords, mesh.getTexCoordsSize(), GL_FLOAT, GL_FALSE,
mesh.getByteStride(), mesh.getTexCoords());
}
glVertexAttribPointer(Program::position, mesh.getVertexSize(), GL_FLOAT, GL_FALSE,
mesh.getByteStride(), mesh.getPositions());
if (usesWideColor()) {
Description wideColorState = mState;
if (mDataSpace != HAL_DATASPACE_DISPLAY_P3) {
... ...
}
ProgramCache::getInstance().useProgram(wideColorState);
glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
if (outputDebugPPMs) {
... ...
}
} else {
ProgramCache::getInstance().useProgram(mState);
glDrawArrays(mesh.getPrimitive(), 0, mesh.getVertexCount());
}
if (mesh.getTexCoordsSize()) {
glDisableVertexAttribArray(Program::texCoords);
}
}
glDrawArrays does the drawing. After all Layers have been drawn, swapBuffers is called.
6. Swap buffers
The Surface swaps buffers:
void Surface::swapBuffers() const {
if (!eglSwapBuffers(mEGLDisplay, mEGLSurface)) {
... ...
}
}
eglSwapBuffers swaps the GPU's buffers: the finished buffer, now holding the composited Layer data, is queued back into the BufferQueue.
As described earlier, advanceFrame then calls acquireBuffer and hands the Client composition result to the HWC via setClientTarget, so the lower layers can display it.
That wraps up the Client-side composition.