Getting Started with URP/LWRP

To understand what URP is, you first need to understand what an SRP is; and to understand SRP, you first need to understand what a render pipeline is. So let's start with what a RenderPipeline actually does.

RenderPipeline

A game scene contains many kinds of things to draw: opaque objects (Opaque Objects), transparent ones (Transparencies), and sometimes effects that require a screen depth texture, post-processing, and so on. When each of these gets drawn, and which stage draws what, is decided by the render pipeline.

(Figure: diagram of Unity's default render pipeline)

Unity's default render pipeline is shown in the figure above. Let's briefly walk through the Forward Rendering path.

In forward rendering, when a camera starts drawing a frame, if you have set the following on that camera in code:

cam.depthTextureMode = DepthTextureMode.Depth;

then that camera's pipeline will execute the Depth Texture step: it runs the Pass tagged Tags{"LightMode" = "ShadowCaster"} of every shader in the scene whose RenderType is Opaque, and writes the results into the depth texture. So if a camera has the depth texture enabled, its draw calls go up (every opaque draw call is issued twice). If the depth texture is not enabled, the pipeline simply skips this step.

So shader tags such as Tags { "LightMode" "RenderType" "Queue" } are closely tied to the render pipeline. When Unity draws, it uses these tags to slot each shader Pass into the corresponding step of the diagram above. That is where draw orders like "opaques first, then transparents" come from.

In short, a render pipeline defines a drawing order, and each frame the camera draws the objects in the scene following that defined order.

Scriptable Render Pipeline (SRP)

Simply put, with SRP the pipeline that used to be fixed can now be organized by you. If you like, you can draw opaque objects after transparent ones, or insert additional rendering steps anywhere in the sequence above to achieve the effect you need.
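
To make that concrete, here is a minimal custom SRP sketch (the MinimalPipeline names are hypothetical and error handling is omitted); the Render method itself decides what is culled and drawn, and in what order:

using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical minimal SRP: the asset is what you assign in Graphics Settings,
// and it creates the pipeline instance that renders every frame.
[CreateAssetMenu(menuName = "Rendering/MinimalPipelineAsset")]
public class MinimalPipelineAsset : RenderPipelineAsset
{
    protected override RenderPipeline CreatePipeline() => new MinimalPipeline();
}

public class MinimalPipeline : RenderPipeline
{
    static readonly ShaderTagId k_Unlit = new ShaderTagId("SRPDefaultUnlit");

    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        foreach (var camera in cameras)
        {
            if (!camera.TryGetCullingParameters(out var cullingParams))
                continue;
            var cullResults = context.Cull(ref cullingParams);

            context.SetupCameraProperties(camera);

            var cmd = new CommandBuffer();
            cmd.ClearRenderTarget(true, true, camera.backgroundColor);
            context.ExecuteCommandBuffer(cmd);
            cmd.Release();

            // The order below is entirely up to us; we could just as well
            // draw transparents before opaques.
            var sorting = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
            var drawSettings = new DrawingSettings(k_Unlit, sorting);
            var filterSettings = new FilteringSettings(RenderQueueRange.opaque);
            context.DrawRenderers(cullResults, ref drawSettings, ref filterSettings);

            if (camera.clearFlags == CameraClearFlags.Skybox)
                context.DrawSkybox(camera);

            context.Submit();
        }
    }
}

Create the asset from that menu, assign it under Project Settings > Graphics, and Unity hands every frame's cameras to Render.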

Beyond the freedom to define your own pipeline, SRP also brings a number of newer optimizations, such as the SRP Batcher.
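
As a quick hedged example (the component name is made up), the SRP Batcher can also be toggled from code; this is the same flag that UniversalRenderPipeline.Render applies from the asset's useSRPBatcher setting, as we will see below:

using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical helper: equivalent to ticking "SRP Batcher" on the URP asset.
public class EnableSrpBatcher : MonoBehaviour
{
    void OnEnable()
    {
        GraphicsSettings.useScriptableRenderPipelineBatching = true;
    }
}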

URP/LWRP

URP is simply an SRP that Unity has defined for us, one that suits most devices. Its implementation details can all be read in the code.
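
For reference, what makes a project "use" URP is simply that a URP pipeline asset is assigned in Graphics Settings. Here is a small sketch of doing the same from code (the component name is hypothetical; normally you just set it in the editor):

using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical component: assign a UniversalRenderPipelineAsset in the Inspector.
public class UsePipelineAsset : MonoBehaviour
{
    [SerializeField] RenderPipelineAsset urpAsset;

    void OnEnable()
    {
        GraphicsSettings.renderPipelineAsset = urpAsset;
    }
}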

File structure

After you add URP, you can see the package's file structure in the Project window (under Universal RP).

The entry point of URP is the Universal RP/Runtime/UniversalRenderPipeline.Render function:

protected override void Render(ScriptableRenderContext renderContext, Camera[] cameras)
{
    BeginFrameRendering(renderContext, cameras);

    GraphicsSettings.lightsUseLinearIntensity = (QualitySettings.activeColorSpace == ColorSpace.Linear);
    GraphicsSettings.useScriptableRenderPipelineBatching = asset.useSRPBatcher;
    SetupPerFrameShaderConstants();

    SortCameras(cameras); // sort the cameras by their depth value
    foreach (Camera camera in cameras)
    {
        BeginCameraRendering(renderContext, camera);
#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
        //It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
        VFX.VFXManager.PrepareCamera(camera);
#endif
        RenderSingleCamera(renderContext, camera); // render each camera in turn

        EndCameraRendering(renderContext, camera);
    }

    EndFrameRendering(renderContext, cameras);
}

The key steps are sorting the cameras by depth, then calling RenderSingleCamera to draw each camera one by one.
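
SortCameras itself is tiny; in the URP source of this era it is essentially a one-line comparison on Camera.depth. Sketched here from memory, so treat the exact body as an approximation:

using System;
using UnityEngine;

static class CameraSorting
{
    // Approximation of URP's SortCameras: lower Camera.depth renders first.
    public static void SortCameras(Camera[] cameras)
    {
        Array.Sort(cameras, (lhs, rhs) => (int)(lhs.depth - rhs.depth));
    }
}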

RenderSingleCamera

public static void RenderSingleCamera(ScriptableRenderContext context, Camera camera)
{
    // get the camera's culling parameters
    if (!camera.TryGetCullingParameters(IsStereoEnabled(camera), out var cullingParameters))
        return;

    var settings = asset;
    UniversalAdditionalCameraData additionalCameraData = null;
    if (camera.cameraType == CameraType.Game || camera.cameraType == CameraType.VR)
        camera.gameObject.TryGetComponent(out additionalCameraData);
    // initialize the camera data from the pipeline asset and UniversalAdditionalCameraData (the per-camera settings)
    InitializeCameraData(settings, camera, additionalCameraData, out var cameraData);
    // push per-camera values into the shared shader constants, i.e. the built-in variables shaders read
    SetupPerCameraShaderConstants(cameraData);
    // get the ScriptableRenderer this camera uses, i.e. the renderer of the custom SRP.
    // if the camera has its own renderer configured it is used; otherwise the one on the asset is.
    // URP defaults to ForwardRenderer, so this is where the redefined pipeline takes effect.
    ScriptableRenderer renderer = (additionalCameraData != null) ? additionalCameraData.scriptableRenderer : settings.scriptableRenderer;
    if (renderer == null)
    {
        Debug.LogWarning(string.Format("Trying to render {0} with an invalid renderer. Camera rendering will be skipped.", camera.name));
        return;
    }
    // set the label shown in the Frame Debugger
    string tag = (asset.debugLevel >= PipelineDebugLevel.Profiling) ? camera.name: k_RenderCameraTag;
    CommandBuffer cmd = CommandBufferPool.Get(tag);
    using (new ProfilingSample(cmd, tag))
    {
        renderer.Clear();
        // adjust cullingParameters based on cameraData
        renderer.SetupCullingParameters(ref cullingParameters, ref cameraData);

        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

#if UNITY_EDITOR

        // Emit scene view UI
        if (cameraData.isSceneViewCamera)
            ScriptableRenderContext.EmitWorldGeometryForSceneView(camera);
#endif
        // compute culling results from the culling parameters; every render step in the pipeline filters what it draws out of these results
        var cullResults = context.Cull(ref cullingParameters);
        // gather everything collected above into renderingData
        InitializeRenderingData(settings, ref cameraData, ref cullResults, out var renderingData);
        // set up all the steps of the pipeline
        renderer.Setup(context, ref renderingData);
        // execute the pipeline steps one by one
        renderer.Execute(context, ref renderingData);
    }

    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
    context.Submit();
}

A few calls inside this function are worth explaining.

SetupPerCameraShaderConstants sets several of the built-in variables that shaders commonly use:

PerCameraBuffer._InvCameraViewProj = Shader.PropertyToID("_InvCameraViewProj");
PerCameraBuffer._ScreenParams = Shader.PropertyToID("_ScreenParams");
PerCameraBuffer._ScaledScreenParams = Shader.PropertyToID("_ScaledScreenParams");
PerCameraBuffer._WorldSpaceCameraPos = Shader.PropertyToID("_WorldSpaceCameraPos");

Previously Unity did all of this behind the scenes; now it is all visible. Every built-in variable a shader uses can be traced back to its source, including which data it is set from.
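
As a hedged illustration of what that looks like (simplified; the real SetupPerCameraShaderConstants also handles XR and render scale), these property IDs are fed with per-camera data roughly like this:

using UnityEngine;

// Simplified sketch of SetupPerCameraShaderConstants; only the idea, not the exact URP code.
static class PerCameraConstantsSketch
{
    static readonly int _WorldSpaceCameraPos = Shader.PropertyToID("_WorldSpaceCameraPos");
    static readonly int _ScreenParams        = Shader.PropertyToID("_ScreenParams");
    static readonly int _InvCameraViewProj   = Shader.PropertyToID("_InvCameraViewProj");

    public static void Setup(Camera camera)
    {
        Shader.SetGlobalVector(_WorldSpaceCameraPos, camera.transform.position);

        float w = camera.pixelWidth, h = camera.pixelHeight;
        Shader.SetGlobalVector(_ScreenParams, new Vector4(w, h, 1f + 1f / w, 1f + 1f / h));

        // Inverse view-projection, e.g. for reconstructing world position from depth.
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(camera.projectionMatrix, false);
        Shader.SetGlobalMatrix(_InvCameraViewProj, Matrix4x4.Inverse(proj * camera.worldToCameraMatrix));
    }
}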

string tag = (asset.debugLevel >= PipelineDebugLevel.Profiling) ? camera.name: k_RenderCameraTag;

This tag determines the top-level name shown in the Frame Debugger. If you set the asset's Debug Level to Profiling,

then in the Frame Debugger you can see each camera's rendering steps grouped under its own name.

The two most important steps in this function are:

        // set up all the steps of the pipeline
        renderer.Setup(context, ref renderingData);
        // execute the pipeline steps one by one
        renderer.Execute(context, ref renderingData);

From here we enter ScriptableRenderer, i.e. ForwardRenderer, the renderer URP defines.

ForwardRenderer defines a whole set of ScriptableRenderPass objects. A ScriptableRenderPass is effectively one step of the render pipeline: m_DepthPrepass corresponds to the Depth Texture step in the diagram at the beginning, and m_RenderOpaqueForwardPass corresponds to the Opaque Object step of the default pipeline.

The Setup function is what organizes these passes, i.e. it decides the order in which the pipeline is defined. An excerpt follows.

public override void Setup(ScriptableRenderContext context, ref RenderingData renderingData)
{
    ...
    // these rendererFeatures are configured on the Renderer Data asset; if we want to add
    // our own steps to the pipeline, we write a render pass and add it through a feature.
    // The forward renderer picks up our passes here.
    for (int i = 0; i < rendererFeatures.Count; ++i)
    {
        rendererFeatures[i].AddRenderPasses(this, ref renderingData);
    }
    ...
    // from here on, the individual steps are enqueued into the pipeline; each shows up as a step in the Frame Debugger
    if (mainLightShadows)
        EnqueuePass(m_MainLightShadowCasterPass);

    if (additionalLightShadows)
        EnqueuePass(m_AdditionalLightsShadowCasterPass);

    if (requiresDepthPrepass)
    {
        m_DepthPrepass.Setup(cameraTargetDescriptor, m_DepthTexture);
        EnqueuePass(m_DepthPrepass);
    }

    if (resolveShadowsInScreenSpace)
    {
        m_ScreenSpaceShadowResolvePass.Setup(cameraTargetDescriptor);
        EnqueuePass(m_ScreenSpaceShadowResolvePass);
    }

    if (postProcessEnabled)
    {
        m_ColorGradingLutPass.Setup(m_ColorGradingLut);
        EnqueuePass(m_ColorGradingLutPass);
    }

    EnqueuePass(m_RenderOpaqueForwardPass);

    if (camera.clearFlags == CameraClearFlags.Skybox && RenderSettings.skybox != null)
        EnqueuePass(m_DrawSkyboxPass);

    // If a depth texture was created we necessarily need to copy it, otherwise we could have render it to a renderbuffer
    if (createDepthTexture)
    {
        m_CopyDepthPass.Setup(m_ActiveCameraDepthAttachment, m_DepthTexture);
        EnqueuePass(m_CopyDepthPass);
    }

    if (renderingData.cameraData.requiresOpaqueTexture)
    {
        // TODO: Downsampling method should be store in the renderer isntead of in the asset.
        // We need to migrate this data to renderer. For now, we query the method in the active asset.
        Downsampling downsamplingMethod = UniversalRenderPipeline.asset.opaqueDownsampling;
        m_CopyColorPass.Setup(m_ActiveCameraColorAttachment.Identifier(), m_OpaqueColor, downsamplingMethod);
        EnqueuePass(m_CopyColorPass);
    }

    EnqueuePass(m_RenderTransparentForwardPass);
    EnqueuePass(m_OnRenderObjectCallbackPass);

    bool afterRenderExists = renderingData.cameraData.captureActions != null ||
                                hasAfterRendering;

    bool requiresFinalPostProcessPass = postProcessEnabled &&
                                renderingData.cameraData.antialiasing == AntialiasingMode.FastApproximateAntialiasing;

    // if we have additional filters
    // we need to stay in a RT
    if (afterRenderExists)
    {
        bool willRenderFinalPass = (m_ActiveCameraColorAttachment != RenderTargetHandle.CameraTarget);
        // perform post with src / dest the same
        if (postProcessEnabled)
        {
            m_PostProcessPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment, m_AfterPostProcessColor, m_ActiveCameraDepthAttachment, m_ColorGradingLut, requiresFinalPostProcessPass, !willRenderFinalPass);
            EnqueuePass(m_PostProcessPass);
        }

        //now blit into the final target
        if (m_ActiveCameraColorAttachment != RenderTargetHandle.CameraTarget)
        {
            if (renderingData.cameraData.captureActions != null)
            {
                m_CapturePass.Setup(m_ActiveCameraColorAttachment);
                EnqueuePass(m_CapturePass);
            }

            if (requiresFinalPostProcessPass)
            {
                m_FinalPostProcessPass.SetupFinalPass(m_ActiveCameraColorAttachment);
                EnqueuePass(m_FinalPostProcessPass);
            }
            else
            {
                m_FinalBlitPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment);
                EnqueuePass(m_FinalBlitPass);
            }
        }
    }
    else
    {
        if (postProcessEnabled)
        {
            if (requiresFinalPostProcessPass)
            {
                m_PostProcessPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment, m_AfterPostProcessColor, m_ActiveCameraDepthAttachment, m_ColorGradingLut, true, false);
                EnqueuePass(m_PostProcessPass);
                m_FinalPostProcessPass.SetupFinalPass(m_AfterPostProcessColor);
                EnqueuePass(m_FinalPostProcessPass);
            }
            else
            {
                m_PostProcessPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment, RenderTargetHandle.CameraTarget, m_ActiveCameraDepthAttachment, m_ColorGradingLut, false, true);
                EnqueuePass(m_PostProcessPass);
            }
        }
        else if (m_ActiveCameraColorAttachment != RenderTargetHandle.CameraTarget)
        {
            m_FinalBlitPass.Setup(cameraTargetDescriptor, m_ActiveCameraColorAttachment);
            EnqueuePass(m_FinalBlitPass);
        }
    }

#if UNITY_EDITOR
    if (renderingData.cameraData.isSceneViewCamera)
    {
        m_SceneViewDepthCopyPass.Setup(m_DepthTexture);
        EnqueuePass(m_SceneViewDepthCopyPass);
    }
#endif
}

To sum up, Setup() decides which steps the render pipeline contains and in what order they render. How each step renders, and what it renders, is defined inside that step's pass.
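
This is also where the rendererFeatures loop from the excerpt becomes practical: to splice your own step into the pipeline, you write a ScriptableRenderPass plus a ScriptableRendererFeature and add the feature to the Renderer Data asset. A hedged sketch (all names such as CustomOutlineFeature are made up):

using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical feature: draws every opaque renderer whose shader has a pass
// tagged Tags{ "LightMode" = "CustomOutline" }, right after the opaques.
public class CustomOutlineFeature : ScriptableRendererFeature
{
    class CustomOutlinePass : ScriptableRenderPass
    {
        ShaderTagId m_ShaderTagId = new ShaderTagId("CustomOutline");
        FilteringSettings m_FilteringSettings = new FilteringSettings(RenderQueueRange.opaque);

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var sortFlags = renderingData.cameraData.defaultOpaqueSortFlags;
            var drawSettings = CreateDrawingSettings(m_ShaderTagId, ref renderingData, sortFlags);
            context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref m_FilteringSettings);
        }
    }

    CustomOutlinePass m_Pass;

    public override void Create()
    {
        m_Pass = new CustomOutlinePass();
        // decides where this pass sorts into the queue
        m_Pass.renderPassEvent = RenderPassEvent.AfterRenderingOpaques;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_Pass); // ends up in m_ActiveRenderPassQueue
    }
}

Once the feature is added to the Renderer Data asset, AddRenderPasses runs in the loop we saw in Setup, and the pass is sorted into place by its renderPassEvent.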

public void EnqueuePass(ScriptableRenderPass pass)
{
    m_ActiveRenderPassQueue.Add(pass);
}

EnqueuePass simply adds these passes to m_ActiveRenderPassQueue; the Execute part afterwards runs them all and draws whatever each one is responsible for.

public void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    ...
    // sort the passes in m_ActiveRenderPassQueue.
    SortStable(m_ActiveRenderPassQueue);

    // Cache the time for after the call to `SetupCameraProperties` and set the time variables in shader
    // For now we set the time variables per camera, as we plan to remove `SetupCamearProperties`.
    // Setting the time per frame would take API changes to pass the variable to each camera render.
    // Once `SetupCameraProperties` is gone, the variable should be set higher in the call-stack.
#if UNITY_EDITOR
    float time = Application.isPlaying ? Time.time : Time.realtimeSinceStartup;
#else
    float time = Time.time;
#endif
    float deltaTime = Time.deltaTime;
    float smoothDeltaTime = Time.smoothDeltaTime;
    // the built-in time variables shaders use are all set here.
    SetShaderTimeValues(time, deltaTime, smoothDeltaTime);

    // Upper limits for each block. Each block will contains render passes with events below the limit.
    NativeArray<RenderPassEvent> blockEventLimits = new NativeArray<RenderPassEvent>(k_RenderPassBlockCount, Allocator.Temp);
    blockEventLimits[RenderPassBlock.BeforeRendering] = RenderPassEvent.BeforeRenderingPrepasses;
    blockEventLimits[RenderPassBlock.MainRendering] = RenderPassEvent.AfterRenderingPostProcessing;
    blockEventLimits[RenderPassBlock.AfterRendering] = (RenderPassEvent)Int32.MaxValue;

    NativeArray<int> blockRanges = new NativeArray<int>(blockEventLimits.Length + 1, Allocator.Temp);
    // split the passes in m_ActiveRenderPassQueue into blocks (BeforeRendering, MainRendering, AfterRendering) for execution.
    FillBlockRanges(blockEventLimits, blockRanges);
    blockEventLimits.Dispose();
    // set up the built-in light data shaders use
    SetupLights(context, ref renderingData);

    // Before Render Block. This render blocks always execute in mono rendering.
    // Camera is not setup. Lights are not setup.
    // Used to render input textures like shadowmaps.
    ExecuteBlock(RenderPassBlock.BeforeRendering, blockRanges, context, ref renderingData);
    ...
    // Override time values from when `SetupCameraProperties` were called.
    // They might be a frame behind.
    // We can remove this after removing `SetupCameraProperties` as the values should be per frame, and not per camera.
    SetShaderTimeValues(time, deltaTime, smoothDeltaTime);
    ...
    // In this block main rendering executes.
    ExecuteBlock(RenderPassBlock.MainRendering, blockRanges, context, ref renderingData);
    ...
    // In this block after rendering drawing happens, e.g, post processing, video player capture.
    ExecuteBlock(RenderPassBlock.AfterRendering, blockRanges, context, ref renderingData);

    if (stereoEnabled)
        EndXRRendering(context, camera);

    DrawGizmos(context, camera, GizmoSubset.PostImageEffects);

    //if (renderingData.resolveFinalTarget)
        InternalFinishRendering(context);
    blockRanges.Dispose();
}

The Execute function sorts the passes in m_ActiveRenderPassQueue, splits them into blocks, sets up some shared built-in shader variables, and then runs each block of passes.
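
For reference, SortStable is a stable insertion sort keyed on renderPassEvent (sketched here from the idea rather than copied verbatim), which is why passes with the same event keep the order in which they were enqueued:

using System.Collections.Generic;
using UnityEngine.Rendering.Universal;

static class PassSorting
{
    // Insertion sort is stable: passes with equal renderPassEvent keep their
    // enqueue order. This mirrors what ScriptableRenderer.SortStable achieves.
    public static void SortStable(List<ScriptableRenderPass> list)
    {
        for (int i = 1; i < list.Count; ++i)
        {
            var curr = list[i];
            int j = i - 1;
            while (j >= 0 && curr.renderPassEvent < list[j].renderPassEvent)
            {
                list[j + 1] = list[j];
                --j;
            }
            list[j + 1] = curr;
        }
    }
}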

void ExecuteBlock(int blockIndex, NativeArray<int> blockRanges,
    ScriptableRenderContext context, ref RenderingData renderingData, bool submit = false)
{
    int endIndex = blockRanges[blockIndex + 1];
    for (int currIndex = blockRanges[blockIndex]; currIndex < endIndex; ++currIndex)
    {
        // look up the pass for this index and execute it
        var renderPass = m_ActiveRenderPassQueue[currIndex];
        ExecuteRenderPass(context, renderPass, ref renderingData);
    }

    if (submit)
        context.Submit();
}

void ExecuteRenderPass(ScriptableRenderContext context, ScriptableRenderPass renderPass, ref RenderingData renderingData)
{
    // configure this renderPass's render target: either the camera's default buffer or a dedicated renderTexture.
    ...
    // run this renderPass's rendering
    renderPass.Execute(context, ref renderingData);
}

So now we know how an SRP strings its individual render steps together and executes them.

How each step executes, and exactly what it renders, is decided inside that step itself.

Let's pick DepthOnlyPass as an example.

public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    CommandBuffer cmd = CommandBufferPool.Get(m_ProfilerTag);
    // string m_ProfilerTag = "Depth Prepass"; you can find this step in the Frame Debugger
    using (new ProfilingSample(cmd, m_ProfilerTag))
    {
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();
        
        var sortFlags = renderingData.cameraData.defaultOpaqueSortFlags;
        //ShaderTagId m_ShaderTagId = new ShaderTagId("DepthOnly");
        // from the culling results, select the shader passes with LightMode = "DepthOnly" to draw.
        var drawSettings = CreateDrawingSettings(m_ShaderTagId, ref renderingData, sortFlags);
        drawSettings.perObjectData = PerObjectData.None;

        ref CameraData cameraData = ref renderingData.cameraData;
        Camera camera = cameraData.camera;
        if (cameraData.isStereoEnabled)
            context.StartMultiEye(camera);
        // draw the objects that match the configured filter.
        context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref m_FilteringSettings);

    }
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}

In fact, every render step sets up its own draw conditions. In the DepthOnly step, for example, the pipeline looks through everything in cullResults and draws only the passes tagged DepthOnly. That is why, when you open a shader file, you see several passes, each with its own LightMode: it lets the same object be drawn by the matching step of the render pipeline.

To sum up: SRP exposes the rendering process that used to be hidden, letting us see clearly which steps a render goes through and what each step draws. These steps, in their order, correspond one-to-one with the entries in the Frame Debugger window, so the Frame Debugger effectively shows you the pipeline's concrete rendering steps in detail.
