GPU Profiling

When it comes to optimization, the first thing to do is profiling, because without profiling no one knows where the bottleneck is; done correctly, profiling gives you a complete picture of the game's overall performance. CPU profiling is already very mature, with a full range of tools available, and even without those tools a developer can easily write a profiling system: the simplest approach is to call QueryPerformanceCounter at the start of a function to record the time, call QueryPerformanceCounter again at the end, and subtract the two readings to get the function's execution time. Profilers like VTune can pick out the functions that consume the most execution time, so developers can optimize the most expensive functions one by one, because the most expensive ones are the bottleneck; until the bottleneck is brought down, optimizing anything else contributes little.
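As a concrete illustration, here is a minimal sketch of that QueryPerformanceCounter approach (ScopedCpuTimer is just an illustrative name, not from any particular codebase):

#include <windows.h>
#include <cstdio>

// Times a scope on the CPU and prints the elapsed milliseconds on exit.
class ScopedCpuTimer
{
public:
    explicit ScopedCpuTimer (const char * label) : m_label(label)
    {
        QueryPerformanceFrequency(&m_freq);   // ticks per second
        QueryPerformanceCounter(&m_start);    // reading at scope entry
    }
    ~ScopedCpuTimer ()
    {
        LARGE_INTEGER end;
        QueryPerformanceCounter(&end);        // reading at scope exit
        double ms = double(end.QuadPart - m_start.QuadPart) /
                    double(m_freq.QuadPart) * 1000.0;
        printf("%s: %.3f ms\n", m_label, ms);
    }
private:
    const char *    m_label;
    LARGE_INTEGER   m_freq, m_start;
};

Declare a ScopedCpuTimer at the top of a function, and the function's execution time is printed when it returns.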

For the GPU, the well-known NVPerfHUD, GPA, and NSight can all help developers analyze GPU performance costs. However, NVPerfHUD often inexplicably fails to work, requires attaching to the process, and is useless on AMD cards; GPA can capture on most graphics cards, but you have to open the captured data before you can see the performance costs, and it is often inaccurate and slow; NSight is not much better. For game developers, nothing beats a GPU profiler embedded in the game engine: no third-party tools are needed, and as soon as the game runs you can quickly see the cost of each part of the frame. An article on Crysis in GPU Pro 3 includes a screenshot of their embedded GPU profiler, and a web search turns up developers who have implemented similar functionality under DX11 using queries. Queries have existed since DX9 and are supported by essentially all hardware vendors, so it is also easy to build a GPU profiler with queries under DX9. Below is a reprint detailing how to implement a GPU profiler under DX11; for DX9, refer to the DirectX SDK documentation on queries.
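For reference, a minimal sketch of the equivalent setup under D3D9, based on the DirectX SDK query documentation, might look like this (pDevice is assumed to be your IDirect3DDevice9; error handling is omitted):

// Create the queries once: a disjoint query, a frequency query, and
// one timestamp query per point in the frame you want to mark.
IDirect3DQuery9 * pTsDisjoint = NULL;
IDirect3DQuery9 * pTsFreq = NULL;
IDirect3DQuery9 * pTsBegin = NULL;
IDirect3DQuery9 * pTsEnd = NULL;
pDevice->CreateQuery(D3DQUERYTYPE_TIMESTAMPDISJOINT, &pTsDisjoint);
pDevice->CreateQuery(D3DQUERYTYPE_TIMESTAMPFREQ, &pTsFreq);
pDevice->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &pTsBegin);
pDevice->CreateQuery(D3DQUERYTYPE_TIMESTAMP, &pTsEnd);

// Each frame: bracket the work with the disjoint query, and issue
// timestamps around whatever you want to measure.
pTsDisjoint->Issue(D3DISSUE_BEGIN);
pTsBegin->Issue(D3DISSUE_END);      // timestamp at start of frame
// ... render ...
pTsEnd->Issue(D3DISSUE_END);        // timestamp at end of frame
pTsFreq->Issue(D3DISSUE_END);       // clock frequency for this frame
pTsDisjoint->Issue(D3DISSUE_END);

Note that unlike D3D11, D3D9 reports the clock frequency through a separate D3DQUERYTYPE_TIMESTAMPFREQ query rather than as part of the disjoint query's data.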


http://www.reedbeta.com/blog/2011/10/12/gpu-profiling-101/

As you can see, even though Idyll is at a very, very early stage (it has no textures, only ambient and directional lighting), it still has a fairly complete performance measurement system. I chose to implement this early on in development because it’s my belief that although it can be too early to optimize, it’s never too early to profile.

Even at the very beginning of development, I want to know that the performance numbers I’m seeing are reasonable. I don’t need to worry about the details—I’m not going to worry that 0.47 ms is too long to spend drawing a 2700-triangle, untextured city—but I do want to know the numbers are at about the right order of magnitude. To put it another way, if I were spending 4.7 ms drawing a 2700-triangle, untextured city, then I’d be wondering what was going on! More than likely, it would be because I was doing something dumb that was forcing the driver or GPU into a slow mode of execution. This kind of bug can be hard to spot because the rendered frame is still visually correct, so you can’t see it (unless it causes you to drop from 60 to 30 Hz or something like that). But if you’re measuring your performance and you at least sanity-check your numbers from time to time, you’ll notice something’s wrong.

The other reason to implement GPU profiling early is that when it comes time to do your performance optimizations for real, you’ll have more trust in your profiling system. You’ll have ironed out most of the bugs, and you’ll have a feel for how much noise there is in the data and what sorts of artifacts there may be.

So now that you know why you should do GPU profiling ;), how do you actually do it? In this article, I’ll show you how to set it up in Direct3D 11. Similar commands should be available in any modern graphics API, although the details may vary.

First, let’s talk about how not to do GPU profiling. Do not measure performance in terms of framerate! This is something many beginners do, but it’s misleading at best, since framerate isn’t a linear measure of performance. People will make statements like “turning on XYZ dropped my framerate by 10 fps”, but this is meaningless, since a drop from 100 to 90 fps is a very different thing than a drop from 30 to 20 fps. You could clarify by reporting the fps you started from, but why bother? Just report performance results using units of real time. Milliseconds are fine, although you can go one step further and express everything in fractions of your frame budget. For instance, I’m currently targeting 60 Hz, so I have 16.67 ms to render a frame. Instead of saying that my objects rendered in 0.47 ms, I could make Idyll report that they rendered in 2.8% of a frame.
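The conversion is trivial; a one-line helper along these lines does it (the 60 Hz budget matches the example above):

// Express a timing in milliseconds as a percentage of a 60 Hz frame budget.
const float kFrameBudgetMs = 1000.0f / 60.0f;   // 16.67 ms per frame
float FrameBudgetPercent (float ms)
{
    return ms / kFrameBudgetMs * 100.0f;        // e.g. 0.47 ms -> ~2.8%
}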

Another caveat that should be obvious: you can’t measure GPU performance by timing on the CPU. For instance, calling QueryPerformanceCounter before and after a draw call won’t measure how long the GPU took to draw your objects, only how long the graphics driver took to queue up that call in the various data structures it has under the hood. That might be useful information to have in general, but it’s not GPU profiling.

Placing Queries

The tools we’ll use to get profiling data out of the GPU are ID3D11Query objects. As described in the docs, these can be created with ID3D11Device::CreateQuery, and come in various flavors. There are two we’ll want: D3D11_QUERY_TIMESTAMP and D3D11_QUERY_TIMESTAMP_DISJOINT. You’ll need one timestamp query for each stage of your renderer that you want to profile separately (in my case, one each for shadow clear, shadow objects, main clear, and main objects), plus two more to timestamp the beginning and ending of the whole frame. You’ll also need one “timestamp disjoint” query; I’ll return to that one later.
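Creating them is straightforward. A sketch of that setup might look like the following (assuming pDevice is your ID3D11Device; the pQuery* names match the render code below):

// Create the queries described above: one disjoint query plus one
// timestamp query per point in the frame we want to mark.
D3D11_QUERY_DESC desc = {};
desc.Query = D3D11_QUERY_TIMESTAMP_DISJOINT;
pDevice->CreateQuery(&desc, &pQueryDisjoint);
desc.Query = D3D11_QUERY_TIMESTAMP;
pDevice->CreateQuery(&desc, &pQueryBeginFrame);
pDevice->CreateQuery(&desc, &pQueryShadowClear);
pDevice->CreateQuery(&desc, &pQueryShadowObjects);
pDevice->CreateQuery(&desc, &pQueryMainClear);
pDevice->CreateQuery(&desc, &pQueryMainObjects);
pDevice->CreateQuery(&desc, &pQueryEndFrame);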

Executing a timestamp query causes the GPU to read an internal clock and store the current value of that clock somewhere for later retrieval. This is done by calling ID3D11DeviceContext::End and passing the query object you created. (There’s no call to Begin for timestamp queries, because they don’t operate like a stopwatch—they don’t measure the elapsed time between two points; they just take a clock reading at a single point.) You’ll want to place one of these calls at the beginning of the frame, and one after each chunk of rendering work you want to profile. You’ll also need one at the very end, after your call to IDXGISwapChain::Present. Each of these is a separate ID3D11Query object, because all of these timestamps need to be collected and stored separately as the GPU chews its way through the frame.

The “timestamp disjoint” query should enclose the whole frame. This one does require paired calls to ID3D11DeviceContext::Begin (just before your first timestamp) and ID3D11DeviceContext::End (just after your last one). What does the disjoint query do? Its most important role for our purposes is to tell us the clock frequency. All the timestamps we’re going to measure are expressed as counts of ticks of some internal GPU clock…but it’s the disjoint query that tells us how to convert those ticks into real time. It also has a secondary purpose: if something went wrong with the clock, such as the counter overflowing and wrapping back to zero or the clock frequency changing during the middle of the frame (as can happen on laptops due to CPU throttling), it’ll also tell us that. However, in those cases we just have to throw our data out, as we don’t have enough information to recover accurate timings. In practice, this rarely happens, and even when it does happen it only leads to one frame of missed data, so it’s not a big deal.

To recap, here’s what a render routine might look like after all these queries have been added:

void Render (ID3D11DeviceContext * pContext)
{
    // Begin disjoint query, and timestamp the beginning of the frame
    pContext->Begin(pQueryDisjoint);
    pContext->End(pQueryBeginFrame);
 
    // Draw shadow map
    ClearShadowMap();
    pContext->End(pQueryShadowClear);
    DrawStuffInShadowMap();
    pContext->End(pQueryShadowObjects);
 
    // Draw main view
    ClearMainView();
    pContext->End(pQueryMainClear);
    DrawStuffInMainView();
    pContext->End(pQueryMainObjects);
 
    // Display frame on-screen and finish up queries
    pSwapChain->Present(1, 0);
    pContext->End(pQueryEndFrame);
    pContext->End(pQueryDisjoint);
}

Retrieving The Data

So far we’ve set up a number of query objects that tell the GPU when and how to collect the timing data we want. Once the GPU actually renders the frame and collects the data, we need to retrieve it so we can analyze and display it.

The idea is simple enough. We just use the ID3D11DeviceContext::GetData method to see if a particular query has finished yet, and return its result if it has. As the docs say, GetData returns S_FALSE if the query is still in flight on the GPU, and S_OK when it’s done. So, we just need to write a loop that waits (an idle wait, please, not a busy wait!) until GetData returns something other than S_FALSE. Then our query data will be waiting for us!

The timestamp queries all send back a UINT64 containing—you guessed it—the timestamp value, measured in some sort of clock tick. The disjoint query returns a D3D11_QUERY_DATA_TIMESTAMP_DISJOINT struct, which contains the clock frequency and, as mentioned before, the rarely-encountered “disjoint” flag that means you have to throw out the last frame’s data because something weird happened with the clock.

So all we have to do now is subtract adjacent timestamp values to get the deltas, convert those into milliseconds and off we go! The code for this might look like:

void CollectTimestamps (ID3D11DeviceContext * pContext)
{
    // Wait for data to be available
    while (pContext->GetData(pQueryDisjoint, NULL, 0, 0) == S_FALSE)
    {
        Sleep(1);       // Wait a bit, but give other threads a chance to run
    }
 
    // Check whether timestamps were disjoint during the last frame
    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT tsDisjoint;
    pContext->GetData(pQueryDisjoint, &tsDisjoint, sizeof(tsDisjoint), 0);
    if (tsDisjoint.Disjoint)
    {
        return;
    }
 
    // Get all the timestamps
    UINT64 tsBeginFrame, tsShadowClear; // ... etc.
    pContext->GetData(pQueryBeginFrame, &tsBeginFrame, sizeof(UINT64), 0);
    pContext->GetData(pQueryShadowClear, &tsShadowClear, sizeof(UINT64), 0);
    // ... etc.
 
    // Convert to real time
    float msShadowClear = float(tsShadowClear - tsBeginFrame) /
                            float(tsDisjoint.Frequency) * 1000.0f;
    // ... etc.
}

Now you have the timings in milliseconds and can do with them as you please! Display them on screen, average them over time, log them to a file, or whatever else you can think of.
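For instance, one simple way to smooth the numbers for display (not the half-second windowed average the author's posted code uses, mentioned below, but an alternative) is an exponential moving average:

// Blend each new per-frame timing into a running average for display.
float SmoothTiming (float msNew, float msAvg)
{
    const float alpha = 0.05f;  // smoothing factor; smaller = smoother
    return msAvg + alpha * (msNew - msAvg);
}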

Double-Buffered Queries

Back toward the beginning of this article, I said that you would need one timestamp query for each rendering stage and one disjoint query. Actually, I lied. I wanted to keep things simple while explaining the basics, but the truth is that to do a proper GPU profiler, you’ll need to double-buffer your queries, meaning that you need two copies of everything.

This is because the CPU and GPU run simultaneously, not sequentially—or at least they should, for efficiency! While the GPU is rendering frame n, the CPU is forging ahead and working on frame n + 1. When the GPU finishes frame n, it puts it up on screen and starts on frame n + 1; then the CPU gets the go-ahead to start on frame n + 2. But this means that if we fire off a bunch of timestamp queries in frame n on the CPU, then wait for them to complete before we start the next frame, we’ll be serializing the system. The CPU will have to wait for the GPU to finish the frame it just dispatched, and then the GPU will idle waiting for the CPU to finish the next frame. This is a Bad Thing.

Double-buffering allows us to get around this by keeping separate queries for alternate frames. On frame n we’ll fire off one set of queries, then we’ll wait for the other set of queries to complete—these were the queries for frame n – 1. Then we’ll re-use the queries we just collected and fire off a new set for frame n + 1, then finally wait for the queries from frame n. This way we always give the GPU a whole frame to finish its work, and avoid dropping into the slow, serialized mode of execution.
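A minimal sketch of that bookkeeping, using a frame-parity index (the author's posted code, linked below, handles this more completely):

// Two copies of every query, indexed by frame parity. While frame n's
// queries are being issued, frame n-1's results are collected.
ID3D11Query * pQueryDisjoint[2];
ID3D11Query * pQueryBeginFrame[2];
// ... and two copies of every other timestamp query ...
int iFrame = 0;     // flips between 0 and 1 each frame

void RenderAndProfile (ID3D11DeviceContext * pContext)
{
    // Issue this frame's queries into one set...
    pContext->Begin(pQueryDisjoint[iFrame]);
    pContext->End(pQueryBeginFrame[iFrame]);
    // ... render and timestamp as in the Render code above ...
    pContext->End(pQueryDisjoint[iFrame]);

    // ...then collect last frame's results from the other set
    // (skipping the very first frame, which has nothing to collect).
    CollectTimestamps(pContext, 1 - iFrame);
    iFrame = 1 - iFrame;
}

Here CollectTimestamps is assumed to take the index of the query set to read, unlike the single-buffered version above.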

(By the way, this is a good example of what I mentioned at the top of this article about sanity-checking your performance numbers! You might have noticed that I’m measuring the total frame time on the CPU as well as on the GPU—on the CPU, it’s just QueryPerformanceCounter as normal. When the CPU and GPU are running simultaneously, as they should be, the CPU and GPU frame time will be equal (plus or minus some measurement noise). But if you didn’t double-buffer your queries, or did something else to make them drop into the slow, serialized execution mode, you’d see the CPU frame time shoot up to substantially larger than the GPU frame time, since it’s doing its own work and then waiting for the GPU to finish all its work.)

At this point, you should have all the information you’ll need to go and write a GPU profiler of your own! But in case some of the details aren’t clear, I’ve posted my GPU profiling code from Idyll, which you’re free to include in your own project; it’s under the BSD license. In that code, I’ve defined an enum called GTS (GPU Time-Stamp) to label all the timestamped points in the frame; you should modify it to contain the set of blocks you care about. It handles all the double-buffering for you, and will also average all the deltas over a half-second, for smoother display. Now, go forth and find out how long the GPU is taking to render your scenes!


