Introduction
by Rajiv Kapoor
Modern processors derive much of their performance from executing instructions in parallel and speculatively, before their results are actually needed. When a sequential block of code is executed, the processor has full visibility of the next set of instructions it needs to execute and, after evaluating their interdependencies, executes them. However, when it encounters a comparison instruction and a branch based on its result, the processor does not know which instructions to execute after the branch until it knows the result of the comparison. The next instructions could be either the ones at the branch target address, if the branch is taken, or the ones immediately following the branch instruction, if the branch is not taken. This uncertainty would stall the processor's execution pipeline. To avoid such stalls, the processor makes an educated "guess" at the result of the comparison and speculatively executes the instructions it expects to need next. These predictions are based on the history of that particular branch instruction, which is maintained in the processor's branch history buffer. This article discusses the problems that arise from branch mispredictions and offers approaches for avoiding the performance costs that come with them.
The Problem With Branches
Predicting a branch from its history yields a correct guess only if the branch has been seen before and follows a reasonably consistent pattern. A typical case of unpredictable branch behavior is a comparison whose result depends on the data.
When the prediction made by the processor is incorrect, all the speculatively executed instructions are discarded as soon as the correct branch is determined, and the processor execution pipeline restarts with instructions from the correct branch destination.
This halt in productive work while the new instructions work their way down the execution pipeline causes a processor stall, which is a major drain on performance.
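As a minimal sketch of such a data-dependent branch (a hypothetical example, not from the original article), consider a loop whose branch direction is decided entirely by the input values. With random data, the predictor is right only about half the time:

```c
/* Hypothetical example of a hard-to-predict, data-dependent branch.
 * If the contents of src are effectively random, the direction of the
 * "if" cannot be learned from history, so it mispredicts frequently. */
int count_negatives(const float *src, int size)
{
    int count = 0;
    for (int j = 0; j < size; j++) {
        if (src[j] < 0.0f)   /* direction depends only on the data */
            count++;
    }
    return count;
}
```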
No Branches Means No Mispredicts
One of the most effective ways to reduce processor stalls due to mispredicted branches is to eliminate the branches altogether. Removing branches not only improves the runtime performance of the code, it also helps the compiler optimize it. When branch removal is not possible, the branch can be coded so that the processor predicts it correctly under the static branch prediction rules. The references below provide more information on these rules.
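For instance, under the common static rule that a forward conditional branch is predicted not taken, the likely case should sit on the fall-through path. A hedged sketch (the function and its error convention are illustrative, not from the original text):

```c
/* Sketch: arrange the common case as the fall-through path. A forward
 * branch that is statically predicted not-taken is then usually
 * predicted correctly even on its first encounter. */
int process_sample(int value)
{
    if (value < 0) {      /* rare error case: forward branch, rarely taken */
        return -1;        /* error path kept out of the main flow */
    }
    /* common case falls through */
    return value * 2;
}
```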
Which Branches To Remove
A branch is any decision point in a program. Branches occur at every if and case statement, in every for and do-while loop, and at every goto statement, among others. Some of these, such as the goto statement, are unconditional, are always predicted correctly, and need no optimization. Others, especially branches that depend on data values, can be very difficult to predict. Such data-dependent branches are the best candidates for removal.
Removing all branches, however, is rarely worthwhile. To make the right decision, consider the correlation between the execution time spent in the code block containing the branches and the rate of misprediction of those branches. Use a tool such as the Intel® VTune™ Performance Analyzer to measure two metrics for the code block: the percentage of execution time spent in it, and the ratio of branches mispredicted to branches retired (the branch mispredict ratio). As a rule of thumb, remove or optimize a branch if its mispredict ratio is greater than about 8% and its location is a hotspot in the execution profile relative to the rest of the code. It is important to consider both metrics together, because a branch location may be an execution hotspot simply because it is executed frequently, even though the branch is never mispredicted.
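The rule of thumb can be written down directly. In this sketch the counter names and the hotspot threshold are illustrative assumptions, not actual VTune event names or figures from the article:

```c
/* Sketch of the decision rule from the text: optimize a branch only if
 * its mispredict ratio exceeds ~8% AND its code block is a hotspot.
 * Counter names are illustrative, not real profiler event names. */
double mispredict_ratio(long long mispredicted, long long retired)
{
    return retired ? (double)mispredicted / (double)retired : 0.0;
}

int worth_optimizing(long long mispredicted, long long retired,
                     double pct_of_execution_time)
{
    return mispredict_ratio(mispredicted, retired) > 0.08
        && pct_of_execution_time > 10.0;  /* hotspot cutoff: an assumption */
}
```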
Branch Elimination By Example
Let's consider some simple synthetic code snippets that illustrate several branch-removal techniques. The following function contains three code segments, commented as segments 1, 2, and 3, which are used in the following sections to illustrate different ways of removing branches. To simplify the example, it is assumed that size is a multiple of 4 and that all pointer parameters to the function are 16-byte aligned.
void example ( int size, int blend, float *src, float *dest, float *src_1,
               float *src_2 )
{
    int j;

    /* Code segment 1 */
    for ( j = 0; j < size; j++ )
    {
        if ( blend == 255 )
            dest[j] = src_1[j];
        else if ( blend == 0 )
            dest[j] = src_2[j];
        else
            dest[j] = ( src_1[j] * blend + src_2[j] * (255-blend) ) / 256;
    }

    /* Code segment 2 */
    for ( j = 0; j < size; j++ )
    {
        if ( src[j] > dest[j] )
        {
            /* swap so that dest holds the maximum, src the minimum */
            float tmp = src[j];
            src[j] = dest[j];
            dest[j] = tmp;
        }
    }

    /* Code segment 3 */
    for ( j = 0; j < size; j++ )
    {
        if ( src[j] < 0 )
            src[j] = -1.0 * src[j];
    }
}
Code Segment 1
This segment shows a simple case where a busy loop contains invariant branches inside the loop. This code is the kind of loop all C and C++ programmers have written hundreds if not thousands of times. Compare it with this segment:
void example ( int size, int blend, float *src, float *dest, float *src_1,
               float *src_2 )
{
    int j;

    /* Code segment 1, with the loop-invariant branches hoisted out */
    if ( blend == 255 )
        for ( j = 0; j < size; j++ )
            dest[j] = src_1[j];
    else if ( blend == 0 )
        for ( j = 0; j < size; j++ )
            dest[j] = src_2[j];
    else
        for ( j = 0; j < size; j++ )
            dest[j] = ( src_1[j] *
                        blend + src_2[j] * (255-blend) ) / 256;

    /* ... code segments 2 and 3 unchanged ... */
}
The same operations are still performed, but none of the loops have any comparison within them. This means that the loops can be optimally executed by the processor without decreasing performance due to mispredicted branches. This change alone will significantly
increase the performance of these loops.
Code Segment 2
This segment calculates element-wise minimum and maximum values from two arrays. Using Single Instruction Multiple Data (SIMD) operations, this kind of branchy code can easily be converted into logical operations. One way to do this is with the mask-generating compare instructions in the Streaming SIMD Extensions (SSE) instruction set offered on the Pentium® III and later processors. Here is an implementation using the C intrinsics for the SSE instructions supported by the Intel and Microsoft C/C++ compilers.
void example ( int size, int blend, float *src, float *dest, float *src_1,
               float *src_2 )
{
    int j;
    __m128 src4, dest4, min4, max4, mask4, max4_s,
           max4_d, min4_s, min4_d;

    /* Code segment 2, using SSE compare/mask operations */
    for ( j = 0; j < size/4; j++, src+=4, dest+=4 )
    {
        src4 = _mm_load_ps(src);
        dest4 = _mm_load_ps(dest);
        /* all-ones mask in each element where src4 > dest4 */
        mask4 = _mm_cmpgt_ps(src4, dest4);
        max4_s = _mm_and_ps(mask4, src4);
        min4_d = _mm_and_ps(mask4, dest4);
        max4_d = _mm_andnot_ps(mask4, dest4);
        min4_s = _mm_andnot_ps(mask4, src4);
        max4 = _mm_or_ps(max4_s, max4_d);
        min4 = _mm_or_ps(min4_s, min4_d);
        _mm_store_ps(src, min4);
        _mm_store_ps(dest, max4);
    }
}
This can also be implemented more efficiently using the SIMD min and max instructions provided by SSE. The following code shows the implementation using the _mm_min_ps() and _mm_max_ps() intrinsics.
void example ( int size, int blend, float *src, float *dest, float *src_1,
               float *src_2 )
{
    int j;
    __m128 src4, dest4, min4, max4;

    /* Code segment 2, using the SSE min/max instructions */
    for ( j = 0; j < size/4; j++, src+=4, dest+=4 )
    {
        src4 = _mm_load_ps(src);
        dest4 = _mm_load_ps(dest);
        max4 = _mm_max_ps(src4, dest4);
        min4 = _mm_min_ps(src4, dest4);
        _mm_store_ps(src, min4);
        _mm_store_ps(dest, max4);
    }
}
Code Segment 3
This piece of code calculates the absolute values of the elements of the array src. The most obvious way to remove the branch is to use the fabs() intrinsic:
for ( j = 0; j < size; j++ )
    src[j] = (float) fabs( src[j] );
Using SSE, the code can be further optimized to process four values at a time, which also reduces the branches due to loop execution, since the loop count is now a fourth of the original. Here is an implementation that exploits the fact that calculating the absolute value of a single-precision floating-point number is just a matter of clearing bit 31 of its representation.
void example ( int size, int blend, float *src, float *dest, float *src_1,
               float *src_2 )
{
    int j;
    __declspec (align(16)) int temp[4] =
        { 0x7FFFFFFF, 0x7FFFFFFF,
          0x7FFFFFFF, 0x7FFFFFFF };
    __m128 src4;
    __m128 const_sign_bit_0
        = _mm_load_ps( (float *) temp );

    /* Code segment 3, clearing the sign bit of four values at a time */
    for ( j = 0; j < size/4; j++, src+=4 )
    {
        src4 = _mm_load_ps(src);
        src4 = _mm_and_ps(src4, const_sign_bit_0);
        _mm_store_ps(src, src4);
    }
}
Conclusion
The Occasional Cost Of Branch Removal
Branch removal is not always free. Consider the optimized versions of code segment 3 shown in the previous section: they process all src values, whereas the original code touches only the negative ones. If the values are non-negative in the majority of cases, the optimized code does more work than the original. Furthermore, when using SSE or SSE2 instructions, be aware of the time spent in "setup" code as opposed to the real intended operation. Setup code loads and/or rearranges data to make it amenable to the SIMD operations that follow. Always try to string together as many SIMD operations as possible to amortize this fixed setup cost.
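The trade-off can be seen even in scalar form (a hypothetical comparison, not from the original text): the branchy version does no work for non-negative elements, while the branchless version pays the same cost for every element.

```c
/* Branchy: writes memory only for negative elements. Cheap when the
 * data is mostly non-negative, but mispredicts on mixed data. */
void abs_branchy(float *src, int size)
{
    for (int j = 0; j < size; j++)
        if (src[j] < 0.0f)
            src[j] = -src[j];
}

/* Branchless: a fixed amount of work per element regardless of sign;
 * compilers can often turn the conditional expression into a
 * conditional move rather than a branch. */
void abs_branchless(float *src, int size)
{
    for (int j = 0; j < size; j++)
        src[j] = src[j] < 0.0f ? -src[j] : src[j];
}
```

Which version wins depends on the data distribution, which is exactly the point of this section.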
Resources
For more information on this topic, see the Optimization manual for the Pentium® 4 and Intel® Xeon® processors, as well as the processor manuals, which contain significant helpful information. They can be downloaded at: http://developer.intel.com/products/processor/manuals/index.htm
Also see Richard Gerber's book, The Software Optimization Cookbook, Intel Press,
2002, which inspired the code in the first example.