In the IEEE 754 standard, why is the exponent bias 127 rather than 128?

There are two reasons: the encoding of Infinities/NaNs, and gradual underflow.

If you use exponents to represent both integer (n >= 0) and fractional (n < 0) magnitudes, you have the problem that you need one exponent for 2^0 = 1. The remaining range is then odd, so you must choose whether the bigger half goes to fractions or to integers. For single precision we have 256 exponent values, 255 once the exponent for 2^0 is taken. IEEE 754 then reserves the highest exponent field (255) for special values: +- Infinity and NaNs (Not a Number) to indicate failure. So we are back to an even count (254 for both sides, integer and fractional), but with a lower bias: with a bias of 127, the fields 1 to 254 encode exponents from -126 to +127, tilting the split towards the integer side; a bias of 128 would instead give -127 to +126.
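As a quick illustration, here is a minimal C sketch that extracts the 8-bit exponent field of a float and subtracts the bias of 127 to recover the true exponent (assuming IEEE 754 single precision):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float f = 6.0f;                  /* 6.0 = 1.5 * 2^2, true exponent is 2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float as raw bits */
    uint32_t field = (bits >> 23) & 0xFF;        /* the 8-bit exponent field */
    printf("stored field = %u, true exponent = %d\n",
           field, (int)field - 127);             /* prints 129 and 2 */
    return 0;
}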

The second reason is gradual underflow. The standard declares that normally all numbers are normalized, meaning that the exponent indicates the position of the first bit. To gain one extra bit of precision, the first bit is not stored but assumed (hidden bit): the first bit after the exponent field is the second bit of the number; the first is always a binary 1. If you enforce normalization you run into the problem that you cannot encode zero, and even if you encode zero as a special value, numerical accuracy is hampered. +-Infinity (the highest exponent) makes it clear that something is wrong, but underflow to zero for numbers that are too small is perfectly normal and therefore easy to overlook as a possible problem. So Kahan, the designer of the standard, decided that denormalized numbers, or subnormals, should be introduced, and that they should include 1/MAX_FLOAT.
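To see that the subnormals really do cover 1/MAX_FLOAT, here is a minimal C sketch (assuming IEEE 754 single precision and default rounding):

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    float tiny = 1.0f / FLT_MAX;   /* about 2.9e-39, below FLT_MIN (~1.2e-38) */
    printf("1/FLT_MAX  = %g\n", tiny);
    printf("subnormal? = %d\n", fpclassify(tiny) == FP_SUBNORMAL); /* prints 1 */
    printf("nonzero?   = %d\n", tiny != 0.0f);                     /* prints 1 */
    return 0;
}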

EDIT: Allan asked why "numerical accuracy is hampered" if you encode zero as a special value. I should have phrased it as "numerical accuracy is still hampered". In fact this was the approach of the historical DEC VAX floating-point format: if the exponent field in the raw bit encoding was 0, the number was considered zero. As an example, take the 32-bit format that is still ubiquitous on GPUs:

X 00000000 XXXXXXXXXXXXXXXXXXXXXXX

In this case, the content of the mantissa field on the right could be completely ignored and was normally filled with zeroes. The sign field on the left could still be valid, distinguishing a normal zero from a "negative zero" (you can get a negative zero from something like -1.0/Infinity, or from rounding a tiny negative number towards zero).
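A minimal C sketch of that sign distinction (assuming IEEE 754 floats):

#include <stdio.h>
#include <math.h>

int main(void) {
    float pz = 0.0f;
    float nz = -1.0f / INFINITY;   /* one way to produce a negative zero */
    printf("pz == nz:  %d\n", pz == nz);        /* 1: the two zeros compare equal */
    printf("sign bits: %d %d\n",
           !!signbit(pz), !!signbit(nz));       /* 0 1: but the sign bit differs */
    return 0;
}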

Gradual underflow and the subnormals of IEEE 754, in contrast, do use the mantissa field. Only

X 00000000 00000000000000000000000

is zero. All other bit combinations are valid and, even more practically, you are warned if your result underflows. So what's the point?

Consider the different numbers
A 0 00000001 10010101111001111111111
B 0 00000001 10010101111100001010000

They are valid floating-point numbers, very small but still finite. As you can see, the first 11 mantissa bits are identical. If you now compute A-B or B-A, the first significant bit of the result falls below the lowest normal exponent, so without gradual underflow the result is... 0. So A != B but A-B = 0. Ouch. Countless people have fallen into this trap, and it can be assumed that many never noticed. The same happens with multiplication and division: exponents are added or subtracted, and if the result falls below the lower threshold: 0. And as you know, 0 * anything = 0. You could compute S*T*X*Y*Z, and once one subproduct is 0, the result is 0 even when a perfectly valid, even huge, number would be the correct result (see the sketch below). It should be said that these anomalies can never be completely avoided due to rounding, but with gradual underflow they became rare. Very rare.
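Here is a minimal C sketch of the trap, building A and B from the bit patterns above (assuming IEEE 754 single precision; a flush-to-zero mode such as -ffast-math on some targets would turn the difference into 0):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

/* Assemble a float from sign, 8-bit exponent field and 23-bit mantissa field. */
static float make_float(uint32_t sign, uint32_t exp, uint32_t mant) {
    uint32_t bits = (sign << 31) | (exp << 23) | mant;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float a = make_float(0, 1, 0x4AF3FF); /* 0 00000001 10010101111001111111111 */
    float b = make_float(0, 1, 0x4AF850); /* 0 00000001 10010101111100001010000 */
    float d = a - b;
    printf("a != b     : %d\n", a != b);                        /* 1 */
    printf("a - b      : %g\n", d);                             /* tiny but nonzero */
    printf("subnormal? : %d\n", fpclassify(d) == FP_SUBNORMAL); /* 1 */
    return 0;
}

With gradual underflow the difference lands in the subnormal range and stays nonzero; without it, A-B would flush to 0 even though A != B.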
