Decimal64 floating-point format

In computing, decimal64 is a decimal floating-point computer number format that occupies 8 bytes (64 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.

Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. (Equivalently, ±0000000000000000×10^−398 to ±9999999999999999×10^369.) Because the significand is not normalized, most values with fewer than 16 significant digits have multiple possible representations; 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Zero has 768 possible representations (1536 if both signed zeros are counted).
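
These cohorts can be observed with any implementation of IEEE 754 decimal arithmetic. As an illustration only (Python's decimal module follows the same arithmetic model, though not the decimal64 bit layout):

from decimal import Decimal

# Three members of one cohort: numerically equal, distinct representations.
a = Decimal("1E+2")    # coefficient 1,   exponent 2:   1 × 10^2
b = Decimal("1.0E+2")  # coefficient 10,  exponent 1:  10 × 10^1
c = Decimal("100")     # coefficient 100, exponent 0: 100 × 10^0

print(a == b == c)         # True: numeric comparison ignores the representation
print(a, b, c)             # 1E+2 1.0E+2 100: the representations are preserved
print(a.compare_total(b))  # 1: the IEEE total order distinguishes cohort members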

Decimal64 is a relatively new decimal floating-point format, formally introduced in the 2008 revision of the standard, IEEE 754-2008.

Representation of decimal64 values

IEEE 754 allows two alternative representation methods for decimal64 values. The standard does not specify how to signify which representation is used, for instance in a situation where decimal64 values are communicated between systems.

In one representation method, based on binary integer decimal, the significand is represented as a binary-coded positive integer.

The alternative representation method is based on densely packed decimal (DPD) for most of the significand (all except the most significant digit).

Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3×2^8 = 768 possible exponent values.

In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of a 5-bit field. The remaining combinations encode infinities and NaNs.

If the leading 4 bits of the significand are between 0 and 7, the number begins as follows:

s 00 xxxx   Exponent begins with 00, significand with 0mmm
s 01 xxxx   Exponent begins with 01, significand with 0mmm
s 10 xxxx   Exponent begins with 10, significand with 0mmm

If the leading 4 bits of the significand are binary 1000 or 1001 (decimal 8 or 9), the number begins as follows:

s 1100 xx   Exponent begins with 00, significand with 100m
s 1101 xx   Exponent begins with 01, significand with 100m
s 1110 xx   Exponent begins with 10, significand with 100m

The bits that follow (shown as x in the patterns above) encode the remaining exponent bits and the remainder of the most significant digit, but the details vary depending on the encoding alternative used. There is no particular reason for this difference beyond the history of the eight-year development of IEEE 754-2008.

The final combinations are used for infinities and NaNs, and are the same for both alternative encodings:

s 11110 x  ±Infinity (see Extended real number line)
s 111110   quiet NaN (sign bit ignored)
s 111111   signaling NaN (sign bit ignored)

In the latter cases, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.
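
A minimal classification sketch in Python follows, using the bit patterns above; the helper name is invented for this illustration and is not any particular library's API:

def classify_decimal64(bits):
    """Classify a raw 64-bit decimal64 encoding, given as an unsigned
    integer, by inspecting the bits that follow the sign bit."""
    comb5 = (bits >> 58) & 0b11111   # the 5 bits after the sign bit
    if comb5 == 0b11110:
        return "infinity"
    if comb5 == 0b11111:
        # The next bit distinguishes quiet (0) from signaling (1) NaNs.
        return "sNaN" if (bits >> 57) & 1 else "qNaN"
    return "finite"                  # every other combination is finite

assert classify_decimal64(0x7800000000000000) == "infinity"  # s 11110...
assert classify_decimal64(0x7C00000000000000) == "qNaN"      # s 111110...
assert classify_decimal64(0x7E00000000000000) == "sNaN"      # s 111111...
# Filling memory with the byte 0x7C (01111100) yields quiet NaNs, as noted above.
assert classify_decimal64(0x7C7C7C7C7C7C7C7C) == "qNaN"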

Binary integer significand field

This format uses a binary significand from 0 to 10^16−1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding can represent binary significands up to 10×2^50−1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16−1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).
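
These limits are easy to verify with arbitrary-precision integers; a quick sanity check in Python:

# Largest canonical significand: 16 decimal nines, needing 54 bits.
assert 10**16 - 1 == 9999999999999999 == 0x2386F26FC0FFFF
assert (10**16 - 1).bit_length() == 54

# Ceiling of the encoding itself: implicit "100" prefix plus 51 stored bits.
assert 2**53 + 2**51 - 1 == 10 * 2**50 - 1 == 0x27FFFFFFFFFFFF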

As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).

If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 10 bits following the sign bit, and the significand is the remaining 53 bits, with an implicit leading 0 bit:

s 00eeeeeeee (0)TTTtttttttttttttttttt tttttttttttttttttttttttttttttttt
s 01eeeeeeee (0)TTTtttttttttttttttttt tttttttttttttttttttttttttttttttt
s 10eeeeeeee (0)TTTtttttttttttttttttt tttttttttttttttttttttttttttttttt

This includes subnormal numbers where the leading significand digit is 0.

If the 4 bits after the sign bit are "1100", "1101", or "1110", then the 10-bit exponent field is shifted 2 bits to the right (after both the sign bit and the "11" bits thereafter), and the represented significand is in the remaining 51 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" in the true significand.

s 11 00eeeeeeee (100)Ttttttttttttttttttt tttttttttttttttttttttttttttttttt
s 11 01eeeeeeee (100)Ttttttttttttttttttt tttttttttttttttttttttttttttttttt
s 11 10eeeeeeee (100)Ttttttttttttttttttt tttttttttttttttttttttttttttttttt

The "11" 2-bit sequence after the sign bit indicates that there is an implicit "100" 3-bit prefix to the significand. Compare having an implicit 1 in the significand of normal values for the binary formats. Note also that the "00", "01", or "10" bits are part of the exponent field.

Note that the leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000000000000 is encoded as binary 011100011010111111010100100110001101000000000000000000, with the leading 4 bits encoding 7; the first significand which requires a 54th bit is 2^53 = 9007199254740992.
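
This is easy to confirm in a few lines of Python:

sig = 8000000000000000
assert f"{sig:054b}".startswith("0111")   # top 4 of the 54 significand bits
assert sig.bit_length() == 53             # so it fits in the 53 stored bits
assert 2**53 == 9007199254740992          # first value needing a 54th bit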

In the above cases, the value represented is

(−1)^sign × 10^(exponent−398) × significand
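
A decoder for finite BID values is then a few lines of bit manipulation. The following Python sketch (names invented for this illustration; not a production implementation) handles both layouts and clamps non-canonical significands to zero as the standard directs:

def decode_bid_finite(bits):
    """Decode a finite decimal64 in the binary integer encoding.
    Returns (sign, exponent, significand); the value represented is
    (-1)**sign * 10**(exponent - 398) * significand.
    Assumes `bits` is not an infinity or NaN encoding."""
    sign = bits >> 63
    if (bits >> 61) & 0b11 != 0b11:
        # First layout: 10-bit exponent after the sign, 53-bit significand.
        exponent = (bits >> 53) & 0x3FF
        significand = bits & ((1 << 53) - 1)
    else:
        # Second layout: exponent shifted right 2 bits; the implicit "100"
        # supplies the top 3 bits of the 54-bit true significand.
        exponent = (bits >> 51) & 0x3FF
        significand = (0b100 << 51) | (bits & ((1 << 51) - 1))
    if significand > 10**16 - 1:
        significand = 0              # non-canonical: treated as zero
    return sign, exponent, significand

# 1.0 is significand 1 with exponent field 398 (true exponent 0):
assert decode_bid_finite((398 << 53) | 1) == (0, 398, 1)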

If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:

s 11110 xx...x    ±infinity
s 111110 x...x    a quiet NaN
s 111111 x...x    a signaling NaN

Densely packed decimal significand field

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal encoding.

Unlike the binary integer significand version, where the full exponent field comes before the entire significand, this encoding combines the leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand into the five bits that follow the sign bit.

The eight bits after that are the exponent continuation field, providing the less significant bits of the exponent.

The last 50 bits are the significand continuation field, consisting of five 10-bit "declets". Each declet encodes three decimal digits using the DPD encoding.

If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits after that are interpreted as the leading decimal digit (0 to 7):

s 00 TTT (00)eeeeeeee (TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 01 TTT (01)eeeeeeee (TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 10 TTT (10)eeeeeeee (TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

If the 4 bits after the sign bit are "1100", "1101", or "1110", then the second two bits are the leading bits of the exponent, and the last bit is prefixed with "100" to form the leading decimal digit (8 or 9):

s 1100 T (00)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1101 T (01)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1110 T (10)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

The remaining two combinations (11110 and 11111) of the 5-bit field are used to represent ±infinity and NaNs, respectively.

The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.

Densely packed decimal encoding rules[1]

b9 b8 b7 b6 b5 b4 b3 b2 b1 b0 |  d2   d1   d0  |  Values encoded   | Digit pattern
a  b  c  d  e  f  0  g  h  i  | 0abc 0def 0ghi | (0–7) (0–7) (0–7) | three small digits
a  b  c  d  e  f  1  0  0  i  | 0abc 0def 100i | (0–7) (0–7) (8–9) | two small digits, one large
a  b  c  d  e  f  1  0  1  i  | 0abc 100f 0dei | (0–7) (8–9) (0–7) | two small digits, one large
a  b  c  d  e  f  1  1  0  i  | 100c 0def 0abi | (8–9) (0–7) (0–7) | two small digits, one large
a  b  c  1  0  f  1  1  1  i  | 0abc 100f 100i | (0–7) (8–9) (8–9) | one small digit, two large
a  b  c  0  1  f  1  1  1  i  | 100c 0abf 100i | (8–9) (0–7) (8–9) | one small digit, two large
a  b  c  0  0  f  1  1  1  i  | 100c 100f 0abi | (8–9) (8–9) (0–7) | one small digit, two large
x  x  c  1  1  f  1  1  1  i  | 100c 100f 100i | (8–9) (8–9) (8–9) | three large digits

The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The 8×3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
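
The table transcribes directly into code. The following Python sketch is for illustration; serious implementations typically use a precomputed 1024-entry lookup table instead:

def dpd_to_digits(declet):
    """Decode one 10-bit DPD declet into three decimal digits (d2, d1, d0),
    following the encoding rules table above."""
    b9, b8, b7, b6, b5, b4, b3, b2, b1, b0 = (
        (declet >> k) & 1 for k in range(9, -1, -1))
    if b3 == 0:                                   # three small digits
        return 4*b9 + 2*b8 + b7, 4*b6 + 2*b5 + b4, 4*b2 + 2*b1 + b0
    if (b2, b1) == (0, 0):                        # two small digits, one large
        return 4*b9 + 2*b8 + b7, 4*b6 + 2*b5 + b4, 8 + b0
    if (b2, b1) == (0, 1):
        return 4*b9 + 2*b8 + b7, 8 + b4, 4*b6 + 2*b5 + b0
    if (b2, b1) == (1, 0):
        return 8 + b7, 4*b6 + 2*b5 + b4, 4*b9 + 2*b8 + b0
    if (b6, b5) == (1, 0):                        # one small digit, two large
        return 4*b9 + 2*b8 + b7, 8 + b4, 8 + b0
    if (b6, b5) == (0, 1):
        return 8 + b7, 4*b9 + 2*b8 + b4, 8 + b0
    if (b6, b5) == (0, 0):
        return 8 + b7, 8 + b4, 4*b9 + 2*b8 + b0
    return 8 + b7, 8 + b4, 8 + b0                 # three large digits

assert dpd_to_digits(0b0010100011) == (1, 2, 3)   # three small digits
assert dpd_to_digits(0b0011111111) == (9, 9, 9)   # three large digits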

In the above cases, with the true significand taken as the decoded sequence of decimal digits, the value represented is

(−1)^sign × 10^(exponent−398) × true significand
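
Putting the pieces together, a complete decoder for finite DPD-encoded values can look as follows (an illustrative Python sketch reusing dpd_to_digits from above; it assumes the input is neither an infinity nor a NaN):

def decode_dpd_finite(bits):
    """Decode a finite DPD decimal64: returns (sign, exponent, significand),
    so that the value is (-1)**sign * 10**(exponent - 398) * significand."""
    sign = bits >> 63
    comb = (bits >> 58) & 0b11111                # 5-bit combination field
    if comb >> 3 != 0b11:                        # leading digit 0 to 7
        exp_msbs = comb >> 3
        lead_digit = comb & 0b111
    else:                                        # leading digit 8 or 9
        exp_msbs = (comb >> 1) & 0b11
        lead_digit = 8 + (comb & 1)
    exponent = (exp_msbs << 8) | ((bits >> 50) & 0xFF)  # plus 8-bit continuation
    significand = lead_digit
    for k in range(4, -1, -1):                   # five declets, most significant first
        d2, d1, d0 = dpd_to_digits((bits >> (10 * k)) & 0x3FF)
        significand = significand * 1000 + 100 * d2 + 10 * d1 + d0
    return sign, exponent, significand

# 1.0: leading digit 0, exponent 398 = 01 10001110 in binary, last declet 001.
assert decode_dpd_finite((0b01000 << 58) | (0b10001110 << 50) | 1) == (0, 398, 1)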

References

[1] Cowlishaw, M. F. (2002). "Densely packed decimal encoding". IEE Proceedings - Computers and Digital Techniques 149 (3): 102-104.
