 Significand

The significand (also called the coefficient or mantissa) is the part of a floating-point number consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction.
Examples
The number 123.45 can be represented as a decimal floating-point number with an integer significand of 12345 and an exponent of −2. Its value is given by the following arithmetic:
 12345 × 10^{−2}
This same value can also be represented in normalized form with a fractional coefficient of 1.2345 and an exponent of +2:
 1.2345 × 10^{+2}
Finally, this value can be represented in the format given by the Language Independent Arithmetic standard and several programming language standards, including Ada, C, Fortran and Modula-2, as:
 0.12345 × 10^{+3}
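A quick sketch of the three conventions above, using Python's decimal module to avoid binary rounding: the same value 123.45 is built from an integer significand, a normalized fractional significand, and the LIA-style significand, each paired with its own exponent.

```python
from decimal import Decimal

# The same value under the three significand/exponent conventions in the text.
integer_form    = Decimal(12345)     * Decimal(10) ** -2  # 12345 × 10^−2
normalized_form = Decimal("1.2345")  * Decimal(10) ** 2   # 1.2345 × 10^+2
lia_form        = Decimal("0.12345") * Decimal(10) ** 3   # 0.12345 × 10^+3

# All three denote exactly the same number.
assert integer_form == normalized_form == lia_form == Decimal("123.45")
print(integer_form, normalized_form, lia_form)
```

Only the split between significand and exponent changes; the represented value does not.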
When working in binary, the significand is characterized by its width in binary digits (bits). Because the most significant bit is always 1 for a normalized number, this bit is not typically stored and is called the "hidden bit". Depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double-precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit significand, not including the hidden bit. The notion of a hidden bit only applies to binary representations.
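This can be observed directly in Python, whose floats are IEEE 754 binary64 values: the reported precision is 53 bits, yet only 52 fraction bits are physically stored, the leading 1 being the hidden bit. A minimal sketch, unpacking the bit fields of the value 1.5:

```python
import struct
import sys

# Python floats are IEEE 754 double precision: 53-bit significand,
# of which 52 bits are stored and 1 is the hidden leading bit.
assert sys.float_info.mant_dig == 53

# Reinterpret the double 1.5 as a 64-bit big-endian integer.
bits = struct.unpack(">Q", struct.pack(">d", 1.5))[0]
stored_fraction = bits & ((1 << 52) - 1)   # the 52 explicitly stored bits
exponent_field  = (bits >> 52) & 0x7FF     # 11-bit biased exponent

# 1.5 is 1.1 in binary: hidden bit 1, stored fraction 100...0 (51 trailing zeros).
assert stored_fraction == 1 << 51
assert exponent_field - 1023 == 0          # unbiased exponent is 0
print(bin(stored_fraction), exponent_field - 1023)
```

The hidden bit never appears in `stored_fraction`; it is implied by the (non-zero) exponent field.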
Use of "mantissa"
In American English, the original word for this seems to have been mantissa (see the Burks et al. reference below), and as of 2005 this usage remains common in computing and among computer scientists. However, this use of mantissa is discouraged by the IEEE floating-point standard committee and by some professionals such as William Kahan and Donald Knuth, because it conflicts with the pre-existing use of mantissa for the fractional part of a logarithm (see also common logarithm).
The fractional part of a logarithm, the original meaning of mantissa, is equal to the logarithm of the significand (for the same base) plus a constant depending on the normalization. By contrast, the relationship between the floatingpoint exponent and the integer part of the logarithm is not affected by normalization.
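The relationship described above can be checked numerically. For a value written in normalized form x = s × 10^e with 1 ≤ s < 10, the integer part of log10(x) is the exponent e and the fractional part, the classical mantissa of the logarithm, equals log10(s). A small sketch using 123.45 = 1.2345 × 10^2:

```python
import math

x = 123.45                       # = 1.2345 × 10^2 in normalized form
log = math.log10(x)
exponent = math.floor(log)       # integer part of the logarithm
log_mantissa = log - exponent    # fractional part: the logarithm's mantissa

# The integer part recovers the floating-point exponent,
# and the fractional part is the log of the significand.
assert exponent == 2
assert math.isclose(log_mantissa, math.log10(1.2345))
print(exponent, log_mantissa)
```

With a different normalization (e.g. 0.1 ≤ s < 1, as in the LIA form), the same fractional part appears shifted by a constant, matching the text's remark about a normalization-dependent constant.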
Etymology
The logarithmic meaning of mantissa dates to the 18th century (OED), from its general English meaning, now archaic, of "minor addition". This meaning stemmed from the Latin word for "makeweight", which in turn may have come from Etruscan. Significand is a 20th-century neologism.
References
 Burks, Arthur W.; Goldstine, Herman H.; von Neumann, John (1946). Preliminary Discussion of the Logical Design of an Electronic Computing Instrument. Technical Report, Institute for Advanced Study, Princeton, NJ. Reprinted in John von Neumann, Collected Works, Vol. 5, A. H. Taub, ed., Macmillan, New York, 1963, p. 42:
 5.3. "Several of the digital computers being built or planned in this country and England are to contain a so-called 'floating decimal point'. This is a mechanism for expressing each word as a characteristic and a mantissa—e.g. 123.45 would be carried in the machine as (0.12345,03), where the 3 is the exponent of 10 associated with the number."