
Normal number (computing)


In computing, a normal number is a non-zero number in a floating-point representation whose magnitude lies within the balanced range supported by a given floating-point format; equivalently, one that can be represented at full precision, without leading zeros in its significand.

The magnitude of the smallest normal number in a format is given by b^emin, where b is the base (radix) of the format (usually 2 or 10) and emin depends on the size and layout of the format.
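
For illustration, a minimal Python sketch (assuming, as on common platforms, that the host's float type is binary64) evaluates b^emin for the two smallest binary formats and checks the binary64 value against sys.float_info.min:

    import sys

    # (base, emin) pairs taken from the IEEE 754 table below.
    for name, b, emin in [("binary32", 2, -126), ("binary64", 2, -1022)]:
        # binary32 -> 1.175494e-38, binary64 -> 2.225074e-308
        print(f"{name}: smallest normal = {float(b) ** emin:.6e}")

    # sys.float_info.min is exactly the smallest positive normal binary64 value.
    assert 2.0 ** -1022 == sys.float_info.min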

Similarly, the magnitude of the largest normal number in a format is given by

b^emax × (b − b^(1−p)),

where p is the precision of the format in digits and emax = (−emin) + 1.
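
Again as a sketch rather than normative text, this formula can be evaluated exactly with Python's fractions module and compared against sys.float_info.max, which holds the largest normal binary64 value:

    import sys
    from fractions import Fraction

    def largest_normal(b: int, p: int, emax: int) -> Fraction:
        # Exact value of b**emax * (b - b**(1 - p)), with no rounding.
        return Fraction(b) ** emax * (b - Fraction(b) ** (1 - p))

    # binary64: p = 53, emax = 1023 (see the table below).
    assert float(largest_normal(2, 53, 1023)) == sys.float_info.max
    # sys.float_info.max == 1.7976931348623157e+308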

In the IEEE 754 binary and decimal formats, p, emin, and emax have the following values:

Format      p     emin     emax
binary16    11    −14      15
binary32    24    −126     127
binary64    53    −1022    1023
binary128   113   −16382   16383
decimal32   7     −95      96
decimal64   16    −383     384
decimal128  34    −6143    6144

For example, in the smallest decimal format (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
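
This range can be modelled with Python's decimal module: a Context with prec=7, Emin=−95, Emax=96 mirrors the decimal32 parameters above (a sketch of the arithmetic behaviour, not of the storage format):

    from decimal import Context, Decimal

    # traps=[] so overflow returns infinity instead of raising an exception.
    d32 = Context(prec=7, Emin=-95, Emax=96, traps=[])

    # The extremes of the positive normal range pass through unchanged.
    assert d32.plus(Decimal("1E-95")) == Decimal("1E-95")
    assert d32.plus(Decimal("9.999999E+96")) == Decimal("9.999999E+96")

    # One step beyond the largest normal overflows to infinity.
    assert d32.plus(Decimal("1E+97")).is_infinite()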

Non-zero numbers smaller in magnitude than the smallest normal number are called denormal (or subnormal) numbers. Zero is neither normal nor subnormal.
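
A short Python sketch (again assuming binary64 floats) shows the boundary: sys.float_info.min is the smallest normal value, and smaller non-zero values are subnormal until they underflow to zero:

    import sys

    smallest_subnormal = 2.0 ** -1074  # 5e-324, the smallest positive binary64
    assert 0.0 < smallest_subnormal < sys.float_info.min

    # The hex form shows the leading-zero significand typical of a subnormal.
    print(smallest_subnormal.hex())   # 0x0.0000000000001p-1022

    # Halving it underflows to 0.0; zero is neither normal nor subnormal.
    assert smallest_subnormal / 2 == 0.0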

See also

Normalized number