Talk:Decimal64 floating-point format

Meaning of m x c?

I guess c is easiest: it is probably 1.c in decimal? Or 0.c? However, m and x are a mystery in this article. What is biasing? I guess there is a factor 10^z, where z somehow depends on m and x, but the tables do not explain this.

Quantum vs Exponent

I'm new to the encoding, but I suspect the range of floating-point numbers is incorrect. At the moment, the range is described as:

±0.000000000000000×10^−383 to ±9.999999999999999×10^384

According to the IEEE 754-2008 standard, the decimal bias is expressed in terms of the quantum exponent (bias = E − q), unlike the binary bias, which is expressed in terms of the exponent (bias = E − e). As such, I believe the range is actually:

±0000000000000000×10^−383 to ±9999999999999999×10^384

(note the lack of a decimal point)

I was hoping someone more familiar with the standard could clarify this?

Mabtjie (talk) 03:34, 19 February 2022 (UTC)[reply]
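
For what it's worth, a minimal Python sketch (my own illustration, not text from the standard) of how the two spellings relate: with precision p = 16, the scientific-notation exponent e and the quantum exponent q of the same value satisfy e = q + (p − 1), so writing the maximum value with an integral significand also changes the power of ten from 384 to 369.

 # Minimal sketch (illustration only, not from IEEE 754): with precision p = 16,
 # the scientific-notation exponent e and the quantum exponent q of the same
 # value are related by e = q + (p - 1).
 p = 16
 
 # The article's upper bound, 9.999999999999999 x 10^384, written both ways:
 with_point    = 9999999999999999 * 10**(384 - (p - 1))  # significand scaled to one digit before the point
 integral_form = 9999999999999999 * 10**369              # integral significand, quantum exponent
 assert with_point == integral_form
 
 # Keeping the exponent 384 while dropping the decimal point would name a
 # different (much larger) number:
 assert 9999999999999999 * 10**384 > with_point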

@Mabtjie: No, the article is correct. The standard says that emax is 384, thus the maximum value is ±9.999999999999999×10^384. It also says that the bias E − q is 398. E is encoded in 10 bits and the first two cannot be 11; thus its maximum value is 1011111111 in binary, i.e. 3×256−1 = 767. Thus the maximum value of q is 767−398 = 369, so that the maximum decimal64 finite value is ±9999999999999999×10^369. This is consistent. — Vincent Lefèvre (talk) 10:31, 19 February 2022 (UTC)[reply]
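
To make the arithmetic in this reply easy to check, here is a short Python sketch (variable names are mine; the constants are the decimal64 parameters cited above):

 # Sketch checking the reply's arithmetic, assuming decimal64's bias of 398
 # and 16-digit significands.
 bias = 398                    # E = q + bias for decimal64
 E_max = 0b1011111111          # largest 10-bit biased exponent whose first two bits are not 11
 assert E_max == 3 * 256 - 1 == 767
 
 q_max = E_max - bias
 assert q_max == 369           # largest quantum exponent
 
 max_finite = (10**16 - 1) * 10**q_max                     # 9999999999999999 x 10^369
 assert max_finite == 9999999999999999 * 10**(384 - 15)    # = 9.999999999999999 x 10^384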