Talk:Decimal64 floating-point format

Meaning of m x c?

I guess c is the easiest: it is probably 1.c in decimal? Or 0.c? However, m and x are a mystery in this article. What is biasing? I guess there is a factor of 10^z, where z somehow depends on m and x, but the tables do not explain this.
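
For reference, a minimal sketch of how biasing works, assuming the IEEE 754-2008 notation in which a finite decimal64 value is (−1)^s × c × 10^q, with integer significand c and quantum exponent q (the function names below are illustrative, not from the standard):

```python
from decimal import Decimal

# decimal64 parameters from IEEE 754-2008
PRECISION = 16             # decimal digits in the significand
BIAS = 398                 # exponent bias
Q_MIN, Q_MAX = -398, 369   # range of the quantum exponent q

def decimal64_value(s: int, c: int, q: int) -> Decimal:
    """Value of a finite decimal64 number: (-1)**s * c * 10**q,
    with sign bit s, integer significand c, and quantum exponent q."""
    assert s in (0, 1) and 0 <= c < 10**PRECISION and Q_MIN <= q <= Q_MAX
    return (-1)**s * Decimal(c).scaleb(q)

def biased_exponent(q: int) -> int:
    """Biasing: the encoding stores the non-negative value E = q + 398,
    so the exponent field never needs a sign of its own."""
    return q + BIAS

print(decimal64_value(0, 1234567890123456, -15))  # 1.234567890123456
print(biased_exponent(-15))                       # 383 is what gets stored
```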

Quantum vs Exponent

I'm new to the encoding, but I suspect the stated range of floating-point numbers is incorrect. At the moment the range is described as:

±0.000000000000000×10^−383 to ±9.999999999999999×10^384

According to the IEEE 754-2008 standard, the decimal bias is expressed in terms of the quantum exponent (bias = E − q), unlike the binary bias, which is expressed in terms of the exponent (bias = E − e). As such, I believe the range is actually:

±0000000000000000×10^−383 to ±9999999999999999×10^384

(note the lack of decimal point)

I was hoping someone more familiar with the standard could clarify this.

Mabtjie (talk) 03:34, 19 February 2022 (UTC)[reply]
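
A sketch (mine, not quoted from the standard) of why the two notations describe the same set of values: with p = 16 significand digits, a number written with an integer significand and quantum exponent q can equally be written with one digit before the decimal point and exponent e = q + (p − 1), so q ∈ [−398, 369] corresponds to e ∈ [−383, 384]. The difference is only where the decimal point is placed:

```python
from decimal import Decimal

P = 16                      # significand digits (precision) for decimal64
Q_MIN, Q_MAX = -398, 369    # quantum-exponent bounds
E_MIN, E_MAX = Q_MIN + (P - 1), Q_MAX + (P - 1)   # -383 and 384

# Largest finite decimal64 value, written both ways:
integer_form    = Decimal(9999999999999999).scaleb(Q_MAX)     # 9999999999999999 x 10^369
scientific_form = Decimal('9.999999999999999').scaleb(E_MAX)  # 9.999999999999999 x 10^384

print(E_MIN, E_MAX)                     # -383 384
print(integer_form == scientific_form)  # True: same value, same quantum
```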