Talk:Floating-point arithmetic/Archive 5
This is an archive of past discussions about Floating-point arithmetic. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Zuse's Z3 floating-point format
There are contradictory documents about the total size and the significand (mantissa) size of the floating-point format of Zuse's Z3. According to Prof. Horst Zuse, the format is 22 bits wide, with a 15-bit significand (implicit bit + 14 represented bits). There has been a recent anonymous change to the article, based on unpublished work by Raúl Rojas, but I wonder whether this is reliable. Raúl Rojas was already wrong about single precision in the Bulletin of the Computer Conservation Society, Number 37, 2006 (he said 22 bits for the mantissa). Vincent Lefèvre (talk) 14:44, 21 September 2013 (UTC)
Error in diagram
The image "Float mantissa exponent.png" erroneously shows that 10e-4 is the exponent, while the exponent actually is only -4 and the base is 10. — Preceding unsigned comment added by 109.85.65.228 (talk) 12:14, 22 January 2014 (UTC)
Failure at Dhahran - Loss of significance or clock drift
This article states in the section http://en.wikipedia.org/wiki/Floating_point#Incidents that the failure at Dhahran was caused by loss of significance. However, the article "MIM-104 Patriot" makes it sound like it was rather simply clock drift. This should be cleared up. — Preceding unsigned comment added by 82.198.218.209 (talk) 14:01, 3 December 2014 (UTC)
- I agree. It isn't a loss of significance as defined by Loss of significance. It is an accumulation of rounding errors (which do not compensate each other) due to the fact that 1/10 was represented in binary (with too low a precision for its usage). In a loss of significance, the relative error increases while the absolute error remains (almost) the same. Here, it is the opposite: the relative error remains (almost) the same, but the absolute error (which is what matters here) increases. Vincent Lefèvre (talk) 00:49, 4 December 2014 (UTC)
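For illustration, here is a minimal C sketch of the arithmetic behind this point, assuming the commonly cited account of the bug (the system counted time in 0.1 s ticks and multiplied by a value of 1/10 chopped to 23 fractional bits in a 24-bit fixed-point register; the 100-hour uptime figure is from the published reports). It shows the absolute error growing linearly with elapsed time while the relative error stays constant:

```c
#include <stdio.h>

int main(void) {
    /* 1/10 chopped to 23 fractional bits: floor(0.1 * 2^23) / 2^23.
       The truncation error per tick is about 9.5e-8 s. */
    double tenth_chopped  = 838860.0 / 8388608.0;
    double per_tick_error = 0.1 - tenth_chopped;

    /* 100 hours of uptime, one tick every 0.1 s */
    long   ticks          = 100L * 3600L * 10L;
    double absolute_error = per_tick_error * ticks;         /* grows linearly */
    double relative_error = absolute_error / (ticks * 0.1); /* stays constant */

    printf("error per tick: %.3e s\n", per_tick_error);
    printf("absolute error: %.3f s after 100 h\n", absolute_error);
    printf("relative error: %.3e\n", relative_error);
    return 0;
}
```

The accumulated error of about 0.34 s matches the figure usually quoted for the incident, while the relative error stays near 10^−6 throughout, which is exactly the opposite of a loss of significance.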
John McLaughlin's Album
Should there be a link to John McLaughlin's album at the top in case someone was trying to go there but went here? 2602:306:C591:4D0:AD55:E334:4141:98FA (talk) 05:49, 7 January 2015 (UTC)
- Done. Good catch! --Guy Macon (talk) 07:05, 7 January 2015 (UTC)
needs simpler overview
Put it this way: I'm an IT guy and I can't understand this article. There needs to be a much simpler summary for non-tech people, using simple English. Right now every other word is another tech term I don't fully understand. -- thanks, Wikipedia Lover & Supporter
- It seems that Mfwitten removed that simple overview, perhaps to enforce WP:ROWN; he called this "streamlining". I have restored my version, additionally reducing the 'bits' part. Still, I am sure the IT department will be happy now. --Javalenok (talk) 18:56, 17 February 2015 (UTC)
Non-trivial Floating-Point Focused computation
The C program intpow.c at www.civilized.com/files/intpow.c may be a suitable link for this topic. If the principal author agrees, please feel free to add it. (Don't assume this is just exponentiation by repeated doubling - it deals with optimal output in the presence of overflow or denormal intermediate results.) — Preceding unsigned comment added by Garyknott (talk • contribs) 23:31, 27 August 2015 (UTC)
Lead
What does "formulaic representation" in the lead sentence mean?
In general, I think we could simplify the lead. I may give it a try over the weekend.... --Macrakis (talk) 18:52, 23 February 2016 (UTC)
Minor technical correctness error
Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format.
These ought to say "less than or equal" instead of "less than", because the powers of two themselves can be exactly represented in single-precision and double-precision IEEE-754 numbers respectively. They are the last such consecutive integers. -- Myria (talk) 00:12, 16 June 2016 (UTC)
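A quick C sketch (assuming IEEE 754 binary32/binary64, as the quoted sentence does) confirming that the powers of two themselves are exactly representable while the next integers above them are not:

```c
#include <stdio.h>

int main(void) {
    float  f1 = 16777216.0f;        /* 2^24: exactly representable */
    float  f2 = 16777217.0f;        /* 2^24 + 1: rounds back to 2^24 */
    double d1 = 9007199254740992.0; /* 2^53: exactly representable */
    double d2 = 9007199254740993.0; /* 2^53 + 1: rounds back to 2^53 */

    printf("float:  %.1f %.1f (equal: %d)\n", f1, f2, f1 == f2);
    printf("double: %.1f %.1f (equal: %d)\n", d1, d2, d1 == d2);
    return 0;
}
```

On an IEEE 754 system both "equal" lines print 1, because 2^24 + 1 and 2^53 + 1 already round (at compile time) to the neighbouring power of two, while the powers of two themselves are preserved exactly.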
Epsilon vs. Oopsilon
Deep in the section Minimizing the effect of accuracy problems there is a sentence:
- Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13).
wherein 'epsilon' is linked to Machine epsilon. Unfortunately this is not the same 'epsilon'. Epsilon as a general term for a minimum acceptable error is not the same as machine epsilon, which is a limitation of a particular hardware floating-point implementation.
As used in the sentence, it would be perfectly appropriate to set that constant 'epsilon' to 0.00001, whereas machine epsilon is derived from the hardware and is something like 2.22e-16. The latter is a fixed value; the former is chosen as a "good enough" guard limit for a particular programming problem.
I'm going to unlink that use of epsilon. I hope that won't be considered an error of sufficiently large magnitude. ;-) Shenme (talk) 08:00, 25 June 2016 (UTC)
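To make the distinction concrete, here is a small C sketch (the 1.0e−13 tolerance is simply the example value from the quoted sentence, not a recommendation): an application-chosen epsilon is a tuning parameter, while DBL_EPSILON from <float.h> is a fixed property of the binary64 format:

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

/* Fuzzy equality with an application-chosen absolute tolerance,
   as in the sentence quoted above. */
static int nearly_equal(double x, double y, double epsilon) {
    return fabs(x - y) < epsilon;
}

int main(void) {
    double x = 0.1 + 0.2;   /* 0.30000000000000004... in binary64 */
    double y = 0.3;

    printf("x == y:              %d\n", x == y);                      /* 0 */
    printf("nearly_equal(1e-13): %d\n", nearly_equal(x, y, 1.0e-13)); /* 1 */

    /* Machine epsilon is a property of the format, not of the problem: */
    printf("DBL_EPSILON:         %.3g\n", DBL_EPSILON);          /* 2.22e-16 */
    return 0;
}
```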
spelling inconsistency floating point or floating-point
The title and first section say "floating point", but elsewhere in the article "floating-point" is used. The article should be consistent in spelling. In IEEE 754 they use "floating-point" with a hyphen. I think that should be the correct spelling. JHBonarius (talk) 14:18, 18 January 2017 (UTC)
- This is not an inconsistency (at least, not always), but usual English rules: when followed by a noun, one adds a hyphen to avoid ambiguity, e.g. "floating-point arithmetic". Vincent Lefèvre (talk) 14:26, 18 January 2017 (UTC)