Talk:Floating-point arithmetic
This is the talk page for discussing improvements to the Floating-point arithmetic article. This is not a forum for general discussion of the article's subject.
Archives: 1, 2, 3, 4, 5 (auto-archiving period: 3 months)
WikiProject Computing (CompSci task force): B-class, Top-importance
WikiProject Computer science: B-class, Top-importance
This page has archives. Sections older than 90 days may be automatically archived by Lowercase sigmabot III when more than 4 sections are present.
Epsilon vs. Oopsilon
Deep in the section Minimizing the effect of accuracy problems there is this sentence:
- Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13).
wherein 'epsilon' is linked to Machine epsilon. Unfortunately, this is not the same 'epsilon': epsilon as a general term for an acceptable error tolerance is not the same as Machine epsilon, which is a fixed property of a particular floating-point implementation.
As used in the sentence, it would be perfectly appropriate to set that constant 'epsilon' to 0.00001, whereas Machine epsilon is derivable from the hardware to be something like 2.22e-16. The latter is a fixed value; the former is chosen as a "good enough" guard limit for a particular programming problem.
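(For illustration, a minimal C sketch of the distinction; the values of x, y and the tolerance are arbitrary choices of mine, not from the article:)

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double x = 1.0 / 3.0;
    double y = 0.333333333;      /* close to x, but not bit-identical */
    double epsilon = 1.0e-9;     /* application-chosen guard limit */

    printf("x == y      : %s\n", (x == y) ? "yes" : "no");                  /* no  */
    printf("fuzzy equal : %s\n", (fabs(x - y) < epsilon) ? "yes" : "no");   /* yes */
    /* Machine epsilon, by contrast, is fixed by the format (binary64): */
    printf("DBL_EPSILON : %g\n", DBL_EPSILON);   /* about 2.2204e-16 */
    return 0;
}
```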
I'm going to unlink that use of epsilon. I hope that won't be considered an error of sufficiently large magnitude. ;-) Shenme (talk) 08:00, 25 June 2016 (UTC)
spelling inconsistency: "floating point" or "floating-point"
The title and first section say "floating point", but elsewhere in the article "floating-point" is used. The article should be consistent in spelling. IEEE 754 uses "floating-point" with a hyphen; I think that should be the correct spelling. JHBonarius (talk) 14:18, 18 January 2017 (UTC)
- This is not an inconsistency (at least, not always), but the usual English rule: when followed by a noun, one adds a hyphen to avoid ambiguity, e.g. "floating-point arithmetic". Vincent Lefèvre (talk) 14:26, 18 January 2017 (UTC)
hidden bit
The article Hidden bit redirects to this article, but there is no definition of this term here (there are two usages, but they are unclear in context unless you already know what the term is referring to). Either there should be a definition here, or the redirection should be removed and a stub created. JulesH (talk) 05:43, 1 June 2017 (UTC)
- It is defined in the Internal representation section. Vincent Lefèvre (talk) 17:56, 1 June 2017 (UTC)
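(For readers arriving via the redirect, a minimal C sketch of what the term denotes, assuming IEEE 754 binary64: a normal number's significand is 1.fraction, and the leading 1 is not stored; it is the "hidden" bit.)

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 1.5;              /* significand is 1.1 in binary */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFULL;   /* low 52 bits */
    /* Only ".1000..." is stored; the leading "1." is implicit. */
    printf("stored fraction = 0x%013llx\n",
           (unsigned long long)fraction);            /* 0x8000000000000 */
    return 0;
}
```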
Seeking consensus on the deletion of the "Causes of Floating Point Error" section.
There is a discussion with Vincent Lefèvre seeking consensus on the deletion of the "Causes of Floating Point Error" section from this article, and on whether this change should be reverted.
Softtest123 (talk) 20:16, 19 April 2018 (UTC)
- It started with "The primary sources of floating point errors are alignment and normalization." Both claims are wrong. First, alignment (of the significands) applies only to addition and subtraction, and it is just an implementation method for a behavior that has (most of the time) already been specified: correct rounding. Thus alignment has nothing to do with floating-point errors. Ditto for normalization.
- Moreover, in the context of IEEE 754-2008, a result can be normalized or not (for the decimal formats and the non-interchange binary formats), but this is a Level 4 consideration, i.e. it does not affect the rounded value, thus does not affect the rounding error.
- In the past (before IEEE 754), significant errors could come from the lack of normalization before an addition or subtraction, but this is the opposite of what you said: the errors were due to the lack of normalization in the implementation of the operation, not due to normalization. Anyway, that's the past. Then this section went on about alignment and normalization...
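(A small C illustration of the correct-rounding point; the example values are mine, not from the discussion. Under round-to-nearest-even, a + b is the exact sum rounded to the nearest double, so any error is purely a rounding error, regardless of how an implementation aligns significands internally.)

```c
#include <stdio.h>

int main(void) {
    double a = 1.0e16;   /* exactly representable in binary64 */
    double b = 1.0;
    /* The exact sum 10000000000000001 is not representable: doubles
       near 1e16 are spaced 2 apart. Correct rounding returns 1e16,
       so the whole error is the rounding of the exact result, not
       an artifact of "alignment". */
    printf("a + b == a ? %s\n", (a + b == a) ? "yes" : "no");   /* yes */
    printf("a + b = %.17g\n", a + b);          /* 10000000000000000 */
    return 0;
}
```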
- The primary source of floating-point errors is actually the fact that most real numbers cannot be represented exactly and must be rounded. But this point is already covered in the article. The errors also depend on the algorithms: those used to implement the basic operations (though in practice this is fixed by the correct-rounding requirement for the arithmetic operations +, −, ×, /, √), and those that use these operations. Note also that there is already a section Accuracy problems about these issues.
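(The classic demonstration of this representation error, as a short C sketch; the constants are the standard textbook example, not from the article:)

```c
#include <stdio.h>

int main(void) {
    /* 0.1, 0.2 and 0.3 have no exact binary64 representation, so
       each literal is already rounded before any arithmetic runs. */
    double x = 0.1 + 0.2;
    printf("0.1 + 0.2 = %.17g\n", x);              /* 0.30000000000000004 */
    printf("x == 0.3 ? %s\n", (x == 0.3) ? "yes" : "no");   /* no */
    return 0;
}
```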
- Vincent Lefèvre (talk) 22:14, 19 April 2018 (UTC)