Talk:Square root algorithms
Reciprocal of the square root
This piece of code is a composite of a quirky square root starting estimate and Newton's method iterations. Newton's method is covered elsewhere; there's a section on Rough estimate already - the estimation code should go there. Also, I first saw this trick in Burroughs B6700 system intrinsics about 1979, and it predated my tenure there, so it's been around a long time. That's well before the drafting of IEEE 754 in 1985. Since the trick is based on a linear approximation of an arc segment of x², which, in the end, is how all estimates must be made, I'm certain that the method has been reinvented a number of times.
There are numerous issues with this code:
- the result of type punning via pointer dereferencing in C/C++ is undefined
- the result of bit-twiddling floating-point numbers, bypassing the API, is undefined
- we don't know whether special values such as zero, infinity, and denormalized floating-point numbers, or big-/little-endian formats, are handled correctly
- it definitely won't work on architectures with other formats, like Unisys mainframes with 48/96-bit floats, or on 64-bit IEEE floats; restructuring the expression to make it work in those cases is non-trivial
- since I can readily find, by testing incremental offsets from the original, a constant which reduces the maximum error, the original constant isn't optimal; it probably resulted from trial and error. How does one verify something that's basically a plausible random number? That it works for a range of typical values is cold comfort. (Because its only use is as an estimate, maybe we don't actually care that enumerable cases aren't handled(?)... they'll just converge slowly.)
- because it requires a multiply to get back the square root, it won't be such a quick estimate relative to others on architectures without fast multiply (if multiply and divide are roughly comparable, it'll be no faster than a random seed plus a Newton iteration)
I think we should include at least an outline of the derivation of the estimate expression, thus: a normalized floating-point number is basically some power of the base multiplied by 1+k, where 0 <= k < 1. The '1' is not represented in the mantissa, but is implicit. The square root of a number near 1, i.e. 1+k, is (as a linear approximation) 1+k/2. Shifting the fp number, represented as an integer, down by 1 effectively divides k by 2, since the '1' is not represented. Subtracting that from a constant fixes up the 'smeared' mantissa and exponent, and leaves the sign bit flipped, so the result is an estimate of the reciprocal square root, which requires a multiply to recover the square root. It's just a quick way of guessing that gives you 1+ digits of precision, not an algorithm.
That cryptic constant is actually a composite of three bitfields, and twiddling it requires some understanding of what those fields are. It would be clearer, though it would take a few more operations, to write that line as a pair of bitfield extracts/inserts. But we're saving divides in the subsequent iterations, so the extra 1-cycle operations are a wash.
"nearest perfect square" in Bakhshali method?
The example in the Bakhshali method has me confused. The initial guess is said to be "x₀² be the initial approximation to S." The example chooses x₀ = 600. How can that be? There are many initial guesses whose squares are perfect squares closer to S, like 400 or 350.
How is the initial guess really meant to be chosen? Unfortunately, the material here (in particular, this example) isn't well-referenced enough to explain how 600 meets the criteria given in the article. -- Mikeblas (talk) 21:06, 26 February 2020 (UTC)
- The method does not require the initial guess to be the closest perfect square. This was only used to obtain a bound on the error. The 600 value is obtained in the above section on scalar estimates and was used as the initial guess in the previous example. --Bill Cherowitzo (talk) 23:27, 26 February 2020 (UTC)
The article referenced here makes clear, by numerical example, that the initial guess does not need to be near the closest perfect square. — Preceding unsigned comment added by Sramakrishna123 (talk • contribs) 22:10, 19 December 2020 (UTC)
Erroneous introduction
The first paragraph of the article is this:
"Methods of computing square roots are numerical analysis algorithms for finding the principal, or non-negative, square root (usually denoted √S, ²√S, or S^(1/2)) of a real number. Arithmetically, it means given S, a procedure for finding a number which when multiplied by itself, yields S; algebraically, it means a procedure for finding the non-negative root of the equation x² - S = 0; geometrically, it means given the area of a square, a procedure for constructing a side of the square."
I do not know much about numerical analysis. But I do know that this is totally misleading!
A "method for computing a square root" of a number does not mean a method for *finding* the square root of that number.
Instead, it means a method for approximating the square root of the number, usually to various degrees of accuracy.
The distinction is essential to understanding what a "method for computing a square root" means. So the article should not mislead readers with an erroneous first paragraph. 66.37.241.35 (talk) 16:39, 7 October 2020 (UTC)
- Is this not dealt with adequately in the second paragraph? JRSpriggs (talk) 17:05, 7 October 2020 (UTC)
JSON or what?
The code language is not identified. It should be. Is it JSON? All traces of JAVA should be exterminated from the planet to the seventh generation. It's in the Bible. — Preceding unsigned comment added by 188.80.214.144 (talk) 23:54, 13 April 2021 (UTC)