
Talk:Square root algorithms

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by SineBot (talk | contribs) at 15:17, 6 October 2020 (Signing comment by 98.164.14.157 - "Square Root algorithm in 1965 Friden EC-132 Electronic Calculator: new section"). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Low-priority on the project's priority scale.

Reciprocal of the square root

This piece of code is a composite of a quirky square-root starting estimate and Newton's-method iterations. Newton's method is covered elsewhere, and there's a section on Rough estimate already; the estimation code should go there. Also, I first saw this trick in Burroughs B6700 system intrinsics about 1979, and it predated my tenure there, so it's been around a long time, well before the drafting of IEEE 754 in 1985. Since the trick is based on a linear approximation of an arc segment of x², which in the end is how all estimates must be made, I'm certain that the method has been reinvented a number of times.

There are numerous issues with this code:

  • the result of type punning via pointer dereferencing in C/C++ is undefined behavior
  • the result of bit-twiddling floating-point numbers, bypassing the API, is undefined
  • we don't know whether values like zero, infinity and denormalized floating-point numbers, or big/little-endian formats, are handled correctly.
  • It definitely won't work on architectures like Unisys mainframes, whose floats are 48/96-bit, or on 64-bit IEEE floats. Restructuring the expression to make it work in those cases is non-trivial.
  • since I can readily find, by testing incremental offsets from the original, a constant which reduces the maximum error, the original constant isn't optimal, probably resulting from trial and error. How does one verify something that's basically a plausible random number? That it works for a range of typical values is cold comfort. (Because its only use is as an estimate, maybe we don't actually care that some cases aren't handled(?)... they'll just converge slowly.)
  • because it requires a multiply to get back a square root, on architectures without fast multiply it won't be such a quick estimate relative to others (if multiply and divide are roughly comparable, it'll be no faster than a random seed plus a Newton iteration).

I think we should include at least an outline of the derivation of the estimate expression, thus: a normalized floating-point number is basically some power of the base multiplied by 1+k, where 0 ≤ k < 1. The '1' is not represented in the mantissa, but is implicit. The square root of a number near 1, i.e. 1+k, is (as a linear approximation) 1+k/2. Shifting the fp number, represented as an integer, down by 1 effectively divides k by 2, since the '1' is not represented. Subtracting that from a constant fixes up the 'smeared' mantissa and exponent and leaves the sign of the exponent flipped, so the result is an estimate of the reciprocal square root, which requires a multiply to convert back to the square root. It's just a quick way of guessing that gives you 1+ digits of precision, not an algorithm.

That cryptic constant is actually a composite of three bitfields, and twiddling it requires some understanding of what those fields are. It would be clearer, though a few more operations, to do that line as a pair of bitfield extracts/inserts. But we're saving divides in the subsequent iterations, so the extra 1-cycle operations are a wash.

Pell's equation?

Pell's equation is a Diophantine equation, meaning it has integer coefficients and requires an integer solution. So in this case n must be an integer (not zero and not a perfect square). Most of the time, we're not computing square roots of integers, because we can look them up. But let's just suppose we want the square root of 1361. First we come up against: Find positive integers p₁ and q₁... This is the hard part; it can be done either by guessing, or by using fairly sophisticated techniques. Guess?? This isn't the same kind of guessing we do in the Initial estimate section, because a bad guess there just means more work. No, a bad guess here is utterly worthless. There's no guessable solution to p² − 1361·q² = 1. (Yes, 1361 is prime, and has a few other properties that make it a particularly irascible number, but in the end, it's just an arbitrary number.) The standard method of solving such Diophantine equations is by continued fractions (one must find the period of the repetend); the problem is that the repetend may be long, extremely long, and it can be that long for seemingly innocuous choices of n. We don't elaborate on that in the section.

Pell's equation is a recursive relationship that allows one to generate infinitely many solutions after finding one. It's finding the first one that's the hump, as it is with any Diophantine equation. And in the usual case, while there may be infinitely many solutions, none of them is small enough to make trial and error a usable approach. I don't think this section ought to be in the article, unless we greatly expand it to demonstrate how to actually solve the equation. Solving a Diophantine equation is a LOT more work than an arithmetic computation of square root. Sbalfour (talk) 22:09, 14 December 2019 (UTC)[reply]

I agree. JRSpriggs (talk) 09:29, 15 December 2019 (UTC)[reply]

What's the programming language used here?

Which languages have been used to describe the algorithms?

2003:E0:5F1B:749A:DC42:9B8D:E639:24FC (talk) 22:09, 7 January 2020 (UTC)[reply]

In general, at Wikipedia we try to avoid using any specific programming language in the articles. If an example is necessary, it is generally done in pseudocode.
Obviously, articles about a particular language are an exception.
In this article, it appears that the three examples are in C. But we do not guarantee that they will work as written. JRSpriggs (talk) 00:38, 8 January 2020 (UTC)[reply]

"nearest perfect square" in Bakhshali method?

The example in the Bakhshali method has me puzzled. The initial guess is said to be such that "x₀² is the initial approximation to S." The example chooses x₀ = 600. How can that be? There are many perfect squares closer to S, like 400² or 350².

How is the initial guess really meant to be chosen? Unfortunately, the material here (in particular, this example) isn't well-referenced enough to explain how 600 meets the criteria given in the article. -- Mikeblas (talk) 21:06, 26 February 2020 (UTC)[reply]

The method does not require the initial guess to be the closest perfect square. This was only used to obtain a bound on the error. The 600 value is obtained in the above section on scalar estimates and was used as the initial guess in the previous example. --Bill Cherowitzo (talk) 23:27, 26 February 2020 (UTC)[reply]

Square Root algorithm in 1965 Friden EC-132 Electronic Calculator

Early electronic calculator square root algorithms: