Talk:Square root algorithms
This article is rated C-class on Wikipedia's content assessment scale.
Merge "Approximations that depend on the floating point representation" into "Initial estimate"
I believe the section "Approximations that depend on the floating point representation" should be merged into "Initial estimate", since it is a special case of "Binary estimates". Merging would make it clear that the floating-point trick gives a rough initial approximation, which is then typically refined iteratively.
I also believe the "Initial estimate" section should appear after the section on Heron's method, since the reader is likely more interested in the general idea of iterative refinement than in an exhaustive account of ways to obtain a good initial estimate.
Additionally, in my opinion the entirety of the article could benefit from some trimming/rewriting, as many sections contain redundant information, unnecessary details, and awkward formulations. BlueRavel (talk) 14:54, 4 December 2023 (UTC)
- Your proposition makes sense to me, and I don't necessarily disagree. That said, as a pure mathematician I am disinclined to blur the lines between programming issues and mathematical problems; I think maintaining the distinction is appropriate. An analysis of the pure mathematical problem of initial estimation in these abstract iterative processes is a decidedly distinct discussion from considerations specific to this or that programming language or architecture. The former is future-proofed, the latter is not. CogitoErgoCogitoSum (talk) 21:09, 11 February 2024 (UTC)
Useful addition??
Not sure if it's useful, but I have found that, in general, √(x + 2) ≈ √x + 1/√x, and if x = n² we get √(n² + 2) ≈ n + 1/n.
Similarly √(x + 4) ≈ √x + 2/√x.
I sometimes use this for quick pencil-and-paper calculations, if I'm close enough to a convenient value.
Not sure if this is a known or established property, whether it is proven or bounded, whether it's already in the article in some alternative form, or whether it's even appropriate for this article. I do know the two-term Taylor series approximation connects these expressions. CogitoErgoCogitoSum (talk) 21:05, 11 February 2024 (UTC)
- There is nothing special about 2 and 4: √(x + c) ≈ √x + c/(2√x) provided that c is small compared to x. This is, in fact, just the first two terms of the series given in the article under the section heading "Taylor series". JBW (talk) 01:45, 13 February 2024 (UTC)
- I don't think they are useful. In the first, you have replaced a square root and an addition with a square root, an addition, and a division to get an approximate answer. Bubba73 You talkin' to me? 08:02, 13 February 2024 (UTC)
Python tweak needed
I noticed the interesting Python example and thought I had better look at it before a gnome removes it as OR or whatever. There is a syntax error in the following line: the inner string should use apostrophes, not quotes, because the overall f-string is delimited with quotes.
print(f"2) {sqrt_Heron(Decimal("3.1415926535897932384626433832795028841971693993"))}")
Johnuniq (talk) 03:42, 21 October 2025 (UTC)
- Fixed. Changed to
print(f"2) {sqrt_Heron(Decimal('3.1415926535897932384626433832795028841971693993'))}")
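The body of `sqrt_Heron` is not shown in this thread; for context, here is a minimal sketch, assuming it is a plain Heron (Babylonian) iteration on `Decimal` values, together with the corrected call:

```python
# Sketch of a Decimal-based Heron iteration (the real sqrt_Heron in the article
# may differ); shows that the corrected quoting parses and runs.
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough precision for the 46-digit literal below

def sqrt_Heron(S):
    """Heron's (Babylonian) method: repeatedly average x with S/x."""
    x = S if S > 1 else Decimal(1)  # crude initial estimate
    for _ in range(60):             # quadratic convergence; 60 steps is ample
        x = (x + S / x) / 2
    return x

# The corrected call: apostrophes inside, quotes delimiting the f-string.
print(f"2) {sqrt_Heron(Decimal('3.1415926535897932384626433832795028841971693993'))}")
```

With the inner apostrophes the f-string parses on any Python 3 version; the original nested double quotes are only legal from Python 3.12 onward.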
Halley's method versus Heron's method
Halley's method should not be mixed into Heron's method, but put into a separate section.
Halley's method requires one division and two multiplications per iteration (the squaring in the numerator can be re-used in the denominator). I do not count the factor of 3 in the denominator because that can be done by a shift and an addition which are negligible compared to multiplication. The 3S in the numerator need only be done once at the outset, so it is negligible also. After the initial division, the quotient is known to sufficient precision that we only need to do one step of the division algorithm which amounts to three multiplications. Thus the cost of one iteration of Halley's method amounts to five multiplications. Five iterations would be 25 multiplications for a 243-fold increase in precision.
By contrast, Heron's method requires only one division per iteration. (Multiplication by one half is just a shift, and the addition is also negligible.) By the same accounting, that division costs three multiplications, so eight iterations of Heron's method cost 24 multiplications for a 256-fold increase in precision. Heron's method thus costs less while giving more precision. (revised once) JRSpriggs (talk) 02:39, 8 November 2025 (UTC)
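For concreteness, the two iterations being compared can be sketched numerically. For f(x) = x² − S, Halley's update is x(x² + 3S)/(3x² + S), which matches the 3S numerator and factor-of-3 denominator described above; the function names below are chosen for this sketch:

```python
# Heron (quadratic convergence):  x' = (x + S/x) / 2
# Halley (cubic convergence):     x' = x * (x*x + 3*S) / (3*x*x + S)
def heron_step(x, S):
    return (x + S / x) / 2

def halley_step(x, S):
    return x * (x * x + 3 * S) / (3 * x * x + S)

S = 2.0
x_heron = x_halley = 1.0
for _ in range(4):
    x_heron = heron_step(x_heron, S)
    x_halley = halley_step(x_halley, S)
print(x_heron, x_halley)  # both converge to sqrt(2) ≈ 1.41421356
```

Per iteration, Halley roughly triples the number of correct digits while Heron doubles it; the cost comparison above is about how many multiplications buy each of those gains.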
Pseudo-code for a possible implementation of Heron's method:
- Input S as a real number.
- If S < 0, then raise an exception (error out).
- If S = 0, then return 0.
- Let X = 1.  (current estimate of the square root of S)
- Let R = 1.  (current estimate of the reciprocal of the square root of S)
- Begin loop.
- Let X = ½(X + R·S).  (one step of Heron's method)
- Let T = max(½, 2 - R·X).  (first half of updating R)
- If (1 - tolerance) < T < (1 + tolerance), then return X.  (tests whether we are done while avoiding another multiplication)
- Let R = R·T.  (second half of updating R, completing one step of the division algorithm)
- End loop.
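The pseudocode above translates almost line for line into Python. This is a sketch for illustration (the function name and the default tolerance are chosen here, not part of the pseudocode):

```python
# Heron's method with a running reciprocal estimate R ≈ 1/sqrt(S),
# following the pseudocode above step by step.
def heron_sqrt(S, tolerance=1e-12):
    if S < 0:
        raise ValueError("square root of a negative number")  # error out
    if S == 0:
        return 0.0
    X = 1.0  # current estimate of the square root of S
    R = 1.0  # current estimate of the reciprocal of the square root of S
    while True:
        X = 0.5 * (X + R * S)      # one step of Heron's method
        T = max(0.5, 2.0 - R * X)  # first half of updating R
        if 1.0 - tolerance < T < 1.0 + tolerance:
            return X               # done, while avoiding another multiplication
        R = R * T                  # second half: one step of the division algorithm

print(heron_sqrt(2.0))  # ≈ 1.41421356
```

Note how the explicit division S/X of textbook Heron is replaced by the multiplication R·S, with R refined by the Newton reciprocal step R ← R(2 − R·X), which is the cost-accounting point made in the comment above.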