Talk:Fibonacci sequence/Archive 4
This is an archive of past discussions about Fibonacci sequence. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Computation Time of n-th Fibonacci Number
The article on Wikipedia writes the following: These last two identities provide a way to compute Fibonacci numbers recursively in O(log(n)) arithmetic operations and in time O(M(n) log(n)), where M(n) is the time for the multiplication of two numbers of n digits. I think this can be improved, using the fact that M(2n) is at least 2*M(n), as the time is at least linear (in order to read all input bits). Thus M(n) <= (1/2)*M(2n), and so, using the telescoping-sum property of this recursion formula, one would get O(M(n)) instead of O(M(n) * log(n)). Frank Stephan — Preceding unsigned comment added by 116.14.205.177 (talk) 00:53, 27 January 2021 (UTC)
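For concreteness, here is a minimal sketch (an editor's illustration, not taken from the cited source) of the index-doubling recursion the quoted sentence alludes to, based on the identities F(2k) = F(k)*(2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2; the function name fib_fast_doubling is just a placeholder:

```python
def fib_fast_doubling(n):
    """Return (F(n), F(n+1)) using the index-doubling identities.

    F(2k)   = F(k) * (2*F(k+1) - F(k))
    F(2k+1) = F(k)**2 + F(k+1)**2

    Only O(log n) arithmetic operations are performed; the running
    time is dominated by the big-integer multiplications near the top
    of the recursion.
    """
    if n == 0:
        return (0, 1)
    a, b = fib_fast_doubling(n // 2)   # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)                # F(2k)
    d = a * a + b * b                  # F(2k+1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)                  # (F(2k+1), F(2k+2))


print(fib_fast_doubling(10)[0])  # expected: 55
```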
- This sounds right, but we need a published source with the correct time bound. Do you know of one? —David Eppstein (talk) 06:02, 27 January 2021 (UTC)
- A little bit different, but the n-th Fibonacci number can be computed directly (no recursion or iteration). Bubba73 You talkin' to me? 06:10, 27 January 2021 (UTC)
- You mean the formula involving taking the golden ratio to powers? I don't think it's correct to say that it can be computed any more directly that way. It gives a closed-form formula, but a closed-form formula and a computation are not the same thing. There are three problems with using that formula to compute: (1) you have to be able to exponentiate, and that is not generally a built-in machine operation; (2) you need to compute irrational square roots to large numbers of digits, proportionally the same number of digits as you need for the Fibonacci number you are trying to compute, and again that is not generally a built-in operation; and (3) you have to worry about keeping the error of the non-integer arithmetic small enough that you get the right result when you round it to an integer. Instead, the recursion the IP is talking about can be expressed in a different way as a closed-form exponentiation of 2x2 integer matrices, still requiring (matrix) exponentiation but avoiding all the other numeric issues. The recursion is just what you get when you apply binary exponentiation to this matrix formula and simplify a little using the symmetry of the matrix. —David Eppstein (talk) 07:49, 27 January 2021 (UTC)
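To illustrate point (3) concretely, here is a small experiment (an editor's sketch, not drawn from any source; the helper names are arbitrary) that rounds the double-precision Binet formula and compares it with an exact iterative computation, reporting the first index at which the two disagree:

```python
import math

def fib_iterative(n):
    """Exact n-th Fibonacci number via O(n) integer additions."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2

def fib_binet_float(n):
    """Binet's formula evaluated in IEEE double precision, then rounded."""
    return round(PHI ** n / SQRT5)

# Find the first index where the floating-point formula goes wrong.
n = 1
while fib_binet_float(n) == fib_iterative(n):
    n += 1
print("double-precision Binet first fails at n =", n)
```

The exact failure index depends on the floating-point format and rounding, which is precisely why the closed-form formula is not a safe computational method for large n without extended-precision arithmetic.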
The "direct computation" using phi^n is exactly (one, but not the best method) which yields the log2(n) multiplications and squarings. Simply stated: Computing x^n is O(log n). — MFH:Talk 19:07, 11 September 2021 (UTC)
- It is a logarithmic number of operations, but that is very different from saying that it takes logarithmic time. For this sort of computation, it is not reasonable to use a model of computation where arithmetic operations are assumed to take constant time; that's only good for problems where the numbers stay small, and here they don't. To compute the time properly, you have to take into account the length of the numbers involved and the time for arithmetic operations on numbers of that length. —David Eppstein (talk) 19:36, 11 September 2021 (UTC)
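To make this accounting explicit (an editor's sketch of the standard back-of-the-envelope bound, not from a source): assume, as in the first post, that M(2m) >= 2*M(m), and note that the numbers handled i levels below the top of the doubling recursion have roughly n/2^i digits, where n is the number of digits of the final result, with O(1) multiplications per level. Then

\[
\sum_{i \ge 0} M\!\left(\frac{n}{2^{i}}\right) \;\le\; M(n) \sum_{i \ge 0} 2^{-i} \;=\; 2\,M(n) \;=\; O(M(n)),
\]

so the total bit-level cost is dominated by the topmost, full-size multiplications rather than by the count of arithmetic operations.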
Comments:
- The complexity assertion quoted at the beginning of the thread is not in the reference, and thus not sourced. It is not an immediate consequence of the identities or of the content of the source. Moreover, it is incomplete as the space complexity is not considered.
- Using the golden ratio for estimating the complexity is not a good idea, as this requires estimating the number of digits of the golden ratio that are needed, computing them, and doing the whole computation with this number of digits.
- One of the standard ways of computing F(n) is given at the beginning of § Matrix form: if A is the 2×2 matrix [[1, 1], [1, 0]], then F(n) is the entry of the second row and first column of A^n. So, F(n) can be computed by using exponentiation by squaring applied to A (see the sketch after this comment). As squaring a 2×2 matrix needs 5 multiplications and 4 additions, the whole computation needs O(log n) multiplications, and at most the same number of additions. Moreover, as each squaring doubles the size of the integers, the whole computation takes at most twice the time of the last squaring, and needs an auxiliary space that is at most twice the size of the output. Thus, the time complexity is O(M(n)) for the arithmetic operations; this is exactly the claim of the first post of this thread, but with a much simpler algorithm.
- I have no source at hand for these assertions, but the use of the matrix form and of exponentiation by squaring is standard. The estimation of the complexity when each step doubles the size is standard in other contexts, for example for proving that using Newton's method for computing the inverse or the square root of an integer gives a complexity of O(M(n)), where n is the number of desired digits.
So, I suggest removing the paragraph quoted at the beginning, and replacing it with a section on the different standard methods of computation, with complexity estimates. There are plenty of sources, as textbooks that present the different methods of implementation (recursive, recursive with memoization, iterative, ...) often take the Fibonacci sequence as an example, because it is a simple case where the method of implementation may dramatically change the complexity. D.Lazard (talk) 08:46, 12 September 2021 (UTC)
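As an illustrative sketch of the matrix method described in the comment above (the code and names are an editor's illustration, not from a source), the following computes F(n) by exponentiation by squaring of the matrix [[1, 1], [1, 0]] in exact integer arithmetic, reading the result from the second row and first column. For readability it uses the generic 8-multiplication 2×2 product rather than the optimized 5-multiplication squaring mentioned above:

```python
def mat_mult(X, Y):
    """Multiply two 2x2 integer matrices given as ((a, b), (c, d))."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def fib_matrix(n):
    """Return F(n) as the (2, 1) entry of [[1, 1], [1, 0]]**n,
    computed with O(log n) matrix products via exponentiation by squaring."""
    result = ((1, 0), (0, 1))          # identity matrix
    base = ((1, 1), (1, 0))
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)    # squaring step
        n //= 2
    return result[1][0]                # second row, first column

print([fib_matrix(k) for k in range(8)])  # expected: [0, 1, 1, 2, 3, 5, 8, 13]
```

A production version would exploit the symmetry of the powers of this matrix to square with fewer multiplications, as noted above; the sketch keeps the generic product to stay short and obviously correct.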
- A version of the claim can be found in Matoušek's "Thirty-three miniatures" (in fact it is Miniature 1): he uses exponentiation by squaring of the matrix to compute the nth Fibonacci number in at most 2 log(n) multiplications of 2-by-2 matrices. He then goes on to note that, because the Fibonacci numbers grow exponentially, the individual arithmetic operations are themselves slow (but does not attempt to quantify this, or to synthesize an overall bound). (Our present article frames this slightly differently -- in terms of the entries of the matrix rather than the matrices themselves -- but of course it's essentially the same procedure.) --JBL (talk) 13:46, 12 September 2021 (UTC)
- There are lots of sources for the fast computation method itself, whether by powering matrices or (what amounts to the same thing) using the index-doubling recurrences. I think one of the earliest may be in one of the letters of Dijkstra. But what we need is not just that, but the detailed time analysis. —David Eppstein (talk) 19:26, 12 September 2021 (UTC)