Talk:Computational complexity of mathematical operations

Turing machine vs. RAM

I suppose that where you say:

"Here, complexity refers to the time complexity of performing computations on a Turing machine."

you should say:

"Here, complexity refers to the time complexity of performing computations on a RAM (random-access machine)."

Usually, when talking about polynomial-time operations, RAM models are used instead of Turing machines. This is because a RAM can be simulated by a Turing machine, but only with a quadratic overhead (a statement of this is sketched at the end of this thread). The two models are equally powerful, but the minimum time required for a given problem can change between them.

Thanks for pointing this out. Fredrik Johansson 19:52, 31 March 2007 (UTC)
The complexity of integer multiplication given in the article is for multitape Turing machines (as defined in the book by Strassen et al.); it is possible to multiply faster on random-access machines. (As far as I know, the remaining complexity results stated in the article are not affected.) Mm (talk) —Preceding undated comment was added at 15:49, 25 January 2009 (UTC).
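As an editorial footnote to this thread, the two standard facts being invoked can be stated as follows (the notation is mine; M(n) denotes the bit complexity of multiplying two n-bit integers, and the multitape bound is taken to be the Schönhage–Strassen one):

    M_{\mathrm{TM}}(n) = O(n \log n \log \log n), \qquad
    t_{\mathrm{TM}}(n) = O\left( t_{\mathrm{RAM}}(n)^{2} \right)

That is, an upper bound proved on a RAM transfers to a multitape Turing machine only up to a quadratic overhead, so the choice of machine model genuinely matters for statements this fine-grained.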

Exponentiation

Why isn't exponentiation listed? —Preceding unsigned comment added by 90.128.188.188 (talk) 12:00, 17 May 2008 (UTC)

Perhaps because the most commonplace clever exponentiation algorithm, the one at the heart of discrete-logarithm cryptographic techniques, does not actually calculate the full product of the repeated multiplications: a number that occupies O(2^160) bits does not fit in RAM or on hard disk on conventional computers, and would take too long to calculate anyway. Instead, it calculates that product modulo a prime number, which permits decomposing the exponent into sums and products of even and odd numbers (or other combinations of additions and subtractions) that equal the value of the exponent. Thus, the two are not an apples-to-apples comparison. Likewise, calculating exponentiation in base b is as small as L(1), where L is the complexity of your chosen left-shift operation. For example, on O(n) base-2-based hardware, a left shift appears to software as O(1) because the hardware provides O(n) logic gates to accomplish the operation. In a positional number system for very large numbers, shifting 1 to the left by one million bits implies O(n) writes to memory to zero out a run of words: 1,000,000/32 = 31,250 writes to RAM on a 32-bit processor, or 1,000,000/64 = 15,625 writes on a 64-bit processor. Clearly, as the number of digits increases beyond the natural register size of the processor, left shift in a positional number system has a time complexity of O(n). But anyone interested in efficient exponentiation in a base-b number system would likely also be interested in the time complexity of converting to and from base b. Knuth's Seminumerical Algorithms discusses base-b conversions; this is easier to find in the table of contents than in the index, by the way. —optikos (talk) 17:31, 29 June 2008 (UTC)
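For concreteness, here is a minimal sketch of the square-and-multiply modular exponentiation the comment alludes to. The function name and the check against Python's built-in three-argument pow are illustrative, not taken from the article:

    def mod_pow(base, exponent, modulus):
        # Right-to-left binary (square-and-multiply) modular exponentiation.
        # Every intermediate value stays below modulus**2, so the enormous
        # full product of the repeated multiplications is never materialized.
        result = 1 % modulus              # handles modulus == 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:              # this bit of the exponent is set
                result = (result * base) % modulus
            base = (base * base) % modulus    # square for the next bit
            exponent >>= 1
        return result

    # Sanity check against Python's built-in three-argument pow:
    assert mod_pow(7, 10**6, 2**127 - 1) == pow(7, 10**6, 2**127 - 1)

The loop runs once per bit of the exponent, so a 160-bit exponent costs about 160 squarings plus at most 160 multiplications, all on modulus-sized operands.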

What does the lack of a fastest algorithm mean?

I think that someone who is familiar with Schnorr's and Stumpf's work should explain the claim that no multiplication algorithm is fastest. Either this is trivially true for all algorithms, not merely for multiplication (there are always some small-size scenarios where one of the high-growth-rate algorithms outperforms the slower-growth-rate algorithms), or it is misleadingly worded and presented. Certainly, the shallowest meaning cannot be what was intended: that no matter how many civilizations expended a seemingly infinite amount of effort on multiplication-algorithm research, there would always be a faster (i.e., slower-growth-rate) multiplication algorithm yet to be discovered. Even if such an odd reading is what is intended, isn't that trivially true for all algorithms, not merely multiplication? Please explain further, especially in the article. —optikos (talk) 17:31, 29 June 2008 (UTC)

Fastest means asymptotically fastest. It is not true that a faster algorithm can always be found: it is easy to prove that the ordinary O(n) algorithm is optimal for addition (because each digit has to be visited at least once). Fredrik Johansson 19:15, 29 June 2008 (UTC)
Your response is a non sequitur. Plus, you did not modify the article, as I requested. I am asking that the article overtly explain what "is not the fastest" means exactly. What does Schnorr's and Stumpf's work precisely say when it makes the claim that Fürer's algorithm is not the asymptotically fastest multiplication algorithm? Does Schnorr's and Stumpf's work say that Bridgeman's O(n) multiplication algorithm, to be published in the year 2032, is asymptotically still faster? Or does it likewise say in 2032 that Bridgeman's algorithm is once again not the asymptotically fastest multiplication algorithm and that a still faster one exists, and so forth for all subsequent multiplication algorithms? This concept of what "not the asymptotically fastest" means is not presented in this article at all. Indeed, it is not presented in this discussion page at all, including in your non sequitur response. If it is not fully explained, I am going to remove the cryptic, unexplained reference to Schnorr's and Stumpf's work. At a very bare minimum, this article must hyperlink to another Wikipedia article that explains what Schnorr's and Stumpf's "not the asymptotically fastest" claim precisely means, if it does not explain it herein. —optikos (talk) 01:35, 3 August 2008 (UTC)
It means exactly what it says. There is no (asymptotically) fastest algorithm. In other words, for every algorithm, you can find another algorithm that is (asymptotically) faster; the second option in what you say above. I don't know a clearer way to explain this, but I'm open to suggestions. -- Jitse Niesen (talk) 13:07, 3 August 2008 (UTC)
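One way to state the claim formally (this is my paraphrase of the discussion above, not a quotation of the cited work): writing t_A(n) for the running time of a multiplication algorithm A, the claim is

    \forall A \;\exists A' : \quad t_{A'}(n) = o\left( t_A(n) \right)

That is, the infimum of achievable running times is not attained by any single algorithm. Whether "faster" means little-o or some weaker form of improvement depends on the precise theorem being cited.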

Long division

Why is long division assumed to involve a single multiplication? Methinks that the O(n^2) of schoolbook long division, which is presented as M(1) where the chosen multiplication algorithm is schoolbook multiplication, is not based on the same foundations as the M(n) of Newton's method. I suspect that long division should be presented as O(n^3) here: for n digits there are n multiplication–subtraction pairs; multiplication dominates subtraction in complexity; thus there are a quantity n of O(n^2) schoolbook multiplications. O(n) times O(n^2) is O(n^3) for schoolbook long division. —optikos (talk) 17:31, 29 June 2008 (UTC)

There are O(n^2) single-digit multiplications, which take time O(n^2). Newton's method can find the answer with a fixed number of full-length multiplications, for time O(k * M(n)) = O(M(n)). CRGreathouse (t | c) 00:51, 6 July 2008 (UTC)
Then I consider our notation here to be a highly confusing abuse of Landau notation. Here M(n) means one invocation of the n-bit multiplication algorithm, not a quantity of n multiplications. Perhaps a notation such as M_n(1) would more directly state that Newton's method performs a constant quantity of n-bit multiplications. To me, M(n) implies n multiplications, not a single n-bit multiplication. Thus this abuse of Landau notation appears to imply n*O(n * log n * 2^(log* n)) = O(n^2 * log n * 2^(log* n)), which is clearly wrong. —optikos (talk) 01:35, 3 August 2008 (UTC)
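To make the "fixed number of full-length multiplications" point concrete, here is a rough sketch of integer division via a Newton iteration for the reciprocal. It is illustrative only: the function names and the 30-bit seed precision are mine, and a production version would double the working precision at each step (which is what brings the total cost down to O(M(n)); as coded here, with every step at full precision, it is O(M(n) log n)):

    def newton_reciprocal(b, k):
        # Fixed-point reciprocal: returns x with x ~= 2**k // b, using the
        # Newton iteration x <- x * (2 - b*x / 2**k), all at scale 2**k.
        s = max(b.bit_length() - 30, 0)
        x = (1 << (k - s)) // (b >> s)        # cheap ~30-bit seed
        for _ in range(k.bit_length()):       # the error squares each step
            x = (x * ((2 << k) - b * x)) >> k
        return x

    def newton_div(a, b):
        # Floor division a // b (a >= 0, b > 0) via the fixed-point
        # reciprocal, with a final correction for the last units of error.
        k = 2 * a.bit_length() + 2
        q = (a * newton_reciprocal(b, k)) >> k
        while (q + 1) * b <= a:               # fix a possible underestimate
            q += 1
        while q * b > a:                      # fix a possible overestimate
            q -= 1
        return q

    a, b = 3**200, 7**50
    assert newton_div(a, b) == a // b

Note that M(n) in the loop body appears as a constant number of big-integer multiplications per iteration, exactly the sense CRGreathouse describes.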

Matrix Inversion

Can someone add "matrix solution", i.e., finding the solution of Ax = b without explicitly finding A^-1? This is faster than using matrix inversion and can be done using LU decomposition, I think. —Preceding unsigned comment added by 93.97.48.217 (talk) 11:23, 22 October 2008 (UTC)

Actually, inverting with Strassen's algorithm takes less time than an LU decomposition. I'm not sure what methods are used, practically speaking, to find the solution for a matrix equation with large matrices. LU decomposition is certainly more stable, but if the matrices are large enough I imagine faster methods are desirable. (If a 100 × 100 matrix equation takes the same amount of time with Strassen's algorithm and LU decomposition, I'd expect a 1,000,000 × 1,000,000 matrix equation to be 5–6 times faster with Strassen's.)
I don't know if there's a fast (subcubic) algorithm that avoids the numerical instability of computing a literal inverse.
CRGreathouse (t | c) 15:06, 18 January 2009 (UTC)
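As a practical illustration of the original request (standard NumPy calls; the matrix size and random seed are arbitrary), this solves Ax = b directly, without ever forming A^-1:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    # Direct solve: LAPACK performs an LU factorization with partial
    # pivoting internally; the inverse is never formed.
    x = np.linalg.solve(A, b)

    # Explicit-inverse route: roughly three times the work and usually
    # a larger residual.
    x_via_inv = np.linalg.inv(A) @ b

    print(np.linalg.norm(A @ x - b))          # typically smaller
    print(np.linalg.norm(A @ x_via_inv - b))

Both routes are O(n^3) with classical arithmetic; the difference is in the constant factor and the numerical stability, which is the trade-off discussed above.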

k-way Toom–Cook multiplication ε

User EmilJ added a value for ε in the k-way Toom–Cook multiplication: ε = log(2k − 1)/log k. I suppose the correct form of the big-O bound with that value of ε is O(n^ε). I will edit the page. Kaluppollo (talk) 14:49, 5 November 2009 (UTC)
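A quick worked check of the formula (the arithmetic is mine; the k = 2 case reproduces the familiar Karatsuba exponent):

    \varepsilon = \frac{\log(2k-1)}{\log k}: \qquad
    k = 2 \Rightarrow \tfrac{\log 3}{\log 2} \approx 1.585, \quad
    k = 3 \Rightarrow \tfrac{\log 5}{\log 3} \approx 1.465, \quad
    k = 4 \Rightarrow \tfrac{\log 7}{\log 4} \approx 1.404

As expected, the exponent decreases toward 1 as k grows, ignoring the constant factors, which grow with k.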