
Talk:Time complexity

From Wikipedia, the free encyclopedia

P and NP Problems.

My question is: if N^3 is a P-type problem and it takes almost 3 hours to solve, then what is the value of N^3? 2402:8100:2652:B42D:61B8:A047:8958:1CFB (talk) 07:51, 29 March 2022 (UTC)[reply]

What is T(n)?

In the section on 'Constant Time', the function T(n) first appears without any prior explanation. There should be a section differentiating between time complexity and the order/degree of time complexity. — Preceding unsigned comment added by Granzer92 (talkcontribs) 07:33, 26 May 2022 (UTC)[reply]

Article too long

This article is too long. I suggest splitting it into separate articles. At the very least, polynomial time should have its own article. Other important complexity classes should have their own articles, too. 2601:547:B05:ECD:ED93:4837:4D47:671 (talk) 06:29, 31 January 2023 (UTC)[reply]

At 33kB "readable prose size" we are at the point where WP:TOOBIG says "Length alone does not justify division". But some of these time classes may well work better as standalone topics. The bigger problem is that many sections are completely unsourced. With sources, they could stand up a lot better to expansion and splitting off into separate articles. Without sources, we cannot make articles out of them. —David Eppstein (talk) 07:58, 31 January 2023 (UTC)[reply]

Is factorial too narrow?

Is "factorial" too narrow a class? The other at-least-subexponential classes mentioned here are closed under multiplication of the exponent by a constant, but this definition of factorial is not. Factorial lies between 2^((n/2)log(n/2)) and 2^(n log n), so maybe 2^O(n log n) is a more useful definition: it is more in the spirit of the other at-least-subexponential classes, and it captures similar functions like n^n. The examples mention bogosort, which is O(n*n!); that is actually not O(n!), but it is 2^O(n log n).
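The stated bounds can be checked numerically; a quick illustrative sketch in Python (not part of the article, just a sanity check of the inequalities above):

```python
import math

# Check (n/2)*log2(n/2) <= log2(n!) <= n*log2(n), which is the sense in
# which n! = 2^Theta(n log n). Also check that n * n! (bogosort's expected
# number of operations) is still 2^O(n log n).
for n in [8, 64, 512]:
    log_fact = math.log2(math.factorial(n))
    lower = (n / 2) * math.log2(n / 2)
    upper = n * math.log2(n)
    assert lower <= log_fact <= upper
    # log2(n * n!) = log2(n) + log2(n!), still O(n log n):
    assert math.log2(n * math.factorial(n)) <= 2 * upper
```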

SvenSandberg (talk) 07:08, 14 October 2023 (UTC)[reply]

Definitional Issue in the "Superpolynomial" Section

Currently the description says a superpolynomial runtime is one that is not polynomially bounded, and then the text says that it grows faster than any polynomial, i.e. T(n) = ω(n^c) for every constant c.

The above is misleading: consider an artificial function T(n) that equals 2^n when n is a power of two and n otherwise. This function is neither polynomially bounded, nor ω(n^c) for any c. Asd00012334 (talk) 21:58, 29 May 2025 (UTC)[reply]
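An oscillating function with these two properties can be sketched concretely as follows (an illustrative choice; any placement of the exponential spikes gives the same phenomenon):

```python
# T(n) is exponential on powers of two, so it is not O(n^c) for any c,
# yet it is linear everywhere else, so it is also not omega(n^c) for
# any c >= 2: it is neither polynomially bounded nor superpolynomial
# in the omega sense.
def T(n: int) -> int:
    is_power_of_two = n > 0 and (n & (n - 1)) == 0
    return 2 ** n if is_power_of_two else n

assert T(64) == 2 ** 64   # exponential spike
assert T(100) == 100      # linear elsewhere
```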

It generally does not make sense to categorize piecewise-defined functions like this using the normal time complexity rules, since time complexity discusses limits, not specific instances. Since time complexity is usually discussed in the limit in the worst case, your function T(n) would have exponential time complexity, as, in the worst case in the limit, it grows exponentially and eventually exceeds n^c for any c. The fact that at most values it is much easier to evaluate does not change its order of growth. Elliptical Reasoning (talk) 22:08, 29 May 2025 (UTC)[reply]
@Elliptical Reasoning The issue here is *independent* of whether worst case or average case is being considered. The runtime as a function of input size is a well-defined function, and regardless of the worst-case/average-case distinction, it can certainly be neither polynomially bounded nor superpolynomial. Asd00012334 (talk) 22:16, 29 May 2025 (UTC)[reply]
More concretely, consider an EXPTIME-complete problem L. We can construct the piecewise problem that agrees with L on inputs of even length and is trivial on inputs of odd length. The worst-case time complexity of this problem is neither polynomially bounded nor superpolynomial. Asd00012334 (talk) 22:27, 29 May 2025 (UTC)[reply]
Ok, if I understand right, your objection is based on the use of ω in this description: since little-omega notation asserts that a dominance holds at all sufficiently large points, the definition is obviously ill-formed for functions that cannot be evaluated in the limit. My understanding is that such functions are not normally evaluated using complexity classes for exactly this reason: the definitions are ill-formed.
What change do you propose in the article in response to this concern? It's possible a note in the introduction that some functions do not lend themselves to complexity class analysis would be appropriate. If you're aware of a suitable source that gives a more robust definition of superpolynomial time complexity it would absolutely be appropriate to add that formal definition and the appropriate citation. I would be averse to simply removing the current definition, which is readily understood and is correct in the intended domain. Elliptical Reasoning (talk) 22:37, 29 May 2025 (UTC)[reply]
My suggestion is to change the following sentence
"An algorithm is defined to take superpolynomial time if T(n) is not bounded above by any polynomial."
to something along the lines of
"An algorithm takes superpolynomial time if T(n) grows asymptotically faster than any polynomial."
I also suggest mentioning that there are examples where T(n) is neither polynomially bounded nor superpolynomial. For the same reason, it may also be insightful to mention that the definition varies between authors.
I see no objection since yesterday; hence, I am going ahead and adjusting the main article. Asd00012334 (talk) 13:36, 30 May 2025 (UTC)[reply]
In addition, I suggest the community make the corresponding change in the other language editions as well. Asd00012334 (talk) 14:25, 30 May 2025 (UTC)[reply]
Your example is interesting. However, it does not belong to complexity theory. It shows that not being in O(n^c) for any c is not the same as being in ω(n^c) for every c. The fact that "not O" is not easily related to ω is well known in complexity theory, and therefore one cannot mix big-O and little-ω notation.
Moreover, your example is WO:OR, as it is not in any reliable source on complexity theory (except maybe, for enforcing what said above).
So, I'll revert your edit and remove the ω notation from the section. D.Lazard (talk) 14:48, 30 May 2025 (UTC)[reply]
By default, when one refers to a function being "superpolynomial," one means more than that it is not polynomially bounded: one means that it grows asymptotically faster than all polynomials. Hence, I don't think you should remove the ω notation.
The same applies when one refers to superlinear functions: one typically means ω(n), not merely anything outside of O(n).
PS. What does WO:OR mean here? Asd00012334 (talk) 15:00, 30 May 2025 (UTC)[reply]
I believe it's an attempt to reference the Wikipedia No Original Research policy: WP:OR. - CRGreathouse (t | c) 15:11, 30 May 2025 (UTC)[reply]
I see. I find it strange to invoke the WP:OR policy here, as the example is NOT original research, but a simple mathematical fact. Asd00012334 (talk) 15:22, 30 May 2025 (UTC)[reply]
A counterexample is a mathematical fact, but providing a counterexample that has never been published is original research. But the true motivation of my revert is that this discussion does not belong to complexity theory, and that your edit was not a good way to fix the previous error. Nevertheless, many thanks for having pointed this error out. D.Lazard (talk) 17:27, 30 May 2025 (UTC)[reply]
@D.Lazard I'm fine removing the example. My main concern is the following:
When speaking of superpolynomial running time, there are two in-equivalent possible definitions:
(A) superpolynomial as in the runtime is not bounded by any polynomial.
(B) superpolynomial as in the runtime scales faster than any polynomial.
Your modification favours (A), but I see no justification for doing so. In fact, I believe (B) makes more sense because the prefix "super" quite literally denotes exceeding or going beyond something. Definition (A) allows a part of a superpolynomial function to grow slower than any polynomial --- this is very far from being "super." Asd00012334 (talk) 23:38, 30 May 2025 (UTC)[reply]
I'd be totally fine if we take (B) instead of (A). Asd00012334 (talk) 23:43, 30 May 2025 (UTC)[reply]
I do not know what your definition of "asymptotically faster" is. For the definition that I know, (A) and (B) are equivalent. In any case, "superpolynomial time" must include everything that is not polynomial time (or less), and this is the justification for my formulation, which negates the definition of polynomial time: the negation of "T(n) = O(n^c) for some c" is "T(n) ≠ O(n^c) for every c". This is the motivation for my choice. D.Lazard (talk) 00:21, 31 May 2025 (UTC)[reply]
@D.Lazard My definition of growing asymptotically faster than any polynomial is T(n) = ω(n^c) for every c. Asd00012334 (talk) 10:58, 31 May 2025 (UTC)[reply]
So, if one accepted this definition of superpolynomial, your example would imply that there could be algorithms and problems that are neither polynomial time nor superpolynomial time. As far as I know, your definition of asymptotically faster is common when dealing with lower bounds on complexity. For upper bounds, the subject of this article, this is not the definition that is used. Having two different notions of "asymptotically faster" is not harmful, since upper and lower bounds on complexity are very distinct problems. However, the ambiguity of the term implies that it is a bad idea to use it in this article. D.Lazard (talk) 16:34, 31 May 2025 (UTC)[reply]
Apologies if I am being annoying.
There is only one natural definition of being "asymptotically faster," for the following reason. If we say f is asymptotically faster than g, then regardless of the chosen definition, one would naturally expect that g is asymptotically slower than f, for which there is unambiguously only one definition: namely g = o(f). But that is mathematically equivalent to saying that f = ω(g). Hence f = ω(g) is the only natural definition of f being asymptotically faster than g.
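The equivalence invoked here is a definition-level duality between o and ω; spelled out in standard notation (not specific to this thread):

```latex
% g = o(f): for every constant C > 0 there is an n_0 such that
% g(n) <= C * f(n) for all n >= n_0.  Taking C' = 1/C, this is the
% same statement as f = omega(g): f(n) >= C' * g(n) eventually.
\[
  g = o(f)
  \;\Longleftrightarrow\;
  \forall C > 0 \;\exists n_0 \;\forall n \ge n_0 :\; g(n) \le C\, f(n)
  \;\Longleftrightarrow\;
  f = \omega(g).
\]
```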
Regarding your distinction between the contexts of upper bounds and lower bounds: we are speaking of the "superpolynomial" section, which is about lower bounds rather than upper bounds. But even for upper bounds, I don't think any article explicitly uses "f is not O(g)" to mean that f is asymptotically faster than g, as natural examples tend to satisfy both definitions or neither. Indeed, the Wikipedia article itself used the two interchangeably in at least three languages (English, Mandarin, and Spanish).
If there is still concern about the ambiguity of "asymptotically faster," I am also fine with dropping that term, but for the above reasons we should stick to the definition "T(n) = ω(n^c) for every c" for superpolynomial running time. Asd00012334 (talk) 19:36, 31 May 2025 (UTC)[reply]
To me this debate is over something very silly, because when we analyze algorithms, we typically do so in terms of worst-case complexity. When we say that, for instance, a sorting algorithm runs in superlinear time, it means that its worst case is larger, likely proportional to n log n, but there may still be inputs for which the time is linear.
Likewise, when we say that an algorithm is superpolynomial, that means that its time, on its worst case inputs, exceeds any polynomial bound. It does not mean that its time on all inputs is superpolynomial. It does not mean that all input sizes contain instances of superpolynomial complexity.
An example: 3-dimensional matching. You can trivially answer no for hypergraphs whose number of vertices is not divisible by three, in linear time. Nevertheless the times for all known 3DM algorithms, on their worst-case instances, are superpolynomial and in fact exponential.
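The "trivially answer no" step is just a divisibility check; a minimal sketch (illustrative only, with a hypothetical `trivially_no` helper that takes the vertex count):

```python
# A perfect 3-dimensional matching partitions the vertex set into
# disjoint triples, so if the number of vertices is not divisible
# by 3, the answer is "no" without running any matching algorithm.
def trivially_no(num_vertices: int) -> bool:
    return num_vertices % 3 != 0

assert trivially_no(7)        # 7 vertices: no perfect matching exists
assert not trivially_no(9)    # 9 vertices: the hard case remains
```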
The existence of easy instances, and even of special problem sizes for which all instances are easy, is not what worst-case analysis is about. So modifying our wording to emphasize this pedantic distinction is a distraction for readers from the important aspects of this content. —David Eppstein (talk) 19:48, 31 May 2025 (UTC)[reply]
@David Eppstein I believe you misunderstood the main point of this debate. According to the prior comments, this debate is independent of how we define the running time --- be it worst-case, average-case, or best-case. Asd00012334 (talk) 19:56, 31 May 2025 (UTC)[reply]
No. It is about which functions are suitable for doing analysis based on O-notation, and which are too oscillatory to be suitable for this notation. The oscillatory functions are also unsuitable for performing worst-case analysis of algorithms for the same reason. —David Eppstein (talk) 19:58, 31 May 2025 (UTC)[reply]
(edit conflict) I believe that you misunderstood Eppstein's comment, especially its last paragraph, which means that the ω definition cannot be used here. D.Lazard (talk) 20:03, 31 May 2025 (UTC)[reply]
@David Eppstein @D.Lazard Now I see where you are coming from. Indeed, 3-dimensional matching is easy on certain (infinitely many) instances, meaning that T(n) does not fall in ω(n^c) for some c, but it is still useful to say that such a T(n) is superpolynomial.
On the other hand, there is also good motivation to speak of superpolynomial runtime in the stronger sense, i.e. T(n) = ω(n^c) for every c. In the context of cryptology it is not too rare to see a so-called "superpolynomial hardness assumption," e.g. the sub-exponential LWE assumption. There, one typically assumes the existence of a superpolynomial function T(n) such that solving the considered computational problem takes at least T(n) running time. Of course, in this case T(n) must be superpolynomial in the above stronger sense, because otherwise the cryptographic schemes that rely on such assumptions may admit a reduction that solves easy instances.
I leave it up to you for the final decision. Thank you for spending time on this. Asd00012334 (talk) 20:53, 31 May 2025 (UTC)[reply]