Longest increasing subsequence problem
The longest increasing subsequence problem is to find the longest increasing subsequence of a given sequence. Note that the subsequence we are searching for is not necessarily contiguous. Searching for the longest contiguous increasing subsequence would be a simple matter of scanning the whole sequence once to find the points where the values decrease, and taking the longest run between two such points.
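The contiguous variant mentioned above can be sketched as a single linear scan (the function name here is illustrative, not part of the problem statement):

```python
def longest_increasing_run(seq):
    """Longest contiguous strictly increasing run, found in one scan.

    Whenever the values decrease (or repeat), the current run ends;
    the answer is the longest run seen between two such points.
    """
    if not seq:
        return []
    best_start, best_len = 0, 1
    start = 0  # start index of the current run
    for i in range(1, len(seq)):
        if seq[i] <= seq[i - 1]:       # values decrease: a run boundary
            start = i
        if i - start + 1 > best_len:   # current run is the longest so far
            best_start, best_len = start, i - start + 1
    return seq[best_start:best_start + best_len]
```

For example, `longest_increasing_run([2, 1, 4, 7, 3, 5])` returns `[1, 4, 7]`.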
The longest increasing subsequence problem has long been known to be solvable in time O(n log n) (Schensted 1961).
Relations to other algorithmic problems
The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic-time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n) (Hunt and Szymanski 1977).
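The reduction in the first sentence above can be sketched as follows; `lcs` is the textbook quadratic dynamic program, and the sketch assumes distinct elements, since with repeated values LCS(S, sorted(S)) yields the longest non-decreasing subsequence instead:

```python
def lcs(a, b):
    """Length of the longest common subsequence, classic O(mn) DP."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def lis_length_via_lcs(seq):
    # LIS(S) = LCS(S, sorted(S)) when the elements of S are distinct:
    # a common subsequence of S and its sorted copy is exactly a
    # subsequence of S that appears in increasing order.
    return lcs(seq, sorted(seq))
```

For example, `lis_length_via_lcs([0, 8, 4, 12, 2, 10, 6, 14])` is 4 (one witness is 0, 4, 10, 14).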
The longest increasing subsequence problem can also be related to finding the longest path in a directed acyclic graph derived from the input sequence: we construct a vertex per sequence value and connect values x and y by an edge when x is smaller than y and occurs earlier in the sequence. Longest paths in DAGs can be found in time linear in the size of the DAG, but in this case the size can be quadratic in the sequence length, so this algorithm is not asymptotically faster than the dynamic programming one.
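A sketch of this graph formulation: the vertices are the positions of the sequence, and since positions are already in topological order, the longest-path computation is a single left-to-right pass over the (implicit) edges, quadratic in the worst case because the DAG can have on the order of n²/2 edges.

```python
def lis_length_via_dag(seq):
    """Longest increasing subsequence length as a longest path in a DAG.

    Vertex i is position i; there is an edge i -> j whenever i < j and
    seq[i] < seq[j].  Positions form a topological order, so one pass
    of edge relaxations computes the longest path ending at each vertex.
    """
    n = len(seq)
    length = [1] * n                       # longest path ending at each vertex
    for j in range(n):
        for i in range(j):
            if seq[i] < seq[j]:            # edge i -> j exists
                length[j] = max(length[j], length[i] + 1)
    return max(length, default=0)
```

This pass is the same recurrence as the standard quadratic dynamic program for the problem, which is one way to see why the two approaches have matching running times.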
References
- Hunt, J.; Szymanski, T. (1977). "A fast algorithm for computing longest common subsequences". Communications of the ACM. 20 (5): 350–353.
- Schensted, C. (1961). "Longest increasing and decreasing subsequences". Canadian Journal of Mathematics. 13: 179–191.