Space complexity
The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely.[1]
Similar to time complexity, space complexity is often expressed asymptotically in big O notation, such as $O(n)$, $O(n\log n)$, $O(n^\alpha)$, $O(2^n)$, etc., where $n$ is a characteristic of the input influencing space complexity.
Space complexity classes
Analogously to time complexity classes DTIME(f(n)) and NTIME(f(n)), the complexity classes DSPACE(f(n)) and NSPACE(f(n)) are the sets of languages that are decidable by deterministic (respectively, non-deterministic) Turing machines that use $O(f(n))$ space. The complexity classes PSPACE and NPSPACE allow $f$ to be any polynomial, analogously to P and NP. That is,
$$\mathsf{PSPACE} = \bigcup_{c \in \mathbb{Z}^+} \mathsf{DSPACE}(n^c)$$
and
$$\mathsf{NPSPACE} = \bigcup_{c \in \mathbb{Z}^+} \mathsf{NSPACE}(n^c).$$
Relationships between classes
The space hierarchy theorem states that, for all space-constructible functions $f(n)$, there exists a problem that can be solved by a machine with $O(f(n))$ memory space, but cannot be solved by a machine with asymptotically less than $f(n)$ space.
The following containments between complexity classes hold.[2]
$$\mathsf{DTIME}(f(n)) \subseteq \mathsf{DSPACE}(f(n)) \subseteq \mathsf{NSPACE}(f(n)) \subseteq \mathsf{DTIME}\left(2^{O(f(n))}\right)$$
Furthermore, Savitch's theorem gives the reverse containment that if $f \in \Omega(\log n)$,
$$\mathsf{NSPACE}(f(n)) \subseteq \mathsf{DSPACE}\left((f(n))^2\right).$$
As a direct corollary, $\mathsf{PSPACE} = \mathsf{NPSPACE}$. This result is surprising because it suggests that non-determinism can reduce the space necessary to solve a problem only by a small amount. In contrast, the exponential time hypothesis conjectures that for time complexity, there can be an exponential gap between deterministic and non-deterministic complexity.
The Immerman–Szelepcsényi theorem states that, again for $f \in \Omega(\log n)$, $\mathsf{NSPACE}(f(n))$ is closed under complementation. This shows another qualitative difference between time and space complexity classes, as nondeterministic time complexity classes are not believed to be closed under complementation; for instance, it is conjectured that NP ≠ co-NP.[3][4]
LOGSPACE
L or LOGSPACE is the set of problems that can be solved by a deterministic Turing machine using only $O(\log n)$ memory space with regard to input size. Even a single counter that can index the entire $n$-bit input requires $\log n$ space, so LOGSPACE algorithms can maintain only a constant number of counters or other variables of similar bit complexity.
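As a minimal sketch of such an algorithm (the function name and input encoding are illustrative, not from a standard source): deciding whether a binary string contains equally many 0s and 1s needs only a single counter whose magnitude never exceeds the input length, i.e. $O(\log n)$ bits of working memory.

```python
def balanced(bits):
    """Decide whether a binary string has equally many '0's and '1's.

    The input is read one symbol at a time and only a single integer
    counter is kept, so the working memory is O(log n) bits for an
    n-bit input, placing the problem in LOGSPACE.
    """
    counter = 0
    for b in bits:
        counter += 1 if b == "1" else -1
    return counter == 0

print(balanced("001011"))  # True: three '0's and three '1's
print(balanced("0010"))    # False
```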
LOGSPACE and other sub-linear space complexities are useful when processing large data that cannot fit into a computer's RAM. They are related to streaming algorithms, but only restrict how much memory can be used, while streaming algorithms have further constraints on how the input is fed into the algorithm. This class also sees use in the field of pseudorandomness and derandomization, where researchers consider the open problem of whether L = RL.[5][6]
The corresponding nondeterministic space complexity class is NL.
Auxiliary space complexity
The term auxiliary space refers to space other than that consumed by the input. Auxiliary space complexity can be formally defined in terms of a Turing machine with a separate input tape which cannot be written to, only read, and a conventional working tape which can be written to. The auxiliary space complexity is then defined (and analyzed) via the working tape. For example, consider the depth-first search of a balanced binary tree with $n$ nodes: its auxiliary space complexity is $\Theta(\log n)$.
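A sketch of that example (the class and function names are illustrative): a recursive depth-first traversal keeps one stack frame per tree level, and a balanced binary tree with $n$ nodes has $\Theta(\log n)$ levels, so the auxiliary space is $\Theta(\log n)$ even though the tree itself occupies $\Theta(n)$ space.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def dfs(node, visit):
    """Recursive depth-first traversal of a binary tree.

    The recursion stack holds one frame per tree level, so for a
    balanced binary tree with n nodes the auxiliary space is
    Theta(log n); the input tree itself still occupies Theta(n) space.
    """
    if node is None:
        return
    visit(node.value)
    dfs(node.left, visit)
    dfs(node.right, visit)

# A balanced tree with 7 nodes (3 levels); the traversal uses at most
# 3 simultaneous stack frames.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
dfs(root, print)
```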
The critical difference between the space complexity and the auxiliary space complexity is that the former includes the input data[7], while the latter does not. In this paragraph, we explain the importance of the auxiliary space complexity and reveal a shortcoming of the space complexity. Strictly speaking, an in-place algorithm is an algorithm that can transform input to output using only $O(1)$ auxiliary memory space[8]. Many sorting algorithms are in-place, including bubble sort, selection sort, insertion sort, and heapsort; however, quicksort is not, since it requires $O(\log n)$ auxiliary memory space[9], where $n$ is the number of elements in an array. On the other hand, since $\Theta(n)$ memory space is needed to store the input data (an array of $n$ elements), the space complexities of all of the above-mentioned sorting algorithms are the same, namely $\Theta(n)$. Hence only from the viewpoint of auxiliary space complexity can we tell which sorting algorithms are more space-efficient.
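For illustration, here is a selection sort sketch (assuming the input is a Python list): it rearranges the array using only a fixed number of index variables, so its auxiliary space is $O(1)$, while its total space complexity is $\Theta(n)$ because the input array itself is counted.

```python
def selection_sort(a):
    """Sort the list a in place.

    Only the loop indices and one temporary are used beyond the input,
    so the auxiliary space is O(1); the total space complexity is
    Theta(n) because the n-element input array is counted.
    """
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # swap into position
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```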
In this paragraph, we explain the importance of the space complexity and reveal a shortcoming of the auxiliary space complexity. It is well known that there are at least two kinds of data structures to represent a graph $G = (V, E)$: the adjacency matrix, which requires $\Theta(|V|^2)$ memory space, and the adjacency list, which requires $\Theta(|V| + |E|)$ memory space. Then the space complexity of the depth-first search is $\Theta(|V|^2)$ if we use the adjacency matrix to store the graph, or $\Theta(|V| + |E|)$ if we use the adjacency list. Obviously, the adjacency list is much more space-efficient than the adjacency matrix, especially when the input graph is sparse, such as a tree or a planar graph. On the other hand, the auxiliary space complexity of the depth-first search is always $\Theta(|V|)$ regardless of the data structure of the input graph[10], even if that data structure is extremely space-inefficient and wastes a large amount of memory. Accordingly, when researching algorithms, if we pay attention only to the auxiliary space complexity, then we have no incentive to invent new space-efficient data structures for input data.
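A minimal sketch of this point (the graph encoding below is an assumption for illustration): however the graph is stored, depth-first search needs only a visited set and a stack, both bounded by $O(|V|)$.

```python
def dfs_order(adj, start):
    """Iterative depth-first search over a graph.

    adj maps each vertex to an iterable of its neighbours, so the same
    code works whether the graph is backed by an adjacency list or an
    adjacency matrix. The visited set and the stack are the only
    auxiliary storage, both bounded by O(|V|), independent of how
    much space the input representation itself occupies.
    """
    visited = set()
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        stack.extend(adj[v])
    return order

# Adjacency list: Theta(|V| + |E|) space for the input itself.
adj_list = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(dfs_order(adj_list, 0))  # [0, 2, 1, 3]
```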
References
- ^ Kuo, Way; Zuo, Ming J. (2003), Optimal Reliability Modeling: Principles and Applications, John Wiley & Sons, p. 62, ISBN 9780471275459
- ^ Arora, Sanjeev; Barak, Boaz (2007), Computational Complexity: A Modern Approach (PDF) (draft ed.), p. 76, ISBN 9780511804090
- ^ Immerman, Neil (1988), "Nondeterministic space is closed under complementation" (PDF), SIAM Journal on Computing, 17 (5): 935–938, doi:10.1137/0217058, MR 0961049
- ^ Szelepcsényi, Róbert (1987), "The method of forcing for nondeterministic automata", Bulletin of the EATCS, 33: 96–100
- ^ Nisan, Noam (1992), "RL ⊆ SC", Proceedings of the 24th ACM Symposium on Theory of Computing (STOC '92), Victoria, British Columbia, Canada, pp. 619–623, doi:10.1145/129712.129772, S2CID 11651375
- ^ Reingold, Omer; Trevisan, Luca; Vadhan, Salil (2006), "Pseudorandom walks on regular digraphs and the RL vs. L problem" (PDF), STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, New York: ACM, pp. 457–466, doi:10.1145/1132516.1132583, MR 2277171, S2CID 17360260
- ^ Sahni, Sartaj (2005), Data Structures, Algorithms, and Applications in C++, Second Edition, Silicon Press, pp. 58–65, ISBN 0-929306-32-5
- ^ Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2009). Introduction to Algorithms, Third Edition. MIT Press. ISBN 978-0-262-03384-8.
- ^ Sedgewick, Robert (1 September 1998). Algorithms in C: Fundamentals, Data Structures, Sorting, Searching, Parts 1–4 (3 ed.). Pearson Education. ISBN 978-81-317-1291-7.
- ^ Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2009). Introduction to Algorithms, Third Edition. MIT Press. ISBN 978-0-262-03384-8.