Catalytic computing
Catalytic computing is a technique in computer science, relevant to complexity theory, that uses full memory in addition to empty memory space to perform computations.[1][2] Full memory is memory that begins in an arbitrary state, for example because it already holds important data, and must be returned to that state at the end of the computation.[2] The technique can sometimes reduce the memory requirements of algorithms, for example for the tree evaluation problem.[1] It was introduced by Buhrman, Cleve, Koucký, Loff, and Speelman in 2014[3] and is named after catalysts in chemistry: the full memory is viewed metaphorically as a "catalyst", a factor that is not consumed but is critical for the computational "reaction" to succeed.[1]
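The defining constraint is that the borrowed memory, whatever it happens to contain, must be left exactly as it was found. The following is a minimal Python sketch of that constraint only: it illustrates restoring the full memory via reversible in-place updates, and is not the register-program construction of Buhrman et al.; the function name and the use of integer addition are illustrative assumptions.

```python
def use_catalytic_memory(full_memory, inputs):
    """Toy illustration: borrow 'full_memory' (arbitrary contents) as scratch,
    then restore it exactly before returning."""
    before = list(full_memory)  # kept only to verify restoration below;
                                # a real catalytic algorithm cannot afford this copy
    n = len(full_memory)

    # Forward pass: fold the inputs into the full memory with reversible
    # in-place additions. The intermediate contents depend on the arbitrary
    # initial data, but the changes themselves are known and invertible.
    for k, x in enumerate(inputs):
        full_memory[k % n] += x

    # ... a real catalytic algorithm would extract useful information here,
    # using only a small amount of clean work space ...

    # Backward pass: apply the inverse updates in reverse order, returning
    # the borrowed memory to its exact original state.
    for k, x in reversed(list(enumerate(inputs))):
        full_memory[k % n] -= x

    assert full_memory == before  # the "catalyst" is unchanged


use_catalytic_memory([7, -3, 42], [1, 2, 3, 4, 5])
```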
The complexity class CSPACE(s(n)) is the class of sets computable by catalytic Turing machines whose work tape is bounded by s(n) tape cells and whose auxiliary full memory is bounded by 2^s(n) tape cells.[2] It has been shown that CSPACE(log(n)), called catalytic logspace, is contained within ZPP and, importantly, contains TC1.[2]
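Roughly in symbols, following the resource bounds of the original definition (asymptotic constants suppressed, and with CL denoting catalytic logspace):

```latex
\[
  \mathrm{CSPACE}(s(n)):\quad \text{clean work tape of } O(s(n)) \text{ cells},\qquad
  \text{catalytic tape of } 2^{s(n)} \text{ cells, restored at the end}.
\]
\[
  \mathrm{CL} = \mathrm{CSPACE}(\log n), \qquad
  \mathrm{TC}^1 \subseteq \mathrm{CL} \subseteq \mathrm{ZPP}.
\]
```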
References
1. Brubaker, Ben (2025-02-18). "Catalytic Computing Taps the Full Power of a Full Hard Drive". Quanta Magazine. Retrieved 2025-02-22.
2. Buhrman, Harry; Cleve, Richard; Koucký, Michal; Loff, Bruno; Speelman, Florian (2014-05-31). "Computing with a full memory: Catalytic space". Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing. STOC '14. New York, NY, USA: Association for Computing Machinery. pp. 857–866. doi:10.1145/2591796.2591874. ISBN 978-1-4503-2710-7.
3. Buhrman, Harry; Koucký, Michal; Loff, Bruno; Speelman, Florian (2018-01-01). "Catalytic Space: Non-determinism and Hierarchy". Theory of Computing Systems. 62 (1): 116–135. doi:10.1007/s00224-017-9784-7. ISSN 1433-0490.