AI capability control
An AI box is an isolated hardware system in which an artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. Such a box would have tightly restricted inputs and outputs, perhaps only a plain-text channel. However, a sufficiently intelligent AI may simply be able to escape from any box we can create. For example, it might crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.[1][2][3]
Intelligence improvements
Some intelligence technologies, like seed AI, have the potential to make themselves more intelligent, not just faster, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.
This mechanism for an intelligence explosion differs from a mere increase in speed in two ways. First, it does not require external effects: machines designing faster hardware would still need humans to build the improved hardware or to program factories appropriately, whereas an AI rewriting its own source code could do so while contained in an AI box.
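The qualitative difference between externally supplied speedups and compounding self-improvement can be sketched with a toy numeric model (an illustration of the argument, not anything from the source: the function names, rates, and linear-vs-proportional framing are all assumptions):

```python
# Toy model contrasting two growth regimes (illustrative assumptions only):
# - speed_only: each step adds a fixed, externally supplied boost (linear growth)
# - self_improving: each step's gain is proportional to current capability,
#   because better capability produces better improvements (compounding growth)

def speed_only(capability: float, steps: int, boost: float = 0.1) -> float:
    """Fixed external boost per step; growth is linear in steps."""
    for _ in range(steps):
        capability += boost
    return capability

def self_improving(capability: float, steps: int, rate: float = 0.1) -> float:
    """Gain proportional to current capability; improvements compound."""
    for _ in range(steps):
        capability += rate * capability
    return capability

print(speed_only(1.0, 50))      # linear: about 6.0
print(self_improving(1.0, 50))  # compounding: 1.1**50, roughly 117
```

Under these (arbitrary) parameters, fifty rounds of fixed boosts yield a modest gain, while fifty rounds of proportional self-improvement grow exponentially, which is the intuition behind "improvements make further improvements possible."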
- ^ Yudkowsky, Eliezer (2008), Bostrom, Nick; Cirkovic, Milan (eds.), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF), Global Catastrophic Risks, Oxford University Press, Bibcode:2008gcr..book..303Y, ISBN 978-0-19-857050-9
- ^ Berglas, Anthony, Artificial Intelligence Will Kill Our Grandchildren (Singularity)
- ^ Chalmers, David J., "The Singularity: A Philosophical Analysis"