AI capability control

From Wikipedia, the free encyclopedia

An AI box is an isolated piece of hardware in which an artificial intelligence is kept constrained inside a simulated world, unable to affect the external world. Such a box would have severely restricted inputs and outputs, perhaps only a plaintext channel. However, a sufficiently intelligent AI may be able to escape from any box we can create: for example, it might crack the protein folding problem and use nanotechnology to escape, or simply persuade its human 'keepers' to let it out.[1][2][3]
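
A minimal sketch of what such a restricted channel might look like, written here purely for illustration: the PlaintextGate class, its message-size cap, and its filtering rule are assumptions for exposition, not a design from the sources cited.

    # Illustrative sketch of a plaintext-only channel for a boxed AI.
    # All names and limits here are hypothetical, chosen for exposition.

    class PlaintextGate:
        MAX_LEN = 1024  # assumed cap on message size

        def _filter(self, message: str) -> str:
            # Keep only printable ASCII characters, then truncate.
            clean = "".join(ch for ch in message if 32 <= ord(ch) < 127)
            return clean[: self.MAX_LEN]

        def to_box(self, message: str) -> str:
            """Text a human keeper sends into the box."""
            return self._filter(message)

        def from_box(self, message: str) -> str:
            """Text the boxed AI sends out; the same restriction applies."""
            return self._filter(message)

The point of such a gate is that every byte crossing the boundary, in either direction, passes through the same narrow filter; the escape scenarios above are arguments that even this narrow channel may not be safe.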

Intelligence improvements

Some intelligence technologies, such as seed AI, have the potential to make themselves more intelligent, not just faster, by modifying their own source code. Each improvement would make further improvements possible, and so on.
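
A toy numerical illustration of this feedback loop, under the assumed (and purely illustrative) rule that each improvement step is proportional to current ability:

    # Toy model of recursive self-improvement: i_{n+1} = i_n + gain * i_n.
    # The update rule and parameters are assumptions for exposition only.

    def recursive_improvement(i0: float = 1.0, gain: float = 0.1, steps: int = 10):
        """Print a trajectory where each gain compounds on the last."""
        i = i0
        for n in range(steps):
            i = i + gain * i  # improvement scales with current ability
            print(f"generation {n + 1}: intelligence = {i:.3f}")

    recursive_improvement()

Because each step builds on the result of the previous one, the trajectory grows exponentially rather than linearly, which is the qualitative difference the intelligence-explosion argument turns on.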

This mechanism for an intelligence explosion differs from an increase in speed in two ways. First, it does not require external action: machines designing faster hardware still require humans to build the improved hardware or to program factories appropriately, whereas an AI rewriting its own source code could do so while contained in an AI box.

  1. ^ Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Cirkovic, Milan (eds.), Global Catastrophic Risks. Oxford University Press. Bibcode:2008gcr..book..303Y. ISBN 978-0-19-857050-9.
  2. ^ Berglas, Anthony. "Artificial Intelligence Will Kill Our Grandchildren (Singularity)".
  3. ^ Chalmers, David J. "The Singularity: A Philosophical Analysis".