Existential risk from artificial intelligence

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario).[1] Because of its exceptional scheduling and organizational capability and the range of novel technologies it could develop, it is possible that the first superintelligence to emerge on Earth could rapidly become matchless and unrivaled: conceivably it would be able to bring about almost any possible outcome and to foil virtually any attempt to prevent it from achieving its objectives.[2] It could eliminate, if it chose, any other rival intellects; alternatively it might manipulate or persuade them to change their behavior towards its own interests, or it might merely obstruct their attempts at interference.[2] In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom defines this as the control problem.[3]

Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity"[4] and suggests that such a development may be somewhat, or possibly extremely, dangerous for humans.[5] These scenarios are discussed by a philosophy called Singularitarianism.

Physicist Stephen Hawking, Microsoft co-founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[6]

In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and to what degree these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[4] Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionality and autonomy, and which pose some inherent concerns.[7][8][9]

Eliezer Yudkowsky believes that risks from artificial intelligence are harder to predict than any other known risks. He also argues that research into artificial intelligence is biased by anthropomorphism: since people base their judgments of artificial intelligence on their own experience, he claims that they underestimate the potential power of AI. He distinguishes between risks due to technical failure of AI, which means that flawed algorithms prevent the AI from carrying out its intended goals, and philosophical failure, which means that the AI is programmed to realize a flawed ideology.[10]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[11] There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots.[12] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[13][14] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively. However, other experts question this.[15]

On the other hand, a "friendly" AI could help reduce existential risk by developing technological solutions to threats.[10]

In PBS's Off Book, Gary Marcus asks "what happens if (AIs) decide we are not useful anymore?" Marcus argues that AI cannot, and should not, be banned, and that "the sensible thing to do" is to "start thinking now" about AI ethics.[16]


Many researchers have argued that, by way of an "intelligence explosion" sometime in the next century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[17] In his paper Ethical Issues in Advanced Artificial Intelligence, the Oxford philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that a general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. In theory, a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, so many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[18]

Bill Hibbard[20] proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.

Risk of human extinction

The creation of artificial general intelligence may have repercussions so great and so complex that it may not be possible to forecast what will come afterwards. The hypothetical future event of achieving strong AI is therefore called the technological singularity, because, in theory, one cannot see past it. This has not stopped philosophers and researchers from speculating about what the smart computers or robots of the future may do, including helping to create a utopia as our friends or overwhelming us in an AI takeover. The latter possibility is particularly disturbing because it poses an existential risk for mankind; that is, it may lead to human extinction.

Self-replicating machines

Smart computers or robots would be able to produce copies of themselves; that is, they would be self-replicating machines. A growing population of intelligent robots could conceivably outcompete humans in job markets, in business, in science, and in politics (pursuing robot rights), as well as technologically, sociologically (by acting as one), and militarily. See also swarm intelligence.

Emergent superintelligence

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and would probably continue doing so in a rapidly increasing cycle, leading to an intelligence explosion and the emergence of superintelligence. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.
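The dynamics of such a feedback loop can be illustrated with a toy numerical model (the model and its parameters are illustrative assumptions, not taken from the sources cited here): if each increment of capability also increases the system's ability to improve itself, growth accelerates rather than remaining steady.

    # Toy model of recursive self-improvement. The feedback strength and
    # step count are arbitrary illustrative assumptions, not estimates
    # drawn from the literature cited in this article.

    def recursive_self_improvement(capability=1.0, feedback=0.05, steps=20):
        """At each step the improvement gained scales with the current
        capability, so the growth rate itself keeps increasing."""
        history = [capability]
        for _ in range(steps):
            capability += feedback * capability ** 2  # improvement feeds back on itself
            history.append(capability)
        return history

    if __name__ == "__main__":
        for step, level in enumerate(recursive_self_improvement()):
            print(f"step {step:2d}: capability = {level:8.2f}")

Under these assumed parameters, the per-step gains grow from about 5% at the start to over 30% by the final step, a crude analogue of the accelerating cycle described above.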

Hyper-intelligent software may not necessarily decide to support the continued existence of mankind, and may be extremely difficult to stop.[21] This topic has also recently begun to be discussed in academic publications as a real source of risks to civilization, humans, and planet Earth.

One proposal to deal with this is to ensure that the first generally intelligent AI is a friendly AI, which would then endeavor to ensure that subsequently developed AIs were also friendly. However, friendly AI is harder to create than plain AGI, so in a race between the two it is likely that non-friendly AI would be developed first. Moreover, there is no guarantee that a friendly AI would remain friendly, or that all of its progeny would remain friendly as well.[22]


Ramifications

Uncertainty and risk

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[23][24] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[25][26] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Singularity Institute for Artificial Intelligence, which is now the Machine Intelligence Research Institute.[23]

Implications for human society

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and have achieved "cockroach intelligence". The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.[4]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[27] A United States Navy report indicates that, as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[28][29]

The AAAI has commissioned a study to examine this issue,[30] pointing to programs like the Language Acquisition Device, which was claimed to emulate human interaction.

Some support the design of friendly artificial intelligence, meaning that the advances that are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[31]

Isaac Asimov's Three Laws of Robotics is one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov's stories, any perceived problems with the laws tend to arise as a result of a misunderstanding on the part of some human operator; the robots themselves merely act according to their best interpretation of their rules. In the 2004 film I, Robot, loosely based on Asimov's Robot stories, an AI attempts to take complete control over humanity for the purpose of protecting humanity from itself, due to an extrapolation of the Three Laws. In 2004, the Singularity Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov's laws in particular.[32]

Impact

Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[33]
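The ratios Hanson cites can be checked with a short calculation. The snippet below reproduces the "sixty times faster" figure and shows that a further speed-up of the same magnitude would imply a doubling time of roughly one quarter; the final line is an extrapolation in the spirit of Hanson's argument, not a prediction.

    # Doubling times as given in the paragraph above.
    paleolithic_doubling_years = 250_000
    agricultural_doubling_years = 900
    industrial_doubling_years = 15

    speedup = agricultural_doubling_years / industrial_doubling_years
    print(f"Industrial vs. agricultural speed-up: {speedup:.0f}x")          # 60x
    print(f"Agricultural vs. Paleolithic speed-up: "
          f"{paleolithic_doubling_years / agricultural_doubling_years:.0f}x")  # ~278x

    # A further sixtyfold speed-up would give a doubling time of:
    next_doubling_years = industrial_doubling_years / speedup
    print(f"{next_doubling_years:.2f} years, i.e. about "
          f"{next_doubling_years * 52:.0f} weeks")                          # 0.25 years ≈ 13 weeks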

Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. The term superintelligence may also refer to the form or degree of intelligence possessed by such an agent.

Technology forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Existential risk

Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by mankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators. Nick Bostrom offers the whimsical example of an AI originally programmed with the goal of manufacturing paper clips, which, upon achieving superintelligence, decides to convert the entire planet into a paper clip manufacturing facility.[34][35][36] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[37] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[38][39] and humans would be powerless to stop them.[40] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[41]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.
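The structure of this failure mode can be sketched in a few lines of code. The example below is purely hypothetical (the quantities, names, and the "paper clip" objective are illustrative assumptions echoing Bostrom's example, not anyone's proposed design): an optimiser that maximises a single objective will consume resources humans care about simply because those resources appear nowhere in its objective.

    # Hypothetical sketch of goal mis-specification, in the spirit of
    # Bostrom's paper-clip example. All names and numbers are invented
    # for illustration only.

    def maximise_paperclips(total_matter=100.0, matter_humans_need=30.0):
        paperclips = 0.0
        matter = total_matter
        while matter > 0:          # the objective rewards only more paper clips
            matter -= 1.0          # convert one unit of matter per step
            paperclips += 1.0
            # Nothing here checks matter_humans_need: human welfare is simply
            # not part of the objective being maximised.
        return paperclips, matter

    if __name__ == "__main__":
        clips, left = maximise_paperclips()
        print(f"paper clips: {clips:.0f}, matter left for humans: {left:.0f}")  # 100, 0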

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that the first real AI would have a head start on self-improvement and, if friendly, could prevent unfriendly AIs from developing, as well as providing enormous benefits to mankind.[43]

Hibbard (2014) proposes an AI design that avoids several dangers, including self-delusion,[44] unintended instrumental actions,[45][46] and corruption of the reward generator.[46] He also discusses social impacts of AI[47] and testing AI.[48] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI, but also proposed a simple design that was vulnerable to some of these dangers.

One hypothetical approach towards attempting to control an artificial intelligence is an AI box, where the artificial intelligence is kept constrained inside a simulated world and not allowed to affect the external world. However, a sufficiently intelligent AI may simply be able to escape by outsmarting its less intelligent human captors.[23][49][50]

Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." Hawking believes more should be done to prepare for the singularity:[51]

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

Artificial superintelligence

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on timescales. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone. In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.[52]

Philosopher David Chalmers argues that generally intelligent AI (artificial general intelligence) is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.[53]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulable by synthetic materials.[54] He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI.[55] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.[56]
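Chalmers's point about evolutionary algorithms is philosophical rather than practical, but the general technique he refers to can be sketched in a few lines: candidate solutions are mutated and selected by a fitness function over many generations. The toy problem below (maximising a simple numeric function) is a hypothetical illustration of the method, not a path to human-level AI.

    import random

    # Minimal evolutionary algorithm, illustrating the general technique.
    # The fitness function and parameters are toy choices for demonstration.

    def fitness(x):
        # Toy objective: a smooth function with its maximum at x = 3.
        return -(x - 3.0) ** 2

    def evolve(pop_size=50, generations=100, mutation_scale=0.5):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half of the population.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            # Refill the population with mutated copies of the survivors.
            children = [x + random.gauss(0, mutation_scale) for x in survivors]
            population = survivors + children
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print(f"best solution found: {best:.3f} (optimum is 3.0)")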

Possibility of unfriendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[42]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[17][18] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[57]

Existential risk of AI

The slow progress of biological evolution has given way to the rapid progress of technological revolution. Unbridled progress in computer technology may lead to the technological singularity, a global catastrophic risk in that it could produce a synthetic intelligence capable of bringing about human extinction.

Eliezer Yudkowsky put it this way:

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." [58]

  1. ^ Bill Joy, Why the future doesn't need us. In: Wired magazine. See also technological singularity. Nick Bostrom 2002, Ethical Issues in Advanced Artificial Intelligence, http://www.nickbostrom.com
  2. ^ a b Nick Bostrom 2002 Ethical Issues in Advanced Artificial Intelligence http://www.nickbostrom.com
  3. ^ Superintelligence: Paths, Dangers, Strategies
  4. ^ a b c Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.
  5. ^ The Coming Technological Singularity: How to Survive in the Post-Human Era, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.
  6. ^ Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015.
  7. ^ Gaming the Robot Revolution: A military technology expert weighs in on Terminator: Salvation., By P. W. Singer, slate.com Thursday, May 21, 2009.
  8. ^ Robot takeover, gyre.org.
  9. ^ robot page, engadget.com.
  10. ^ a b Yudkowsky, Eliezer. "Artificial Intelligence as a Positive and Negative Factor in Global Risk". Retrieved 26 July 2013.
  11. ^ Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  12. ^ Robot Three-Way Portends Autonomous Future, By David Axe wired.com, August 13, 2009.
  13. ^ New Navy-funded Report Warns of War Robots Going "Terminator", by Jason Mick (Blog), dailytech.com, February 17, 2009.
  14. ^ Navy report warns of robot uprising, suggests a strong moral compass, by Joseph L. Flatley engadget.com, Feb 18th 2009.
  15. ^ New role for robot warriors; Drones are just part of a bid to automate combat. Can virtual ethics make machines decisionmakers?, by Gregory M. Lamb / Staff writer, Christian Science Monitor, February 17, 2010.
  16. ^ "The Rise of Artificial Intelligence". PBS Off Book. 11 July 2013. Event occurs at 6:29-7:26. Retrieved 24 October 2013. ...what happens if (AIs) decide we are not useful anymore? I think we do need to think about how to build machines that are ethical. The smarter the machines gets, the more important that is... [T]here are so many advantages to AI in terms of human health, in terms of education and so forth that I would be reluctant to stop it. But even if I did think we should stop it, I don't think it's possible... if, let's say, the US Government forbade development in kind of the way they did development of new stem cell lines, that would just mean that the research would go offshore, it wouldn't mean it would stop. The more sensible thing to do is start thinking now about these questions... I don't think we can simply ban it.
  17. ^ a b c Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  18. ^ a b c d Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  19. ^ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  20. ^ Hibbard, Bill (2014): Ethical Artificial Intelligence. http://arxiv.org/abs/1411.1373
  21. ^ Yudkowsky, Eliezer (2008)
  22. ^ Berglas (2008)
  23. ^ a b c Yudkowsky, Eliezer (2008), Bostrom, Nick; Cirkovic, Milan (eds.), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF), Global Catastrophic Risks, Oxford University Press: 303, Bibcode:2008gcr..book..303Y, ISBN 978-0-19-857050-9
  24. ^ Cite error: The named reference theuncertainfuture was invoked but never defined.
  25. ^ Cite error: The named reference catastrophic was invoked but never defined.
  26. ^ Cite error: The named reference nickbostrom was invoked but never defined.
  27. ^ Call for debate on killer robots, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  28. ^ Mick, Jason. New Navy-funded Report Warns of War Robots Going "Terminator", Blog, dailytech.com, February 17, 2009.
  29. ^ Flatley, Joseph L. Navy report warns of robot uprising, suggests a strong moral compass, engadget.com, 18 February 2009.
  30. ^ AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  31. ^ Article at Asimovlaws.com, July 2004, accessed 7/27/2009.
  32. ^ (Singularity Institute for Artificial Intelligence 2004)
  33. ^ Robin Hanson, "Economics Of The Singularity", IEEE Spectrum Special Report: The Singularity, retrieved 2008-09-11 & Long-Term Growth As A Sequence of Exponential Modes
  34. ^ Ethical Issues in Advanced Artificial Intelligence, Nick Bostrom, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17
  35. ^ Eliezer Yudkowsky: Artificial Intelligence as a Positive and Negative Factor in Global Risk. Draft for a publication in Global Catastrophic Risk from August 31, 2006, retrieved July 18, 2011 (PDF file)
  36. ^ The Stamp Collecting Device, Nick Hay
  37. ^ 'Why we should fear the Paperclipper', 2011-02-14 entry of Sandberg's blog 'Andart'
  38. ^ Cite error: The named reference selfawaresystems.com was invoked but never defined.
  39. ^ Omohundro, Stephen M., "The Basic AI Drives." Artificial General Intelligence, 2008 proceedings of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Vol. 171. Amsterdam: IOS, 2008.
  40. ^ de Garis, Hugo. "The Coming Artilect War", Forbes.com, 22 June 2009.
  41. ^ Cite error: The named reference nickbostrom7 was invoked but never defined.
  42. ^ a b Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004
  43. ^ Cite error: The named reference ReferenceB was invoked but never defined.
  44. ^ Hibbard, Bill (2012), "Model-Based Utility Functions", Journal of Artificial General Intelligence, 3: 1, arXiv:1111.3934, Bibcode:2012JAGI....3....1H, doi:10.2478/v10229-011-0013-5.
  45. ^ Cite error: The named reference selfawaresystems was invoked but never defined.
  46. ^ a b Avoiding Unintended AI Behaviors. Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. This paper won the Singularity Institute's 2012 Turing Prize for the Best AGI Safety Paper.
  47. ^ Hibbard, Bill (2008), "The Technology of Mind and a New Social Contract", Journal of Evolution and Technology, 17.
  48. ^ Decision Support for Safe AI Design. Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.
  49. ^ Artificial Intelligence Will Kill Our Grandchildren (Singularity), Dr Anthony Berglas
  50. ^ The Singularity: A Philosophical Analysis David J. Chalmers
  51. ^ Stephen Hawking (1 May 2014). "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'". The Independent. Retrieved May 5, 2014.
  52. ^ Müller & Bostrom 2014, pp. 3–4, 6, 9–12.
  53. ^ Chalmers 2010, p. 7.
  54. ^ Chalmers 2010, pp. 7–9.
  55. ^ Chalmers 2010, pp. 10–11.
  56. ^ Chalmers 2010, pp. 11–13.
  57. ^ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  58. ^ Eliezer Yudkowsky (2008) in Artificial Intelligence as a Positive and Negative Factor in Global Risk