A.I. Expert: We’re DOOMED!
He has expertise in both science and philosophy, and he is warning us not to be fooled by all that happy-clappy speculation from Ray Kurzweil and the like. He's seeing a future filled with Terminators!
An Oxford philosophy professor who has studied existential threats ranging from nuclear war to superbugs says the biggest danger of all may be superintelligence.
Superintelligence is any intellect that outperforms human intellect in every field, and Nick Bostrom thinks its most likely form will be a machine — artificial intelligence.
There are two ways artificial intelligence could go, Bostrom argues. It could greatly improve our lives and solve the world’s problems, such as disease, hunger and even pain. Or, it could take over and possibly kill all or many humans. As it stands, the catastrophic scenario is more likely, according to Bostrom, who has a background in physics, computational neuroscience and mathematical logic.
“Superintelligence could become extremely powerful and be able to shape the future according to its preferences,” Bostrom told me. “If humanity was sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figure out how to do so safely.”
Bostrom, the founding director of Oxford’s Future of Humanity Institute, lays out his concerns in his new book, Superintelligence: Paths, Dangers, Strategies. His book makes a harrowing comparison between the fate of horses and humans:
Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.
The same dark outcome, Bostrom said, could happen to humans once our labor and intelligence become obsolete.
It sounds like a science fiction flick, but recent moves in the tech world may suggest otherwise. Earlier this year, Google acquired artificial intelligence company DeepMind and created an AI safety and ethics review board to ensure the technology is developed safely. Facebook created an artificial intelligence lab this year and is working on creating an artificial brain. Technology called “deep learning,” a form of artificial intelligence meant to closely mimic the human brain, has quickly spread from Google to Microsoft, Baidu and Twitter.
Now hold it – we’re intelligent! Can’t we stop the army of Terminators?
Q: Are you saying it’s impossible to control superintelligence because we ourselves are merely intelligent?
Bostrom: It’s not impossible — it’s extremely difficult. I worry that it will not be solved by the time someone builds an AI. We’re not very good at uninventing things. Once unsafe superintelligence is developed, we can’t put it back in the bottle. So we need to accelerate research on this control problem.
Developing an avenue toward human cognitive enhancement would be helpful. Presuming superintelligence doesn’t arrive until the second half of the century, there could still be time to develop a cohort of cognitively enhanced humans who might have the capacity to try to solve this really difficult technical control problem. Cognitively enhanced humans would also presumably be better able to consider long-term effects. For example, today people are creating cellphone batteries with longer lives — without thinking about what the long-term effects could be. With more intelligence, we would be able to.
Cognitive enhancement could take place through collective cognitive ability — the Internet, for example, and institutional innovations that enable humans to function better together. In terms of individual cognitive enhancement, the first thing likely to be successful is genetic selection in the context of in-vitro fertilization. I don’t hold out much for cyborgs or implants.