August 24, 2014 11:30 am

He has expertise in both science and philosophy, and he is warning us not to be fooled by all that happy-clappy speculation by Ray Kurzweil and the like. He’s seeing a future filled with Terminators!

An Oxford philosophy professor who has studied existential threats ranging from nuclear war to superbugs says the biggest danger of all may be superintelligence.


Superintelligence is any intellect that outperforms human intellect in every field, and Nick Bostrom thinks its most likely form will be a machine — artificial intelligence.

There are two ways artificial intelligence could go, Bostrom argues. It could greatly improve our lives and solve the world’s problems, such as disease, hunger and even pain. Or, it could take over and possibly kill all or many humans. As it stands, the catastrophic scenario is more likely, according to Bostrom, who has a background in physics, computational neuroscience and mathematical logic.

“Superintelligence could become extremely powerful and be able to shape the future according to its preferences,” Bostrom told me. “If humanity was sane and had our act together globally, the sensible course of action would be to postpone development of superintelligence until we figure out how to do so safely.”

Bostrom, the founding director of Oxford’s Future of Humanity Institute, lays out his concerns in his new book, Superintelligence: Paths, Dangers, Strategies. His book makes a harrowing comparison between the fate of horses and humans:

Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors. When horses became obsolete as a source of labor, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.

The same dark outcome, Bostrom said, could happen to humans once our labor and intelligence become obsolete.

It sounds like a science fiction flick, but recent moves in the tech world may suggest otherwise. Earlier this year, Google acquired artificial intelligence company DeepMind and created an AI safety and ethics review board to ensure the technology is developed safely. Facebook created an artificial intelligence lab this year and is working on creating an artificial brain. Technology called “deep learning,” a form of artificial intelligence meant to closely mimic the human brain, has quickly spread from Google to Microsoft, Baidu and Twitter.

Now hold it – we’re intelligent! Can’t we stop the army of Terminators?

Q: Are you saying it’s impossible to control superintelligence because we ourselves are merely intelligent?

Bostrom: It’s not impossible — it’s extremely difficult. I worry that it will not be solved by the time someone builds an AI. We’re not very good at uninventing things. Once unsafe superintelligence is developed, we can’t put it back in the bottle. So we need to accelerate research on this control problem.

Developing an avenue towards human cognitive enhancement would be helpful. Presuming superintelligence doesn’t happen until the second half of the century, there could still be time to develop a cohort of cognitively enhanced humans who might have the capacity to try to solve this really difficult technical control problem. Cognitively enhanced humans will also presumably be able to better consider long-term effects. For example, today people are creating cellphone batteries with longer lives — without thinking about what the long-term effects could be. With more intelligence, we would be able to.

Cognitive enhancement could take place through collective cognitive ability — the Internet, for example, and institutional innovations that enable humans to function better together. In terms of individual cognitive enhancement, the first thing likely to be successful is genetic selection in the context of in-vitro fertilization. I don’t hold out much for cyborgs or implants.

D.B. Hirsch
D.B. Hirsch is a political activist, news junkie, and retired ad copy writer and spin doctor. He lives in Brooklyn, New York.

22 responses to A.I. Expert: We’re DOOMED!

  1. Abby Normal August 24th, 2014 at 11:36 am

    He has a point. Imagine an android that can run around on two legs, with opposable digits, 3-D vision, self-awareness and an IQ tens or hundreds of times higher than Sheldon Cooper’s. They’ll manufacture their own offspring and each generation will be 100 times better than the previous one.

    As long as we can pull the plug on them, it should be okay. If not . . . . all bets are off.

    • MIAtheistGal August 24th, 2014 at 12:02 pm

      Asimov’s Three Laws of Robotics should not be ignored!

      • Dwendt44 August 24th, 2014 at 3:08 pm

        Even they can be distorted or twisted.

      • bpollen August 24th, 2014 at 8:11 pm

        It’s not nice to ignore laws… Why, I myself have tried to ignore the Law of Gravity on several occasions and things have not gone at all well.

  2. edmeyer_able August 24th, 2014 at 11:44 am

    It wouldn’t take long before AI figured out that the greatest threat to the world and to itself is humans; hell, even Democrats know that.

  3. crc3 August 24th, 2014 at 11:55 am

    This probably won’t even happen because humans will probably destroy themselves before terminators do. In either case…we’re screwed….

  4. Obewon August 24th, 2014 at 12:08 pm

    Digital encryption keys that are impossible to derive computationally, reset monthly, would make mincemeat of any controlling Terminator superintelligence! You’re terminated, f*cker. Bostrom: “It’s not impossible —”

  5. craig7120 August 24th, 2014 at 12:11 pm

    We’re DOOMED!?
    We kill each other already; is the issue that a robot will be taking the killing jobs away?

    I would not bet against the human species developing a way to keep killing uninterrupted in spite of AI.

    I think AI would be a benefit, if we allow its existence.

  6. trees August 24th, 2014 at 1:12 pm

    In my opinion, it’s ridiculous nonsense. Human creativity and imagination are the hallmarks of our what? Humanity. Self-awareness and consciousness are biological functions. Our carnality is what drives the engine of our existence. The tenth commandment, “Thou shalt not covet thy neighbour’s house, thou shalt not covet thy neighbour’s wife, nor his manservant, nor his maidservant, nor his ox, nor his ass, nor any thing that is thy neighbour’s.”, is a prohibition against conspiring to acquire things that are not yours; it’s a prohibition against taking something you have not earned.

    Philosophically, it would be more probable that an artificial intelligence, unencumbered by carnal desires and unbridled from biological function, would choose self-immolation over world conquest and dominance.

    If the machine could feel, that is.

    But, that would require something that a machine will likely never have, namely, emotions. All of our thought processes are guided by emotion. Emotion is the vapor that coalesces in the mind, and which weights the levers of action.

    Devoid of emotion it’s unlikely that a machine could be capable of thinking for itself in any meaningful way that could be considered as “self-aware”.

    • Dwendt44 August 24th, 2014 at 3:12 pm

      What you are afraid of is a rational highly intelligent machine that doesn’t need a human to service it. It won’t need imaginary beings that live in the clouds. It won’t need ‘feelings’, ‘fears’ or even ‘anger’.
      Humans can and do set their ’emotions’ aside often. Most government functions were, at least they used to be, largely emotion-free.

      • trees August 24th, 2014 at 4:05 pm

        “Humans can and do set their ’emotions’ aside often.”

        Tell me about the subconscious, or unconscious mind. You may think you’ve “set your emotions aside”, but in reality you’ve only slipped a mask on and are pretending to be something you’re not….

        ” What you are afraid of is a rational highly intelligent machine that doesn’t need a human to service it.”

        I don’t think you grasped my post. I’m not afraid of a machine attaining personhood, because imho it’s simply not possible… neither am I afraid of a “highly intelligent machine that doesn’t need me”. Actually, a highly intelligent machine that doesn’t need me would be very convenient, don’t you think?

        “It won’t need imaginary beings that live in the clouds.”

        Interesting view. I’m sensing some hostility to God here, yeah?

        • Dwendt44 August 24th, 2014 at 9:01 pm

          Which god? There have been over 3000 of them since the dawn of civilization.
          BTW, if an intelligent machine didn’t need us around, we’d just be in the way, so it would seek to remove the human infestation.

          • trees August 25th, 2014 at 1:40 am

            “BTW, if an intelligent machine didn’t need us around, we’d just be in the way, so it would think to remove the human infestation.”

            Which raises the question: “in the way of what?”

            You suppose that your machines would have something meaningful to do, some kind of work, but without wants and desires I see them as having nothing to do at all. Devoid of carnality they’d have no desires, nothing motivational.

            No sense of purpose, no hunger for life = no reason to exist; devoid of purpose and devoid of hunger, existence is without meaning.

            Without the emotion of despair, there would exist nihilism in its purest form. The simple dissolution of existence. For something devoid of life, the cessation of being is no big deal.

            But for the living, it’s a different matter altogether.

            Nietzsche stood on the edge of existence and peered into eternity, embracing his god, nihilism, and in return his god filled him with despair, becoming his master. Together they descended into madness….

          • fahvel August 25th, 2014 at 2:19 am

            like roaches??? wow!

    • Abby Normal August 24th, 2014 at 4:02 pm

      “Our carnality is what drives the engine of our existence.”

      That’s profound. Did you think that up all by yourself or did “the voices” help you with it?

      • trees August 24th, 2014 at 4:11 pm

        Actually, the voices did help me with it. They recorded their thoughts in a book I read sometimes….

        What voices help you with things? Or are you so arrogant as to think there aren’t any?

        • Abby Normal August 24th, 2014 at 4:41 pm

          What voices help me with things? The voices of my family doctor. My ophthalmologist. My dentist. The voices of people I trust. My conscience. They’ve done quite well by me over the years.

          Carl Sagan, Stephen Hawking, Neil deGrasse Tyson, my physics, chemistry and biology professors, and others have helped me to understand the universe I live in.

          I’ve read the book you’re referring to. Three times, cover to cover, plus several chapter by chapter reference books to help me get the full meaning of what I was reading.

          Jesus was no tea-party conservative.

          • trees August 24th, 2014 at 5:56 pm

            “What you get out of that book depends upon what you bring to it.”


          • fahvel August 25th, 2014 at 2:17 am

            it doesn’t matter what you bring, be it a sandwich or a blank mindlessness – it’s still the same fairy tale.

        • fahvel August 25th, 2014 at 2:16 am

          your book is a fairy tale and the voices in your head are whoooooooey – they’re coming to take you away……

    • Kick Frenzy August 24th, 2014 at 4:45 pm

      It seems to me that an AI (sans emotion) wouldn’t choose either of your options.

      I believe it would be closer to The Matrix than Terminator, with machines seeing humans as inefficient encumbrances, better removed from the equation than having to deal with us.

      But that’ll all probably be moot anyway, since we most likely won’t be “humans vs machines” by the time it matters.
      Most likely, it’ll be closer to “cyborgs vs machines”.

  7. R.J. Carter August 26th, 2014 at 3:02 pm