Will the Evolution of Artificial Intelligence Harm Humans? Depends

2 minute read | Editorial

We tend to speak about Artificial Intelligence (AI) in terms of the pinnacle of its potential evolution, and that’s a problem.


An article I found showcases one current debate about the potential for AI doing evil: “Elon Musk fires back at Mark Zuckerberg in debate about the future: ‘His understanding of the subject is limited.’”

On Sunday afternoon, while smoking some meats in his back garden, Zuckerberg, the Facebook CEO, questioned why Musk, the CEO of Tesla, SpaceX, and OpenAI, was being so negative about AI.

What they’re debating is the future potential for AIs that can, for all practical purposes, duplicate and then go far beyond the capabilities of the human mind, and that can interact with human beings for good or evil.

We’re NOT There Yet

Of course, AIs today are very limited. We discover those limitations when we realize that AI demos typically address only one or two specific tasks, like playing chess, or driving a car on the roadways, in traffic. Our interactions with Siri confirm that AI’s limits every day.

So what’s the debate really about? I think those who worry, like Elon Musk, ponder two things in particular.

  • Computer hardware and AI development will continue until we do reach the holy grail of, essentially, artificial superminds.
  • There won’t be any way to control who gets to use advanced AI technologies. After all, look at how another technology, the Internet, has already been used for evil purposes.

Apple has built a sophisticated web browser, Safari, that serves us well and tries to protect us, yet there’s no way to perfectly protect the user when dedicated minds, the hackers, try to subvert Safari’s good uses for financial gain or other purposes.

Moreover, even though Apple has, for example, joined the Partnership on AI consortium, there’s no guarantee that the knowledge or ethics developed there will be applied only to good purposes everywhere on Earth.

AI Limits

So then the question boils down to the limits of human capabilities. I don’t think anyone doubts that we’ll get smart enough to build an entity like Star Trek’s Lt. Commander Data. See NASA’s page on the science of Star Trek:

At a conference on cybernetics several years ago, the president of the association was asked what is the ultimate goal of his field of technology. He replied, ‘Lieutenant Commander Data.’ Creating Star Trek’s Mr. Data would be a historic feat of cybernetics, and it’s very controversial in computer science whether it can be done.

So how long will this take? If it takes us another 100 years to build a Lt. Commander Data, unforeseen events such as war, climate change, and cultural upheaval could prevent that kind of evolution from ever happening. On the other hand, if we develop AI technology too fast, without adequate controls, we could end up as we did with nuclear weapons: a lot of power that we struggle to keep under control.

In the end, I think both Mr. Zuckerberg and Mr. Musk have equally good points. In Mr. Zuckerberg’s favor, AI technology will do a lot to help us out in the short term, limited in scope as it is. In the long run, however, Mr. Musk has a great point: our species hasn’t been able to control its worst instincts on the current-day internet. What will we have to do as a species to avoid the worst possible fate, massive AI evil inflicted on ourselves?

That’s what we’re in the process of finding out.

6 Comments

  1. geoduck

    There’s a third issue that neither of them talks about: the risk from trusted but imperfect, non-super-intelligent AIs.

    We’ve all seen a computer program run away and print a thousand instead of ten. Or drive an automated device into a wall. Or cause a stock market flash-crash. These are fairly smart AIs making little mistakes. For an AI to be considered as intelligent as us, it would have to be able to see that something “doesn’t look right” even if it doesn’t know why. Every week I get a part produced by one of our computer-controlled CNCs that is right from its point of view, but you can just look at it and know it’s not.

    Remember Europe’s Schiaparelli lander last year? It crashed because the computer got overloaded and confused. It turned off its rockets well above the surface because its instruments said it was below ground. A person, or a super-intelligent AI, could have looked out the window and seen that wasn’t the case. There are a number of stories from the Cold War of times the computers reacted and the missiles almost flew, but a PERSON stepped in, said “this doesn’t seem right,” and stopped the launch.

    Or the Challenger disaster. The Space Shuttle’s computers knew something wasn’t right. They kept adjusting thrust vectoring to correct for the defective SRB. But they weren’t smart enough to say, “Wait, what might be causing this?” and abort the mission. The tank and SRBs could have been jettisoned and the Orbiter could have glided back to the Cape (in theory anyway; that maneuver was never actually tested). But that never occurred to an AI whose only goal was to stay on course.

    I don’t fear super-intelligent AIs. I am most fearful of pretty smart AIs that are trusted but in fact are not smart enough to catch stupid mistakes or novel failures, and that will gladly unleash a disaster without realizing the consequences.

    • John Martellaro

      geoduck: That was one of the best reader comments of all time. Thanks for that astute observation.

    • Jamie

      You nailed it, geoduck. Unethical humans are likely more of a threat in terms of implementing something that just isn’t adequate for the particular task at hand.

      At the moment, and likely into the foreseeable future, ALL AI is ‘not-so-smart’. I personally doubt we will ever achieve a Commander Data, at least not with algorithms alone, as the laws of mathematics don’t allow for the variables required by what would constitute ‘consciousness’ (you can’t break math; human thinking could be said to exist outside those boundaries).

      I am of the opinion that it is very problematic to represent it otherwise, and currently the hype train is going full speed ahead. It is very impressive computing, and it has its uses, but that’s all it is. Folks like Zuckerberg and Musk aren’t doing us any favors with their misguided hyperbole. Talking about future technologies as though they exist today is a bona fide trend at this point in Silicon Valley, and it really just obfuscates what is legitimately useful about some of the advances being made (and conversely overestimates the value of others!).

  2. brett_x

    if we develop AI technology too fast, without adequate controls, we could end up as we did with nuclear weapons: a lot of power that we struggle to keep under control.

    That analogy is great, and probably a good way to describe the problem to non-techies. The problem is indeed global. No one government can put controls in place to fix it. First, we have to agree that there is a problem.

    It’s pretty plain to me that since humans have biological limits and AI does not, if we do not limit what we allow AI to do, it will supersede us. We’re already there in some cases: the humans who write machine-learning algorithms can’t explain the outcomes that are produced. I just found a terrific (and frightening?) article describing this in MIT’s Technology Review:
    https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

    The article sides with the rational view that we need more understanding of what is happening. I kept thinking, “But at some point, couldn’t AI just learn to fool us, and tell us what we want to hear?”

    AI has no limits except for what we put on it. But we’re already at a point where we don’t understand the outcomes.

    So maybe we’re at a stage of understanding similar to when we first brought bacteria and viruses into labs, and many people got sick or died trying to study (and control) them. If so, then with AI we are bound to have “outbreaks” purely because we don’t know what we don’t know. Yet.

  3. iJack

    “So what’s the debate really about? I think those who worry, like Elon Mush, ponder two things in particular.”
    Something tells me that ‘Elon Mush’ is not a typo.
