We tend to speak about Artificial Intelligence (AI) in terms of the pinnacle of its potential evolution, and that’s a problem.
One article showcases the current debate about AI’s potential for evil: “Elon Musk fires back at Mark Zuckerberg in debate about the future: ‘His understanding of the subject is limited’.”
On Sunday afternoon, while smoking some meats in his back garden, Zuckerberg, the Facebook CEO, questioned why Musk, the CEO of Tesla and SpaceX and a co-founder of OpenAI, was being so negative about AI.
What they’re debating is the future potential for AIs that can, for all practical purposes, duplicate and then go far beyond the capabilities of the human mind, and that, in addition, possess the ability to interact with human beings for good or evil.
We’re NOT There Yet
Of course, AIs today are very limited. We discover those limitations when we realize that AI demos typically only address one or two specific tasks. Like playing chess. Or driving a car on the roadways—in traffic. Our daily interactions with Siri confirm that AI’s limits.
So what’s the debate really about? I think those who worry, like Elon Musk, take two things as near-certainties.
- Computer hardware and AI development will continue until we do reach the holy grail of, essentially, artificial superminds.
- There won’t be any way to control who gets to use advanced AI technologies. After all, look at how another technology, the Internet, has already been used for evil purposes.
Consider Safari: Apple has built a sophisticated web browser that serves us well and tries to protect us, yet there’s no way to perfectly protect the user when dedicated minds, the hackers, try to subvert the good uses of Safari for financial gain or other purposes.
Moreover, even though Apple has, for example, joined the Partnership on AI consortium, there’s no guarantee that the knowledge or ethics developed there will be applied only for good purposes everywhere on Earth.
So then the question boils down to the limits of human capabilities. I don’t think anyone doubts that we’ll get smart enough to build an entity like Star Trek’s Lt. Commander Data. See NASA’s page on the science of Star Trek:
At a conference on cybernetics several years ago, the president of the association was asked what is the ultimate goal of his field of technology. He replied, ‘Lieutenant Commander Data.’ Creating Star Trek’s Mr. Data would be a historic feat of cybernetics, and it’s very controversial in computer science whether it can be done.
So how long will this take? If it takes us another 100 years to build a Lt. Commander Data, unforeseen events—war, climate change, and cultural shifts—could prevent that kind of evolution from ever happening. On the other hand, if we develop AI technology too fast, without adequate controls, we could end up as we did with nuclear weapons: a lot of power that we struggle to keep under control.
In the end, I think both Mr. Zuckerberg and Mr. Musk have equally good points. In Mr. Zuckerberg’s favor, AI technology will do a lot to help us out in the short term, limited in scope as it is. In the long run, however, Mr. Musk has a great point: our species hasn’t been able to control its worst instincts on the current-day internet. What will we have to do as a species to avoid the worst possible fate, massive AI evil inflicted on ourselves?
That’s what we’re in the process of finding out.