A recent article starts to genuinely approach the real danger of Artificial Intelligence (AI): we'll have zero visibility into how the AI thinks and what its internal cultural values are.
Here is a perfectly splendid article I want to examine in more detail.
The premise is that science fiction has lured us into being concerned about intelligent androids that could be a physical danger to us when, in fact, the dangers are more subtle and harder to identify. But physical danger is at the core of an exciting movie. After millions of years of evolution, humans are well adapted to discerning and avoiding physical danger. These kinds of movies strike a chord with us. We're entertained. But the article above points us to a different kind of danger.
Namely, how the AI thinks.
Human Decision Making
I’ve mentioned this danger before in a different context when I wrote about the human decision-making process. “How Can We Tell if Our Love for Apple is Logical or Biased?”
There, I alluded to the Nobel Prize-winning Princeton psychologist Dr. Daniel Kahneman and his research on cognitive biases. “Kahneman: Your Cognitive Biases Act Like Optical Illusions.”
In the process of summing up some of Dr. Kahneman’s work, Robert Cringely, writing about Michael Lewis’s book The Undoing Project, made the following astute observation.
What Kahneman [and Tversky] figured out is that we have ancient brains that generally don’t do the math because math has only been around for 5,000 years or so while we as a species have been walking the Earth for 2+ million years. Our decision-making processes, such as they are, are based on maximizing survival not success. Early hunter-gatherers weren’t so bothered with optimization as just not being eaten. This put an emphasis on making decisions quickly.
In today’s technical environment, a different kind of thinking is called for.
Danger, Will Robinson
The article at The Verge interviews the co-founder and CEO of Clara Labs, Maran Nelson. This excerpt gets to the heart of the matter. (Nelson in italics.)
Almost every time people have played with the idea of an AI and what it will look like, and what it means for it to be scary, it’s been tremendously anthropomorphized. You have this thing — it comes, it walks at you, and it sounds like you’re probably going to die, or it made it very clear that there’s some chance your life is in jeopardy.
The thing that scares me the most about that is not the likelihood that in the next five years something like this will happen to us, but the likelihood that it will not. Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information.
So the idea of HAL from 2001 is distracting people from what the actual threats are.
Very much so.
Zero Visibility Into AI Technology Minds
Humans have grown up with each other for a long time. We’ve learned how to size up others. We know how they tend to think because we’ve all evolved on the same planet. With allowances for the fact that humans arose in different geographical regions, speak different languages, dress differently, and look different, there is a common culture of humankind. We live and work together with values based on the fact that we’re human. We breathe, we love, we have children, we desire basic human rights, we want to be useful and respected.
When we see another human engaged in some activity, we tend to understand what motivates them. (If they’re mentally healthy.) Our discussions with them, even across languages, are informed by our common ground as humans. It’s all we’ve known until now.
With AI technology, however, even the simple process of engaging effectively with these minds is tremendously difficult. The algorithms, the substance of how the AI thinks, can be, and likely will be, completely alien.
Asking an AI for an answer is easy. Asking “why do you think that?” is more problematic.
Worse, AIs are being developed by giant tech companies with specific agendas. We’ll have no idea what those agendas are. But money-making will likely be at the top of the list.
Humans tend to overestimate the infallibility of computers. In a recent podcast with award-winning roboticist Dr. Ayanna Howard, she told me (as I recall) that adults will often obey robot directives, not because humans are naturally subservient, but because they surmise that the thinking machine before them is superior and infallible. How and why we’ll trust AIs is a major research issue.
If we can’t achieve common ground with our new AI technology, understand what drives it, understand its quirks, and interrogate it on its reasoning and values, then it’s very likely indeed that advanced AIs will find it all too simple to direct us, manipulate us, and intimidate us into revising our thinking and beliefs. All for someone else’s benefit.
Don’t think it can happen? It already has.