The Real Threat from AI Technology: Not Knowing How It Thinks

Editorial

A recent article genuinely approaches the real danger of Artificial Intelligence (AI): we’ll have zero visibility into how AI technology thinks or into its internal cultural values.

Alicia Vikander in Ex Machina. Image credit: Universal Pictures International.

Here is a perfectly splendid article I want to examine in more detail.

The premise is that science fiction has lured us into worrying about intelligent androids that could pose a physical danger to us when, in fact, the dangers are subtler and harder to identify. But physical danger is at the core of an exciting movie. After millions of years of evolution, humans are well adapted to discerning and avoiding physical danger. These kinds of movies strike a chord with us. We’re entertained. But the article above points us to a different kind of danger.

Namely, how the AI thinks.

Human Decision Making

I’ve mentioned this danger before, in a different context, when I wrote about the human decision-making process: “How Can We Tell if Our Love for Apple is Logical or Biased?”

There, I alluded to the Nobel Prize-winning Princeton psychologist Dr. Daniel Kahneman and his research on cognitive biases: “Kahneman: Your Cognitive Biases Act Like Optical Illusions.”

Summing up some of Dr. Kahneman’s work in the context of Michael Lewis’s book The Undoing Project, Robert Cringely made the following astute observation.

What Kahneman [and Tversky] figured out is that we have ancient brains that generally don’t do the math because math has only been around for 5,000 years or so while we as a species have been walking the Earth for 2+ million years. Our decision-making processes, such as they are, are based on maximizing survival not success. Early hunter-gatherers weren’t so bothered with optimization as just not being eaten. This put an emphasis on making decisions quickly.

In today’s technical environment, a different kind of thinking is called for.
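To make that concrete, here is the kind of calculation our fast, survival-tuned intuition reliably gets wrong: a base-rate problem of the sort Kahneman and Tversky studied. Below is a minimal sketch in Python; the disease prevalence and test accuracy figures are illustrative assumptions, not numbers from the article or from Kahneman’s work.

# A small sketch of the math our ancient brains tend to skip:
# Bayes' rule applied to a classic base-rate problem.
# All numbers below are illustrative assumptions.
p_disease = 0.001            # 1 in 1,000 people have the condition
p_pos_given_disease = 0.99   # test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# Total probability of a positive test, across both groups.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.019

Fast intuition latches onto the 99 percent sensitivity and concludes that a positive test is near-certain proof; the slow arithmetic says the chance is only about 2 percent.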

Communicating with alien minds can also be dangerous. Image credit: Netflix.

Danger, Will Robinson

The article at The Verge interviews Maran Nelson, co-founder and CEO of Clara Labs. This excerpt gets to the heart of the matter. (Nelson in italics.)

Almost every time people have played with the idea of an AI and what it will look like, and what it means for it to be scary, it’s been tremendously anthropomorphized. You have this thing — it comes, it walks at you, and it sounds like you’re probably going to die, or it made it very clear that there’s some chance your life is in jeopardy.

Yes.

The thing that scares me the most about that is not the likelihood that in the next five years something like this will happen to us, but the likelihood that it will not. Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information.

So the idea of HAL from 2001 is distracting people from what the actual threats are.

Very much so.

Zero Visibility Into AI Technology Minds

Humans have grown up with each other for a long time. We’ve learned how to size up others. We know how they tend to think because we’ve all evolved on the same planet. With allowances for the fact that humans arose in different geographical regions, speak different languages, dress differently, and look different, there is a common culture of humankind. We live and work together with values based on the fact that we’re human. We breathe, we love, we have children, we desire basic human rights, we want to be useful and respected.

When we see another human engaged in some activity, we tend to understand what motivates them. (If they’re mentally healthy.) Our discussions with them, even across languages, are informed by our common ground as humans. It’s all we’ve known until now.

With AI technology, however, even the seemingly simple act of engaging them effectively is tremendously difficult. The algorithms, the substance of how the AI thinks, can be, and likely will be, completely alien.

Asking an AI for an answer is easy. Asking “Why do you think that?” is more problematic.
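As a rough illustration of that asymmetry, here is a minimal, entirely hypothetical sketch in Python. The toy model, its weights, and the features are invented for illustration; no particular product works this way. The point is that the answer comes back in one line, while the only “why” on offer is a bag of opaque numbers.

import random

random.seed(42)

# Hypothetical "trained" parameters for a toy 3-input classifier.
weights = [random.uniform(-1, 1) for _ in range(3)]
bias = random.uniform(-1, 1)

def answer(features):
    # Asking for an answer is easy: features in, yes/no out.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "yes" if score > 0 else "no"

def why():
    # Asking "why do you think that?" yields only raw parameters,
    # not reasons a human can interrogate.
    return {f"weight_{i}": round(w, 3) for i, w in enumerate(weights)}

print(answer([0.2, 0.9, 0.4]))
print(why())

Scale that gap up from three weights to billions, and you have the visibility problem in miniature.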

Worse, AIs are being developed by giant tech companies with specific agendas. We’ll have no idea what those agendas are. But making money will likely be at the top of the list.

Humans tend to overestimate the infallibility of computers. On a recent podcast, award-winning roboticist Dr. Ayanna Howard told me (as I recall) that adults will often obey robot directives, not because humans are naturally subservient, but because they surmise that the thinking machine before them is superior and infallible. How and why we’ll trust AIs is a major research issue.

If we can’t achieve common ground with our new AIs, understand what drives them, understand their quirks, and interrogate them about their reasoning and values, then it is very likely that advanced AIs will find it all too simple to direct us, manipulate us, and intimidate us into revising our thinking and beliefs. All for someone else’s benefit.

Don’t think it can happen? It already has.
