The Real Threat from AI Technology: Not Knowing How it Thinks


A recent article genuinely starts to approach the real danger of Artificial Intelligence (AI): we’ll have zero visibility into how AI technology thinks or into its internal cultural values.

Alicia Vikander as the AI/android Ava in Ex Machina. Image credit: Universal Pictures International.

Here is a perfectly splendid article I want to examine in more detail.

The premise is that science fiction has lured us into being concerned about intelligent androids that could be a physical danger to us when, in fact, the dangers are more subtle and hard to identify. But physical danger is at the core of an exciting movie. After millions of years of evolution, humans are well adapted to discerning and avoiding physical danger. These kinds of movies strike a chord with us. We’re entertained. But, the article above points us to a different kind of danger.

Namely, how the AI thinks.

Human Decision Making

I’ve mentioned this danger before, in a different context, when I wrote about the human decision-making process: “How Can We Tell if Our Love for Apple is Logical or Biased?”

There, I alluded to the Nobel Prize-winning Princeton psychologist Dr. Daniel Kahneman and his research on cognitive biases: “Kahneman: Your Cognitive Biases Act Like Optical Illusions.”

In the process of summing up some of Dr. Kahneman’s work, in the context of the Michael Lewis book The Undoing Project, Robert Cringely made the following astute observation.

What Kahneman [and Tversky] figured out is that we have ancient brains that generally don’t do the math because math has only been around for 5,000 years or so while we as a species have been walking the Earth for 2+ million years. Our decision-making processes, such as they are, are based on maximizing survival not success. Early hunter-gatherers weren’t so bothered with optimization as just not being eaten. This put an emphasis on making decisions quickly.

In today’s technical environment, a different kind of thinking is called for.

The alien robot from Netflix’s Lost in Space. Communicating with alien minds can also be dangerous. Image credit: Netflix.

Danger, Will Robinson

The article at The Verge interviews the co-founder and CEO of Clara Labs, Maran Nelson. This excerpt gets to the heart of the matter. (Nelson in italics.)

Almost every time people have played with the idea of an AI and what it will look like, and what it means for it to be scary, it’s been tremendously anthropomorphized. You have this thing — it comes, it walks at you, and it sounds like you’re probably going to die, or it made it very clear that there’s some chance your life is in jeopardy.

Yes.

The thing that scares me the most about that is not the likelihood that in the next five years something like this will happen to us, but the likelihood that it will not. Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information.

So the idea of HAL from 2001 is distracting people from what the actual threats are.

Very much so.

Zero Visibility Into AI Technology Minds

Humans have grown up with each other for a long time. We’ve learned how to size up others. We know how they tend to think because we’ve all evolved on the same planet. With allowances for the fact that humans arose in different geographical regions, speak different languages, dress differently, and look different, there is a common culture of humankind. We live and work together with values based on the fact that we’re human. We breathe, we love, we have children, we desire basic human rights, we want to be useful and respected.

When we see another human engaged in some activity, we tend to understand what motivates them. (If they’re mentally healthy.) Our discussions with them, even across languages, are informed by our common ground as humans. It’s all we’ve known until now.

With AI technology, however, even the simple process of engaging effectively with an AI is tremendously difficult. The algorithms, the substance of how the AI thinks, can be, and likely will be, completely alien.

Asking an AI for an answer is easy. Asking “why do you think that” is more problematic.
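To make that concrete, here is a minimal sketch in Python (using scikit-learn purely as an illustration; the data, the model, and the example case are hypothetical stand-ins, not anything from the article above). Getting an answer out of a trained model is a one-line call. Getting a “why” is not: all the model can offer is its learned weights.

    # A minimal illustration with made-up data: a trained classifier happily
    # hands back an answer, but there is no call for "why do you think that?" --
    # only stacks of opaque learned weights.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))             # 200 made-up cases, 10 features each
    y = (X[:, 0] + X[:, 3] > 0).astype(int)    # a rule the model must infer on its own

    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    new_case = rng.normal(size=(1, 10))
    print("Answer:", model.predict(new_case)[0])             # easy: a yes/no decision
    print("Confidence:", model.predict_proba(new_case)[0])   # easy: a probability
    print("Why?:", [w.shape for w in model.coefs_])          # just weight matrices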

Worse, AIs are being developed by giant tech companies with specific agendas. We’ll have no idea what those agendas will be. But making money will likely be at the top of the list.

Humans tend to assume that computers are infallible. In a recent podcast with award-winning roboticist Dr. Ayanna Howard, she told me (as I recall) that adults will often obey robot directives, not because the humans are naturally subservient, but because they surmise that the thinking machine before them is superior and infallible. How and why we’ll trust AIs is a major research issue.

If we can’t achieve common ground with our new AI technology, understand what drives them, understand their quirks, interrogate them on their reasoning and values, then it’s very likely indeed that advanced AIs will find it all too simple to direct us, manipulate us, and intimidate us into revising our thinking and beliefs. All for someone else’s benefit.

Don’t think it can happen? It already has.

8 thoughts on “The Real Threat from AI Technology: Not Knowing How it Thinks”

  • In the excerpt from the interview on the Verge, there is a bit of a misrepresentation that I believe is important in the conversation.

    …there are people who are building risk models about what they can do with money. Then they’re giving those risk models, which are in effect models like the ones that are powering Facebook News Feed and all of these other predictive models, and they’re giving them to bankers. And they’re saying, “Hey, bankers, it seems like maybe these securitization loans in the housing, it’s going to be fine.”

    In reality, the creator(s) of the math were quite insistent that the bankers were *misusing* the math *before* the collapse of the financial markets. (There was an excellent article about this in Wired shortly after the market implosion.)

    This is important because of the aforementioned idea that we can just “pull the plug”. The people who have the ability to pull said plugs are likely not going to have a financial incentive to do so. In fact, their financial incentives are probably designed to keep the plugs in the sockets for as long as possible, and certainly longer than prudent, because the AI is making them boatloads of cash.

    As long as the gain is privatized and the risk is socialized, as it was in the 2008 collapse, and will be in the next one, why would they pull the plug?

  • John:

    This is an excellent set of readings. They provide an insight into the interplay between machine learning (algorithmic predictive computation), as articulated by Maran Nelson, and human decision-making and judgment optimised not for success or correctness but survival, relying on impulse and undercut by confirmation bias and uninformed intuition (fast vs slow thinking), as uncovered by the work of Daniel Kahneman and Amos Tversky – two decision-making and predictive processes that human intellect assumes are similar, but are in fact alien.

    Nelson is correct in pointing out that this is the real danger from so-called AI; not the malevolent or cold superior conscious mind that views all human minds, individual and collective, as inferior and in need of management, but rather the opacity and uncertainty with which AI ‘plays with our data’ to make predictions that are inhuman in both process and values-base, and often wrong. This is not harm being done by ‘artificial intelligence’ but by simple machine stupidity.

    Human decision making, even when impulsive, is driven in large part by a desire for survival, as Kahneman and Tversky point out, but it is easily defeated by bias. It is counter-balanced, however, by our being a social species, relying on our shared values and our will to survive and progress. At the core of our intellect is consciousness and self-awareness, fed by a relentless desire for self-preservation and aspiration. These are shared human characteristics. Machine learning is artificial, but not intelligent. It possesses neither consciousness nor our shared drive for self-preservation. It is incapable of aspiration, noble or sordid. It mimics human intelligence through mathematically derived speech, and lulls us into a false sense of shared ‘likeness’, but possesses the intelligence of an insect at best, lacking even that level of consciousness.

    Imagine, for a moment, basing any important decision on the algorithm-driven manipulation of a cockroach being led by a series of mathematical manipulations through a maze of infinite options, each a binary branch point selected by that roach, to a final binary choice – on or off – with no shared values, concern, or even awareness of the consequences to you of its choice.

    However, because AI carries the moniker ‘intelligence’, we assume that it ‘thinks’ like a conscious mind. In fact, it mimics thought through programmed, algorithmically driven heuristics, primarily in response to human input from end users. We acquiesce to its choice because we think that it is like us in thought, perhaps even superior, as The Verge article points out, but it is ‘cognitively’ closer to that cockroach than it is to anything human.

    I concur with you that there is no ‘pulling the plug’ on AI. There is no one ‘intelligence’ (and let’s not even start on the ‘singularity’), but multiple products, each distributed to a limitless array of devices, each with its ultimate impact on an individual human.

    Humans will need to become more savvy users and consumers of these products, just as they have become with PCs; understanding both their power and promise, as well as their limitations and pitfalls, in order to achieve that equipoise between human and interactive but inanimate machine capability.

  • Agreed, John. How long will it be before “fake news” is dropped from the lexicon because no one can tell the difference between real and contrived information? The calculator debate of the 1970s was dropped as calculator use became the norm and it became inefficient to manually enter pricing information at the “cash” register. Many cannot do the simple math to count change. I rail against technology companies because I don’t know the character of the people programming the devices, not because I don’t like technology. Who is programming an autonomous car, and with what information? Privacy be damned, is the thing safe? HAL had the utmost confidence in the mission, but was it the mission Frank and Dave expected?

  • And when Deep Blue “does a debate” or “plays Jeopardy”, it is like an advanced Siri doing that. It might be given some representative form such as a physical case, but this is like the Alexa cylinder. It’s a way to give people something to look at. But Deep Blue is not “there” in the room. It connects back to data centers like Siri or Alexa does.

    And like Siri is given canned responses, a human being sprinkles in little show biz responses to add drama and excitement, such as the comment about “oh, you are speaking 218 words per minute.” This is like the Siri canned responses. A human being sets that up, and literally even sets that it is only done once and not on every question.

    And when Deep Blue or some other AI does a debate or a game of Jeopardy, it’s a public relations matter for the engineers on that team. This team of humans works for months preparing the system with answers and data and conversational logic for that specific scenario. They work to try to get their system to pass a Turing test of sorts.

    There’s no “there” there. You end up with cool moments and an impressive event, but it’s basically marketing / show biz. If Deep Blue just won Jeopardy and you said “hey Deep Blue let’s play Wheel of Fortune” it would have zero success until a team of programmers and marketing people added in a bunch of canned responses, logic, etc to play Wheel of Fortune. Basically, the work it takes to write an app that could play Wheel of Fortune is what you would have to do for Deep Blue to play it, and you’d have to also design and add canned responses etc.

    It’s a manual, human, and non-AI process to produce “AI.”

    1. I agree. But the problem is not one super-master AI in the sky that, like a tyrant, manipulates society with a single voice. That can be controlled. Pull the plug.

      The real problem is a billion instances of AIs in our smartphones, smart speakers, and finally robots/androids that answer questions, direct us to information, provide counsel and commentary. These billions of conversations will shape how the human customers feel and think. And because the humans perceive the AI as a superior mind, they won’t notice how their own human thoughts and values have been slowly modified by an alien influence.

      1. The interaction with those things IS a concern, John. You are very right. I don’t fear the AI bits themselves as much as the “man behind the curtain” designing it all. Those men will add their own biases.

        Case in point: as soon as an orange political genius from NYC and reality TV used Twitter to get himself elected as President of the United States, we saw Twitter change to an algorithm-based feed instead of a timeline-based feed. Suddenly, voices that agreed with the new president were being silenced.

        And that revealed what the real purpose of the new algorithm-based feed was.

        It plays to your point about how we can have our attention focused by algos, and goes further to show how the humans behind the algos can use them for nefarious purposes.

  • “AI” such as Siri, Big Blue, or Alexa is really just a branding name that companies give to a collection of data, databases, algorithms, etc.

    It is not a brain in a jar. It’s racks of servers in a building. Actually in many buildings called data centers. These buildings full of servers need human beings to keep mining coal and fixing solar panels to run. Humans fix leaks in the roof. Humans fix circuit breakers and cooling systems.

    Point is, it’s all dependent on constant human maintenance. And even then “it” is just databases and data. Like a search engine.

    Any danger from AI comes from how it could extend an evil human being’s power. It brings us back to mankind. If it could help an evil human build more effective bombs or more effectively plan attacks against us, that’s the danger of AI and all computers. But there is zero danger of a server farm full of databases and APIs becoming “conscious” and evil.

    People need to get a grip and stop the sensationalism.

  • Much ado about nothing. Humans can always “pull the plug” if things get out of control.
    Meanwhile, Deep Blue just won a debate – the first time a computer out-foxed a human in this form of subjective battle. One curious thing: after one woman rebutted with a rapid-fire response, instead of rebutting right away, Deep Blue said something like, “you are speaking at the extremely fast rate of 218 words per minute, there is no need to hurry.”
