Apple and Others Will Build AIs That Will Be Hard to Challenge


What happens when AI machine learning becomes so sophisticated and inscrutable that humans can no longer understand how an AI came to a decision? AI processes will go far beyond simple structured code that can be debugged and audited. Will we just shrug and accept?


The AI Dilemma for Apple & Others

Will there come a time when we just have to sit back and trust the AI’s decisions? In the foreseeable future, will AIs like Siri that make life-and-death decisions be called to testify in court? Will even expert attorneys be qualified to properly interrogate such an AI? Will these AIs, which can pass the Turing test and newer tests like it, be deemed to have human rights? After all, we don’t even have that good an understanding of how humans make decisions.

The Particle Debris article of the week is from the MIT Technology Review: “The Dark Secret at the Heart of AI.” The subtitle explains: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” It is a long but thoroughly researched article, in which AI experts working in the field are interviewed and quoted. So if you want an authoritative rendering of the sticky issues at hand, this is a great place to start.

At the core of the issue is that as AIs become more sophisticated and start to learn on their own, they’ll develop judgments and values of their own, with little transparency. Right now, there is no way to trace and audit these deep decisions, so applying human values and experience to judge an AI’s decisions likely won’t work at first.
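
To make that concrete, here is a minimal sketch (in Python, with made-up weights; nobody’s real model) of why “just read the code” fails for learned systems. Every number below is fully visible, yet no single weight corresponds to a rule a human auditor could read off:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny two-layer network. Every value is inspectable; none is
# individually meaningful. Real networks have millions of weights.
W1 = [[0.42, -1.37, 0.05],
      [-0.88, 0.93, 1.21]]   # input -> hidden
W2 = [1.64, -0.71]           # hidden -> output

def decide(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

print(decide([1.0, 0.0, 2.0]))  # a confident-looking score, but why this score?

Scale that up by six or seven orders of magnitude, add training data no one has fully read, and debugging and auditing in the traditional sense stop being options.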

The author quotes Daniel Dennett at Tufts University in reference to AIs doing things we do not know how to do.

The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

In the end, it may be that all we can do is build great explanatory power into the AI, so that a subsection of the AI’s mind can explain to humans how it came to its conclusions and why it acted the way it did. That is, if financial motives or legal restrictions don’t get in the way.
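
What might that explanatory subsection look like? Here is a hedged sketch of one common approach: re-query the opaque model with one input zeroed out at a time, and report which inputs moved the answer most. Everything here (opaque_score, the feature names, the weights) is a hypothetical stand-in, not any shipping system:

import math

FEATURES = ["income", "debt", "payment history", "account age"]

def opaque_score(xs):
    # Stand-in for a black box the explainer cannot see inside.
    weights = [0.9, -1.4, 1.1, 0.2]
    return 1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(weights, xs))))

def decide_and_explain(xs, top_k=2):
    """Return a verdict plus the inputs that most influenced it."""
    baseline = opaque_score(xs)
    impact = []
    for i in range(len(xs)):
        ablated = xs[:i] + [0.0] + xs[i + 1:]   # "remove" input i
        impact.append((abs(baseline - opaque_score(ablated)), FEATURES[i]))
    reasons = [name for _, name in sorted(impact, reverse=True)[:top_k]]
    return ("approve" if baseline >= 0.5 else "deny"), reasons

print(decide_and_explain([0.6, 0.9, 0.4, 0.2]))  # -> ('deny', ['debt', 'income'])

It is crude (zeroing an input is not the same as the input being absent), but it shows the shape of the idea: the explanation comes from a second process watching the first, not from the opaque model introspecting itself.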

As an aside, part of that process means teaching AIs to converse with us, in any human language of our choice, in a meaningful, helpful way. See: “Can computers have ‘conversations’ with humans?”

One of the dangers in that process of educational dialog with AIs is that the machines might pick up the nuances of bias, racism and sexism in the course of learning from humans. Accidentally.

For more on that, see: “Robots are racist and sexist. Just like the people who created them.” The idea here is that our culture, our language, and our stories inherently contain these human foibles. We will have to work hard to avoid transferring them to AIs as those AIs gobble up an understanding of our cultures and languages. For example:

Human beings, after all, learn our own prejudices in a very similar way. We grow up understanding the world through the language and stories of previous generations. We learn that “men” can mean “all human beings”, but “women” never does…
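
That transfer is measurable. In this toy sketch, four hand-made word vectors (hypothetical numbers; real embeddings are learned from billions of words of our text and show the same pattern at scale) already pair occupations with genders, even though no such rule was ever written down:

import math

VECS = {
    "man":      [0.9, 0.1, 0.3],
    "woman":    [0.1, 0.9, 0.3],
    "engineer": [0.8, 0.2, 0.7],
    "nurse":    [0.2, 0.8, 0.7],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

for job in ("engineer", "nurse"):
    for who in ("man", "woman"):
        print(f"{job} ~ {who}: {cosine(VECS[job], VECS[who]):.2f}")
# "engineer" lands nearer "man" and "nurse" nearer "woman" -- the bias
# rides in on the geometry of the training text, not on any explicit rule.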

Another danger is that, unlike our simple iPhones and Macs, on which a company like Apple can superimpose its ethical values (privacy, encryption, etc.), AIs may not be amenable to the specific values of the corporation that builds them. For the first time, corporate differentiation and added value may be lost.

And so the challenge is to teach AIs our languages, values and our stories, teach them to communicate with us in a way that’s meaningful and helpful, teach them to avoid human shortcomings and, finally, design them to explain their own reasoning processes in ways we can accept.

No pressure.


8 Comments

  1. “Coalition for Better Ads”

    What are better ads? From my point of view, they are ones that don’t pop up over the page, auto-play video, or are otherwise very annoying. It is the advertisers’ fault that users install ad blockers.

  2. And so the challenge is to teach AIs our languages, values and our stories,

    And there is the issue in a nutshell. WHOSE values and stories are going to get imprinted on the next generation of AIs? American ones? Libertarian ones? Marxist ones? British ones? Maybe ones from Putin’s Russia or mainland China? Maybe there will be two: one we use, and one that gets activated when the Mother Country wants the robots to stop taking care of us and start to TAKE CARE of us.

  3. palmac

    Some people are still going overboard with the AI paranoia. First it was “If robots start thinking they’ll want to kill us,” which never made any sense. Then it was “AI will be so advanced we humans will be little more than pets to them,” which is also ridiculous. Now it’s “If I can’t understand every decision an AI makes then I will view it as a threat,” which is pretty much the same as “Anyone who doesn’t speak my language is evil!” The only real threat we have from AI is what people (flesh-and-blood humans) do with them. If a burglar uses an AI to rob a place, should we treat all AIs as possible thieves? (Watch the movie “Robot and Frank.”)

    Here are some thoughts to consider:

    If a group of AI robots designed to make things were to rebel, they wouldn’t turn violent (it’s not in their programming) but would rather commit random acts of construction.

    http://freefall.purrsia.com/ff2800/fv02776.gif
    http://freefall.purrsia.com/ff1400/fv01390.gif
    http://freefall.purrsia.com/ff3000/fc02949.png
    http://freefall.purrsia.com/ff3000/fc02952.png

  4. wab95

    John:

    You’ve provided a content-rich selection in this week’s PD, so time permitting, I may come back with another observation or two. For now, let’s address your main topic.

    The issues around AIs and their decision-making algorithms cannot be divorced from the broader context of accountability and safeguards (or perhaps, in the case of artificial intelligence, failsafes).

    We do not need to know how a human mind makes a decision, let alone how it summons the will to act on it or the determination to see it through, to permit that mind to make such decisions unobstructed. We, as a society, permit the expression of free will because it is checked, at least in civilised society, by two safeguards: accountability and the rule of law.

    Human volition can only be expressed without harm because humans are held accountable for their decisions and the rule of law sets boundaries around the domain of those decisions and imposes penalties when those bounds are overstepped, more so when they directly or indirectly harm other humans. Volition, as expressed by decision making, is never held harmless; nor does it operate outside of accountability, reward or punishment.

    It would, then, be the height of irresponsibility to unleash on an unsuspecting public an AI whose decision-making is not simply opaque, but is held harmless, lacking either accountability or the rule of law. This almost defines immorality. If nothing else, then at least those who design such AI need to be held to account, and be appropriately rewarded or punished for the effects of such AI decisions on human well-being. Nor can any AI, however benign, be unleashed without an override or a kill switch. Full stop.

    In summary, it is not AI that we should fear, but as ever, our own unwisdom and malevolence. This is encompassed in Laurie Penny’s piece on ‘Robots are racist and sexist…’.

    This also underscores why it is vital to elect educated and functionally literate representatives in democratic societies, and remove those who are ignorant, unwilling to learn, or either inert or hostile to fact, evidence or truth. Such individuals are nascent or active agents of incompetence and harm, not unlike an unqualified physician, and must have their licence to lead revoked at the ballot box. Rather, one requires legislators who are curious, current on major trends affecting society, and able to anticipate emerging threats and the new legislation required to contain them. AI designers will thus be restrained from experimenting, without consent, on an unsuspecting public, or from being otherwise unaccountable, irrespective of the AI’s decision-making algorithm, its accountability, or its constraints. Will Knight, in his AI piece, addresses the issue of accountability and pointedly observes that Nvidia’s self-driving car remains experimental precisely because its algorithm-driven decision-making remains opaque.

    Will Knight’s piece also talks about AI that makes fairly accurate diagnoses and predictions, like Mount Sinai’s Deep Patient. Apart from AI, we have therapeutics in medicine whose precise mechanisms of action and efficacy are not fully understood, yet we use them. Even here, this does not happen outside of close patient monitoring and broader patient surveillance, which provide the analytical power to identify uncommon but harmful effects. The same must be applied to AI that appears to anticipate patient problems and needs, but whose methods of ascertainment are not understood. Surveillance and individual patient monitoring and intervention provide the essential checks and balances that decrease the level of risk from what is not only not understood, but is by definition unpredictable. Not doing so would make Mount Sinai not only irresponsible, but potentially criminally negligent for failing to apply the standard of care. Again, safeguards exist for dealing with uncertainty; they just need to be applied to novel solutions like Deep Patient.

    There is hope yet for humanity.

  5. wab95: Nice comment.

    This also underscores why it is vital to elect educated and functionally literate representatives in democratic societies, and remove those who are ignorant, unwilling to learn, or either inert or hostile to fact, evidence or truth.

    Unfortunately, with a populace that increasingly is not, we elect more and more leaders who are not either, or who are demagogues promising to do the impossible. A less knowledgeable electorate elects people who got to their positions of power by virtue of their ability to sway the less knowledgeable, and who are therefore unwilling to do anything to fix it. I find optimism a bit hard to come by.

  6. vpndev

    “What surprises me here is that companies can easily predict the bad outcome from the discovery of their bad actions. But they keep doing things like this anyway, perhaps believing that they’ll never get caught. But they always do.”

    Actually, no. The only ones we catch are the ones we discover. We might think we discover them all, but I’m sure we do not. The rest get off scot-free.

  7. The dreams and inventions of a few controlling the masses? Mankind has a sacred flaw that will play into the AI equation, and that is fear. That human flaw will eventually bring not just AI but all of technology’s advances to a screeching halt, and possibly reverse direction to abate the fear.

    We also face technology getting too far ahead and becoming ignored, rejected, or unusable by the masses. Ironically, simple advances like instant-on OSes don’t materialize, and connectivity is a beast with many limits and flaws, from distance to available spectrum to greed. Greed will always be an enemy at war with the masses. People are reaching the point now where technology is becoming a big expense, even with incredibly low prices.

    Finally, there is another kink in the plans of those who want to control the masses, one being ignored or fluffed off for lack of insight by the powers that be: the IoT. The IoT is a world of its own, and it offers the thinkers (who may not necessarily be the educated) a chance to invent, build, and provide technologies that the masses can and will use because they will better understand them. If anything, the next 10-20 years will see us going backwards before the next great leaps are made. We are approaching precarious times as man and machine edge closer to a tipping point. I’m putting my money on morality. Morality is ready to make a major comeback globally.

  8. masterconductor

    The current Mac Pro should become the new Mac mini, with corresponding consumer hardware that is user-upgradable/replaceable.
