What happens when machine learning becomes so sophisticated and inscrutable that humans can no longer understand how an AI came to a decision? AI processes will go far beyond simple structured code that can be debugged and audited. Will we just shrug and accept?


The AI Dilemma for Apple & Others

Will there come a time when we simply have to sit back and trust the AI’s decisions? In the foreseeable future, will AIs like Siri that make life-and-death decisions be called to testify in court? Will even expert attorneys be qualified to properly interrogate such an AI? Will these AIs, which can pass the Turing test and newer such tests, be deemed to have human rights? After all, we don’t have that good an understanding of how humans make decisions either.

The Particle Debris article of the week is from the MIT Technology Review: “The Dark Secret at the Heart of AI.” The subtitle explains: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” This is a long but thoroughly researched article. AI experts working in the field are interviewed and quoted. So if you want an authoritative rendering of the sticky issues at hand, this is a great place to start.

At the core of the issue is that as AIs become more sophisticated and start to learn on their own, they’ll develop judgments and values of their own with little transparency. Right now, there isn’t a way to trace and audit these deep decisions, and so applying human values and experience to judge the AI’s decisions likely won’t work at first.

The author quotes Daniel Dennett of Tufts University in reference to AIs doing things we do not know how to do.

The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?

I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible. If it can’t do better than us at explaining what it’s doing, then don’t trust it.

In the end, it may be that all we can do is build great explanatory power into the AI, so that a subsection of the AI’s mind can explain to humans how it came to the conclusions it did and why it acted the way it did. That is, if financial motives or legal restrictions don’t get in the way.
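To make that notion of an “explanatory subsection” concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the linear scoring “model,” the loan-style feature names, and the weights are assumptions, and a real deep network would need far heavier machinery (surrogate models, attribution methods) to produce anything like this readout. The point is only the shape of the idea: a decision function that returns not just a verdict but a ranked account of what drove it.

```python
# Toy sketch: a model that reports *why* it decided, not just *what*.
# The linear model, feature names, and weights are invented for illustration.

def decide(features, weights, threshold=0.5):
    """Return a verdict, a score, and a human-readable account of the score."""
    # Each feature's pull on the final score.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    verdict = "approve" if score >= threshold else "deny"
    # The "explanatory subsection": rank factors by how hard they pulled.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    explanation = [f"{name}: {c:+.2f}" for name, c in ranked]
    return verdict, score, explanation

weights = {"income": 0.6, "debt": -0.8, "history": 0.5}
applicant = {"income": 0.9, "debt": 0.7, "history": 0.8}

verdict, score, why = decide(applicant, weights)
print(verdict, round(score, 2))  # deny 0.38
for line in why:
    print(" ", line)             # debt pulled hardest, against approval
```

For a transparent linear scorer this readout is exact; the unsolved problem the article describes is producing an honest equivalent for models whose “weights” number in the millions and interact nonlinearly.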

As an aside, part of that process means teaching AIs to converse with us, in any human language of our choice, in a meaningful, helpful way. See: “Can computers have ‘conversations’ with humans?”

One of the dangers in that process of educational dialog with AIs is that the machines might accidentally pick up the nuances of bias, racism, and sexism in the course of learning from humans.

For more on that, see: “Robots are racist and sexist. Just like the people who created them.” The idea here is that our culture, our language, and our stories inherently contain these human foibles. We will have to work hard to avoid transferring them to AIs as those AIs gobble up an understanding of our cultures and languages. For example:

Human beings, after all, learn our own prejudices in a very similar way. We grow up understanding the world through the language and stories of previous generations. We learn that “men” can mean “all human beings”, but “women” never does…
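The mechanism behind that quote can be shown with a toy sketch in Python. The five-sentence “corpus” below is invented for illustration; real systems absorb the same statistical skew from billions of sentences, just far less visibly. Nothing here is racist or sexist by design: the skew is simply sitting in the co-occurrence counts, which is exactly how it rides along into a learned model.

```python
# Toy sketch: bias transfers to machines via plain language statistics.
# The tiny "corpus" is invented for illustration only.
from collections import Counter

corpus = [
    "the doctor said he would operate",
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the teacher said she explained it",
    "the doctor said he was busy",
]

def pronoun_skew(word):
    """Count which pronouns co-occur with a word in the toy corpus."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts["he"] += tokens.count("he")
            counts["she"] += tokens.count("she")
    return counts

print(pronoun_skew("doctor"))  # skews entirely toward "he" here
print(pronoun_skew("nurse"))   # skews entirely toward "she"
```

A model trained on such text never sees a rule that says “doctors are men”; it just inherits the counts. That is why avoiding the transfer takes deliberate work rather than good intentions.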

Another danger is that, unlike our simple iPhones and Macs, on which a company like Apple can superimpose its ethical values (privacy, encryption, etc.), AIs may not be amenable to the specific values of the corporation that builds them. For the first time, corporate differentiation and value-adds may be lost.

And so the challenge is to teach AIs our languages, our values, and our stories; teach them to communicate with us in a way that’s meaningful and helpful; teach them to avoid human shortcomings; and, finally, design them to explain their own reasoning processes in ways we can accept.

No pressure.

Next Page: The News Debris For The Week Of April 17th.


The current Mac Pro should become the new Mac mini, with corresponding consumer hardware that is user-upgradable/replaceable.


The dreams and inventions of a few controlling the masses? Mankind has a sacred flaw that will play into the AI equation, and that is fear. That human flaw will eventually bring not just AI but all of technology’s advances to a screeching halt, and possibly reverse direction to abate the fear. We also face technology getting too far ahead and becoming ignored, rejected, or unusable by the masses. Ironically, simple advances don’t materialize, like instant-on OSes; connectivity is a beast with many limits and flaws, from distance to available spectrum and greed. Greed will always be…


“What surprises me here is that companies can easily predict the bad outcome from the discovery of their bad actions. But they keep doing things like this anyway, perhaps believing that they’ll never get caught. But they always do.”

Actually, no. The only ones we catch are the ones we discover. We might think we discover them all, but I’m sure we do not. Those other ones get off scot-free.


wab95: Nice comment. This also underscores why it is vital to elect educated and functionally literate representatives in democratic societies, and to remove those who are ignorant, unwilling to learn, or either inert or hostile to fact, evidence, or truth. Unfortunately, with a populace that increasingly is not, more and more leaders get elected who are not either, or who are demagogues promising to do the impossible. A less knowledgeable electorate elects people who got to their positions of power by their ability to sway the less knowledgeable, and who are therefore unwilling to do anything to fix it…


John: You’ve provided a content-rich selection in this week’s PD, so time permitting, I may come back with another observation or so. For now, let’s address your main topic. The issues around AI and their decision-making algorithms cannot be divorced from the broader context of accountability and safeguards (or perhaps, in the case of artificial intelligence, failsafes). We do not need to know how a human mind makes a decision, let alone summons the will to act on it or the determination to see it through, to permit that mind to make such decisions unobstructed. We, as society, permit…


Some people are still going overboard with the AI paranoia. First it was “If robots start thinking they’ll want to kill us,” which never made any sense. Then it was “AI will be so advanced we humans will be little more than pets to them,” which is also ridiculous. Now it’s “If I can’t understand every decision an AI makes then I will view it as a threat,” which is pretty much the same as “Anyone who doesn’t speak my language is evil!” The only real threat we have from AI is what people (flesh and blood humans) do with them…


And so the challenge is to teach AIs our languages, values and our stories,

And there is the issue in a nutshell. WHOSE values and stories are going to get imprinted on the next generation of AIs? American ones? Libertarian ones? Marxist ones? British ones? Maybe ones from Putin’s Russia or mainland China? Maybe there will be two: one we use, and one that gets activated when the Mother Country wants the robots to stop taking care of us and start to TAKE CARE of us.

Lee Dronick

“Coalition for Better Ads”

What are better ads? From my point of view, they are ones that don’t pop up over the page, auto-play video, or are otherwise very annoying. It is the advertisers’ fault that users install ad blockers.