As Siri Gets Smarter, How Will We Learn to Trust It?


Serious work, driven by competition, is underway to develop Siri into a better artificial intelligence. Pioneering work is being done on how Siri, in the future, will assess the accuracy of its information. When the human-machine conversation gets really sophisticated, will Siri be able to judge its own authoritativeness? Will we?


Today, we ask Siri for some basic information and it scans databases and knowledge graphs. The assumption is that the data is accurate. So if Siri looks up a sports score, we assume that the original data that was entered is correct.

As Siri becomes more intelligent and conversational, the AI will consult more complex databases and answer more sophisticated inquiries. Siri will need a way to assess the credibility of its own sources. This process, related to Apple’s recent acquisition of Lattice Data, is nicely explained in this article: “Apple is shoring up Siri for its next generation of intelligent devices.”
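To make the idea concrete, here is a minimal, purely illustrative sketch of one way an assistant might weigh conflicting answers by per-source credibility. The function name, the sources, and the credibility scores are all invented for this example; they do not describe how Siri or Lattice Data actually work.

```python
# Hypothetical sketch: pick the answer whose supporting sources carry
# the most total credibility. All names and scores are invented.
from collections import defaultdict

def most_credible_answer(claims):
    """claims: list of (answer, source_credibility) pairs.
    Returns the best-supported answer and its share of total belief."""
    weight = defaultdict(float)
    for answer, credibility in claims:
        weight[answer] += credibility          # accumulate support per answer
    best = max(weight, key=weight.get)         # answer with most total support
    return best, weight[best] / sum(weight.values())

claims = [
    ("Warriors 113, Cavaliers 91", 0.9),   # e.g. an official league feed
    ("Warriors 113, Cavaliers 91", 0.7),   # e.g. a major news outlet
    ("Warriors 110, Cavaliers 95", 0.2),   # e.g. an unverified blog
]
answer, confidence = most_credible_answer(claims)
```

The key point is that the assistant surfaces not just an answer but a confidence in it, which is exactly the self-assessment question the article raises.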

The Human Process

Human beings go through their own 18-year process, starting at birth, of learning how to assess the value and credibility of information. By the time a human is a young adult, they’re usually pretty good at connecting reality (as perceived) to their own internal decision, knowledge, and physics models.

We’re very familiar with defects in the physics model. It sometimes starts with the declaration: “Hey, guys! Watch this!” The result is often a Darwin Award. Lately, the phenomenon of fake news has challenged people’s ability to assess the credibility of various sources and interpret what they’re told. What will be the impact when AIs are the source?

As the symbiotic relationship with our AIs progresses, both participants will be challenged to make these judgments. Conceivably, AIs could develop their own perceptions and models that have a subtle, almost imperceptible divergence from that of the human partner. How might that go bad for us?

More worrying is research into the self-judgments people make. Studies have shown that the most intelligent people tend to have the best-founded doubts about their knowledge and decision processes, while the less intelligent a person is, the more over-confident they are in their judgments. How will an AI influence that process?

Add to the mix that AIs, when asked to explain their reasoning, could actually develop the ability to lie about it, further muddying the waters. That’s why some people are so concerned about how AIs will interact with humans. See: “How to keep AI from killing us all.”

Setting Limits

Currently, there are no government regulations dictating any kind of standard for how humans will interact with AIs. Organizations like the Partnership on AI, of which Apple is a member, have been formed with this mission:

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

It remains to be seen what kinds of mistakes remain to be made, how governments will view these AI agents, and how our culture, thinking and actions may be affected by AIs. One thing is for sure. It will be an adventure.

W. Abdullah Brooks, MD

John: Great thought piece. I confess that I haven’t had time to read most of your citations, but hopefully will get to some of these at the weekend. In the meantime, at the risk of doing so in the blind, let me address a few quick thoughts to your question, ‘how will we learn to trust AI?’. Before doing so, let’s define, for the purpose of this argument, what is meant by ‘trust’. What trust is not, is blind and uncritical acquiescence to a norm, or merely following observed crowd behaviour. This is sometimes confused with trust, when it is… Read more »


Again, very respectfully, if we are smart, for all but the most trivial matters, we *won’t*.


“Pioneering work is being done on how Siri, in the future, will assess the accuracy of its information. When the human-machine conversation gets really sophisticated, will Siri be able to judge its own authoritativeness?” Well, finally now, some in the AI industry are getting to the heart of the matter. This is where the AI rubber meets the road. No amount of machine learning is going to solve the problem of how a machine is to know how much it knows, and perhaps more critical, how much it does not know. Without this knowledge, machines will be forever dumb, solipsistic… Read more »