As Siri Gets Smarter, How Will We Learn to Trust It?


Serious, competition-driven work is underway to develop Siri into a better artificial intelligence. Pioneering work is being done on how Siri, in the future, will assess the accuracy of its information. When the human-machine conversation gets really sophisticated, will Siri be able to judge its own authoritativeness? Will we?


Today, we ask Siri for some basic piece of information, and it scans databases and knowledge graphs. The assumption is that the data is accurate. So if Siri looks up a sports score, we assume that the original data was entered correctly.

As Siri becomes more intelligent and conversational, the AI will draw on more complex databases and answer more sophisticated inquiries. Siri will need a way to assess the credibility of its own sources. This process, related to Apple’s recent acquisition of Lattice Data, is nicely explained in this article: “Apple is shoring up Siri for its next generation of intelligent devices.”
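
To make that idea concrete, here is a minimal sketch, in Swift, of what confidence-weighted answering could look like. Everything below (the type names, the sources, the threshold) is hypothetical and not Apple’s actual Siri architecture: the point is simply that each candidate answer carries a credibility score from its source, and the assistant hedges when the best score falls below a cutoff.

```swift
// Hypothetical sketch of confidence-weighted answering.
// None of these types reflect Siri's real internals.

struct Source {
    let name: String
    let credibility: Double  // 0.0 (untrusted) ... 1.0 (authoritative)
}

struct Candidate {
    let answer: String
    let source: Source
}

struct Assistant {
    // Below this score the assistant discloses its own uncertainty.
    let confidenceThreshold = 0.8

    func respond(to candidates: [Candidate]) -> String {
        // Prefer the answer backed by the most credible source.
        guard let best = candidates.max(by: { $0.source.credibility < $1.source.credibility }) else {
            return "I don't have an answer for that."
        }
        if best.source.credibility >= confidenceThreshold {
            return best.answer
        }
        // Self-assessed doubt: answer, but flag the weak sourcing.
        return "\(best.answer) (low confidence; source: \(best.source.name))"
    }
}

let candidates = [
    Candidate(answer: "The final score was 3-1.",
              source: Source(name: "league database", credibility: 0.95)),
    Candidate(answer: "The final score was 2-1.",
              source: Source(name: "fan forum", credibility: 0.40)),
]
print(Assistant().respond(to: candidates))  // "The final score was 3-1."
```

The hard part, of course, is where those credibility numbers come from in the first place; that is the self-assessment problem the article is pointing at.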

The Human Process

Human beings go through their own 18-year process, starting at birth, of learning how to assess the value and credibility of information. By the time a human is a young adult, they’re usually pretty good at connecting reality (as perceived) to their own internal decision, knowledge, and physics models.

We’re very familiar with defects in the physics model. It sometimes starts with the declaration: “Hey, guys! Watch this!” The result is often a Darwin Award. Lately, the phenomenon of fake news has challenged people’s ability to assess the credibility of various sources and interpret what they’re told. What will be the impact when AIs are the source?

As the symbiotic relationship with our AIs progresses, both participants will be challenged to make these judgments. Conceivably, AIs could develop their own perceptions and models that diverge subtly, almost imperceptibly, from those of their human partners. How might that go badly for us?

More worrying is research into the self-judgments people make. Research has shown that the most intelligent people have well-founded doubts about their knowledge and decision processes, while the less intelligent tend to be overconfident in their judgments, a pattern known as the Dunning-Kruger effect. How will an AI influence that process?

Add to the mix that AIs, when asked to explain their reasoning, could actually develop the ability to lie about it, further muddying the waters. That’s why some people are so concerned about how AIs will interact with humans. See: “How to keep AI from killing us all.”

Setting Limits

Currently, there are no government regulations dictating any kind of standard for how humans will interact with AIs. Organizations like Partnership on AI, of which Apple is a member, have been formed to do the following:

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

It remains to be seen what kinds of mistakes will be made, how governments will view these AI agents, and how our culture, thinking, and actions may be affected by AIs. One thing is for sure: it will be an adventure.

Comments

  1. aardman

    “Pioneering work is being done on how Siri, in the future, will assess the accuracy of its information. When the human-machine conversation gets really sophisticated, will Siri be able to judge its own authoritativeness?”

    Well, finally, some in the AI industry are getting to the heart of the matter. This is where the AI rubber meets the road. No amount of machine learning is going to solve the problem of how a machine is to know how much it knows and, perhaps more critically, how much it does not know. Without this knowledge, machines will be forever dumb, solipsistic algorithm executors, no matter how sophisticated they may outwardly appear.

    Humans, on the other hand, somehow develop this ability to self-assess their knowledge level. Well, check that, some humans. (I will resist the urge to make fun of voters of a particular candidate, or viewers of a particular news network.) This first occurred to me in grad school, when presented with problem exercises that were related to, but not straightforward extensions of, the material covered in class. Somehow, you develop an intuition or gut feel about whether the solution you worked out is correct or not, even though it’s the first time you’re tackling that type of problem. How does that come about? It’s a process of thinking about what you’re thinking about, usually done concurrently. Okay, let me make that more concrete with an example. It’s a process of constantly evaluating your approach to solving homework problem No. 3 at the very same instant that you are trying to solve homework problem No. 3. And when you’ve completed your solution, how are you able to say “I think this is right” or the opposite?

    So that’s Big Problem 1 for AI. Here’s Big Problem 2 for AI. Make that for AI that is meant to converse with humans:

    How does a machine get to know (or at least get a pretty good idea of) what the particular human it is talking with knows and is thinking about? This is indispensable to efficient and effective communication. Humans have this ability. Infants have it at rudimentary levels. Heck, some animals, especially primates, have it. Machines? Have you ever been locked in that loop of frustration when Siri misunderstands your question?

    I don’t think the answers to these questions will come from the field of AI proper. AI researchers will have to work with neuroscientists and philosophers of mind. (Oh god, isn’t that in the useless humanities, not STEM?)

  2. wab95

    John:

    Great thought piece. I confess that I haven’t had time to read most of your citations, but I hope to get to some of them at the weekend.

    In the meantime, at the risk of going in blind, let me address a few quick thoughts to your question, ‘how will we learn to trust AI?’.

    Before doing so, let’s define, for the purpose of this argument, what is meant by ‘trust’. What trust is not, is blind and uncritical acquiescence to a norm, or merely following observed crowd behaviour. This is sometimes confused with trust, when it is nothing of the sort. Rather, this herd-like behaviour is transient, superficial and fickle. Think adolescent fashion trends. A cool kid sports something new, and soon every kid swears by it and has got to have it simply to fit in. Tomorrow, a cool kid brings something new, and the former cool thing is unceremoniously discarded and disavowed. In adults, you can substitute possessions, politicians or practices. This herd-like behaviour is never about the object, but about self-interest and seeking safety in numbers. This is not trust.

    Trust is a dynamic and augmentative relationship. It begins unobtrusively with an offer, a promise or a proposal that, if sufficiently non-threatening and low-risk, is accepted because the potential cost is acceptable. If the promise is fulfilled, and benefit or gain sufficiently outweighs cost, then trust is born, and over time it is nurtured and augmented. Trust can also be born out of an imperative risk/benefit calculus. Think of a tenant standing atop a burning high-rise with a fire team below holding a safety net. A snap calculation of risk (certain death by flame) versus benefit (possible survival in the net) compels trust in the fire crew. Trust is an imperative. Everybody jumps. This is unlikely to apply to AI, so let’s focus on the former.

    Trust is not superficial or subject to mercurial fancy, but born out of experience. This suggests that for AI tech companies, including Apple, Google, MS, or Amazon, the AI offer has to be sufficiently non-threatening and come at an acceptable level of risk to be willingly accepted. I mentioned yesterday on your ‘AI cyber nanny’ piece that threat and the perception of hazard are increased when our control and choice are removed. Professionals who manage threats and hazards know that, to put a population at relative ease, some measure of information, control and choice must be provided to minimise panic and preserve cooperation and order. We are more likely to accept AI that does not threaten us in any meaningful way and whose engagement is left to our discretion and choice. If we have control over the terms and choice of our engagement with AI, and it fulfils the clearly communicated promise that the tech companies make to us, then we are more likely to accept it and, in time, invest it with greater trust and greater responsibility, including in matters affecting our well-being.

    This also means that the promise must have two features: it must be clearly communicated, and it must be commensurate with the capacity of AI to deliver on that promise. A lapse in either of these will impair that trust. Trust will emerge from our experience of a promise fulfilled.

    Two things will impair that trust: mishap and betrayal. If the promise is inadvertently broken by mishap or misadventure, then the tech companies have to, immediately, acknowledge the problem, preferably apologise for it, fix the problem and make reparations for damages done (unless they too are seeking the Darwin Award). If the promise is intentionally broken due to human greed or malfeasance, such as a tech company using AI as a honey pot to obtain, without our knowledge or consent, information on us that is used to our detriment (e.g. collecting our health data and selling it to health insurance companies who then raise our insurance premiums), then that trust will be impaired or perhaps irretrievably lost, at least for that company or model.

    The point is, trust has to be earned. For it even to be born, the relationship must begin at an acceptable level of risk that can be adequately controlled by the client. Only if the clearly articulated promise of AI performance is kept will that trust grow, and greater responsibility, and trust, be invested in AI.
