What Do We Really, Really Want From Siri?

Siri is just good enough that it makes us think about where it could go next. I have questions.

Hey Siri! I’m gonna …

Apple’s Siri is an Artificial Intelligence (AI) agent. It was introduced in 2011 and has gotten incrementally better ever since. But Siri, being an AI, always raises the question: where can she (he) go next? What should Siri be able to do? What ought to be its ultimate manifestation? And what should its limits be?

The event that got me thinking was reported in Particle Debris for February 1st, 2019.

A 13-year-old boy told Siri that he planned a school shooting. Siri’s response wasn’t reported, but the youngster took a screenshot of the conversation and posted it to social media. That’s how his intentions were discovered and reported.

For openers, we discussed this event on TMO’s Daily Observations Podcast for February 4th, 2019. The premise starts with the idea that, someday, Siri might have to directly handle such a dangerous situation. That raises many questions.

  1. Given that AIs will get much better and more intuitive in the future, should a personal AI assume the responsibility to report a planned, imminent crime?
  2. To whom should Siri report its concerns? Parent? Teacher? Police? All?
  3. Should Siri always obey Asimov’s Three Laws of Robotics? (For example, never allowing its human companion to come to harm.)
  4. Should conversations with Siri be privileged and protected, as with a priest or attorney?
  5. Who gets to decide the answers to the above questions?

Lifting Limits

What are the limits? What do we want them to be?

Today, we excuse Siri’s failures by pointing to the limits of AI technology, the hardware, and internet speeds. And no doubt there are artificial constraints placed on Siri as well. For example, if you ask Siri on an Apple Watch “What time is it?” she (he) will answer out loud. But if you ask “What’s my pulse?” Siri will launch the Heart Rate app, show it to you, and remain silent.

This could be because Apple engineers have determined that personal health data should not be spoken aloud, given that there may be inappropriate bystanders. Or perhaps Siri doesn’t have direct access to health and fitness data. Or both. As time goes on, should we expect Siri to gain appropriate access to that data and also know when it’s permitted to speak out loud?

This article, which I’ve cited in Particle Debris, asks a related question. “Are Home Health Aides The New Turing Test For AI?” That is, can we judge the sophistication of an AI not by the Turing Test but rather by how it handles its owner’s medical situations?

What does it mean for a machine to be intelligent? For decades, the common answer to that question has been to pass the “Turing test.” This test, named after famed mathematician Alan Turing, says that if a machine can carry on a conversation with a human via a textual interface such that the human can not tell the difference between a human and machine, then the machine is intelligent….

But there’s a problem: we were able to create chatbots that could pass the Turing test a long time ago. We know that the intelligence they display is narrow and limited….

MIT’s Rodney Brooks proposes new ways of thinking about AGI [Artificial General Intelligence] that go way beyond the Turing test…. what he calls ECW. By this he does not mean a friendly companion robot, but rather something that offers cognitive and “physical assistance that will enable someone to live with dignity and independence as they age in place in their own home.”

In short, robot/AI companions of the future may have to make intelligent, informed, compassionate decisions about the health and welfare of the human companion. Or, at least, confer wisely with another responsible human on a health or law enforcement emergency.

From Creepy to Pleasing Astonishment

Many would call this kind of emergent sophistication creepy, especially since there’s always a temptation for the developer to exploit our most personal chats with an AI against us. (Or for those with a warrant to do so.) Or for financial gain.

But assuming we can solve those kinds of problems, I would prefer we transition from creepiness to astonishment. That is, will we be constantly amazed at how good Siri is getting? Or must we always be generally disappointed by its limitations?

If the goal of AI research is to produce an intelligence that is indistinguishable (magically) from another human being, then we’ll have to grapple with many uncomfortable technical, privacy, and legal decisions about their design. How we approach that will dictate whether our future AIs become astonishingly smart, competent, and responsible, or just perpetually disappointing.

What’s our preference as humans?

3 thoughts on “What Do We Really, Really Want From Siri?”

  • John:

    Congratulations on a very thoughtful and thought-provoking piece. Indeed, I believe this is one of your most succinct, simple yet comprehensive analyses on the subject of Siri as an AI, and warrants a thoughtful response.

    Answering the question, what do we want from Siri, with coherence or even in a way that builds consensus is no mean feat in a period during which we lack consensus on what we either expect or want from our technology, not to mention our tech providers. Broadly speaking, there are three categories of fault lines, largely geopolitical in nature, that influence our expectations for tech in general, and will affect those for AI and Siri in particular.

    The first is the tension between retreating to our past or moving forward into our future; the one a known if not rose-coloured safe haven of identifiable historical successes, the other an unknown and possibly high risk venture and with an unknown probability of failure, accompanied by an uneasy and complex relationship with change. This is often reduced to the label of conservative vs liberal, but this does an injustice to the complexity and fluidity of this tension.

    The second is societal character; closed vs open, authoritarian vs liberal, inequitable vs equitable distribution of rights and privileges, which combined, create a range of aspirations and expectations as diverse as the planet itself.

    Finally, there is the tension between individual liberty and public safety and security, resulting in precedent in how this tension is balanced, even if that balance differs by society. All three of these, in turn, circumscribe what AI solutions will be acceptable, at the ground level, in these same settings.

    To traverse such a diverse and challenging landscape will require substantial technical and even legalistic agility and athleticism from tech companies. In simple terms, expect no one-size-fits-all solution in the near term. With that caveat, here are a few ideas of what our global tech giants might do. While attempting to be as generic as possible for the broadest application, I will confine my comments primarily to the more liberal and open societies that protect individual liberty. Here are three broad expectations or aspirations for AI in general, and Siri in particular, that are achievable in the near future.

    The first is for Siri to leverage the increased biometric feedback that it is already acquiring and will increasingly acquire via wearable tech (eg AW), applying integrative algorithms not only to verbal input but also to biometric signals and universal micro-gestures (eg unconscious facial muscle and eye movements) to read and interpret intent and mental status. Thus, if there is a discrepancy between verbal statement and biometry, Siri might safely ignore it or appropriately intervene with the user as needed. Otherwise, by combining these inputs, Siri will become progressively more accurate in reading the user. This is a practical application of MIT’s Rodney Brooks’ revised Turing Test.

    The second is an almost Hippocratic commitment to accommodating human diversity without doing harm to the user. We’ve previously discussed the encroachment of human bias into AI http://fortune.com/longform/ai-bias-problem/ which could, intentionally or unintentionally, treat different populations differently, resulting in a higher rate of negative interactions between the AI and certain populations. The best way of accommodating this human diversity in AI/Siri is by ensuring representativeness in the making of AI. This is one of the benefits of having multiple nations working on AI development, but in each case the human inputs into AI’s algorithms and solutions need to be as representative as the populations they serve in order to avoid bias. This aspiration plays to Apple’s strengths and corporate values, and is one that we can expect from Siri.

    The third aspiration for Siri is increased user control over what a progressively more capable and advanced Siri is permitted to do with the individual user. With greater power comes greater responsibility. This should be regulated to individual taste by informed consent; enabling the user to grant or deny consent for Siri to monitor and act upon verbal and biometric feedback, including involuntarily should the user become incapacitated, and specifying who should be notified (ie the terms of confidentiality). In cases where a person is legally ineligible to consent, then a parent, caregiver or authorised representative can provide consent, witnessed if necessary, for those settings.

    In the end, the level of invasiveness and intervention by Siri should follow established social precedent. Where jurisprudence has already ruled on that balance between individual liberty, privacy, confidentiality on the one hand, and public safety and security on the other, Siri acting as our personal guardian should act accordingly, but with the user’s consent, as with all responsible guardianship. In this complex landscape outlined above, Siri will likely mirror these social fault lines so as to avoid legal challenge as well as to enhance social acceptance. Some of these defaults might well differ between societies. The exception may be with authoritarian societies, in which rising popular expectations of greater individual liberty might well push back against social control and conformity, even if by means of guerrilla hacking.

    If any company can accomplish this, it would be Apple.

  • Tempting though it might be to see Siri as moral arbiter, I’d settle for Siri being half as smart as it was when it was purchased 10 years (??) ago. It was far smarter on the phone as a stand-alone app. I appreciate that getting it to do anything in as many languages as it does is a colossal achievement, for which Apple never gets due credit, but we all have a feeling that something went wrong somewhere and that clever Siri went away, never to be seen again.
