What Do We Really, Really Want From Siri?

3 minute read | Editorial

Siri is just good enough that it makes us think about where it could go next. I have questions.

Hey Siri! I’m gonna …

Apple’s Siri is an Artificial Intelligence (AI) agent. It was introduced in 2011 and has gotten incrementally better ever since. But Siri, being an AI, always raises the question: where can she (he) go next? What should Siri be able to do? What ought to be its ultimate manifestation? And what ought to be its limits?

The event that got me thinking was one reported in Particle Debris for February 1st, 2019.

A 13-year-old boy told Siri that he planned a school shooting. Siri’s response wasn’t reported, but the youngster took a screenshot of the conversation and posted it to social media. That’s how his intentions were discovered and reported.

For openers, we discussed this event on TMO’s Daily Observations Podcast for February 4th, 2019. The premise starts with the idea that, someday, Siri might have to directly handle such a dangerous situation. That raises many questions.

  1. Given that AIs will get much better and more intuitive in the future, should a personal AI assume the responsibility to report a planned, imminent crime?
  2. To whom should Siri report its concerns? Parent? Teacher? Police? All?
  3. Should Siri always obey Asimov’s Three Laws of Robotics? (For example, never allowing its human companion to come to harm.)
  4. Should conversations with Siri be privileged and protected, as with a priest or attorney?
  5. Who gets to decide the answers to the above questions?

Lifting Limits

What are the limits? What do we want them to be?

Today, we excuse Siri’s failures by pointing to the limits of AI technology, the hardware, and internet speeds. And no doubt there are also artificial constraints placed on Siri. For example, if you ask Siri on an Apple Watch “what time is it?” she (he) will answer out loud. But if you ask “What’s my pulse?” Siri will silently launch the Heart Rate app and show it to you.

This could be because Apple’s engineers have determined that personal health data should not be spoken aloud when bystanders might overhear. Or perhaps Siri doesn’t have direct access to health and fitness data. Or both. As time goes on, should we expect Siri to have judicious access to such data and to know when it’s appropriate to speak it out loud?
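Purely as an illustration, here is a minimal sketch in Swift of the kind of sensitivity gate that Watch behavior implies. Everything in it is hypothetical: DataSensitivity, ResponseMode, and AssistantPolicy are made-up names, not real SiriKit APIs, used only to show the idea of routing health data to a silent, on-screen response while letting innocuous answers be spoken aloud.

    import Foundation

    // Hypothetical sketch only: a sensitivity gate deciding whether an
    // assistant may speak a response aloud or must show it silently.
    // None of these types are real SiriKit APIs.

    enum DataSensitivity {
        case general   // e.g. the current time
        case personal  // e.g. calendar details
        case health    // e.g. heart rate, medical records
    }

    enum ResponseMode {
        case speakAloud
        case displaySilently
    }

    struct AssistantPolicy {
        /// Decides how a response is delivered, given the sensitivity of the
        /// data and whether the user is likely in a private listening context
        /// (headphones connected, wrist raised to the ear, and so on).
        func delivery(for sensitivity: DataSensitivity,
                      inPrivateContext isPrivate: Bool) -> ResponseMode {
            switch sensitivity {
            case .general:
                return .speakAloud
            case .personal:
                return isPrivate ? .speakAloud : .displaySilently
            case .health:
                // Health data is never spoken aloud in this sketch,
                // mirroring the Watch behavior described above.
                return .displaySilently
            }
        }
    }

    // "What time is it?" vs. "What's my pulse?"
    let policy = AssistantPolicy()
    print(policy.delivery(for: .general, inPrivateContext: false))  // speakAloud
    print(policy.delivery(for: .health,  inPrivateContext: false))  // displaySilently

The point isn’t the code itself; it’s that somewhere a policy like this has to decide which answers may be spoken and which must stay on the screen.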

This article, which I’ve cited in Particle Debris, asks a related question: “Are Home Health Aides The New Turing Test For AI?” That is, can we judge the sophistication of an AI not by the Turing Test but by how well it handles its owner’s medical situations?

What does it mean for a machine to be intelligent? For decades, the common answer to that question has been to pass the “Turing test.” This test, named after famed mathematician Alan Turing, says that if a machine can carry on a conversation with a human via a textual interface such that the human can not tell the difference between a human and machine, then the machine is intelligent….

But there’s a problem: we were able to create chatbots that could pass the Turing test a long time ago. We know that the intelligence they display is narrow and limited….

MIT’s Rodney Brooks proposes new ways of thinking about AGI [Artificial General Intelligence] that go way beyond the Turing test…. what he calls ECW. By this he does not mean a friendly companion robot, but rather something that offers cognitive and “physical assistance that will enable someone to live with dignity and independence as they age in place in their own home.”

In short, the robot/AI companions of the future may have to make intelligent, informed, compassionate decisions about the health and welfare of their human companions. Or, at the very least, confer wisely with another responsible human in a health or law-enforcement emergency.

From Creepy to Pleasing Astonishment

Many would call this kind of emergent sophistication creepy. Especially as we know there’s always a temptation for the developer to use our most personal chats with an AI against us. (Or for those with a warrant to do the same.) Or to exploit them for financial gain.

But given that we can solve those kinds of problems, I would prefer to transition from the concept of creepy to one of astonishment. That is, are we constantly amazed at how good Siri is getting? Or must we remain generally disappointed by its limitations?

If the goal of AI research is to produce an intelligence that is indistinguishable (magically) from another human being, then we’ll have to grapple with many uncomfortable technical, privacy, and legal decisions about how such AIs are designed. How we approach that will dictate whether our future AIs become astonishingly smart, competent, and responsible, or just plain perpetually disappointing.

What’s our preference as humans?

Comments

wab95:
John: Congratulations on a very thoughtful and thought-provoking piece. Indeed, I believe this is one of your most succinct, simple yet comprehensive analyses on the subject of Siri as an AI, and warrants a thoughtful response. Answering the question, what do we want from Siri, with coherence or even in a way that builds consensus is no mean feat in a period during which we lack consensus on what we either expect or want from our technology, not to mention our tech providers. Broadly speaking, there are three categories of fault lines, largely geopolitical in nature, that influence our expectations…

gGrant:

Tempting though it might be to see Siri as moral arbiter, I’d settle for Siri being half as smart as it was when it was purchased 10 years (??) ago. It was way smarter on the phone as a standalone app. I appreciate that Apple getting it to do anything in as many languages as it does is a colossal achievement, for which Apple never gets due credit, but we all have a feeling that something went wrong somewhere and that clever Siri went away, never to be seen again.

palmac:

I want Siri to sing, as in “Siri, sing me the latest Apple EULA to the tunes of the Beatles’ Sgt. Pepper album.”