Artificial Intelligence Research, Unintended Consequences and Sex

| Particle Debris

Research into artificial intelligence will evolve into far more applications than asking Amazon’s Echo how many teaspoons are in a tablespoon or driving an autonomous car. As the technology expands in capability and application, we’ll be confronted with massive social change. How will Apple, for example, both serve us and meet competitive challenges?


Back in August I referenced a terrific article by Steven Levy about Apple’s development of Siri: “Siri and Apple’s Machine Learning Are About to Get a Lot Better.” Levy was given an inside look at what Apple is doing with machine learning and the transformation of Siri. The interview was likely granted because there had been a lot of press about artificial intelligence (AI), and Apple’s traditional secrecy made it appear that the company wasn’t doing much.

Plus, the Amazon Echo was getting a lot of press, and Apple had nothing equivalent. Even if competing with the Echo is not in Apple’s plans, it certainly seemed like a good idea to show Levy that a lot of good things are happening at Apple with AI.

Humble Servants. For Now

But the scope and prognosis of AI have sweeping implications for our culture, far beyond a little cylinder that can order more cereal or play music on demand. We’re just getting started. And while Marc Andreessen points out (article #1 below) that the Amazon Echo project consumed 1,500 engineers for four years, the evolution of technology ensures that future projects will be far grander in scope while using merely equivalent resources.

This week I’ve been reading about the unintended consequences of AI. Instead of just one major article, here’s my reading list.

  1. Venture capitalist Marc Andreessen explains how AI will change the world
  2. Westworld-style sex with robots: when will it happen – and would it really be a good idea?
  3. 4 Ways Every Business Needs To Use Artificial Intelligence
  4. Exponential Intelligence: Microsoft Is Building the First AI Supercomputer

One theme of these articles is that AI is being developed because it can make big money. Right now, applications are fairly primitive: the kinds of things you can do with Siri, Cortana, or Alexa. As that technology evolves, there will be new applications that may or may not directly serve the consumer on a small scale. Consider the following examples.

Unintended Consequences

Persuasion. Robo callers that draw on a vast library of historical data and psychological techniques to sell products or persuade politically are coming. “Robo” used to mean a simple robotic phone dialer. In the future, unprepared people could be manipulated by vastly intelligent AI robos into a variety of legal and illegal schemes.

Drones. Independent, armed law enforcement drones are conceivable. Unless strict legislation intervenes, various governments could find it convenient and efficient to use AI drones for patrol and arrest, pending the arrival of a human police officer.

Sexual surrogates. Article #2 above goes into considerable ethical detail about human sex with androids. A licensed psychiatrist prescribing a sex surrogate is one thing; the purchase of a commercial android sentient enough to be pleasing yet have a mind of its own is quite another. What would be the legalities of a rape committed by the human?

Courts. AI agents that can scan case law on behalf of attorneys are likely. How will the legal system be affected by AI agents skilled enough to battle each other? Does an accused person without great financial resources have a right to an AI agent of prowess equal to the government’s?

Local and Federal Law. AI agents could scan and digest municipal laws on behalf of a citizen. Imagine a hypothetical case in which Siri, having just absorbed every law on file in the state of Colorado and the city of Denver, advises: “I see you’re about to use Apple Pay to purchase six pounds of fertilizer. The purchase of more than five pounds is illegal.” Will Siri cancel the transaction? If the customer insists, will (should) Siri report the violation of the law?
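To make the scenario concrete, here is a minimal sketch of how such a rule check might look. Everything in it is invented for illustration: the types, the toy rule base, and the advisory flow. No real Siri or Apple Pay API works this way.

```swift
// Hypothetical sketch of an AI purchase adviser. All types and rules
// are invented for illustration; this is not a real Siri or Apple Pay API.
struct LegalRule {
    let jurisdiction: String
    let item: String
    let maxQuantityLbs: Double
}

struct PurchaseIntent {
    let jurisdiction: String
    let item: String
    let quantityLbs: Double
}

// Returns an advisory message if the purchase exceeds a known limit.
func advise(on purchase: PurchaseIntent, rules: [LegalRule]) -> String? {
    for rule in rules {
        if rule.jurisdiction == purchase.jurisdiction,
           rule.item == purchase.item,
           purchase.quantityLbs > rule.maxQuantityLbs {
            return "The purchase of more than \(rule.maxQuantityLbs) pounds of \(rule.item) is illegal in \(rule.jurisdiction)."
        }
    }
    return nil // no applicable restriction found
}

// A toy rule base standing in for "every law on file."
let rules = [LegalRule(jurisdiction: "Colorado", item: "fertilizer", maxQuantityLbs: 5.0)]
let intent = PurchaseIntent(jurisdiction: "Colorado", item: "fertilizer", quantityLbs: 6.0)

if let warning = advise(on: intent, rules: rules) {
    print(warning) // the agent advises; whether it cancels or reports is a policy question
}
```

Note that the sketch only advises. Whether the agent cancels the transaction or reports the customer is a policy decision, not an engineering one, which is exactly the open question above.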

International Relations. Could a day come when the president, along with staff and the Secretaries of State and Defense, consults an AI agent on the impact of certain foreign policy decisions? As is often the case (in chess, for example), smart decisions can appear illogical on the surface. How would the U.S. president explain to the American people decisions that appear offensive but are, in fact, effective and brilliant?

Questions for Apple

We’re just at the beginning of the commercial exploitation of AI agents. The questions I have are:

  1. Will Apple build personal AI agents that are designed to protect us at all costs? This is related to the recent issues raised in the conflict between the FBI and Apple over encryption. What happens when we humans no longer understand what each side in the encryption battle is up to?
  2. How will Apple’s research lend itself to dealing with the AI industry at all levels? That is, will Apple limit its vision or, instead, become embroiled in the highest level of effort that only tech giants like Amazon, Google, and Microsoft can afford?
  3. As businesses of all kinds begin to utilize AI agents of lesser capability, will Apple’s AI work be able to properly defend us? Should there be a law that says AI agents above a certain class of sophistication (to be defined) cannot approach a human without his or her own AI representative?

The articles I’ve listed above go into many more issues. But perhaps, before I get a serious headache, it’s time to move on.

Next page: The Tech News Debris for the Week of October 3rd. Why we shouldn’t laugh at the AirPods.

One Comment

  1. aardman

    Pardon the duplication; I’m reposting a comment that I meant to put on this article.

    The problem with voice recognition, and this just happened to me with iMessages when I tried it again after reading this article, is that if you use a new word, it makes a mistake and you have to correct it manually. That defeats the purpose of using dictation. Okay, it’s too much to ask the app to know a word it hasn’t heard before, but there should be a way to make the correction through the voice interface as well.

    Now that seems to be a big problem for a silicon mind: how does iMessages’ natural language interface distinguish between a command and actual text without reserving words that signal that a command or text is about to follow? Humans easily make this distinction in conversation. A stenographer easily figures out whether what she is hearing is text to be transcribed or an aside, and if she makes a mistake, it is easily corrected through conversation. She infers intent; she has a good idea of what’s in the head of the person giving dictation.

    I think this is a challenge that pervades all of AI: inferring idiosyncratic intent. (A sketch of the naive reserved-word approach, and where it breaks, follows after this comment.)

    There is a much-reproduced experiment in psychology (a field notorious for irreproducible results) showing that humans, even as young as two years old, understand the concept of ‘people other than myself have their own thoughts and own sets of knowledge’ and, more importantly, can make accurate guesses about what another person is thinking and knows.

    Earlier this week, news came out that this experiment was run on bonobos, chimps, and orangutans, and the key finding is that they also exhibit this ability (well, maybe not to the extent that we do). Here:

    https://today.duke.edu/2016/10/falsebeliefs

    As far as I know, computers can’t do this, and as long as they can’t, AI systems will not be able to do the Star Trek-like things that people hope they can do in the future.
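The sketch referenced above: a minimal illustration of the crude reserved-word approach to telling commands apart from dictated text. The sentinel words and types are invented for this example, and this is not how Siri or iMessages actually works.

```swift
// Hypothetical sketch: separating commands from dictated text by
// reserving sentinel words. Not how Siri or iMessages actually works.
enum Utterance {
    case command(String)
    case text(String)
}

// Invented sentinel words "reserved" to signal a command.
let sentinels: Set<String> = ["correct", "delete", "send"]

func classify(_ spoken: String) -> Utterance {
    let firstWord = spoken.split(separator: " ").first.map { $0.lowercased() } ?? ""
    return sentinels.contains(firstWord) ? .command(spoken) : .text(spoken)
}

print(classify("delete the last sentence"))     // command, as intended
print(classify("Correct answers earn a prize")) // misclassified as a command
```

The failure in the last line is the commenter’s point: any reserved-word scheme steals vocabulary from dictation, so the system must instead infer intent from context, which humans do effortlessly and machines still cannot.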
