New Apple Strategy: Partner with Microsoft For Future Battles


Particle Debris

Page 2 – News Debris For The Week of May 7th

Getting Very Creeped Out

• We’ve seen the robot called Atlas from Boston Dynamics before. But what I’m pondering this week is the confluence of these two videos. First, hold this new video in your mind.

and then view this video, demoing Google Duplex.

These two videos together suggest that the eventual marriage of these two technologies will make for some, um, interesting changes to our lives in the future.

Speaking of Google Duplex, there has been some strong feedback after that demo by Google CEO Sundar Pichai at the I/O 2018 conference. Are we in for a lifetime of evil deception by this kind of AI agent? See: “Google’s AI sounds like a human on the phone — should we be worried?”

More questions to ask: Is Google Duplex another Google Glass, doomed from the start by social forces? Why is it Google, and not Apple or Microsoft, making this splash? Is Google seizing the AI high ground, or is it blinded by an improper vision that will eventually surrender to a calmer, smarter vision from Apple? If something can be done, should it always be done? Is this just normal technical progress, with new processes and social behavior to be sorted out later? If you know the answers, chime in with comments below.

More Debris

• The Electronic Frontier Foundation (EFF) has a note on a new, bipartisan bill: “The Secure Data Act Would Stop Backdoors.”

The bipartisan Secure Data Act would stop any government agency or court order from forcing a company to build backdoors into encrypted devices and communications.

The EFF likes this bill. You should too.

• Why do humans have emotions? This next article posits that emotions are necessary for survival. Okay, then, should robots/androids have emotions? See: “How Long Until a Robot Cries?” If not, should they at least be engineered to read and respond to human emotions? Here’s an excerpt.

But if our emotional states are indeed mechanical, they can be detected and measured, which is what scientists in the field of affective computing are working on. They’re hoping to enable machines to read a person’s affect the same way we display and detect our feelings—by capturing clues from our voices, our faces, even the way we walk. Computer scientists and psychologists are training machines to recognize and respond to human emotion.
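If you’re curious what “training machines to recognize and respond to human emotion” can look like in practice, here is a minimal, purely hypothetical sketch in Python with scikit-learn. The voice features, the labeling rule, and the data below are all invented for illustration; real affective computing systems use far richer signals (spectral audio features, facial landmarks, gait), but the basic shape of the problem is the same: features in, emotion label out.

```python
# Toy, illustrative sketch of emotion classification from voice-style features.
# All features, labels, and data are synthetic assumptions, not a real pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-utterance features: mean pitch (Hz), loudness, speaking rate (words/sec)
n = 600
X = np.column_stack([
    rng.normal(180, 40, n),    # mean pitch
    rng.uniform(0.1, 1.0, n),  # normalized loudness
    rng.normal(2.5, 0.7, n),   # speaking rate
])

# Toy labeling rule standing in for human-annotated emotion labels:
# higher-pitched, louder, faster speech gets tagged "agitated".
y = np.where(X[:, 0] + 200 * X[:, 1] + 40 * X[:, 2] > 400, "agitated", "calm")

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

The hard part in any real system, of course, isn’t the classifier but collecting trustworthy emotion labels in the first place, which is presumably where the psychologists mentioned in the excerpt come in.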

• Above, I’ve looked at some technologies from Apple’s competitors, as always, to put what Apple does in perspective. After all, Apple is the most valuable company in the world in terms of market cap, and the story of its success is never-ending. So it is appropriate to close here with an in-depth article over at Hodinkee Magazine: “Apple, Influence, And Ive.”

Apple Watch Series 3 with LTE

It never creeps us out. Sheer joy.

Early in the article, Jony Ive explains:

I don’t look at watches for their relationship to popular culture, which I know is so much of the fun – but rather as somehow the distillation of craft, ingenuity, miniaturization, and of the art of making.

Herein are keen insights into the thinking that went into the Apple Watch, and it’s very good stuff.

The contrast between the craft of the Apple Watch, as described above, and Google’s special projects, which always seem to creep us out, is fascinating to watch.


Particle Debris is generally a mix of John Martellaro’s observations and opinions about a standout event or article of the week (preamble on page one), followed on page two by a discussion of articles that didn’t make the TMO headlines, the technical news debris. The column is published most every Friday except for holiday weeks.

13 Comments

  1. geoduck

    In the (now long forgotten) series seaQuest DSV, I remember a line about how “Apple buys Microsoft and…” It was a throwaway line the writers included because this was the bad old days of Apple at $12/share and declining sales. The Pippin years. I’d find a good deal of satisfaction in Microsoft and Apple teaming up.

    The Google AI that sounds so human is interesting. In the BBC article (http://www.bbc.com/news/technology-44081393):
    “The demo was called “horrifying” by Zeynep Tufekci, an associate professor at the University of North Carolina who regularly comments on the ways technology and society impact on each other. In a tweet, Prof Tufekci said the idea of mimicking human speech so realistically was “horrible and obviously wrong. This is straight up, deliberate deception. Not okay.” Prof Tufekci said she was surprised that the WaveNet project got as far as a public demonstration and wondered why it had not been quashed internally during development.”

    My first thought was how stupid a response this was. The technology exists. Someone is going to develop it. Someone is going to use it. I’d rather it be Google than some foreign power. We’ve just experienced an election influenced by foreign-made bots on social media sites. At least we know about these voice bots. We can react to them, learn to be on our guard. I’d rather any technology potentially this disruptive be out in the open. The bad guys will try to use it secretly.

    • aardman

      I agree that you cannot un-know what is already known, so no point in complaining that Google developed this technology. But what really is the point of a machine that simulates the conversational style of a real human being other than to deceive people, perhaps not overtly if people are informed that they are talking to a machine, but subliminally through subconscious emotional manipulation? I shudder to think about the number of people who will be scammed using this technology. Is the landscape of the future one that requires hyper-vigilance against fake human voices, fake photos, fake video, and all other sorts of maliciously deceptive modes of communication?

      • geoduck

        But take this technology out of the world of robocalls. This technology would go a long way toward making androids that could interact with humans conversationally. Even before a Commander Data, it would be nice if Siri or Alexa were something you could chat with. Hold a conversation with. There would be no deceit. You’d know you were talking to Siri, but Siri could interact more ‘normally’ than it does now. I’m very impressed with the technology. As Wernher von Braun said, “Science is like a knife. Give it to a surgeon or a murderer and each will use it differently.” It is up to us to make sure the technology is used ethically.

  2. aardman

    Should robots/androids have emotions? I read the cited article and felt so strongly about it, I had to comment, which for convenience I’m copying below. Sorry if it’s a little long:

    The qualia problem. Qualia is the conscious, subjective ability to feel sensation, to feel ‘what it’s like’ to experience something. I think emotion is ultimately based on feeling pain and pleasure, both of the psychic and the physical (physiological?) varieties. Can machines ever have qualia? Can they ever feel pain and pleasure? Maybe once emotion detection is perfected, robots can be programmed to relate and communicate more ‘authentically’ to humans. But there is no true empathy there, only the simulation thereof.

    There is no true emotion without qualia, and this is a problem when it comes to robotics. We have people amongst us who have very high intellects and feel no or very little emotion. We call them sociopaths. In fact the really intelligent sociopaths are able to simulate the outward indicators of emotion, especially empathy, making them very good at manipulating people to bend to their will. And since they feel no pain, especially that type of psychic pain called guilt, sociopaths are also basically amoral because isn’t feeling pain the root motivator of moral and ethical formation?

    So when it comes down to it, this is what this emotion-detecting and emotion-simulating robot project is all about: building highly intelligent sociopaths. Maybe that’s okay if all we want them to do is, say, stand in front of Disney World and greet visitors as they enter, engaging them in small talk and making them feel welcome and excited. But the talk is all about putting robots in tasks where they will be required to make decisions that involve making moral and ethical choices: driving a car through traffic (the famous ‘whose lives get saved in an unavoidable crash?’ question), taking care of the elderly, and other highly complex situations that can result in harm to people (and animals) if the wrong decisions are made.

    I am amazed that all the boffins in the article who talk about building ’emotional robots’ don’t ever mention qualia. Without qualia, all they are really talking about is the machine simulation of emotion, not the real thing. And as I said, that might be a serious problem if robots are put in situations where idiosyncratic on-the-fly moral choices are called for.

    Oh, and the ‘ethical treatment of robots’? Without qualia, they’re still just machines no different from your toaster. Just because scientists and engineers in the future are able to simulate emotions and consciousness, that doesn’t raise them to the same moral plane as humans, or animals even.

    • aardman

      Ha ha, that last line above. Let me rephrase:

      Just because scientists and engineers in the future are able to build machines that simulate emotions and consciousness, that doesn’t raise those machines to the same moral plane as humans, or animals even.

      Although the original wording also is something worth thinking about, eh?

    • geoduck

      FWIW my dad was a mechanic. We were taught to not abuse machines. That it was ethically wrong to deliberately inflict harm, be it on another person, or an animal, or the engine in your car. Slamming the door was wrong not because of the noise but because it was hard on the door and the house. So to me at least I have trouble with the “no different than your toaster” part. I would no more abuse a toaster than I would my cat.

      I saw an interesting experiment on the web. It was a box on the beach. On top was a red button. As someone approached, the box would greet them and explain the whole experiment. It would then converse with them. If the person hit the red button, it would cut power and turn the box off. However, as the person got closer to the button, the box would plead for its life. When the person was farther away, it would explain that it was just a machine, it felt no pain, it had no life. It could not die. I don’t know the source of this video. I just thought it was an interesting experiment. And no, I would not hit the button, not because of the pleading, I’d just find it too interesting to converse with the AI.

      The qualia question is interesting. Because on a philosophical level, who is to say we all aren’t just complex systems programmed to feel empathy, emotion, and pain? As an actor I’ve pondered this many times. In a play once, my character had to hate another character so much he shot him. I had to simulate the emotion authentically enough that it was indistinguishable from the real thing. The other actor and I got along great backstage, but in that scene I had to feel rage, and he had to feel like the smug, sarcastic jackass who didn’t think my character had the guts to shoot. When you start having to simulate powerful emotions like this, authentically enough that the audience believes them for that scene, you start to wonder about all emotion. Maybe while we really feel the emotion, it is just an evolved, programmed process. In a real sense, what is emotion?

      • aardman

        Please don’t jump to the conclusion that when I say machines lie on a lower moral plane than humans or animals, that means I’m declaring open season for abusing and destroying machines. There is no argument there. To me destroying or even abusing a machine that is perfectly useful and beneficial (not just operational, but useful) is unethical. Destroying it is a waste of resources and abusing it is an affront to the people who worked hard to design and build the machine. And even with non-serviceable machines any person who derives pleasure from bashing it to bits is a little disturbed.

        I actually agree that on some philosophical level humans can be seen as complex systems programmed (by evolution and experiential learning) to feel empathy, emotion, and pain. But that’s the thing: we actually **feel** empathy, emotion, and pain. Machines don’t. And it’s quite a stretch to imagine how they might plausibly do so in the future. Now there are some people who think that if you build a computer with enough transistors then consciousness will somehow emerge as an inevitable outcome of complexity. (They can’t really explain how it might happen, so it’s more like naively extrapolating to silicon the observed correlations of brain size and intelligence in the animal kingdom.)

        In a real sense, what is emotion? There are technical definitions for emotion that are pretty good at least for defining what scientists are trying to learn about through experimentation, but you are right, at the bottom of it all, it’s hard to pin it down to the extent that you can pin down what a femur is. But that’s the whole problem with qualia, it’s why psychologists, neuroscientists, philosophers of mind, evolutionary biologists etc. are all over the place about it and its significance. A neuroscientist can explain with some detail what happens in the eyes and the brain when light of a wavelength that corresponds to blue hits the eye, but the sensation of blueness that we ‘see’ — nobody knows how that arises. We can never be sure that the blue you see is exactly the same blue that somebody else sees, even if both of you aren’t colorblind. But we know it probably can differ based on that blue dress/white dress controversy that hit the net a couple of years ago. Anyway, I’ve started to stray.

  3. foiled64

    I would love to see Apple and Microsoft team up and work together. If they do, in regard to Google Duplex, I would like to see the two companies make a better, smarter system that includes verification, because it won’t be long before we see appointments scheduled days before they were supposed to be, or events conflicting with each other when someone doesn’t enter them in a calendar. My version of verification would be that Siri sends you a list of available dates during the call so all you have to do is tap/click or even say a date, then Siri would finish the phone call, compared to Google’s way of just scheduling the appointment. (Sorry, I can’t read the other comments right now.)

  4. wab95

    John:

    Great themes for thought and discussion this week. Let me address two of them, an Apple/MS alliance, and AI/emotions.

    A collaboration between Apple and MS has greater potential relevance than mere product development. Apple and MS were born of a common era, grew first as partners and then as rivals, and together survived an era that saw the birth of modern personal computing. Importantly, they are both not merely survivors, but architects and moulders of a storied and pivotal period of personal computing culture that has transitioned from its infancy of situational use cases (office or home and almost nothing in between) to one of everywhere access on devices ranging from PCs to ultraportables to wearables. These companies have fought digital culture wars not simply in the marketplace but in courts and before world legislative bodies, and have enjoyed both public victory and defeat, such that today they have become pillars of our digital landscape, providing us with a measure of structure and stability, as well as landmarks that provide bearings and orientation.

    There’s more. As founding members of our current computing age culture, they bring a measure of authoritative perspective, experience, legacy and continuity that few can rival and that none can either dispute or surpass, even more so if they opt to collaborate and speak with one voice.

    To be sure, the era that followed that of their birth is qualitatively different, and has seen the rise of two types of business that share a common business model. The one was search and the other was social media. What these two industries share is a business model based on barter in information and data; namely, we will provide you with the data or information you want, whether it is looking up a restaurant, or finding and sharing information with a friend or family member, if you provide us with information, specifically your data. This is barter. On the backend, those industries in turn sell those data, or make those data available for a share in any profits the users of those data make. There is a third industry type, which emerged during the dotcom bubble, namely online retail, of which Amazon is the unrivalled sovereign, and in which data are collected in order to better target sales. Both search engines and social media have adopted elements of this retail model by acquiring user data in order to market third party products to users. These business models and revenue sources are strikingly distinct from those of either Apple or MS.

    Despite that, both of these companies not only deal in user data, and therefore in its ownership and privacy, they also have a stake in how those data are handled by the industry writ large, and in the follow-on expectations by third parties about such data access. These issues all have a direct impact on both Apple and MS. Apart from any direct market benefits their collaboration would bring, this is another potential benefit that a closer partnership could garner: shaping the solutions concerning data collection and safety.

    The Secure Data Act is a welcome sign that legislators are taking user privacy and data security seriously, and may be amenable to extending those protections further to issues around ownership, and to giving users greater rights over the limits and duration of their data’s use. These are issues that affect the entire industry, including Apple and MS.

    Neil Savage’s article, ‘How Long Until a Robot Cries?’, strikes me as an example of seeking answers to the wrong question. To be clear, the questions of what emotions are, what role they play in our survival and in shaping human relationships and civilisation itself, and, if they are hardwired responses to stimuli, whether or not they can be programmed into AI, are all valid, but they miss the larger point, at least insofar as they extend to AI. That question is one of sentience. Whether or not one can programme situation-appropriate simulations of emotion is fundamentally no different from whether one can programme contextually appropriate verbal responses to queries, requests or general conversations with AI. Unless AI itself becomes sentient, these are all simulacra in a machine that mimics sentience, but is not sentient and therefore is not self-aware or alive. Rather, what is functionally important is that, as a tool in service to human need, and to extend human capability, AI should be responsive to human emotion, in addition to all of the other non-verbal sources of human communication whose reception and interpretation are essential in order to create contextually appropriate responses – responses that may be life-saving.

    Finally, in response to @geoduck’s and @aardman’s discussion, sentience or its absence in AI, or any other human tool, is irrelevant to the issue of abusive behaviour. There are numerous studies, including psychological clinical trials (involving no harm to real humans or anything else) that show the corrosive effects of abusive behaviour on everyone, including the abuser. Treating all living things with respect, and extending that respectful treatment to limited resources, including our food, clothing, housing, energy and all the components thereof is an important element of our socialisation into mature, responsible and productive humanity, and ultimately our happiness and sense of contentment, and should not be confined only to entities with which we personally identify and therefore like, or to those that can retaliate and hurt us in return. These behaviours and attitudes, virtues if you will, are essential elements of our coming of age, and are important and therefore worthy of pursuit and adoption in their own right.

    • geoduck

      What you said about the corrosive effects of abuse reminded me of an article I read a long time ago. A meat company in the Midwest decided to remodel their abattoirs. They had been dark, noisy horror shows, where the killing floors were covered with blood and such, and where terrified cows watched it all while waiting to be forcibly dragged to their fate. After remodelling, they were clean and well lit, the animals were held gently and moved along comfortably, and the killing was done in one space out of sight, and smell, of the other animals. Overall it was a lot less cruel, and the company found the quality of the meat improved. Interestingly enough, when this was done the company and city also noticed a marked reduction in domestic abuse among the employees.
      /OT

  5. John Martellaro

    You guys are the greatest. You all make awesome contributions to the discussion in Particle Debris each week. Thanks to all!
