New Apple Strategy: Partner with Microsoft For Future Battles

The battlefield amongst the tech giants is constantly shifting. Each is innovating while looking for weaknesses in its competitors and seizing new ground. A formal Apple partnership with Microsoft would change the balance of power.

Satya Nadella introduced as new Microsoft CEO, February 2014. Image credit: Hit Refresh

Several articles this week suggest that the atmosphere and leadership at Microsoft are ripe for an Apple business opportunity.

The immediate take on these articles is that Microsoft’s vision, driven by CEO Satya Nadella, is now much broader than mere Windows. From the third article:

…of course Windows isn’t going away — but he [Nadella] also wanted to explain his latest buzzwordy vision for the future of Microsoft: AI, Intelligent Cloud, and Intelligent Edge.

That’s because, of course, the OS in front of us has always been the customary interface to the world. Today, internet speeds, mobility, the cloud, and AI technology have become dominant factors. It makes sense to think more broadly about how a local OS interfaces outside itself. That’s why we’ve seen Microsoft lean more towards interoperability and away from Windows-first, Windows-only.

Remember, one of the first things Satya Nadella did was to release Microsoft Office for iPad. That was something Steve Ballmer held back.

Similar Visions: Apple & Microsoft

More broadly, Apple and Microsoft complement each other. The pre-iPhone PC wars are over. Apple and Microsoft are like-minded when it comes to the privacy and security of their customers. Each has a common heritage in the PC/Mac world. Microsoft’s Windows phone failed, but the company can benefit by making its pervasive Windows OS work better with other companies’ smartphones.

Amazon’s Alexa is poised to seize control of the home automation and AI market, something that could hasten the obsolescence of the traditional OSes. Family service robots are on Amazon’s (and probably Google’s) radar.

Apple could use help with Siri, robotics and business cloud expertise. Microsoft could use help with the adoption of a more pervasive, widely accepted, encrypted communication service like Apple’s Messages. And mobility in general.

While Apple and Microsoft will never merge, working more closely together on certain new projects could shift the balance of power in the high tech world in their favor. This never could have happened under Microsoft’s Steve Ballmer. The current Microsoft CEO is smart, self-confident and open to new ideas that go beyond the old, limited Windows-centric vision.

The first article above goes into some detail about the new overture Microsoft has made to Apple. Sooner or later, one of these companies (Amazon, Apple, Google, or Microsoft) is going to figure out how to join forces with a competitor with whom it has a lot in common and, then, better secure the battlefield. It’s only a matter of time. Apple and Microsoft, working together in new, smart ways, would be a force to be reckoned with.

Next Page: The News Debris for the week of May 7th. Getting very creeped out.

13 thoughts on “New Apple Strategy: Partner with Microsoft For Future Battles”

  • John:

    Great themes for thought and discussion this week. Let me address two of them, an Apple/MS alliance, and AI/emotions.

    A collaboration between Apple and MS has greater potential relevance than mere product development. Apple and MS were born of a common era, grew first as partners, then as rivals, and together survived an era that saw the birth of modern personal computing. Importantly, they are not merely survivors, but architects and moulders of a storied and pivotal period of personal computing culture that has transitioned from its infancy of situational use case (office or home and almost nothing in between) to one of everywhere access on devices ranging from PCs to ultraportables to wearables. These companies have fought digital culture wars not simply in the marketplace but in courts and before world legislative bodies, and have enjoyed both public victory and defeat, such that today they have become pillars of our digital landscape, providing us with a measure of structure and stability, as well as landmarks that provide bearings and orientation.

    There’s more. As founding members of our current computing age culture, they bring a measure of authoritative perspective, experience, legacy and continuity that few can rival and that none can either dispute or surpass, even more so if they opt to collaborate and speak with one voice.

    To be sure, the era that followed that of their birth is qualitatively different, and has seen the rise of two types of business that share a common business model. The one was search and the other was social media. What these two industries share is a business model based on barter in information and data; namely, we will provide you with the data or information you want, whether it is looking up a restaurant, or finding and sharing information with a friend or family member, if you provide us with information, specifically your data. This is barter. On the backend, those industries in turn sell those data, or make those data available for a share in any profits the users of those data make. There is a third industry type, which emerged during the dotcom bubble, namely online retail, of which Amazon is the unrivalled sovereign, and in which data are collected in order to better target sales. Both search engines and social media have adopted elements of this retail model by acquiring user data in order to market third party products to users. These business models and revenue sources are strikingly distinct from those of either Apple or MS.

    Despite that, both of these companies not only deal in user data, and therefore its ownership and privacy; they also have a stake in how those data are handled by the industry writ large, and in the follow-on expectations by third parties about such data access. These issues all have a direct impact on both Apple and MS. Apart from any direct market benefits their collaboration would bring, a closer partnership could also help shape industry solutions concerning data collection and safety.

    The Secure Data Act is a welcome sign that legislators are taking user privacy and data security seriously, and may be amenable to extending those protections further to issues around ownership, and to giving users greater rights over the limits and duration of their data’s use. These are issues that affect the entire industry, including Apple and MS.

    Neil Savage’s article, ‘How long until a robot cries’, strikes me as an example of seeking answers to the wrong question. To be clear, the questions of what emotions are, what role they play in our survival and in shaping human relationships and civilisation itself, and, if they are hardwired responses to stimuli, whether or not they can be programmed into AI, are all valid, but they miss the larger point, at least insofar as they extend to AI. That question is one of sentience. Whether or not one can programme situation-appropriate simulations of emotion is fundamentally no different from the question of contextually appropriate verbal responses to queries, requests or general conversations with AI. Unless AI itself becomes sentient, these are all simulacra in a machine that mimics sentience, but is not sentient and therefore is not self aware or alive. Rather, what is functionally important is that, as a tool in service to human need, and to extend human capability, AI should be responsive to human emotion, in addition to all of the other non-verbal sources of human communication whose reception and interpretation are essential in order to create contextually appropriate responses – responses that may be life saving.

    Finally, in response to @geoduck’s and @aardman’s discussion, sentience or its absence in AI, or any other human tool, is irrelevant to the issue of abusive behaviour. There are numerous studies, including psychological clinical trials (involving no harm to real humans or anything else) that show the corrosive effects of abusive behaviour on everyone, including the abuser. Treating all living things with respect, and extending that respectful treatment to limited resources, including our food, clothing, housing, energy and all the components thereof is an important element of our socialisation into mature, responsible and productive humanity, and ultimately our happiness and sense of contentment, and should not be confined only to entities with which we personally identify and therefore like, or to those that can retaliate and hurt us in return. These behaviours and attitudes, virtues if you will, are essential elements of our coming of age, and are important and therefore worthy of pursuit and adoption in their own right.

    1. What you said about the corrosive effects of abuse reminded me of an article I read a long time ago. A meat company in the midwest decided to remodel their abattoirs. They had been dark, noisy horror shows, where the killing floors were covered with blood and such, and where terrified cows watched it all while awaiting being forcibly dragged to their fate. After remodelling they were clean and well lit, and the animals were held gently and moved along comfortably. The killing was done in one space out of sight, and smell, from the other animals. Overall it was a lot less cruel, and the company found the quality of the meat improved. Interestingly enough, when this was done, the company and city also noticed a marked reduction in domestic abuse among the employees.

  • I would love to see Apple and Microsoft team up and work together. If they do, with regard to Google Duplex, I would like to see the two companies make a better, smarter system that includes verification, because it won’t be long before we see appointments scheduled days before they were supposed to be, or events conflicting with each other when someone doesn’t enter them in a calendar. My version of verification would be that Siri sends you a list of available dates during the call, so all you have to do is tap/click or even say a date; then Siri would finish the phone call, compared to Google’s way of just scheduling the appointment. (Sorry, I can’t read the other comments right now.)
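    The confirm-before-booking idea above can be sketched as a tiny two-step flow: the assistant pauses the call, surfaces the slots the business actually offered, and only books the one the user picks. This is a minimal illustrative sketch; every name here (`AssistantCall`, `propose_slots`, `confirm_booking`) is invented, and no real Siri or Duplex API is implied.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AssistantCall:
        """One in-progress scheduling call, holding the slots the business offered."""
        available_slots: list
        booked: str | None = None

        def propose_slots(self):
            # Pause the call and surface the offered slots for the user to pick from.
            return list(self.available_slots)

        def confirm_booking(self, choice):
            # Complete the booking only for a slot the user was actually shown,
            # rejecting anything else -- this is the "verification" step.
            if choice in self.available_slots:
                self.booked = choice
                return True
            return False

    call = AssistantCall(available_slots=["Tue 10:00", "Wed 14:30"])
    options = call.propose_slots()          # user sees these and taps one
    assert call.confirm_booking(options[1])         # accepted: it was offered
    assert not call.confirm_booking("Fri 09:00")    # rejected: never offered
    ```

    The design point is simply that the human stays in the loop between discovery and commitment, which is what distinguishes this from Duplex's book-it-directly flow as described in the comment.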

  • Should robots/androids have emotions? I read the cited article and felt so strongly about it, I had to comment, which for convenience I’m copying below. Sorry if it’s a little long:

    The qualia problem. Qualia is the conscious, subjective ability to feel sensation, to feel ‘what it’s like’ to experience something. I think emotion is ultimately based on feeling pain and pleasure, both of the psychic and physical (physiological?) varieties. Can machines ever have qualia? Can they ever feel pain and pleasure? Maybe once emotion detection is perfected, robots can be programmed to relate and communicate more ‘authentically’ with humans. But there is no true empathy there, only the simulation thereof.

    There is no true emotion without qualia, and this is a problem when it comes to robotics. We have people amongst us who have very high intellects and feel no or very little emotion. We call them sociopaths. In fact the really intelligent sociopaths are able to simulate the outward indicators of emotion, especially empathy, making them very good at manipulating people to bend to their will. And since they feel no pain, especially that type of psychic pain called guilt, sociopaths are also basically amoral because isn’t feeling pain the root motivator of moral and ethical formation?

    So when it comes down to it, this is what this emotion-detecting and emotion-simulating robot project is all about: building highly intelligent sociopaths. Maybe that’s okay if all we want them to do is say, stand in front of Disneyworld and greet visitors as they enter, engaging them in small talk and making them feel welcome and excited. But the talk is all about putting robots in tasks where they will be required to make decisions that involve making moral and ethical choices: driving a car through traffic (the famous ‘whose lives get saved in an unavoidable crash?’ question), taking care of the elderly, and other highly complex situations that can result in harm to people (and animals) if the wrong decisions are made.

    I am amazed that all the boffins in the article who talk about building ‘emotional robots’ don’t ever mention qualia. Without qualia, all they are really talking about is the machine simulation of emotion, not the real thing. And as I said, that might be a serious problem if robots are put in situations where idiosyncratic on-the-fly moral choices are called for.

    Oh, and the ‘ethical treatment of robots’? Without qualia, they’re still just machines no different from your toaster. Just because scientists and engineers in the future are able to simulate emotions and consciousness, that doesn’t raise them to the same moral plane as humans, or animals even.

    1. Ha ha, that last line above. Let me rephrase:

      Just because scientists and engineers in the future are able to build machines that simulate emotions and consciousness, that doesn’t raise those machines to the same moral plane as humans, or animals even.

      Although the original wording also is something worth thinking about, eh?

    2. FWIW my dad was a mechanic. We were taught to not abuse machines. That it was ethically wrong to deliberately inflict harm, be it on another person, or an animal, or the engine in your car. Slamming the door was wrong not because of the noise but because it was hard on the door and the house. So to me at least I have trouble with the “no different than your toaster” part. I would no more abuse a toaster than I would my cat.

      I saw an interesting experiment on the web. It was a box on the beach. On top was a red button. As someone approached, the box would greet them and explain the whole experiment. It would then converse with them. If the person hit the red button, it would cut power and turn the box off. However, as the person got closer to the button, the box would plead for its life. When the person was farther away, it would explain that it was just a machine, it felt no pain, it had no life. It could not die. I don’t know the source of this video. I just thought it was an interesting experiment. And no, I would not hit the button, not because of the pleading, I’d just find it too interesting to converse with the AI.

      The qualia question is interesting. Because on a philosophical level, who is to say we all aren’t just complex systems programmed to feel empathy, emotion, and pain? As an actor I’ve pondered this many times. In a play once, my character had to hate another character so much that he shot him. I had to simulate the emotion authentically enough that it was indistinguishable from the real thing. The other actor and I got along great backstage, but in that scene I had to feel rage, and he had to feel like the smug, sarcastic jackass who didn’t think my character had the guts to shoot. When you start having to simulate powerful emotions like this, authentically enough that the audience believes them for that scene, you start to wonder about all emotion. Maybe while we really feel the emotion, it is just an evolved, programmed process. In a real sense, what is emotion?

      1. Please don’t jump to the conclusion that when I say machines lie on a lower moral plane than humans or animals, that means I’m declaring open season for abusing and destroying machines. There is no argument there. To me destroying or even abusing a machine that is perfectly useful and beneficial (not just operational, but useful) is unethical. Destroying it is a waste of resources and abusing it is an affront to the people who worked hard to design and build the machine. And even with non-serviceable machines any person who derives pleasure from bashing it to bits is a little disturbed.

        I actually agree that on some philosophical level humans can be seen as complex systems programmed (by evolution and experiential learning) to feel empathy, emotion, and pain. But that’s the thing; we actually **feel** empathy, emotion, and pain. Machines don’t. And it’s quite a stretch of the imagination to imagine how they might plausibly do so in the future. Now there are some people who think that if you build a computer with enough transistors then consciousness will somehow emerge as an inevitable outcome of complexity. (They can’t really explain how it might happen so it’s more like naively extrapolating to silicon the observed correlations of brain size and intelligence in the animal kingdom.)

        In a real sense, what is emotion? There are technical definitions for emotion that are pretty good at least for defining what scientists are trying to learn about through experimentation, but you are right, at the bottom of it all, it’s hard to pin it down to the extent that you can pin down what a femur is. But that’s the whole problem with qualia, it’s why psychologists, neuroscientists, philosophers of mind, evolutionary biologists etc. are all over the place about it and its significance. A neuroscientist can explain with some detail what happens in the eyes and the brain when light of a wavelength that corresponds to blue hits the eye, but the sensation of blueness that we ‘see’ — nobody knows how that arises. We can never be sure that the blue you see is exactly the same blue that somebody else sees, even if both of you aren’t colorblind. But we know it probably can differ based on that blue dress/white dress controversy that hit the net a couple of years ago. Anyway, I’ve started to stray.

  • In the (now long forgotten) series seaQuest DSV I remember a line about how “Apple buys Microsoft and…” It was a throwaway line the writers included because this was the bad old days of Apple at $12/share, and declining sales. The Pippin years. I’d find a good deal of satisfaction in Microsoft and Apple teaming up.

    The Google AI that sounds so human is interesting. From the BBC article:
    “The demo was called “horrifying” by Zeynep Tufekci, an associate professor at the University of North Carolina who regularly comments on the ways technology and society impact on each other. In a tweet, Prof Tufekci said the idea of mimicking human speech so realistically was “horrible and obviously wrong. This is straight up, deliberate deception. Not okay.” Prof Tufekci said she was surprised that the WaveNet project got as far as a public demonstration and wondered why it had not been quashed internally during development.”

    My first thought was how stupid a response this was. The technology exists. Someone is going to develop it. Someone is going to use it. I’d rather it be Google than some foreign power. We’ve just experienced an election influenced by foreign-made bots on social media sites. At least we know about these voice bots. We can react to them, learn to be on our guard. I’d rather any technology potentially this disruptive be out in the open. The bad guys will try to use it secretly.

    1. I agree that you cannot un-know what is already known, so no point in complaining that Google developed this technology. But what really is the point of a machine that simulates the conversational style of a real human being other than to deceive people, perhaps not overtly if people are informed that they are talking to a machine, but subliminally through subconscious emotional manipulation? I shudder to think about the number of people who will be scammed using this technology. Is the landscape of the future one that requires hyper-vigilance against fake human voices, fake photos, fake video, and all other sorts of maliciously deceptive modes of communication?

      1. But take this technology out of the world of robocalls. This technology would go a long way toward making androids that could interact with humans conversationally. Even before a Commander Data, it would be nice if Siri or Alexa were something you could chat with. Hold a conversation with. There would be no deceit. You’d know you were talking to Siri, but Siri could interact more ‘normally’ than it does now. I’m very impressed with the technology. As Wernher von Braun said, “Science is like a knife. Give it to a surgeon or a murderer and each will use it differently.” It is up to us to make sure the technology is used ethically.
