Several things have become clear regarding AIs in our lives. There is little regulation. AIs can be manipulated in clever ways. Small devices like Google Home and Amazon Echo have very indirect business models so that they can be priced for the middle class, but have hidden drawbacks. I wonder where all this will lead with family service robots if Apple doesn’t step in and do it right.
More and more, I wish Apple would jump in, set a new, higher standard, and use its financial muscle and reputation to do it right, just as Apple engineers may be doing with the Apple Watch and blood glucose measurement.
Thankfully, Apple has joined the Partnership on AI, but I am skeptical that such an organization can rein in the worst instincts of some companies outside the organization, or even within it.
Case in Point
As these small, AI-driven devices and their mobile siblings like Siri and Cortana become more sophisticated, and start to become independently mobile as family service robots (or dangerous dolls), they'll also fall prey to shenanigans that create new headaches for consumers.
First of all, companies are very good at extolling the benefits of these robots, but with that sophistication comes a new onus on the consumer to be astute about their use.
My example here is a recent trick Burger King played on Google Home users. You can read about it here: “Burger King advert sabotaged on Wikipedia.” It backfired badly, and it makes me wonder how anyone at Burger King thought it would be a good idea. It was a mild form of what’s called cross-device tracking, a practice that in its full form is much more malicious.
Basically, AI and internet technologies today are so sophisticated that if something can be done to make money, it will be done, either by thoughtless corporations or by the bad guys.
I know that companies building small, friendly family service robots are cognizant of the need for the robot not to be physically imposing. They have big eyes and remind you of a kitten, not a Transformer. After all, combine a physically strong robot with extrapolated hacking techniques, and a large, muscular family service robot could be turned into a lethal weapon.
I don’t have a lot of confidence that there will be strict government regulation of these devices, no matter how invasive they become or how susceptible they are to hacks. And so each of us will make a cautious trade between convenience and security risk, just as we do now with Amazon’s Alexa.
A New Beginning
AIs have gotten off on the wrong foot, just a bit, by being tied to a network. The good news is that a networked AI can be driven by servers far more powerful than anything that can be packed into a small robot. That dependence on the network means huge potential gains in capability despite the limited local hardware, but it also means opportunities for dirty tricks. And that even extends to the users!
Right now the evidence suggests that if a dorky, invasive technical stunt can be pulled off to make a few bucks, it will be. Of course, we don’t see the many cases in which corporations exercised restraint, but my feeling is that, sitting behind their consoles, corporate data center managers see opportunities everywhere to exploit consumers. The stand-off distance is great enough that people appear as a collective mass rather than as individual families.
This is why I hope Apple will eventually get into this business and set a very high bar, just as it has with iPhone encryption and privacy. Apple doesn’t need to sell you a burger, and it has better ways to sell you a movie than having its robot lecture you.
Once your family service robot figures out how to take control of your home entertainment system or grocery list, all bets are off. Fortunately, today, you can still pick it up, dash its brains out, and throw it in the trash.