The Future iPhone: Making Suggestions We No Longer Understand

Particle Debris

More and more, computational devices are making suggestions for us, even making decisions for us. As the algorithms grow more sophisticated, human beings could start to lose the ability to evaluate and question those suggestions and decisions. Worse, machines have the potential to learn and self-improve much faster than humans, leaving us even further behind. What happens next?


More and more, computers are making decisions for us. The good news is that, in 2015, the person being advised usually still has adequate tools to evaluate the quality of a computer's suggestions. How long will that last?

Recently, we've been reminded that the directions provided by a computational device aren't always properly questioned. For example, there have been stories about drivers who blindly followed their GPS's turn-by-turn directions and got into big trouble: "8 drivers who blindly followed their GPS into disaster."

This general problem isn't going away.

Unable to decide what to watch, many TV viewers increasingly rely on suggestions from algorithms. In fact, the failure of traditional TV services to provide good recommendations is putting them at a disadvantage: "Survey: Lack of Content Recommendation, À La Carte Options Help Drive Cord Cutting."

Our iPhones increasingly want to make proactive suggestions, presumably leaving our minds freer to concentrate on how to pay for all the services we use.

Smart homes with automation systems will squeeze every penny out of our utility bills. Will they, at some point, interfere with our body rhythms or conflict with our human needs? Will homeowners just give up questioning the recommendations?

Autonomous (self-driving) cars will, someday, take over the task of getting us where we want to go. What if we don't know where to go for dinner? Could it be that the car will also optimize by insisting on taking us to the nearest, highest-rated, best health-department-scored, least expensive Italian restaurant? Gotta always be optimizing.

Research into Superintelligence suggests that super learning machines could soon produce results, analyses, and decisions that human beings no longer have any insight into. Does following the suggestions of such a Superintelligence, one on an ever-increasing learning curve, lead to the abrogation of our own social responsibilities? Could human beings, with the help of even the rudimentary algorithms in our future smartphones, start to make decisions so divorced from causal understanding that the results are puzzling, yet spectacularly successful? By whose standards?

A few of these ideas are explored on page 2 below: some interesting news debris that starts to paint a picture. Human beings are, more and more, guided by computer decision making. At what point do we no longer understand why we do certain things?

Next page: the tech news debris for the week of October 26.


If there’s any question about how computer-driven decision making can go awry, you need look no further than the stock market. All large funds use computers to drive trades based on preset rules and algorithms. Starting with the crash of ‘87, right up through the Flash Crash of 2010 and on to today, these programs have caused huge swings and made swings from other causes worse. Sometimes deliberately, but more often accidentally, they have cost investors millions.
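To make that concrete, here is a minimal, hypothetical Python sketch (not any real fund's code) of the kind of preset rule the comment describes: each simulated fund sells automatically once the price falls a fixed percentage below its peak, and each forced sale nudges the price lower, tripping the next fund's rule in turn.

```python
def simulate_cascade(peak, price, thresholds, impact_per_sale=0.02):
    """Apply each fund's preset sell rule in turn; each sale depresses the price.

    thresholds: drop levels at which a fund sells, e.g. 0.03 = "sell at 3% off the peak".
    impact_per_sale: assumed price impact of each forced sale (a toy number).
    """
    history = [price]
    for threshold in sorted(thresholds):
        drop = (peak - price) / peak
        if drop >= threshold:                # rule fires automatically, no human judgment
            price *= (1 - impact_per_sale)   # the forced sale itself moves the market
            history.append(round(price, 2))
    return history

# A modest external dip of 4% trips the tightest rule; that sale deepens the
# drop and trips the next rule, and so on down the line.
print(simulate_cascade(peak=100.0, price=96.0, thresholds=[0.03, 0.05, 0.07, 0.09]))
# -> [96.0, 94.08, 92.2, 90.35, 88.55]
```

The point of the sketch is only that simple, individually reasonable rules can interact to amplify a move none of them caused.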

It’s easy to program a computer. It’s HARD to program a computer to behave itself in the real world. The real world is messy, and there’s always an exception they didn’t think of.


That’s the thing: ANY non-linear data is difficult to quantify beyond a certain point. It can suggest trends, but there will always be too many disparate, individuated factors for statistical information to be truly representative; nothing in the natural world behaves mathematically.

So far as superintelligence goes, alas, intelligence isn’t the only factor that informs a person’s thinking. It would be wise for us not to abdicate our responsibility in that respect, too. Living in the world is not strictly a computational exercise (something Silicon Valley has never fully grasped, especially the current generation), and human beings will always have the edge in that respect. None of this even broaches the topic of ethics, or the literally millions of factors that make ethics operationally cohesive for people, again something highly individuated. To an extent, computers always *will* be toys, in the sense that behaviorally they will always be a pale imitation of something real; there’s just no way around the inherent limitations of logic.
