More and more, computational devices are making suggestions, even decisions, for us. As the algorithms grow more sophisticated, human beings could start to lose the ability to evaluate and call into question those suggestions and decisions. Worse, machines have the potential to learn and self-improve much faster than humans can, leaving us even further behind. What happens next?
More and more, computers are making decisions for us. The good news is that in 2015, in most cases, the human being receiving the advice still has adequate tools to evaluate the quality of a computer's suggestions. How long will that last?
Recently, we've been reminded that the directions provided by a computational device aren't always properly questioned. For example, there have been stories about drivers who blindly follow turn-by-turn GPS directions and get into serious trouble: "8 drivers who blindly followed their GPS into disaster."
This general problem isn't going away.
Unable to decide what to watch, many TV viewers increasingly rely on suggestions from algorithms. In fact, the failure of traditional TV services to provide good suggestions is leaving them at a disadvantage. "Survey: Lack of Content Recommendation, À La Carte Options Help Drive Cord Cutting."
Our iPhones increasingly want to make proactive suggestions, presumably leaving our minds freer to concentrate on how to pay for all the services we use.
Smart homes with automation systems will squeeze every penny out of our utility bills. Will they, at some point, conflict with our body rhythms or human needs? Will homeowners simply give up questioning the recommendations?
Autonomous (self-driving) cars will, someday, take over the task of getting us where we want to go. What if we don't know where to go for dinner? Could it be that the car will also optimize by insisting on taking us to the nearest, highest rated, best health-department-scored, least expensive Italian restaurant? Gotta always be optimizing.
Research into Superintelligence suggests that super-learning machines could soon produce results, analyses, and decisions that human beings no longer have any insight into. Does following the suggestions of such a Superintelligence, one on an ever-steepening learning curve, lead to the abrogation of our own social responsibilities? Could human beings, aided by even the rudimentary algorithms in our future smartphones, start to make decisions so divorced from causal understanding that the results are puzzling, yet spectacularly successful? By whose standards?
A few of these ideas are discussed on page 2 below, where some interesting news debris starts to paint a picture. Human beings are, more and more, guided by computer decision making. At what point do we no longer understand why we do certain things?
Next page: the tech news debris for the week of October 26.