Writing for The New Yorker, Ted Chiang argues that the concept of a technological singularity, in which computers or AI would make themselves ever smarter, resembles an ontological argument. In other words, he thinks it probably won't happen.

How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved.

Check It Out: The Singularity: Can Computers Make Themselves Smarter?
