Writing for The New Yorker, Ted Chiang argues that the concept of a technological singularity, in which computers or AI would be able to make themselves ever smarter, rests on something like an ontological argument: an a priori claim rather than an empirical one. In other words, it probably won't happen.

How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved.

Check It Out: The Singularity: Can Computers Make Themselves Smarter?

2 Comments

  1. wab95


    Just a quick comment, and pardon the repetition, but as you will see below, the repetition is relevant to the topic. It is how we learn.

The laws of physics are both inviolable and immutable. We have found no exceptions at any level of discrete or aggregate matter or energy, in either inanimate or animate entities.

The proponents of the ‘singularity’, despite their inconsistent and at times contradictory models for the emergence of super-intelligence or consciousness, posit a novel event: an inflection point at which an explosion in intelligence (however defined; see Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence”) or consciousness is achieved, outperforming and ultimately displacing human intelligence, with follow-on subordination or subjugation of the latter optional.

    Ted Chiang correctly positions much of this argument as ontological.

However, when we look at the organic emergence of intelligence, specifically human accomplishment, and more importantly, when we contextualise human intelligence and accomplishment as part of the natural evolution of life and intelligence, we observe neither an ontological nor even a phylogenetic model, but a recursive, iterative model we describe as evolutionary: it begins with individuals, proceeds to cooperation between intelligent individuals, and culminates in a repository of civilisation built through the tool of recorded language across time. Super-intelligence is thus gradual and cooperative, and requires the capacity to record, store, and augment that knowledge over time.

Single-celled organisms establish viability and perfect three tasks: homeostasis, eating, and procreation. These lead to ever more complex life that achieves greater mastery of tasks, such as new models of evasion and/or capture, much of which appears genetically hardwired. Eventually we reach a level of complexity that requires a prolonged ‘childhood’, and hence parenting, i.e. teaching and mentoring, in order to transmit even more complex knowledge through individual learning. All of this occurs within a complex and interactive collection of individual life, divided by species and specialisation, that in turn defines an ecosystem, driving complexity between species towards greater fitness, including intelligence. Finally we arrive at human life, which has that higher order of complexity at the individual level plus two essential tools: 1) unambiguous language, which permits the transmission of information in real time, and 2) written language, a more recent tool, that permits transmission of language across time.

Only through a slow process of evolved complexity do individual entities achieve the requisite level of intelligence, beyond which individuals are no longer the primary drivers of complexity; that crown is taken by their product, civilisation, the result of spoken and, more importantly, written language, which aggregates and transmits knowledge, and therefore complexity, across time.

The growth of intelligence is interactive, iterative, and evolutionary, and it plateaus at the individual level, at which point it must become cooperative. Beyond that plateau, it is taken to greater complexity only through the trans-generational aggregation of knowledge, communicated and further aggregated through the medium of written language across generations.

This is the only empirical model that we have. If the laws of physics permitted another model, particularly one even more efficient and competitive, in a word more ‘fit’, as posited by the proponents of the ‘singularity’, then we would have seen it, and it would already have organically overtaken and supplanted the slow march of human dominance.

The claim that humankind will violate that model and create a more efficient path to artificial intelligence, whether in an individual ‘computer’ that recursively learns until it becomes ‘super-smart’ or in a loose affiliation of computer hardware and software on the internet that gloms into a coordinated and efficient ‘super-brain’, lacks both precedent and plausibility, and therefore remains confined to fantasy.

    If the singularity is to move beyond the realm of fantasy, then let us see either a testable hypothesis, or better still, a demonstrated proof of concept.

    • wab95

      Just wanted to correct a typo:

      ‘…2) written language, a more recent tool that permits transmission of information and knowledge across time’.
