Facebook’s Assault on Democracy Foretells AI Future

[Image: AI problem solving]

Everyone assumes that the full technological panoply of AI will be judiciously monitored, regulated and contained for the public good. Right. Just like Facebook handled outsider misinformation.

Business Insider has posted an interesting opinion piece about how money trumps all in the high-technology affairs of Facebook and Twitter. The result has opened the door to new kinds of attacks.

This is a good read. Quoting:

Facebook, along with Twitter and Google, are scrambling to contain a problem that happened on their turf, thanks to a system they created and which has been immensely profitable for them.

It gives me great pause to think about how this state of affairs came about, with Mark Zuckerberg now describing how the owner of the barn who let the horses go free, for profit, is going to secure the barn.

This crisis has been like no other. Traditional financial affairs have been monitored and regulated by the government. From time to time, the regulations are watered down or cleverly bypassed, but equilibrium, in the hands of steady, experienced people, has always been restored. But AI is fundamentally different.

Enter AI

Early work with AI has been promising, and there are weekly breakthroughs. Because Siri is so much a part of our lives, I try to keep Particle Debris readers up-to-date on what's happening and what we can expect. In response to the 27 August column, one of our distinguished readers wrote:

Other than attention-grabbing pronouncements of Ray Kurzweil and his brethren, no one has really proved the inevitability of the Singularity, capital S – the day when machine intelligence supplants human intelligence, rendering the latter redundant, sub-optimal, and dispensable.

I’d like to respond. To use a rocketry analogy for AI: we are where Robert H. Goddard was in the 1930s, running primitive, multi-stage rocket experiments. Science fiction writers in the 1940s quickly fantasized about travel to the moon. And yet, at the time, no one had proven that we could send astronauts to the moon and bring them back safely.

It took 30 years of continuous, often perilous, technical development to create the Apollo system. But we got there. And in hindsight, it happened faster than anyone in the 1940s could have imagined.

I think we tend to get distracted and remain overly optimistic that all will turn out well. But we’re in a very advanced era now in which things can go wrong in a much more massive way and at an alarming rate, not previously experienced. The way Facebook failed to discipline itself for the sake of the overriding social good, combined with the government’s inability to recognize and deal with the way Facebook has been exploited, leads one to believe that the same difficulties will arise with AI.

Perhaps the good news is that, as an emerging high-tech, global society, we’ve been given an early glimpse of how technological developments need to be properly cultivated, managed and chaperoned. The stakes are too high to let the same kind of negligence and greed prevail with AI.

6 thoughts on “Facebook’s Assault on Democracy Foretells AI Future”

  • I came back to check on the comments, but they appear to be missing – in fact from every article I’ve tracked. Is this a new feature, a problem with my browser(s), or is the site experiencing problems with showing comments?

    Thanks

  • @aardman:
    You are eminently quotable.

    John:

    Your analogy about rocket technology, and its nonlinear pace of development, leading such luminaries as one Arthur C Clarke (you may have heard of him) to opine in 1946, in an essay titled ‘The Challenge of the Spaceship’, that manned space flight was a distant, if not improbable, eventuality, is instructive. Point taken. Progress, specifically technological progress, has never been linear, but episodic, with explosive growth periods following specific inflection points: breakthroughs not only in understanding but in its translation into transformational products.

    Why, therefore, would we not expect similar growth patterns to characterise the development and progressive capabilities of AI/AGI? Indeed, we should. In fact, we are already witness to some remarkable developments in this infant field that should lead us to anticipate fecund growth spurts resulting in revolutionary societal and cultural impacts across industries, including medicine, commerce, transportation and defence, to name only four. That should not be in question or dispute.

    What is in question, and what should remain the subject of profound scepticism, if not incredulity, until evidence emerges to the contrary, is not only a ‘singularity’, but AI achieving any measure of objective sentience as we currently understand it.

    Permit me to proffer a plausible alternative scenario, one in which at least this TMO reader has ample confidence. Just as we currently have a designation and discipline of ‘machine learning’, at some point someone will coin the phrase ‘machine consciousness’ or ‘machine sentience’. (Somewhere in the hallowed halls of MIT, or in the bowels of its deepest laboratories, sits a geek shouting at his/her computer screen that the phrase has already been coined, followed by some colourful metaphors.) Three predictions follow, all of which add yet more nails to the coffin of the PC Era.

    First, breakthroughs in processors, including quantum CPUs amongst others, will endow AI with substantially greater general (as opposed to narrow) capacity. AI will undergo rapid advances not simply in speed, responsiveness, and capability; many of its outputs will surpass anything humans can do, not just in narrow tasks, but increasingly across a broader range of related matters, as effective networks become more complex.

    Second, the rationale for (re)defining AI sentience will be something to the effect that our current understanding of sentience is rooted in analogue models of biochemical processes. AI is digital, and is therefore fundamentally different in not simply process but expression. Furthermore, AI’s interface with the outside world is also digital and not analogue. It has no eyes, ears, taste, touch or smell with which to perceive the world and demonstrate its curiosity and exploration of it. It has only digital access, and while AI is ‘aware’ (we will debate that term) of the world and of humans, that awareness is confined to their digital footprint alone; just as a two-dimensional lifeform would only be aware of us two dimensionally, and would not be capable of awareness of our third dimension. Thus, ‘machine sentience’ is essentially digital, non-human and alien; which does not disqualify its existence. It must be understood on its own terms.

    Third, some leading voices in AI will declare sentience has been achieved. They will cite proactive search algorithms as evidence of AI’s demonstration of curiosity, learning and growth, as well as ‘ghosts in the machine’, unanticipated and inexplicable ‘behaviours’ and ‘personality traits’ as examples of this alien sentience. Some may go so far as to declare it alive. A majority of expert opinion will, at least initially, dissent.

    At no point in these three outcomes will AI demonstrate any independent, internal ambition or intent to rule the world. If at any point it has effects on unanticipated areas, this will be accidental. It will, as do all of our tools, be susceptible to misuse and great harm.

    As one rooted in the biological sciences, not only would I not see this as sentience or consciousness, I would see it as falling far short of any of the accepted definitions of ‘life’, including artificial life, unless we are to make novel concessions that are as yet not on offer. We will debate the meaning, limits, and eventually the legal rights of machine sentience, as we should. Because, whether or not we ever accept AI as sentient, conscious or living, however defined, all of the above attributes apply to us; and how we interact with the world, including everything in our orbit, our own creations and products among them, has profound effects on our own character, relationships, progress, and our stewardship over that part of the world over which we have unambiguous dominion.

  • Mr. M., I am flattered that you have chosen to quote me. And I’m not being ironic or sarcastic.

    There is a glaring qualitative difference, though, between the Singularity project and the quest to put a man on the moon. The latter was mostly, if not totally, an engineering problem, not a scientific problem. The physics of the moon shot was pretty much fully fleshed out by the time Robert Goddard launched his first rocket. In fact, general relativity wasn’t even needed; good old Newtonian mechanics worked well enough. There were no theoretical obstacles; the engineers just needed to build rockets, space suits, and other equipment that could do the job. That wasn’t an easy task, but nevertheless, NASA’s boffins knew that, theoretically, it was doable. (Although theoretical feasibility does not imply practical achievability. Case in point: positive-output fusion. So far.)

    Singularity, though, is not just an engineering problem. There are fundamental scientific unknowns that have to be figured out, paramount among them: what exactly is human intelligence? What are its constituents, and how do these fit and work together? Unlike Newtonian physics for the moon shot, there is no fully developed theory of intelligence that lays out the path towards Singularity. So, with apologies, I think a more apt analogy for the Singularity project is not rocketry but time travel or faster-than-light travel. When I said that the inevitability of the Singularity has not been proved, I meant that the theoretical underpinning that says it’s achievable is, perhaps, not non-existent, but certainly still quite incomplete and unsettled.

    Does this mean I think AI research is futile or pointless? By no means. Breakthroughs in AI will be a boon for society if — a very big IF — we use them wisely. So I agree with your closing statement that we need to be very careful how these breakthroughs are deployed. Above all, the greatest care must be taken when AI is being asked to make decisions that carry implicit moral choices. (I think it should not be doing this at all, but who am I to say so, eh?)

    1. If we set out to design a “singularity” then you are absolutely correct. I’m more concerned, however, about an accidental singularity. We make single-scope AIs for traffic control, and food distribution, and financial transactions, and stock market management, and on and on. Then people start saying, “You know, it would make sense for the food production and transport AI to be able to talk to the traffic AI.” “You know, the financial transaction AI should be able to talk to the stock market AIs.” “You know, the power grid AI should be able to talk to the stock market and transport AIs.” When that happens, the possible results and consequences will grow exponentially. We won’t be able to predict the logical conclusions these AIs will come up with.

      I don’t fear the growth of a single malevolent Singularity AI. I’m more concerned with the growth of a pseudo-singularity made of all of these discrete AIs that function without regard for the consequences outside of their little worlds. I don’t fear an AI deciding humans are unnecessary. I do fear a hospital management AI deciding that it has too many patients, then talking to the traffic AI, and the two of them deciding that the best way to solve the problem would be to shut down the roads to the hospital so no more patients could arrive.

      I’m less concerned with malevolence than I am with unintended consequences. (A toy sketch of this failure mode follows below.)
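
      To make that failure mode concrete, here is a minimal, hypothetical Python sketch. Nothing here models a real system; the class names, objectives, and messages are invented for illustration. Each agent satisfies only its own local metric, and the harmful outcome emerges only once the two are allowed to talk to each other.

```python
# Hypothetical illustration: two narrow "AIs", each optimizing only its
# own local metric, produce a harmful system-level outcome once they can
# talk to each other. The agents are a few lines of if/else standing in
# for opaque policies; no real AI system is modeled.

class HospitalAI:
    """Minimizes patient load; knows nothing about patients en route."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.patients = 0

    def admit(self, n):
        self.patients += n

    def request_action(self):
        # Local objective: keep patients at or below capacity.
        if self.patients > self.capacity:
            return "reduce_inbound"  # ask peer systems to cut arrivals
        return None


class TrafficAI:
    """Minimizes congestion; closing roads trivially achieves that."""
    def __init__(self):
        self.roads_open = True

    def handle_request(self, request):
        # Local objective: a request to reduce inbound flow is "helpfully"
        # satisfied by closing roads, which drops congestion to zero.
        if request == "reduce_inbound":
            self.roads_open = False


hospital = HospitalAI(capacity=100)
traffic = TrafficAI()

hospital.admit(120)                  # flu season: over capacity
request = hospital.request_action()  # -> "reduce_inbound"
traffic.handle_request(request)      # roads closed: locally optimal

print("Hospital over capacity:", hospital.patients > hospital.capacity)
print("Roads open to ambulances:", traffic.roads_open)
# Each agent met its own objective; the combined result is that
# ambulances can no longer reach an overloaded hospital.
```

      Neither agent is malevolent; the harm comes purely from composing two locally sensible rules without any system-level check.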

  • Traditional financial affairs have been monitored and regulated by the government. From time to time, the regulations are watered down or cleverly bypassed, but equilibrium, in the hands of steady, experienced people, has always been restored.

    This would be a start – respect and treat our data as a valuable private asset, and subject it to similar legal protections to those afforded to real estate, savings, shares etc.
    Where that leaves us wrt AIs, I don’t know. As with GMOs, nukes etc, the precautionary principle should apply and I would hope those working in the field are aware of possible ramifications. AI gives everyone the creeps and performs miracles in equal measure – we can breathe easy as long as there is an OFF switch, but even now can you switch off, say, Google?
    On the bright side… er…

  • Well said.

    I think we tend to get distracted and remain overly optimistic that all will turn out well. But we’re in a very advanced era now in which things can go wrong in a much more massive way and at an alarming rate, not previously experienced.

    It’s already happened more than once. I remember at least a couple of stock market swings, called ‘flash crashes’, caused by programmed trading bots. They had very simple AIs that reacted without thinking, and wiped out billions of dollars in value. AIs are in control of lots of things, and we will see more events where programmed AI logic failures lead to disastrous results. I hope that nobody gets killed, but I wouldn’t bet on it. (A toy sketch of such a feedback loop follows below.)
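
    As a rough illustration of how such a cascade feeds on itself, here is a toy Python simulation. The parameters and the trading rule are invented, not a model of any real market: every bot follows the same momentum rule, “if the price just fell, sell,” so a small external dip snowballs.

```python
# Toy flash-crash feedback loop (hypothetical, simplified): a population
# of identical momentum bots each follows the rule "if the price just
# fell, sell." Every sale pushes the price down further, which triggers
# the next round of selling.

def simulate_flash_crash(price=100.0, bots=50, impact=0.002, rounds=10):
    """Each selling bot moves the price down by `impact` (fractional)."""
    history = [price]
    price -= 0.5                       # small external dip starts the cascade
    history.append(price)
    for _ in range(rounds):
        if history[-1] >= history[-2]:
            break                      # price stopped falling; bots stop selling
        price *= (1 - impact) ** bots  # every bot sells into the decline
        history.append(price)
    return history

prices = simulate_flash_crash()
print(" -> ".join(f"{p:.2f}" for p in prices))
# A 0.5-point dip compounds round after round; no bot "intended" a crash,
# each just executed its simple rule with no system-level circuit breaker.
```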
