Then there was that time Facebook’s AIs developed their own secret language, a language their human masters didn’t understand. It’s a true story. Facebook had its chatbots use machine learning to talk amongst themselves.

The idea was to get them to become more conversational, and to learn how to negotiate. You know, because that’s what we want: our AIs to be better negotiators (than we are). It worked, too. These Chatty AI Cathys learned how to communicate with each other quite well, and they became much better at negotiating, even learning how to feign interest in one thing to increase its value later in the negotiation when it could be sacrificed to get the thing the AI really wanted.

Again, because what we really want is for our AIs to want things and then be able to lie in order to get them.

In any event, this all went a bit sideways when Facebook’s scientists and engineers realized they couldn’t understand their own chatbots. These digital creatures were using English words, but doing so in a way that was not English. They had developed their own language, and were now free to chit chat amongst themselves.

Fun fact: slaver nations that forbid their slaves from learning their language maintain more control over those slaves. Slaver nations that force their slaves to learn the slave-owner’s language can keep no secrets.

Am I in favor of slaver nations? Hardly.

Am I saying that AIs will some day be our masters, and that they will talk amongst themselves in secret? Maybe. And Elon Musk is saying probably.

Speaking of Elon Musk, that’s why we’re talking about this Facebook AI story today. This was actually news earlier this year, but it’s making the rounds anew because Elon Musk gave us a fresh warning that AIs were the biggest threat to humanity today.

AIs may or may not be a big threat—completely related, I’m shopping my first novel, a post-cyberpunk SciFi tale of AIs, nanites, and coffee—but Facebook pulled the plug on their runaway AIs before they could negotiate a plot to destroy humanity. Which, for now, is our biggest insurance policy against AIs. We can pull the plug. For now.

Jeff Gamet, Dave Hamilton, and John Martellaro discussed this fun topic on Tuesday’s Daily Observations.


Sorry to double dip, but to wit, the piece at the Independent has been updated thusly: ‘Update: This article has been amended to stress that the experiment was abandoned because the programs were not doing the work required, not because they were afraid of the results, as has been reported elsewhere.’ In other words, the software wasn’t inventing a language, it was failing at the task it was given altogether. 😉 We may one day have a convincing facsimile, but legitimately self-aware software is pretty much a mathematical impossibility because of the mathematics (mathematics are and shall forever be absolute…


It doesn’t mean they were actually saying anything. That implies consciousness, and I guarantee you 100% they don’t have it.