OpenAI, the AI research institute cofounded by Elon Musk and Sam Altman, has built an AI text generator, GPT-2, that its own creators worry is too dangerous to release in full.
Jack Clark, policy director at OpenAI, says that example shows how technology like this might shake up the processes behind online disinformation or trolling, some of which already use some form of automation. “As costs of producing text fall, we may see behaviors of bad actors alter,” he says.
Based on the published examples, it seems plausible that this AI could pass a Turing Test.