2019 is undoubtedly going to be a big year in AI. The discussion over fake news will continue too. Sean Gourley, CEO of machine intelligence company Primer, wrote in Wired that while progress in AI is being made, at the moment humans, not algorithms, need to lead the fight against fake news. I know from my own research into fake news how important a role bots play in the spread of disinformation. Unfortunately, the technology is not yet discerning enough to be relied upon to separate fact from fiction. AI has not been able to fight back. It may be able to one day, but until then, it is down to us humans.
One of the reasons that computational propaganda has been so successful is that the naïve, popularity-based filtering systems employed by today’s leading social networks have proven to be fragile and susceptible to targeted fake-information attacks. To solve this problem, we will need to design algorithms that amplify our intelligence when we’re interacting together in large groups. The good news is that the latest research into such systems looks promising.
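To see why raw popularity makes a ranker fragile, consider a minimal sketch (not any network’s actual algorithm, and all post data here is invented): a feed scored purely by engagement counts has no notion of source credibility, so a burst of bot interactions is enough to push a fabricated story to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    likes: int

def rank_by_popularity(posts):
    # Score = shares + likes; nothing about who produced the post
    # or whether the engagement came from humans.
    return sorted(posts, key=lambda p: p.shares + p.likes, reverse=True)

feed = [
    Post("Reported story from a newsroom", shares=120, likes=340),
    Post("Fabricated story amplified by bots", shares=90, likes=150),
]

# A bot farm inflates engagement on the fake story...
feed[1].shares += 5000
feed[1].likes += 12000

top = rank_by_popularity(feed)[0]
print(top.title)  # the fabricated story now leads the feed
```

The point of the sketch is that the attack surface is the scoring function itself: any signal that can be cheaply manufactured at scale (shares, likes) is exactly the signal a bot network will manufacture.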
The session was wide-ranging, but included accusations that Facebook has upended democratic institutions, led by “frat bot billionaires from California.”
It works by storing a unique digital fingerprint for every photo found on these trusted sites. It also saves a signature of every photo you see while you browse with the extension installed.
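The “digital fingerprint” idea described above is commonly implemented with perceptual hashing. Here is a hypothetical sketch (not the extension’s actual code) of an average hash: an image is reduced to a compact bit string that survives resizing and recompression, so a photo seen while browsing can be compared against fingerprints stored for trusted sites.

```python
def average_hash(pixels):
    """pixels: 2-D list of grayscale values (0-255); real implementations
    first downscale the image to a small fixed grid, e.g. 8x8."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the image's average -> 1, else 0.
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(a, b):
    # Number of differing bits; small distance = likely the same photo.
    return sum(x != y for x, y in zip(a, b))

# Toy 2x2 "images" for illustration only.
trusted = average_hash([[10, 200], [30, 220]])
seen    = average_hash([[12, 198], [28, 223]])   # same photo, re-encoded
other   = average_hash([[200, 10], [220, 30]])   # a different image

print(hamming(trusted, seen))   # 0 -> matches a trusted fingerprint
print(hamming(trusted, other))  # 4 -> no match
```

Because the hash depends only on each pixel’s brightness relative to the image average, small encoding changes leave the fingerprint intact, while a genuinely different image produces a distant one.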
Tim Cook continues to raise his profile, tweeting more publicity photos of his travels, including a visit to a Normandy war cemetery. Bryan and Jeff reexamine the idea that Mr. Cook may be considering political office. They also talk about Dow Jones’s brief flirtation with publishing fake news about Apple, and how Apple has changed the way on/off buttons work in iOS.
The news sent shares of $AAPL up as trading bots reacted to the news without being able to tell it was fake.
I know it’s coming. I know it’s unavoidable. But that doesn’t keep me from being terrified of this inevitable future in which fake things are indistinguishable from reality. Adobe’s VoCo technology is already in testing, and that’s scary enough, but now University of Washington researchers have demonstrated the ability to match speech to a generated video. In the demonstration video, they took real speech from former president Barack Obama and matched it to artificially generated video of him speaking those same words. It’s easy to see this tech being used to match falsified speech to falsified video. And while some aspects of UW’s artificially generated video look fake, this is a demonstration, not a finished product. Within a few years, the ability to perfectly fake video and speech together will be available on our smartphones. The end result will be ever-greater cynicism: a reluctance to believe anything you see. It’s inevitable, scary, and the technology is impressive as all heck. It will also be a huge test of democracy. Not only can anyone be made to appear to say something they didn’t, anyone could also deny saying something they really did say by claiming to be a victim of this technology. The Atlantic has a good story with much more information on the university project.
InVID’s Chrome plugin is the front end to a sophisticated backend that sifts through metadata, information from the videos themselves, and related social media activity, which a journalist (or anyone) could then use to determine whether a video is “fake.”
The proliferation of “fake news” has been blamed in part on social media companies’ hands-off approach to curation. Charlotte Henry argues this is one area where social media can take its cues from Apple and its heavily curated approach to Apple News.