Twitter Tests Fake News Warning System

Twitter is testing a fake news warning system on its platform. Bright labels will appear under tweets with misinformation.

Twitter confirmed that the leaked demo, which was accessible on a publicly available site, is one possible version of a new misinformation policy it plans to roll out on March 5.

In this version, disinformation or misleading information posted by public figures would be corrected directly beneath the tweet by fact-checkers and journalists who are verified on the platform, and possibly by other users participating in a new “community reports” feature, which the demo describes as “like Wikipedia.”

I could see “community reports” being abused by Twitter trolls mass-reporting anything they disagree with as fake news. Hopefully Twitter builds safeguards against that.

Only 44% of People Correctly Spotted Fake News on Facebook

In a small study (n=80), undergraduate students were fitted with wireless electroencephalography (EEG) headsets, then asked to read political news headlines, presented as they would appear in a Facebook feed, and assess their credibility. The students overwhelmingly rated headlines that aligned with their own political beliefs as true.

“We all believe that we are better than the average person at detecting fake news, but that’s simply not possible,” said lead author Patricia Moravec, assistant professor of information, risk and operations management. “The environment of social media and our own biases make us all much worse than we think.”

New Tool Credder Will Rate News Media Credibility

A startup called Credder wants to offer a rating system like Rotten Tomatoes, but for news publications. The hope is to give people a way to check the credibility of a particular website and to rate it themselves.

Credder tackles this with reviews from both journalists and regular readers. These reviews are aggregated into an overall credibility score (or rather, scores, since the journalist and reader ratings are calculated separately). So when you encounter an article from an unfamiliar publication, you can check its scores on Credder to get a sense of how credible it is.
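
Credder hasn’t published its scoring formula, so as a purely hypothetical sketch of the dual-track idea (the data model and the plain averaging below are my assumptions, not Credder’s actual method), keeping the two reviewer pools separate might look like this in Python:

    # Hypothetical sketch of dual-track review aggregation; not Credder's real formula.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Review:
        reviewer_type: str  # "journalist" or "reader"
        credible: bool      # did this reviewer judge the article credible?

    def credibility_scores(reviews):
        """Aggregate reviews into separate journalist and reader scores (0-100)."""
        scores = {}
        for group in ("journalist", "reader"):
            verdicts = [r.credible for r in reviews if r.reviewer_type == group]
            # No reviews from a group yet -> no score, rather than a misleading 0.
            scores[group] = round(100 * mean(verdicts), 1) if verdicts else None
        return scores

    reviews = [Review("journalist", True), Review("journalist", False),
               Review("reader", True), Review("reader", True), Review("reader", False)]
    print(credibility_scores(reviews))  # {'journalist': 50.0, 'reader': 66.7}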

Sounds like a good idea to me.

The 11 People Trying to Fight Fake News in the Indian Election

The Indian election is the world’s largest democratic exercise. There is, unsurprisingly, some concern that it could be undermined by fake news. Bloomberg News met Boom Live, an 11-strong team of fact-checkers and one of the seven firms working on Facebook’s fact-checking effort in India.

Based on the early tallies, more than 60 percent of India’s 900 million eligible voters are expected to cast ballots between now and May 19, as the center-left Congress Party tries to seize power from the right-wing Bharatiya Janata Party. As in other elections around the world, paid hacks and party zealots are churning out propaganda on Facebook and the company’s WhatsApp messenger, along with Twitter, YouTube, TikTok, and other ubiquitous communication channels. Together with Facebook’s automated filters, Boom’s 11 fact-checkers and its similar-size fellow contractors are the front line of the social network’s shield against this sludge.

It Is Still Down to Humans to Fight Fake News

2019 is undoubtedly going to be a big year in AI, and the discussion over fake news will continue too. Sean Gourley, CEO of machine intelligence company Primer, wrote in Wired that while progress in AI is being made, at the moment humans, not algorithms, need to lead the fight against fake news. I know from my own research into fake news how important a role bots play in the spread of disinformation. Unfortunately, the technology is not yet discerning enough to be relied upon to separate fact from fiction, so AI has not been able to fight back. It may be able to one day, but until then, it is down to us humans.

One of the reasons that computational propaganda has been so successful is that the naïve, popularity-based filtering systems employed by today’s leading social networks have proven to be fragile and susceptible to targeted fake information attacks. To solve this problem, we will need to design algorithms that amplify our intelligence when we’re interacting together in large groups. The good news is that the latest research into such systems looks promising.
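
To make that fragility concrete, here is a toy sketch, entirely my own construction and not any platform’s actual ranking code: a purely popularity-based signal counts every engagement equally, so a coordinated bot farm easily outranks organic activity, while weighting each engagement by an estimated per-account credibility blunts the attack (estimating that credibility well is, of course, the hard part):

    # Toy model of why popularity-only ranking is fragile; not any platform's real algorithm.

    def popularity_score(engagements):
        """Naive ranking signal: every like/share counts the same."""
        return len(engagements)

    def weighted_score(engagements, account_credibility):
        """Same signal, but each engagement is weighted by an estimated
        per-account credibility in [0, 1]; unknown accounts get 0.5."""
        return sum(account_credibility.get(acct, 0.5) for acct in engagements)

    # 30 organic engagements vs. 200 engagements from a bot farm.
    organic = [f"user{i}" for i in range(30)]
    botnet = [f"bot{i}" for i in range(200)]

    cred = {acct: 0.9 for acct in organic}        # established accounts
    cred.update({acct: 0.02 for acct in botnet})  # throwaway accounts

    print(popularity_score(organic), popularity_score(botnet))          # 30 200 -> bots win
    print(weighted_score(organic, cred), weighted_score(botnet, cred))  # ~27 ~4 -> bots lose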


Dow Jones Fake News, Cook’s Barnstorm Tour, iOS 11 Control Center - ACM 432

Tim Cook continues to raise his profile, tweeting more publicity photos of his visits to lots of places, including a Normandy war cemetery. Bryan and Jeff reexamine the idea that Mr. Cook may be thinking of political office. They also talk about Dow Jones’s brief flirtation with publishing fake news about Apple, and how Apple has changed the way on/off buttons work in iOS.

Want to Be Terrified? Watch This University Demo That Fakes a President Speaking

I know it’s coming. I know it’s unavoidable. But that doesn’t keep me from being terrified of this inevitable future when fake things are indistinguishable from reality. Adobe has its VoCo speech-editing technology in testing, and that’s scary enough, but now University of Washington researchers have demonstrated the ability to match speech to a generated video. In the demonstration video, they used real speech from former president Barack Obama and matched it to artificially generated video of him speaking those same words. It’s easy to see this tech being used to match falsified speech to falsified video. And while some aspects of UW’s artificially generated video look fake, this is a demonstration, not a finished product. Within a few years, the ability to perfectly fake video and speech together will be available on our smartphones.

The end result will be ever-greater cynicism, a refusal to believe anything you see. It’s inevitable, scary, and the technology is impressive as all heck. It will also be a huge test of democracy. Not only can anyone be made to appear to say something they didn’t, anyone could also deny saying something they really did say, claiming to be the victim of this technology. The Atlantic has a good story with a lot more information on the university project.

InVID Says It Can Help Detect 'Fake News' Videos

InVID’s Chrome plugin is the front end to a sophisticated backend that sifts through metadata, information from the videos themselves, and social media information that a journalist—or anyone—could then use to determine whether a video is “fake.”
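
The article doesn’t detail how the backend works, but one signal it names, the video’s own metadata, is easy to inspect yourself. Here is a minimal sketch, assuming you have ffprobe (part of FFmpeg) installed; it is my own illustration of the kind of check such tools automate, not InVID’s actual code, and the file name is hypothetical:

    # Illustration only: pull container/stream metadata from a video with ffprobe,
    # one of the signal types a tool like InVID examines. Not InVID's real code.
    import json
    import subprocess

    def video_metadata(path):
        """Return ffprobe's JSON view of a video's format and streams."""
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    meta = video_metadata("suspect_clip.mp4")  # hypothetical file name
    fmt = meta["format"]
    # Missing creation times, odd encoder tags, or mismatched durations are
    # hints (not proof) that a clip has been re-encoded or re-uploaded.
    print(fmt.get("format_name"), fmt.get("duration"),
          fmt.get("tags", {}).get("creation_time"))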