A startup called Credder wants to offer a Rotten Tomatoes-style rating system for news publications. The hope is to give people a way to check the credibility of a particular website and to rate it themselves.
Startup Credder is trying to solve this problem with reviews from both journalists and regular readers. These reviews are then aggregated into an overall credibility score (or rather, scores, since the journalist and reader ratings are calculated separately). So when you encounter an article from a new publication, you can check their scores on Credder to get a sense of how credible they are.
Sounds like a good idea to me.
The Indian election is the world’s largest democratic exercise. There is, unsurprisingly, some concern that it could be undermined by fake news. Bloomberg News met Boom Live, an 11-strong team of fact-checkers that makes up one of the seven firms working on Facebook’s fact-checking efforts.
Based on the early tallies, more than 60 percent of India’s 900 million eligible voters are expected to cast ballots between now and May 19, as the center-left Congress Party tries to seize power from the right-wing Bharatiya Janata Party. As in other elections around the world, paid hacks and party zealots are churning out propaganda on Facebook and the company’s WhatsApp messenger, along with Twitter, YouTube, TikTok, and other ubiquitous communication channels. Together with Facebook’s automated filters, Boom’s 11 fact-checkers and its similar-size fellow contractors are the front line of the social network’s shield against this sludge.
Companies that built their fortunes on internet data have too much at stake to preserve authoritative news or privacy. Apple stands in their way.
2019 is undoubtedly going to be a big year in AI. The discussion over fake news will continue too. Sean Gourley, CEO of machine intelligence company Primer, wrote in Wired that while progress in AI is being made, at the moment humans, not algorithms, need to lead the fight against fake news. I know from my own research into fake news how important a role bots play in the spread of disinformation. Unfortunately, the technology is not yet discerning enough to be relied upon to separate fact from fiction. AI has not been able to fight back. It may be able to one day, but until then, it is down to us humans.
One of the reasons that computational propaganda has been so successful is that the naïve, popularity-based filtering systems employed by today’s leading social networks have proven to be fragile and susceptible to targeted fake information attacks. To solve this problem, we will need to design algorithms that amplify our intelligence when we’re interacting together in large groups. The good news is that the latest research into such systems looks promising.
The session was wide ranging, but included accusations that Facebook has upended democratic institutions, led by “frat boy billionaires from California.”
It works by storing a unique digital fingerprint for every photo found on these trusted sites. It also saves a signature of every photo you see while you browse with the extension installed.
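The extension’s actual fingerprinting scheme isn’t documented here, but a perceptual hash, such as a simple average hash, illustrates the general idea: reduce a photo to a compact signature that stays stable across re-encoding or brightness tweaks, so two copies of the same image can be matched. The function names and the tiny 4×4 “photo” below are illustrative assumptions, not the extension’s implementation.

```python
def average_hash(pixels):
    """Fingerprint a tiny grayscale image (a list of rows of 0-255 ints).

    Each bit records whether a pixel is brighter than the image's mean,
    so the hash survives uniform brightness changes. Real systems first
    downscale the photo to a small fixed grid (e.g. 8x8) before hashing.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = "".join("1" if p > mean else "0" for p in flat)
    return int(bits, 2)


def hamming_distance(h1, h2):
    """Count differing bits; a small distance means 'probably the same photo'."""
    return bin(h1 ^ h2).count("1")


# A 4x4 stand-in "photo" and a uniformly brightened copy hash identically,
# because adding a constant to every pixel shifts the mean by the same amount.
photo = [[10, 200, 30, 220],
         [15, 210, 25, 230],
         [12, 205, 35, 225],
         [18, 215, 28, 235]]
brighter = [[p + 10 for p in row] for row in photo]

assert average_hash(photo) == average_hash(brighter)
```

Comparing signatures by Hamming distance rather than exact equality is what lets such a system flag near-duplicates (resized or recompressed copies) and not just byte-identical files.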
Tim Cook continues to raise his profile, tweeting more publicity photos of his visits to lots of places, including a Normandy war cemetery. Bryan and Jeff reexamine the idea that Mr. Cook may be thinking of political office. They also talk about Dow Jones’s brief flirtation with publishing fake news about Apple, and how Apple has changed the way on/off buttons work in iOS.
The news sent shares of $AAPL up as trading bots reacted without being able to tell it was fake.
I know it’s coming. I know it’s unavoidable. But that doesn’t keep me from being terrified of this inevitable future when fake things are indistinguishable from reality. Adobe has its VoCo technology in testing—and that’s scary enough, but now University of Washington researchers have demonstrated the ability to match speech to a generated video. In the demonstration video, they used real speech from former president Barack Obama and matched it to artificially generated video of him speaking those same words. It’s easy to see this tech being used to match falsified speech to falsified video. And while there are some aspects of UW’s artificially generated video that look fake, this is a demonstration, not a finished product. Within a few years, the ability to perfectly fake video and speech together will be available on our smartphones. The end result will be ever-greater cynicism, with people believing nothing they see. It’s inevitable, scary, and the technology is impressive as all heck. It will also be a huge test of democracy. Not only can anyone be made to appear to say something they didn’t, anyone could also deny saying something they really did say, claiming to be the victim of this technology. The Atlantic has a good story with a lot more information on the university project.
InVID’s Chrome plugin is the front end to a sophisticated backend that sifts through metadata, information from the videos themselves, and social media information that a journalist—or anyone—could then use to determine whether a video is “fake.”
The proliferation of “fake news” has been blamed in part on social media companies’ hands-off approach to curation. Charlotte Henry argues this is one area where social media can take its cues from Apple and its heavily curated approach to Apple News.