After acknowledging its rampant fake news problem and promising some sort of solution back in November, Facebook today announced its actual plan of action. The social network will work closely with fact-checking organizations to verify stories shared by its users, while labeling potentially false stories and limiting their spread.
In a lengthy blog post, Adam Mosseri, Facebook’s News Feed VP, explained how the company is going to handle the flood of shared links in order to weed out those that lead to fake articles. “We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully,” Mosseri writes. “We’ve focused our efforts on the worst of the worst, on the clear hoaxes spread by spammers for their own gain, and on engaging both our community and third party organizations.”
To that end, Facebook is changing the way its user-reporting system works, adding the ability to specifically flag a news story as fake. Those reports will then be handled by Facebook, which is partnering with a number of news organizations to help verify whether a story is indeed false. The organizations Facebook chose are signatories of the International Fact Checking Code of Principles, and include names like Snopes, ABC News, and PolitiFact.
If, after being investigated, a story appears to be false, Facebook will flag it with a new status it calls “Disputed.” Disputed stories will carry a large red flag notification that says “Disputed by 3rd Party Fact-Checkers,” will appear lower in the News Feed, and can no longer be featured as an ad or otherwise promoted.
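To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a “Disputed” flag could feed into ranking and ad eligibility. The names (Story, feed_score, DISPUTED_PENALTY) and the demotion factor are assumptions for illustration only; Facebook has not published how its actual systems handle this.

```python
from dataclasses import dataclass

@dataclass
class Story:
    url: str
    base_score: float        # hypothetical engagement-based ranking score
    disputed: bool = False   # set once third-party fact-checkers flag the story

# Assumed demotion factor; the article only says disputed stories rank lower,
# not by how much.
DISPUTED_PENALTY = 0.5

def feed_score(story: Story) -> float:
    """Rank disputed stories lower than they would otherwise appear."""
    return story.base_score * (DISPUTED_PENALTY if story.disputed else 1.0)

def can_promote(story: Story) -> bool:
    """Disputed stories can no longer be featured as ads or promoted."""
    return not story.disputed

if __name__ == "__main__":
    stories = [
        Story("https://example.com/verified-report", base_score=0.80),
        Story("https://example.com/hoax", base_score=0.95, disputed=True),
    ]
    for s in sorted(stories, key=feed_score, reverse=True):
        print(f"{s.url}: score={feed_score(s):.2f}, promotable={can_promote(s)}")
```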
It’s a meaningful step towards correcting the problem, though its reliance on users to flag fake news — when the users are the ones sharing, commenting, and believing the fake news to begin with — might be a bit shortsighted. We’ll have to see how it pans out in the coming weeks.