Facebook has started rating users who report content based on their “trustworthiness” on the platform, according to the Washington Post. The newspaper spoke to Tessa Lyons, the Facebook product manager tasked with fighting “fake news,” who revealed the existence of the score.
The reputation score runs from zero to one, and appears primarily to rate users on whether they accurately report rule-breaking content or flag things simply because they dislike an individual (or media outlet).
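Facebook hasn’t published the formula, but a score like this could, in principle, be as simple as a smoothed “report accuracy” rate: the fraction of a user’s past reports that moderators upheld, nudged toward a neutral starting point for new users. The sketch below is purely hypothetical; the function name, the prior values, and the idea of moderator-upheld reports as the signal are all assumptions, not anything Facebook has confirmed.

```python
# Hypothetical sketch only: Facebook has not disclosed how the score works.
# One plausible approach is a smoothed report-accuracy rate, where a user's
# score reflects how often their past reports were upheld by moderators.

def reporter_trust(reports_upheld: int, reports_total: int,
                   prior_upheld: float = 1.0, prior_total: float = 2.0) -> float:
    """Return a trust score in [0, 1] using Laplace-style smoothing,
    so a brand-new user starts near 0.5 rather than at an extreme."""
    return (reports_upheld + prior_upheld) / (reports_total + prior_total)

print(reporter_trust(0, 0))    # 0.5  -- no reporting history yet
print(reporter_trust(9, 10))   # ~0.83 -- reports are usually valid
print(reporter_trust(1, 10))   # ~0.17 -- reports are mostly spurious
```

The smoothing prior matters: without it, a user’s very first report would swing their score to exactly 0 or 1, which is the kind of brittleness a real system would presumably avoid.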
However, there are clearly a lot of unknowns about the score, how it’s calculated, and what it’s used for. Per the report:
Users’ trustworthiness score between zero and one isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users.
We’re unlikely to get more detail about how the system works, according to the report, because the company worries that revealing too much would make its algorithm even easier to game, which is the opposite of what it wants.
With a trustworthiness score in place, it’s clear that Facebook is trying to use its own users as the solution to the problem of fake news. Rather than relying solely on the quantity and type of engagement on a post, Facebook is going some way toward measuring the quality of that engagement by measuring the users themselves.
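To make that idea concrete: if each report carries a weight based on the reporter’s trust score, a handful of trusted users could outweigh a pile-on of bad-faith flags. Again, this is an illustrative assumption, not Facebook’s confirmed method, and the function below is hypothetical.

```python
# Hypothetical illustration: weight flags by reporter trust, so a post's
# review priority reflects who reported it, not just how many people did.

def review_priority(reporter_scores: list[float]) -> float:
    """Sum of the reporters' trust scores: two highly trusted users can
    outweigh ten low-trust users brigading a post they merely dislike."""
    return sum(reporter_scores)

print(review_priority([0.9, 0.8]))   # 1.7 -- two trusted reporters
print(review_priority([0.1] * 10))   # 1.0 -- ten low-trust reporters
```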