The Republican Standard

Facebook Now Rating User Trustworthiness, But Won’t Tell Users If They Are Trusted


Public perception of the trustworthiness of tech giant Facebook has waned over the past year as the company collects enormous amounts of user data to sell, censors conservative news, promulgates an aura of “Big Brother is watching you,” and sent its CEO, Mark Zuckerberg, to testify before lawmakers in Washington, an appearance met with more infamy than acclaim. Rather than working to repair its own reputation with the public, Facebook has shifted that burden onto its users, creating a new “trustworthiness scale” that assigns each user a value meant to determine their reputation on the platform.

Developed over the past year, Facebook’s new scale is an effort to combat “fake news” on its platform, in part by measuring the credibility of users to help identify malicious actors. The company, like others in Silicon Valley and across the social media industry, has relied on its users to report problematic content, including misinformation. Because Facebook leans so heavily on user policing, however, some users falsely report items as “untrue,” skewing the very system meant to sort what is “true” on the platform.

Tessa Lyons, the product manager in charge of fighting misinformation, said in an interview with The Washington Post that it is “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.”

Therefore, Facebook is now assigning users a score on a scale from zero to one intended to gauge their credibility.

“Users’ trustworthiness score between zero and one isn’t meant to be an absolute indicator of a person’s credibility,” Lyons said, “nor is there a single unified reputation score that users are assigned.”

She explained that the score is just “one measurement” among “thousands of new behavioral clues” the tech giant takes into account when assessing the risk of disinformation. Facebook also monitors which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users in the aggregate.

A few years ago, as part of enhancing its algorithms, Facebook gave users the ability to report posts they believe to be false, as well as to flag content as pornography, violence, unauthorized sales, hate speech, and more. Previously, users could only block such content; now Facebook uses those reports to supplement its moderation, since it does not have enough human editors to comb through the hundreds of millions of posts made on its platform daily.

There is one glaring issue with the new reputation system, however: no one knows what criteria are used to calculate a user’s score, and users cannot see their own scores. The company is also extremely wary of discussing the system because, as Lyons said, Facebook is concerned that doing so would mean “tipping off bad actors.”

The one thing Lyons explained, according to the report, was that “One of the signals we use is how people interact with articles.” She added, “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”

This would allegedly change a user’s trustworthiness score.
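For illustration only, here is a minimal sketch in Python of how a feedback-weighting scheme like the one Lyons describes could work. None of this is Facebook’s actual code or data model; the class names, the zero-to-one weighting, and the smoothing are assumptions invented for the example.

```python
# Illustrative sketch only -- not Facebook's actual algorithm.
# It mirrors the idea Lyons describes: weight a user's "false news" reports
# more heavily when their past reports have agreed with fact-checkers.

from dataclasses import dataclass


@dataclass
class Reporter:
    confirmed_reports: int = 0   # reports later confirmed false by a fact-checker
    total_reports: int = 0       # all false-news reports the user has filed

    def weight(self) -> float:
        """Hypothetical trust weight between 0 and 1.

        A new user starts near a neutral 0.5 and drifts toward their
        historical accuracy as they build a track record.
        """
        # Laplace-style smoothing so one report doesn't define the score
        return (self.confirmed_reports + 0.5) / (self.total_reports + 1)


def flag_score(reporters: list[Reporter]) -> float:
    """Aggregate weighted 'this is false' signal for a single article."""
    return sum(r.weight() for r in reporters)


# Example: a historically accurate reporter counts for far more than
# someone who flags articles indiscriminately.
accurate = Reporter(confirmed_reports=9, total_reports=10)
indiscriminate = Reporter(confirmed_reports=1, total_reports=10)
print(round(accurate.weight(), 2), round(indiscriminate.weight(), 2))  # 0.86 0.14
```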

The paradox of this previously unheard-of measure is that Facebook cannot tell users how it judges them: if it did, users could learn to game the system, and the algorithms built to judge them would have to be rebuilt in secret all over again.

Surely the phrase “If I told you, I’d have to kill you” comes into play here.
