By Ese Olumhense
A ‘careful approach’ to fake news
As part of its ongoing effort to curb the spread of misleading or completely fabricated news articles on its platform, Facebook launched a tool Friday to flag links shared from fake news sites, cautioning readers that the material shared has been disputed by non-partisan fact-checking sites.
Though the feature isn’t yet available to everyone, according to the social media giant’s Help page, it’s the latest step in its war on fake news.
Facebook incurred the wrath of users frustrated by the many hoax news stories surrounding the 2016 election. Bending to that pressure, the company announced a series of initiatives in late 2016 to deal with its fake news problem.
“We believe in giving people a voice and that we cannot become arbiters of truth ourselves, so we’re approaching this problem carefully,” said Adam Mosseri, Facebook’s VP of Product for News Feed, in a December blog post.
As part of this careful approach, Facebook says it will work with independent fact-checkers to identify fake news stories, which will then be flagged. Flagged posts are deprioritized in news feeds, and a user who tries to share a flagged story sees a warning cautioning that the story has been disputed. Flagged stories cannot be promoted or turned into advertisements.
It’s unclear whether the mechanism outlined in December is the one now in place, or whether other features have been added since.
How lies and exaggerations spread on Facebook
Though Facebook isn’t a news site, 66 percent of its users rely on the platform to access news, a 2016 study found, up from 47 percent in 2013. Given that massive reliance on the social network for news, it became a lightning rod during the 2016 election.
But it soon emerged that some of the “news” appearing in Facebook feeds was misleading, or flat-out fake. Seeing an opportunity to capitalize on interest in the presidential election, predatory publishers drove significant traffic to their sites with fake articles on everything from Democratic candidate Hillary Clinton’s supposed ill health to rumors that now-President Donald Trump’s tax returns had leaked. At times, the misinformation campaigns bordered on dangerous, as fake stories teasing civil war or threatening riots if a particular candidate won or lost gained more and more traction.
After the election, some journalists blamed Facebook for Trump’s victory, claiming that its lucrative advertising prospects helped malicious actors sway popular opinion, even when those actors lived outside the United States.
Fight over fake news continues
Fake news did not stop after Trump’s historic upset. In fact, it became a major talking point for Americans on both sides of the political spectrum, with the label weaponized to discredit and delegitimize news pieces that don’t adhere to either side’s agenda.
While Facebook’s latest effort is welcomed by some news consumers, others are skeptical, believing that the company’s actions amount to arbitrary and unjustifiable censorship.
“Who are these people that will be deciding what is relevant and what is not to the largest social media site in the world?” asked Mickey White, a conservative commentator and critic, in December. “The source of information for over half the country. We don’t know that they have any qualifications outside of their own individual bias.”
Facebook has enlisted fact-checking organizations like PolitiFact and Snopes to help monitor stories flagged as fake. The sites are part of a network of fact-checking organizations coordinated by the Poynter Institute. Members of the group must apply and be vetted by a team at Poynter, and must agree to a set of principles including transparency and nonpartisanship.