The buzzword that seems to be on everybody’s mind these days is fake news. The term is used by news professionals, politicians, and ordinary people to describe all sorts of news stories they find questionable. At the center of the storm is Facebook, a site commonly accused of facilitating the spread of fake news and harming political discourse in America and abroad. Facebook understands this and has been trying to combat it; recently it released a new tool in its fight against fake news.
Combating fake news is a tricky business for a site like Facebook. The site wasn’t designed to be a news page like the homepages of CNN or FOX News, but given Facebook’s dominance in social media, it makes sense that it has become a major source of news for its 1.86 billion users.
All of these users are free to share whatever claims or links they want, and Facebook has limited resources to counteract false information that might spread. Still, Facebook is trying to do what it can, and its newest step is allowing links to be flagged as “disputed.”
Facebook has relied on community policing efforts in the past. Users have always been able to tag posts that violate community policies, such as those regarding graphic photos. The new tool expands that power to include flagging stories as “disputed.” A single flag doesn’t automatically tag a story, but widespread flagging can lead to a bright red warning appearing below links that have been widely disputed and found problematic.
The idea is simple enough, but implementing it is a tricky balancing act. In America’s politically charged environment, just about every story is likely to be disputed by someone. Left to run wild, the tool might end up tagging every link, leading to confusion and chaos. Facebook understands this, which is why it has researchers on staff who examine regularly flagged stories to determine whether they are genuinely in dispute.
It’s a combination of automation and hands-on editorial control that seeks to make the most of Facebook’s limited resources. Facebook is certainly a wealthy and powerful company, but given the sheer volume of data posted every minute, it can’t afford to manually monitor more than a tiny fraction of all updates on the site.
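The workflow the article describes — users flag links, heavy flagging escalates a link to human review, and a fact-checker verdict gates the warning — can be illustrated with a minimal sketch. This is purely hypothetical: the class, method names, and threshold are illustrative assumptions, not Facebook’s actual system or API.

```python
# Hypothetical sketch of a "disputed"-flag pipeline: automation counts user
# flags, and a fact-checker verdict is required before a warning is shown.
# All names and the threshold are illustrative assumptions.
from collections import Counter


class DisputeTracker:
    def __init__(self, threshold=100):
        self.threshold = threshold   # assumed cutoff for escalating to review
        self.flags = Counter()       # link URL -> number of user flags
        self.verdicts = {}           # link URL -> fact-checker finding (bool)

    def flag(self, url):
        """A user flags a link; return True if it now needs human review."""
        self.flags[url] += 1
        return self.flags[url] >= self.threshold and url not in self.verdicts

    def record_verdict(self, url, disputed):
        """A fact checker reviews an escalated link and records a verdict."""
        self.verdicts[url] = disputed

    def show_warning(self, url):
        """Only widely flagged links confirmed as disputed get the warning."""
        return self.flags[url] >= self.threshold and self.verdicts.get(url, False)


tracker = DisputeTracker(threshold=3)
for _ in range(3):
    needs_review = tracker.flag("example.com/story")
tracker.record_verdict("example.com/story", disputed=True)
print(tracker.show_warning("example.com/story"))  # True
```

The key design point mirrors the article: neither signal alone triggers the warning. Flag volume merely escalates a link, and a human verdict without broad flagging does nothing, which is how limited review capacity gets focused on the most-reported stories.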
The theory is that the new tool won’t rest solely on the judgments of Facebook’s own researchers, but also on third-party fact checkers. Just as the web has always played host to questionable claims, it has also given fact checkers a chance to verify or debunk controversial ones. These organizations include Snopes, PolitiFact, and the Washington Post’s Fact Checker. Such groups have always been around, but they have been growing and multiplying in recent years.
As with every solution, there are pros and cons. Relying on third-party fact-checking organizations is a good way to gain some clarity, but in the murky world of politics, truth can seem a little fuzzy. While many organizations attempt to deliver the unvarnished truth, others fact-check with an agenda.
In the battle over fake news, there’s a lot of political capital to be gained in proving that your side is more truthful than the other, which is why fact-checking pages have been put up to promote just about every political view and party. In the end, even those fact checkers that are the most dedicated to objectivity still have personal views and biases that can seep into their work.
This is why it can get tricky to apply the title “fake news” to many political stories. Facebook is trying to sidestep this issue by simply pointing out that the facts of stories are disputed. It doesn’t go as far as marking stories as false; it only says that prominent fact checkers find the claims in a story suspect. This allows users to approach certain stories with an increased level of caution and quickly access multiple takes on an issue so they can decide what to believe for themselves.
No matter what Facebook does, there are going to be complaints. When the Facebook news feed was entirely automated, it would often share outlandish and conspiratorial claims. When users complained, Facebook stepped in and hired curators to remove stories that weren’t up to its standards. Observers who examined the curators’ decisions found that they had a track record of disproportionately deleting stories from conservative sources.
The curators claimed this was simply a byproduct of the fake news industry, which they said was more interested in serving a conservative audience. To this day, Facebook continues to struggle to strike a balance between free speech and editorial standards grounded in verifiable news. The new flagging system is another step in the site’s evolution. Casual users might only notice changes like new reaction options, but keen-eyed observers will see that Facebook is always making tweaks to try to improve its services. If all goes as planned, this new tool will help inject some skepticism into the global flow of news.