Facebook already knows how to hide news stories…in China
Facebook has come under fire in recent weeks for the role that so-called “fake news” played in the US presidential election. These are typically posts containing false or misleading information, designed to spread as quickly as possible and skew the opinions of readers.
Anecdotal evidence suggests that such fake news sites even influenced how people voted. This has led to calls for Facebook to monitor the posts its users share, to prevent the spread of false information that is harmful or that encourages hatred of minority groups such as refugees or immigrants.
Facebook has argued that it would be too difficult to monitor the huge number of pages that spread this sort of information among Facebook’s billion or so users.
But some have argued that Facebook is being less than completely truthful about how much it could influence what content is posted on the platform. And it does seem as though Facebook is able to restrict some content.
Part of being a huge global media platform is that Facebook must restrict certain content to keep material that is illegal in particular countries from being shared there. For instance, Facebook has blocked content in Russia and in other countries where material that would be legal in America runs afoul of local laws.
In fact, one of the last major markets Facebook has failed to penetrate is China, home to well over a billion potential users. That is largely because Facebook has been unwilling to impose the sort of censorship that China requires.
Facebook has been banned in China since 2009, when a series of protests in the western province of Xinjiang prompted the government to restrict access; by the end of the year, the restriction had become a full ban.
Now, as it struggles to break into the Chinese market, Facebook appears willing to impose certain restrictions if necessary. It has reportedly developed an algorithm that can detect when certain content is being shared and suppress it. This is a feature the Chinese government demands, and one that could in theory also be used to restrict the spread of fake news among US users.
Some have argued that Facebook’s unwillingness to apply this feature to fighting the spread of fake news is motivated by the ad revenue that shared content generates, even when that content is hateful or blatantly false.
So is this a simple case of a company bowing to the pressure of censorship in order to access one market, and refusing to apply it in another to keep revenues up?
Either way, it does raise the issue of whether it would be morally acceptable to censor any sort of free speech being shared on social media. Governments have seen in recent years the role that social media can play in spreading democratic dissent. That is why autocratic regimes around the world have often moved to restrict access to social media at the first sign of trouble.
It is easier to rule over people when they have fewer ways of communicating with one another.
While it would undoubtedly be a good thing if all the news shared on Facebook were accurate and unbiased, and fake news may indeed harm democracy, would building a censorship mechanism into a platform where almost half of all Americans get their news really be a good move for democracy?
Like any technology, such a tool carries the potential for abuse, which makes this an issue that needs to be treated carefully.