Following mounting pressure, including an advertising boycott of Facebook earlier this year, Facebook, YouTube and Twitter have agreed to create a unified definition of harmful content across all their social media platforms. This will likely provide welcome clarity and consistency for acceptable use and content policies across the tech industry.
Online content sharing platforms have fostered the age-old battle between the right to freedom of expression and the restriction of illegal 'hate speech'. Social media has enhanced our ability to communicate across borders and interact immediately over politics, fashion and anything in between. However, it has also facilitated the dissemination of hateful and often harmful speech to a mass audience at the touch of a button. Large tech companies have often taken the view that they are not responsible for the content their users create: they provide the infrastructure, and it is up to individuals to provide the content. Over the last few years, there has been pressure for technology companies, particularly Facebook and Twitter, to take accountability for the harmful content shared over their platforms and to recognise the consequences of their failure to act. This has led to content policies and acceptable use policies, which form an important basis of any online technology that facilitates user generated content, stating that companies will take down hate speech and other forms of harmful content published on their platforms. But defining harmful content has proved more difficult, with platforms applying different thresholds for what constitutes online harm.
In 2008, Facebook took down an image of a new mother breastfeeding her child on the basis that it was 'offensive' under its content policy, which sparked a global protest. In May this year, Twitter restricted access to a tweet by Donald Trump on the basis that it constituted harmful content under its content policy, but Facebook took no action over the same content as it did not meet its internal content policy definitions. Whether content meets the threshold of illegal hate speech is difficult to determine, and not everyone accepts that tech companies should act as gatekeepers to freedom of speech by deciding what individuals can or cannot publish. However, it is generally accepted that tech companies need to take some responsibility for the dissemination of harmful content, as can be seen from the European Commission's Code of Conduct on countering illegal hate speech online, published in May 2016, and the newly proposed Digital Services Act.
This issue doesn't only affect the giants in the tech world, but any technology offering which facilitates user generated content. By offering this service, tech companies have a responsibility to put in place content and acceptable use policies to minimise illegal hate speech. Hopefully, the agreement of a unified definition of online harm between Facebook, Twitter and YouTube will provide welcome clarity and a baseline that can be used by tech companies across the globe.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.