Momentum grows globally around outlawing online hate
Categories: Latest News
Thursday May 02 2019
The recent announcement by Facebook that 12 prominent and “dangerous” far-right groups and individuals have been banned from its platform on the grounds that they “spread hate” suggests social media companies are increasingly accepting the role they play in the dissemination of extremist material. Jacinda Ardern, the Prime Minister of New Zealand, has set out plans to host a summit in Paris alongside French president Emmanuel Macron, where they will call on social media companies to commit to a pledge to eliminate such hateful content online.
In the UK, the Joint Human Rights Committee recently accused Twitter and Facebook of failing to protect users against violent or misogynistic abuse on their platforms, criticising their unwillingness to take action unless prompted by high-profile figures. This follows the government’s publication of its Online Harms White Paper, which proposes establishing a legal duty of care to online users, overseen by an independent regulator, holding companies to account for tackling a wide range of online harms. This decision to regulate the leading social media platforms is the first of its kind. Nevertheless, the UK cannot tackle online hate on its own; this must be a global initiative, and other governments should follow suit.
While it is heartening that the problem of online hate is being acknowledged globally, and that momentum is gathering behind the need to combat it, this should be taken as only the first step towards eliminating it. Primary legislation dealing with online hate speech is currently lacking in many countries around the world, and needs to be developed and implemented in consultation with organisations representing the communities directly affected. Such hatred can be tackled effectively only by placing the protections those communities need on a statutory footing, regulating the modern means through which it is disseminated.
The features of social media platforms such as Facebook make the rapid spread of misinformation a dangerous inevitability. As a result, individuals and organisations at the extreme ends of the political and ideological spectrum are able to “weaponise” these platforms, using them as an integral means of disseminating hate, vitriol, and propaganda. Indeed, the Council on Foreign Relations suggests that, at their most extreme, stories and attacks spread online contribute to violence ranging from lynching to ethnic cleansing.
The terrorist who massacred 50 innocent Muslims in Christchurch, New Zealand, on 15 March felt confident enough to livestream his actions on Facebook. This demonstrates the very real potential for social media platforms to be utilised as a tool to extend the impact of fear and terror. It is incumbent upon social media platforms, such as Facebook, to recognise their responsibility in combatting extremism and hatred online. However, with their current initiatives to tackle these issues so transparently flawed, intervention in the form of primary legislation must be enacted to safeguard communities and individuals from both the exclusionary socio-political consequences and the potentially violent impacts of this phenomenon.
Ultimately, the increasing awareness amongst heads of state and social media companies of the harms of online hatred should be applauded, but it is just the first step. Primary legislation needs to be implemented to deal with social media offences and online hate speech, and policymakers must commit to working with social media companies to protect free speech while developing an effective strategy to tackle hate speech online, in consultation with Muslim grassroots organisations.