Facebook takes action against 1.5 billion accounts in first 3 months of 2018
Categories: Latest News
Wednesday May 16 2018
The latest figures released by Facebook, documenting the staggering number of accounts and posts the social media giant has removed due to hate content, illustrate the significant (and legally ill-defined) problem of hate speech online.
Facebook revealed that between January and March 2018 the platform had taken action against nearly 1.5 billion accounts and pieces of content, permanently removing more than 583 million accounts, 837 million pieces of spam and 28.8 million pieces of malicious content.
[Figure 1. Data does not include posts involving child sexual exploitation imagery, revenge porn, credible violence, suicidal content, bullying and harassment.]
The social network stated that the majority of the content within the report was removed by Facebook before it was flagged by users of the platform.
Facebook added that whilst it was developing technology to better tackle malicious content, it was struggling to identify and remove hate speech. Of the hate speech posts removed by the platform, only 38% were flagged by Facebook’s technology; the significant majority were reported by users.
Mr Guy Rosen, Facebook’s vice-president of product management, admitted that “we have a lot of work still to do to prevent abuse. It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important”.
Mr Rosen added that “Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue…it’s why we’re investing heavily in more people and better technology to make Facebook safer for everyone”.
The move comes as pressure mounts against social media platforms to take a more active role in tackling malicious content online.
In February 2018, during the Home Affairs Select Committee’s inquiry into hate crime and its violent consequences, Facebook’s Director of Policy for the UK, Mr Simon Milner, was questioned as to why a number of Islamophobic pages remained active on the social networking platform. Mr Milner responded that Islamophobic pages were being taken down, but not if they “focused on the religion of Islam, not on Muslims”.
This blurred boundary between criticism of Islam and hate speech against Muslims is a grey area often exploited by those seeking to disguise hate speech as legitimate criticism of Islam. This is why we need an agreed official definition of Islamophobia, which is currently absent.
The All-Party Parliamentary Group on British Muslims is currently seeking to propose such a definition, and MEND is preparing a submission in response. In MEND’s definition, we directly address criticism of religion and argue that:
“While criticism of Islam within legitimate realms of debate may not be Islamophobic, it may become Islamophobic if the arguments presented are used to justify or encourage vilification, stereotyping, dehumanization, demonization or exclusion of Muslims. For example, by using criticism of religion to argue that Muslims are collectively evil or violent”.
Currently, social media platforms take voluntary steps to curb hate speech without being regulated by the Government. However, this mode of operation is being challenged by a number of states dissatisfied with the progress made by the platforms, resulting in new laws being introduced worldwide, including Germany’s hate speech law, the Network Enforcement Act (NetzDG).
This is important as the epidemic of hate content on social media platforms is primarily due to a general lack of adequate legislation regulating the online world.
Mr Carl Miller, a research director at Demos (a cross-party think-tank), said: “We have not had a proper law passed on this since social media came into widespread use. If you talk to lawyers about this, most of them will say they don’t even know which Act really applies here. Some of it is the Communications Act, as I said, some of it is the Protection from Harassment Act. Some people say it is public order legislation; others say that counter-terrorism or incitement of racial hatred legislation applies here”.
Only by introducing adequate legal structures that are able to regulate the online world will the UK Government be able to tackle hate speech online, including the estimated 7,000 Islamophobic tweets a day, and make sure that, in the words of the Prime Minister, “what is illegal offline is illegal online”.