Facebook’s complicity in far-right hate

Wednesday, 19 February 2020

Hate on social media has garnered huge attention in recent years, with concerns ranging from the undermining of democracy to the cyberbullying of minors splashed across newspapers. In response, the large social media platforms have taken a number of public steps to demonstrate their commitment to tackling these problems and to ensuring that their platforms do not endanger users or wider society. However, new analysis by The Guardian shows that Facebook failed to act against far-right pages two months after being made aware of them. As far-right and Islamophobic groups continue to exploit social media, major platforms such as Facebook must react much faster to far-right hatred.

The Guardian’s investigation found that, two months after it notified Facebook about a number of pages spreading disinformation and anti-Islamic hate, the pages were still active. In December 2019, the paper reported that “a group of mysterious Israel-based accounts” was “part of a covert plot to control some of Facebook’s largest far-right pages…and create a commercial enterprise that harvests Islamophobic hate for profit”. The group controlled a network of 21 pages that produced “more than 1,000 coordinated faked news posts per week to more than 1 million followers”. The Guardian’s update notes that, whilst several pages had been removed, a number were still actively producing hate; one post falsely claimed that the German Chancellor, Angela Merkel, was “paying terrorists to kill Jews” in Palestine. Facebook has also been criticised by Axel Bruns, Professor at the Digital Media Research Centre at Queensland University of Technology, who stated: “What happens with Facebook is that they tend to only act when something blows up big enough for them to be concerned with their public standing”. Clearly, Facebook still needs to challenge the spread of far-right hatred proactively and far more quickly.

The problem of far-right hatred infiltrating a social media platform is by no means unique to Facebook; YouTube has a long history of reacting slowly to far-right hatred, or of failing to act at all. Numerous studies, reports and anecdotal accounts have demonstrated the radicalisation of individuals through far-right content on online platforms, including individuals who went on to commit terrorist atrocities, such as the Christchurch terrorist.

Moreover, the issue of ‘algorithmic radicalisation’ must be addressed. This is where a platform’s recommendation algorithms use a person’s viewing history to suggest content that not only agrees with their existing views but presents increasingly exaggerated versions of them, leading people to adopt extreme forms of once-moderate ideas. During the 2016 US election, El-Bermawy noted that his YouTube recommendations were increasingly pro-Clinton, while pro-Trump content was never recommended. In another case, highlighted by Albright (2018) on Medium, the writer starts with a video about a high-school shooting in Florida and is progressively exposed to controversial, violent and alt-right videos. Zeynep Tufekci, writing in The New York Times, described using multiple accounts to watch certain videos and seeing how YouTube began recommending either far-right or far-left videos depending on her original choice. El-Bermawy was stuck in a filter bubble, while Albright was taken down a ‘rabbit hole’ in which the content became increasingly aggressive; both trajectories were dictated by YouTube’s algorithms. Again, whilst YouTube has made a number of public statements about its aim to tackle ‘algorithmic radicalisation’, a corpus of anecdotal evidence indicates that the problem persists.

Another platform that has garnered strong criticism for being lax on far-right hatred is Twitter. A number of research studies have demonstrated how networks operate on Twitter to project far-right Islamophobic hatred. Natalie Bucklin, a data scientist at DataRobot working on ‘AI for Good’, analysed 297,849 Twitter accounts associated with two far-right extremist users and identified “a network of approximately 19,000 users with high proximity to extremist content”. She added that the network contained right-wing figures “such as Charlie Kirk and Ryan Fournier” as well as “openly racist accounts”, and that its members were actively working together in “promoting extremist right-wing content on Twitter”. Bucklin concludes by noting that “Twitter allows these networks to continue relatively unchecked”.

All three major social media platforms, whilst having made public statements about tackling hate, must therefore recognise their complicity in propagating racism, white supremacy and Islamophobia. More work needs to be undertaken by these platforms to ensure that they are not actively exploited by those dedicated to spreading hatred and agenda-driven fake news. Furthermore, MEND urges policymakers to urgently implement primary legislation to deal with social media offences and hate speech online, and to commit to working with social media companies to protect free speech while developing an effective strategy to tackle online hate speech in consultation with Muslim grassroots organisations.
