Online hate, offline symptoms
Categories: Latest News
Friday January 15 2021
Once an outlandish apocalyptic scenario clichéd in Hollywood action movies, the recent attack on the US Capitol brought into stark view just how fragile US and Western democracy is. Videos of the assault on what is meant to be one of the most fortified public buildings in the US continue to be unearthed. However, whilst such disturbing content is set to elicit an emotional response of anger and perhaps panic towards the offline activities of a far-right network, the true problem perhaps lies elsewhere. It is in cyberspace that far-right activists are being nurtured, that far-right networks are being organised, and that plans for politically-motivated violence are being formalised.
Therefore, whilst resources are continually spent on fortifying offline spaces in preparation for further far-right attacks, equal effort must be devoted to tackling online far-right sentiment if we are to curb far-right violence.
Prior to the attack on the Capitol, various online platforms were being used by far-right networks to lay the groundwork. From fringe websites like Parler, Gab, and TheDonald, to mainstream social media platforms like Twitter and TikTok, far-right networks were busy trying to propagate, justify, and normalise prejudice. Taking the example of Parler, a social media platform that has largely been blacklisted from the Internet, many observers noted how far-right networks were openly being formalised. Launched in 2018, Parler gained infamy for lacking guidelines on hate speech, letting prejudicial discourse thrive on its platform. Researchers have found striking examples of racism and Nazi rhetoric, including posts theorising that Jews are descended from Satan and hashtags such as “HitlerWasRight” gaining popularity. Importantly, such discourse is not confined to private chats concealed by sophisticated technology; rather, it is all in public view, frequently flagged by academic researchers and civil rights organisations. Further research demonstrates that the situation is replicated on other platforms, including 4chan, 8kun, Reddit, and Telegram. This wide prevalence of hate speech and far-right rhetoric does not mean the issue is unsolvable. Rather, the issue can no longer be ignored.
Whilst there is a growing appetite to tackle online hatred, strong concrete steps are going to be required to dislodge far-right networks from cyberspace. Since the attack on the Capitol, major internet stakeholders have taken decisive steps to counter online far-right networks, including banning Donald Trump from their respective platforms. Twitter, a crucial means by which Trump addressed his followers and the world, is one example. The platform had, within limits, been moderating Trump’s tweets since June, including adding labels to them and restricting their sharing. After the Capitol attack, Twitter suspended Trump’s account for 12 hours and later banned him following further inflammatory tweets. However, critics were quick to note that Twitter took such action only after years of inflammatory discourse had been propagated by Trump. Other notable names, including Tommy Robinson, Katie Hopkins, and Steve Bannon, were likewise removed from social media platforms only after amassing large followings and spewing racist discourse. Nonetheless, there is some hope: internet service providers and social media platforms are beginning to proactively tackle online hate speech. Amazon, Apple, and Google have acted in unison to remove the aforementioned far-right online haven, Parler. The decision means that no one will be able to access Parler’s website or app for the foreseeable future. Twitter has also, in recent days, suspended more than 70,000 accounts linked to the far-right QAnon movement. Such leadership from the major tech companies is crucial in ensuring online far-right networks can be effectively dismantled.
Spurred on by events across the Atlantic, UK far-right groups are actively exploiting political events such as the Capitol assault to incite further hatred. Individuals such as Tommy Robinson and groups like Britain First, whilst barred from mainstream social media platforms, are using fringe platforms like Telegram to propagate their discourse. Over recent weeks, chatrooms associated with Tommy Robinson and Britain First have inundated their followers with misinformation, and in response, some followers have expressed veiled and overt far-right threats. In response to a video by Tommy Robinson about the inauguration, one follower ominously wrote: “At the inauguration all the traitors will be in one place”.
Others share more anti-Semitic comments calling the Rothschilds – a Jewish family often claimed to be controlling the world by conspiracy theorists – “filthy extraterrestrial pedophile jew god [sic]”.
The chatrooms act as echo chambers, spreading the idea that, as conservatives, their members are being hounded by a liberal agenda, warning followers that things will get worse for them and that they should prepare. Such messaging urges followers to distrust all information sources whose views do not align with those of the far-right space. Any attempt to tackle the formation of online far-right spaces must therefore prioritise countering and dismantling such chatrooms before they are able to indoctrinate their followers.
This fostering of online hatred inevitably spills into the offline world, with the most extreme cases manifesting as terror attacks. In one case, Filip Golon Bednarczyk transitioned from posting horrific Islamophobic and anti-Semitic content online to preparing to undertake a terror attack. Bednarczyk was caught with bomb manuals, components, and 2kg of sulphur powder. He later admitted to multiple terror and explosives offences. This was the culmination, however, of years of fomenting prejudice – much of it online. On Facebook, Bednarczyk regularly wrote hate speech targeting Muslim communities, Jewish communities, and LGBTQI+ communities. In one post he shared a meme showing Mecca (Islam’s holiest site) being destroyed by a nuclear bomb. In other posts, he shared memes supporting the Christchurch terrorist who killed 51 people. Importantly, cyberspace acted as a haven in which Bednarczyk could reinforce and exercise his prejudice, slowly preparing for offline attacks.
When conceptualising the threat of the far right, we often envisage white supremacist rallies and marches. However, we must also be cognisant of the online activities of such networks, where prejudicial discourse is fostered, reinforced, and used to lay a foundation for offline violence.
MEND, therefore, calls upon the UK Government to:
- Clearly and urgently outline its plans to tackle online far-right activity.
- Outline its strategy to implement primary legislation to deal with social media offences and hate speech online, including the removal of extreme content.
- Develop an efficient strategy to tackle hate speech online in consultation with Muslim grassroots organisations.