The Matrix of Violence – Automated Racism in Police Surveillance

Tuesday September 01 2020

This article was first published on Byline Times.

Recent weeks have witnessed a growing outcry against the State’s use of an algorithm to predict the exam results of students across the country, to the detriment of students predominantly from disadvantaged backgrounds. Students who were predicted A*s and As found themselves receiving Bs and Cs without any evidence justifying the downgrades, leaving their teachers just as perplexed. Patterns, however, quickly began to emerge, as the keen-eyed noted that students who attended private schools and lived in affluent areas were predicted better grades by the algorithm than counterparts from disadvantaged backgrounds.

It soon became apparent that bias had inadvertently seeped into the operation of the algorithm.

After mounting outcry, the Government largely abandoned the use of the algorithm (though not entirely), relying instead on the judgement of teachers and schools.

Whilst it is tempting to dismiss this case as unique, the unethical use of automated processes by the State is quickly becoming the new norm. Sectors such as healthcare and hospitality are undergoing a dramatic transition to reap the benefits of algorithms and Artificial Intelligence (AI), with efficiency improving by orders of magnitude. Other sectors, however, are adopting algorithms to the detriment of the public, and of minority communities in particular. One such sector is policing, which has become obsessed with implementing AI-based systems without proper consideration of the outcomes.

One example of the police’s questionable use of AI-based systems is the “gangs violence matrix” (GVM), introduced soon after London’s 2011 riots. The Met Police describe the GVM as an “intelligence tool use[d] to identify and risk-assess gang members across London who are involved in gang violence”. The system works by maintaining a database of individuals who have previously come into contact with the police and partner agencies, and determining their level of ‘threat’ from their network of friends and acquaintances. The GVM is flawed on two counts: the ‘guilt-by-association’ logic of the system, and the biased dataset fed into the matrix by an institution shown to be systemically racist.
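The Met Police do not publish the GVM’s scoring criteria, so the following is only a minimal sketch of how a ‘guilt-by-association’ score over a contact network might behave, under an assumed score-propagation rule; every name, weight and record below is invented for illustration.

```python
# Illustrative sketch only: the GVM's real scoring rules are not public.
# This toy model assumes a simple rule in which each person's 'risk' score
# is their own record plus a fraction of their contacts' scores.
from collections import defaultdict

# Hypothetical contact records, e.g. two people logged together during a stop.
contacts = [
    ("bill", "friend_a"),
    ("bill", "friend_b"),
    ("friend_a", "known_offender"),
]

# Only 'known_offender' has an actual conviction in this invented dataset.
base_scores = {"known_offender": 1.0}

# Build an undirected contact graph from the logged encounters.
graph = defaultdict(set)
for a, b in contacts:
    graph[a].add(b)
    graph[b].add(a)

def propagate(scores, graph, rounds=2, weight=0.5):
    """Each round, everyone inherits a fraction of their contacts' scores."""
    for _ in range(rounds):
        scores = {
            person: scores.get(person, 0.0)
            + sum(weight * scores.get(c, 0.0) for c in graph[person])
            for person in graph
        }
    return scores

print(propagate(base_scores, graph))
# After two rounds 'bill' carries a non-zero score despite having no record
# at all: the more a community is stopped and logged, the denser its contact
# graph becomes and the higher everyone in it scores.
```

Under these invented numbers, ‘bill’ ends up flagged purely through second-hand association, which mirrors the dynamic described above: biased inputs become scores that look like objective intelligence.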

In one telling incident, Bill, a young Black male, had been repeatedly stopped by the police without ever having committed a crime or done anything to arouse reasonable suspicion. Bill’s first interaction with the police came at the age of 11, when he was stopped and searched. When he asked why he was being searched, one of the officers replied: “Because I want to”. The police’s interest in him only grew over the years, as the GVM indicated that Bill’s network was worthy of such interest. By the time he was 14, Bill was at times being arrested more than once a week, again without ever being charged. The systemic racism of the Met Police first brought Bill to the attention of officers, but it was the algorithm that reinforced the idea that he was a threat. In essence, the dataset fed into the matrix disproportionately represents individuals from vulnerable communities, and its network-based threat modelling then justifies the increased securitisation of those same communities.

Bill’s sole ‘crime’ was to live in a poor area, to know people who were engaged in criminal behaviour, and to be the victim of an unethical stop and search. In other incidents, people who had shared videos of grime or drill music were deemed to be expressing ‘gang affiliation’, warranting their addition to the GVM and their designation as a ‘threat’.

In a similarly troubling case, the Met Police are increasingly normalising the use of facial recognition technology, which they describe as “intelligence-led” and deployed at “specific locations”. The danger for minority communities is stark. The Met Police state that they are “using this technology to prevent and detect crime by helping officers find wanted criminals”. They also state that the system is robust and that errors occur only once in every one thousand cases. However, an independent review (commissioned, and later dismissed, by Scotland Yard) found that the rate of false positives was likely to be four in every five cases.
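The gap between those two figures is largely one of framing: “one error in a thousand” counts false alerts against every face scanned, whereas the review’s figure concerns the proportion of alerts that turned out to be wrong. A minimal worked example, using assumed crowd sizes and watchlist numbers rather than the Met’s actual deployment data, shows how both can be true at once.

```python
# Illustrative arithmetic only: crowd size, number of wanted individuals and
# match rate are assumptions chosen for the example, not the Met's figures.
faces_scanned = 10_000            # people passing the cameras in a deployment
false_alert_rate = 1 / 1_000      # the claimed one-in-a-thousand error rate
wanted_people_present = 2         # assume very few watchlist subjects pass by
true_match_rate = 0.7             # assume the system spots most of them

false_alerts = faces_scanned * false_alert_rate        # 10.0 false alerts
true_alerts = wanted_people_present * true_match_rate  # 1.4 genuine alerts

wrong_share = false_alerts / (false_alerts + true_alerts)
print(f"{wrong_share:.0%} of alerts are false")        # roughly 88%
```

Under these assumed numbers, nearly nine in ten alerts are mistaken even though the per-scan error rate really is one in a thousand, simply because the people the system is looking for are rare in any given crowd.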

The review further added that whilst the Met Police state that the technology would only be used to identify individuals who were “wanted” (in itself an ambiguous term), the data used by the technology was at times significantly out of date. As a result, individuals who had already been dealt with by the courts, and who were not wanted for the offences listed in the dataset, were being stopped by the police.

The review also noted that, although the police claim that only particular locations would be affected and that the public would be clearly notified, the burden of avoiding the technology was significant. In some cases, avoiding a monitored location required a detour of nearly 20 minutes. In other cases, the posters notifying the public of the technology’s use were positioned such that reading them brought individuals within range of the cameras.

Perhaps most damningly, the authors of the review concluded that it was “highly possible” that the introduction of the technology “would be held unlawful if challenged before the courts”.

The introduction of the technology should also not be considered an inevitable consequence of modern society. Major cities across the world have already acknowledged the unethical nature of the technology and have either banned it or halted trials (including San Diego and San Francisco).

In essence, the development of artificial intelligence and its incorporation into State functions promises significant benefits, reducing the workload on a stretched workforce. However, the technology should not be seen as a magic wand able to fix every problem without fault. Rather, automating State processes in contexts shown to be structurally racist risks further cementing structural bias in a form that human oversight cannot easily detect. The State should therefore first address the institutional racism that has marred the functioning of key public institutions before using data from those institutions to train an algorithm.
