17 November 2023

Luke Chambers, a computer scientist and PhD (Law) Candidate at Northumbria University, examines how a new era of evidence will challenge courts, the judiciary, and the public they serve.


Artificial intelligence (AI) has become intrinsic to our daily lives almost overnight. It's now found everywhere in society: it helps manage the routes we take to work, the digital content we consume, even whether we're accepted for loans or jobs. Unsurprisingly, these technologies are also beginning to feature in criminal evidence. As they become more common, they are increasingly appearing in smaller cases before magistrates, and while they bring many benefits, they also pose challenges for magistrates and risks to the right to a fair trial.

How accurate?

One of the most pressing issues is the lack of reliable methods by which to ascertain how accurate an AI system is. People frequently over-trust machines or overestimate their accuracy due to automation bias, and computer systems are also famously poor at gauging their own accuracy.
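For readers with a technical background, here is a minimal sketch, using purely invented numbers, of what "poor at gauging its own accuracy" means in practice: the confidence a system reports is just another number it outputs, and it can sit well above the accuracy the system actually achieves.

```python
# Illustrative sketch only: hypothetical outputs from an imaginary classifier,
# showing how stated confidence can exceed measured accuracy (miscalibration).
# The numbers are invented for illustration, not taken from any real product.

# Each pair is (confidence the system reported, whether it was actually right).
predictions = [
    (0.99, True), (0.98, False), (0.97, True), (0.99, True),
    (0.96, False), (0.98, True), (0.99, False), (0.97, True),
]

mean_confidence = sum(conf for conf, _ in predictions) / len(predictions)
accuracy = sum(correct for _, correct in predictions) / len(predictions)

print(f"Average confidence the system claims: {mean_confidence:.0%}")
print(f"Accuracy it actually achieves:        {accuracy:.0%}")
# The gap between these two figures is the point: a confident-sounding output
# is not, by itself, evidence of a reliable one.
```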

Unlike much traditional equipment, it's almost impossible to give a 'general' accuracy rate for AI systems because they vary so widely. In addition, any accuracy rating can change over time, or instantly, in response to a myriad of factors. The problem here isn't necessarily that AI systems are inherently unreliable (though, with no realistically enforceable quality standards in industry, this is often a concern); it's that it is very difficult to tell the difference between systems that are reliable and those that are not. Was the driver driving carelessly, or were they the victim of automated racial bias? Telling the difference often requires extensive expertise in computer science, mathematics, or statistics. In magistrates' courts, this can create equality of arms challenges.

Consider any of the many examples where new AI technologies may end up in magistrates' courts as criminal evidence. Taking the driving example above, perhaps a car with built-in driver drowsiness detection (such systems are already on UK roads) reports that the driver was falling asleep at the wheel. This could one day become evidence in a careless or dangerous driving case. However, systems of this kind have in the past misread Asian faces because they were initially trained only on white faces. This is just one hypothetical illustration of how far magistrates can rely on new AI features in commercial products when these inevitably begin to appear across the wide variety of cases that come before them.
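To make the drowsiness-detection concern concrete, here is a small, entirely hypothetical sketch (the groups and figures are invented, not drawn from any real product) of how a single headline accuracy figure can conceal very different error rates for different groups of drivers:

```python
# Hypothetical sketch: one headline accuracy figure can hide very different
# error rates for different groups. Groups and counts are invented purely for
# illustration and describe no real product.
from collections import defaultdict

# (group, was the imaginary drowsiness detector correct?)
results = (
    [("group A", True)] * 90 + [("group A", False)] * 10
    + [("group B", True)] * 60 + [("group B", False)] * 40
)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(results)
print(f"Headline accuracy: {overall:.0%}")  # 75% sounds respectable
for group in totals:
    print(f"  {group}: {correct[group] / totals[group]:.0%}")
# The same system is right 90% of the time for one group and only 60% of the
# time for the other; the single figure quoted in court reveals neither.
```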

Further challenges

In many cases, the only way for the accused to prove their innocence is by getting hold of a copy of the AI system used, or at least an engineer's report on it, and hiring an expert witness.

However, this too is fraught with problems; expert witnesses in this area are in short supply and cost many hundreds of pounds per hour. How many accused brought before magistrates’ courts each day have the resources available for such an expense, especially with legal aid dwindling by the year? Much of this can come to naught anyway—intellectual property protections on many AI systems mean that the creators or users of such systems are under no obligation to provide copies to the courts or, in many cases, to the accused.

A further area of difficulty is the lack of leadership around AI deployment and the lack of agreed-upon standards, which together make universal testing of systems extremely difficult. Understanding why and how an AI system has reached the decision it has is one of the biggest challenges facing police forces and technology organisations today, and it is certain to be a major topic of debate in the courtroom.

Then there are additional concerns that AI built into commercial products could alter evidence without the court's knowledge. For example, some CCTV cameras in operation today have AI that can 'fix' parts of the picture and generate visual 'filler' to compensate for moisture on the lens, glare, or other imperfections. This is known as automated image correction. There is currently no concrete obligation to tell the court or the defendant if a CCTV image in evidence is partly AI-generated.
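As a rough illustration of what automated image correction does, the sketch below masks part of a synthetic frame and fills it in algorithmically. It uses OpenCV's classical inpainting as a stand-in; real cameras may use learned (neural) models, but the point is the same: the filled-in pixels are generated rather than recorded.

```python
# Minimal sketch of automated image 'correction': a masked patch of a frame is
# filled in with generated content. Classical OpenCV inpainting is used here as
# a stand-in for the learned correction features built into some cameras.
import numpy as np
import cv2

# A synthetic 'CCTV frame' (a grey gradient) standing in for real footage.
frame = np.tile(np.linspace(40, 220, 200, dtype=np.uint8), (200, 1))
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)

# Pretend one patch is unreadable (moisture, glare) and mark it in a mask.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[80:120, 80:120] = 255

# Generate plausible 'filler' for the marked region from surrounding pixels.
corrected = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)

changed = int((corrected != frame).any(axis=2).sum())
print(f"{changed} pixels in the 'corrected' frame differ from what was captured")
```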

Mitigating the challenges

Make no mistake—AI can unlock tremendous new methods of reducing crime, prosecuting offenders and, in some cases, even exonerating the innocent. It is certain to be key to 21st century policing.

However, a widespread lack of funding, leadership and adaptability also creates significant risks for defendants in magistrates' courts, and those risks will only grow over the next five to ten years. Though legislative and procedural adaptations will hopefully be made in the coming years, the problems facing magistrates can currently be mitigated by:

  • Training magistrates on the strengths and weaknesses of AI evidence, so that they can better understand complex evidence, even without an expert present.
  • Providing reference materials to magistrates on a variety of upcoming AI tools that are commonly used by police or are likely to be seen in courts.
  • Making clear to defendants their options on legal aid and taking time to advise self-represented court users on the importance of counsel in cases with AI evidence.

As the frontline of the criminal justice system, magistrates have a significant role to play in the future of fair trial in the technological age. Having these mitigations in place can help ensure that magistrates' courts continue to convict on the factual matters of the case, rather than convicting those who cannot afford appropriate counsel when faced with AI evidence they can neither understand nor refute. In this way the strengths of the technology can be harnessed without importing its weaknesses into our justice system.