When you ask someone to think of a judge, they’ll probably imagine a stern-looking, middle-aged man or woman in red robes and a white wig, or perhaps a sombre suit. The kind of judge you see on Law & Order or, if you’re a similar age to me, Judge John Deed! Whatever you imagine, chances are you’re thinking of a serious, senior person in one way or another.

Fast forward a few years and there’s a chance the type of judge we imagine will be very different. I’m not talking about what they’re wearing, or whether they’re male or female, old or young. There’s a chance we’ll be talking about whether the judge is even human.

Software robotics – where bot programs automate tasks usually performed by a person – is becoming more common in financial services. If you’ve spoken to a chatbot when doing your online banking, you’ll know what I mean. But it’s the ability of machines to learn to diagnose problems, known as Cognitive Robotic Process Automation, that might end up having the most interesting impact on our justice system.

Artificial intelligence is developing so quickly that it may be only a matter of time before robo-judges become more common than High Court ones. Appealing a computer’s decision might still involve humans, but their role could be secondary to a machine-managed justice process.

Computers are already offering judgements to settle online disputes. Alibaba, the Chinese e-commerce giant, uses algorithms to deal with over 400,000 disputes a year. By comparison, the UK justice system only deals with 14,000 cases a year.

Online Dispute Resolution (ODR) platforms like Modria have been used by companies such as PayPal and eBay to process over 400 million disputes. 90% of these can be resolved automatically – without human intervention.
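
To give a flavour of how that automatic resolution might work, here’s a minimal sketch of the kind of rules-based triage an ODR platform could run before a human ever sees a dispute. Everything in it (the categories, the thresholds, the remedies) is invented for illustration; Modria’s actual logic is proprietary.

```python
# Hypothetical ODR triage: settle simple disputes automatically,
# escalate the rest. All categories, thresholds and remedies here
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Dispute:
    category: str          # e.g. "item_not_received", "not_as_described"
    amount: float          # disputed amount in GBP
    seller_history: float  # fraction of seller's past disputes settled amicably

AUTO_RESOLVE_LIMIT = 100.0  # small claims below this settle automatically

def triage(d: Dispute) -> str:
    """Return an automatic outcome, or escalate to a human mediator."""
    if d.category == "item_not_received" and d.amount < AUTO_RESOLVE_LIMIT:
        return "refund buyer"               # cheap and uncontested: refund
    if d.category == "not_as_described" and d.seller_history > 0.9:
        return "offer partial refund"       # trusted seller: split the difference
    return "escalate to human mediator"     # everything else needs a person

print(triage(Dispute("item_not_received", 25.0, 0.95)))  # -> refund buyer
```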

The potential applications of this kind of software in our justice system are vast. Modria is already used to automatically resolve time-consuming disputes over tenancies, child custody and small claims, speeding up outcomes: these cases take around 50% less time and cost far less.

In America, courts have been using data for a number of years to predict the chances of a suspect failing to appear in court, or of an offender re-offending. The use of AI in sentencing has been challenged because, with machine learning, no one really knows how the algorithms reach their outcomes. This makes it difficult both for lawyers to defend a client and for human judges to assess whether the AI’s predictions are fair.
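
To see why that opacity matters, here’s a toy sketch (with randomly generated data and invented features, nothing like a real risk tool) of how a model can hand a court a confident-looking score while offering no account of how it was reached:

```python
# A toy illustration of the opacity problem: the model outputs a risk
# score, but no human-readable account of how it got there.
# All data below is randomly generated; the features are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features: [age, prior_offences, days_since_last_offence]
X = rng.integers(low=[18, 0, 0], high=[70, 10, 3650], size=(500, 3))
y = rng.integers(0, 2, size=500)  # random "re-offended" labels, for shape only

model = RandomForestClassifier(n_estimators=100).fit(X, y)

defendant = [[25, 2, 400]]
risk = model.predict_proba(defendant)[0, 1]
print(f"Predicted re-offending risk: {risk:.0%}")
# The score looks authoritative, but the 100-tree ensemble behind it is
# effectively a black box: a lawyer cannot cross-examine it.
```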

Fairness is a concept that needs to be factored into all aspects of a machine-operated justice process. We need to be careful that bots don’t take on human biases as they learn. That said, a well-trained robo-judge may have advantages over a human: they won’t make snap decisions because they’re hungry, they won’t try to speed through the last hour of a case to make a train, and they won’t be swayed by how well someone presents themselves.

AI can also be accurate. In a 2016 study, researchers from University College London, the University of Sheffield and the University of Pennsylvania trained a machine-learning model on the text of judicial decisions of the European Court of Human Rights, and correctly predicted the outcomes of 79% of cases.
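
The study’s actual pipeline isn’t reproduced here, but the idea can be sketched in a few lines: represent each judgment as word and phrase counts, then train a classifier to separate ‘violation’ from ‘no violation’. The two miniature ‘judgments’ below are invented placeholders, far shorter than any real case text:

```python
# A sketch in the spirit of the 2016 ECHR study (not the authors' code):
# predict violation / no-violation from the text of a judgment.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [  # invented placeholder "judgments"
    "applicant detained without review court finds violation of article 5",
    "domestic remedies adequate no violation of the convention found",
]
labels = [1, 0]  # 1 = violation found, 0 = no violation

# Word and two-word-phrase counts feed a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# A new "case" resembling the first example is classed as a violation.
print(model.predict(["applicant held without judicial review"]))  # -> [1]
```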

While removing humans from many processes may make them seem more efficient, there is a risk that blind reliance on AI could undermine the very nature of justice. Judging remorse requires empathy, and removing emotion from the process will affect more than efficiency. If a crime is committed by a human, against humans, the situation may be far more complex than any machine can understand, no matter how advanced its education.

“To err is human, to forgive, divine,” wrote Alexander Pope in 1711. Over 300 years later, the world has developed in ways Pope could never have imagined. Yet, with computers now in a position to ‘play God’, or judge, perhaps we need to think about what should remain sacred to humans, and what we are prepared to sacrifice to the robots.