AI could change how we obtain legal advice, but those without access to the technology could be left out in the cold

The legal profession has already been using artificial intelligence (AI) for several years, to automate document reviews and predict case outcomes, among other functions. Until recently, however, these tools were mostly confined to large, well-established firms.

Some law firms have already deployed AI tools to assist their solicitors with day-to-day work. By 2022, three quarters of the largest solicitors’ firms were using AI, and the trend has now begun to encompass small and medium-sized firms too, signalling a shift towards mainstream use of these technological tools.

This technology could be enormously beneficial both to people in the legal profession and clients. But its rapid expansion has also increased the urgency of calls to assess the potential risks.

The 2023 Risk Outlook Report by the Solicitors Regulation Authority (SRA) predicts that AI could automate time-consuming tasks and increase speed and capacity, which could particularly benefit smaller firms with limited administrative support. It also has the potential to reduce costs and to increase transparency around legal decision-making, provided the technology is well monitored.

Reserved approach

However, in the absence of rigorous auditing, so-called “hallucinations”, where an AI produces a response that is false or misleading, can result in improper advice being given to clients. They could even lead to miscarriages of justice if courts are inadvertently misled, for example through fake precedents being submitted.

Exactly this scenario has already occurred in the US, where a New York lawyer submitted a legal brief containing six fabricated judicial decisions. Against this backdrop of growing recognition of the problem, English judges were issued with judicial guidance on use of the technology in December 2023.

This was an important first step in addressing the risks, but the UK’s overall approach is still relatively reserved. While it recognises technological complications associated with AI, such as biases that can become embedded in algorithms, its focus remains on “guardrails”: controls generally initiated by the tech industry itself, rather than regulatory frameworks imposed from outside it. The UK’s approach is decidedly less strict than, say, the EU’s AI Act, which has been in development for many years.

The European Union’s AI Act introduces a strict framework for technological development.

Innovation in AI may be necessary for a thriving society, provided its limitations are identified and managed. But there appears to be a genuine absence of consideration of the technology’s true impact on access to justice. The hype implies that anyone who at some point faces litigation will be equipped with expert tools to guide them through the process.

However, many members of the public lack regular or direct access to the internet, the devices required, or the money to pay for those AI tools. People who cannot interpret AI instructions, or who are digitally excluded because of disability or age, would likewise be unable to take advantage of this new technology.

Digital divide

Despite the internet revolution of the past two decades, a significant number of people still do not use it. And the courts’ resolution process is unlike that of an ordinary business, where some customer issues can be settled through a chatbot. Legal problems vary, and each requires a response tailored to the matter at hand.

Even current chatbots are sometimes incapable of resolving certain issues, often handing customers over to a human agent in these instances. More advanced AI could potentially fix this problem, but we have already witnessed the pitfalls of such approaches, from flawed algorithms in medicine to those used to spot benefit fraud.

The Legal Aid, Sentencing and Punishment of Offenders Act 2012 (LASPO) introduced cuts to legal aid funding, narrowing the financial eligibility criteria. This has already created an access gap, with more people having to represent themselves in court because they cannot afford legal representation. It’s a gap that could grow as the financial crisis deepens.

Even if individuals representing themselves could access AI tools, they might not be able to fully understand the information or its legal implications in order to defend their positions effectively. There is also the question of whether they could convey that information convincingly before a judge.

Legal personnel can explain the process and its potential outcomes in clear terms. They can also offer a measure of support, instilling confidence in and reassuring their clients. Taken at face value, AI certainly has the potential to improve access to justice. Yet this potential is complicated by existing structural and societal inequality.

With technology evolving at a monumental rate and the human element being minimised, there is a real risk of a large gap opening up in terms of who can access legal advice. This scenario is at odds with the reasons the use of AI was encouraged in the first place.

The authors do not work for, consult for, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.