How to practise ethical use of AI in law

A recent UUÂãÁÄÖ±²¥ report explored the potential impact of generative AI. We look at how lawyers can address the ethical concerns and take full advantage of AI in the future.

A recent UUÂãÁÄÖ±²¥ report, Generative AI and the future of the legal profession, sought to gauge awareness of generative artificial intelligence (AI) in the legal sector, explore how the sector currently uses generative AI tools, and discuss the promising role of generative AI in the future.

The report demonstrated, among other things, that awareness of generative AI in the legal community is high, with 87% of respondents aware of its existence. Most of those respondents also recognised its power: 95% said generative AI will have a notable impact on law.

The impact and the future of AI depend on how responsibly the technology is built and used. The sector seems to appreciate that fact, with 90% of respondents to the report raising concerns about possible negative impacts. And, thankfully, awareness of the major ethical risks is the surest way to avoid them.

Indeed, as long as platforms grapple with the real-world impact of AI, take action to prevent bias, and practise accountability and transparency, AI will provide huge benefits across the sector. Below we look at how lawyers can navigate key ethical concerns and embrace the responsible use of AI.

How to eliminate bias

The future of law, as with most industries, will depend on the evolution of AI. But AI systems can present risks, especially if users pick the wrong platforms. Poorly developed AI systems can, for example, produce biased results, as one widely cited analysis of machine-learning bias explains: ‘Bias can creep in at many stages…and the standard practices in computer science aren’t designed to detect it.’

But developers and users can limit bias by taking a few simple steps: carefully selecting the content sources used as inputs, using statistical modelling to surface potential issues, and taking reactive measures to minimise incidents of bias when they do occur. In short, human intervention drastically limits bias. A simple version of such a statistical check is sketched below.
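By way of illustration only, the statistical modelling mentioned above can start very simply. The following Python sketch is hypothetical (the data, function names, and 0.8 threshold are our assumptions, not any platform's method): it applies the widely used 'four-fifths' rule, flagging any group whose favourable-outcome rate falls well below that of the best-performing group.

```python
from collections import defaultdict

def selection_rates(records):
    """Favourable-outcome rate per group, from (group, outcome) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparate_impact(records, threshold=0.8):
    """Flag groups below `threshold` times the highest group's rate
    (the 'four-fifths rule', a common first-pass bias check)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical audit data: (group, favourable_outcome)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(flag_disparate_impact(audit))  # flags group B (rate 0.33 vs 0.67)
```

Checks like this do not replace human review, but they surface potential bias early enough for a human to act on it.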

In the UUÂãÁÄÖ±²¥ report, for example, Alison Rees-Blanchard, head of TMT legal guidance at UUÂãÁÄÖ±²¥, says: ‘Any generated output must be checked thoroughly. However, where those tools are trained on a closed and trusted data source, the user can have greater confidence in the generated output and hallucinations will be easier to identify, as verification of the output is made easier.’

Download: Generative AI and the future of the legal profession

Many AI systems already take steps to avoid bias, and these are precisely the systems upon which lawyers should depend. UUÂãÁÄÖ±²¥ chief product officer Jeff Pfeifer explains that, in creating the company’s generative AI tool, the team was always guided by responsible AI principles: ‘[The tool] prevents the creation or reinforcement of bias, we ensure that we can always explain how and why our systems work in the way they do…and we respect and champion privacy and data governance.’

Taking simple steps and using trusted platforms eliminates much of the risk. Knowing where the data comes from, and ensuring effective oversight, allows lawyers to responsibly reap the rewards of AI.

Increasing accountability and transparency

Transparency and accountability are essential for the ethical (and effective) use of AI. Platforms should ideally explain to customers and other users how the solutions were created, the information upon which they rely, and how the solutions generally work. An appropriate level of transparency creates trust, not simply for users, but also for regulatory bodies, ensuring all parties are satisfied.

The level of transparency will differ depending on the application of AI, as different contexts and different audiences require different explanations.
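To make that concrete, transparency documentation is often organised as a 'model card' published alongside the system. The sketch below is purely illustrative (the field names and the explain function are our own invention, not any vendor's format); it shows how the same record might be disclosed at different levels of detail for different audiences.

```python
# A minimal, hypothetical "model card": the disclosures a platform
# might publish. Field names are illustrative, not a standard schema.
model_card = {
    "name": "contract-summariser",
    "version": "1.2.0",
    "intended_use": "Summarising commercial contracts for qualified lawyers",
    "not_intended_for": ["unsupervised legal advice to the public"],
    "training_data": "Closed corpus of licensed, verified legal documents",
    "known_limitations": [
        "May miss jurisdiction-specific nuances",
        "Output must be reviewed by a qualified professional",
    ],
    "oversight": "All outputs subject to human review before release",
}

def explain(card: dict, audience: str) -> str:
    """Tailor the disclosure to the audience: regulators get the full
    record, everyday users a shorter, plain-language subset."""
    if audience == "regulator":
        keys = card  # full disclosure
    else:
        keys = {k: card[k] for k in ("name", "intended_use", "known_limitations")}
    return "\n".join(f"{k}: {v}" for k, v in keys.items())

print(explain(model_card, audience="user"))
```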

Transparency goes hand in hand with accountability. The best way to create accountability is to always ensure effective human oversight in the development and implementation of AI tools. Human oversight not only limits bias, but also forges paths to accountability. The UUÂãÁÄÖ±²¥ tool mentioned above, for example, depends on legally trained professionals informing choices about model construction and deployment. Human intervention, as above, remains the best route to the responsible use of AI.
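As a minimal sketch of what such oversight can look like in software (all names here are hypothetical, not a real product's API), the example below holds every AI draft until a named reviewer signs it off, and records each decision so any released output can later be traced back to a person and a source:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Draft:
    prompt: str
    output: str
    sources: list
    approved: bool = False
    reviewer: Optional[str] = None
    log: list = field(default_factory=list)

def review(draft: Draft, reviewer: str, approve: bool, note: str = "") -> Draft:
    """Record a human decision; every output needs one."""
    draft.approved = approve
    draft.reviewer = reviewer
    draft.log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "approved": approve,
        "note": note,
    })
    return draft

def release(draft: Draft) -> str:
    """Refuse to release any output a human has not signed off."""
    if not draft.approved:
        raise PermissionError("Blocked: no human approval on record.")
    return draft.output

draft = Draft(prompt="Summarise clause 4.2", output="Draft summary...",
              sources=["signed_agreement.pdf"])
review(draft, reviewer="a.lawyer", approve=True, note="Checked against the source")
print(release(draft))
```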

Preventing the spread of misinformation

It is essential, in the development of AI systems, that platforms consider the real-world impact. And perhaps the greatest real-world risk of AI is the spread of misinformation. AI systems tend to spread misinformation for two reasons: either the platform has been fed false information, or the platform lacks the right information and so invents answers, commonly known as ‘hallucinations’.

Both of these issues can be avoided by human intervention. Lawyers should, as above, use systems that rely on the right information and are subject to effective oversight; such systems produce vastly better outputs, as Rees-Blanchard explains: ‘Trained on a closed source and taught not to deviate, the results are exponentially more accurate.’
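As a rough illustration of the 'closed and trusted source' approach Rees-Blanchard describes (the keyword matching here is a crude stand-in for real retrieval, and the corpus, names, and threshold are entirely our assumptions), the sketch below answers only from a fixed corpus and abstains, rather than inventing an answer, when nothing in that corpus supports the question:

```python
# A closed, trusted corpus stands in for a verified legal data source.
TRUSTED_CORPUS = {
    "limitation-periods": "Under the Limitation Act 1980 contract claims "
                          "must generally be brought within six years",
    "governing-law": "The agreement is governed by the laws of England and Wales",
}

STOPWORDS = {"the", "is", "a", "an", "of", "for", "what", "and", "by", "be", "must"}

def keywords(text: str) -> set:
    """Crude keyword extraction; a real system would use proper retrieval."""
    return {w.strip("?.,").lower() for w in text.split()} - STOPWORDS

def answer(question: str, min_overlap: int = 2) -> str:
    """Answer only from the closed corpus; abstain rather than guess."""
    q = keywords(question)
    best_id, best_score = None, 0
    for doc_id, text in TRUSTED_CORPUS.items():
        score = len(q & keywords(text))
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_score < min_overlap:
        return "No supported answer found in the trusted source."
    return f"{TRUSTED_CORPUS[best_id]}. [source: {best_id}]"

print(answer("What is the limitation period for contract claims?"))
print(answer("Who won the 2022 World Cup?"))  # abstains instead of hallucinating
```

Refusing to answer is the software equivalent of a lawyer saying ‘I would need to check’: it trades fluency for verifiability, which is precisely the trade a legal tool should make.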

So lawyers should opt for generative AI platforms that are both proactive and reactive, ensuring that the right information is fed into them and that the right information comes out. Human intervention will reduce bias, ensure effective transparency and accountability, and prevent the spread of misinformation. By introducing these simple and effective safeguards, AI systems will streamline your organisation, boost productivity, and vastly improve the practice of law.

Read full findings from our generative AI survey


About the author:
Dylan is the Content Lead at UUÂãÁÄÖ±²¥ UK. Prior to writing about law, he covered topics including business, technology, retail, talent management and advertising.