Should law firms disclose the use of generative AI?

A recent LexisNexis report demonstrated that firms need to tell clients they are using AI. But how much information should they give? And what's the best way to give that information?

Lawyers and law firms will use artificial intelligence (AI) in the future. That inevitability is driven by two needs: the need to remain competitive and the need to meet client expectations. And, as shown in the recent LexisNexis report, Generative AI and the future of the legal profession, the feeling of inevitability is echoed across the sector. Seven in ten (70%) respondents, for example, agreed or strongly agreed that firms should embrace cutting-edge tech, including generative AI, and just under half (49%) expect their firms to use generative AI in the next 12 months.

The debate, then, is not about whether firms will use AI, nor even when, but how. And that invites various questions around client interaction. Should firms tell clients when they use AI? Do they need to inform clients every time they use it? Do clients have a right to opt out? In this article, we explain the need for transparency, consider how much transparency might be needed, and set out how law firms can disclose generative AI usage in a sensible and responsible way.

The need for transparency

Generative AI tools will increasingly form part of both the in-house and private practice toolkit, says Ben Allgrove, partner and chief innovation officer at Baker McKenzie. ‘Clients do not want “AI powered solutions”; they want the right legal services to meet their needs.’ It is not that clients will expect AI. Clients will expect the best and quickest solutions – and those will invariably rely on the use of AI.

But clients will generally expect transparency around the use of AI. In the LexisNexis report, for example, more than four in five (82%) in-house counsel said they would expect firms to tell them when they have been using generative AI. General respondents broadly echoed that sentiment, with 75% saying that they believe their clients should know when firms are using generative AI.

The benefits are obvious. Transparency builds trust, strengthens client relationships, and allows firms to mitigate future problems. So firms should certainly tell clients that they are using AI. That much is clear. The difficult point is how much detail they should divulge.

The degree of transparency

The need for a degree of transparency is clear. But the degree of transparency remains up for debate. Natalie Salunke, general counsel at Zilch, says that she’d only expect firms to provide information if AI was used to change how personal data or confidential information was processed. ‘You don’t buy a car and go “oooohh, I wonder what technology is in there?”’ Salunke says in the LexisNexis report. ‘You just want to make sure that it works, that you're safe and that it’s going to get you from A to B.’

But other clients may want more transparency, perhaps even full details of the AI tools used, the data on which the systems are built, the people responsible for the tools, and much more. Excessive detail may prove unrealistic, however, as it would undermine the time savings and cost reductions that are the main purpose of using AI. So the best route is to establish a common practice, finding a degree of transparency that keeps clients happy while still retaining the advantages of AI.

It's important to note that the degree of expected transparency may shift as AI progresses. If everyone in the sector used generative AI, for example, then revealing the use of AI might start to feel redundant. Andy Cooke, general counsel at TravelPerk, explains in the report that using AI may become standard across the sector, so the need to alert clients to every use may prove excessive. Even then, though, firms may still need to provide detail about the AI systems they are using.


How to ensure transparency

Firms should explore documentation that defines their use of AI at a broad level, which they can then distribute to prospective clients. One of the first pieces of documentation firms can create, for example, is an ‘AI policy’: a document that establishes the principles they follow when using AI tools.

The content of an AI policy will depend on the shape and size of the law firm. Small firms may set out core principles, emphasising the need to maintain mindfulness, privacy, and responsibility when using AI. Larger firms may go into detail about how lawyers should use individual platforms, making ongoing and incremental changes based on the latest AI developments. But all AI policies should ensure platforms consider real-world impact, take steps to prevent bias, and practise accountability and transparency – core principles that recur across emerging AI regulation and guidance.

AI policies give general rules around the use of AI. But clients may want to know about the specific AI tools a firm plans to use, why it has chosen those tools, and how they will be applied. Firms can develop an ‘AI charter’ that sets out the platforms in use, why the firm has opted for those platforms, how it will use them, and the options available to clients around AI usage, such as an opt-out clause or the ability to review AI-assisted decisions.

The AI policy and AI charter provide clients with a degree of transparency, building trust and showing that the firm is using generative AI responsibly, without taking too much time to put together or maintain. In certain instances, clients may wish to know even more about the AI in use – and firms should be willing to provide that information, while remaining reasonable and time-efficient.


About the author:
Dylan is the Content Lead at LexisNexis UK. Prior to writing about law, he covered topics including business, technology, retail, talent management and advertising.