Generative AI and the expectations of in-house teams

The rise of generative artificial intelligence (AI) in the legal sector has been unstoppable. Our recent report, Lawyers cross into the new era of generative AI, shows that more than a quarter (26%) of lawyers and legal experts use generative AI tools at least monthly, a notable rise from only 11% in our July 2023 survey.

Adoption rates across the sector have more than doubled in the past six months as the legal market has recognised the potential of AI.

Despite this growth, an aversion to AI remains. Our report shows that the proportion of in-house teams expecting external counsel to use generative AI stood at 70% in July 2023 but fell to 57% by January 2024.

Mark Smith, Director of Strategic Markets at LexisNexis, argues that fears around the accuracy, quality, and security of information explain the declining expectations: "I suspect the number one reason is accuracy and fear of having the wrong advice, particularly heightened by issues with free-to-use generative AI."

In this article, we dig deeper into the key insights from our report, examine the core concerns behind the decline in expected use of AI, and consider how large firms can use AI efficiently and responsibly.

Concerns over reliability, accuracy, and security

In-house legal teams have been the pioneers of generative AI in the sector. The reasons are simple: in-house teams are generally time-poor, deal with high-volume and low-value tasks that are ripe for automation, and often have the backing of businesses that are more comfortable with innovation, are willing to invest in tech, and are less risk-averse than large law firms.

It was no surprise that in-house teams led the way in expecting external counsel to adopt AI in July 2023; by comparison, just 55% of law firms felt that clients expected them to use this legal tech. But in the past six months, we've seen a marked reduction in those expectations, largely due to growing awareness of legal, ethical, and reputational risk.

AI has become a huge talking point. Users have amplified the benefits, clients have championed the cost-saving capabilities, and legal practitioners have shared knowledge of use cases. And, as the benefits were hyped and the virtues were amplified, so were the risks and concerns. Our report shows that such concerns are well known across the sector, particularly at large firms. Respondents cited hallucinations (57%) and security issues (55%) as hurdles to AI adoption. Only 10% cited no concerns about using generative AI.

In-house teams are right to be concerned. However, the concerns should not rest with generative AI as a form of tech, but with generative AI platforms that misuse the tech. Irresponsible platforms are opaque, trained on insufficient inputs, and lacking in human oversight. That inexorably leads to unreliable, inaccurate, and insecure outputs. Such outputs produce bias, leak information, rely on outdated information, and pose a substantial legal risk to businesses.

Join the Lexis+ AI Insider programme today and be the first to know about upcoming products and gain access to exclusive news articles and webinars.

How to overcome generative AI concerns

Our report asked respondents how confident they would be using a generative AI tool that was grounded in legal content sources, with citations to the verifiable authority used to generate the response. Almost two-thirds (65%) said they would feel confident using an AI-powered tool grounded in legal research and guidance content, such as Lexis+ AI.

Lexis+ AI is grounded in the largest repository of accurate and exclusive legal content and combines the power of generative AI with proprietary LexisNexis search technology and authoritative content. Results are always backed by a verifiable, citable authority or source, mitigating the major concerns.

AI is not the issue; irresponsible AI is the issue. Samuel Pitchford, solicitor for Pembrokeshire County Council in Wales, summed up the sentiment of the sector: "We avoid using generative AI for research purposes to avoid hallucinations. But the development of closed generative AI tools, trained exclusively on legal source material and available only to subscribers, should be less prone to hallucinations and would allow our team to use generative AI for research."

Our report made it clear that large law firms should use responsible AI platforms: those that offer human oversight, awareness of real-world impacts, verification of sources, and accurate outputs. Large firms should also inform third parties of the legal tech they're using. Transparency around the use of AI is quickly becoming an expectation. According to the report, 78% of in-house counsel agree that law firms should make them aware when using generative AI legal tech.

Download our report, Lawyers cross into the new era of generative AI, and discover the full findings today. 


About the author:
Emma is the head of the Public Sector Practice Area Group & In-house Sector Strategy at LexisNexis.
Emma works with the wider LexisNexis business to ensure our practical guidance solutions meet the needs of our in-house legal and public sector customers. Emma is a qualified lawyer, legal research training expert, and an experienced risk and compliance specialist, with a demonstrated history of working in the legal services industry.