Our recent report, Lawyers cross into the new era of generative AI, shows accelerating use of generative artificial intelligence (AI) across the legal sector. The appetite among lawyers is huge, with adoption more than doubling from just 11% in July 2023 to over a quarter (26%) in January 2024. But, as ever with the latest tech developments, adoption of generative AI has proved uneven, with certain parts of the legal sector more enthusiastic than others.
Lawyers from academic institutions have proven the most enthusiastic, with 33% regularly using AI, compared with 26% overall. They use AI more than lawyers at small and large law firms, in-house lawyers, and lawyers at the Bar. But, while academic lawyers are profoundly enthusiastic, they're also the most concerned about the ethical implications of AI and the most aware of its risks.
Lawyers in academic institutions present a paradox, combining enthusiasm with caution. In this article, we explore the potential impact of generative AI on the future of legal academic institutions and look at how academic lawyers can maximise its benefits while mitigating the risks.
AI has arrived and the landscape is changing. Discover Lexis+ AI Insiders. Join today and stay ahead of the curve.
As mentioned above, lawyers from academic institutions are using AI more than any other section of the legal sector. They are not simply adopting the tech, but implementing and integrating it across operations. For example, only 22% of lawyers from academic institutions said they'd taken no steps to implement AI, compared with 46% of all respondents, including 62% of in-house lawyers and a startling 92% of barristers.
That means 78% of academic lawyers have implemented generative AI in some form. Lawyers from academic institutions were also the most likely to have hired generative AI experts, rearranged team structures to better focus on AI solutions, and launched an AI-powered product for client use. They're also more likely to have integrated generative AI with pre-existing systems. In short, academic lawyers are not simply paying lip service to generative AI, nor are they just occasionally checking ChatGPT. They are taking serious, progressive steps towards both implementation and widespread integration.
Implementation is perhaps best seen in training. Nearly one-fifth (19%) of academic respondents to our report said they're using AI for training, far above any other section of the legal sector. Using AI in training offers huge opportunities, a point the legal sector understands: 71% of respondents agreed that students should learn how generative AI works, 69% said graduates would be expected to know how to use generative AI tools, and 63% believed that academic institutions would soon train students to use generative AI.
Students will learn not only about AI but with AI, and they can reap huge benefits from the implementation of generative AI in classrooms. AI-assisted training could, among other things, provide a more dialogic and interactive learning experience, give students personalised feedback based on analysed learning patterns, allow them to use AI-powered tools to support their learning journeys, enable simulations of cases or legal negotiations, boost legal research capabilities, and so on.
The above forms of learning will likely include elements of learning about generative AI and how it's used across the legal sector. It's perhaps no surprise that only 28% of respondents in our report disagreed with the notion that generative AI would fundamentally change the way the law is taught.
Lawyers from academic institutions are also significantly more concerned about the risks of AI. Our report showed that only 10% of all respondents had no concerns about generative AI; among academic lawyers, that number drops to just 3%. That means an incredible 97% have some concerns, and 55% have fundamental or significant concerns.
The main concerns, according to the report, revolve around hallucinations (57%), security (55%), and untrustworthy tech (55%). These concerns are largely directed at the platforms rather than the underlying tech. Certain generative AI platforms are opaque, trained on insufficient inputs, and lacking in human oversight, which results in inaccurate, unethical, and poor-quality outputs. The worst platforms produce bias, leak confidential information, generate hallucinations, and rely on outdated information.
But the report shows great confidence in ethical and responsible platforms, such as Lexis+, which are grounded in reputable legal sources and provide hallucination-free linked citations. Our report showed that 57% of academic lawyers felt comfortable using such responsible platforms, and only 29% said they were uncomfortable. As suggested, caution from academic lawyers largely concerns the platform, not the tech. Lawyers from academic institutions can avoid such risks simply by picking more trusted platforms: platforms with human oversight and awareness of real-world impacts.
Unlock our report, Lawyers cross into the new era of generative AI, today and reveal our full findings.