Written by Charlotte Dixon for the Tech and Media Section.
The introduction of AI to society has sparked a wide array of reactions. AI is widely recognised for the benefits it offers its users, particularly its efficiency in processing vast quantities of information. However, what does this mean for the world of law?
Generative AI can be defined as a subcategory of artificial intelligence that uses deep learning algorithms to generate new outputs from large quantities of existing input data;[1] examples include ChatGPT,[2] Gemini (formerly known as Bard)[3] and Claude.[4] The Law Society summarised the distinction neatly: traditional AI recognises, whilst generative AI creates. There has been frequent discussion within the legal sector about whether generative AI can be used to facilitate the work of legal professionals and, more broadly, about the general legalities of using generative AI.
Despite its general usefulness, generative AI can produce inaccurate outputs; this makes it unlikely that it can be used for legal purposes without being checked by a legal professional before use. However, there are examples of successful AI tools within the legal profession, such as HarveyAI,[5] which claims that ‘AI will not be the chatbot that complements your workflow; it will be the platform your workflow is built on’. HarveyAI also claims to be a secure platform, but legal professionals should nevertheless be cautious when feeding confidential client information into a generative AI, especially a non-specialised tool such as ChatGPT or Gemini. The principles of the Solicitors Regulation Authority (SRA) Standards and Regulations apply to legal professionals regardless of whether they use generative AI; this means that legal professionals who use such tools are not free from responsibility or liability if the AI produces incorrect or unfavourable results. For example, two US lawyers were fined $5,000 for ‘acts of conscious avoidance and false and misleading statements to the court’[6] after submitting a legal brief that contained six fictitious cases generated by ChatGPT.
The Law Society suggests that those who choose to use generative AI tools as part of their legal service, despite the associated risks, should take several precautions. The first is careful fact-checking to validate the accuracy and reliability of AI-generated outputs; this entails cross-referencing information against authoritative sources and scrutinising the coherence and consistency of the generated content. Additionally, carrying out due diligence is paramount to assess the suitability and ethical implications of utilising generative AI in specific contexts; this involves evaluating the technology’s capabilities, limitations and potential biases, as well as ensuring compliance with regulatory frameworks and ethical standards. Conducting thorough risk assessments and implementing robust risk-management strategies can help anticipate and address legal, privacy or security concerns. Furthermore, providing comprehensive staff guidance and training is crucial to equip legal professionals with the knowledge and skills to leverage generative AI effectively while navigating its complexities. This includes educating staff on best practices for using the technology, recognising potential pitfalls and promoting a culture of accountability and transparency in its use.
In conclusion, the integration of generative AI within the legal sector presents a promising yet complex landscape. While these technologies offer potential efficiencies and novel solutions, their inherent limitations and legal implications demand careful consideration. The example of HarveyAI and the cautionary tale of the US lawyers fined for submitting a brief containing fictitious cases underscore the importance of maintaining human oversight, rigorous fact-checking and adherence to regulatory standards. As legal professionals navigate this evolving terrain, it is imperative to strike a balance between harnessing the capabilities of generative AI and safeguarding against its risks. By adopting prudent precautions and upholding ethical standards, the legal community can leverage these tools to enhance their practices while preserving the integrity and trust essential to the profession.
References
[1] Law Society, 'Generative AI - The Essentials' (The Law Society, 17 November 2023) <https://www.lawsociety.org.uk/topics/ai-and-lawtech/generative-ai-the-essentials#h3-heading2> accessed 13 February 2024
[2] OpenAI, 'ChatGPT' (ChatGPT, 30 November 2022) <https://openai.com/blog/chatgpt> accessed 13 February 2024
[3] Google, 'Gemini' (Gemini, 6 December 2023) <https://gemini.google.com/?utm_source=google&utm_medium=cpc&utm_campaign=2024enGB_gemfeb&gclid=Cj0KCQiAw6yuBhDrARIsACf94RUd-UOCkxEHKwfIfvL3cwQN89GNx9ykkZ_S5LMqJ0_kEbfkonp1ckIaAsdsEALw_wcB> accessed 13 February 2024
[4] Anthropic, 'Introducing Claude' (Claude, 14 March 2023) <https://www.anthropic.com/news/introducing-claude> accessed 13 February 2024
[5] Harvey, 'Sequoia and OpenAI Back Harvey to Redefine Professional Services, Starting with Legal' (Harvey, 26 April 2023) <https://www.harvey.ai/blog/sequoia-and-openai-back-harvey-to-redefine-professional-services-starting-with-le> accessed 13 February 2024
[6] Sara Merken, 'New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Brief' (Thomson Reuters, 26 June 2023) <https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/> accessed 13 February 2024