Even Artificial Intelligence Needs a Watchful Eye
Brenda F. Pagliaro, Esquire
Florida Supreme Court Certified Circuit Civil Mediator and Qualified Arbitrator
The use of Artificial Intelligence (“AI”) is gaining momentum in the legal profession, with applications ranging from document review to memorandum drafting, brief writing, and even negotiations. Although it is a powerful tool, it also has limitations, and those limitations cannot be overlooked. The fact is, AI systems are not neutral; they possess inherent bias, just like humans. In a legal context, this bias can undermine arguments, produce false information, discriminate, and expose practitioners to professional risk.
Behind each AI query (“prompt”) sits a large language model (“LLM”). These models are built on deep learning and massive datasets, and they are designed to understand, summarize, and generate human-like text, code, and images. Behind the scenes, an LLM generates information from any number of sources and inputs, many of which were compiled by humans. Just like you or me, those humans carry some underlying bias, whether they know it or not; some biases are subconscious, while others are obvious. Artificial Intelligence is no different.
Plain and simple, AI language models possess bias. These models learn by processing vast amounts of text, including legal databases, newspapers, magazine articles, general internet content, and the like. That corpus of information, however, is not a balanced representation of the human experience or of legal thought as a whole. In fact, a portion of the case law available to AI is drawn from eras when discrimination was overlooked by, or invisible to, the courts. AI trained on this material does not distinguish between a precedent that reflects justice and one that reflects the prejudices of its time. This is referred to as historical bias.
Another form of AI bias develops through the training process itself. Training bias occurs when human reviewers evaluate AI outputs and determine, subjectively, which responses are better than others. Ironically, those reviewers carry their own inherent biases and assumptions about what constitutes complete, thorough, persuasive, or neutral information, legal argument, and analysis. Over thousands of evaluations, the AI model learns to reproduce outputs that align with the preferences of a particular group of people, not the preferences of the legal profession as a whole, and certainly not those of the client an attorney might represent. It is therefore critical to use this tool with caution.
Responsible use of AI requires treating AI output as a draft from an unvetted source, not as a final product. It requires independent verification at every material point. One suggestion for responsible use is to anchor each prompt with key information instead of posing generalized inquiries. When researching a legal issue, anchor your prompts with the jurisdiction of the case and any controlling authority. This will produce more detailed, on-point, case-specific results.
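For example (a hypothetical illustration only), rather than asking, “find cases about arbitration clauses,” a better-anchored prompt might read: “Summarize Florida appellate decisions from the past five years addressing the enforceability of arbitration clauses in consumer contracts, and provide the full citation for each case.” The added detail narrows the model’s focus and makes its output far easier to verify.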
In addition, require AI to produce counterarguments before searching for arguments that support your case and position. Verify every single citation against primary sources, because AI is known to “hallucinate,” meaning it can generate plausible-sounding but false case names and holdings. Attorneys across the country have been sanctioned for relying on fabricated citations in their arguments.
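A counterargument prompt might be as simple as: “What are the strongest arguments, with supporting authority, against the position I have described?” Posing the question this way, before seeking supportive authority, helps surface weaknesses in your case that the model might otherwise gloss over.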
AI is also not “real-time.” Check which version of the AI model you are using and when its training data was last updated. Finally, always run a second (or third) prompt asking the AI to identify the assumptions it made and anything it omitted. Always ask for the sources used in any research performed, and verify each one.
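Such a follow-up prompt might read: “List every assumption you made in your previous answer, identify any relevant issues or authorities you omitted, and cite the specific sources you relied upon.” Treat the model’s response as a starting point for your own verification, not a substitute for it.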
In conclusion, AI is truly a powerful tool, but it is not a substitute for sound legal judgment. The attorney with a watchful eye, who understands where AI bias originates and builds discipline and a verification framework into the workflow, will be a responsible AI user.