Even Artificial Intelligence Needs a Watchful Eye

Brenda F. Pagliaro, Esquire
Florida Supreme Court Certified Circuit Civil Mediator and Qualified Arbitrator

The use of Artificial Intelligence (“AI”) is gaining speed in the legal profession, where it is being used in document review, memorandum and brief drafting, and even negotiations. Although it is a powerful tool, it has limitations, and those limitations cannot be overlooked. The fact is, AI systems are not neutral; they possess inherent bias just as humans do. In a legal context, this bias can undermine arguments, produce false information, perpetuate discrimination, and expose practitioners to professional risk. The pending Mobley v. Workday litigation is a prime example. In Mobley, the plaintiff alleged that systemic discrimination was built into the algorithms of Workday’s systems used in the hiring process. The litigation is ongoing, with costly discovery underway and the algorithms and large language models at the center of the case.

Hidden behind each AI query (“prompt”) are complex systems powered by large language models (“LLMs”). These models are built on deep learning and massive datasets, trained to understand, summarize, and generate human-like text, code, and images. Behind the scenes, an LLM generates responses based on patterns learned from the vast datasets compiled during training, rather than by retrieving information in real time. Many of these datasets are created with human input, and, just like you or me, the humans involved carry some underlying bias whether they know it or not. Some biases are subconscious; others are more obvious. Artificial Intelligence is no different.

Plain and simple, AI language models possess bias. These models learn by processing vast amounts of text, including legal databases, newspapers, magazine articles, general internet content, and the like. That corpus is not a balanced representation of the human experience or of legal thought as a whole. In fact, a portion of the case law used to train AI is drawn from eras when discrimination was overlooked by, or invisible to, the courts. AI trained on this material lacks the inherent ability to distinguish whether a precedent reflects a longstanding legal principle or the social bias of its time. This is referred to as historical bias, and it requires careful guidance by the user to make these distinctions.

Another form of AI bias develops through the AI training process. Training bias occurs when human reviewers evaluate AI outputs and determine, subjectively, which responses are better than others. Ironically, the reviewers carry their own biases and assumptions about what constitutes complete, thorough, persuasive, or neutral information, legal argument, and analysis. This process is referred to as reinforcement learning from human feedback (“RLHF”), and it incorporates subjective human judgments into model behavior.

Through thousands of such evaluations, the AI model learns to reproduce outputs that align with the preferences of a particular group of people, not the preferences of the legal profession as a whole, and certainly not those of the client an attorney might represent. It is therefore critical to use this tool with caution.

Responsible use of AI requires treating AI output as a draft from an unvetted source, not a final product. It requires independent verification at every material point. One suggestion for responsible use is to anchor each prompt with key information instead of posing generalized inquiries. When researching a legal matter, include the jurisdiction and any controlling authority in the prompt. This will help produce a more detailed, on-point, case-specific result.

In addition, require AI to produce counterarguments before searching for arguments that support your case and position. Verify every single citation against primary sources, because AI is known to hallucinate, meaning it generates false case names and holdings. Attorneys across the country have been sanctioned for including such false citations in their arguments.

AI systems do not operate on real-time information and may rely on data that is outdated or limited, depending on the system. Check which version of AI you are using and when it was last updated. Finally, always run a second (or third) prompt asking the AI to identify the assumptions it made and any information it omitted. Always ask for the sources used in any research performed, and verify each one.

It is true that AI is a powerful and efficient tool, but it is not a substitute for sound legal judgment. The attorney who maintains a watchful eye, understands where AI bias originates, and builds discipline and a verification framework into their workflow will use AI not only effectively but ethically as well.