Artificial intelligence tools are increasingly being used in all walks of life, including by legal professionals. Lawyers can use AI to assist with various administrative tasks as well as in conducting legal research. As AI continues to evolve, it has the potential to improve efficiency and accuracy, especially with respect to routine matters, freeing up lawyers to focus on strategy and analysis. However, as a recent federal court case shows, it is critical for lawyers and clients to take care when embracing technology. While AI can offer significant benefits, there are also risks. 

Technology in Legal Research

Legal research has been done on computers for decades. The primary legal databases used by attorneys, Westlaw and LexisNexis, have continuously improved their search functions, enabling attorneys to find results more efficiently. However, to ensure that no cases or statutes are missed, many lawyers start by typing in a very general query in plain English, followed by hours of careful review to refine the search and verify the accuracy of the results. 

Using AI is another step in the evolution of legal research. In theory, ChatGPT and similar tools should be able to deliver answers to research questions much faster than existing searches. However, that isn’t always true because of the difficulty in crafting appropriate “prompts,” or questions. In addition, just as with Westlaw and Lexis, time must be spent verifying the results because pertinent information may be missing. Unlike Westlaw and Lexis, however, ChatGPT suffers from a significant problem: it can deliver false or “hallucinated” results. This is where some lawyers have gotten into trouble and been sanctioned by courts. 

Mata v. Avianca, Inc.

In Mata v. Avianca, Inc., a matter currently pending in the Southern District of New York, the plaintiff’s attorney submitted a brief that contained quotations from and citations to cases that do not exist. In drafting the brief, the plaintiff’s counsel relied upon ChatGPT to provide him with support for the assertions in his brief. ChatGPT generated a variety of support, though some of the cases cited did not exist, and those that did exist did not stand for the propositions ChatGPT attributed to them. The plaintiff’s counsel submitted the brief without having verified the case law. Instead, counsel simply asked ChatGPT if those cases were real, and it replied in the affirmative. While more did happen in the interim, Judge Kevin Castel eventually issued an order imposing sanctions on the firm, stating:

“In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

The reality is that Westlaw and LexisNexis already offer generative AI tools to assist in legal research and are in the process of improving them. Even then, however, attorneys will remain obligated to check the work they submit and read the cases they cite. 

Attorney Obligations with Respect to Technology

The Mata case and others like it are a warning for lawyers and clients alike. However, lawyers cannot pretend that AI doesn’t exist. The obligations of attorneys have been heightened with the introduction of new technology. Indeed, the American Bar Association has amended Comment 8 to Model Rule of Professional Conduct 1.1 to address technological competency, advising that attorneys should keep abreast of changes in the law and its practice and be knowledgeable about the benefits and risks associated with relevant technology. As applied to generative AI, this comment requires attorneys to understand how generative AI could assist in researching particular legal issues while simultaneously recognizing that this technology is still in its infancy and prone to “hallucinating.” 

AI’s Impact on Legal Services 

AI will never be a replacement for the knowledge and experience of a skilled attorney. However, attorneys should be considering how AI can be used in their practice to better serve clients. While there are dangers, attorneys shouldn’t rule out AI completely. Everything produced by a generative AI tool should be checked and double-checked against more trusted sources, especially while these tools are still early in their development. 

Clients should look for an attorney who embraces the potential of technology while guarding against the perils. If you are looking to hire an attorney, contact us for a consultation.