Using AI in Law: Compliance with ABA Model Rules and Best Practices

Image source: Marc Hoag via Midjourney

This is educational material and does not constitute legal advice, nor does this article create any attorney-client relationship; if you have legal questions, you should contact and engage an attorney. Any discussion of technical matters may be inaccurate or out of date, so you should consult the various vendors mentioned below to clarify and to ascertain accuracy.


The ABA Model Rules of Professional Conduct strongly imply, if not yet require, the adoption of generative AI technology in the practice of law.


The legal sector is rapidly embracing generative AI to streamline tasks such as research, drafting, document review, finding the needle in a haystack of email discovery, and more.

However, the case last year of a New York lawyer being sanctioned for filing a brief with fake AI-generated case citations serves as a cautionary reminder: while AI can enhance our capabilities, it must be wielded with utmost responsibility to prevent ethical violations and safeguard our reputation.

Accordingly, the ABA Model Rules of Professional Conduct and the California Rules of Professional Conduct, which largely align with the ABA Rules, not only imply but arguably necessitate the adoption of technology, including generative AI, to maintain competent client representation.

Embracing Competence through Technology

Maintaining competence is a fundamental ethical obligation for lawyers. According to ABA Rule 1.1 on Competence, this includes staying updated on changes in the law and its practice, including the benefits and risks associated with relevant technology (Comment 8 to ABA Rule 1.1).

This rule arguably implies that lawyers should use AI tools to enhance their practice, while emphasizing the need to ensure the accuracy and reliability of AI-generated content.

Regular training and the adoption of advanced AI tools are prudent ways to fulfill this obligation, ensuring that lawyers are well-versed in both the capabilities and limitations of AI technologies.

Enhancing Diligence with AI

ABA Rule 1.3 on Diligence requires lawyers to act with reasonable diligence and promptness in representing a client; this strongly implies that technology must be used to manage a lawyer’s workload.

A lawyer who relies on pen and paper to draft in days or weeks what would ordinarily take hours with a word processor, or who eschews email entirely in favor of postal mail for communicating with opposing counsel, would arguably run afoul of this rule, if only because of the unnecessary delays and costs that such anachronistic work would impose on clients.

AI tools can likewise be a significant asset in managing workloads and delivering timely service to clients. By automating routine tasks like legal research, document review, drafting, and sifting through hundreds or thousands of discovery documents, lawyers can free up valuable time for the more complex and strategic aspects of their work; in some instances the time and cost savings can be profound, reducing to mere minutes what would otherwise have taken hours.

However, it is crucial to remember that AI is not a substitute for human judgment and expertise, and errors — “hallucinations” — are still a very real concern. Lawyers must still exercise diligence in reviewing and verifying AI-generated content to ensure its accuracy and relevance to the specific case at hand.

Ensuring Accuracy in AI Outputs

AI tools vary significantly in their capabilities and reliability. General-purpose generative AI chatbots, whether OpenAI’s ChatGPT (yes, even the newer GPT-4o model), Anthropic’s Claude, or Google’s Gemini (née Bard), are more prone to producing inaccurate or “hallucinated” information than specialized legal AI platforms. While these errors often manifest as inaccurately quoted statutes, they can also result in wholly fabricated case citations and utterly bogus facts, as occurred with the aforementioned New York attorney.

In contrast, specialized legal AI platforms such as Lexis+ AI, Westlaw’s recently acquired Casetext CoCounsel, or GC.AI’s impressive “AI legal intern” are designed with access to authoritative legal databases against which they cross-reference all input queries and output. (Disclaimer: I have been testing GC.AI of my own accord using the company’s standard 14-day trial period and have received no compensation or otherwise benefited from mentioning it in this article.)

ABA Rule 3.1 on Meritorious Claims requires that lawyers only bring forward claims that are based on law and fact. Comment 2 to this rule emphasizes that an “action is frivolous … if the lawyer cannot make a good faith argument on the merits of the action taken” or support it by a good faith argument for the extension, modification, or reversal of existing law.

Therefore, it is essential to independently verify the accuracy of AI-generated content, including sources, citations, case law, statutes, and quoted language.

Manually checking all citations and legal references against primary legal sources is crucial, and simply asking or “prompting” a generative AI tool to “double-check its work,” as it were, will not usually suffice.
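
By way of illustration only, here is a minimal Python sketch of one way to pull candidate citations out of an AI-generated draft so they can be checked by hand against primary sources; the regular expression is a deliberately simplified, hypothetical pattern that would miss many real citation formats, so treat the output as a to-do list for manual review, never as verification itself:

import re

# Deliberately simplified, hypothetical pattern for U.S. reporter citations
# (e.g., "410 U.S. 113" or "550 F.3d 1023"); real-world citation formats are
# far more varied than this.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z.0-9]*\s+\d{1,5}\b")

def extract_candidate_citations(draft_text: str) -> list[str]:
    """Return candidate citations found in an AI-generated draft.

    Every item returned must still be verified by a human against Westlaw,
    Lexis, or the official reporter; this list proves nothing by itself.
    """
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

draft = "See Roe v. Wade, 410 U.S. 113 (1973); but cf. Smith v. Jones, 550 F.3d 1023."
for cite in extract_candidate_citations(draft):
    print(f"VERIFY MANUALLY: {cite}")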

What can sometimes help is to use multiple generative AI platforms in parallel, running the output of the first platform back and forth through the others, as if slowly distilling the end result through a funnel to filter out the errors.
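
As a rough sketch of that funnel idea, and assuming you have API access to two providers, the snippet below asks Anthropic’s Claude to critique a draft produced by OpenAI’s GPT-4o, using each vendor’s official Python SDK (the model names are current as of this writing and will change over time); this reduces, but does not eliminate, the need for human verification:

from openai import OpenAI          # pip install openai
import anthropic                   # pip install anthropic

openai_client = OpenAI()                # reads OPENAI_API_KEY from the environment
claude_client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment

def cross_check(draft: str) -> str:
    """Ask a second, independent model to critique the first model's draft.

    Both models can hallucinate, so a human must still verify every
    citation and quotation against primary sources.
    """
    response = claude_client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative; check current model names
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review this legal draft and flag any citations, "
                       "quotations, or statements of law that look doubtful:\n\n" + draft,
        }],
    )
    return response.content[0].text

first_pass = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the holding of Marbury v. Madison."}],
).choices[0].message.content

print(cross_check(first_pass))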

An alternative product, not strictly for legal research, is the AI-powered search engine Perplexity; its value above and beyond even the latest GPT-4o-powered build of ChatGPT is that it is a search engine first, one that leverages generative AI both for interpreting search queries and for generating results. It also provides a robust list of citations alongside its output, so it serves as a reliable “reality check” for your final query.

Maintaining Transparency and Protecting Confidentiality

Transparency in disclosing AI usage not only fosters trust but also helps the court and clients understand the role of AI in your work. ABA Rule 1.4 on Communication emphasizes the importance of keeping clients reasonably informed about the status of their matter.

When using AI tools, lawyers should disclose this to their clients and explain how AI is being used to assist with their case. This can be done through engagement letters or other client communications. Below is an example of such a clause:

5.         Use of Artificial Intelligence (AI)

In the course of representing you, I may employ generative AI tools including, without limitation, ChatGPT, Anthropic Claude, Google Gemini, Lexis+ AI, Casetext CoCounsel, GC.AI, or others, for tasks such as legal research, document drafting, and generating creative content. These tools are designed to augment, not replace, my legal expertise. While they offer advantages in efficiency, they are still under development and may have limitations in accuracy and reliability. I will use these tools in a manner consistent with ABA and California ethics rules, maintaining human oversight to review their outputs and to ensure client confidentiality. If you have any questions or concerns about this, please feel free to ask.

⚠️ This paragraph is effective when, and only when, initialed by you here: _______.

However, ABA Rule 1.6 on Confidentiality mandates that lawyers must protect client information. When using AI tools, particularly those connected to public APIs, it is crucial to ensure that no confidential information is inadvertently disclosed.

Using specialized legal AI platforms like the aforementioned offerings from Lexis and Westlaw/Casetext’s CoCounsel, for instance, can hugely mitigate privacy risks. Although those products use ChatGPT, they have a licensing arrangement (or similar) with OpenAI that grants them the right to run their own private local instance of ChatGPT’s large language model (LLM) on their own servers.

What this means in practical terms is that all prompts or queries submitted via those platforms, along with any uploaded files, still use ChatGPT, but the language processing is handled entirely on those platforms’ own private servers; nothing is transmitted to OpenAI’s servers or through its public API (application programming interface).
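
To make the distinction concrete: many OpenAI-compatible deployments let you point the standard client library at a private endpoint rather than at api.openai.com. The endpoint URL below is purely hypothetical, and whether a given legal AI vendor exposes anything like it is a question to put to the vendor:

from openai import OpenAI  # pip install openai

# Hypothetical private, OpenAI-compatible endpoint running inside the
# vendor's (or firm's) own infrastructure; the URL is illustrative only.
client = OpenAI(
    base_url="https://llm.internal.example-vendor.com/v1",
    api_key="private-deployment-key",  # credential for the private instance, not OpenAI
)

# The request format is identical, which is why the standard SDK works
# unchanged -- but the prompt never travels to OpenAI's public servers.
reply = client.chat.completions.create(
    model="gpt-4o",  # whichever model the private deployment hosts
    messages=[{"role": "user", "content": "Summarize the attached privileged memo."}],
)
print(reply.choices[0].message.content)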

Crucially, there are, in general, three key factors to look for in order to use generative AI confidently with the highest standards of security and confidentiality, in more or less decreasing order of the ease with which you can verify them (a toy checklist encoding these factors appears after the list):

  1. Disable Data Storage and Usage for Training: Turn off any settings that allow the AI provider to save your prompts or queries, and confirm that the platform does not use your prompts or queries for training purposes.

  2. Choose Services with a Local Private Instance of an LLM: By using a platform that runs its own local private instance of ChatGPT, you avoid transmitting confidential information to OpenAI’s servers.

  3. End-to-End Encryption: Use an AI platform that offers end-to-end encryption to protect data during transmission between your device and their servers.
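
If it helps to operationalize those three factors, here is a toy Python sketch of a vendor-intake checklist; the field names are mine and correspond to questions to put to the vendor, not to any real configuration setting or API:

from dataclasses import dataclass

@dataclass
class VendorConfidentialityChecklist:
    """Toy checklist encoding the three factors above; illustrative only."""
    no_storage_or_training: bool   # factor 1: retention and training disabled
    private_llm_instance: bool     # factor 2: local/private model instance
    end_to_end_encryption: bool    # factor 3: data encrypted in transit

    def meets_bar(self) -> bool:
        # All three factors must hold before confidential material goes in.
        return (self.no_storage_or_training
                and self.private_llm_instance
                and self.end_to_end_encryption)

# A vendor satisfying factors 1 and 3 but not 2 still fails the bar.
vendor = VendorConfidentialityChecklist(True, False, True)
print(vendor.meets_bar())  # False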

In short, ensure the AI platform provides security and confidentiality standards comparable to those of popular cloud platforms such as Google Drive, Dropbox, and Microsoft OneDrive.

Another privacy feature worth looking for, and one offered by GC.AI, is a dedicated database for every user, ensuring that your AI “chats” and any uploaded documents, including PDFs, never get intermingled with other users’ content. Ideally, such content, especially uploaded PDFs, can (a) be easily deleted and/or, better yet, (b) be deleted automatically when you end your browsing session by closing the web browser or logging off, or at least after a certain amount of time has passed.

Ethical Supervision of AI Tools

As AI tools — generative AI specifically — become more prevalent in legal practice, lawyers must remember that they are ultimately responsible for the work product generated using these tools.

ABA Rules 5.1 and 5.3 emphasize the ethical duties of lawyers to supervise subordinate lawyers and non-lawyer assistants, including AI tools. This involves understanding how the AI tool works, monitoring its output for accuracy and reliability, and ensuring that its use aligns with ethical and professional standards, including and especially matters involving privacy and confidentiality.

Addressing Emerging AI Challenges

The rapid advancement of AI in the legal industry raises novel ethical and legal questions that lawyers must be prepared to address. ABA Resolution 112 (2019) urges courts and lawyers to address emerging issues related to AI usage, such as algorithmic bias, explainability, and transparency.

By staying informed about these issues and proactively implementing best practices for ethical AI usage, lawyers can harness the power of AI to improve efficiency and client service while upholding the highest standards of professionalism and integrity. This proactive approach includes engaging in continuous education about AI technologies and their ethical implications.

If you’re curious to try an interesting academic thought exercise, consider asking your favorite AI chatbot, be it ChatGPT, Claude, or GC.AI, to draft a new set of fictitious rules for the ABA or California Rules of Professional Conduct governing the use of AI. I used the prompt below, but you should play around and see what you come up with:

Can you please produce a fictitious rule for the California Rules of Professional Conduct, about the use of AI like LLMs like ChatGPT? Please give it appropriate headings and section numbers to fit into a logically coherent place in the existing rules, make sure it's suitably detailed, and so on.

I don’t want to give anything away, but I think you’ll find this exercise as entertaining as it is awe-inspiring and impressive. And while it obviously offers no practical value, it certainly suggests a very plausible direction for where things are headed with generative AI in the legal community.

Closing Thoughts

Generative AI is a revolutionary, society-changing tool the likes of which humanity has never seen before; a fantastic magic heretofore relegated to the realm of fantasy and science fiction. It can significantly enhance efficiency, accuracy, and client service in legal practice, but it is not yet a replacement for human judgment and expertise.

Attorneys must remain ethically responsible for their work product, regardless of the tools used. By selecting appropriate AI tools, verifying AI-generated content, adhering to court rules, transparently disclosing AI usage, and addressing the ethical implications of AI, lawyers can effectively integrate AI into their practice while avoiding potential pitfalls.

And despite the risks, it is precisely because of the immense power and productivity gains offered by generative AI that the ABA and California Ethics Rules at least impliedly support and potentially encourage the thoughtful use of such technology to uphold the high standards of competence and ethical practice required in the practice of law, which at the end of the day, must always be in the client’s best interest.

For more insights on responsible AI usage in legal practice, feel free to reach out or explore our resources.
