In Formal Opinion 512, issued on July 30, the ABA’s Standing Committee on Ethics and Professional Responsibility identified some of the ethics issues lawyers face with artificial intelligence (AI).
The opinion focuses on competence under Model Rule 1.1, explaining that lawyers must have both legal and technical competence, specifically a "reasonable understanding" of AI. The ABA offers these guidelines:
- Lawyers should either have that reasonable understanding themselves or draw on the expertise of others who can provide guidance.
- The duty is not static. Lawyers should keep up with changes in the technology and remain vigilant about its benefits and risks.
In a notable prognostication, the opinion speculates that there may come a time when lawyers will have to use generative AI "to competently complete certain tasks for clients."
The opinion also addresses how a lawyer's use of generative AI can affect client confidentiality. It advises lawyers to thoroughly review the terms of use and privacy policies of AI tools and to consult with IT or cybersecurity experts if necessary. In some cases, lawyers should obtain a client's informed consent before using a generative AI tool.
The opinion further advises lawyers to inform clients about the use of generative AI tools and recommends that law firms establish clear policies, provide training on ethical use, and monitor compliance.
The opinion is the American Bar Association's first major pronouncement on the ethics of using generative AI in law practice. Its message: lawyers need not become experts in the technology, but they must have a reasonable understanding of the capabilities and limitations of the specific generative AI tools they use.