Michael Abbott and Aaron Tantleff Discuss Fiduciary Diligence on AI
Foley & Lardner LLP partners Michael Abbott and Aaron Tantleff discuss the risks and opportunities of AI use in financial services in the planadviser article, “AI Is Here. Fiduciaries Must Remain Diligent.”
Abbott said that when operating under the Employee Retirement Income Security Act, it is important to have the same processes and evaluations in place as for other plan design and investment decisions. He noted that a plan fiduciary must be especially diligent in ensuring that an AI process is not introducing bias, both to protect the plan and its participants and to guard against potential lawsuits.
“We are still in an environment where going through the procedural prudence and process matters,” Abbott explained. “Just relying on an AI-generated output is probably not going to get you where you need to be in terms of satisfying ERISA requirements.”
Tantleff said it is important to know what data and information are being used by AI, allowing one to account for any bias or errors in the materials it is producing.
“Are we using training data, validation data? What am I putting in here, and what is the purpose of it?” he asked. “I, as a human, can create a selection bias in terms of what is being put into the AI…That is always a risk, so there must be controls to it.”
“If I’m on a committee and I’m a plan fiduciary, I need to be asking these professionals that I’m working with: ‘How is AI figuring into what you are telling me?’” Abbott added. “I need to know how you came to do what you did and what went into it.”
To read Abbott and Tantleff’s article on this topic from earlier this year, cited by planadviser, click here.