While the Securities and Exchange Commission has yet to issue regulations specific to artificial intelligence, that doesn't mean companies are off the hook when it comes to disclosures, as the technology's use can easily be slotted into existing requirements.
Speaking today at a virtual conference hosted by Financial Executives International, Scott Lesmes, partner-in-charge of public company advisory and governance with law firm Morrison Foerster, noted there are many risks that come with AI, including false or misleading information, data breaches, cyberattacks, intellectual property risks and more. He said people need to take these risks seriously.
"These mistakes are in the real world and have had significant consequences," he said.
He pointed to real-world incidents such as an algorithm that produced racially biased results.
Incidents like these underscore the need for robust AI governance. He noted there has been a rise in companies forming cross-disciplinary AI governance committees encompassing finance, legal, product, cybersecurity, compliance and, in some cases, HR and marketing. Failing that, he has also seen companies add AI oversight to the duties of existing committees. While some companies have established dedicated AI departments, more commonly they have been assigning AI oversight duties to their chief information security officer or another relevant C-suite position.

He noted there has been a dramatic increase in board supervision of AI, saying that in the most recent 10-K season many clients added oversight of AI to the list of matters the board is responsible for; while it was a small percentage, he was certain it would increase over time. He has found that many boards either designate a single AI expert who handles such matters, or place the responsibility on an existing technology committee or (more commonly) the audit committee.
"There is certainly a tension; audit committees already have such a full plate, so adding another responsibility, especially with such a broad mandate, can be a little unsettling but that is where many companies are putting this, if they handle it on the board level. [The] audit committee does make some sense, because it is very focused on internal controls as well as compliance," he said.
Boards generally need to consider the legal and regulatory factors that may impact operations. Just as many have frameworks for management oversight, so too should there be AI frameworks for how the board fulfills these responsibilities. In executing these duties, boards need to understand the company's critical AI uses and risks, how they integrate with business processes, the nature of the AI systems involved, how the company mitigates risk, how oversight responsibility is divided between board and management, and any material AI incidents.
"The board does not need to know about every AI incident altogether; there needs to be a level of understanding of what's important enough to share and what's not. The board should understand the material incidents, how the company responded and the material impact," he said.
SEC disclosures
Ryan Adams, another Morrison Foerster partner in the same practice area, noted that even though regulators like the Securities and Exchange Commission have yet to issue specific rules or guidance around AI, they have stressed the importance of complying with existing obligations, which may or may not include disclosures regarding the company's use of AI and its impact, particularly where it concerns business operations. Companies already need to report material risks and changes in their filings, and as AI further embeds itself into the global economy, it will almost certainly be a factor.
Further, companies should not be making false claims or misleading potential investors in general, and this applies to AI as well. Adams noted that the government has been especially interested in "AI washing," that is, exaggerating or making false claims about a company's AI capabilities or use. He pointed to one example where the SEC brought charges against the CEO and founder of a startup who said the company had a proprietary AI system that could help clients find job candidates from diverse backgrounds, but this AI did not, in fact, exist. Adams pointed out that this didn't even involve a public company, just a private one that was trying to raise investment capital.
"So it makes clear that the SEC will scrutinize all AI-related claims made by any company, public or private, trying to get investors to raise capital," he said.
He added that AI washing can be thought of as similar to inflating financial results or just making up the numbers entirely. Just as an entity should not overstate the capabilities of its AI systems, the same applies to automation technology in general. Regulators want clear and candid disclosures about how a company uses AI and what material risks it presents. In this regard, Adams warned against generic or boilerplate disclosures regarding AI.
"Regardless of the type of company you are, you have to take this seriously. Anyone touting the benefits of AI with customers or the public needs to make sure what they say is truthful and accurate and can be substantiated, or risk potential legal consequences," he said.
It is important to keep materiality in mind. Neither investors nor regulators want to read a list of every conceivable AI-related risk a company faces when only one or two are relevant. Adams conceded this might require slightly different thinking, as accountants tend to lean on quantitative factors to assess materiality, but AI can carry qualitatively material factors as well.
Some of the risks he mentioned include:
- The risk that AI could inadvertently breach confidentiality agreements through sensitive information in the training data;
- The risk that AI could disrupt traditional business functions if used properly, or disrupt newer ones if used improperly;
- The risk of being unable to find the experts needed to properly monitor an AI system;
- The risk of third-party fees for things like data storage or increased energy use;
- The risk that AI could disrupt competitive dynamics in the market; and,
- Ethical risks like the aforementioned racist algorithm, as well as legal or regulatory risks.
"You could go on forever with these AI risks … Just because you use AI and a risk is potential does not necessarily mean disclosure is appropriate. You need to spend time thinking about whether AI-related risks are appropriate to disclose and, if they are, they should be narrowly tailored to describe the material risk," said Adams.
When assessing materiality, he said to go with the same standard accountants have been using for ages: Is there a substantial likelihood a reasonable investor would consider this information important to determine whether to buy, sell or hold a security? Where AI introduces a slight wrinkle is that, given the pace of change in the field, it is important for companies to review and reevaluate their risk factors every quarter.
But risks are not the only things one should disclose. Adams noted that companies should also consider AI impacts when drafting management discussion and analysis or the executive overview, pointing out major developments, initiatives or milestones related to the technology. AI could also come up in discussions of capital expenditures. If the entity made big AI investments that are material and known to the business, that needs to be disclosed. Another area AI plays into is cybersecurity disclosures, which already involve a number of SEC requirements. The two topics, he said, often go hand in hand, so if AI interacts with cybersecurity in any way, it might be worth disclosing.
Overall, Adams recommended that companies:
- Fully and accurately disclose their AI use;
- Avoid overly vague or generic language, given the wide variation in how AI is used;
- Avoid exaggerated claims about what their AI is capable of doing, taking particular care not to describe capabilities in hypothetical terms;
- Be specific about the nature and extent of how the entity is using AI and the role AI plays in business operations;
- Have a good understanding of vendors and other third parties who use AI, as their risks could ripple outward;
- Establish, or at least begin to establish, an AI governance framework;
- Train the staff in AI so they can understand what it can and cannot do;
- Actively monitor company AI usage;
- Regularly update stakeholders on changes, progress and improvements in company AI use; and,
- Have either the legal department or outside counsel review any public statements or marketing materials mentioning AI.
While the current administration has emphasized a less regulated approach to AI, Adams noted that the SEC remains in active dialogue with the business community about potential regulation, pointing to a recent meeting with investment advisors as well as a strategy roundtable with the financial services industry.
"The big takeaway here is that both the SEC and industry are saying, 'We want to have active and ongoing communications as this develops' … Any regulations we do see, if any, in the future [will be] informed by what is actually happening in the marketplace," he said.