Framework provides starting point for gen AI governance

A coalition of accounting educators and tech leaders, drawing on input from more than 1,000 practitioners and academics, has released a generative AI governance framework meant to give a strong foundation to organizations unsure of how best to oversee use of the new technology.

The framework, developed by a group of academics from American and European universities along with accounting tech leaders and solution providers, is intended to be more approachable and easier to grasp than the more specialized and complex frameworks that some practitioners have found intimidating.

"We have heard from numerous groups, especially internal audit functions, that they are being tasked with overseeing gen AI and don't know where to start," said Brigham Young University professor David Wood, one of the framework's authors, as well as one of Accounting Today's Top 100 Most Influential People. "This framework gives them the starting place to really focus on the most important risks first as gen AI comes to their organizations."


The framework itself is meant to be flexible for all kinds of organizations, from nonprofits to small businesses to multibillion-dollar enterprises. Its use can be scaled up or down depending on the desired complexity. It begins by providing five domains that can be summarized for groups with high-level oversight: 

1. Strategic alignment and control environment;
2. Data and compliance management;
3. Operational and technology management;
4. Human, ethical and social considerations;
5. Transparency, accountability and continuous improvement.

Within each of these domains, the framework identifies several risks and control considerations pertaining to how generative AI can threaten organizational objectives and how organizations can craft governance approaches that mitigate those risks. This includes things like aligning generative AI initiatives with wider company goals, establishing processes for identifying and mitigating data risks, integrating generative AI into operational processes, conducting generative AI training, and ensuring traceable and transparent generative AI decision making. 

For those who want to go further, the framework provides control considerations for each of the domains. In the "strategic alignment and control environment" domain alone, for instance, it asks users to consider, among many other things:

1. Developing a strategic roadmap, with cross-functional buy-in, for gen AI integration that aligns with organizational goals;
2. Setting up metrics and key performance indicators to measure how effectively gen AI initiatives achieve strategic goals;
3. Implementing scenario planning for gen AI initiatives to anticipate and prepare for potential unexpected events;
4. Defining and communicating roles and responsibilities related to gen AI governance within the organization;
5. Establishing a committee or comparable body to oversee gen AI governance and policy implementation.

For those who want to go even further, there is a maturity model in a separate document for going over the fine details. The model is a tool to help organizations evaluate their current governance practices, identify areas for improvement, and strategically plan for future enhancements. By assessing their maturity levels across various control considerations, organizations can gain insights into their strengths and weaknesses. 

Take, for example, the "access control policies" consideration within the "data and compliance management" domain. There, the model outlines the qualities of each level, from "maturity nascent" (no formal data governance framework and ad hoc management of data risks) to "maturity emerging" (basic access control policies implemented, but perhaps not strictly enforced or comprehensive), "maturity established" (enhanced access control policies in place, more consistently enforced with improved data protection) and "maturity leading" (strict access control policies fully enforced, with role-based access to sensitive data and tools).

"Not all the control considerations will be relevant to everyone, but by providing a maturity model, a company can decide what fits them," said Wood. "If companies participate, we will also provide benchmarking data so companies can see how they compare to other organizations. This will help them further guide their response to the relevant risks that GenAI presents to their organization." 

The framework was developed with the assistance of over 1,000 accounting professionals and academics. They helped identify potential risks from gen AI, evaluate the risks, generate and evaluate controls, evaluate the entire framework, and more. 

So far, said Wood, the biggest users of the new framework are the Connor Group, which took part in drafting it, and Boomi, which sponsored its development. He said the coalition is currently working with several very large companies headquartered in Europe to do testing with the framework. He is also receiving inquiries from various groups in the United States, from churches to small accounting firms to very large organizations.

Wood stressed that the technology is still young, and that even with the help of a framework, it would be concerning if an organization thought it was already getting things right and needed no further improvement. "The framework is meant to help companies see where they are at and then start the journey," he said. "I emphasize it will be a journey, on working with and managing gen AI."
