While AI is still in the Wild West phase that many new technologies go through, as the technology has spread there have been increasing calls from both organizations and individuals to make the field slightly less wild — not so much that regulation completely kills the innovation and vibrancy of this burgeoning field, but enough that serious players will feel safe entering the space without worrying they're putting themselves at risk.
In particular, our experts are interested in measures that can improve the transparency and accountability of AI systems, such as clear labeling of AI-generated content, the ability to trace the model's decision-making process, and disclosure of the data and algorithms involved. There was also strong support for ensuring these systems are explainable and, especially important for the accounting community, auditable.
"An AI regulation that emphasizes transparency in the training of large language models would be highly beneficial," said Mike Gerhard, chief data and AI officer with BDO USA. "Understanding how these models are trained, including the data sources and methodologies used, is crucial for ensuring accountability and trust in AI systems. This transparency would be particularly advantageous in fields like accounting, where leveraging AI to enhance audit quality requires a clear understanding of how AI decisions are made."
Respondents also expressed strong support for regulations aligned with principles-based or risk-based approaches, such as the EU AI Act, which focus on safety, fairness and non-discrimination while still providing space for innovation. This is especially important given the stakes involved in AI's ascendancy, particularly for traditionally marginalized communities.
"I believe we need to get ahead of the eight ball when it comes to the ethical issues stemming from AI's inherent bias problem," said Pascal Finette, founder and CEO of training and advisory firm Be Radical. "When we let AI perform tasks such as sifting through resumes, making creditworthiness decisions, or assessing job interviews, we ought to be sure it does so without (hidden) biases. Part of this problem is on the vendor side, but part of this ought to be codified (and thus protected) by law."
At the same time, virtually everyone cautioned against going too hard on regulation, especially at this early stage of the technology's evolution.
"As further governance emerges, I hope we don't see overly restrictive rules that stifle creativity and progress," said Avani Desai, CEO of Top 50 firm Schellman. "Rather, I'd love to see further regulations that strike the right balance between ensuring the ethical and secure use of AI while encouraging innovation. Public-private partnerships and feedback loops from organizations doing the assessments will be crucial in getting that right."
Will we see more focus on AI regulation in 2025? Well, the only thing we know for sure is that we don't know anything for sure. But we can make educated guesses. While no one outright said we'd definitely see new regulations rolled out, some predicted that scandals would likely draw attention to the need for further oversight of AI systems.
"AI's capability will continue to evolve," said Abigail Zhang-Parker, an accounting professor at the University of Texas at San Antonio. "The cost of using AI (e.g., OpenAI's API service) will continue to go down. There will be more AI applications. At the same time, we will also see more AI-related negative incidents, particularly those that raise important ethical concerns and debates."
Overall, when asked for their most confident predictions, many said the widespread integration of AI into workflows will accelerate, especially given the rising prevalence of autonomous AI agents with limited decision-making power. The rise of these virtual workers is widely predicted to increase productivity and efficiency at firms. At the same time, some experts warned that this might shift employment dynamics and increase the risk of ethical dilemmas.
"I am confident that AI will either reduce the number of new hires the largest accounting firms plan to hire or lead to further staff reductions, if not both," said Jack Castonguay, a Hofstra University accounting professor and the vice president of learning and development at Surgent. "The largest firms have planned for this stage of AI for years and they thought this day would come sooner. They know they can do more with less. I'm also quite confident we'll see a scandal where a firm misuses AI or subjugates its judgment to AI that leads to a fraud or material error getting through an audit. We've already seen this occur in the legal field. It's only a matter of time until it happens to an accounting firm."
In this, the second of three parts, we look at our experts' answers to:
- What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?
- What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?
You can read the