AI Thought Leaders: 2025 predictions and AI regulation

While AI is still in the Wild West phase that many new technologies go through, as the technology has spread there have been increasing calls from both organizations and individuals to make the field slightly less wild: not so much that regulation completely kills the innovation and vibrancy of this burgeoning field, but enough that serious players feel safe entering the space without worrying they're putting themselves at risk. 

In particular, our experts are interested in measures that can improve the transparency and accountability of AI systems, such as clear labeling of AI-generated content, the ability to trace the model's decision-making process, and disclosure of the data and algorithms involved. There was also strong support for ensuring these systems are explainable and, especially important for the accounting community, auditable. 

"An AI regulation that emphasizes transparency in the training of large language models would be highly beneficial," said Mike Gerhard, chief data and AI officer with BDO USA. "Understanding how these models are trained, including the data sources and methodologies used, is crucial for ensuring accountability and trust in AI systems. This transparency would be particularly advantageous in fields like accounting, where leveraging AI to enhance audit quality requires a clear understanding of how AI decisions are made."  

Respondents also expressed strong support for regulations aligned with principles-based or risk-based approaches, such as the EU AI Act, which focus on safety, fairness and non-discrimination while still providing space for innovation. This is especially important given the stakes involved in AI's ascendancy, particularly for traditionally marginalized communities. 

"I believe we need to get ahead of the eight ball when it comes to the ethical issues stemming from AI's inherent bias problem,"" said Pascal Finette, founder and CEO of training and advisory firm Be Radical. "When we let AI perform tasks such as sifting through resumes, making creditworthiness decisions, or assessing job interviews, we ought to be sure it does so without (hidden) biases. Part of this problem is on the vendor side, but part of this ought to be codified (and thus protected) by law."

At the same time, virtually everyone cautioned against going too hard on regulation, especially at this early stage of the technology's evolution.

"As further governance emerges, I hope we don't see overly restrictive rules that stifle creativity and progress," said Avani Desai, CEO of Top 50 firm Schellman. "Rather, I'd love to see further regulations that strike the right balance between ensuring the ethical and secure use of AI while encouraging innovation. Public-private partnerships and feedback loops from organizations doing the assessments will be crucial in getting that right."

Will we see more focus on AI regulation in 2025? Well, the only thing we know for sure is we don't know anything for sure. But we can make educated guesses. While no one outright said we'd definitely see new regulations rolled out, some predicted scandals would likely draw attention to the need for further oversight of AI systems. 

"AI's capability will continue to evolve," said Abigail Zhang-Parker, an accounting professor at the University of Texas at San Antonio. "The cost of using AI (e.g., Open AI's API service) will continue to go down. There will be more AI applications. At the same time, we will also see more AI-related negative incidents, particularly those that raise important ethical concerns and debates."

Overall, when asked for their most confident predictions, many said the widespread integration of AI into workflows will accelerate, especially given the rising prevalence of autonomous AI agents with limited decision-making power. The rise of these virtual workers is widely predicted to increase productivity and efficiency at firms. At the same time, some experts warned that this might shift employment dynamics, as well as increase the risk of ethical dilemmas. 

"I am confident that AI will either reduce the number of new hires the largest accounting firms plan to hire or lead to further staff reductions, if not both," said Jack Castonguay, a Hofstra University accounting professor and the vice president of learning and development at Surgent. "The largest firms have planned for this stage of AI for years and they thought this day would come sooner. They know they can do more with less. I'm also quite confident we'll see a scandal where a firm misuses AI or subjugates its judgment to AI that leads to a fraud or material error getting through an audit.  We've already seen this occur in the legal field. It's only a matter of time until it happens to an accounting firm."

In this, the second of three parts, we look at our experts' answers to: 

  • What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?
  • What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

You can read the first part here. We'll have our third and final part—where we get into one of the more esoteric aspects of AI—next week.

Davis Bell

CEO, Canopy
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I don't really think AI can be regulated, honestly. There are too many businesses and people working on it in too many places, and it's too strategically and economically important. It's also moving too fast. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I am confident that the software we and others are going to ship in 2025 — powered by AI — is going to blow people's minds and save them incredible amounts of time and money. 

Jim Bourke

Managing director, Withum Advisory Services
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I will say that I generally don't like too much regulation in our space, but one area that I would love to see regulated would be around transparency. Right now we all think AI is amazing with how quickly it brings back results of our inquiries. But where most searches fall short is full transparency around the source of that analysis. So a better, more defined way to communicate full transparency would be helpful. As to legislation that I would not want to see, it would have to be any legislation that restricts our use of AI and the tools that deliver it to us. The free and open access to AI is what we currently have and I would hate to see industries or countries limit the access to such information. We have advanced so much with free and open access, let's not stifle that growth!

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I know that this may seem obvious, but there is no doubt in my mind that we are going to see an explosion by the existing software vendors in our space finding ways to build and deliver AI features into their existing platforms. We are also going to see a significant number of new players in the tax and audit space offering solutions to support these verticals that are being driven by AI. I believe we will see significant productivity gains as a result of this, and the challenge for firms will be filling the staff utilization void that will be created as a result of tapping into this automation.

Samantha Bowling

Managing partner, GWCPA
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I'd love to see a regulation requiring transparency about AI-driven decision-making processes. On the other hand, overly restrictive regulations that stifle innovation without understanding AI's nuances could hinder progress.

What AI prediction for 2025 are you most certain of?

I'm confident that AI-powered tools will see widespread adoption, transforming every aspect of accounting and business operations for both our firm and the clients we serve. To maximize this potential, we must lead the charge in embracing these advancements, ensuring they elevate the quality of our services and the value we provide.

Ted Callahan

Director of partnerships and strategy, QuickBooks Partners Segment, Intuit
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

At its core, technology is a tool that can improve our daily lives, and AI is no exception. However, the potential of AI raises concerns about the hypothetical abdication of responsibility around how and when it is used. As an industry, we need to stay engaged in how the technology is used and applied — and it's critical to create standards that ensure accountability for its use and application.  

At Intuit, we're focused on supporting the responsible use and development of AI. In fact, throughout the last decade of our innovation journey, we've been guided by our AI principles, which impact how we operate and scale our AI-driven expert platform with customers' best interests in mind. From accountability and transparency to fairness, privacy and security, these principles help us provide capabilities to customers without compromising their data. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

AI is playing an important role in the talent pipeline. Nearly every firm leader I've spoken with is facing hiring challenges today, especially at the entry level — as we have seen fewer college graduates earning CPA licensure. This has been a consistent trend over the last few years as our 2023 survey found that 90% of accountants were struggling to hire the talent they needed to grow their firms. Amid widespread hiring challenges, more than 9 in 10 agree AI could help them solve skills shortages, by both retaining great talent and automating tasks. As the technology continues to evolve and perspectives on this technology shift, AI will continue to close the talent gap by providing accounting professionals with the tools they need to streamline their day-to-day jobs and, in turn, attract and retain valuable talent.

Daren Campbell

Tax technology and transformation leader, EY Americas
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

One thing I'd love to see is AI regulations around transparency and explainability, making sure that as users of AI we have visibility and an understanding of how the AI models are making decisions. This is a tough area to address, but as we put trust and reliance on AI, having transparency will help with adoption and responsible use.

On the flip side, I would hate to see an overabundance in regulation that hinders our ability to use AI in creative activities. AI is a tool that can be used by individuals in creative spaces, and though there will need to be some level of regulation to make sure individual intellectual property is maintained and people get credit for what they develop, I would love to see AI continue to be used creatively, both in business and otherwise.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

In the next year, there will be a convergence in the way that small language models and large language models will work together to be able to deal with some of the more specialty areas or technical areas. This will be a result of a focus on fine-tuning and approaches that are able to get into that level of minute detail.

Additionally, we'll also continue to see AI embedded in our everyday life and experience in more ways.

Jack Castonguay

Vice president of strategic content development, Surgent
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I would want to see two big regulations tomorrow: requiring every AI model to have a kill switch, and that new or updated models must get approval to deploy after the owners test them in the same way we make airplane manufacturers get approval for new/updated models before passengers can fly on them. Those two pieces aren't nearly enough, and I want so much more, but they would represent a giant step forward on preventing the worst possible outcomes of AI. I know this may be a copout, but right now the legal landscape is the wild west. There are no rules. So, the only regulation I would hate to see is no regulation. We can't keep letting AI get more powerful and consume more data without some guardrails.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I am confident that AI will either reduce the number of new hires the largest accounting firms plan to hire or lead to further staff reductions, if not both. The largest firms have planned for this stage of AI for years and they thought this day would come sooner. They know they can do more with less. I'm also quite confident we'll see a scandal where a firm misuses AI or subjugates its judgment to AI that leads to a fraud or material error getting through an audit.  We've already seen this occur in the legal field. It's only a matter of time until it happens to an accounting firm.

Danielle Supkis Cheek

Head of analytics and AI, Caseware
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?  

As someone in a highly regulated industry, I want regulations that are principles-based and protect the public interest. Regulations that are overly prescriptive on the steps required to execute a task risk hampering innovation. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?  

 The only thing I'm truly confident about is that we'll be blindsided by something incredibly cool. The only constant is change, which is why concepts such as agile project management will remain incredibly important. I can't even begin to predict what technology will look like, but I know those with an agile mindset will adapt fastest. 

Ellen Choi

Founder and CEO, Edgefield Group
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

An AI regulation I'd love to see is one that governs the creation and dissemination of AI-generated content, especially deepfakes and "fake news." The progress in multimodal AI has made it easier than ever to create hyper-realistic content, which can be misused for manipulation under the guise of satire or other purposes. Clear guidelines and accountability for creators and platforms would help curb malicious use while preserving legitimate applications.

At the same time, I'd hate to see overly restrictive regulations that stifle innovation. The reality is that we're in a global arms race for AI leadership, and onerous rules could cause us to fall behind. The key is finding a balance that promotes responsible use while allowing firms and innovators to move forward at the pace necessary to remain competitive.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

By the end of 2025, I predict we'll see the public launch of "accountant intern" AI agents that deliver tangible value with near-human accuracy (emphasis on intern). These agents will handle foundational tasks like data entry, reconciliation and initial tax preparation—areas where AI can already outperform humans in speed and consistency.

What's exciting is that these agents won't just automate; they'll assist in judgment-based workflows, flagging anomalies or making suggestions based on learned patterns. Vendors are already training AI agents for discrete accounting tasks, and the leap to a more integrated "intern" model feels inevitable.

While human oversight will remain essential, these agents will become trusted collaborators, freeing up accountants to focus on advisory work and higher-level decision-making. It's not just a vision—it's a near-term reality we're about to see come to life.

Sergio de la Fe

Enterprise digital leader, partner, RSM US LLP
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

An AI regulation I would welcome in the accounting industry would highlight that AI is a powerful tool to assist accounting professionals in performing services. On the other hand, an AI regulation that conflicts with existing regulatory requirements and other obligations could create confusion, stifle innovation and slow the adoption of AI to meet the specific needs of the accounting profession and its clients.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

What I think we'll see in 2025 is significant investment in upskilling and adoption. Over the next 12 months we'll see audit and tax professionals start to build AI into their individual tasks and begin taking advantage of the current state of the tools to spend their time on tasks that require higher levels of critical thinking. Then later in 2025, as AI continues to evolve, some functions will become close to fully automated, with the auditor reviewing the findings and outcomes, allowing them to focus more on strategic, value-added activities. But to get to that augmented-worker future state we need to enable our workforce with tools and training to take advantage of this technology.

Avani Desai

CEO, Schellman
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I always say regulations tend to lag behind technology, but this time, it feels different. Take ISO 42001, for example—it's an international framework that's already out there and gaining traction quickly, so AI appears to also be changing the game in how we're starting to think ahead. As further governance emerges, I hope we don't see overly restrictive rules that stifle creativity and progress. Rather, I'd love to see further regulations that strike the right balance between ensuring the ethical and secure use of AI while encouraging innovation. Public-private partnerships and feedback loops from organizations doing the assessments will be crucial in getting that right.
 
What AI prediction for 2025 are you most certain of?

I think we'll see a huge shift toward more agent-focused AI systems—tools that can act on behalf of users to automate tasks and streamline workflows. It's going to redefine how we interact with technology and get things done.

Pascal Finette

Founder and CEO, Be Radical
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I believe we need to get ahead of the eight ball when it comes to the ethical issues stemming from AI's inherent bias problem. When we let AI perform tasks such as sifting through resumes, making creditworthiness decisions, or assessing job interviews, we ought to be sure it does so without (hidden) biases. Part of this problem is on the vendor side, but part of this ought to be codified (and thus protected) by law.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

The one thing I am truly certain of when it comes to AI is that today's AI is the worst it will ever be. Models will keep getting bigger and more specialized; we will refine our approaches and algorithms, and we will experiment with new modalities and user experiences. But where we will truly end up by the end of 2025 is anyone's guess. It just won't get any worse than what it is today, that's for sure.

Prashant Ganti

Vice president of global product strategy, development and alliances - Enterprise Finance Suite, Zoho
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see? 

I'd love to see regulations that will offer more transparency into an organization's AI usage, requiring it to share the type of data being used, the algorithms' decision-making processes, and how AI-driven decisions are monitored for ethical implications. An AI regulation that is in line with what I'd love to see is the EU's AI Act, which requires that AI systems are safe, transparent and non-discriminatory, with different rules outlined according to the risk they pose. This balances both safety and innovation. 

I would hate to see AI regulations that might be overly restrictive, preventing developers from testing and developing innovative solutions in areas where it could prove to be a valuable addition. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year? 

I'm confident that AI will be more enmeshed in businesses' day-to-day operations, with AI tools more deeply embedded in the solutions companies use across industries, enabling them to enhance operational efficiency and respond to market changes more quickly. 

Furthermore, from an accounting professional's perspective, I believe this represents a "VisiCalc moment." VisiCalc, often considered the first successful spreadsheet, was quickly adopted by accountants, reducing tasks that once took a week down to just a few hours. Today, accountants face what I call "1.5 problems"—issues that are too complex for one person to tackle easily but not complex enough to warrant a large team or significant resources due to poor ROI. With the new capabilities to automate processes and write code, accountants can now solve these problems independently, giving them greater control and autonomy over their work.

Mike Gerhard

Chief data and AI officer, BDO USA
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see? 

An AI regulation that emphasizes transparency in the training of large language models would be highly beneficial. Understanding how these models are trained, including the data sources and methodologies used, is crucial for ensuring accountability and trust in AI systems. This transparency would be particularly advantageous in fields like accounting, where leveraging AI to enhance audit quality requires a clear understanding of how AI decisions are made. Conversely, regulations that overly hinder progress and creativity in AI development could be detrimental. It's a fine line to walk, and striking the right balance is key to fostering an environment where AI can thrive and drive meaningful advancements.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year? 

I'm confident that in the accounting industry, gaining a competitive edge will increasingly hinge on how effectively firms harness AI technology. Obvious key areas will be leveraging AI for efficient document creation, streamlining processes and boosting productivity. Additionally, the ability to utilize AI in safe and secure ways will be paramount. Firms that prioritize data security and ethical AI practices will not only safeguard sensitive information but also build trust with their clients. As AI continues to evolve in 2025, those who adeptly integrate these technologies into their operations are likely to lead the market. They will set new standards for innovation and client service, distinguishing themselves by their ability to use AI not just as a tool, but as a strategic advantage. This shift will redefine how firms operate, emphasizing the importance of both technological prowess and ethical responsibility.

Chris Griffin

Managing partner of transformation & technology, Deloitte & Touche LLP
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

The dynamic landscape of AI technology necessitates a thoughtful approach to regulation, one that balances innovation with public safety and ethical considerations. A collaborative framework comprising industry leaders, policymakers and academic experts can ensure regulations support responsible AI development while fostering innovation.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

AI will continue to transform how work gets done, automating routine tasks and enabling humans to focus on more complex activities. Reskilling and upskilling programs will remain essential to equip professionals with the appropriate skills to leverage this technology.

Aaron Harris

Chief technology officer, Sage
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

We need regulation for audit and assurance of AI models used not only in accounting/finance workflows, but more broadly in processes critical to keeping our economy efficient and fair. With trust as one of the biggest hurdles to AI adoption, this regulation will serve as a signal to users that AI is safe to use. However, we need to be careful not to create regulation that slows innovation or, equally, tips the scales for the "hyperscalers" who are already poised to be the biggest AI winners.
 
What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

"Personal Agents" that know us and have access to our digital lives will become the new super app. These agents will empower everyone with super intelligent personal assistants capable of managing many aspects of our lives.   

Wesley Hartman

Founder and CEO, Automata Practice Development
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

A regulation I would love to see is transparency on how much data AI has ingested and better ways to be able to look at any data that relates to you. We know we are being tracked everywhere so ads can be fed to us, but we don't have control over that profile that has been created. I would want that transparency in both AI and social media. There isn't a specific regulation I would hate to see, but I don't want to see the AI companies making those regulations. If the AI companies make the regulations, they could put in language that could stifle competition and make it so only the big players can be in the space. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

Scan and fill technology for tax. I think there will be a number of new players in that space.

Joel Hughes

CEO, Rightworks
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

Given the vast amounts of sensitive client information to which the accounting profession has access, I'd like to see AI-specific modifications to data privacy and security standards. However, I would be cautious of overly restrictive regulations since they could stifle innovation and hinder beneficial breakthroughs. We believe the key is striking a balance between the two. For example, an ideal regulatory framework would promote transparency and data protection while fostering an environment conducive to AI innovation. This approach would safeguard sensitive information and maintain accountability, without limiting the potential for significant advancements in the field. 

Additionally, AI regulations should protect the public good and give those that are negatively impacted a chance for resolution. There should be sanctions and penalties for people who knowingly create or spread misinformation, such as creating and using deepfakes to misrepresent someone. 

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

2025 will be the year in which accounting application vendors create useful AI "components" to do specific tasks with their applications. Whether it's customizing tax research to ask specific considerations for a tax scenario, analyzing financial reports to identify anomalies, or automatically crafting custom engagement letters, any accounting vendor that wants to stay in business will need to show they have incorporated technological advancement in some way.  

Broadly speaking, we believe we'll also see the first steps of more sophisticated agentic AI, surpassing current robotic process automation capabilities. These AI agents will leverage LLMs and asynchronous inference, which will allow them to process, reason and iterate more thoroughly before acting on behalf of users. Initially, this will show up as a preparation process, queuing up tasks for human review and confirmation, striking a balance between AI efficiency and human oversight.

Kacee Johnson

VP of strategy and innovation, CPA.com (an AICPA company)
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

An AI regulation I'd love to see: A requirement for transparency in AI systems, ensuring that any AI-generated content or decisions are clearly labeled and traceable to their source. And I believe frameworks for Responsible AI need to be widely available & adopted. 

An AI regulation I'd hate to see: Overly restrictive laws that stifle innovation by making it too difficult for organizations to experiment or deploy AI solutions. Balancing regulation with innovation is key.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I think we'll see bookkeeping algorithms achieving accuracy rates of 90% or higher by the end of 2025. Advances in AI and machine learning are rapidly refining data processing and classification, reducing errors and enabling more reliable, automated bookkeeping solutions.

Jenn Kosar

US AI assurance leader, PwC
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

Regulation that creates an operating framework for independent third-party attestation in certain circumstances, whether based on risk criteria, use case type, or nature of stakeholder impact, would be a step forward for building trust in AI. As AI becomes integral to all facets of business, the key to adoption and success will be trust. Third-party auditors possess specialized expertise and can offer impartial assessments of AI systems, backed by professional standards, licensing and quality measures, enhancing their credibility. This is critical for providing assurance to stakeholders that complex systems can repeatedly produce intended outcomes while mitigating potential risks. Regulations that overstep by specifically mandating how organizations must address risks, without regard for the context of use, the risks involved, or the complex technology ecosystems in which multiple parties are responsible for risk mitigation and control, are not only hard to implement and enforce, but could unnecessarily stifle important innovations.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

AI will become intrinsic across every part of company operations. This is a big change from how executives have historically thought about AI—as solely a technology that's an enabler and one that's primarily the domain of IT organizations. GenAI is different from prior technologies (including traditional forms of AI) because its rapidly advancing capabilities can be leveraged more easily by nearly anyone. 

In a business context, that means any employee with the right tools and understanding of the risks and appropriate mitigations can challenge the status quo—rethinking how existing work is done, what previously unsolvable problems can be addressed with the help of AI, and what new products and services are possible. Given the incredible potential for its use in the front, middle and back office, business leaders, including the CEO, will be looking for broader, more strategic ways to deploy AI.

Thomas Mackenzie

KPMG US and global chief technology officer, KPMG
What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?  

AI and GenAI will become one of the most important technologies for finance leaders to prioritize in the next year, and in the future, this trend will fundamentally shape the competitive positioning of finance functions in business strategy.   

Our report found that while cloud technology is the No. 1 technology companies are prioritizing to enhance financial reporting, only 67% of finance leaders placed it at No. 1, down from 77% in May. This may be due to 31% of companies reporting that they have completely implemented cloud migration into their financial reporting processes and are seeing the results of it. 

Conversely, the prioritization of AI is steadily accelerating—61% of finance leaders report non-generative AI as the technology they are prioritizing to enhance financial reporting (up from 46% in May). This is reflected in 81% of companies reporting that they've allotted up to 15% of their IT budget on AI-related activities.  

(declined to answer question on regulation)

Blake Oliver

CEO, Earmark
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I'd hate to see us over-regulate AI preemptively like they're doing in the European Union. However, I am concerned about the potential for rogue AIs that operate autonomously and maliciously. That is something we really ought to regulate. Creating an AI that can operate autonomously without an off switch should be illegal. We probably need more than just an off switch—because what if it doesn't work when we need it?

What AI prediction for 2025 are you most certain of?

2025 will be year three of GPT generative AI technology. For me, this technology became useful in November 2022 with the release of GPT-3.5. Let's look at the history of other tipping points in technology, such as the smartphone, the internet and even the spreadsheet. You'll see that it takes about five years for these new technologies to be integrated into the tools and apps we use in our profession. So next year, we should start seeing some excellent productivity gains from AI inside accounting and finance software. That's going to scare and inspire people because those who haven't yet been exposed to AI will finally get a taste of it.

Adam Orentlicher

Chief technology officer, Wolters Kluwer
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

Ideal Regulations: 

Frameworks that allow for balancing innovation with responsibility. This includes transparency in AI decision-making for sensitive applications, standards for model governance, and practical guidelines for AI use. 

Non-ideal Regulations: 

Overly prescriptive technical requirements that could freeze innovation. Regulations requiring complete algorithmic transparency or treating all AI applications with the same risk level could hamper progress. 

The focus should be on outcomes and controls rather than specific technical approaches.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year? 

By 2025, AI will be deeply embedded in accounting workflows in three key areas:

Optimizing firm operations: AI will provide real-time insights into firm efficiency, routing work based on expertise and capacity while identifying opportunities for service expansion.

Elevating professionals: AI serving as a 'second set of eyes' to streamline time-intensive tasks—from automating data entry to summarizing complex research—increasing capacity for the professional to focus on higher value tasks and advisory.

Enhancing client service: Software will anticipate client needs through pattern recognition across various data sets, enabling proactive advisory services.

Abigail Zhang-Parker

Accounting professor, University of Texas at San Antonio
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I would like to see more AI regulations like the EU AI Act, which takes a risk-based approach to regulating AI applications. I don't want to see regulations that ban or limit the overall development of AI.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

AI's capability will continue to evolve. The cost of using AI (e.g., OpenAI's API service) will continue to go down. There will be more AI applications. At the same time, we will also see more AI-related negative incidents, particularly those that raise important ethical concerns and debates.

Hitendra Patil

Founder and CEO, Accountaneur
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

The regulation I would love to see would be one that makes AI systems explain how they created the numbers or insights, or, if the systems are autonomous, requires them to provide decision logic information in detail. Imagine a spelled-out report regarding AI's decisions, ensuring full accountability of these "digital minds"—something like a recipe for an accounting/tax/audit dish—that gives an exact idea of what the AI system put into it.

Regulation I wouldn't want to see is any rule requiring humans to sign off on every little thing AI systems create. That would be like asking the chefs to handpick every grain of salt. It would slow down the process, inflate overheads, and, quite frankly, kill the whole vibe of the AI revolution. We must ensure AI safety, but we shouldn't place innovation in a stranglehold.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

In general, I'm sure AI will run the show when it comes to routine, well-defined, "rules-driven" transactional work—like classifying transactions, creating the next steps in the workflow, etc. Ledgers updating on their own, and transactions pushing through like a well-oiled machine while compliance checks are performed in real-time with precision is not a pipe dream. At the same time, AI assistants (autonomous AI agents) will be churning numbers to forecast financial trends with impressive accuracy, which will power advisory practices. Real-time financial insights will enable quicker and better decisions. 

AI will also make auditors look like saviors—quickly catching the anomalies to prevent non-compliance. 

Enzo Santilli

Principal, Grant Thornton Advisors LLC
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

A good regulation should protect personal privacy, especially of non-public figures — akin to the EU's "Right to be Forgotten." It should also safeguard copyright owners as models are trained. On the other hand, a bad regulation creates unnecessary bureaucracy and arcane rules — such as a recent proposal for a new federal agency to license and certify AI systems. If the private sector finds that valuable, let nonprofits or other private companies handle the task.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year? 

In business, I believe the use of AI for marketing and proposal creation will achieve adoption in at least 75% of companies by the end of 2025. AI excels at text, image and video creation, and the risks associated with using AI in creating sales materials are far lower than in other applications. This is happening already, of course, but the use cases around this will receive high adoption in 2025 — particularly as firms see their competitors doing it and fear being left behind.

Doug Schrock

Managing principal of artificial intelligence, Crowe LLP
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

Not a direct answer — but I believe it will be extremely difficult for regulation to keep pace with the speed of AI technical change and to be applied effectively without significant unintended consequences. The handful of global AI technology leaders can't slow or stop AI progress due to the prisoner's dilemma seen in game theory. With AI progress outpacing safety controls, I'm more concerned about societal and national security agreements than I am about business regulation.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

Here at the end of 2024, business leaders are beginning to implement meaningful pilots, but by and large still feel they are early adopters. By the end of 2025, the large majority of leaders will (rightfully) feel they are behind the leaders, and the risk they were concerned about will start to be seen in examples of lost revenue or margin.

Eitan Sharon

SVP of data and science, Xero
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

The kind of regulation that I'd like to see is the kind that builds public confidence, where there are appropriate safeguards in the technology and where there is regulation in the areas that have higher risks such as data privacy and protection. 

Equally, the kind of regulation that I would want to avoid is regulation that inhibits experimentation with the technology, where we withhold benefits from the public and businesses and where we wouldn't be able to find new use cases. That's because the most impactful use cases of AI probably haven't even been found yet.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

2025 is the year that AI becomes normal. What I mean by that is that AI will start to move from being experimental to deeply embedded into processes and businesses. It will reach a point where it becomes invisible, in the same way that navigating a city using a map book has disappeared and GPS navigation is an invisible part of our everyday lives. 

I think we'll also assess AI products on value, not technologies. I'm also certain that companies will find ways to use AI to unleash human potential. At Xero, our responsible data use commitments encapsulate our desire to not just innovate, but to do so responsibly. Augmenting human ingenuity is one of the unchanging things we see with new technologies that makes me optimistic about the future.

Donny Shimamoto

Founder and managing director, IntrapriseTechKnowlogies
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I really want to see regulations that protect children from the use of social media (and its associated AI algorithms) at a device level. The damage that has already been done is astounding. Companies are so focused on the potential profit that they aren't looking enough at the wellbeing of the kids, and parents need more tools to be able to protect their children.

Not sure there is any regulation that I'd hate to see. Currently so much related to AI is the Wild West and unregulated, and the companies producing it are so worried about being first to market that I feel more regulation is needed to force better governance.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

That AI helpers in virtual meetings will be the norm. They are so helpful for generating transcripts, creating summaries and identifying action items that I can't see how anyone wouldn't want to use them.

Sean Stein Smith

Accounting professor, Lehman College
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I would love to see AI regulation enacted that allows a sandbox or safe harbor approach for innovative approaches and strategies at both the federal and state level. AI and specifically GenAI are technologies that are already having a dramatic impact on the business landscape, as well as society at large, and fostering continued innovation is essential for U.S. leadership in this space. While guardrails are certainly necessary for the safe and secure development of AI initiatives, innovation and forward-thinking applications need to also be part of this process.

A nightmare scenario for AI would involve the politicization of the field, whether led directly by policymakers or indirectly through corporate contributions and revolving doors between government agencies and firms in the space. AI is simply too important to let political moods dictate policy conversation; all such discussions must be held with an objective framework and outlook toward future development.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

My prediction for AI in 2025 is that we will either see the first fully AI-generated TV show or movie do well with audiences, or that we will collectively see/watch the first entirely AI news program or anchor personality make its debut. This important development will also serve as a reality check and reminder of how quickly AI is developing, and how important the industry is.

Vsu Subramanian

SVP of content engineering and head of AI, Avalara
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

The AI space is in a rapid innovation phase. There are no clear winners and there are many opportunities and potential risks. There is still a lot of exploration and research in play. So, I feel it would be premature to regulate AI without letting further innovation play out, and having a better understanding of it. Within the industry, many companies (including Avalara) have developed responsible use of AI standards, which I think is the right step at this stage. There are many research institutions looking at risks of misuse and governance, and more are needed. I think we still have much to learn before trying to legislate regulation. And while I think some good questions are being raised, we certainly don't yet know all the answers. And many more questions about AI use are on the horizon.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I believe chat interfaces that are multimodal will be more the norm for getting answers and consuming knowledge.

We could see the development of domain-specific models that are fine-tuned for areas such as medical research, legal, compliance, finance, etc.

Eyal Shinar

Co-founder and CEO, Black Ore
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

For regulation, I'd like to see robust frameworks focused on protecting client data and privacy — especially as AI systems become more integrated with sensitive financial information. The key is crafting regulations that address fundamental issues like data security and accountability, rather than getting caught up in technical specifications.

The EU AI Act's initial attempt to regulate models based on parameter count became obsolete almost immediately, showing why we shouldn't constrain specific technical approaches. Instead, we need a regulatory framework that protects clients while allowing innovation and competition to flourish — one that focuses on where technology is headed, not where it is today.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

My highest confidence prediction for 2025 is about economics, not technology: AI will become dramatically more accessible as the foundation model space commoditizes. The era of dramatic capability leaps — like what we saw from GPT-3 to GPT-4 — will moderate, leading to intense price competition between open and closed source providers.

This will force well-funded AI companies to move beyond selling raw model access into specialized applications. In accounting, I expect we'll see AI tools that can handle entire accounting workflows, from transaction coding to audit sampling to tax planning, at a fraction of today's cost.

The result won't just be cheaper AI — it will fundamentally change who can access enterprise-grade AI capabilities. Small firms and individual practitioners will be able to leverage tools previously available only to the largest firms.

Prasad Sristi

Chief AI officer, Ascend
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see? 

I'm not a big fan of regulation, but ensuring that customer data is protected, and that ownership is maintained with the customer, is important. This will empower a high-trust innovation environment. Apart from that, I would not like to see any regulation that stifles innovation.  

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?  

I think 2025 will be the year where AI will positively impact the work of junior accountants. A chasm will start to appear between firms that invest in AI and those that don't. At AI-focused firms, we will see AI agents taking away some of the undifferentiated heavy lifting from their plates. They can start focusing on the big picture and their experience curves will start accelerating. In fact, with the right AI innovation, this generation of junior accountants could get more experience out of their first five years in the profession than any other generation. This will have a large positive impact on retention and contribute to the revitalization of the entire profession. Looking back from the future, we will probably say that 2025 is the year when it all began. 

Ben Wen

CEO and co-founder, Tallyfor
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

Regulating such a young and fast-moving industry is not easy. Some areas that could use some help are liability and disclosure rules. Building trust into our economic and information systems that will be using more AI would reduce risk and help more people feel comfortable with the technology. And while not strictly an AI regulatory issue, the consolidation of AI interests in the big tech firms is the opposite of what recent innovation typically looks like.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

Avoiding the most definite predictions like a new GPT model from OpenAI or new chips from Nvidia, a slightly more provocative call would be that 2025 is the year that AI agents begin to regularly talk to other AI agents. What that means is that as an AI agent seeks to accomplish a task in the network, it inevitably will invoke the services of another AI agent. These chains of agents will create durable pathways of economic value, resulting in mergers and acquisitions. Recognizing those value chains will take longer than 2025.

David Wood

Accounting professor, Brigham Young University
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I would like to see AI regulation that encourages exploring and using AI in accounting spaces. I talk to many practitioners who won't try new things with AI because of how the SEC, PCAOB or other regulators might respond. I would like to see regulation that specifically encourages exploration and testing of how AI can improve practice. I would hate to see outright bans on the technology. I believe in significant consequences for misuses of the technology, but in most situations, I would prefer not to have bans on the development of the technology.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I'll avoid a "baby prediction" and make a more bold prediction. I believe we will see a software program come out that puts together AI in such a way as to have significant transformation power for the industry. I'm thinking of something that improves efficiency/profitability by 10%+ or entirely removes steps of the auditing or tax process. I'm hoping that the improvement is 25%+ but that might take two years to achieve!

Joe Woodard

Founder and CEO, Woodard
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I would love to see more regulation around the use of AI within social media, especially with deepfakes that victimize individuals by using their images in falsified photos and videos. We are seeing this play out right now with politicians and celebrities, but as AI-powered deepfake technology becomes more ubiquitous, this type of victimization will become commonplace throughout our society. 

As for over-regulation, I am concerned about any limitations placed on AI that cause the U.S. to be more vulnerable on the global stage, especially with cyber-defense and military defense.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

I believe AI will continue to disrupt the job market, especially the professional services segment. Within that segment, professionals who perform service work that is cyclical and predictable are more likely to be impacted. Professions in areas like research, analytics, creative design, journalism and creative writing will also experience significant disruption over the coming year.

Carmel Wynkoop

Partner-in-charge, AI, analytics & automation, Armanino
What is an AI regulation you'd love to see? What is an AI regulation you'd hate to see?

I'd love to see transparency requirements for AI systems. I'd hate overly restrictive laws that stifle innovation without clear benefits.

What AI prediction for 2025 are you most certain of? Something you are very confident we'll all see next year?

AI will help standardize audit processes, increasing accuracy and reducing costs.