Despite artificial intelligence's growing relevance to the business world, auditors still lack an official framework or set of standards for the technology.
However, these professionals are not sitting on their hands waiting for one to emerge — they are developing new practices and procedures for AI as well as finding ways existing ones can be applied to this technology. While there are few such engagements today, auditors aim to be ready for the day they become widespread.
"It will happen where AI will be part and parcel of any client's financial reporting system, no matter the industry," said Michael Hayes, financial services practice leader at top 25 firm PKF O'Connor Davies. "Whether in six months, 12 months or 18 months, it's coming. So, we think of it like this: We are very focused on AI."
Part of what enables professionals to move forward, despite lacking authoritative guidance, is that several aspects of AI usage already fit within existing standards, especially those concerning technology controls that have previously been applied to other areas.
"Let's take into account AI platforms or machine learning platforms or RPA platforms or any other," said Avani Desai, CEO of the Top 100 firm Schellman and an audit technology expert. "There are still general technology controls an auditor needs to look at and I think that is the baseline. So what peripheral tools are connected to the AI platform? Who has access to it? What is the algorithm being used? Who has reviewed the algorithm for completeness and accuracy? All the things we do from a tech platform perspective we should do with AI."
Paul Goodhew, EY's global assurance innovation and emerging technology leader, noted there are similarities to IT audit when a client is making heavy use of AI. Auditors still need to look into whether the organization has the right governance and oversight of its technology, whether the right controls and processes are in place, and whether it has equipped itself with the right skills. To this he would add examining what the AI itself is doing, whether that matches what it was intended to do in the first place, and how this is reflected in its outputs.
"Certainly from an IT audit perspective, today the auditor has certain expectations with regard to information technology that might impact the financial statements," he said. "So when you add those dimensions of stakeholder expectations, as well as the changing regulatory environment, which is changing at a rapid pace, you have … movement toward emphasizing the role of the auditor to providing assurance over AI."
AICPA vice president of assurance and advisory innovation Amy Pawlicki raised a similar point, saying there are very strong analogies to IT audit. The auditors are still asking whether the system is operating as intended, whether they can track the algorithms and review the code, whether they can test the outputs and the degree of variance in those outputs, whether there are controls and if those controls are regularly tested, among other factors.
But while the process itself is similar, she said the subject matter is very different. While there is certainly room for auditors to maneuver in the AI space, they are still using analogs that weren't specifically made for these situations. For example, AI tends to be more of a "black box," meaning it can be difficult, if not impossible, to understand what is happening to produce the outputs it does. This means the auditor can't simply examine the code or trace how the algorithm works.
"It's not a traditional assurance engagement," said Pawlicki. "You can do a traditional assurance engagement with a code review in a white box, but if there is a black box you can look only at the output and [whether] that output is in accordance with expected parameters."
Deloitte audit and assurance services partner Ryan Hittner, who leads the firm's Trustworthy AI service offering, raised the same issue, using the same term. He said certain techniques used in audit procedures, such as linear regression, are intuitive to understand and apply: it's easy to see how the model or tool translates inputs into outputs. But when looking at AI, he said, it can be more difficult to gain this intuitive understanding, so auditors need to adapt their approach.
"We're left with looking at outputs only. Consider a black box or vector tool where you don't have access to the underlying underpinnings of the model. In most cases, rather than addressing or assessing the theoretical workings on the model, you look more toward reviewing the output," he said.
However, even in these cases auditors might be helped by analogizing to procedures that deal with people, who also tend to be black boxes whose inner thoughts cannot be read.
"As AI gets better and better, our view is treating it like humans makes sense," said Hittner. "We can't go into someone's head and understand exactly how they thought and 'validated' their thinking. So we're left with looking at outputs, dealing with them, and dealing with the uncertainty."
Jodie Lobana, a member of the International Internal Audit Standards Board as well as the chair of the McMaster Artificial Intelligence Society, raised the black box issue as well. But she said that even if someone can't look inside, there are still things that can be examined or tested, all of which have to do with one of the main things at issue in an audit: the data.
"What you're really doing is testing the input going in to make sure the provenance of the data, where it is coming from, has proper consents from the people, and that it's accurate and reliable and robust. And then you'll look at what comes out of the model and check those results," she said.
She noted that the aims of the audit remain the same regardless of the technology involved. The auditor, whether internal or external, wants to know if the information is accurate, reliable and unbiased.
"In addition, the methodology you follow to test something still remains the same," she added. "You start with the objectives of the process, determine the key risks in achieving those objectives, determine the related controls you wish to see, and then, audit to check whether those controls exist and are working effectively."
Beyond specific procedures, auditors are also adapting frameworks from other organizations to AI and using them for guidance. The Committee of Sponsoring Organizations of the Treadway Commission's framework for enterprise risk management and the National Institute of Standards and Technology's AI risk management framework were both cited by multiple individuals as good analogs for dealing with AI. Others mentioned the International Organization for Standardization and the Cloud Security Alliance as also having good resources. Each of the Big Four firms, too, has released its own framework for responsible AI usage, which its professionals use as a guide when working with AI-enabled clients.
But Pawlicki, from the AICPA, thinks these are imperfect substitutes. Yes, professionals can likely conduct an examination of a client's AI capacities using them. But they're not specific to the accounting profession, so they're not wholly suited to the situation. Further, because none of these substitutes is authoritative, there is less consistency from audit to audit and entity to entity.
"It can be done. But the stronger and more consistent the underlying framework the engagement is being done against, the more consistency, comparability, transparency and value users get from those services to gain confidence in what is coming out of these AI systems and not just be blindly relying on them," she said. Until that is done, according to Pawlicki, people won't really be able to audit AI because there won't be anything to audit against.
Professionals likely won't need to wait long for official guidance, though, as many organizations are working on exactly that. Pawlicki said the AICPA is currently forming a group of experts from among firm members to think about how AI is being used in the assurance and auditing space and whether a specific project is needed. She pointed to other frameworks developed by the AICPA, such as those relating to sustainability, blockchain and cybersecurity, as examples of how things could go with AI. In each case, auditors at first lacked official structures for properly evaluating and testing these areas, but the AICPA was eventually able to gather experts and develop a proper framework for each.
The Public Company Accounting Oversight Board is also monitoring the situation. A spokesperson with the board said the regulator is already engaged in a research project regarding the use of technology, which includes AI. This project has already informed a recent standard that clarified certain auditor responsibilities when using technology-assisted analysis. The spokesperson also said the PCAOB created a Technology Innovation Alliance Working Group that advises the board on the use of emerging technologies by auditors and preparers, and makes recommendations to the board regarding existing or future oversight programs. Recently, members of the Standards and Emerging Issues Advisory Group held a listening session on AI.
Lobana, of the IIASB, noted that larger regulatory frameworks are also in motion, pointing to the European Union's Artificial Intelligence Act and the Canadian Digital Charter as examples. These will provide another set of governance controls that auditors will be able to test against.
Regardless of standards or frameworks, examining AI already involves heavy assistance from technical specialists, and that is unlikely to change. Desai, the Schellman CEO, said firms will likely need dedicated experts in the future if they want to conduct AI audits, because the things they'll be examining are highly complex.
"You may be going into real developers and technologists and AI-powered algorithms are very complicated," she said. "Someone will need to go through the algorithms and make sure they're appropriate, and that to me is a coder. I'd need someone who understands the code. I feel like I'm a high tech person, and I don't think I could do that. So I think every firm will have a group that says these are the AI [experts], they understand machine learning models and robotic processing."
Desai added it would be difficult for a firm to give everyone the skills they'd need. She estimated that half of the audit work could be done by upskilled auditors, with the other half handled by subject matter experts.
Hayes, of PKF O'Connor Davies, had a similar prediction. He said he doesn't understand actuarial tables, but he can still audit them with the help of an expert. Similarly, he envisions auditors working hand in hand with technical experts such as Suma Chander, a PKF O'Connor Davies partner who specializes in high-level technology strategy for organizations of all sizes. She said that while she is not actually conducting the financial audit, she and professionals like her are arming the auditor with the information needed to do so effectively.
"I am not auditing the statement," she said. "That is the financial auditor. I am auditing the technology and making sure the financial auditor is getting the information he [needs] from a technical standpoint. I am [overseeing] the data gathering in terms of the extraction, how we get it out of the system, how it calculates, what systems they are using, what tools they are using, is there any type of AI that could be built into the reporting of the financial statements. This is the work of the technology, making sure to look into what is driving the process and giving the information to [the auditor to] make sure the numbers are accurate."
While Deloitte also anticipates using technical experts on AI audits, there seems to be more focus on equipping auditors with basic AI knowledge. These auditors don't need to be technical experts, but they do need to at least understand what AI can and can't do and know the kinds of questions they should be asking during an AI audit engagement.
"We always have a multi-disciplinary approach," said Deloitte's Hittner. "We have upskilling of our auditors on basic AI and what to look for, [such as] identifying where AI or some advanced approaches are being used. That is probably the most important thing. AI can be implemented within those systems without much knowledge, so it's critical to identify and uptrain folks to identify that. We always have specialists to handle certain complex situations. So we would treat it similarly to how we look at other things like valuation. But ultimately, we want to leverage the auditor skillset, which is critical for auditing AI, and layer it in with a multidisciplinary mindset."
Pawlicki, from the AICPA, said that regardless of what eventually emerges, it is important for accountants to be leaders when it comes to AI oversight, as it plays directly to the profession's strengths.
"The profession stands ready to help in this space," she said. "This is our wheelhouse. This is what we do. We provide confidence in information, independent third-party assurance that brings confidence to users of financial statements provided by third parties."