Ramp releases tool to detect AI-generated receipts

Ramp, a spend management solutions provider, built and released a new detection feature in 24 hours in direct response to recent advances in AI image generation, which have made it easy to create extremely convincing fake receipts that could be used for financial fraud.

Dave Wieseneck, an "expert in residence" at Ramp who administers the company's own instance of Ramp, noted that faking receipts is not a new practice. What has changed is that OpenAI's recent image generation update has made it much easier, turning what was once a painstaking effort into something done casually in minutes.

"So while it's always been possible to create fake receipts, AI has made it super duper easy, especially OpenAI with their latest model. So I think it's just super easy now and anybody can do it, as opposed to experts that are in the know," Wieseneck said in an interview. 

[Image: an AI-generated receipt, created with ChatGPT]

Rather than trying to assess the image itself, the software examines the file's metadata for markers particular to generative AI systems. When those markers are present, the software flags the receipt as a probable fake.

"When we see that these markers are present, we have really high confidence of high accuracy to identify them as potentially AI-generated receipts," said Wieseneck. "I was the first person to test it out as the person that owns our internal instance of Ramp and dog foods the heck out of our product." 

He said the speed at which the team produced this solution reflects the company culture. Small pods within the team will observe a problem and stop what they're doing to focus on that specific need: they gather in a Slack channel, work through the problem, code it late at night and push it out in the morning.

Wieseneck conceded it is not a total solution but rather a first line of defense to deter the casual fraudster. He compared it to locking your door before going out: if the front door is unlocked, a person can stroll in and steal everything, but will likely give up if it is locked. A professional criminal with extensive breaking-and-entering experience, however, is unlikely to be deterred by a lock alone; that takes a lock plus an alarm system plus an actual security guard.

"But that doesn't mean that you don't lock your door and you don't add pieces of defense to make it harder for people to either rob your house or, in this case, defraud your company," he said.

That isn't to say there are no plans to bolster the solution further. After all, the feature is only days old. He said the company is already looking into techniques like pixel analysis and textual analysis of the document itself to further enhance its AI-detection capabilities, though he stressed that they want to be very confident these work before pushing them out to customers.

"We're focused on giving finance teams confidence that legitimate receipts won't be falsely flagged. So we want to tread carefully. We have lots of ideas. We're going to work through them and kind of solve them in the same process we've always done here at Ramp," he said. 

This is likely only the beginning of AI image generators being used to fake documentation. For instance, AI chatbots have recently been shown to be adept at forging passports as well.

AI fraud ascendant

This speaks to an overall trend of AI being used in financial crime, highlighted in a recent report from financial and risk advisory solutions provider Kroll, which surveyed about 600 CEOs, chief compliance officers, general counsel, chief risk officers and other financial crime compliance professionals. The survey found that experts in this area are growing alarmed at the rising use of AI by cybercriminals and other bad actors, and that few are confident their own programs are ready to meet the challenge.

The poll found that 61% of respondents say the use of AI by cybercriminals is a leading catalyst for risk exposure, such as through the generation of deepfakes and, likely, AI-generated financial documents. While 57% think AI will help fight financial crime, 49% think it will hinder the effort (Kroll said they are likely both right).

"The rapid-fire adoption of AI tools can be a blessing and a curse when it comes to financial crime, providing new and more efficient ways to combat it while also creating new techniques to exploit the broadening attack surface — be it via AI-powered phishing attacks, deepfakes or real-time mimicry of expected security configurations," said the report. 

Yet many professionals do not feel their current programs are up to the task. The rise in AI-guided fraud is part of an overall projected 71% increase in financial crime risks in 2025. Meanwhile, only 23% rate their compliance programs as "very effective," with a lack of technology and investment named as prime reasons. Many also lack confidence in the governance infrastructure overseeing financial crime, with just 29% describing it as "robust."

They're also not entirely convinced that more AI is the solution. The poll found that confidence in AI technology has dropped dramatically over the past two years: the share of respondents who say AI tools have had a positive impact on financial crime compliance fell from 39% in 2023 to just 20% today. Nonetheless, investment in AI remains heavy. The poll found 25% already say AI is an established part of their financial crime compliance program, and 30% say they are in the early stages of adoption. In the year ahead, 49% expect their organization will invest in AI solutions to tackle financial crime, and 47% say the same of their cybersecurity budgets.

To help combat AI-enabled financial crime, Kroll recommended companies form cross-functional teams that go beyond IT and cybersecurity to involve those in AML, compliance, legal, product and senior management. Further, Kroll said there must be focused, hands-on training with new AI tools, updated and repeated as the organization implements new AI capabilities and as the regulatory and risk landscape changes. Finally, to combat AI-related fraud, Kroll recommended companies maintain a "back to the basics" approach: focus on fundamental human intervention and confirmation procedures, regardless of how convincing or time-sensitive circumstances appear.
