
The Fraudsters Have AI, Too

Neil Hodge | Jun 10, 2024

In February, a finance worker at a multinational firm in Hong Kong was tricked into paying $25 million to fraudsters who used deepfake artificial intelligence (AI) technology to pose as the company’s U.K.-based chief financial officer in a video conference call. At first, the employee thought the original email asking him to attend the online call was a phishing scam because it asked him to make a “secret transaction.” Yet, he was reassured when he joined the meeting because he recognized his colleagues also were on the call. However, they too were deepfake re-creations whose likenesses and voices had been taken from publicly available video and audio footage.

As this incident illustrates, just as AI is beginning to propel businesses forward, it also is creating enormous fraud risks. Such scams are set to proliferate for one simple reason — they work. AI has the potential to create sophisticated deepfakes and synthetic identities that can bypass conventional security measures, as well as enable highly automated, large-scale attacks, making them more challenging to detect and deter.

Additionally, while typical online frauds exploit existing security flaws, AI can be used to build self-learning bots that find new vulnerabilities and attack methods that adapt automatically to evade detection over time. In short, fraudsters using AI can produce more sophisticated, more targeted attacks more frequently and more easily. As a result, it is important for internal audit to raise awareness and help bolster controls.

Recognize the Signs

Fraudsters and other malevolent actors are not just targeting cash: They are going after customer and corporate data, intellectual property, or other commercially sensitive information. And they can strike an organization in many different ways. Ryan Hittner, co-lead of Deloitte & Touche’s Artificial Intelligence & Algorithmic Assurance practice in New Jersey, says there are three key types of AI fraud that organizations need to prepare for:

  • Identity/impersonation fraud.
  • Fake documents and data produced to look real and credible.
  • Mass volume, sophisticated phishing attacks.

“These kinds of frauds already existed,” he says. “The problem is that AI can do them more effectively and at scale, which means controls need to be improved.”

Fortunately, experts say there are several steps organizations can take to defend themselves. The most obvious first step is to consider the specific areas of the business that may be vulnerable to AI technologies, says Elizabeth Metliss, managing associate in the London office of law firm Mishcon de Reya LLP.

For example, Metliss says, organizations should consider how they interact with customers or clients — such as on the phone or online — and whether photos or videos of personnel on a company website could be exploited. She adds that organizations should have a “house style” for communications to make them more distinctive and harder for AI to emulate, which could include adding background noise or music to any videos uploaded to their websites or social media channels. Such measures help employees across the business recognize requests or content that fall outside normal business activity.

Organizations also should use AI tools to help detect anomalies, Metliss says. In the case of a cloned voice recording or a deepfake, red flags could be patterns or intonation in language or speech, while language analysis in emails or other written communications could flag instances where irregular financial transaction requests have occurred. “This can alert your business to any potential unauthorized activity or if an employee has fallen victim to a scam,” she says. Tools or apps that can help verify or analyze content include reverse image searches, fact-checking websites, and deepfake-detection software.
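To make the idea concrete, the following is a minimal sketch, in Python, of the kind of rules-based language check Metliss describes for written communications. The keyword lists, scoring thresholds, and the flag_request helper are illustrative assumptions, not any vendor's detection tool, and a real deployment would rely on far richer signals.

```python
# Minimal sketch of a rules-based language check for irregular payment requests.
# Keyword lists, thresholds, and flag_request() are illustrative assumptions,
# not a production fraud-detection model.
from dataclasses import dataclass

URGENCY_CUES = ["urgent", "immediately", "today", "asap"]
SECRECY_CUES = ["confidential", "secret", "do not discuss", "keep this between us"]
PAYMENT_CUES = ["wire transfer", "bank details", "payment", "invoice", "transfer"]

@dataclass
class Flag:
    suspicious: bool
    score: int
    reasons: list

def flag_request(email_text: str, known_payee: bool = True) -> Flag:
    """Score an email for cues that often accompany social-engineering payment fraud."""
    text = email_text.lower()
    reasons, score = [], 0
    if any(cue in text for cue in PAYMENT_CUES):
        score += 1
        reasons.append("mentions a payment or bank details")
    if any(cue in text for cue in URGENCY_CUES):
        score += 1
        reasons.append("uses urgency language")
    if any(cue in text for cue in SECRECY_CUES):
        score += 2
        reasons.append("asks for secrecy")
    if not known_payee:
        score += 2
        reasons.append("payee not on the approved vendor list")
    # A score of 3 or more triggers human review; the threshold is arbitrary here.
    return Flag(suspicious=score >= 3, score=score, reasons=reasons)

if __name__ == "__main__":
    sample = ("Please process this wire transfer today. This is a secret transaction, "
              "do not discuss it with the rest of the team.")
    print(flag_request(sample, known_payee=False))
```

Even a simple check like this would have scored the “secret transaction” email in the Hong Kong case highly enough to route it to a human reviewer before any money moved.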

Teaching employees how to recognize signs of AI fraud and what to do when they receive suspicious requests is important. According to Hittner, case studies can be useful, ranging from the simplest, low-level fraud to the most complicated. “Frauds will happen, but having a culture where people quickly escalate and report to those who have the power to prevent further damage is very important,” he says.

Hittner adds that internal audit needs to be kept informed about AI fraud attempts and incidents “so that it can be involved in creating the solution from the start.” That is better than assessing controls after an incident and finding “they are weak or, worse still, didn’t work and didn’t flag any frauds that have gone undetected,” he explains.

Assess the Threat

Roy Waligora, head of Investigations and Corporate Forensics at professional services firm KPMG in London, says organizations should perform thorough threat assessments to reveal both internal and external risks. This will help the organization define its risk tolerance, identify current fraud prevention measures, and pinpoint areas where it can improve its fraud-prevention strategy.

Waligora adds that internal audit should assess whether management has “considered all risks associated with AI that are likely to have a significant financial, reputational, or regulatory impact.” Internal audit should conduct a robust fraud risk assessment and review the effectiveness of existing prevention and detection processes. This involves comparing the current risk management framework with the necessary actions and controls to identify any weaknesses. Auditors should update this analysis regularly to ensure that fraud responses remain effective and proportionate in detecting new fraud methods, he says.

Identifying the areas of the business that are using AI will provide greater visibility into where risks may occur in systems and data sources. Once potential vulnerabilities are identified, organizations should set up multilayered controls that combine AI-powered fraud detection with human oversight and robust security practices, says Eric Schwake, director of Cybersecurity Strategy at Salt Security in Eugene, Ore. For example, if developers are using AI to build an application programming interface (API) that allows different software to talk to one another, there must be processes, procedures, and governance rules in place to validate its security before it goes into production.
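One way such a pre-production gate could work is sketched below in Python: a check that scans an exported OpenAPI specification and fails the build if any endpoint declares no security requirement. The policy (every operation must declare a security scheme) and the spec filename are assumptions for illustration; Schwake's point is about governance, not this particular script.

```python
# Minimal sketch of a pre-production gate that checks an OpenAPI (JSON) spec
# for endpoints with no security requirement. The policy and file path are
# illustrative assumptions, not a specific product's workflow.
import json
import sys

def unsecured_operations(spec: dict) -> list:
    """Return (path, method) pairs that declare no security scheme."""
    global_security = spec.get("security", [])
    findings = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method.lower() not in {"get", "post", "put", "patch", "delete"}:
                continue
            # An operation-level "security" key overrides the global setting.
            effective = op.get("security", global_security)
            if not effective:
                findings.append((path, method.upper()))
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as f:  # e.g., an api-spec.json exported from the build
        spec = json.load(f)
    issues = unsecured_operations(spec)
    for path, method in issues:
        print(f"UNSECURED: {method} {path}")
    sys.exit(1 if issues else 0)  # fail the pipeline if anything is unsecured
```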

Schwake says internal audit has a crucial responsibility in assessing AI governance structures and ensuring data security, access controls, and the overall integrity of AI systems. To achieve this, auditors “should regularly simulate fraudulent attacks to identify any vulnerabilities in AI defenses and promote awareness of any AI-related fraud risks throughout the organization,” he explains. “For internal development of applications and APIs where AI has been used, auditors play a key role in guiding what ‘good’ is so that before these go into production, as much risk is mitigated as possible.”

Strengthen Controls

Internal audit has a strong role to play in beefing up an organization’s defenses against AI fraud, according to Theresa Grafenstine, executive vice president and CAE at PenFed Credit Union in Washington, D.C. “Internal audit has a duty to understand AI fraud risks and what controls will help,” she says.

Grafenstine says the controls organizations need to combat AI fraud risks are “largely the same as those already in place for online and other frauds.” The key difference is to use multifactor authentication — ranging from security challenge questions to biometric data — to combat attempted fraud via AI-generated video messages, voice calls, and other scams. “Combating AI fraud comes down to doing more checks to verify whether the demand for cash, data, access, or whatever it might be is actually coming from the correct person and then being sent to the right person,” Grafenstine explains.

How crucial is multifactor authentication? “The Hong Kong AI scam couldn’t have happened with additional layers of sign-off,” says Nick Henderson-Mayo, director of Learning and Content at VinciWorks in Jerusalem. He advises auditors to stress test business procedures to determine how many levels of sign-off are required before money can be sent. “The more people involved in a process, the less likely an AI fraud can be successful,” he notes. “This might feel burdensome to businesses more used to agility, but the risk of AI crime is unprecedented, and defensive measures must be taken.”
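The sign-off principle Henderson-Mayo and Grafenstine describe can be expressed very simply. The sketch below, in Python, models a tiered approval rule in which larger payments require more independent approvers and the requester can never approve their own transfer. The tier amounts and the Payment structure are illustrative assumptions, not any firm's actual policy.

```python
# Minimal sketch of a tiered sign-off rule for outgoing payments. The tiers,
# amounts, and data structures are illustrative assumptions.
from dataclasses import dataclass, field

# Required number of distinct approvers by payment size (amounts in USD).
APPROVAL_TIERS = [
    (10_000, 1),       # up to 10k: one approver
    (250_000, 2),      # up to 250k: two approvers
    (float("inf"), 3), # above that: three approvers
]

@dataclass
class Payment:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

def required_approvals(amount: float) -> int:
    for ceiling, needed in APPROVAL_TIERS:
        if amount <= ceiling:
            return needed
    return APPROVAL_TIERS[-1][1]

def can_release(payment: Payment) -> bool:
    """Release only when enough approvers, none of them the requester, have signed off."""
    independent = payment.approvals - {payment.requester}
    return len(independent) >= required_approvals(payment.amount)

if __name__ == "__main__":
    p = Payment(amount=25_000_000, requester="finance.worker")
    p.approvals.update({"finance.worker", "regional.controller"})
    print(can_release(p))  # False: a $25M transfer needs three independent sign-offs
```

The point of the exercise is the stress test itself: auditors can walk each payment threshold through a rule like this and ask whether a convincing deepfake of one executive would still be enough to move the money.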

The first step to assessing controls is for internal audit to ask a lot of simple questions to get deeper answers. “You don’t need to know everything about how the AI works, but you do need to get to the bottom of how good the organization’s controls are to identify suspicious activity and mitigate the risks,” Grafenstine explains. She cautions auditors against relying on the IT function to explain AI problems or letting cybersecurity and AI terminology prevent them from learning what they need to know.

Second, internal audit should conduct an access and identification audit to check whether challenge questions are regularly updated, and if the organization’s multifactor authentication is strong enough to deal with AI-generated video and voice manipulation. Internal audit also should consider what other controls should be in place to deal with more sophisticated threats.

Grafenstine adds that internal audit should ensure the organization communicates AI fraud risks in terms that people in the business can understand. “Engagement is crucial,” she says. “If people don’t understand what you want, how can they comply?”

The board and executive team also need to be part of the conversation. “The people most likely to be targeted or impersonated are managers and board members,” she explains. “So, they need to be actively part of the solution, show leadership, and be involved in attack simulation exercises and other measures aimed at mitigating the risk.”

Break the AI Illusion: Expert Tips

Create a “house style” to make communications more difficult for AI to emulate. —Elizabeth Metliss, Managing Associate, Mishcon de Reya LLP

Teach employees how to recognize the signs of AI fraud and respond to suspicious requests. —Ryan Hittner, Co-lead, Artificial Intelligence & Algorithmic Assurance Practice, Deloitte & Touche

Assess whether leaders have considered all AI risks that could have a significant impact. —Roy Waligora, Head of Investigations and Corporate Forensics, KPMG

Set up multilayered controls that combine AI-powered fraud detection with human oversight. —Eric Schwake, Director of Cybersecurity Strategy, Salt Security

Involve leadership in attack simulation exercises, as they are the most likely to be impersonated. —Theresa Grafenstine, Executive Vice President and CAE, PenFed Credit Union

Stress test business procedures to determine how many levels of sign-off are required to send money. —Nick Henderson-Mayo, Director of Learning and Content, VinciWorks

Don’t wait for the risk to arrive to combat it. Take an offensive approach to stay one step ahead. —Antonio Cacciapuoti, Head of Internal Audit, Eurizon Capital S.A. Luxembourg

Be an early adopter of AI tools to more quickly understand the risks associated with them. —Alan Kato, Executive Auditor, Inter-American Development Bank

Take the Offensive

Even with such controls in place, some experts say organizations may need a more aggressive approach to countering AI fraud because of the potentially deeper and long-lasting impact an incident could have on them. “We are talking about an emerging risk for which we still have no effective risk responses,” says Antonio Cacciapuoti, head of Internal Audit at asset management company Eurizon Capital S.A. Luxembourg. “The mitigation actions are still in the testing phase, and the path from inherent risk to residual risk is tortuous.”

As such, Cacciapuoti recommends an offensive approach. “Why wait for the risk to arrive and then try to combat it?” he says. “Why not try to attack the risk head on and stay one step ahead?”

Cacciapuoti adds that using AI is the best control to combat AI risks. “AI can process large amounts of data and perform tasks quickly, which makes it ideal to detect complex fraudulent activities that are difficult to detect using traditional systems,” he explains.

However, to achieve this, internal audit needs stronger AI skills to understand this high volume of data — and it needs to be as dynamic as the risk is, Cacciapuoti says. “Internal audit also needs to collaborate more closely with other risk and assurance functions within the organization,” he notes. “It can’t deal with the risk on its own.”

For Alan Kato, executive auditor of the Inter-American Development Bank in Washington, D.C., collaboration is crucial, especially with functions that are responsible for digital transformation. “Get their insights and use their expertise — you will need it,” he says. Internal audit also should be an early adopter of the technologies it is meant to audit. “The more familiar internal auditors are with AI tools, the easier it will be for them to understand the nature of the risks associated with the technology,” he explains.

Pushing for additional resources — or reprioritizing existing resources — is going to become a more frequent discussion point for CAEs and executives, Kato says. “Internal audit follows a risk-based approach, so if the key risks to the business are coming from AI-related fraud, you need to move attention and resources there,” he explains. And internal audit can’t let a lack of AI capability define its approach. “CAEs need to make the case for more resources and be at the front-end of AI capabilities if internal audit is to help protect the organization,” he says.

Raising the Guard

It is clear AI-related fraud poses serious risks to organizations, especially as the damage can be both swift and long-lasting. From deepfakes to automated attacks, AI frauds are fast becoming as intelligent as any AI application. Despite this, there are clear pathways for organizations to better protect themselves. Internal audit has a crucial role in assessing the level of risk and strengthening controls.

Neil Hodge

Neil Hodge is a freelance journalist based in Nottingham, U.K.