As this incident illustrates, just as AI is beginning to propel businesses forward, it is also creating enormous fraud risks. Such scams are set to proliferate for one simple reason: they work. AI can produce sophisticated deepfakes and synthetic identities that bypass conventional security measures, and it enables highly automated, large-scale attacks that are harder to detect and deter.
Additionally, while typical online frauds exploit existing security flaws, AI can be used to build self-learning bots that find new vulnerabilities and craft attacks that adapt over time to evade detection. In short, fraudsters using AI can mount more sophisticated, more targeted attacks, more often and with less effort. As a result, internal audit has an important part to play in raising awareness and helping to bolster controls.
Recognize the Signs
Fraudsters and other malevolent actors are not just targeting cash: They are going after customer and corporate data, intellectual property, or other commercially sensitive information. And they can strike an organization in many different ways. Ryan Hittner, co-lead of Deloitte & Touche’s Artificial Intelligence & Algorithmic Assurance practice in New Jersey, says there are three key types of AI fraud that organizations need to prepare for:
- Identity/impersonation fraud.
- Fake documents and data produced to look real and credible.
- Mass volume, sophisticated phishing attacks.
“These kinds of frauds already existed,” he says. “The problem is that AI can do them more effectively and at scale, which means controls need to be improved.”
Fortunately, experts say there are several steps organizations can take to defend themselves. The most obvious first step is to consider the specific areas of the business that may be vulnerable to AI technologies, says Elizabeth Metliss, managing associate in the London office of law firm Mishcon de Reya LLP.
For example, Metliss says, organizations should consider how they interact with customers or clients — such as on the phone or online — or if there are photos or videos of personnel on a company website that could be exploited. She adds that organizations should have a “house style” for communications to make them more distinctive and more difficult for AI to emulate, which could include adding background noise or music to any videos uploaded onto their websites or social media channels. Such measures would help employees across the business understand what to look out for, such as requests or content that may appear unusual or outside of what would be expected from normal business activity.
Organizations also should use AI tools to help detect anomalies, Metliss says. In the case of a cloned voice recording or a deepfake, red flags could include unusual patterns or intonation in language or speech, while language analysis of emails or other written communications could flag irregular financial transaction requests. "This can alert your business to any potential unauthorized activity or if an employee has fallen victim to a scam," she says. Tools or apps that can help verify or analyze content include reverse image searches, fact-checking websites, and deepfake-detection software.
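To make that concrete, the following minimal Python sketch shows the kind of rule-based check an anomaly-detection tool might apply to incoming payment requests. The sender history, known domains, field names, and threshold are illustrative assumptions, not a production detector.

```python
from statistics import mean, stdev

# Illustrative history of approved payment amounts per requester (assumed data).
HISTORY = {
    "a.smith@example.com": [1200.0, 980.0, 1105.0, 1320.0, 1010.0],
}
KNOWN_DOMAINS = {"example.com"}  # domains the business normally transacts with
ZSCORE_LIMIT = 3.0               # illustrative anomaly threshold

def flag_payment_request(sender: str, amount: float) -> list[str]:
    """Return human-readable red flags for a payment request."""
    flags = []
    domain = sender.rsplit("@", 1)[-1]
    if domain not in KNOWN_DOMAINS:
        flags.append(f"unfamiliar sender domain: {domain}")
    past = HISTORY.get(sender, [])
    if len(past) >= 3:
        mu, sigma = mean(past), stdev(past)
        if sigma and abs(amount - mu) / sigma > ZSCORE_LIMIT:
            flags.append(f"amount {amount:,.2f} far outside requester's history")
    else:
        flags.append("no transaction history on file for requester")
    return flags

print(flag_payment_request("a.smith@example.com", 48000.0))
```

In practice, rules like these would sit alongside statistical or machine-learning models trained on the organization's own communication and transaction patterns.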
Teaching employees how to recognize signs of AI fraud and what to do when they receive suspicious requests is important. According to Hittner, case studies can be useful, ranging from the simplest, low-level fraud to the most complicated. "Frauds will happen, but having a culture where incidents are quickly escalated to those who have the power to prevent further damage is very important," he says.
Hittner adds that internal audit needs to be kept informed about AI fraud attempts and incidents “so that it can be involved in creating the solution from the start.” That is better than assessing controls after an incident and finding “they are weak or, worse still, didn’t work and didn’t flag any frauds that have gone undetected,” he explains.
Assess the Threat
Roy Waligora, head of Investigations and Corporate Forensics at professional services firm KPMG in London, says organizations should perform thorough threat assessments to reveal both internal and external risks. This will help the organization define its risk tolerance, identify current fraud prevention measures, and pinpoint areas where it can improve its fraud-prevention strategy.
Waligora adds that internal audit should assess whether management has "considered all risks associated with AI that are likely to have a significant financial, reputational, or regulatory impact." Internal audit should conduct a robust fraud risk assessment and review the effectiveness of existing prevention and detection processes. This involves comparing the current risk management framework against the actions and controls that are needed, to identify any gaps. Auditors should update this analysis regularly to ensure that fraud responses remain effective and proportionate in detecting new fraud methods, he says.
Identifying the areas of the business that are using AI will provide greater visibility into where risks may occur in systems and data sources. Once potential vulnerabilities are identified, organizations should set up multilayered controls that combine AI-powered fraud detection with human oversight and robust security practices, says Eric Schwake, director of Cybersecurity Strategy at Salt Security in Eugene, Ore. For example, if developers use AI to build an application programming interface (API) that lets different software systems talk to one another, processes, procedures, and governance rules must be in place to validate the API's security before it goes into production.
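As one illustration of such a pre-production gate, the Python sketch below scans an API's OpenAPI specification and refuses deployment if any operation lacks a declared security requirement. The policy, file handling, and script shape are simplified assumptions rather than a complete security review.

```python
import sys
import yaml  # PyYAML

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def unsecured_operations(spec: dict) -> list[str]:
    """List 'METHOD /path' operations that declare no security requirement."""
    has_global_security = bool(spec.get("security"))
    missing = []
    for path, item in spec.get("paths", {}).items():
        for method, operation in item.items():
            if method in HTTP_METHODS:
                if not operation.get("security") and not has_global_security:
                    missing.append(f"{method.upper()} {path}")
    return missing

if __name__ == "__main__":
    # Usage (illustrative): python check_api.py openapi.yaml
    with open(sys.argv[1]) as f:
        spec = yaml.safe_load(f)
    problems = unsecured_operations(spec)
    if problems:
        print("Blocking deployment; operations without security requirements:")
        print("\n".join(f"  {p}" for p in problems))
        sys.exit(1)
    print("All operations declare security requirements.")
```

Wiring a check like this into the deployment pipeline makes the governance rule enforceable rather than advisory.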
Schwake says internal audit has a crucial responsibility in assessing AI governance structures and in ensuring data security, access controls, and the overall integrity of AI systems. To achieve this, auditors "should regularly simulate fraudulent attacks to identify any vulnerabilities in AI defenses and promote awareness of any AI-related fraud risks throughout the organization," he explains. "For internal development of applications and APIs where AI has been used, auditors play a key role in guiding what 'good' is so that before these go into production, as much risk is mitigated as possible."
Strengthen Controls
Internal audit has a strong role to play in beefing up an organization’s defenses against AI fraud, according to Theresa Grafenstine, executive vice president and CAE at PenFed Credit Union in Washington, D.C. “Internal audit has a duty to understand AI fraud risks and what controls will help,” she says.
Grafenstine says the controls organizations need to combat AI fraud risks are "largely the same as those already in place for online and other frauds." The key difference is the use of multifactor authentication, ranging from security challenge questions to biometric data, to combat attempted fraud via AI-generated video messages, voice calls, and other scams. "Combating AI fraud comes down to doing more checks to verify whether the demand for cash, data, access, or whatever it might be is actually coming from the correct person and then being sent to the right person," Grafenstine explains.
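The verification Grafenstine describes can be as simple as codified business rules. The Python sketch below is a hypothetical illustration: the thresholds, record fields, and callback flag are all assumptions, and a real implementation would hook into the organization's own approval and identity systems.

```python
from dataclasses import dataclass
from typing import Optional

CALLBACK_REQUIRED_ABOVE = 10_000  # illustrative thresholds, not policy
DUAL_APPROVAL_ABOVE = 50_000

@dataclass
class Request:
    requester: str
    amount: float
    # Confirmed by calling back a number held on file,
    # never one supplied in the inbound message itself.
    callback_confirmed: bool
    second_approver: Optional[str] = None

def may_proceed(req: Request) -> tuple[bool, str]:
    """Decide whether a cash, data, or access request passes verification."""
    if req.amount > CALLBACK_REQUIRED_ABOVE and not req.callback_confirmed:
        return False, "call the requester back on the number held on file"
    if req.amount > DUAL_APPROVAL_ABOVE and req.second_approver is None:
        return False, "a second, independent approver is required"
    return True, "verification checks passed"

print(may_proceed(Request("cfo@example.com", 75_000, callback_confirmed=True)))
# -> (False, 'a second, independent approver is required')
```

The design point is that verification always routes through a channel the fraudster does not control, which is precisely what AI-generated voices and videos cannot fake.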