On the Frontlines: Fighting AI Cyber Threats
Theresa Payton | Feb 10, 2025
![](/globalassets/site/magazine/voices/2025/on-the-frontlines-fighting-ai-cyber-threats/theresa-payton_695x458.jpg?width=400&quality=100)
Ahead of her presentation at The IIA’s 2025 GAM: Great Audit Minds Conference, Theresa Payton, an artificial intelligence (AI) strategist, cybersecurity expert, and the first female White House chief information officer, explores the rise in AI-driven cybercrime and how organizations and internal audit functions can proactively combat disinformation and fraud.
Last year, an AI-powered cyberattack targeting Hong Kong-based engineering and consulting firm Arup made headlines around the world. The attack, which leveraged deepfake and cloning technology to impersonate the organization’s C-suite and persuade an employee to wire $25 million to the perpetrators, sparked fear about the threat of AI-powered cybercrime.
Since then, the frequency and sophistication of deepfake and AI-related cyberattacks have only increased.
In an era where AI, deepfakes, and cyber threats evolve at breakneck speed, internal auditors stand as a critical line of assurance, protecting organizations from unseen risks in an increasingly digital world. It’s imperative that internal auditors embrace their role in helping safeguard the trust, integrity, and business continuity of the organizations they serve.
Deepfakes and Cloning: Understanding the Nuances
Artificial intelligence can be leveraged in a number of ways to supercharge cybercriminal activity, and it is important to understand the nuances within AI-driven exploitation.
Both deepfakes and voice or identity cloning are common uses of AI that can cause large-scale operational, financial, and reputational damage. Deepfakes generally refer to AI-generated audio, video, or images that are designed to impersonate individuals convincingly and are often used to spread disinformation about or within an organization. Voice and identity cloning, although similar, leverage AI to replicate a specific person’s voice, credentials, or digital identity to manipulate systems or people. Importantly, cloning tends to pose a direct operational threat and is often targeted toward a specific activity, such as fraudulent wire transfers, unauthorized system access, and identity theft.
In both cases, effective threat mitigation requires that organizationwide technology safeguards are in place and that every team member remains vigilant to the warning signs of a potential cyber threat.
Who Is Most Vulnerable?
Although no industry or organization is exempt from AI-powered cyber risk, those handling high-value data, including financial services firms, healthcare providers, government agencies, and critical infrastructure operators, often face the greatest level of risk. Similarly, no department is immune to a cyber threat, but finance, human resources, legal, and executive leadership are often prime targets of digital attacks.
Ultimately, attackers will go where the incentives are the highest, so the more valuable the data, the bigger the target. Perpetrators of deepfakes seek to manipulate decision makers and alter business realities that can jeopardize the operational, reputational, and financial integrity of the organization.
Safeguarding Your Organization
Most organizations’ biggest mistake is assuming cloning and deepfakes won’t happen to them. Overconfidence is the root of vulnerability, and every organization can be a target.
Every function and department plays a role in safeguarding an organization against AI-powered cyber threats. Internal audit is responsible for identifying potential blind spots, assessing AI governance within the organization, and stress-testing internal controls. Internal audit is an indispensable partner to boards and management as well as to technology and security teams, providing oversight, objectivity, and strategic foresight to help mitigate emerging threats.
Alongside the internal audit function, senior leadership and the board must work to ensure their governance frameworks are aligned with the evolving threat environment. Cybersecurity and IT departments are needed to implement technical safeguards and response strategies while overseeing continuous monitoring of potential threats.
Organizations can strengthen security and resiliency by implementing multifactor authentication beyond voice or video confirmation and by using simulated deepfake attacks to stress-test internal controls. Intensive AI training builds team awareness of potential risk areas and weaknesses and helps identify threats before they escalate. Establishing clear guidelines for responsible AI usage within the organization and conducting regular audits of AI systems ensure that organizations balance the potential for AI innovation with the risk of exploitation.
Looking Ahead
While greater regulatory clarity and new AI frameworks are forthcoming, regulations often lag behind technological advancements. The European Union’s AI Act and the U.S. Executive Order on AI are steps in the right direction, but organizations cannot afford to wait. Self-regulation, transparency, and ethical AI deployment must be priorities for organizations and internal audit functions today.
Safeguarding against the threat of deepfakes, cloning, and other AI-powered cyberattacks is an organizationwide effort. It requires the cooperation of internal audit, boards, senior management, and security and technology teams to ensure that AI and cybersecurity strategies are proactive, resilient, and forward-thinking.
Theresa Payton will present How AI, Deepfakes, Voice Cloning, and ChatGPT Are Transforming Cybercrime at the 2025 GAM: Great Audit Minds Conference, March 10-12 in Orlando, Fla.