Financial services firms and their auditors should pay attention to the EU AI Act.
AI-dependent financial services organizations and their internal auditors need to assess the gaps between their AI processes and the requirements of the EU AI Act.
Logan Wamsley | Dec. 28, 2023

European Union (EU) lawmakers this month reached a provisional agreement on the EU AI Act, one of the most significant regulatory attempts to address the risks associated with artificial intelligence (AI) while preserving its many benefits. Although some details are not yet finalized, the trailblazing legislation will have consequences for organizations that leverage the technology.
According to the EU, the regulation is directed primarily toward companies and industries where AI, if mishandled, poses the greatest risk to society. The financial services industry sits near the top of that list, alongside sectors such as education and healthcare. As such, financial sector internal auditors should consider their organization’s AI-related risks in preparation for the new law.
According to Ernst & Young’s 2023 Financial Services GenAI Survey, 99% of financial services leaders surveyed say their organizations are deploying AI, with applications ranging from credit approvals to customer-facing chatbots.
Such advances come with a litany of caveats, however. Regulators around the world have voiced concerns about bias embedded in the algorithms used for major decisions such as credit approvals, as well as inaccurate information provided by chatbots. They also question whether many financial services firms can provide the transparency and data privacy protections needed to leverage AI ethically and safely.
“AI can introduce certain risks, including safety and soundness risks like cyber and model risks,” the U.S. Financial Stability Oversight Council notes in its 2023 annual report. “Errors and biases can become even more difficult to identify and correct as AI approaches increase in complexity, underscoring the need for vigilance by developers of the technology, the financial sector firms using it, and the regulators overseeing such firms.”
Although the EU AI Act is not expected to be implemented until at least 2025, industry analysts say it could be the model for new regulations by other governments. Plus, similar to the General Data Protection Regulation, the law will apply to all providers, distributors, and users of AI systems that do business in the EU regardless of where they are located.
Under the current draft, the EU AI Act will classify products as presenting unacceptable risk to individuals (such as social scoring), high risk to individuals (such as using AI systems in hiring or employee ratings), or low risk to individuals (such as AI chatbots). The regulation is especially stringent for high-risk AI products, requiring users to:
- Establish and maintain a risk management system across the AI system’s life cycle.
- Apply data governance practices that ensure training, validation, and testing data are relevant and representative.
- Maintain technical documentation and automatically generated logs.
- Provide transparency about the system’s capabilities, limitations, and intended use.
- Enable effective human oversight.
- Meet standards for accuracy, robustness, and cybersecurity.
These requirements make transparent, interpretable AI systems and processes a necessity. Making adjustments to comply with the law will require time, resources, and personnel.
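For illustration only, the sketch below shows how an internal audit team might triage an AI inventory against these risk tiers. The tier names mirror the act’s draft categories, but the system names, use-case mapping, and obligation labels are simplified, hypothetical placeholders rather than the act’s legal text.

```python
# A minimal, illustrative risk-tier triage for an AI inventory.
# Tier names follow the act's draft categories; the system names,
# use-case mapping, and obligation labels are hypothetical
# simplifications for demonstration, not the act's legal text.
from dataclasses import dataclass, field

# Simplified labels for the draft act's high-risk obligations.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "data governance and quality controls",
    "technical documentation and logging",
    "transparency and instructions for use",
    "human oversight",
    "accuracy, robustness, and cybersecurity",
]

@dataclass
class AISystem:
    name: str
    use_case: str  # e.g., "credit scoring", "chatbot"
    controls_in_place: set[str] = field(default_factory=set)

def risk_tier(system: AISystem) -> str:
    """Map a use case to a draft-act tier (simplified, hypothetical mapping)."""
    if system.use_case == "social scoring":
        return "unacceptable"
    if system.use_case in {"credit scoring", "hiring", "employee rating"}:
        return "high"
    return "low"

def gap_report(system: AISystem) -> list[str]:
    """Return the high-risk obligations the system does not yet evidence."""
    if risk_tier(system) != "high":
        return []
    return [ob for ob in HIGH_RISK_OBLIGATIONS
            if ob not in system.controls_in_place]

# Hypothetical inventory entries.
inventory = [
    AISystem("LoanScorer", "credit scoring",
             controls_in_place={"human oversight", "risk management system"}),
    AISystem("HelpBot", "chatbot"),
]
for s in inventory:
    print(f"{s.name}: tier={risk_tier(s)}, gaps={gap_report(s)}")
```

A structured inventory like this gives internal audit a starting point for the gap analyses discussed below, because each missing obligation maps directly to a remediation task.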
The financial services industry worldwide is on notice to address AI risk. This month, the U.S. Financial Stability Oversight Council’s annual report identified AI as a potential risk to the nation’s financial stability.
The challenge for financial firms operating in the EU will be identifying gaps in current systems against the essential requirements outlined in the EU AI Act. “Regulators are applying increasing pressure on companies to identify the risks associated with their AI systems and manage them effectively,” notes a Deloitte article, “EU Artificial Intelligence Act.”
In the article, Deloitte partners Mark Cankett and Benjamin Dreifus Lewowicz, and associate director Roger Smith write, “It is essential that AI providers and users have robust risk management frameworks, comprehensive controls, and validation methodologies in place. The EU AI Act will require organizations to re-examine and, where necessary, enhance their control frameworks to meet the requirements of the act.”
This approach is consistent with the considerations outlined in The IIA’s recently updated Artificial Intelligence Auditing Framework. As part of such processes, internal audit can “ensure that legal and compliance teams monitor all current and emerging regulatory requirements,” according to the framework, which is among the resources available from The IIA’s Artificial Intelligence Knowledge Center.
In addition to the current draft of the EU AI Act, organizations can benchmark AI processes against frameworks such as the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, the U.K.’s draft framework for AI regulations, and updates from Japan’s interim discussions on AI.
While the approaches in these frameworks may overlap, it is critical that gap analyses match the financial firm’s regulatory landscape as closely as possible — and be continually monitored and updated. “With varying guidance provided through each regulatory body and government and the rapidly changing legal and regulatory landscape for AI, a global organization should consider the regional context for each AI development,” write Lukas Kruger and Lewis Keating, U.K.-based directors in Deloitte’s risk advisory practice, in “Digital Risk — Artificial Intelligence.” They add that internal audit should “decipher which controls and governance should be standardized across the organization and which should be discretionary.”
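As a rough sketch of the benchmarking Kruger and Keating describe, an audit function might maintain a coverage matrix that compares one set of in-place controls against several frameworks at once. In the example below, the framework names are real, but the requirement labels and control mappings are hypothetical placeholders assumed for illustration.

```python
# An illustrative coverage matrix comparing one set of in-place controls
# against several frameworks at once. The framework names are real; the
# requirement labels and mappings are hypothetical placeholders.

FRAMEWORKS = {
    "EU AI Act (draft)": {"risk management", "human oversight", "logging"},
    "NIST AI RMF": {"risk management", "measurement", "governance"},
    "UK draft AI framework": {"transparency", "risk management"},
}

# Requirements the organization's current controls already address.
CONTROLS_IN_PLACE = {"risk management", "logging", "transparency"}

for framework, requirements in FRAMEWORKS.items():
    gaps = sorted(requirements - CONTROLS_IN_PLACE)
    status = "covered" if not gaps else "gaps: " + ", ".join(gaps)
    print(f"{framework}: {status}")
```

Because the regulatory landscape is changing rapidly, a matrix like this is only useful if it is rerun whenever a framework is updated or the firm enters a new jurisdiction.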
Moreover, internal auditors and other risk management functions should monitor the actions of financial sector standard-setting organizations and regulatory authorities such as central banks and securities regulators, the European Insurance and Occupational Pensions Authority, and the International Organization of Securities Commissions.
Even as the EU readies its AI regulation, the reality is that AI is advancing too quickly for any regulatory body to fully address the technology’s risks. Recognizing this, financial sector internal audit functions should provide assurance that any changes made to comply with a regulatory framework also align with key organizational initiatives.
An example of these initiatives is improving AI literacy. “By deepening people’s understanding of AI use cases and its associated risks, a foundation can be built for the effective implementation of AI and the pragmatic management of its risks,” notes a Deloitte report, AI Regulation in the Financial Sector.
Internal audit also can provide assurance around AI usage based on existing control frameworks and update the organization on adjustments needed to comply with regulatory changes. In an uncertain regulatory environment, internal audit can be a source of clarity and direction for its organization about AI risk and compliance.