
Online Exclusive: Europe to Regulate AI

Logan Wamsley | Dec 28, 2023

European Union (EU) lawmakers this month agreed on a structure for the EU AI Act, one of the most significant regulatory attempts to address the risks associated with artificial intelligence (AI) while preserving its many benefits. Although some details are not finalized, the trailblazing legislation will have consequences for organizations that leverage the technology.

According to the EU, the regulation is directed primarily toward companies and industries where AI, if mishandled, poses the greatest risk to society. The financial services industry sits near the top of that list, alongside sectors such as education and healthcare. As such, financial sector internal auditors should consider their organization’s AI-related risks in preparation for the new law.

AI Use in Financial Services

According to Ernst & Young’s 2023 Financial Services GenAI Survey, 99% of financial services leaders surveyed say their organizations are deploying AI. Some applications include:

  • Chatbots. AI-powered bots equipped with natural language processing can walk customers through account details and direct complaints to appropriate customer service units.
  • Fraud detection and prevention. AI is being added to current rule-based anti-money laundering transaction monitoring and screening systems, enabling them to identify previously undetectable transactional patterns, data anomalies, and suspicious relationships between individuals and entities.
  • Predictive analytics. AI has become a central part of revenue forecasting, stock price predictions, risk monitoring, and case management.
  • Credit risk management. Fintech and digital banking markets are using AI to develop more reliable credit risk management models and solutions, helping to determine the creditworthiness of borrowers by harnessing data to predict the probability of default (a minimal sketch follows this list).
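
To make the credit risk use case concrete, below is a minimal sketch of a probability-of-default model built with scikit-learn. The features, data, and labels are synthetic illustrations under assumed names, not a reference implementation of any firm's system.

```python
# Minimal, illustrative probability-of-default (PD) model.
# All data is synthetic; a production model would use validated
# inputs and a governed, documented training pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=42)
n = 1_000

# Hypothetical borrower features: income, debt-to-income, utilization.
X = np.column_stack([
    rng.normal(60_000, 15_000, n),  # annual income
    rng.uniform(0.0, 0.6, n),       # debt-to-income ratio
    rng.uniform(0.0, 1.0, n),       # credit utilization
])
# Synthetic default labels, loosely tied to debt-to-income ratio.
y = (rng.uniform(0.0, 1.0, n) < 0.1 + 0.5 * X[:, 1]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# predict_proba returns [P(no default), P(default)] for each borrower.
pd_scores = model.predict_proba(X_test)[:, 1]
print(f"Mean predicted probability of default: {pd_scores.mean():.2%}")
```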

Such advances come with a litany of caveats, however. Regulators around the world have voiced concerns about bias embedded in the algorithms used for major decisions such as credit approvals, as well as chatbots relaying inaccurate information. They also question whether many financial services firms can provide the transparency and data privacy needed to leverage AI ethically and safely.

“AI can introduce certain risks, including safety and soundness risks like cyber and model risks,” the U.S. Financial Stability Oversight Council notes in its 2023 annual report. “Errors and biases can become even more difficult to identify and correct as AI approaches increase in complexity, underscoring the need for vigilance by developers of the technology, the financial sector firms using it, and the regulators overseeing such firms.”
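
One way such bias concerns become measurable in practice is a simple approval-rate comparison across applicant groups. The sketch below computes a demographic-parity ratio of the kind an internal auditor might request; the group labels and decisions are hypothetical, and the 0.80 cutoff is an assumption echoing the common four-fifths rule of thumb, not a legal standard.

```python
# Illustrative fairness check: compare approval rates across groups.
# Group labels and decisions are hypothetical; the 0.80 cutoff echoes
# the "four-fifths" rule of thumb and is not a legal standard.
from collections import defaultdict

decisions = [  # (applicant group, approved?) -- a hypothetical sample
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {group: approvals[group] / totals[group] for group in totals}
parity_ratio = min(rates.values()) / max(rates.values())

print(f"Approval rates: {rates}")
if parity_ratio < 0.80:
    print(f"Parity ratio {parity_ratio:.2f} is below 0.80: flag for review")
```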

A First Regulatory Step

Although the EU AI Act is not expected to be implemented until at least 2025, industry analysts say it could become the model for new regulations by other governments. Moreover, as with the General Data Protection Regulation, the law will apply to all providers, distributors, and users of AI systems that do business in the EU, regardless of where they are located.

Under the current draft, the EU AI Act will classify products as presenting unacceptable risk to individuals (such as social scoring), high risk to individuals (such as using AI systems in hiring or employee ratings), or low risk to individuals (such as AI chatbots). The regulation is especially stringent for high-risk AI products, requiring users to:

  • Use AI systems in accordance with the instructions of use.
  • Assign oversight responsibilities to human beings who have the necessary competence, training, and authority.
  • Monitor the operation of AI systems.
  • Inform the provider or distributor of any risks or incidents involved with the use of AI systems and suspend the use of the system, if necessary.
  • Ensure that input data is relevant in view of the intended purpose of the AI system, to the extent that such data is under the organization’s control.
  • Keep logs automatically generated by the AI system, to the extent that such logs are under the organization’s control (recordkeeping; see the sketch after this list).
  • Perform a data protection impact assessment based on information provided by the AI system vendor.
  • Cooperate with national competent authorities (NCAs) on any action related to an AI system. NCAs are responsible for monitoring compliance with national laws and regulations.
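
Several of these requirements, particularly monitoring and recordkeeping, lend themselves to lightweight automation. The following is a minimal sketch, assuming a Python-based system, of writing one structured, timestamped record per model decision so logs can be retained and produced on request. The field names and the log_inference helper are illustrative assumptions, not requirements taken from the act.

```python
# Minimal sketch of inference recordkeeping for a high-risk AI system.
# Field names and the log_inference helper are illustrative; actual
# obligations depend on the final act and national guidance.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_inference.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("ai_recordkeeping")

def log_inference(model_id: str, model_version: str,
                  inputs: dict, output: dict, operator: str) -> None:
    """Append one structured, timestamped record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "operator": operator,  # the human assigned oversight duties
    }
    logger.info(json.dumps(record))

# Example: record a hypothetical credit-scoring decision.
log_inference("credit_scorer", "2.3.1",
              inputs={"income": 52_000, "dti": 0.31},
              output={"pd": 0.07}, operator="analyst_042")
```

Structured JSON records keep the logs machine-searchable, which simplifies producing evidence for a national competent authority later.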

These requirements make transparent, interpretable AI systems and processes a necessity. Making adjustments to comply with the law will require time, resources, and personnel.
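
For interpretability specifically, one widely used technique is permutation importance, which estimates each input's contribution by measuring how much model accuracy drops when that feature's values are shuffled. A minimal sketch using scikit-learn follows; the dataset and feature names are synthetic stand-ins.

```python
# Illustrative transparency check: permutation feature importance.
# Shuffling one feature at a time shows how much it contributes to
# accuracy -- one starting point for explaining model decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit dataset; feature names are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "dti", "utilization", "tenure"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name:>12}: {importance:.3f}")
```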

Assessing Gaps

The financial services industry worldwide is on notice to address AI risk. This month, the U.S. Financial Stability Oversight Council’s annual report identified AI as a potential risk to the nation’s financial stability.

The challenge for financial firms operating in the EU will be identifying gaps in current systems against the essential requirements outlined in the EU AI Act. “Regulators are applying increasing pressure on companies to identify the risks associated with their AI systems and manage them effectively,” notes a Deloitte article, “EU Artificial Intelligence Act.”

In the article, Deloitte partners Mark Cankett and Benjamin Dreifus Lewowicz, and associate director Roger Smith write, “It is essential that AI providers and users have robust risk management frameworks, comprehensive controls, and validation methodologies in place. The EU AI Act will require organizations to re-examine and, where necessary, enhance their control frameworks to meet the requirements of the act.”

This approach is consistent with the considerations outlined in The IIA’s recently updated Artificial Intelligence Auditing Framework. In such processes, internal audit can “ensure that legal and compliance teams monitor all current and emerging regulatory requirements,” according to the framework, which is among the resources available from The IIA’s Artificial Intelligence Knowledge Center.

In addition to the current draft of the EU AI Act, organizations can benchmark AI processes against frameworks such as the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, the U.K.’s draft framework for AI regulations, and updates from Japan’s interim discussions on AI.

While the approaches in these frameworks may overlap, it is critical that gap analyses match the financial firm’s regulatory landscape as closely as possible — and be continually monitored and updated. “With varying guidance provided through each regulatory body and government and the rapidly changing legal and regulatory landscape for AI, a global organization should consider the regional context for each AI development,” write Lukas Kruger and Lewis Keating, U.K.-based directors in Deloitte’s risk advisory practice, in “Digital Risk — Artificial Intelligence.” They add that internal audit should “decipher which controls and governance should be standardized across the organization and which should be discretionary.”

Moreover, internal auditors and other risk management functions should monitor the actions of financial sector standard-setting organizations and regulatory authorities such as central banks and securities regulators, the European Insurance and Occupational Pensions Authority, and the International Organization of Securities Commissions. 

The Value of Principles

Even as the EU readies its AI regulation, the reality is that AI is advancing too quickly for any regulatory body to fully address the technology’s risks. Recognizing this, financial sector internal audit functions should provide assurance that any changes made to comply with a regulatory framework also align with key organizational initiatives.

An example of these initiatives is improving AI literacy. “By deepening people’s understanding of AI use cases and its associated risks, a foundation can be built for the effective implementation of AI and the pragmatic management of its risks,” notes a Deloitte report, AI Regulation in the Financial Sector.

Internal audit also can provide assurance around AI usage based on existing control frameworks and update the organization on potential adjustments needed to comply with regulatory changes. In an uncertain regulatory environment, internal audit functions can be a source of clarity and direction for their organizations on AI risk and compliance.

Logan Wamsley

Logan Wamsley is associate manager, content development at The IIA.