In Singapore, the government is overhauling its popular “Ask Jamie” chatbot with large language model engines to provide more personalized answers to businesses and citizens about the country’s complex web of government services. Meanwhile in the U.S., governments are using AI to establish drone paths, coordinate traffic flows to aid first responders, make welfare payments, adjudicate bail queries, and much more.
While governments have been quick to use AI, they have been slower to enact laws and regulations to ensure AI’s ethical use and keep citizens safe as the technology permeates so much of society. One exception is the European Union’s (EU’s) Artificial Intelligence Act, approved in March, which is the world’s first comprehensive AI regulation.
The U.S. federal government, by comparison, still lacks overarching AI regulation. That has not stopped state and local governments from taking action, however. In such a fast-changing environment, internal auditors must stay current on the actions governments are taking, which agencies are taking them, how those actions affect each agency, and where internal audit can provide the most value.
State AI Task Forces
At the state level, a recent development is the creation of AI advisory councils or task forces. States such as Massachusetts, New Jersey, Oregon, Rhode Island, Texas, Washington, and Wisconsin have announced intentions to study and monitor AI use by public sector agencies, assess various risks, and advise on actions and the creation of future controls.
“As AI becomes more prevalent as a revolutionary tool in our lives and in our workforce, we must ensure that this technology is developed in a responsible and ethical way in Texas to help boost our state’s growing economy,” said Texas Governor Greg Abbott after signing the bill establishing that state’s advisory council.
The State Regulatory Web
In addition to AI councils, 17 states have enacted legislation to regulate the design, development, and use of AI. Most of these laws establish a regulatory and compliance framework to address data privacy and accountability risks. While these measures have some common aspects, they differ from state to state and may only apply to certain sectors or industries.
In May, Colorado became the first state to pass a comprehensive law regulating AI. The state’s Consumer Protections for Artificial Intelligence Act includes safeguards for health decisions made using AI and requires developers to provide disclosures about AI systems that make “high-risk” decisions, Politico reports. A previous Colorado law prohibits insurers from using consumer data from AI systems in a way that propagates discrimination.
Meanwhile, a California law requires criminal justice agencies that use AI-powered pre-trial risk assessment tools to analyze whether they may have inadvertent bias or discriminatory effects. Additionally, California, Illinois, and Maryland have laws to ensure individuals are informed how and when AI is being used. For example, an Illinois law requires employers to notify job applicants before a video interview that AI may be used to analyze their fitness for a particular role.
Local Responses
Some municipal governments are going even further with AI-related legislative actions than the states. For example, Seattle’s Generative Artificial Intelligence Policy is aligned with the priorities put forth by President Biden’s 2023 AI Executive Order. New York City’s Automated Employment Decision Tools law “prohibits employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.”
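To make the New York City requirement concrete, a bias audit typically compares selection rates across demographic groups. Below is a minimal, hypothetical sketch of one metric such audits commonly report: the impact ratio (each group's selection rate divided by the most-favored group's rate). The group names and numbers are invented for illustration, and the 0.8 review threshold reflects the EEOC's "four-fifths" guideline rather than any requirement of the NYC law itself.

```python
# Hypothetical bias-audit sketch: compute impact ratios across groups.
# Data and group names are illustrative, not from any real audit.

def impact_ratios(outcomes):
    """outcomes maps group name -> (number selected, number of applicants).

    Returns each group's selection rate divided by the highest group's rate.
    """
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative applicant data.
sample = {
    "group_a": (40, 100),  # 40% selection rate
    "group_b": (24, 100),  # 24% selection rate
}

for group, ratio in impact_ratios(sample).items():
    # Flag ratios below 0.8 per the EEOC four-fifths guideline (assumption,
    # used here only as a common reference point for review).
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this toy data, group_b's impact ratio of 0.60 would be flagged for review. A real audit under the NYC law involves required disclosures and independent auditors; this sketch only illustrates the arithmetic behind one headline metric.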
An AI Framework
As the patchwork of state and local AI regulation becomes more complex, it risks creating uncertainty for institutions, AI developers and users, and the public. In this environment, governing bodies, councils, risk management functions, and public sector internal audit functions need to monitor AI-related government actions and developments — and the risks they pose to the organization.
Internal audit’s growing AI-related responsibilities were the primary impetus for The IIA’s recently updated Artificial Intelligence Auditing Framework. “The biggest goal was to modernize the framework using examples that are more relevant to today’s world,” said George Barham, IIA Director of Standards and Professional Guidance, in a recent interview for the All Things Internal Audit podcast. “It’s written from the standpoint of auditors beginning their journey on the AI path, starting with a lot of foundational knowledge they would need to know, history, and examples of uses.”
From there, the framework explains how internal audit can discuss AI with stakeholders and build a clear picture of how the organization compares to the current AI landscape, Barham noted. Internal audit can use this information to help adapt the AI framework to its own organization.
According to the AI Auditing Framework, an effective framework consists of three domains:
- Governance, which refers to organizational roles and responsibilities for AI oversight.
- Management, which refers to day-to-day monitoring to ensure AI tools and controls are working as designed.
- Internal audit, which provides advisory and assurance services.
Additionally, the framework provides a variety of tools internal auditors can customize and incorporate into their processes. While not comprehensive, these considerations and checklists function as a “jumping-off point” for internal audit functions that are just beginning with AI, Barham said.
Auditing and Using AI
To get started with addressing AI risks, Barham suggested internal auditors:
- Have proactive discussions with management to understand their approach and current use of AI.
- Gain an understanding of how the organization assesses AI-related risks, including how those risks fit into the overall enterprise risk management process.
- Communicate internal audit’s ability to advise the organization about AI, especially in organizations that are not in a position to benefit from assurance activities.
As AI evolves, internal auditors and other stakeholders must continually refine their processes and responsibilities. Moreover, to stay abreast of AI risks, internal auditors must continually ask stakeholders:
- How does AI help the organization reach its strategic goals?
- What risks are involved and how is the organization mitigating them?
- Are there adequate internal controls surrounding AI-related processes?
- Is the data that will be used for AI complete, accurate, and reliable?
- How is AI tested before and after deployment to ensure biases do not exist?
In addition to assessing AI risks, internal audit should look for opportunities to leverage AI for its own processes. The accuracy and efficiency of tools such as generative AI can be immensely valuable for public sector agencies that may lack technical staff or have limited budgets. To help internal auditors get started, The IIA has recently launched a series of AI Use Case videos that demonstrate how to use generative AI to enhance risk assessments and create operational plans.
“Using generative AI can significantly enhance the efficiency and effectiveness of your audit,” says the videos’ host, Imran Nashir, a Netherlands-based member of The IIA AI Advisory Group. “It helps in identifying risk factors early, allows for better planning, and can quickly adapt to new information, making it a valuable tool in any auditor’s toolkit.”
An Ongoing Challenge
AI is here and advancing faster every day — which means governments cannot stand still while the law falls behind the technology. They must constantly be in motion to protect the health and safety of their citizens. As the AI landscape continues to evolve, so must the approach internal auditors take. Leveraging existing organizational relationships to partner with management as an advisor may be the first step, with assurance activities to follow.