Online exclusive: AI in the Public Sector

Online Exclusives Logan Wamsley Aug 17, 2023

Auditors can help steer government organizations through the twists and turns of artificial intelligence risks and requirements.

Governments worldwide are establishing training and other requirements for public sector use of artificial intelligence. Internal auditors should become knowledgeable about the AI landscape to help agencies navigate these requirements and address risks.

Risks are evolving rapidly, and few are moving faster than those related to artificial intelligence (AI). To keep up, the U.S. House of Representatives advanced the AI Training Expansion Act in July. The bipartisan-sponsored bill would require AI training for supervisors, managers, and data and technology workers whose jobs are linked to the federal government’s use of AI systems.

The bill builds upon the AI Training Act signed by President Biden in 2022, which requires the director of the Office of Management and Budget to establish an AI training program for the acquisition workforce.

Around the world, the web of legislation and guidance regarding AI education and training is poised to grow more tangled in the coming years (see “The Global Focus on AI” on this page). One example is the impending release of the European Union (EU) AI Act, which will be the world’s first major attempt at AI regulation.

Such regulations present an added wrinkle to organizations, which must account for the inherent risks AI brings such as lack of transparency, data privacy, algorithmic bias and discrimination, and security. On top of that, they must address the legal and compliance risks legislative bodies will place upon them regarding AI use, education, and training. For public sector organizations, internal audit will be invaluable in assuring they can adapt to this environment.

A Long Way to Go

In dealing with AI, public sector organizations have some gaps to fill related to leadership, strategic planning approaches, and capacity. Research from Stanford University, Implementation Challenges to Three Pillars of America’s AI Strategy, details the AI-related legal requirements recently placed on U.S. federal agencies:

  • 88% of federal agencies failed to submit AI plans to identify regulatory authorities and mechanisms to promote responsible AI.
  • 76% of agencies failed to submit AI use case inventories.
  • The Office of Personnel Management has yet to establish an AI occupational series or estimated workforce needs as required under the AI in Government Act.

Long term, the potential consequences of these shortcomings are “sobering,” the report states. “Failure of [the government] to provide proper resources and mandate senior personnel to discharge these responsibilities, fundamentally risks giving up on U.S. leadership in AI innovation and responsible AI,” it notes. Additionally, without sufficient top-level support, many public sector entities must fend for themselves, resulting in an AI implementation picture that is, at best, “fragmented and inconsistent.”

“The federal workforce does Herculean work, but faces fundamental challenges developing teams that can design, implement, and regulate AI effectively and responsibly,” said Stanford Professor Daniel Ho, one of the authors of the report. Appearing in May before the U.S. Senate Committee on Homeland Security and Governmental Affairs, Ho said less than 2% of AI professionals with doctorate degrees work in government.

A Guide Through Uncertainty

The public sector internal audit community has long been aware of these struggles. However, there are several steps and strategies internal audit can take — through awareness and assurance — to guide their organizations onto a path for long-term success.

Internal auditors should make every effort to educate themselves on safe and ethical AI implementation and use. While it is unrealistic for every auditor to become an authority on AI technology, all can increase their knowledge of the AI landscape, including new regulations, guidance, hiring, training, and frameworks.

“By keeping abreast of AI-related news, resources, and frameworks created by subject matter experts, you can accelerate the introduction of AI risk management practices and minimize all of the unpleasant bewilderment,” writes Philip McKeown, a California-based managing consultant with CrossCountry Consulting, in an AuditBoard blog post, “The Four Waves of Artificial Intelligence: Key Considerations for Internal Audit.”

Internal auditors can start by increasing their knowledge of authoritative risk management frameworks such as the U.S. National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework and promoting their tenets to stakeholders. Such flexible, voluntary frameworks “help companies and other organizations in any sector and size to jump-start or enhance their AI risk management approaches,” says Laurie Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director. Internal audit can enhance the organization’s implementation of risk frameworks by providing all relevant stakeholders an AI lens in its future risk assessments.

Especially with AI risks, the audit committee, board (or governing body), and executive management team must be aligned regarding the full implications of the technology. According to The IIA’s Global Perspectives and Insights, Artificial Intelligence — Considerations for the Profession of Internal Auditing Part 1, internal audit can leverage its experience with the organization’s risks to help the entity evaluate, understand, and communicate the impact that AI may have on the organization’s ability to create value.

The Global Focus on AI

The trend of public sector bodies taking swift, decisive steps to equip their workforces to operate safely and effectively in an AI-driven ecosystem isn’t limited to the U.S.:

  • In 2019, the U.K. Central Digital and Data Office and Office for Artificial Intelligence published guidance on building and using AI in the public sector, which provides resources and case studies on assessing, planning, and managing AI safely and ethically.
  • In 2022, the State Secretary for Digital Affairs in the Netherlands announced an algorithm registry, which lists the AI applications currently being used by the Dutch government and provides information regarding monitoring, human intervention, risks, and performance standards.
  • In 2018, Italy, with the aid of its Task Force on Artificial Intelligence, released “The White Paper,” which details various methods of adopting AI into public policies, including the importance of understanding AI tools for anyone who “works in the offices of the Public Administration.”

Attracting AI Talent

Frameworks alone have a limited impact without a baseline enterprisewide understanding of the risk in question — which requires awareness and education. For a subject as complex as AI, this is not an easy task, but it is one that must be pursued. The advantages are twofold: 1) It creates and enhances talent within the organization; and 2) it makes the organization attractive to experienced talent already in the field.

“We need better models — building on the U.S. Digital Service, public-private partnerships, and academic-agency partnerships — to attract AI talent to public service and build cross-functional teams,” Ho told the Senate committee. He stressed the importance of establishing new pathways and trajectories for technical talent in government. “Retaining AI talent requires giving them meaningful positions related to their expertise,” he said.

In this pursuit, internal audit can help in several ways. For example, it could highlight to board or governing body members the value of fostering in-house AI talent. Several options can give employees at all levels the opportunity to grow their AI competencies.

Washington D.C.-based nonprofit Partnership for Public Service, for example, offers a free AI Federal Leadership Program for senior public sector executives on AI best practices and how to lead AI technology implementation in their organizations. For employees who are not executives, the University of Pennsylvania offers free courses on AI fundamentals, strategies, and governance. While these options may not produce a workforce of AI experts, they can sow the seeds of a productive, progressive AI culture.

Seizing a High-impact Opportunity

Just because AI regulation is still in its infancy does not mean that organizations should view any aspect of this technology — from awareness and education to implementation — as voluntary. Nothing could be further from the truth. As the Stanford report says, “Current requirements may appear to agencies like ‘unfunded mandates’ and be treated like checklists, when they should in fact be seized as opportunities for strategic planning around AI.”

Central to internal audit’s mission in its organization is to dispel this perspective. Indeed, it might be every bit as critical to ensuring an organization’s future as financial stability. The rise of AI provides public sector internal auditors an opportunity to be part of a seismic change that will have far-reaching impacts for government, business, and society.

Logan Wamsley

Logan Wamsley is associate manager, Content Development, at The IIA.