The bill builds upon the AI Training Act signed by President Biden in 2022, which requires the director of the Office of Management and Budget to establish an AI training program for the acquisition workforce.
Around the world, the web of legislation and guidance regarding AI education and training is poised to grow more tangled in the coming years (see “The Global Focus on AI” on this page). One example is the forthcoming European Union (EU) AI Act, which will be the world’s first major attempt at AI regulation.
Such regulations present an added wrinkle for organizations, which must account for the inherent risks AI brings, such as lack of transparency, data privacy concerns, algorithmic bias and discrimination, and security. On top of that, they must address the legal and compliance risks legislative bodies will place on them regarding AI use, education, and training. For public sector organizations, internal audit will be invaluable in providing assurance that they can adapt to this environment.
A Long Way to Go
In dealing with AI, public sector organizations have some gaps to fill related to leadership, strategic planning, and capacity. Research from Stanford University, Implementation Challenges to Three Pillars of America’s AI Strategy, details how U.S. federal agencies have fallen short of AI-related legal requirements recently placed on them:
- 88% of federal agencies failed to submit AI plans to identify regulatory authorities and mechanisms to promote responsible AI.
- 76% of agencies failed to submit AI use case inventories.
- The Office of Personnel Management has yet to establish an AI occupational series or estimate workforce needs, as required under the AI in Government Act.
Long term, the potential consequences of these shortcomings are “sobering,” the report states. “Failure of [the government] to provide proper resources and mandate senior personnel to discharge these responsibilities, fundamentally risks giving up on U.S. leadership in AI innovation and responsible AI,” it notes. Additionally, without sufficient top-level support, many public sector entities must fend for themselves, resulting in an AI implementation picture that is, at best, “fragmented and inconsistent.”
“The federal workforce does Herculean work, but faces fundamental challenges developing teams that can design, implement, and regulate AI effectively and responsibly,” said Stanford Professor Daniel Ho, one of the report’s authors. Appearing in May before the U.S. Senate Committee on Homeland Security and Governmental Affairs, Ho said less than 2% of AI professionals with doctoral degrees work in government.