Voice of the CEO: What to Know About the White House’s Executive Order On AI
Anthony Pugliese, CIA, CPA, CGMA, CITP | Oct 31, 2023
Yesterday, the White House released the first artificial intelligence (AI)-specific executive order in the United States, designed to promote responsible innovation by identifying and avoiding potential risks related to the emerging technology. The order, which carries the force of law for the federal agencies it directs, requires those agencies to take action in eight specific areas, each of which could affect organizations based or operating in the U.S. and, by extension, the internal auditors working to support those entities.
As such, internal auditors should be familiar with the core parts of the order. For multinational organizations, the U.S. government’s upcoming actions should be considered alongside other nations’ AI laws and regulations, such as the European Union’s Artificial Intelligence Act, China’s Generative AI Regulation, and Canada’s Artificial Intelligence and Data Act.
To help you get started in the U.S., here’s a breakdown of each area as listed by the White House, along with some considerations for internal auditors and their organizations:
1. New Standards for AI Safety and Security
For internal auditors, one key item in this focus area relates to standards. The National Institute of Standards and Technology (NIST) has been tasked with setting rigorous standards for safety testing, known as red-team testing. Further, companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must not only notify the federal government when training the model but also share the results of those red-team tests.
The executive order also calls for the development of standards related to biological synthesis screening and for detecting AI-generated content and authenticating official content.
As such, if your organization develops AI models, engineers certain biological materials, or develops content — which, for this last item, is most organizations today — you should stay abreast of these standards’ development. The U.S. Department of Commerce has been tasked with developing the guidance for content authentication and the watermarking of AI-generated content; however, the new guidance is not expected until late 2024 or early 2025.
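While we wait for that guidance, the underlying idea of content authentication is worth understanding now. The minimal Python sketch below is purely illustrative: it assumes a simple keyed-hash (HMAC) approach with a hypothetical signing key, whereas real-world schemes typically rely on public-key signatures or content-provenance standards, and nothing here reflects the forthcoming Commerce guidance.

```python
# Purely illustrative sketch of content authentication via a keyed hash (HMAC).
# This is NOT the forthcoming Commerce guidance; it only shows the general idea
# of verifying that official content has not been altered.
import hmac
import hashlib

SECRET_KEY = b"example-signing-key"  # hypothetical key held by the publisher

def sign(content: bytes) -> str:
    """Produce a tag the publisher attaches to official content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Confirm the content matches the tag and therefore was not altered."""
    return hmac.compare_digest(sign(content), tag)

press_release = b"Official statement issued by the agency."
tag = sign(press_release)
print(verify(press_release, tag))           # True: authentic and unaltered
print(verify(b"Tampered statement.", tag))  # False: altered or forged
```

The takeaway for internal auditors is conceptual rather than cryptographic: authenticated content carries verifiable evidence of its origin, and anything that fails verification should be treated as potentially altered or AI-generated.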
2. Protecting Americans’ Privacy
The White House also is calling on Congress to pass bipartisan data privacy legislation to protect Americans’ information — something The IIA has already been involved in. Earlier this year, The IIA offered proposed amendments to the American Data Privacy and Protection Act, including a call for independent assurance over impact assessments concerning algorithmic biases.
Further, the order calls for prioritizing the development of privacy-preserving techniques and for funding research into privacy-preserving technologies. If you’re an internal auditor whose organization works in these fields, you may find this is an area of opportunity for your business partners.
Finally, this section of the order also seeks to evaluate how federal agencies collect and use commercially available information, particularly as it relates to personally identifiable information (PII). As such, internal auditors working in federal agencies may want to start looking at how they are using — or may use — AI in relation to PII.
3. Advancing Equity and Civil Rights
Internal auditors working in real estate, healthcare, criminal justice, and related industries will want to pay close attention to this section. The White House has noted that, “Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, health care, and housing.” As a result, the order seeks guidance for landlords, federal benefits programs, and federal contractors to ensure they aren’t inadvertently using AI algorithms that perpetuate discrimination.
What might a biased algorithm look like? Consider photo-recognition software being developed to assess emotions based on facial movements. If, for example, the training data (photos) used to develop the AI consist exclusively of men in their 20s, the finished product may be unable to correctly read the emotions of men in their 60s or of women.
If your company is using AI in one of the specified industries or operating in certain geographic areas, you may already be looking at AI bias. For example, New York City passed a law to require employers to conduct bias audits of AI tools used in employment decisions, while Illinois was the first state to enact restrictions on the use of AI in hiring.
If you aren’t looking at AI bias, and your company is already using AI or planning to, you need to understand what information is or was used in the development of the AI tool. Consider, for example, how bias may have been introduced into the data when it was collected, prepared, and labeled.
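To make that concrete, here is a minimal, hypothetical sketch of the kind of disparity measure a bias audit might examine: selection rates and impact ratios by demographic group for an AI-assisted screening tool. The group names, the sample data, and the 0.8 threshold (the common “four-fifths” heuristic) are illustrative assumptions, not requirements drawn from the executive order or from any specific law.

```python
# Illustrative sketch (not an official audit methodology): compute selection
# rates and impact ratios by demographic group for an AI-assisted screening tool.
# Group names and data are hypothetical.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: list of (group, selected) tuples, where selected is True/False."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1

    # Selection rate per group
    rates = {g: selected[g] / totals[g] for g in totals}
    # Impact ratio: each group's rate relative to the highest-rate group
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical screening results: 40% of group_a selected vs. 20% of group_b
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 20 + [("group_b", False)] * 80

for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"  # common four-fifths heuristic
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Real bias audits go further, covering intersectional groups, statistical significance, and the provenance of the underlying data, but even a simple disparity calculation like this helps internal auditors ask the right questions.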
4. Standing Up for Consumers, Patients, and Students
This is another area where internal auditors working in healthcare need to pay attention. The executive order is specifically looking at harmful or unsafe healthcare practices involving AI. If this is your area, consider how your organization might be using AI in every step of the supply chain — from development and testing to promotion and sales.
For internal auditors in education, this is an area that could prove to be a value creator, as the executive order calls for the creation of resources to support educators deploying AI-enabled educational tools, such as personalized tutoring.
5. Supporting Workers
Every industry faces disruption from AI. As a result, internal auditors in every field should already be considering how the adoption of AI tools in their organization could impact workforce needs in the short and long term.
The executive order’s focus in this area could ultimately benefit internal auditors in this task, as the White House has ordered a report on AI’s potential labor-market impacts (though several such studies are already available from various entities), as well as the development of principles and best practices related to job displacement; labor standards; workplace equity, health, and safety; and data collection.
While we wait for the federal government to release these resources, internal auditors should start looking at what’s available today and plan for their organization accordingly.
6. Promoting Innovation and Competition
This is another area of opportunity for internal auditors in many industries, as the order calls for promoting AI research, providing small developers and entrepreneurs with resources, and streamlining the visa process for highly skilled immigrants and nonimmigrants whose work may support American organizations in AI development.
While the order seeks to help spur innovation, internal auditors should always consider compliance, reporting, and other risks that could arise with these new opportunities in AI, just as they would with other federal programs.
7. Advancing American Leadership Abroad
Of course, many organizations are international, as is their use of AI. The White House is calling on the U.S. Department of State and Department of Commerce to develop international frameworks for “harnessing AI’s benefits and managing its risks and ensuring safety.” This section also calls for developing and implementing AI standards with international partners and in standards organizations to ensure that the technology is “safe, secure, trustworthy, and interoperable.”
This is an area that, globally, The IIA is exploring. In August, we issued an updated look at our Artificial Intelligence Framework, and we will soon be launching a task force of AI — and internal audit — experts to develop new and updated resources for our members and the profession across the globe.
8. Ensuring Responsible and Effective Government Use of AI
This focus area is particularly relevant for our thousands of members working in the public sector, as it calls both for guidance on federal agencies’ use of AI and for accelerating the hiring of AI professionals and the adoption of AI products and services.
Individuals supporting federal agencies should seek a seat at the table early on in their organization’s process of AI adoption. By being brought into these discussions early, internal auditors can best help their agencies evaluate, understand, and communicate how AI could impact their ability to create value.
Understanding how and when federal agencies will be adopting AI also is important for internal auditors outside the U.S. federal government, as such rules could cascade to state or local governments or could impact the type and amount of business available to certain organizations, particularly those that contract with federal agencies.
What Internal Auditors Can Do Now
While the White House’s executive order gives internal auditors plenty to think about over the coming months and years, there are actions internal auditors can take right now to ensure they and their organizations are prepared for the continual disruption AI is certain to bring to most industries.
Here are five specific steps you can take, starting today:
1. Upskill
Stay current with the latest advancements in AI and how businesses are using them. Take advantage of tools like LinkedIn Learning or podcasts, as well as The IIA’s ongoing research and thought leadership, including our AI Auditing Framework, referenced above, and the Global Knowledge Brief series, The Artificial Intelligence Revolution, parts one, two, and three. The IIA also has created a task force and steering committee to explore the benefits, risks, and impacts of using and auditing AI.
2. Lead the Charge
Take a leadership role when it comes to advising on AI projects in your organization. Internal auditors are equipped to provide assurance on AI governance, investments, strategy, and controls. While technology changes rapidly (and not everyone is an expert), the processes for good governance are foundational. The IIA’s paper Internal Auditing’s Role in Corporate Governance can be an excellent resource for your organization, even if you don’t see the White House’s executive order as immediately applying to you.
3. Experiment
An organization’s needs and opportunities are always changing. Be open to experimenting with AI applications and their potential to improve internal audit and business processes. Of course, always exercise good control-minded judgment in doing so, so as not to compromise your organization’s information or rely on flawed AI logic. Make sure to test the validity of these experiments.
4. Educate
Help others in your organization realize the benefits of AI, when to use it, and how to mitigate the risks. Ensure your organization has relevant policies and procedures in place and that your staff are trained on them. As mentioned elsewhere, make sure you are at the table when your organization starts discussing the use of AI to ensure that appropriate controls are being considered from the outset.
5. Stay Human
Humans have the final say when it comes to making good business decisions, and those decisions are not just about numbers. Ethics and values are important, too. Be the expert on your organization’s code of ethics and The IIA’s Code of Ethics. Be the one to decide whether something is “the right thing to do.”
Takeaways
The rapid adoption of AI affects just about every aspect of our businesses and our lives. It’s nearly impossible to avoid AI in today’s world, whether it’s smarter smartphones, predictive social media algorithms, cashier-less grocery stores, medical imaging, automated screening in hiring processes, content creation, or hundreds of other examples. And as the new White House executive order demonstrates, AI is not only here to stay; it’s growing exponentially.
To help your organization navigate these new waters, stay up to date not only on the technology itself, but also on the laws and regulations surrounding it, as well as the risks and opportunities that come with them.
Remember: It’s the humans, not the technology, who create the ultimate business value.