COSO Issues GenAI Guidance

Articles Jake Lamb Mar 11, 2026

Report shows how to align AI governance with control frameworks.

COSO has released new guidance to help organizations manage the risks of generative AI while maintaining effective internal controls. The report applies COSO’s Internal Control–Integrated Framework to AI systems, offering control mapping, risk assessment tools, and governance guidance across the AI lifecycle.

New COSO guidance aims to help organizations manage the risks of generative artificial intelligence (GenAI) while strengthening internal controls. Achieving Effective Internal Control Over Generative AI provides a practical roadmap for applying COSO’s Internal Control–Integrated Framework to AI systems.

GenAI is streamlining work across finance, operations, and compliance — from faster reconciliations to quicker analysis and decisions, the report says. “Generative AI is transforming how organizations work, make decisions, and manage information,” Lucia Wind, executive director and chair of COSO, said in a press release. She emphasized that while the technology has great potential, it also requires disciplined oversight grounded in proven internal control principles.

According to the report, GenAI introduces new cyber risks, including prompt manipulation, opaque reasoning, model drift, and frequent system changes. Left unchecked, these risks can undermine reliable reporting, compliance, and operational integrity.

“Generative AI is advancing faster than most governance frameworks were designed to handle,” said IIA President and CEO Anthony Pugliese in a LinkedIn post announcing the new guidance. “As organizations embed AI into core business processes, many are still determining how to manage the risks that accompany it.”

Rather than proposing a new governance structure, the publication applies the COSO framework’s five internal control components — control environment, risk assessment, control activities, information and communication, and monitoring activities — to GenAI use cases. “GenAI introduces risks that evolve as quickly as the technology itself,” said co-author David Wood, professor at Brigham Young University, in COSO’s press release. “By grounding GenAI governance in COSO’s established internal control principles, organizations can build systems that are both adaptable and audit ready.”

The report introduces a “capability-first” taxonomy that groups GenAI use cases into eight categories: ingestion, transformation, posting, orchestration, judgment, monitoring, regulatory intelligence, and human-AI interaction. Each category includes control considerations reflecting how risks arise across the data-to-decision lifecycle.
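As a rough illustration, the capability-first taxonomy can be pictured as a simple lookup that attaches a control consideration to each of the eight categories. The category names below come from the report; the example controls attached to them are hypothetical placeholders, not COSO’s.

```python
# Sketch of the report's eight-category, capability-first taxonomy.
# Category names are from the article; the control notes are
# illustrative assumptions, not drawn from the COSO publication.
GENAI_TAXONOMY = {
    "ingestion": "validate and log all source data entering the model",
    "transformation": "review AI-altered records for material changes",
    "posting": "require human approval before AI output reaches the books",
    "orchestration": "restrict which systems an AI workflow may invoke",
    "judgment": "document the rationale when AI informs a decision",
    "monitoring": "track model drift and error rates against baselines",
    "regulatory intelligence": "verify cited rules against primary sources",
    "human-AI interaction": "train users on escalation and override paths",
}

def control_note(category: str) -> str:
    """Return the illustrative control consideration for a category."""
    try:
        return GENAI_TAXONOMY[category.lower()]
    except KeyError:
        raise ValueError(f"unknown GenAI capability category: {category}")

print(control_note("posting"))
```

In practice, the report’s control considerations per category are far richer than one line each; the point of the sketch is only that the taxonomy is a stable index an organization could hang its own control catalog on.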

The publication also includes audit-ready control mapping, minimum control expectations aligned with the COSO framework, and practical implementation tools such as risk assessment matrices, testing procedures, and metric dashboards.
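The article does not reproduce the report’s tools, but a risk assessment matrix of the kind it mentions is typically a likelihood-by-impact grid. A minimal sketch follows, with assumed 1–5 scales and rating thresholds that are not taken from the COSO publication.

```python
# Minimal likelihood-by-impact risk matrix: a generic sketch of the
# kind of tool the report mentions. The 1-5 scales and the rating
# thresholds are assumptions for illustration only.
def risk_rating(likelihood: int, impact: int) -> str:
    """Score a GenAI risk on assumed 1-5 likelihood and impact scales."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "high"    # e.g., unchecked model drift in financial reporting
    if score >= 8:
        return "medium"  # e.g., prompt manipulation in an internal tool
    return "low"

# Example: frequent system changes (likelihood 4) with moderate impact (3)
print(risk_rating(4, 3))  # medium
```

A real matrix would also carry owners, existing controls, and residual ratings per risk; the scoring logic itself is the small part.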

The COSO report was authored by Wood, Scott Emett of Arizona State University, Marc Eulerich of the University of Duisburg-Essen, Jason Guthrie of Ernst & Young, and Jason Pikoos of Meta. It is intended for management teams, risk and compliance professionals, controllers, IT and information security leaders, internal auditors, external auditors, and board oversight committees.

Jake Lamb

Jake Lamb is the managing editor of Internal Auditor.