On the Frontlines: Navigating AI Risk Management
Danephraim Abule Endashaw, CISA, ACCA | Sep 03, 2025

As the world is increasingly shaped by artificial intelligence (AI), the dialogue has shifted from whether we should integrate the technology into our lives to how. The potential benefits of AI are vast, from optimizing global supply chains to managing risk in real time to individualizing health care services. However, with great innovation comes greater risk, and that risk demands responsibility. The very algorithms designed to enhance our quality of life carry inherent risks that may turn against us if not managed properly.
This is where the AI Risk Management Framework, introduced in early 2023 by the U.S. National Institute of Standards and Technology (NIST), can help. NIST’s framework serves as a vital resource for organizations navigating the complicated world of AI, helping ensure the technology is created and used in a way that is safe, fair, and reliable. With the NIST AI framework, organizations can harness the powerful potential of AI while managing its risks.
Understanding the Core Concept: What is AI Risk?
Risk can be defined as a function of likelihood (the probability of an event happening) and impact (the consequence of the event if it does happen). The outcomes are not always negative; they can be positive, negative, or a mix of both. With AI, this approach helps organizations capitalize on the new opportunities the technology creates while adequately identifying and addressing potential harms. Think of an AI-powered self-driving car: it could significantly reduce crashes caused by human error, yet a fault in its GPS could also cause an accident. As with other risks, AI risk is determined by looking at how likely that malfunction is to happen (likelihood) and how bad the damage could be (impact).
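To make the likelihood-and-impact framing concrete, here is a minimal sketch in Python that scores two hypothetical risks on simple 1–5 scales. The scales, the scenarios, and the multiplication rule are illustrative assumptions, not something the NIST framework prescribes.

```python
# Illustrative sketch (not part of the NIST AI RMF): scoring a risk as a
# function of likelihood and impact, using hypothetical 1-5 scales.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into a simple ordinal risk score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

# Hypothetical example: a GPS fault in a self-driving system is judged
# unlikely (2) but severe if it occurs (5); a minor UI glitch is frequent (4)
# but low impact (1).
gps_fault = risk_score(likelihood=2, impact=5)   # 10
ui_glitch = risk_score(likelihood=4, impact=1)   # 4

print(f"GPS fault risk score: {gps_fault}")
print(f"UI glitch risk score: {ui_glitch}")
```

Even a toy score like this makes the point of the framework's framing: a rare event with severe consequences can still outrank a frequent but trivial one.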
The framework emphasizes a proactive, rather than reactive, approach to risk: anticipating potential problems and developing mitigation measures from the very beginning of the AI development life cycle. This helps reduce the probability of negative outcomes and builds trust with customers, users, and the public at large.
The Dual Nature of AI: Maximizing Benefits, Minimizing Harms
The NIST framework is unique in its focus on not just minimizing negative impacts but also maximizing positive ones. This dual, holistic approach encourages organizations and developers to discover and leverage the innovative solutions AI can provide while remaining attentive to the various types of risk that can arise from its deployment. Understanding and addressing these risks ensures that while we capitalize on the advantages of AI, we also protect against potential harms. The framework provides a helpful starting point by categorizing potential harms as harm to people (e.g., discrimination), harm to organizations (e.g., reputational damage), and harm to ecosystems (e.g., environmental damage or market destabilization). By systematically considering these potential harms, organizations can begin to craft strategies to mitigate them. This might involve adding stronger data privacy protocols, building in "explainability" features that make AI decision-making more transparent, or establishing clear lines of accountability for when things go wrong.
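As a rough illustration of how such a categorization might be put to work, the sketch below records hypothetical risks in a small register tagged by harm category and a candidate mitigation. The structure, field names, and entries are assumptions made for illustration, not NIST guidance.

```python
# Illustrative sketch: a simple register that tags each identified AI risk with
# one of the framework's harm categories and a candidate mitigation.

from dataclasses import dataclass

@dataclass
class HarmEntry:
    description: str
    category: str            # "people", "organization", or "ecosystem"
    example_mitigation: str

harm_register = [
    HarmEntry(
        description="Hiring model disadvantages a protected group",
        category="people",
        example_mitigation="Bias testing and explainability review before deployment",
    ),
    HarmEntry(
        description="Model failure causes reputational damage",
        category="organization",
        example_mitigation="Clear accountability and incident-response ownership",
    ),
    HarmEntry(
        description="Training workloads drive up energy use",
        category="ecosystem",
        example_mitigation="Track and report compute and energy footprints",
    ),
]

for entry in harm_register:
    print(f"[{entry.category}] {entry.description} -> {entry.example_mitigation}")
```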
The Human Factor: Overcoming the Assumption of Infallibility
A significant challenge in managing AI risk is the common perception that AI systems are inherently objective or superior to human decision-making. Such a mindset can lead to a phenomenon called automation bias (over-reliance on AI). The NIST framework cautions us that AI systems are created by humans and are therefore susceptible to human biases and errors. For example, an AI-powered hiring tool trained on historical data from a company with a history of gender bias may inadvertently learn to favor male candidates. This bias could go undetected if the organization assumes the AI is objective, perpetuating and even amplifying existing inequalities. To counter this, the NIST framework advocates a "socio-technical" approach, which recognizes that AI systems are not just technological objects but are deeply embedded in social, organizational, and cultural contexts. This requires involving a diverse range of stakeholders in the risk management process, from data scientists and engineers to ethicists, social scientists, and the people who will ultimately be affected by the AI system.
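To show how the hiring example might surface in practice, here is a minimal sketch of one simple check: comparing selection rates between two groups of candidates. The sample data, the four-fifths threshold, and the framing are illustrative assumptions rather than anything the NIST framework mandates, and real bias testing within a socio-technical approach involves far more than a single metric.

```python
# Illustrative sketch: a basic check for one form of bias in a hiring tool's
# output, comparing selection rates between two hypothetical groups.

def selection_rate(decisions: list[bool]) -> float:
    """Fraction of candidates in a group who were selected."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (True = recommended for interview).
male_candidates   = [True, True, True, False, True, True, False, True]
female_candidates = [True, False, False, True, False, False, True, False]

ratio = disparate_impact_ratio(male_candidates, female_candidates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common "four-fifths" heuristic used as an assumed threshold
    print("Warning: selection rates differ enough to warrant human review.")
```

A check like this is only a trigger for human judgment; deciding why the rates differ, and what to do about it, is exactly where the diverse stakeholders described above come in.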
The Way Forward: A Culture of Responsible AI
When it comes to getting AI right, everything hinges on how we think about risk from the very beginning. This isn't just a technical problem; it's a human one. It requires a fundamental shift in our thinking — moving away from focusing only on the code and algorithms, and instead putting people at the center of the equation. By using the principles laid out in frameworks like the AI Risk Management Framework, organizations aren't just checking a box — they're changing their perspective. That's how we steer clear of potential disasters and unlock the incredible, positive things AI can do for us.
The views and opinions expressed in this blog are those of the author and do not necessarily reflect the official policy or position of The Institute of Internal Auditors (The IIA). The IIA does not guarantee the accuracy or originality of the content, nor should it be considered professional advice or authoritative guidance. The content is provided for informational purposes only.