
The Big Idea: Explainable AI Pulls Back the Curtain on Machine-made Decisions

David Salierno, Feb. 6, 2023

In March 2018, Elaine Herzberg was pushing her bicycle across a road in Tempe, Ariz., when she was hit and killed by a self-driving Volvo that was part of a test conducted by Uber. Herzberg is believed to be the first pedestrian killed by a self-driving car.

While also faulting the vehicle’s human safety driver, an exhaustive investigation found that its artificial intelligence failed to “see” Herzberg.

Figuring out what happened took more than 20 months. The Tempe Police Department and the U.S. National Transportation Safety Board came to different conclusions about the role that AI played in the accident. Both faulted Herzberg — who was impaired and pushing her bike across a busy road at night outside of a marked pedestrian crossing — but the NTSB also severely criticized Uber for its technology failures and poor safety culture. Arizona prosecutors declined to file criminal charges against the company, instead charging the Uber safety driver with negligent homicide for not stopping the car manually. Meanwhile, a lawsuit filed by Herzberg’s family has yet to be resolved, in part because of the complexity of putting AI on trial in a courtroom.

The inability of experienced state and federal investigators to agree on how AI contributed to Herzberg’s death — and their protracted investigation — points to the need to better understand the technology. How good is AI? To what extent should it be trusted? And how does it make decisions? Explainable AI, or XAI, can help answer these questions — and it may help internal auditors provide assurance to stakeholders around organizational use of AI.

What Is Explainable AI?

“AI doesn’t need to be scary,” says Alan Cross, chief commercial officer at Diveplane, an AI business solutions firm based in Raleigh, N.C. “Instead, it needs to be understandable and accountable. Understandable AI allows users to forensically examine what data is leading the AI to make certain decisions.”

Put another way, XAI gives users the ability to see inside what’s described as the “black box” of algorithmic decision-making. With AI, someone can see what is being fed into the black box and what comes out of it, but lacks visibility into the workings in between. When the box can be opened to reveal those inner workings, an otherwise mysterious process becomes transparent.

And that transparency is critical, given the extent to which AI has permeated daily life and crept into organizational decision-making. Whether it’s a health app on a wearable device that monitors vital signs, an online loan application that can approve (or deny) credit in minutes, or a human resources tool that determines whether an applicant gets hired — AI seems ubiquitous these days.

While there are many use cases where AI constitutes nothing less than a complete game changer and appears to benefit the public good, the potential for unintended harm remains. For example, a recent U.S. Federal Trade Commission report, Combatting Online Harms Through Innovation, cautions about the use of AI aimed at addressing online wrongdoing such as fraud, media manipulation, and bias. “Both designers and users of AI tools must … continue to monitor the impact of their AI tools, since fair design does not guarantee fair outcomes,” the report warns.

The proliferation of AI across so many spheres of everyday life means more people, including internal auditors, need to be able to examine AI models to ascertain what is influencing their decisions.

Peering Inside the Black Box

Some companies have already adopted tools that help users understand how their AI makes decisions. In 2019, for example, Uber and GM each made proprietary tools for visualizing inputs from self-driving software freely available on the internet. And Google has a cloud service that enables users to upload an AI model and see a graphic depiction of how the model weights factors used in its decision-making.

When XAI is used, Cross says, it makes processes more auditable. Working with an AI technology expert, such as a data scientist, internal auditors could answer specific questions about the data features that drive organizational decisions, enabling them to make more informed assessments. AI algorithms make decisions by predicting outcomes based on patterns in data. While auditors may not be the ones “pressing the buttons,” the data expert can help them determine what the AI is predicting and how those predictions are being made. “If they’ve got access to an understandable AI platform,” Cross says, “it can dramatically enhance internal auditors’ ability to do their job.”
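For readers who want a concrete picture of what “examining the features that drive decisions” can look like, the sketch below uses permutation importance, a common model-agnostic technique available in scikit-learn. The loan-approval scenario, the feature names, and the synthetic data are illustrative assumptions, not a depiction of any particular vendor’s platform.

```python
# Sketch: probing which data features a "black box" model relies on,
# using permutation importance. Scenario and data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
    "years_employed": rng.integers(0, 30, n),
})
# Synthetic "approval" outcome driven mostly by income and debt_ratio.
y = ((X["income"] / 100_000 - X["debt_ratio"] + rng.normal(0, 0.2, n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)  # the "black box"

# Shuffle one feature at a time and measure how much accuracy drops:
# the bigger the drop, the more the model was leaning on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>15}: {score:.3f}")
```

A printout like this gives an auditor and a data scientist a shared starting point for questions such as whether the features the model leans on are the ones the organization intends it to use.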

The Legal Landscape

While the transparency enabled by XAI can go a long way toward preventing unfair or harmful outcomes, AI poses more than ethical and safety concerns; it also can create legal liabilities for its users. For example, is a human resources tool evaluating resumes fairly, or is it disadvantaging minority applicants or those who live in certain neighborhoods? Such outcomes have been widely reported and are now subject to regulation. And even more sweeping regulations are under consideration, especially in the European Union.

The EU’s General Data Protection Regulation, for example, states that when individuals are impacted by decisions made through AI, they are entitled to “meaningful information about the logic involved.” And the recently proposed AI Act would enable a court to order a company that uses an AI system to make impactful decisions to turn over evidence of how the software works. If an AI system is determined to be involved in decision-making that leads to a harmful outcome, a presumption of liability could exist (see “Is XAI Always Necessary?” below).

In the U.S., the California Consumer Privacy Act dictates that users have a right to know inferences made about them by AI systems and what data was used to make those inferences. Moreover, a New York City law that went into effect Jan. 1 prohibits the use of AI tools to screen a candidate or employee for an employment decision unless that tool has passed a “bias audit” within one year before its use.
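The law leaves the mechanics of a “bias audit” to implementing rules, but one widely used measure in such audits is the impact ratio: each group’s selection rate divided by the highest group’s selection rate. The sketch below illustrates only that arithmetic; the groups and outcomes are hypothetical.

```python
# Sketch of an impact-ratio calculation often used in bias audits:
# each group's selection rate divided by the highest group's selection rate.
# The data and column names here are hypothetical.
import pandas as pd

screening = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "selected": [ 1,   1,   0,   1,   0,   0,   0,   1,   1,   1 ],
})

rates = screening.groupby("group")["selected"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios.round(2))
# Ratios well below 1.0 for any group (a common rule of thumb is 0.8)
# would flag the tool for closer review.
```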

For internal auditors looking to provide assurance on AI compliance, XAI could provide a window into exactly what is driving AI decisions and whether any potential bias is inadvertently baked into the algorithms. “Transparent AI isn’t just relying on data and hope as a strategy,” Cross says. “In the case of potential AI-related hiring bias, transparent AI can actually tell you if there’s some heavy influence on where a person lives, which is determining whether or not they get hired. And that clearly is wrong.”
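One simple way to test for the kind of influence Cross describes is a counterfactual check: score otherwise-identical candidates who differ only in where they live. The sketch below assumes a hypothetical, already-trained scoring model with a scikit-learn-style predict_proba interface; the feature names and zip codes are made up for illustration.

```python
# Sketch: a counterfactual sensitivity check -- does changing only the
# candidate's location change the model's hiring score?
# `model`, the feature names, and the zip codes are illustrative assumptions.
import pandas as pd

candidate = {"years_experience": 6, "skills_score": 82, "zip_code": "10027"}

def location_sensitivity(model, candidate: dict, zip_codes: list) -> pd.Series:
    """Score copies of one candidate that differ only in zip_code."""
    variants = pd.DataFrame([{**candidate, "zip_code": z} for z in zip_codes])
    scores = model.predict_proba(variants)[:, 1]  # probability of "hire"
    return pd.Series(scores, index=zip_codes)

# Example usage (assumes `model` is an already-trained classifier that
# accepts these columns, e.g., via a preprocessing pipeline):
# print(location_sensitivity(model, candidate, ["10027", "10021", "11368"]))
# Large gaps between otherwise-identical candidates suggest location
# (or something it proxies for) is driving the decision.
```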

Trust in a Better Future

Regardless of legal compulsions, Cross says the community of AI producers and users should work with the public sector to make AI more transparent and accountable. “There are opportunities for governments to set the bar higher in terms of data and use of AI,” he says. “But beyond the public sector, there’s been a groundswell of organizations saying there’s got to be a better way.”

For Cross and others, XAI is the pathway to a better future, one where AI doesn’t unfairly complicate people’s lives or harm them. He says that’s a necessary goal for all AI. “It’s all about public trust,” he says. “You’ve got to be able to explain how your AI works to people who aren’t data scientists. They have to feel comfortable, as a consumer, a human being, and a member of society, that AI is not unfairly impacting them.”

Cross acknowledges the public trust isn’t quite there yet. There is still too much misuse of AI weighing down public perception of the technology. But XAI, he says, will help get us there.

For internal auditors, understanding the issues around AI — not just the legalities, but the ethics and fairness, too — will be essential. Helping ensure clients build and use the AI tools they need to stay competitive, while also earning the trust of employees, customers, regulators, and the public, may be crucial to organizational success as AI becomes further entrenched in everyone’s lives.

Is XAI Always Necessary?

Not all AI needs to be explainable at a granular level. The AI in a smartphone’s navigation app, for example, is unlikely to cause harm. It might be wrong, it might inconvenience the user, but it’s not going to prevent someone from getting a credit card or a job promotion.

Differentiating between AI that might cause harm and AI that doesn’t was part of the impetus behind the European Union’s proposed AI Act. In establishing standards of liability, it defines three classes of AI — one for AI that creates unacceptable risk (which the law would ban), another for high-risk applications, and the third for applications that are neither banned nor considered high risk, which would mostly be left unregulated. If the AI does not have meaningful impact on people’s lives, it may not need to be understandable — although XAI can be helpful in developing and debugging any AI system.

David Salierno

David Salierno is managing partner at Nexus Brand Marketing in Winter Park, Fla.