While also faulting the Volvo’s human safety driver, an exhaustive investigation found that the vehicle’s artificial intelligence failed to “see” Herzberg.
Figuring out what happened took more than 20 months. The Tempe Police Department and the U.S. National Transportation Safety Board came to different conclusions about the role that AI played in the accident. Both faulted Herzberg — who was impaired and pushing her bike across a busy road at night outside of a marked pedestrian crossing — but the NTSB also severely criticized Uber for its technology failures and poor safety culture. Arizona prosecutors declined to file criminal charges against the company, instead charging the Uber safety driver with negligent homicide for not stopping the car manually. Meanwhile, a lawsuit filed by Herzberg’s family has yet to be resolved, in part because of the complexity of putting AI on trial in a courtroom.
The inability of experienced local and federal investigators to agree on how AI contributed to Herzberg’s death, and the protracted nature of their investigation, points to the need to better understand the technology. How good is AI? To what extent should it be trusted? And how does it make decisions? Explainable AI, or XAI, can help answer these questions, and it may help internal auditors provide assurance to stakeholders around organizational use of AI.
What Is Explainable AI?
“AI doesn’t need to be scary,” says Alan Cross, chief commercial officer at Diveplane, an AI business solutions firm based in Raleigh, N.C. “Instead, it needs to be understandable and accountable. Understandable AI allows users to forensically examine what data is leading the AI to make certain decisions.”
Put another way, XAI gives users the ability to see inside what’s described as the “black box” of algorithmic decision-making. With AI, someone can see what is being fed into the black box — as well as its output — but lacks visibility into the workings in between. But when the box can be opened to reveal those inner workings, it takes an otherwise mysterious process and makes it transparent.
And that transparency is critical, given the extent to which AI has permeated daily life and crept into organizational decision-making. Whether it’s a health app on a wearable device that monitors vital signs, an online loan application that can approve (or deny) credit in minutes, or a human resources tool that determines whether an applicant gets hired — AI seems ubiquitous these days.
While there are many use cases where AI constitutes nothing less than a complete game changer and appears to benefit the public good, the potential for unintended harm remains. For example, a recent U.S. Federal Trade Commission report, Combatting Online Harms Through Innovation, cautions about the use of AI aimed at addressing online wrongdoing such as fraud, media manipulation, and bias. “Both designers and users of AI tools must … continue to monitor the impact of their AI tools, since fair design does not guarantee fair outcomes,” the report warns.
The proliferation of AI across so many spheres of everyday life means more people, including internal auditors, need to be able to examine AI models to ascertain what is influencing their decisions.
Peering Inside the Black Box
Some companies have already adopted tools that help users understand how their AI makes decisions. In 2019, for example, Uber and GM each made a proprietary tool for visualizing inputs from their self-driving software freely available on the internet. And Google offers a cloud service that enables users to upload an AI model and see a graphic depiction of how the model weights the factors used in its decision-making.
When XAI is used, Cross says, it makes processes more auditable. Working with an AI technology expert, such as a data scientist, internal auditors could answer specific questions about the data features that drive organizational decisions, enabling them to make more informed assessments. AI algorithms make decisions by predicting outcomes based on patterns in data. While auditors may not be the ones “pressing the buttons,” the data expert can help them determine what the AI is predicting and how it is arriving at those predictions. “If they’ve got access to an understandable AI platform,” Cross says, “it can dramatically enhance internal auditors’ ability to do their job.”
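To make this concrete, the following is a minimal, hypothetical sketch of the kind of analysis a data scientist might walk an auditor through. It uses the open-source SHAP library with scikit-learn on synthetic data; the credit-approval scenario and the feature names (income, debt_ratio, years_employed) are invented for illustration and are not drawn from any platform mentioned in this article.

```python
# Illustrative sketch only: surfacing per-feature contributions to a single
# prediction so an auditor can see which data features drove the decision.
# The model, data, and feature names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic "historical loan decisions" standing in for real organizational data.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
    "years_employed": rng.integers(0, 30, 500),
})
y = (X["income"] / 1_000 - 40 * X["debt_ratio"] + rng.normal(0, 5, 500) > 20).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Explain one applicant's decision: which features pushed the prediction
# toward approval or denial, and by how much.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)[0]

for feature, contribution in zip(X.columns, contributions):
    print(f"{feature}: {contribution:+.3f}")
```

The signed contribution printed for each feature is the sort of evidence an auditor could cite when asking why a particular applicant was approved or denied. In a real engagement, the equivalent output would come from the organization’s own XAI platform rather than an ad hoc script.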
The Legal Landscape
While the transparency enabled by XAI can go a long way toward preventing unfair or harmful outcomes, AI poses more than ethical and safety concerns — it also can create legal liabilities for its users. For example, is a human resources tool evaluating resumes fairly, or is it disadvantaging minority applicants or those who live in certain neighborhoods? These types of specific outcomes have been widely reported and are now subject to regulation. And even more sweeping regulations are under consideration, especially in the European Union.
The EU’s General Data Protection Regulation, for example, states that when individuals are impacted by decisions made through AI, they are entitled to “meaningful information about the logic involved.” And the recently proposed AI Liability Directive would enable a court to order a company that uses an AI system to make impactful decisions to turn over evidence of how the software works. If an AI system is determined to be involved in decision-making that leads to a harmful outcome, a presumption of liability could exist (see “Is XAI Always Necessary?” on this page).
In the U.S., the California Consumer Privacy Act gives consumers the right to know the inferences AI systems make about them and what data was used to make those inferences. Moreover, a New York City law that went into effect Jan. 1 prohibits the use of AI tools to screen a candidate or employee for an employment decision unless the tool has passed a “bias audit” within one year before its use.