
AI Common Sense

Blogs | Sara I. James, PhD, CIA | Apr 14, 2026

We can all agree that LLMs are changing the workplace and even people’s personal lives. There are now multiple options, from free, publicly accessible software to more secure, tailored agents developed in-house. They are easy to use, promise rapid results, and are often imposed by senior management.

On LinkedIn and other media, one can read comments that feature a divisive, sometimes panicked tone: “Adapt, adopt, or be left behind!” versus “The bots will kill us all.” But what if the reality is, as I hope, more nuanced, and within our control?

This is where internal audit can provide assurance in times of rapid change. As organizations scramble to demonstrate that they are keeping pace with tech, important questions seem to be going unanswered, basic controls falling away. We have been here before, with society as a whole surviving the dot-com bubble and the global financial crisis. But these events cost many organizations and people dearly.

Turning to our current technogenic environment, are we asking, “What is the problem that an LLM — or other agent — is designed to solve? Are staff members using legitimate applications for real needs?” In the past year, I’ve encountered the following:

  • Internal auditors who admit to emailing workpapers to their personal email addresses, so they can use ChatGPT to create findings. This is because the long-promised in-house LLM is still not available, yet deadlines remain tight “because AI means you can do this more quickly.”
  • Assurance professionals telling me their boards have mandated near-universal LLM use, so as not to risk falling behind peers. No objectives, no criteria — but consequences for those who don’t comply. (Accenture recently publicized its policy of promoting only those who use AI, with those “resisting” risking their jobs.)
  • An IIA conference audience raising all hands to my question, “Who is using LLMs such as Copilot?” Yet with one exception, no one had heard of well-publicized Copilot security weaknesses such as EchoLeak. There have been others since, and, of course, Copilot will not be the only one.
  • One internal audit team proudly showing me their before-and-after reports, the latter produced by an in-house LLM. The report content in both cases was seriously flawed, with the only improvement being a lack of typos in the LLM-produced report. The team had considered speed and ease, not quality.

What do these examples — which are not unique — tell us about our responsibility as internal auditors? Within these examples, there are multiple opportunities for us to use our professional skepticism and to communicate with honesty and professional courage. We must remember that efficiency exists only where effectiveness has already been established. Producing reports quickly with an LLM is a false economy if they are of poor quality, or if users have compromised confidential data.

As with other organizational trends, a company’s use of LLMs depends on the tone at the top. Do those insisting on blanket LLM use understand the risks and benefits? Have they seen the evidence that increased use of LLMs can impair cognitive function quickly and lastingly (see the study “Your Brain on ChatGPT”)? If companies are promoting only heavy LLM users, are they unwittingly creating a caste of leaders lacking critical thinking skills?

Are organizations, in their rush to adopt the latest technology, forgetting the recent past? The global financial crisis arose in large part because so many banks used financial vehicles without controls, to maximize profit and not fall behind peers. And staff members labeled “resistant,” who questioned this practice, were often sidelined or forced out. (Accenture and others, take note.)

Are we keeping abreast of current events and thinking? We shouldn’t be surprised by well-publicized weaknesses, incidents, and even scandals. AI software CEOs such as Sam Altman and Dario Amodei are very open about the risks their products could pose to organizations, society, even humanity. They’re less vocal about possible controls, and have removed certain guardrails during development; still, all this information is available.

Finally, are we keeping our own critical thinking and writing skills sharp? Are we outsourcing our cognitive function to LLMs at the very moment we need to be more alert to the quality of both input and output?

AI (including LLMs) is here to stay. Let’s make sure our profession helps us and our organizations use it judiciously and productively, as its masters — not servants.

The views and opinions expressed in this blog are those of the author and do not necessarily reflect the official policy or position of The Institute of Internal Auditors (The IIA). The IIA does not guarantee the accuracy or originality of the content, nor should it be considered professional advice or authoritative guidance. The content is provided for informational purposes only.

Sara I. James, PhD, CIA

Sara I. James is the owner of Getting Words to Work and author of the bestselling Radical Reporting: Writing Better Audit, Risk, Compliance, and Information Security Reports. She is based in Oxford, United Kingdom.