
Events in the Mirror May Be Less Relevant Than They Appear

Mike Jacka, CIA, CPA, CPCU, CLU | Apr 19, 2019

Every auditor in every position — from lowly intern to high-and-mighty CAE — makes decisions that impact the success or failure of the department.

Tests, projects, the schedule, the plan — every aspect of internal audit's processes and operations succeeds or fails based on the incredible multitude of individual decisions everyone is making. And best practice is to go back, as soon as possible, and evaluate the effectiveness of those decisions in order to build on things that were done right and learn from things that went wrong. That test missed the boat. That interview nailed what we needed to know. That audit was a horrible waste of time. The plan exactly matched the needs of executives and the board. We hired the right person. We fired the bad hire. We caught a fraud. We missed a fraud. We did things right. We did things wrong. A series of choices that, viewed as a whole, make or break the department.

But when we do that analysis — when we go back to see what we can learn and/or build on — we usually make a fundamental error. We let the outcome determine how the analysis is conducted. If we were successful, we look for what to build on; if we failed, we look for what we should never do again.

If you do not see the fallacy in that approach (an approach we all use without really thinking about the consequences), let me throw out a couple of hypotheticals.

You are conducting an audit, on site, at one of the 100 offices the company has throughout the U.S. It is time to begin the testing phase. There are 10 potential areas for review. You have time to complete five. The initial risk assessment, which included thorough interviews, data analysis, and all the other wonderful tasks and duties that help us make informed and intelligent decisions, has identified the five highest-risk areas. Those become the focus of your audit work.

Fast forward two months. You discover that a significant fraud has occurred in the office you just audited. The fraud occurred in one of the five areas you chose not to review.

Oops.

It becomes quickly apparent that the decision-making that occurred in your choice of areas for review was not particularly good. Wouldn't you agree?

Not so fast there, Speed Buggy.

Hypothetical situation No. 2: Same type of office; one of the 100. Same areas to review and same amount of time. You have to pick five of the ten. This time, a coin is tossed for each area and, every time the coin comes up heads, that area is picked for audit. When you get to five, you quit. You begin testing and, while reviewing the second area that was selected, you stumble across a significant fraud.

It becomes quickly apparent that the decision-making that occurred in your choice of areas to review in this situation was quite excellent.

What? You disagree? But decision-making in the first situation had to be flawed because something important was missed. So, decision-making in the second situation must have been excellent because of what was discovered.

Of course, this is bull-oney. Yet, we far too often judge the decisions of others (and of ourselves) based on results rather than on the quality of the decision-making — rather than on how the available information and knowledge were used to make that decision.

It is called hindsight bias, and it plagues all aspects of our departments, our organizations, and even our lives.

In her book Thinking in Bets: Making Smart Decisions When You Don't Have All the Facts, Annie Duke cites several examples of hindsight bias, one of the most famous coming from sports — Pete Carroll's ill-fated decision to have Russell Wilson throw a pass in the waning moments of the Super Bowl (a pass that was subsequently intercepted, effectively ending the game). I will start by apologizing here to every Seattle Seahawks fan, but it was not the bone-headed decision many fans claimed it to be. An in-depth analysis shows that, when the chance of success for the play was weighed against the chance of failure, it was the right decision. It ended poorly, so the assumption is that the decision was poor.

Hindsight bias: assuming that poor results can only come from poor decisions.

Let's go back to the two hypothetical examples listed above. That situation of missing a fraud during an on-site audit happened to me. And there was quite a bit of soul-searching, including reviews of how our work was done. And an interesting thing came out of it.

The decision-making process was absolutely fine. In fact, in spite of our having missed this particular fraud, the risks involved in all the areas we reviewed were such that we realized, with the information we had at hand, we had made the right decisions.

Did it change the way we did those audits in the future? No. Did it change some of our other monitoring efforts? Yes.

We learned from what had occurred. We learned that the approach we were using for those office visits was sound — the right decisions were being made based on the information available. But the facts of the fraud pointed to additional monitoring and reviews that needed to be completed — none of them specifically related to the individual audits of the offices.

When you are evaluating the work you have done — including the work you are smack dab in the middle of — do not become seduced by the results. If things went wrong, do that root cause analysis and find out why. If that analysis shows the decision was a good one, in spite of the results, then there may not be any reason to change. But perhaps more importantly, do not give the analysis short shrift if everything went really well. Because there is still a good chance that everything went well in spite of shoddy work. (A coin toss is not a good decision-making tool, even if you wind up finding fraud.)

And it is something that should be kept in mind when reviewing any operations within the organization. I've said it before and I'll probably say it multiple times in the future: No one ever audits success. And more's the pity. Because, when you look at some of history's most significant organizational failures, many occurred because no one seemed to think it was important to look at the things that were going great.

When evaluating the decision-making that underlies the work that is being done — whether that work is your own or that of your organization — put on a strange pair of blinders, a pair that keeps you from looking into a future that has already passed.

(And, one last thing. As an Arizona Cardinals fan, I'm not really that sorry for the Seattle Seahawks fans. I'm pretty tired of your dominance. There, I feel much better.)

Mike Jacka, CIA, CPA, CPCU, CLU

Mike Jacka is co-founder and chief creative pilot of Flying Pig Audit, Consulting, and Training Services (FPACTS), based in Phoenix.