
Micro-measuring Success

Mike Jacka, CIA, CPA, CPCU, CLU | Apr 09, 2021

Boy, did I just judge a book by its cover, or rather, misunderstand an article based on the title.

The article was "Want Audit Analytics on Every Audit?" and, if you are interested, you can follow the link. It won't really have anything to do with what is to follow, but more knowledge is always better than less.

I dove into the article prepared to go ballistic — prepared to take up arms against a sea of troubles and wrongheadedness — because of my suppositions. I thought it was going to discuss internal measures of success — how to build metrics in every single audit to prove the success or failure of the department, the audit, and the auditor. Instead, I found that the article is about the practice of using analytic tools — CAATs, data analytics, etc. — in every internal audit. Note that this can be a good thing.

So, why did I have this knee-jerk, neutron bomb reaction to what I thought the article was about? It all comes from my deep-seated belief that internal audit has an obsession with measuring itself that borders on the psychotic. And I don't think the morass of measurements most of us live with is effective. What I expected, and what was driving my unwarranted anger, was an article that would be on the wrong side of two of my pet peeves — how metrics are used and the fool's game that is objective measures of success.

Issue No. 1: Internal audit's fatal flaw is that too many of us measure for no other purpose than to measure; we don't measure to improve. I'm sure most of you have measures of success, key performance indicators, or metrics that provide information on where the department has been and where it is.

But how often are those actually used to show where the department is going? And, more importantly, how are they used to change the way the department works? If you don't meet those measures of success, what do you do? Do you promise to do better next time? Or do you actually take steps to determine what went wrong, how to fix it, and then actually effect change?

Great quote from Seth Godin: "Don't measure anything unless the data helps you make a better decision or change your actions."

Why are you measuring what you are measuring? The term "measures of success" has built into it the idea that you are trying to show success. But are the measures actually related to success? And, again, if you do not meet them, are they areas where the department actually has control to effect change? Are you measuring to improve, or are you just measuring to measure?

OK. If you read my blogs with any regularity, this is old news. This is an issue I've ranted about before and will probably rant about in the future.

But there is another issue here. And, even though it is another of my pet peeves, I'm not sure I've talked about it all that much.

Issue No. 2: We try to use objective measures to measure success, and success is subjective.

Most of us riddle our work with micro-metrics: milestones, report due dates, audit due dates, completeness of workpapers, timely review notes, timely review note responses, number of review notes, number of report rewrites — you know the drill.

These are all important. We need to know the audit is on track, the report got out on time, the audit was completed on time, workpapers are complete, review notes are timely, and review note replies are timely. We need to know the results related to every one of these metrics. And it is important to hold the internal audit team responsible for meeting those metrics.

However, even if every one of those metrics is met — even if every audit is on time, every workpaper complete, every review note timely, every report written with a minimum number of rewrites, every jot and tittle is jotted and tittled — they do not represent a guarantee that a single aspect of that audit is "good." There is no proof that a quality, value-adding audit was completed.

Objective measures only measure objective success, and this does not necessarily relate to the subjective success that is "quality."

For a few years at Farmers Insurance, we used a balanced scorecard to help measure how we were doing and how we might do better. And it worked pretty well … except for one manager. It was evident to even the most casual internal-audit-savvy observer that his department produced mediocrity. But, in our balanced scorecard, he recorded the second highest score of any manager. He knew how to hit the numbers.

And therein lie two tales. First, you get what you measure. (And any further on that one, deponent sayeth not.) But second is that measuring anything related to the quality of audits, the quality of the department, and the quality of the individual auditors is subjective. Yes, you can objectively measure timeliness and completeness and any of the myriad measures we have all built into our work. But it all comes down to one question: How good is the work? And that is a subjective measure.

Youch! That one hurts us internal auditors. We do not like the subjective; we like objectivity — knowing exactly how measurements were conducted — because then someone else can come back and verify the accuracy of our analyses. But that doesn't work when talking about true quality.

To be done right, everyone in the department must agree on what is meant by quality. There must be an understanding of the aspects of quality, what that means related to the work the department completes, and what that means for the client. Then, standards related to that quality must be established. And, once that is all determined and understood, everyone has to be held to those standards. There must be an honest evaluation of the quality and the value that comes from every audit and from every auditor.

The upshot of all this — the issue underlying everything above — is that you need to take a deep dive into how you measure the success of your department and of the individual auditors. What do you measure, why do you measure, what is the value of what you measure, what is a "quality" audit, and what is a "value-adding" audit?

And, ultimately, what are you going to do with all that information?

Mike Jacka, CIA, CPA, CPCU, CLU

Co-founder and Chief Creative Pilot, Flying Pig Audit, Consulting, and Training Services (FPACTS), based in Phoenix.