Still about a hundred pages to go, and I've got another insight to share.
Putting this too simplistically (to really understand what is being said here, you should go read the book), the way people look at the probability of events is affected by how those events are presented. In one study cited by Mr. Kahneman, "people who saw information about 'a disease that kills 1,286 people out of every 10,000' judged it as more dangerous than people who were told about 'a disease that kills 24.14% of the population.' The first disease appears more threatening than the second, although the former risk is only half as large as the latter!"
When you stop and look closely at the two sentences, it is obvious that the second situation is more serious — 24 percent is greater than 13 percent. Yet, when looking at the two statements independently, the idea of individual people being killed far outweighs (in the respondents' minds) the impact of a percentage of the population being killed.
Kahneman has much more to say about the additional studies that have been done, but they all support this concept. When a real face is put on the facts, then people see the impact as more real than when abstract concepts like percentages are used.
When you are using test results to support the problem you are trying to prove, how do you report those results? And part two of that question: Do they get the reactions you desire? And part three: Might that be because you are focusing on the abstract when something more concrete and real is needed?
Maybe this has changed since I was doing audits (and, yes, that's a long time ago and, yes, you can insert your own jokes here), but we generally reported our results as percentages followed by the statistics that were used to derive that percentage. For example, a report on one of our agents might include this line: "10 percent of the receipts reviewed (13 of 130) contained errors related to the receipt date, amount received, or the insured's policy number."
Accurate, confirmable, and dry as dust.
Did it get your attention? Did you care? And let me quickly add that everyone — auditing, marketing, sales, executive management — agreed that anything greater than 10 percent was an issue. So, yeah, everyone "cared." But how much did they really care? And, even when I tell you 10 percent was the threshold, did that make you care any more?
So, let's change it just a tad. "A review of the records for 130 transactions showed thirteen insureds received documentation that did not accurately record the date, amount, or policy number related to those transactions."
A few minor tweaks and the discussion is no longer about documentation; it is about the impact of that documentation on the customer. It has become a bigger deal.
Now, in the wrong hands, this could turn out to be a little messy. The audit department that is intent on finding an issue behind every nook, cranny, and file cabinet can easily use this as one more tool for bludgeoning the audit client. But, then again, that kind of audit department has bigger problems than writing more effective reports. (No. 1, being an effective audit department.)
But, used correctly (without malice, etc.), it provides internal audit a tool for presenting information in such a way that the true impact can be more easily understood and accepted.
There are two big challenges in reporting the results of our audits: getting the reader to care about what we say and getting them to stay awake while we say it. This tool provides one possible solution for both those issues — it can make them care, and it should keep them awake.
No, you don't want to go overboard. (As with anything in report writing, it is possible to report on the smallest dent and make it seem the dam is bursting.) But you do want to ensure that the information is written in such a way that every reader understands — understands what is occurring, understands why they care, and understands that something must be done. And this just may do the trick.