Internal Audit and Data Ethics
Jim Pelletier, CIA | May 21, 2019
The unethical collection, use, and dissemination of data has quickly emerged as an alarming threat on the digital landscape, and confronting it must become a priority for internal audit. This threat carries the potential to magnify the devastating consequences of a privacy crisis.
Why? Because wounds from this digital battle could be self-inflicted. Unlike with cyberattacks, we can't blame sophisticated computer criminals when our organizations fail to establish and adhere to high ethical standards for the gathering and use of data.
Internal audit is once again uniquely positioned to help address an important, developing issue. With the appropriate controls in place, customer data can remain private and third parties can be effectively monitored.
Transparency can let customers see how their personal information is being used and sold. With focused scrutiny, unfair human biases can be scrubbed from algorithms. The right systems, used properly, can protect an organization's reputation, not to mention save it from significant fines.
We've all heard about the European Union's crackdown on data privacy through the General Data Protection Regulation (GDPR), along with California's similar restrictions and rumblings from other states about doing the same. But this looming threat goes beyond these laws to encompass all that advanced analytics and artificial intelligence can do with a collection of data. As regulation in this space continues to spread, it is important we don't allow our organizations to focus solely on "Can we do this?" and forget the often more important "Should we do this?"
For example, retailers routinely print or send you coupons based on your buying habits. But what if diaper coupons are printed out for a young woman who hasn't yet announced she is pregnant?
This happened back in 2012. At the time, Target was trying to influence expectant mothers to see it as their destination for baby supplies. To do this, Target's analytics team scoured everything it knew about female customers, both data Target collected itself and data it purchased from other sources. Ultimately, it found combinations of data that could not only predict that a woman was pregnant, but also estimate, fairly accurately, her due date.
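To make the technique concrete, here is a minimal sketch of how a purchase-based propensity model works in general. It is purely illustrative: Target's actual model is not public, and the product features, purchase data, labels, and model choice below are all invented for this example.

```python
# Hypothetical sketch of a purchase-based propensity model.
# The products, counts, and labels are invented; this is not
# Target's actual method, which has never been made public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one customer: purchase counts of [unscented lotion,
# vitamin supplements, cotton balls] over a recent window.
X = np.array([
    [3, 2, 4],
    [0, 0, 1],
    [2, 3, 3],
    [1, 0, 0],
    [4, 1, 5],
    [0, 1, 0],
])
# 1 = customer later signed up for a baby registry, 0 = did not.
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new customer's basket. The resulting probability is
# exactly the kind of inference, personal and health-related,
# that a data ethics review should scrutinize before marketing
# acts on it.
print(model.predict_proba(np.array([[2, 2, 3]]))[0, 1])
```

The point of the sketch is how little it takes: a handful of innocuous-looking purchase counts, combined, can yield a confident guess about something deeply personal.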
Unfortunately for Target, according to a New York Times article, it underestimated the creepiness factor of what it was doing. One example: A father showed up at a store demanding to speak to a manager, angry that his teenage daughter was receiving coupons for diapers and other baby goods. Convinced that Target was encouraging teen pregnancy, he lashed out at the store manager. A week later, when the store manager called the father to once again apologize, the manager was taken aback when the father stated his daughter had since come forward to tell him she was, in fact, pregnant.
So, the data was right, the predictive analytics had worked, but was it the right thing to do? Are lines being crossed that have not yet been drawn? Target quickly figured out that it couldn't be so direct in its marketing efforts. Instead, it began sending the baby-related coupons in packets with other random coupons, thus making it appear that the baby-related coupons were just as random.
One could argue that Target was trying to better serve its customers — and make more money at the same time — but where is the line when it comes to an individual's personal health information? Where is the line when it comes to manipulating behaviors?
Again, this case happened several years ago. How have companies like Target advanced since then? And have their codes of ethics evolved with their technological prowess?
Organizations are using sophisticated algorithms to collect and combine massive amounts of data from any number of sources: personal information, facial recognition, buying habits, location data, public records, internet browsing habits, and all the information they can get from connected devices. What a company can't learn about you directly, it can likely purchase from someone else.
In general, consumers have become comfortable with, or complacent about, the fact that much of what they do is being captured by various organizations. For the most part, people accept it. That changes, however, when people find out that companies aren't being forthright.
To help your organization stay on the ethical side of this issue, here are a few questions to consider from an internal audit perspective:
- How does your organization collect data? Do consumers know what is happening?
- How does your organization analyze the data? Have the algorithms been verified? Is there any bias in the algorithms? (A simple bias check is sketched after this list.)
- What is your organization learning from the data? Does it potentially cross any lines into knowing something personal or health related?
- How is your organization using the data? Do customers understand how their data is being used? Does your organization draw a line, for example, between data used to improve marketing efforts and data used to outright manipulate people? (I get this is a very gray area, but does the use of the data at least fit within your organization's stated values?)
- Does your organization sell the data to others and do your customers know about it?
- Has your organization's code of ethics evolved along with its use of technology?
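On the bias question raised above, one concrete test an auditor can run is the disparate impact ratio, sometimes called the "80 percent rule": compare how often a model's favorable outcome goes to each group, and flag large gaps for review. The sketch below uses invented column names and data; the threshold is a common heuristic, not a legal standard.

```python
# Minimal sketch of a disparate impact check an auditor might run.
# Column names and records are hypothetical.
import pandas as pd

# Hypothetical audit extract: one row per customer, with a group
# attribute and the model's decision (1 = received the offer).
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
})

# Selection rate per group, then the ratio of lowest to highest.
rates = df.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for follow-up.
if ratio < 0.8:
    print("Flag for review: selection rates differ materially across groups.")
```

A check like this won't prove an algorithm is fair, but it gives internal audit an objective, repeatable starting point for the "Is there any bias?" conversation.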
Facebook co-founder Chris Hughes recently called for regulators to break up the company due to its powerful influence. He blames himself and the team for not recognizing "how the News Feed algorithm could change our culture, influence elections, and empower nationalist leaders." But perhaps even most relevant to internal auditors is Hughes' concern that Mark Zuckerberg "surrounds himself with a team that reinforces his beliefs instead of changing them." Don't let yourself get into this awful position.
I urge you to see the importance of addressing data ethics and jump on it now. It is a developing storm that will get much worse before it gets better. The proliferation of facial recognition just prompted the San Francisco Board of Supervisors to ban agencies in the city, the global heartbeat of tech innovation, from using it, saying it "goes too far." Yet the ban does not affect federal, state, or private use of facial recognition, a line that has yet to be drawn.
On top of that, 5G is almost here, and with it will come more data from more sources. Seize the opportunity now to understand the lines being drawn (or not being drawn) in your organization.
That's my point of view. I'd be happy to hear yours.