Two years ago, I joined a cyber company that is passionate about researching cyber risks as they relate to business-critical applications. The company specifically focuses on cyber risks that threaten enterprise resource planning tools, human resource management systems, and customer relationship management tools. Coincidentally, these are not only the technologies that power our businesses; they are also the technologies in scope for compliance regulations.
Two focus areas of our research are bug finding (think software vulnerabilities, insecure configurations, etc.) and threat research (for instance, who is hacking and what they are looking for).
So what does our threat research tell us that you need to know?
- Business-critical applications are under attack, and our research showed attacks originating from nearly 20 different countries during our observation window.
- While companies like the one I work for find issues and help software vendors patch or provide fixes to them, they don't find them all, and we see exploits in the wild that have yet to be detected by security researchers. We've seen evidence of vulnerabilities being exploited for more than three months before being detected by researchers.
- On average, only 72 hours pass between a software vendor issuing a patch and active exploitation of the underlying vulnerability. Three days. The idea that a patch release serves as a kickoff for threat actors to exploit a technology was mind-blowing to me.
- One of the first moves by cybercriminals and hackers is to provision administrator or admin roles for themselves, meaning they can do whatever they want, whenever they want, sometimes outside the view of logs.
- Sometimes they even patch the vulnerability. Yes, you read that right. They patch it. The line of thinking could be that they are protecting their 'turf' from other parties, or they want organizations to believe they already fixed the problem.
- Bad actors tend to navigate first to where the money is: personally identifiable information (PII), intellectual property, your financial data, etc. Coincidentally, many of these targets create compliance issues as well, because the attackers are bypassing controls (such as those for SOX) or accessing and extracting protected data, resulting in data privacy issues (under GDPR, CCPA, etc.).
What does this all mean for internal audit? Well, quite simply, a lot.
According to The IIA's 2022 North American Pulse of Internal Audit, 85% of CAEs consider cybersecurity to be a high or very high risk in their organization. Yet on average, these same leaders say they plan to allocate only 11% of their audit plan to cybersecurity risk in the coming year.
I've had the pleasure of speaking to a lot of internal audit leaders, and many of them indicate that while they feel the impact of a breach of a business-critical application would be significant, they don't believe such a breach is likely. The research indicates otherwise.
The sad reality we often face is that we point to basic blocking-and-tackling controls like access control, change management, and patching without really assessing risks. When it comes to patch management, I'd strive to understand how your organization prioritizes patches. Does it patch on a regular cadence? Are there blackout periods when it won't patch (like year-end close) that are creating outsized risks? How does it react to a zero-day? A majority of organizations aren't patching within 72 hours of a patch release.
As much as we'd like to pretend that cyber risk and compliance risks are separate topics, cyber risk does create compliance risk. Given that one of the first actions by intruders is to provision admin rights and privileges, a quarterly control test isn't going to cut it anymore.
Finally, in the last two months the SEC has proposed not one but two new rules, both with comment periods still open. While the first (published on Feb. 9) has a narrower scope in both applicability and expectations, the most recent one, published on March 9, is far more wide-reaching. As noted in the fact sheet provided by the SEC, this proposed rule, if adopted, would require periodic disclosures regarding, among other things:
- A registrant's policies and procedures to identify and manage cybersecurity risks.
- Management's role in implementing cybersecurity policies and procedures.
- Board of directors' cybersecurity expertise, if any, and its oversight of cybersecurity risk.
- Updates about previously reported material cybersecurity incidents.
Whether we like it or not, or are ready or not, it would appear we can no longer tiptoe around the topic of cyber risk. The wave isn't coming, it's pretty much here.