Now, I'm not a programmer or a hacker, but earlier in my career, I did have to design an internal audit methodology for non-IT-supported applications. This was for a company so large that "shadow IT" meant IT-like personnel embedded in other business units to develop or administer applications not hosted on IT-controlled servers. During planning, the audit team worked with personnel from the IT security function to understand the types of controls they would be looking for in an application risk assessment.
In those discussions, we learned about the Open Web Application Security Project (OWASP), which produces freely available guidance and tools for web application security, including the OWASP Top 10 list of web application risks. At the time, one of the top risks related to insecure coding was the presence of authentication bypass mechanisms, or backdoors. Therefore, our audit testing looked for evidence that the developers and independent testers monitored and remedied backdoors.
Curiously, the 2017 version of the Top 10 includes broken authentication, broken access control, and security misconfiguration, but it seems to downplay the risk of malicious insiders inserting a backdoor of their own, despite the fair warning we received from WarGames in 1983.
A few days ago, eager to learn more about the topic, I found on the OWASP website an informative presentation on the Top 10 Backdoors (PDF). It turns out that some backdoors are intentional, inserted by the application vendor to enable administrative or monitoring services, of which the customer should be aware. Appropriate configuration and authentication procedures — such as changing default passwords or deactivating default accounts — should mitigate the risks from these backdoors.
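If I were sketching one of those mitigation checks in code, it might look something like the following. This is purely illustrative: the default account names and the exported account list are made-up assumptions, not taken from any real product.

```python
# Hypothetical sketch: flag vendor default accounts that remain active.
# The account names and export format below are illustrative assumptions.

KNOWN_DEFAULT_ACCOUNTS = {"admin", "support", "guest"}

# Simulated export of the application's account list.
accounts = [
    {"name": "admin", "active": True},    # default account, still enabled
    {"name": "jsmith", "active": True},   # ordinary named user
    {"name": "guest", "active": False},   # default account, already disabled
]

def active_default_accounts(accts, defaults):
    """Return names of known default accounts that are still enabled."""
    return [a["name"] for a in accts if a["name"] in defaults and a["active"]]

for name in active_default_accounts(accounts, KNOWN_DEFAULT_ACCOUNTS):
    print(f"Finding: default account '{name}' is still active")
```

An auditor would, of course, run this kind of comparison against the vendor's documented list of default accounts rather than a guessed one; the point is simply that the test is mechanical once that list is in hand.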
The risk of covert, vendor-installed backdoors is somewhat trickier for customers to mitigate, because they probably will not have access to the results of secure coding (dynamic and static) or quality assurance testing. Moreover, malicious backdoors may employ highly sophisticated tactics to avoid detection once installed.
The risk of covert backdoors is at the heart of the U.S. government's ban on installing components made by Chinese telecommunications equipment maker Huawei in U.S. networks. But how can organizations protect themselves from covert backdoors when there is no apparent reason to suspect the vendor?
The technical details of how organizations can detect and remedy advanced persistent threats (APTs) — which exploit intentional or unintentional backdoors to begin surveillance and exfiltration — are beyond my knowledge. However, if I were planning to audit risks and controls relating to APTs, I would ask IT and information security teams whether they had implemented any controls to identify APTs and whether they had benchmarked their efforts against an industry standard or best practice.
In a couple of audits, my teams asked the clients if they had reconciled outbound communications going across the firewall — or service traffic in a middleware platform — to an inventory of approved external connections or services. The thinking was that doing so might identify APTs. In both cases, the clients indicated they had not taken those steps, but conceded that such controls might be effective. If readers have any suggestions for auditing controls that might detect APTs, I'm sure plenty of IT audit teams would love to hear them.
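A rough sketch of that reconciliation might look like this. The destinations and the approved-connection inventory are made up for illustration; a real review would pull the log from the firewall and the inventory from the organization's change or architecture records.

```python
# Hypothetical sketch: reconcile observed outbound connections against an
# inventory of approved external destinations, flagging anything unexpected
# as a candidate for APT investigation. All data below is illustrative.

# Approved (destination, port) pairs from the assumed inventory.
APPROVED = {
    ("payments.example.com", 443),
    ("updates.example.com", 443),
    ("mail.example.com", 587),
}

# Simulated outbound firewall log entries: (source host, destination, port).
observed = [
    ("app01", "payments.example.com", 443),
    ("app01", "updates.example.com", 443),
    ("db02", "203.0.113.77", 8443),        # not in the inventory
    ("app03", "mail.example.com", 587),
]

def unapproved_connections(log, inventory):
    """Return log entries whose (destination, port) is not approved."""
    return [entry for entry in log if (entry[1], entry[2]) not in inventory]

for src, dst, port in unapproved_connections(observed, APPROVED):
    print(f"Investigate: {src} -> {dst}:{port}")
```

The value of the exercise is less in the code than in forcing the inventory to exist at all; in my experience, the approved-connection list is the artifact clients most often cannot produce.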
It has often been said that cybersecurity is an endless competition between attackers and defenders, with the advantage going to the attackers for several reasons. However, if the defenders can improve the quality of their detective controls, maybe we can even the score. Shall we play a game?