Comparing Ways to Measure Security Control Effectiveness
There is a growing range of ways to provide security control metrics and assessments for businesses. These solutions are typically aimed at non-security audiences: senior board members (for enterprise security and its associated risks) and procurement or risk/compliance managers (for third-party security risk exposure) who need an understanding of cyber risk and security control effectiveness in order to monitor performance, improvements or exceptions.
There are three primary use cases for this sort of measurement:
- Enterprise risk reporting – security metrics and KPIs delivered to senior security personnel or risk/compliance management teams, either as operational reporting or to support oversight (e.g. of outsourced IT service provision).
- Third-party security assessment – quantification of the level of security applied to company information when it is accessed by, or shared with, third parties within a supply chain.
- Insurance risk determination or monitoring – telemetry on the effectiveness, performance and operation of cyber security controls, allowing insurers to assess whether breaches (and hence impact) are being prevented, to monitor policyholders' compliance with their undertakings and to drive heightened security (and hence lower premiums) for policyholders.
Two key methods to measure security control effectiveness
In general, there are two main ways in which this type of output is generated:
- Indirect, non-invasive collection of external observations – perimeter server software versions, external references to the company, DNS configurations and so on – to build a view of the organisation's likely security risk.
- Direct, internal, on-network assessment, test or measurement of the state, presence, operation and configuration of security controls. This is inherently a more invasive approach (although that isn't necessarily a bad thing) and requires access by, or deployment of, a solution onto the local network.
Indirect assessment solutions sit outside the corporate network, perhaps run by a service provider or in the cloud. They collect external (and hence public) information, in some cases from the organisation itself but also from wider sources (e.g. company email addresses found on public internet forums).
This external assessment has the advantage of being simple and quick to set up (there is nothing to install or connect) – it becomes a centrally operated service – but it also suffers from its non-invasive nature: all it sees is the external facade of the organisation.
It might tell you that your web server is configured in a way that introduces risk, but that server is probably hosted by a third party. While you can tell them to fix it if they want to keep your hosting business, it is nowhere near the corporate network where your users sit and access data. The external nature of the results therefore provides little assurance against some of the most common threats: insider misuse, user susceptibility to phishing, weaknesses in internal access control or privileged account management, or poor patching of internal systems.
The indirect approach is a bit like the people in Plato’s famous cave – sitting looking at the back wall watching shadows from the world outside. It’s a very incomplete, vague, indirect view of, in this case, cyber risk. If you want to know what your cyber posture is, stand up, leave Plato’s cave and look at the world directly.
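To make the indirect approach concrete, here is a minimal sketch of the kind of public signal such a service can gather – in this case a web server's TLS certificate expiry, observed entirely from outside the network. The function names, the example hostname and the idea of scoring on certificate expiry are illustrative, not a description of any particular product.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse an X.509 notAfter string (e.g. 'Jun 1 12:00:00 2030 GMT')
    and return the number of days until the certificate expires."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).days

def fetch_cert_not_after(host: str, port: int = 443) -> str:
    """Collect the certificate expiry from outside the network --
    the only vantage point an indirect assessment has."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]

# Example use (hypothetical target):
#   remaining = days_until_expiry(fetch_cert_not_after("example.com"))
```

Note how little this observes: it says nothing about who can log in to the server, what data sits behind it, or whether internal systems are patched – which is exactly the limitation described above.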
Taking a more direct approach to security measurement inevitably means getting closer to the systems, users and data you are trying to protect. This means something more “invasive”, which can sound like a negative thing until you realise that it is precisely the accuracy and completeness of information you want.
This might be a one-off run of an application, scanner or data collection/audit tool, or an ongoing solution that operates on a scheduled basis or continuously monitors relevant cyber security operational data as events happen. Examples include checking the configuration and activity of security controls, measuring the coverage of processes such as patch application or backups, comparing the “now” state with the “before” or “intended” state, and checking whether the security priorities that should have been implemented and operating actually are.
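The “now versus intended” comparison above can be sketched as a simple drift check: measure the actual state of each control on-network, then report every control that deviates from the baseline. The control names and values here are invented for illustration; a real tool would collect the measured state directly from endpoints and security systems.

```python
# Intended baseline: the state each control *should* be in
# (names and values are hypothetical examples).
INTENDED = {
    "disk_encryption": "enabled",
    "endpoint_patch_level": "2024-05",
    "backup_job": "daily",
}

def drift(measured: dict, baseline: dict) -> dict:
    """Return the controls whose measured state differs from the
    baseline, with both values so a report can show the gap."""
    return {
        name: {"intended": want, "actual": measured.get(name, "absent")}
        for name, want in baseline.items()
        if measured.get(name) != want
    }

# Example: a snapshot where patching lags and no backup job was found.
snapshot = {"disk_encryption": "enabled", "endpoint_patch_level": "2024-01"}
report = drift(snapshot, INTENDED)
```

Running the same check on a schedule turns a one-off audit into continuous monitoring: each report shows only what has drifted since the intended state was set.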
“Actual” Security Control Measures rather than “Inferred” Ones
The main difference between the two approaches is that one reports cyber posture based on actual measurements – real observations, actual settings and records of control effectiveness – whereas the other makes deductions from ancillary facts that indicate where problems might be.
It’s certainly true that a corporate email address in the public domain might help an attacker carry out a spear-phishing attack, but that attack has to get past the content gateways, reach the user, look convincing, be clicked on, and then whatever is clicked on has to exploit the operating system to install its payload.
Web servers are very exposed systems – deliberately so – and keeping them up to date with the latest patches is important. But they are also often well protected by firewalls and IDS, accessible only on a narrow range of ports and routinely penetration tested. So finding out which patches are missing is useful to an attacker, but it might not be a viable route into the system if the configuration prevents the vulnerability from being exploited to compromise the network or data. In fact, on a web server that has all the latest patches installed you could still be exposed to a very specific application-level SQL injection attack – and that comes down to your local development quality assurance, so it is unlikely to be picked up by a general assessment.
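The application-level flaw mentioned above is worth making concrete, since no amount of patching fixes it. This sketch (table, data and function names are all made up) shows the classic mistake – splicing user input into the SQL text – next to the parameterised form that closes the hole:

```python
import sqlite3

# Toy in-memory database standing in for an application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_role_unsafe(name: str):
    # Vulnerable: attacker-controlled input becomes part of the SQL text.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_role_safe(name: str):
    # Safe: the driver passes the value as a bound parameter,
    # never interpreting it as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload that turns the WHERE clause into a
# tautology in the unsafe version.
payload = "' OR '1'='1"
```

The unsafe query returns every row for the payload while the safe one returns nothing – a defect only code review or application testing would catch, not an external scan of patch levels.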
If your internal servers and user endpoints are not patched to the latest versions, then even a trivial intrusion or piece of malware – the dumbest ransomware example on the planet – could not only gain a foothold in your network but spread from system to system, destroying data as it goes.
Best practice security control measurement
When it comes to measuring cyber security posture, think about buying a house.
You don’t make a purchase based just on the real estate agent’s photographs or by standing outside and looking at it. You go inside to look around properly, open the cupboards, get surveys done, carry out site searches for problems etc. You measure, you inspect, you quantify how many things and how much work might need to be done. You check yourself – directly, you don’t just ask the previous occupants what it was like to live in.
The validation or establishment of security posture or control effectiveness is similar. If you want to avoid costly mistakes or making expensive assumptions – look closely.
The difference from the house-buying analogy is that you might only buy a house once every few years, whereas security moves fast, so you need to be able to keep up. If you had to buy a house every week you’d find a way to get the same assurance automatically – you’d build the checks into a system that did them for you, in a trustworthy and repeatable way.