Risk Management & Reporting

September 12, 2019

One common challenge in security is producing status reports or demonstrating progress against security KPIs – whether ongoing operational measures or ones that reflect continual improvement (for example, against a security improvement project).

It’s important to be able to derive performance or status information on security, if for no other reason than that it gives a picture of the organisation’s cyber posture or state of risk. However, there are difficulties with many common approaches.

Invasive/Non-invasive testing

One challenge with security reporting is that some ways of testing for issues can be invasive. It is a double-edged sword: non-invasive methods can be prone to giving misleading results, but more invasive approaches – those that require connecting systems or software to a network, or direct access to target systems – fall under change control and must consider operational factors such as network load, conflicting changes, or the creation of accounts to give them access to systems.

Point-in-time checks versus continuous security measurement

The frequency of assessment matters. If the interval between tests or reports is too long, issues can escalate or pile up, the picture becomes decidedly less rosy, and the work to correct issues is delayed. If the frequency is too high, corrective actions can be overtaken by intervening reports and the load on systems and processes can become burdensome.

In either case, one challenge with any form of point-in-time assessment – whether a quarterly audit or a daily scan – is that it can miss short-term issues that arise and then resolve themselves within the interval.

For example, a malicious user might be added to an admin group, carry out some action, and then remove themselves from the group, all within a few minutes. This temporary breach of policy will be missed by successive daily checks on group membership. Unless there is a continuous view of intervening activity, simple observations miss what may have changed since the last check was made.
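As a minimal illustration – the user, group names and timestamps are hypothetical – the sketch below contrasts a daily snapshot comparison with a review of the event log covering the same interval: the snapshot diff shows nothing, while the log reveals the transient admin-group membership.

```python
from datetime import datetime

# Hypothetical daily snapshots of an admin group's membership.
snapshot_monday = {"alice", "bob"}
snapshot_tuesday = {"alice", "bob"}

# A point-in-time check compares successive snapshots...
print("Snapshot diff:", snapshot_monday ^ snapshot_tuesday)  # empty set: "no change"

# ...but the event log for the intervening period tells a different story.
events = [
    (datetime(2019, 9, 9, 2, 14), "ADD", "mallory", "Domain Admins"),
    (datetime(2019, 9, 9, 2, 19), "REMOVE", "mallory", "Domain Admins"),
]

# A continuous view surfaces every membership change, however brief.
for when, action, user, group in events:
    print(f"{when:%Y-%m-%d %H:%M} {action:<6} {user} -> {group}")
```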

Reporting raw data with no context

The technical output of many scans, audits and assessments is a list of issues or findings. These often get prioritised and sorted before being presented, but context can still be lost. A missing critical patch on a high-sensitivity system would rightly be rated as more significant than a less crucial one on a mundane file and print server. However, the missing patch might be deliberate – awaiting a scheduled update window – or there could be mitigations or other compensating controls in place. On the other hand, a few hundred servers that routinely go unpatched, but are not significant enough to make the headline reports, can become an easy target for ransomware that infects one and then spreads to all the others.

The problem here is not the issue itself; it’s the missing context. Instead of taking observations and analysing them, the reports are built on the raw observations, when it is really insight and understanding that are needed.
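To make that concrete, here is a hedged sketch of context-aware prioritisation, in which a raw scanner severity is weighted by asset sensitivity and any compensating controls. The hosts, weights and scoring model are illustrative assumptions, not a recommendation.

```python
# Illustrative sketch: raw scanner severity adjusted by asset sensitivity and
# known compensating controls. All names, weights and findings are made up.

findings = [
    {"host": "payroll-db",   "severity": 9.0, "sensitivity": 1.0, "mitigated": False},
    {"host": "print-01",     "severity": 4.0, "sensitivity": 0.3, "mitigated": False},
    {"host": "web-frontend", "severity": 9.8, "sensitivity": 0.8, "mitigated": True},  # e.g. behind a WAF
]

def contextual_score(f):
    score = f["severity"] * f["sensitivity"]
    if f["mitigated"]:
        score *= 0.5  # a compensating control reduces, but does not remove, the risk
    return score

for f in sorted(findings, key=contextual_score, reverse=True):
    print(f"{f['host']:<14} raw={f['severity']:<4} contextual={contextual_score(f):.1f}")
```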

It’s an issue that is much broader than just security.

False positives and negatives

Related to the issue above: a finding or policy misconfiguration can be a “real” issue that security teams or management need to be aware of, but it can also be a false positive – where an apparent vulnerability or change is flagged but is actually intended or legitimate. This is often the case with vulnerability scanning, because the process tests for every known vulnerability and flags those that appear to be present, even though the indicators sometimes remain after a problem has been fixed, or after a way of preventing access has been put in place elsewhere.
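One common (though partial) remedy is to reconcile raw scan output against a reviewed register of accepted exceptions, so that known false positives are suppressed with a recorded reason. In this sketch the CVE identifiers are real, but the hosts and review notes are invented for illustration.

```python
# Illustrative only: suppressing known false positives by reconciling raw scan
# output against a reviewed exception register.

raw_findings = [
    ("CVE-2019-0708", "rdp-gateway"),
    ("CVE-2017-0144", "file-server"),   # SMB blocked upstream; scanner still flags it
    ("CVE-2019-11510", "vpn-01"),
]

# Each accepted exception records why it was accepted, so it can be re-reviewed.
exceptions = {
    ("CVE-2017-0144", "file-server"): "SMB blocked at perimeter, reviewed 2019-08",
}

for finding in raw_findings:
    reason = exceptions.get(finding)
    status = f"suppressed ({reason})" if reason else "REPORT"
    print(finding, "->", status)
```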

Similarly, where systems are protected by controls, the presence of issues can be obscured or masked. This is common where a host sits behind a firewall: the restrictions on traffic mean that “seeing” beyond the firewall on certain ports or to certain addresses may not be possible. An attacker, however, is in a different position once they have exploited an initial weakness and gained access to a box behind it – having established a bridgehead, the vulnerabilities the firewall masks (and the reports don’t contain) become rich pickings.
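A toy example of this masking effect – the same host, viewed through the firewall and from a foothold behind it. The ports and rules are made up.

```python
# Illustrative only: the same host "seen" from outside versus inside a firewall.
open_ports_on_host = {22, 80, 443, 3306, 5900}   # what is actually listening
allowed_through_firewall = {80, 443}             # what the firewall lets through

external_scan_view = open_ports_on_host & allowed_through_firewall
internal_view = open_ports_on_host  # a foothold behind the firewall sees it all

print("External scan reports:", sorted(external_scan_view))
print("Masked from the report:", sorted(internal_view - external_scan_view))
```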

Checking for issues versus measuring correct operation

The final, and possibly most subtle, challenge is whether one looks across a wide range of known “possible” problems to derive a list of issues, or instead monitors and observes the controls and systems that are in place to verify their correct operation.

The first option is often the “path of least resistance”, but – as with the vulnerability scanning and automated testing in the examples above – the results can be flawed or require a lot of interpretation and filtering. The human element of that filtering can introduce a large amount of subjectivity into what should be an objective process. In addition, the library of possible weaknesses (in whatever form it exists) can only ever contain the “known” ones, so an attack that uses something “new” will go undetected.

The second approach may take more thought to set up, but it gives a better picture of the effectiveness of processes: defining what should happen and then checking that it has. This could mean combining the current state, a prior state and any records of activity in the meantime to highlight intermediate changes; or it could mean validating that a policy is correct and then monitoring its application across the systems in scope – flagging only exceptions that reflect failures, rather than checking every single system for every single patch.
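As a minimal sketch of that exception-based approach – the hostnames, versions and change references are hypothetical – the check below combines a prior state, the current state and approved change records, and flags only the differences that no record explains.

```python
# "Define what should happen, then check that it did": expected changes come
# from change control; anything else is an exception worth reporting.

prior_state   = {"web-01": "v1.2", "web-02": "v1.2", "db-01": "v3.0"}
current_state = {"web-01": "v1.3", "web-02": "v1.2", "db-01": "v3.1"}

# Approved changes recorded in the meantime (e.g. from a change-management system).
approved = {("web-01", "v1.3"): "CHG-1042"}

for host, version in current_state.items():
    if version != prior_state[host] and (host, version) not in approved:
        print(f"EXCEPTION: {host} changed to {version} with no approved change record")
```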

Measure twice, cut once

In security, as in other walks of life, decisions are made and actions taken based on measurements – of the attributes of the systems under governance, or of the effectiveness of processes and controls.

There are many ways of making these measurements, or of generating observations and findings; the pros and cons vary with the audience. But it is an area that merits thought, rather than blindly picking a metric and reporting on it. Without context, meaning or trust in the origin of the data, its value can be called into question; and once found lacking, confidence in the reliability of reporting can be hard to rebuild – especially in the face of a sceptical, inexpert or unconvinced audience.

If decisions are to be made based on security measurements, those measurements need to be accurate, consistent and trustworthy.
