Operational resilience

June 16, 2019

Security teams are always busy deploying and implementing security controls to try to prevent or detect cyber-attacks. Those controls, as well as the security configurations of the operating systems and networks they are responsible for protecting, introduce an ongoing management and operational load.

It is easy to assume that these security technologies and processes all “just work”; that controls will be justified, bought, deployed and operated and then cyber security issues will go away. This assumption would be naive.

Companies still get breached and still suffer from cyber-attacks.  So, what is going on?  Why are we failing?

Hackers circumvent cyber security controls

One way data breaches occur is that companies deploy strong and effective controls “in some places” or “with x% coverage”. Perhaps they limit attachment size on emails but allow access to file-sharing sites like Dropbox. Or they deploy IDS/IPS solutions to detect and block web-based network attacks but allow access to applications that have insecure or weak controls on user input. Or they secure their corporate network but share data with less secure third parties.

The hacker, constantly looking for the path of least resistance, avoids all the difficult routes to get to information and uses the easiest way possible.

Security controls are not 100% effective

The best example here is people. Pick any group – users, developers, system administrators – and you can come up with failure scenarios: they deliberately subvert security, or do so unknowingly; they put in place workarounds that avoid controls or introduce risk (often with the best will in the world); or they know they should do things one way but take an easier, less secure, route.

Users will choose weak passwords, system administrators will configure systems to make it easy for them to administer them, developers will miss vital user input checks and designers will create systems that have unforeseen vulnerabilities.

Maybe a user’s access rights to a data set will not be constrained tightly enough to prevent them from knowingly or unwittingly corrupting it, leaking it or exposing it in some way. In all of these cases, when the inevitable happens, systems and data become exposed.

“New” attacks appear against “Old” cyber security controls

This normally occurs where controls are based on patterns, signatures or heuristics and there is a lag between a new attack signature, virus pattern or file-hash being detected and that pattern/signature becoming integrated into the tools used to provide a complete defence. Until that reference data is available, the organisation is exposed to the threats it corresponds to – the built-in patterns are, in effect, out of date.

Alternatively, a heuristic/machine learning tool might have to observe a number of anomalous actions to realise they are suspicious, or a threshold might have to be breached to trigger a response… in each case the control is there but it doesn’t pick up every single threat of a given type because it needs to learn what normal looks like in order to recognise abnormal.
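
As a minimal sketch of why such controls need a learning period (the parameters and alert logic below are purely illustrative, not any specific product’s algorithm), consider a toy detector that can only raise alerts once it has built a baseline of “normal” activity:

```python
# Illustrative only: a toy threshold-based detector, assuming a simple rolling
# baseline of activity levels ("min_baseline", "sigma" and the alert logic are
# hypothetical). Real heuristic/ML controls are far more sophisticated, but the
# limitation is the same: until enough "normal" activity has been observed,
# the control cannot confidently flag "abnormal".
from statistics import mean, stdev

class ThresholdDetector:
    def __init__(self, min_baseline: int = 30, sigma: float = 3.0):
        self.history = []                 # observed "normal" activity levels
        self.min_baseline = min_baseline  # observations needed before alerting
        self.sigma = sigma                # how far from normal counts as suspicious

    def observe(self, value: float) -> bool:
        """Record an observation; return True if it should raise an alert."""
        if len(self.history) < self.min_baseline:
            # Still learning what normal looks like: no alert is possible yet,
            # so anomalous activity in this window can go undetected.
            self.history.append(value)
            return False
        threshold = mean(self.history) + self.sigma * stdev(self.history)
        if value > threshold:
            return True                   # anomalous spike relative to baseline
        self.history.append(value)        # fold normal-looking data into the baseline
        return False
```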

Trustworthy systems suddenly become untrustworthy

In security, attackers are constantly looking for new ways to attack or exploit existing technology. When they succeed, a platform, system, piece of software or design goes from being “resistant to all known attacks” to being “resistant to all known attacks apart from the newest one”.

The obvious case here is a software vulnerability. The attacker finds it and, if we are lucky, responsibly discloses it to the vendor; a patch is then issued and applied. However, sometimes the vulnerability is exploited maliciously before it is patched, or the patch isn’t applied when it is available. Either way, the previously trustworthy system is vulnerable to the new attack.

At the design stage similar issues can arise – not so much with the code that is written or the way systems are built, but in the architecture itself. Maybe a strong gateway or perimeter security control limits access at the intended point but doesn’t prevent access through a side door; in IT terms, a strongly protected internet access gateway doesn’t protect against network attacks arriving through alternative network connections to a third party.

We often build systems in layers, creating functionality that sits on top of other functions or modules. This is a sound way to engineer a complex system – however, it does introduce situations where secure/robust application code sits on top of (or utilises) underlying, and potentially less secure, sub-systems or code libraries (as in the case of Heartbleed), or runs on processors that allow intrusion or attack (Spectre and Meltdown).

What does the assumption of cyber security failure mean?

Once one accepts the inevitability of failure, a much wider approach can be taken to how systems are defended from attack and how data and information are protected.

There is no better example of this than malware. Malware exploits weaknesses in code and systems, often requires user error to create an infection, and new variants emerge all the time – hence it matches several of the cases above.

However, anti-virus software has been around for a long time, so why are we still experiencing malware as a problem?

Mainly because there are new viruses and variants that aren’t covered by the existing AV signature files or scanning engines, or simply because the latest signatures have not been installed or updated; and malware writers are smart enough to write code that works despite the controls – it will ever be thus!

The most common approach to defend against malware is a layered/staged strategy:

  • Prevent the initial infection or breach of defence wherever and however you can (using a variety of techniques – AV is certainly one, user awareness another);
  • Ensure there are multiple control layers that attacks must pass through – like an AV gateway as well as workstation scanning software and proxy servers that limit internet access – i.e. avoid reliance on one control (see the sketch after this list);
  • Control the access of users to the smallest set of data and systems they need for their role – rather than the entire corporate information base – so that if one account is compromised, the extent of systems and data an attack can spread to is as limited as possible;
  • If a system (or user) is infected, limit or constrain its ability to connect to other systems, data or networks;
  • Be able to detect it quickly and trace the origin so if a virus is spreading you can find out where it started, where it has spread and what systems need to be cleansed or quarantined;
  • Establish ways to get the latest software and signature updates, filter network traffic, recover corrupted data (especially for ransomware) etc. so that the process of recovery from an outbreak, however large, is as slick as possible (i.e. work out, in advance, as many of the things you “might” have to do as possible).
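
A minimal sketch of that layered idea follows; the layer names and checks are hypothetical stand-ins chosen only to show the structure. The point is that a threat only succeeds if it gets past every control, which is why stacking individually imperfect controls is still worthwhile.

```python
# Illustrative sketch of defence in depth. The layer names and check functions
# are hypothetical stand-ins for real controls (mail gateway AV, endpoint AV,
# web proxy policy, least privilege); the point is simply that a threat has to
# evade every layer to succeed, so no single control is relied upon.
from typing import Callable, List, NamedTuple, Optional

class Layer(NamedTuple):
    name: str
    blocks: Callable[[dict], bool]  # returns True if this layer stops the threat

def first_blocking_layer(threat: dict, layers: List[Layer]) -> Optional[str]:
    """Name of the first layer that stops the threat, or None if it slips past
    all of them (the residual risk that detection and recovery plan for)."""
    for layer in layers:
        if layer.blocks(threat):
            return layer.name
    return None

layers = [
    Layer("mail gateway AV",  lambda t: t.get("signature_known", False)),
    Layer("endpoint AV",      lambda t: t.get("behaviour_flagged", False)),
    Layer("web proxy policy", lambda t: t.get("c2_domain_blocked", False)),
    Layer("least privilege",  lambda t: not t.get("user_has_admin", True)),
]

# A new variant with no known signature can still be caught by a later layer.
print(first_blocking_layer({"signature_known": False, "behaviour_flagged": True}, layers))
```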

In short, it is important to assume – and hence plan for – the worst-case scenario.

Australia’s Essential 8

The Australian government is an interesting case study in this regard. The Australian Cyber Security Centre (ACSC) studied past breaches and found that, in the overwhelming majority of cases (85%) where a control failure led to a breach, the incident was enabled or made much worse by the same highly repetitive failure scenarios.

Based on this data, the ACSC identified the top 4 mitigation strategies to address the majority of attacks, and then cited a further 4 that played a significant role in limiting the impact.

As a result of this, the Essential 8 has become a key government recommendation.

As the security controls are based on past breach research, there is a high degree of confidence in the value of ensuring the controls are in place, effective and operating in a visible (or auditable) and measurable way. As a result, they are being increasingly adopted within the government sector, the MSSP industry and the wider supply chain, and across the networks of interconnection that are common in modern business.

The nature of the 8 controls will come as no surprise (see post here) but what is interesting is the classification. There are three “layers” or categories (illustrated in the sketch after the list below):

  • Prevention controls – that aim to stop the initial attack or infection occurring, even if an anti-virus scan fails, a virus isn’t detected or a user clicks on a malicious link.
  • Limitation controls – that aim to limit how far and how widely an infection can spread or take root.
  • Recovery & Systems Availability controls – that aim to give organisations the ability to recover data that might be inaccessible or corrupted.
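
By way of illustration, the classification can be pictured as below. The grouping is indicative only and assumes the commonly published ACSC Essential Eight control names; it is not an official ACSC or Huntsman mapping.

```python
# Indicative only: the eight controls grouped by the three categories described
# above. Control names follow the commonly published ACSC Essential Eight list;
# the exact grouping used by any given framework or product may differ.
ESSENTIAL_8 = {
    "prevention": [            # stop the initial attack or infection
        "application control",
        "patch applications",
        "configure Microsoft Office macro settings",
        "user application hardening",
    ],
    "limitation": [            # limit how far an intrusion can spread
        "restrict administrative privileges",
        "patch operating systems",
        "multi-factor authentication",
    ],
    "recovery": [              # restore data and system availability
        "daily backups",
    ],
}
```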

Measure the effectiveness of your cyber security controls

Huntsman Security’s solution puts in place a system to continuously and automatically “measure” these controls in an objective way; to take the oversight and guesswork out of getting that 85% cross-section of cyber risks into a managed state.

This means having visibility across a core set of controls with a trusted foundation, hence creating bandwidth to work on local, more specific problems or targeted improvements in other areas – such as application security or user management.
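
To make the idea of objective, continuous measurement concrete, here is a minimal sketch of how per-control compliance data could be turned into a simple scorecard. The controls, counts and target below are hypothetical, and this is not Huntsman Security’s actual implementation.

```python
# Hypothetical example of turning raw compliance data into a per-control score.
# The controls, counts and 95% target are invented for illustration; in practice
# the data would be collected automatically and continuously across systems.
def control_score(compliant: int, total: int) -> float:
    """Percentage of in-scope systems on which the control is verified effective."""
    return 0.0 if total == 0 else 100.0 * compliant / total

observations = {
    "daily backups":                      (480, 500),
    "patch operating systems":            (430, 500),
    "multi-factor authentication":        (210, 250),
    "restrict administrative privileges": (245, 250),
}

TARGET = 95.0  # example threshold for treating a control as "managed"
for control, (ok, total) in observations.items():
    score = control_score(ok, total)
    status = "OK" if score >= TARGET else "ATTENTION"
    print(f"{control:36s} {score:5.1f}%  {status}")
```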

The benefits are obvious – better and more visible risk management, and a clearer view at executive level with fewer surprises resulting from unexpected malware infections, intrusions and breaches, or losses of data through theft or corruption.
