Compliance & Legislation

January 10, 2018

As 2018 dawns, the window to achieve GDPR compliance narrows. Matters such as the right to be forgotten and the need to establish processes for handling data breach notifications are becoming pressing. For some organisations, issuing a data breach notification will be an entirely new exercise.

GDPR Data Breach Notification – are you ready?

As various studies have acknowledged, businesses sit at very different degrees of readiness – but the security and privacy communities, spanning services, consulting and products, are busy both driving demand and meeting it. The emergence of services specifically to support data breach notifications is one example – these have existed in the US for some time but will now find more traction in the EU.

In specific terms, the area around Security Analytics, automation and machine learning has a clear role to play as businesses try to:

  • Detect breaches, attacks and misuse of systems more comprehensively and quickly;
  • Understand the validity, scope, extent and nature of a breach;
  • Develop the capability to create an informed, accurate and complete report to the regulator;
  • Establish the root causes and the vulnerabilities that were exploited, to avoid recurrence.

However, there is a wealth of security solutions (and the field of analytics is no different) that expertly solve one of the problems faced while paying little heed to the related issues – indeed, in some cases, as we shall see, they solve one problem and exacerbate another.

As a result, going into trolley-dash mode and panic-buying solutions to deal with the spectre of GDPR is absolutely the wrong approach.

Data Breach Notifications: Detecting additional types and/or numbers of threats

There is an ethos within the security industry that revolves around detecting attacks, misuse, phishing, malware, network anomalies, data losses, file access, media insertions, privilege changes, policy breaches, matches with published threat intelligence, blacklisted application usage and so on.

This is perfectly valid – and in a great many cases these detective controls actually serve a preventative function – anti-virus being a good example: a file that matches a pattern is not just detected but is quarantined, deleted or prevented from being opened.

The goal of a solution that aims to detect is only met if it actually detects things – hence there is a tendency (in particular during a pilot or proof of concept) to try to detect as much (or as many cases) as possible.

What should be self-evident, however, is that simply detecting more instances presents security teams with two very real problems:

  • The volume of false positives generated by controls that lean towards over-detection is a real issue – it adds to the operational burden and masks genuine issues amidst a cloud of miscellaneous reports (a rough sense of the scale is sketched after this list);
  • The increase in the net volume of real instances, attacks and suspected misuses that must be investigated and dealt with in all but the most trivial (e.g. anti-virus) cases.
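
To make the first of these concrete, a rough back-of-the-envelope illustration (the numbers below are purely assumed for the sake of the example):

```python
# Illustrative only: how quickly "detect more" becomes an unmanageable alert volume.
events_per_day = 5_000_000       # assumed number of events evaluated per day
false_positive_rate = 0.001      # assumed: 0.1% of events are wrongly flagged
minutes_per_triage = 5           # assumed analyst time to dismiss one false alert

false_alerts = events_per_day * false_positive_rate
analyst_hours = false_alerts * minutes_per_triage / 60

print(f"False alerts per day: {false_alerts:,.0f}")          # 5,000
print(f"Analyst hours to clear them: {analyst_hours:,.0f}")  # ~417
```

Even at these modest assumed rates, the triage load swamps any realistic team – which is why verification and a plan for handling detections matter as much as detection itself.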

CONCLUSION: When buying solutions that detect problems, make sure they are not going to over-classify things as problems in order to “impress” you. Ensure you have a plan to deal with what they do detect – valid or not – so that your team, reporting solutions and processes can cope.

Data Breach Notifications: Support data breach investigations and aid understanding

If the historic SIEM industry was about collecting data and presenting it in pie chart, bar chart or table form for managers and auditors, the Security Analytics industry is much more about deriving meaning from data in a way that supports the processes of detection, understanding and response within the security operations function itself and in the reporting of this to business stakeholders.

Data Breach Notifications: Collect data in one place

There are several clear imperatives here – one, apparently very basic, is the need to gather relevant data together in a single place for access. In one respect the amalgam of log data in traditional solutions achieved this; however, for a given event/alert/report/alarm (depending on your terminology) there is often also a quantity of supporting information that would traditionally have to be collected manually. Examples might include:

  • The configuration of an end point/workstation;
  • The contents of a file that has been implicated/shared/transmitted;
  • The presence, location, department, status or role of a user (or a server for that matter);
  • The network session data (often this is collected in a transient way in a cache so may not be available if too much time has elapsed);
  • Non-security log and activity details like file accesses, DHCP records, DNS lookups, CCTV footage;
  • Threat intelligence about suspicious IP addresses or networks or hashes of particular files/executables;
  • The patch and vulnerability status of systems involved or affected.

There is a huge difference in usefulness, and in the workload/effort required, between “having this data available” and “having this data available in one place” – even if it is not continually stored in a single location as a rule (it would be rare to find the contents of every network packet stored within a SIEM, for example, but in the case of a security report this data might be hugely valuable).
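
As an illustration of what “in one place” can mean in practice, here is a minimal sketch that assembles an enriched record for a single alert. Every lookup helper named below is a hypothetical stand-in for whatever sources an organisation actually has (CMDB, EDR, proxy logs, HR directory, threat intelligence feeds and so on) – it is a sketch of the shape of the problem, not a reference to any particular product.

```python
def enrich_alert(alert: dict,
                 lookup_endpoint_config,
                 lookup_user_directory,
                 lookup_network_sessions,
                 lookup_threat_intel,
                 lookup_patch_status) -> dict:
    """Consolidate the context an investigator would otherwise chase manually."""
    host = alert["host"]
    user = alert.get("user")
    return {
        "alert": alert,
        "endpoint_config": lookup_endpoint_config(host),
        "user_context": lookup_user_directory(user) if user else None,
        "network_sessions": lookup_network_sessions(host),
        "threat_intel": lookup_threat_intel(alert.get("indicators", [])),
        "patch_status": lookup_patch_status(host),
    }
```

The value is not in the code itself but in the output: a single record that an investigator (or a later automated step) can work from, rather than seven separate consoles.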

Data Breach Notifications: Anticipate questions that will be asked

The step beyond gathering the data and having it available is to actually seek to optimise and speed up the investigative processes to provide faster and deeper understanding.

One way this can be achieved is with automation of some of the processes that a human operator would follow as part of the alert triage and investigation process.

Two very obvious examples are the verification of threats and the anticipation of likely questions:

  • Verification means taking a report or alert that has been generated and seeking or identifying corroborating data or evidence that can be used to establish whether it is real or benign (i.e. a false positive). In the first case, the supporting data gives the operator a head start and probably some much-needed context; in the second, it means they can safely mark a truly benign event as resolved.

However in either case it should be the security analytics solution that does this, rather than sitting there with a blinking cursor or query builder waiting for the operator to instruct it to do so.

  • Anticipation of questions means applying intelligence to pre-capture, pre-compute or pre-compile answers to obvious questions. Supposing a report or alert pertains to a user, the likely questions are:
    • Who is this person?
    • What department are they in?
    • What role do they hold?
    • What else have they done?
    • How long has this activity been going on?
    • What else have they done in the last day/week/month?
    • Where are they right now?

Of course a human operator could ask these things of the analytics platform in front of them – but again, there is no need for a security platform to sit there waiting to be told to do it.
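
As a hedged sketch of what “answering before being asked” might look like, the snippet below pre-computes those user questions the moment an alert arrives. The directory and activity records are hypothetical stand-ins for whatever identity and activity stores exist.

```python
from datetime import datetime, timedelta

def precompute_user_context(alert: dict, directory: dict, events: list) -> dict:
    """Pre-answer the predictable user questions and attach them to the alert.

    `directory` maps user id -> {"name", "department", "role", "location"};
    `events` is a list of {"user", "time", ...} activity records.
    """
    user_id = alert["user"]
    person = directory.get(user_id, {})
    now = datetime.utcnow()
    user_events = [e for e in events if e["user"] == user_id]
    return {
        "who": person.get("name"),                      # Who is this person?
        "department": person.get("department"),         # What department are they in?
        "role": person.get("role"),                     # What role do they hold?
        "activity_started": min((e["time"] for e in user_events), default=None),
        "events_last_day": sum(1 for e in user_events
                               if e["time"] > now - timedelta(days=1)),
        "events_last_month": sum(1 for e in user_events
                                 if e["time"] > now - timedelta(days=30)),
        "last_known_location": person.get("location"),  # Where are they right now?
    }
```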

Similarly, in the case of a malware report (malware is still a growing problem) from an end point, likely questions are:

  • Has the malware infected the end point?
  • Is anything untoward happening on the system or network since then?
  • Has this signature/pattern/behaviour occurred on any other systems?
  • What network connections/accesses does this system (or the current user) have to the rest of the network?

As above, it’s a fair bet these questions will be asked, so choose a security analytics solution that will just answer them, rather than sit there like the ancient oracle at Delphi waiting to be asked.
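
The same anticipation can be sketched for the malware case. The telemetry and connection records below are, again, hypothetical stand-ins (for EDR observations and network session data respectively), used only to show the shape of the automation.

```python
def precompute_malware_context(alert: dict, telemetry: list, connections: list) -> dict:
    """Pre-answer the predictable questions for a malware alert on an endpoint.

    `telemetry` is a list of {"host", "file_hash", ...} observations;
    `connections` is a list of {"src_host", "dst_host", "port"} records.
    """
    host = alert["host"]
    file_hash = alert["file_hash"]
    other_hosts = sorted({t["host"] for t in telemetry
                          if t["file_hash"] == file_hash and t["host"] != host})
    host_connections = [c for c in connections if c["src_host"] == host]
    return {
        "seen_on_other_hosts": other_hosts,        # Has this pattern occurred elsewhere?
        "network_reach": sorted({c["dst_host"] for c in host_connections}),
        "active_connections": host_connections,    # Anything untoward on the network?
    }
```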

Data Breach Notifications: Allow questions to be asked and investigations to be conducted

(… without having to rely on Excel)

After the obvious questions that should be anticipated there is a whole investigative process that, for more complex cases, may need considerable human investment in time, expertise and effort.

The important thing here is to allow the creation of questions, queries, lookups and interrogations that provide the maximum value in terms of output for the minimum effort in terms of creation or definition.

What is commonly found is that the methods for querying data sets are too cumbersome or limited for anything beyond normal dashboard or reporting use, or simply too difficult to learn. Hence data gets extracted from centralised reference systems and pulled into a different tool for analysis.

Rather than taking a user and a workstation and a server and asking “what other users have used this workstation and what servers did they access?” the operator finds there isn’t a way to encode or define that question in the way the analytics tool would like or allow.

Consequently the list of users and workstations is created as one export or CSV file, and the list of workstations and servers is created as another, and these get loaded manually into Excel so that a LOOKUP, or sort, or filter, or conditional formula can be applied.

The output is a set of results that stays in “malware incident analysis.xls”, because there is also no way to bring external analysis back into the “central analytics platform” and associate it with an incident.
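
For comparison, the question in this example is a single join once the data can be queried where it sits. The pandas sketch below uses the same two hypothetical exports purely to show how small the actual question is – the point being that an analytics platform should let the operator express it directly, and keep the answer attached to the incident.

```python
import pandas as pd

# The same two hypothetical exports described above.
user_workstation = pd.DataFrame({
    "user": ["alice", "bob", "carol"],
    "workstation": ["WS-042", "WS-042", "WS-077"],
})
workstation_server = pd.DataFrame({
    "workstation": ["WS-042", "WS-042", "WS-077"],
    "server": ["SRV-DB1", "SRV-FS2", "SRV-DB1"],
})

# "What other users have used this workstation, and what servers did they access?"
workstation_of_interest = "WS-042"
users = user_workstation[user_workstation["workstation"] == workstation_of_interest]
answer = users.merge(workstation_server, on="workstation")
print(answer[["user", "workstation", "server"]])
```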

CONCLUSION: The goal of analytics is not to draw graphs; it is to build meaning around data and information so that it can be understood and to allow decisions to be made. When investing in analytics solutions, ensure that they answer questions – preferably automatically but if manually, then with the utmost flexibility and the lowest friction for the operator. Make sure technology makes the work of the human less laborious rather than more.

Data Breach Notifications: Actions

The ultimate goal, in the event of a security incident – particularly one that is ongoing at the time the investigation happens – is to be able to do something about it.

In its simplest form this might mean disabling a network connection, suspending a user account or quarantining a host from the network to avoid data loss, onward infection or unauthorised access.

Even some time after a breach might have occurred, when the horse has bolted so to speak, there is a need to amend configurations, apply patches or tighten access rights to prevent recurrence.

Ideally, this would be undertaken calmly by the security team; but with security resources as scarce as they are, it is far more likely to be done in a stressed and panicked way by an overworked team – or possibly not done at all, because they aren’t empowered to act and respond.

This need for consistency and promptness in responses all but implies an automated capability – one that can be trusted to operate under guidelines and rules defined by a cyber security expert, but not dependent on that expert to actually carry it out.

Automation in this context has a large hurdle to overcome, that of trust. If a case is detected and is 80% likely to need a response, for most people that would not be close enough to let the system get on with it autonomously.

If however this certainty can be raised to 99% then the risks of automating a response have dropped considerably. Additionally, if the response can be executed in a trustworthy, sound, consistent and reversible way then the choice to allow the system to take action becomes much easier.

So a security analytics solution that churns out alerts or exceptions, with or without qualification, for a human to act on is much less useful than one that undertakes the necessary verification and triage to reach a level of certainty high enough that it can decide, as configured, to act autonomously. The case can then be flagged to a human – with the necessary corroboration – as a fait accompli, with an easy “undo” or “go back” function for the rare case where the response was not correct.
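
Below is a minimal sketch of that decision logic, with the threshold, the action and all names invented purely for illustration: it acts autonomously only above a configured level of certainty, keeps hold of an “undo”, and hands everything else to a human.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Case:
    description: str
    confidence: float                   # 0.0-1.0, produced by verification and triage
    evidence: List[str] = field(default_factory=list)

AUTO_RESPONSE_THRESHOLD = 0.99          # assumed policy, set by a security expert

def respond(case: Case,
            quarantine_host: Callable[[], Callable[[], None]],
            notify_analyst: Callable[[str], None]):
    """Act automatically only when certainty is high enough, and keep an undo handle."""
    if case.confidence >= AUTO_RESPONSE_THRESHOLD:
        undo = quarantine_host()        # the action returns a function that reverses it
        notify_analyst(f"Auto-contained: {case.description} "
                       f"(confidence {case.confidence:.0%}); undo available.")
        return undo                     # call undo() in the rare case the response was wrong
    notify_analyst(f"Needs review: {case.description} "
                   f"(confidence {case.confidence:.0%}); no action taken.")
    return None
```

The essential properties described above are all present: a threshold defined by an expert, a reversible action, and a case flagged to a human either way.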

CONCLUSION: The goal of analytics is to bring meaning to data and provide certainty in decision-making. Use this as a way to leverage automation to improve response times and further reduce manual workload on human resources so they can be better utilised on tasks that require real expertise – not on programmed mitigation workflows that can easily be built within a production-line environment.

Data Breach Notifications: Don’t panic buy compliance to please auditors; invest in analytics and automation to improve outcomes

It is easy to see GDPR as a way to justify additional investment in prevention, detection and response. To an extent this is valid; however, the problems security teams are solving are being worsened by a growing technology landscape and an expanding threat universe.

Buying tools that do one thing well but are isolated from the wider remit, ecosystem or end-to-end process simply moves the problem (and often data) around.

Bringing meaning to data, supporting decision-making and reacting swiftly is the goal. Compliance-driven procurement of solutions that don’t recognise this is likely to deliver only incremental value.

GDPR is not “the perfect time to panic” – instead:

  • Don’t just look to detect as many security or compliance issues as possible without considering how to deal with false positives and verify the alerts.
  • Do deploy solutions that aid the work of human operators by reducing manual collection, collation and analysis and automating predictable and routine investigative processes.
  • Do build sufficient certainty around cases and associated responses so that systems can take prompt, repeatable actions whenever possible, leaving staff free to take charge of the more challenging activities.
