There has recently been a prominent example of how damaging a serious IT outage can be. The hours-long interruption in service that Facebook (and its platforms Instagram and WhatsApp) suffered recently made news around the world. It cut off social networks, friends, relatives, lovers and businesses. Only Twitter saw the funny side.
The root cause is still the subject of some speculation and we have no information on that beyond what has been published on the Internet. What is clear, however, is how disruptive and damaging an outage can be, however it was caused. Facebook became the news as its share price fell almost 6%, leaving Mark Zuckerberg an estimated $7 billion out of pocket. That’s a sizeable amount, but the price has already partly rebounded; so he’s unlikely to starve!
The prevailing theory is that the outage was caused by a remote administrator updating the BGP routing configuration. The change meant that routing was disabled as the old configuration was removed – but the new configuration couldn’t be applied because the change was being made remotely. As a result, Facebook’s application servers and DNS hosts became unreachable and, being remote, the administrators couldn’t connect in to fix it. Reportedly, someone who knew what they were doing had to physically get to site and reconfigure the routers to bring the environment back up.
Ignoring the frailty of IT systems to human error, and the difficulties and vulnerabilities of routing configurations and DNS, what can the rest of us learn from the disruption caused by the outage of such critical social infrastructure?
A worst case scenario for many businesses, not just Facebook, is a complete loss of service. Facebook’s business model is totally reliant on online access and the Internet. Many other businesses don’t consider themselves to be as exposed to that kind of failure, but the reality is that in a digital world even a small outage can have a hugely disruptive effect.
This can be caused by misconfiguration or human error (as was perhaps the case for Facebook), an oversight, a physical failure or a deliberate act. The cause, as always, is much easier to pinpoint after the fact.
We have seen similar implications in non-IT businesses too – oil pipeline operators, food manufacturers and healthcare providers whose businesses have suffered major outages as a result of ransomware attacks. Their reliance on IT, even though they trade in the physical world, meant that their services and delivery were similarly affected. This shows that no company can afford an IT outage, no matter how it is caused. Network misconfiguration is just one cause of failure; ransomware is another, and in recent times it has become more common than the kind of calamitous event we saw in the social media world last week.
What the Facebook event shows is not how to avoid downtime, outages and blackouts; instead, it shows how small episodes that can seem almost trivial can give rise to enormous consequences.
You can’t avoid all risks. Whether it’s a network administrator changing routes or a user opening a malicious email attachment, people make mistakes. If, as the mathematician Lorenz proposed, a butterfly flapping its wings can result in a tornado, it’s important that early signs of risk are acknowledged as part of your risk management process.
We can learn about the risks of changing BGP configurations from Facebook; or when it comes to ransomware, learn how to reduce the risk of becoming infected. In both instances, however, effective mitigation strategies that prevent a risk or contain its impact are key to lessening the potential effect across an entire enterprise.
Maybe a backup router configuration strategy might have helped Facebook (if the backups had been easily accessible). To be fair, though, massive online businesses like Facebook typically have huge backup data centres available to provide resilience and mitigation against catastrophic events.
For many other failure scenarios, however, backups are an important part of a Plan B. Loss or corruption of data can render even a fully working, internet-connected server inoperative. In the event of hardware failure, ransomware, theft, deliberate misuse or vandalism, it’s often the presence or absence of backups that makes the biggest difference.
In some ransomware attacks, where the decryption process has been absent, unworkable or too slow, backups have provided the road to recovery. Colonial Pipeline found that; so did Maersk when they were hit by NotPetya. Maersk only managed to get their systems back because of a single domain controller, located in a remote office in Ghana and unaffected by the broader network outage. Incredibly, it was this sole surviving copy of the user and system Active Directory (which was ultimately flown back to head office) that enabled the recreation of the Maersk Windows domain.
We’ve seen lots of significant systems outages in the past, resulting from numerous causes, and Facebook is just the most recent high profile “victim”. We also know that such disruptive events can stem from something as small as a butterfly flapping its wings.
Effective risk management means dealing with these, and where they can be foreseen, having controls in place. Every company can learn something about network support and administration from the Facebook experience, and in the same way every company can learn something about ransomware from Colonial Pipeline and about the importance of backups from Maersk.
You do have to sweat the small stuff!
After the ransomware attack on the Waikato District Health Board (DHB) in May 2021, the New Zealand Privacy Commissioner, John Edwards, warned all 20 NZ DHBs that if any DHB was found not to have adequate security to protect patients’ information, compliance notices could be issued under the Privacy Act 2020 and, if necessary, prosecutions would follow.
Clearly the time has come for boards and executive teams in New Zealand’s DHBs to be ransomware ready. Concerns were raised in Australia too, when the Office of the Australian Information Commissioner’s (OAIC) Notifiable Data Breach Report for Jan–Jun 2021 confirmed that the health care sector was the most vulnerable to ransomware attack.
Meanwhile, in this climate of growing attacks globally, PwC observed that fewer than 50% of health sector CISOs were likely to increase their cyber budgets in 2021. Almost 75% of the executives surveyed believed they could still improve their cyber security posture through cost containment and judicious spending.
According to the OAIC, in Australia, ransomware was up 24% since the last reporting period. Recent local health care attacks in both Australia and New Zealand are a wakeup call for boards and executive teams. It’s time to take cyber security and resilience very seriously. Despite the optimism of those surveyed by PwC, in an industry notorious for systems vulnerabilities and cyber security under-investment, it is imperative that health care organisations embrace a stronger cyber culture and seek expert advice to tighten their cyber security controls.
The recent IBM Cost of a Data Breach Report 2021 confirmed that, for the 11th year in a row, the healthcare sector had the highest average cost of a data breach: this year, US$9.23m per breach, and that excludes the lives potentially put at risk as a consequence of an attack.
In Brisbane, a ransomware attack on UnitingCare’s internal IT systems forced its hospitals and nursing homes to resort to manual backup processes. In Waikato, the ransomware outage affected clinical services across all five regional hospitals. Patient appointments and surgeries were severely impacted, causing large backlogs for these important services.
The loss of modern diagnostic capability, and the speed of computer communication, meant it took twice as long to treat urgent patients. Having to resort to manual back-up systems caused major stresses for both patients and staff. The loss of radiology services severely impacted a number of seriously ill cancer patients who had to be transferred to other North Island hospitals. The then medical director of the Cancer Society of New Zealand stated that “it’s hard to understate how disruptive the loss of an IT system is on a hospital”.
In NZ, the government’s refusal to pay the ransom resulted in sensitive patient data being released to the media with some patient data permanently lost. IT systems took more than 4 weeks to fully recover.
The impacts of a ransomware attack on health care facilities should not be underestimated: financial losses, reputational damage, loss of productivity and business continuity, and the risk of legal liabilities arising from interrupted patient care. The disruption and loss of technology in a clinical setting can impact patient outcomes and potentially cost lives.
As if the loss of medical services weren’t enough for healthcare victims of cyber attacks, the potential theft of patients’ sensitive medical information and accompanying personally identifiable information adds insult to injury. Stolen information can include research data, patient records, billing information, insurance claims and social security numbers (a full set of identity records), all of which are highly prized on the dark web.
These costs to healthcare victims can be overwhelming, particularly at a time in their lives when many are at their most vulnerable. It’s for this reason that we need to identify and resolve the factors that make the sector so attractive to attackers.
Fortunately, these drivers of cyber attack in the healthcare sector point to some possible solutions to this scourge. There are, for example, a number of cost-effective mitigation strategies or controls that can be relatively simply initiated across healthcare organisations to improve their cyber security maturity and as a result, reduce their risk of cyber attack.
As noted above, accountabilities are strengthening so boards and senior executives need a clear picture of their cyber security posture. Active security risk management processes that regularly measure and inform management of the state of their cyber controls are increasingly being expected by regulators everywhere.
Being able to monitor and assess your cyber risk against a simple set of cyber security KPIs, like the Australian Cyber Security Centre Essential Eight framework, need not be costly, but it can ensure that your organisation stays on top of its cyber security to maintain effective oversight.
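As a minimal illustration of what such KPI tracking can look like, the sketch below scores each Essential Eight control on a maturity level and reports any that fall below a target. The control names follow the ACSC Essential Eight, but the maturity scores and the target level here are illustrative assumptions, not real measurements or official thresholds.

```python
# Sketch: Essential Eight maturity levels tracked as simple KPIs.
# Scores (0-3) and the target below are illustrative, not real data.
TARGET_MATURITY = 2

controls = {
    "application control": 1,
    "patch applications": 2,
    "configure Microsoft Office macro settings": 3,
    "user application hardening": 2,
    "restrict administrative privileges": 1,
    "patch operating systems": 2,
    "multi-factor authentication": 3,
    "regular backups": 2,
}

def kpi_report(controls, target=TARGET_MATURITY):
    """Return the controls whose maturity falls below the target level."""
    return {name: level for name, level in controls.items() if level < target}

for name, level in sorted(kpi_report(controls).items()):
    print(f"BELOW TARGET: {name} (level {level}, target {TARGET_MATURITY})")
```

Even a report this simple gives a board a recurring, comparable view of posture: the same eight numbers each quarter, with shortfalls surfaced automatically.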
The good news is that highly effective automated technologies are now available to instantly measure and enable you to manage the health of your key security controls. Huntsman Security’s Essential Eight solutions can quickly measure and clearly report cyber security posture to relevant stakeholders.
With a clear picture of the state of its prevention, containment and recovery strategies the board can regularly assess and address any shortcomings that may expose the organisation or its patients to poor cyber security outcomes.
Source: https://www.privacy.org.nz/publications/statements-media-releases/privacy-commissioner-calls-on-dhbs-to-address-it-vulnerabilities/ (26 May 2021)
2021 is undoubtedly ‘the year of ransomware’. The Colonial Pipeline attack in May highlighted the scale of the cyber risk for utilities and infrastructure industries more generally. All it took was a single password breach for criminals to demand, and receive, a US$4m ransom. Although the ransom might sound costly, the wider damage to revenue and reputation for a giant like Colonial Pipeline will ultimately be much higher. Even more recently, the Kaseya case highlighted the exposure that businesses can have through their supply chains and service providers. One recent report estimated that the Kaseya attack had infected over one million endpoints, with the ransom set at US$70m.
Colonial Pipeline Co was fortunate in having a potential ‘quick fix’ option: to pay the ransom. That situation might soon change if laws banning the payment of ransoms start to be passed in various countries. In Australia, there have been calls for mandatory notification of ransomware attacks; and in the US, the SEC and OFAC are looking at banning ransom payments altogether. Interestingly, this may not mean much change for some. In a number of cases already, despite ransoms being paid, the decryption process has been so slow that companies have had to rely on backups and their own safeguards in order to return to BAU.
Cyber insurance helps businesses manage two of their biggest recovery challenges: getting back up and running quickly and reducing disruption. Insurers, however, are increasingly demanding evidence of operational security controls, and for some customers even co-insurance of cyber risk where those controls are less apparent. Everything points to the likelihood that premiums will increase even further for organisations that are less well defended. So getting your cyber risk management capabilities in place may be more important than you think: you may need them to get insurance at all, and you will certainly need them if you can’t.
The energy, oil and gas sectors face some specific challenges. They have extensive and often remote networks to defend, with IT assets at drilling platforms and production facilities, often interconnected back to HQ over both public and private infrastructure. Inevitably, cyber security efforts are less rigorous at some of these remote sites, so security controls like multi-factor authentication are a particularly important defence for remote IT facilities.
Any relaxation of security at remote facilities is inevitably seen by an attacker as an opportunity to access assets which would otherwise be protected more rigorously back in HQ. As with environmental and other risks in the energy, oil and gas sectors, letting your guard down at a remote site can present a weak link in your risk management defences, and as a result, a costly breach to clean up and make good.
The sheer number and variety of security devices and systems in use can also pose challenges, as they provide an almost endless number of points through which an attacker can gain access; encrypting even one part of a system can render it useless. Colonial’s weak link was its billing system, rather than the technology that controlled the pipeline; but the interconnectivity of the systems meant that the pipeline network itself had to be isolated to limit the damage.
In our changing world, if paying ransoms is outlawed or too costly, and insurance becomes less of an option, the energy, oil and gas industry will need to improve its cyber risk management capabilities.
Anti-virus software and network defences, alongside the rise of endpoint detection and response, can certainly help businesses manage attacks. But these solutions are reactive in that they rely on detecting the attack as malicious in the first place. What if your endpoint solution misses the attack without warning? Do you have a ‘defence-in-depth’ strategy or is there a single point of failure? Do you have visibility to know what’s happening? Are there other controls in place that can mitigate the threat? More attention must be given to ‘layering’ your defences to prevent or at least limit successful ransomware attacks before they do serious damage.
There are three elements of a cyber-attack sequence to focus on. The first is the prevention of any initial infection; and the second, containment or limitation of the spread, if one does occur. This then, needs to be coupled to the third element, recovery, which allows systems and data to be restored in the event of the failure of the other controls. The principles of effective risk management apply – triage the risks and manage them accordingly.
There are some important safeguards organisations can adopt to support each of these elements:
Monitor your controls closely. If one aspect of the chain of control stops working, IT teams need to know quickly so they can respond. A ‘cyber culture’, and making cyber security a board-level issue, will improve overall corporate preparedness.
Accountabilities for cyber security are changing. The board must receive reports that provide clear visibility of these controls, or KPIs, of the security posture of their environment. The measurement of these KPIs must become part of an active cyber security risk management process. Being able to monitor your readiness and assess your risk across these KPIs provides a ’multi-point’ early warning system and confirmation that an effective cyber security program is in hand.
The energy, oil and gas sectors face many challenges and there is no easy fix for cyber security risk management. A big ransomware attack can disrupt supplies and impact broader operations for a long time, as Maersk found to their cost.
The best way to protect an organisation is with strong cyber defences and controls, backed up by regular checks to mitigate any identified shortcomings as necessary. If one control fails to identify the attack, not all is lost, as other subsequent controls are available to limit its access and the progress of any impact. That way the risk of a successful attack is minimised and hopefully you’ll be on the front foot in an attack well before any disruption to your systems and operations.
Article originally published in Energy, Oil & Gas Magazine.
How to deal with a ransomware attack is currently a matter of some debate.
There is a school of thought that paying the ransom is a bad idea because it rewards the criminal and can be used to fund further attacks, possibly even on the same organisation. Many organisations run a counter argument which suggests that to get systems and data back up quickly and resume services, paying the ransom is the cheapest way out of a bad situation.
There is an increasing number of stakeholders in this decision which complicates the matter enormously.
Firstly, there are increasing moves by governments and regulators to require the reporting of ransom payments or even make them illegal altogether. This fits well with an international desire to stamp out this type of transnational crime. It may, however, create existential considerations for some seriously affected organisations. Some insurers, like AXA SA, are now refusing to write policies that reimburse ransoms paid by their customers in France; so, in the absence of decryption keys, reinstatement efforts and a return to BAU look unlikely for many victims.
Secondly, there is the assumption that the decryption key or system the ransom pays for will, in fact, work. It probably will (although not always); however, in the case of Colonial Pipeline Co, the decryption process was so slow that they had to switch to backups anyway.
Thirdly, and briefly returning to insurers’ roles in supporting commercial cyber risk management efforts, the increasing prevalence of ransomware is resulting in underwriters seeking more and more evidence of the use of security controls in the organisations they insure. It is of concern that in the absence of such controls, and in the event of a ransomware attack, insurers may refuse to pay either the claim or the ransom money.
The measure of success of any recovery from a ransomware attack is your ability to resume BAU as quickly and painlessly as possible. Again, there is no silver bullet, but successful management of these two key controls – reliable backups and a tested incident management plan – will significantly increase your likelihood of reinstating your business and data systems. As a result, these controls will limit disruption and your potential losses as well as, hopefully, the need to pay a ransom.
With ransomware being the topic of international summits, the insurance industry in such flux and future regulatory challenges to be resolved it would seem smart to have a backup plan – how your business could survive in the event of the worst possible set of circumstances.
Regular, comprehensive, tested and accessible backups can form the basis of the reinstatement of your business. Obviously they need to be secured and safely isolated from the rest of your networks and systems, but they also need to be sufficiently accessible to enable the restoration of business systems and data and ensure a timely return to BAU.
This includes backups of servers, file stores, workstations and, in particular, systems where the integrity of the platform itself is vital – like domain controllers. Losing one of those can result in a massive amount of re-work. Don’t assume, for example, that because systems are resilient or mirrored you will be OK. The ransomware might spread to all nodes in a cluster, or encrypted data could be replicated across the very technologies you believed would save you.
Regular, comprehensive and reliable backups of every data store and enterprise system are still the best remedy for large-scale data loss or corruption. Having a secure and tested set of business system and data backups is the best form of insurance you can have.
Assume that you have the ransom money (and are allowed to pay it) OR a good set of backups. Is that all you need?
In short, no.
A ransomware outbreak requires solid management just like any other cyber security incident. Reinstating your business systems and data to support BAU without impacting your business operations and stakeholders is not easy.
There will be the effects of disruption to manage, systems affected by the malware itself and those which have been disconnected to protect them. There will be communications to customers, stakeholders, regulators, law enforcement, insurers and governments to manage. If paying a ransom, who will negotiate with the attacker, and who has the sign off for a multi-million-dollar payment? These are not routine activities.
The vulnerability or security weakness that was exploited will require investigation so it can be fixed, patched or corrected to avoid further infections. Infected systems must be isolated from the network so they don’t infect systems that have not yet been affected, or re-infect the ones you are gradually restoring and bringing back on line.
Planning for how the incident will be managed is essential; as with any other plan, identify who’s in charge, practise it, establish the prerequisites and dependencies, then test it again. It all takes time, but if the plan works it sets the platform for a successful reinstatement of BAU whether you pay the ransom or not.
The recovery from a ransomware attack has a number of moving parts. But you will need backups. They might save you; they might be the only thing that does – especially if you can’t pay the ransom, or the decryption solution isn’t workable.
The wider implications of a malware-infested environment, of disruption and losses of service, of needing to communicate and to arrange rapid access to funds, forensic teams or consultants all mean having a sound, and tested, incident management plan.
As we said in the first two blogs in this series (see here and here) – having controls and safeguards is important; making sure they all work effectively is equally vital. It’s too late to test your incident management plan and system and data backups after the fact.
In a previous blog, we talked about the rising threat of ransomware, how many solutions and approaches are geared towards detecting it, and the key things organisations can do to prevent a ransomware attack.
We spoke about some recommended prevention controls and their prospect of success. We also, however, cautioned that there are no silver bullets and that no defence on its own is perfect. It’s for that very reason that it is wise to make plans and have controls in place to ensure that if ransomware does get through, its spread and effect are limited. It’s all about the defence in depth that can be gained through the deployment of multiple security controls. Clearly, one infected workstation is bad, but a thousand is undeniably worse.
“Containing” ransomware (in fact any attack or virus) is about limiting its ability to spread or to infect other systems and data; sometimes referred to as lateral movement. The four approaches below have been found to be the most useful defences against ransomware, if you have been unlucky enough to find it on an infected system.
In many respects they too are preventive controls, in that they are intended to limit the extent of an attack, but for this family of threats they are often containment countermeasures for “stage two” or “propagation” of an attack.
This comprises two aspects: first, minimise the number of people that have access to administrative accounts and/or the amount of time they have access to them (e.g. for the duration of a change or a maintenance window). This is good practice – the principle of “least privilege”.
Secondly, limit the potential exposure to malware that people with admin accounts might have. This means turning off the most dangerous features and disabling the riskiest accesses that can be performed by those with admin credentials. For example, don’t give admin accounts an email address – if they need to use email, use their standard account. Don’t allow admin accounts to access the Internet, browse the web or access social media.
Admin accounts should only be used when access for maintenance is needed; if that’s the limit of their use and someone using an admin account does stumble upon something malicious, it can’t penetrate the network using the very high level access rights of an administrator.
Limit the use of administrator accounts as much as possible to reduce the risk of ransomware spreading across your systems.
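An audit of this control can be automated very simply. The sketch below flags admin accounts that also hold the risky entitlements discussed above (email or internet access). The account records, field names and the “jdoe-adm” entry are illustrative assumptions, not a real directory schema; in practice you would pull the equivalent attributes from your directory service.

```python
# Sketch: flag admin accounts that also have email or internet access.
# The records below stand in for data pulled from a real directory.
accounts = [
    {"name": "svc-backup", "admin": True,  "email": False, "internet": False},
    {"name": "jdoe-adm",   "admin": True,  "email": True,  "internet": True},
    {"name": "jdoe",       "admin": False, "email": True,  "internet": True},
]

def risky_admins(accounts):
    """Return the names of admin accounts holding risky entitlements."""
    return [a["name"] for a in accounts
            if a["admin"] and (a["email"] or a["internet"])]

for name in risky_admins(accounts):
    print(f"REVIEW: {name} is an admin account with email/internet access")
```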
Typically for ransomware the initial vector of attack is a direct network connection or via a malicious attachment, email or web page containing the initial payload.
Once that initial infection has activated and self-installed, ransomware typically seeks to spread across the network from its initial point of entry. It doesn’t spread by sending follow-up emails to all the other people in the organisation; more likely it will try to connect from system to system directly – from one host to the next, unbeknownst to the users. This can occur through several means, but if there is an unpatched operating system vulnerability that the code can identify across multiple hosts, it is relatively easy, and likely to work on every system.
If the first host and system gets infected, ransomware can quickly propagate across the network by exploiting OS vulnerabilities on adjacent interconnected systems on the same network. Maintaining patched operating systems is therefore a very effective defensive control.
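Keeping on top of that control means knowing which hosts are behind. A minimal sketch of fleet patch tracking follows: compare each host's OS build against a minimum known-patched build and report the stragglers. The host names and build numbers are illustrative assumptions, not real inventory data.

```python
# Sketch: report hosts running OS builds below a known-patched baseline.
# Build tuples compare element-by-element, so ordinary tuple ordering works.
MIN_PATCHED_BUILD = (10, 0, 19045)  # hypothetical "fully patched" build

hosts = {
    "web-01": (10, 0, 19045),
    "db-01":  (10, 0, 17763),
    "app-02": (10, 0, 19044),
}

def unpatched(hosts, minimum=MIN_PATCHED_BUILD):
    """Names of hosts whose build sorts below the minimum patched build."""
    return sorted(name for name, build in hosts.items() if build < minimum)

print(unpatched(hosts))  # ['app-02', 'db-01']
```

Every name on that list is a host an infection on the same network segment could potentially reach.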
Multi-factor authentication (MFA) means that an attacker requires something other than a single stolen password, compromised account or other set of credentials to move the ransomware laterally from system to system or to gain escalated privileges. For normal users MFA can be a challenge with an operational overhead. Some systems may not support MFA at all.
When taking a risk-based approach, however, multi-factor authentication is a very effective way to protect more exposed access points such as remote access/VPN gateways (Colonial Pipeline was compromised using a single factor remote login at one such access point). MFA is invaluable for system administration accounts where the usage pattern is less frequent, but the impact of compromise can be significant.
Using MFA to protect sensitive or exposed access points and to control admin access puts operational barriers in the path of a ransomware attack.
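To make the mechanism concrete, here is a minimal sketch of the time-based one-time password (TOTP, RFC 6238) scheme that underpins many MFA apps, using only the Python standard library. The secret below is the published RFC 6238 test key, not a production value, and a real deployment would also allow for clock drift between client and server.

```python
# Sketch: TOTP (RFC 6238) code generation and verification.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_code(secret_b32, submitted, now=None):
    """Constant-time comparison of the submitted code against the expected one."""
    return hmac.compare_digest(totp(secret_b32, now=now), submitted)

# RFC 6238 test secret ("12345678901234567890" in base32); at t=59s
# the 6-digit SHA-1 code is 287082.
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, now=59))  # 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, a stolen password alone is no longer enough to move laterally through an MFA-protected access point.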
Anti-virus and end-point protection may seem like the place to start for ransomware attacks; the reality, however, is that all of these controls are baseline or foundational controls. Anti-virus and endpoint protection is key but, as with anything else, it is not a silver bullet – there are numerous accounts of successful attacks involving code, exploits or malware that have occurred despite that protection being operational.
Obviously, endpoint and anti-virus solutions should be current but even then, some malware and ransomware attacks seek to circumvent or disable the detection capabilities of anti-virus solutions; and it’s not unknown for attackers to undertake direct intrusions into the network, rather than seek to use malware code to gain access to a target.
Anti-virus solutions at the gateways and endpoints, however, provide significant protection against the spread of ransomware and other forms of viruses and malware. They must be regularly updated to be fully effective, and there are now emerging technologies that watch for suspicious behaviour on workstations as well as specific cases of known virus code.
Anti-virus solutions and end-point protection limit the intrusion and spread of malware of all types, and are therefore another pivotal defence against ransomware propagation.
The four controls described in this blog are the major components of the containment controls needed to limit the spread of ransomware.
In the first blog of this series we looked at the ways organisations could defend themselves from the initial stage of attack and then, here, we have canvassed the ways that an attack can be contained. Of course all 10 of these controls act in concert to prevent and limit the spread of ransomware – but businesses need to defend patient “zero” as well as patient “one” onwards.
As we said in the first blog, having controls that you can trust and making them measurable and effective is key. A ransomware attack will highlight at least one of the weaknesses in your cyber security posture, but you need to find them all, preferably ahead of time, so you can avert potentially catastrophic losses.
It’s important to remember that auditing and assessing your security controls should be regular and ongoing processes. Every vulnerability, every patch, every new admin account or newly provisioned server could introduce the weak link that allows access to a ransomware attack. Depending on the size and nature of your business operations, annual or even quarterly assessments may not be frequent enough to secure yourself in such a rapidly changing risk environment.
There is so much interest in ransomware at the moment that it almost feels like it’s the only cyber security problem we have to solve. While that certainly isn’t the case, there is undoubtedly a renewed importance in being able to deal with this increasingly debilitating threat.
Much time has been spent, as is often the case in cyber security, looking at how to detect it – mostly monitoring the network and end-points for host or session activity that indicates compromise. Of course, you want to be able to detect a ransomware attack. But wouldn’t it be better to prevent it in the first place?
In a series of posts (this one being the first of three), we will look first at ways to prevent ransomware attacks. Then we will move on to how to limit and contain their effects if you do get infected. Finally, we’ll look at the recovery options if things just don’t go to plan.
In the vast majority of cases, ransomware attacks start in one of two ways: a direct network connection, or a malicious payload delivered via an email, attachment or web page. If you can cover both these bases there is a good chance that an early “patient zero” infection can be avoided.
When we analyse these vectors, we can see that had better controls been in place, many attacks could have been avoided completely. The good news is that, with little more than a handful of operational security controls, these points of ransomware entry can be protected effectively.
From the cases we’ve seen (including here) and other research (such as this) there are six really good anti-ransomware defences to prevent attacks. In many cases these are focussed on stopping the initial malicious payload the attacker is seeking to deliver. You can, of course, add in more controls but these are the ones that are generally recommended to limit your risk of attack:
The settings for user applications, particularly Internet-facing ones such as browsers and email clients, are often a major point of weakness – and often also the easiest things to set in a central policy (assuming it is then universally applied).
The most obvious and pertinent examples are the ability for emails and web pages to run active local code (Java, Flash etc.). Removing this can sometimes reduce website functionality but, importantly, it prevents attacks that trick a user’s system into running local code.
In short, limiting what external content is able to do on a user’s system when it is accessed from a web page or an email.
Most malware is received as an attachment or a download or at the end of a link, and will seek to self-install and run various bits of code. One way to prevent this is to control users’ abilities to install and execute their own software. This is not dissimilar to the types of policies that are often put in place anyway to prevent the installation of unlicenced software, or random applications that could expose data (for instance cloud storage applications).
If “normal users” cannot install and run other applications, then neither can the malware sender or ransomware creator. The result is that the attack is stopped in its tracks – even if the user is “deceived” into opening a malicious attachment in the first place.
The value of this control is increased further by its ability to limit the many data theft attacks that rely on installing software, possibly the cloud storage type mentioned above, or other file transfer utilities.
Preventing installation and execution of ransomware is a big enough reason to control applications and software in this way.
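A hash-based allow-list is one common way this control is implemented (tools such as Windows AppLocker work on similar principles). The following is a minimal sketch only, with a hypothetical approved-hash set standing in for a managed policy store:

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of SHA-256 hashes for approved executables.
# In a real deployment this would be distributed from a central policy store.
APPROVED_HASHES = {
    # SHA-256 of an empty file, used here purely as a placeholder entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_execution_allowed(path: Path) -> bool:
    """Permit execution only if the file's hash appears on the allow-list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES
```

Because the check is on content (hash) rather than file name or location, a renamed or relocated malicious binary still fails the check – which is exactly why allow-listing defeats the “deceived user” scenario above.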
It is important to make sure OS patches are applied, although in the case of ransomware we have seen that OS-level vulnerabilities are more commonly used to spread the malware than to allow it entry in the first place.
Applications, however, are the more likely point of attack for ransomware attackers. The reason is that when content arrives (email, web browser, document, PDF file) it is an application that loads it.
One example is Adobe Reader and PDF files, which have proven time and time again to be a common way in which malware is introduced. So closing this route of attack pays real dividends.
If applications have vulnerabilities that are not patched, there is a real danger that they can be exploited by any malicious file or document to allow ransomware to gain a foothold in your enterprise.
As with active code and embedded malware in web pages and emails, another vector for ransomware infection and ingress is document files – Word documents, Excel spreadsheets etc. These files can contain macro code which can be turned against a user who has unwittingly opened an innocent-looking document or spreadsheet. This can happen easily, so Microsoft applications should be configured to block all but “trusted” macros.
Preventing macros (i.e. code) running within applications is another very good way to limit the risk of ransomware, and other forms of malicious content entering your environment.
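The underlying check can be surprisingly simple: in modern Office (OOXML) formats, VBA macro code lives in a `vbaProject.bin` part inside what is essentially a zip archive. As an illustrative sketch (not a substitute for the Office policy settings themselves), a gateway-style scan might flag macro-bearing documents like this:

```python
import zipfile

def contains_vba_macros(path: str) -> bool:
    """Flag OOXML documents (docx/docm/xlsx/xlsm) that carry a VBA project.

    In these formats, macro code is stored in a vbaProject.bin part
    inside the document's zip container.
    """
    if not zipfile.is_zipfile(path):
        return False  # not an OOXML container; out of scope for this check
    with zipfile.ZipFile(path) as z:
        return any(name.endswith("vbaProject.bin") for name in z.namelist())
```

A check like this could quarantine macro-enabled attachments at the mail gateway, complementing (not replacing) the client-side “trusted macros only” policy.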
Cyber security awareness programmes are acknowledged as an important driver of cultural change and as a result are becoming more common. While they vary in quality, approach and even style of delivery, their ability to raise the level of cyber security knowledge is well-established.
The challenge with staff awareness, however, is that people can still be lured into making mistakes, and skilled social engineers can often entice quite capable people to do things they would not otherwise do. Adversaries can persuade even recently educated staff to believe that a malicious payload is in fact benign. Telling people to avoid clicking on suspicious links or unexpected and suspicious attachments only goes so far. If the attacker can induce the victim to click on a link or attachment, security teams need to rely on other technical controls as part of the defence in depth strategy.
Cyber security awareness programmes matter, but they are not a silver bullet. Refresher training is necessary, but it needs to be accompanied by other controls. You need a mitigation strategy in place to address the near certainty that someone, at some point, will click on a link or allow an attachment to open and execute.
Lastly – or firstly, depending on your point of view – is the network perimeter. Defence of the perimeter is a vital enforcement point, as it is where access attempts are often targeted – as in the case of Travelex (out-of-date VPN devices) or Colonial Pipeline (single-factor authenticated access). Perimeter devices can also be equipped and configured to control the types of content users see.
If you have the ability to control access and prevent administrative users accessing the web, or if you can maintain a list of addresses with malicious content/bad reputations and filter the content or URLs that people access, then you can prevent a significant number of ransomware attacks.
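As an illustrative sketch of the URL/reputation-filtering idea (the domain names below are invented, and a real deployment would consume a threat-intelligence feed rather than a hard-coded set):

```python
# Hypothetical blocklist; in practice this would come from a
# commercial or open threat-intelligence reputation feed.
BLOCKED_DOMAINS = {"malware-delivery.example", "bad-reputation.test"}

def is_blocked(hostname: str) -> bool:
    """Block a host if it, or any parent domain, is on the blocklist.

    e.g. cdn.malware-delivery.example matches malware-delivery.example,
    so attackers can't evade the filter with throwaway subdomains.
    """
    parts = hostname.lower().rstrip(".").split(".")
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))
```

The parent-domain matching is the important design choice: reputation data is usually recorded per registered domain, while the URLs users actually click on carry arbitrary subdomains.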
Collectively these controls are highly effective. Of course, you want to detect ransomware, but preventing it in the first place is a better outcome. Putting up these barriers (which often only cost the time it takes to configure them) is a vital line of defence.
As with any risk management strategy, you must plan for the fact that sometimes defences like these will fail. This is the very essence of defence in depth and why, in the second blog in this series, we will look at how to deal with that circumstance when it occurs.
Once you have a set of controls in place, you can monitor these to ensure that they are working and correctly configured to provide an effective defence. This assurance is vital and forms a key part of a cyber security risk management process that will strengthen your oversight of your internal network as well as those of your 3rd party suppliers. Furthermore, cyber insurers are increasingly expecting organisations to have these basic “cyber hygiene” controls in place with evidence of their operation before taking on risks or paying out on policies.
As a starting point, these six preventive controls are simple, effective and widely recommended to assist in the fight against ransomware.
A significant cost of the last 18 months of turmoil for many organisations is revealed in a joint cyber security advisory published this week. Organisations everywhere have been challenged by cyber adversaries and their ongoing exploitation of a number of “reliable go to” security vulnerabilities. The rapid shift to remote working for many of us has challenged the ability of cyber professionals everywhere to maintain their defensive efforts; and those chickens are coming home to roost.
In the joint advisory, Cybersecurity and Infrastructure Security Agency (CISA) and FBI in the US, Australian Cyber Security Centre (ACSC) in Australia and National Cyber Security Centre (NCSC) in the UK have shed some light on how the criminal fraternity is adapting and using many of the core IT systems we have been increasingly reliant on, to further its own goals.
In a list of known information security vulnerabilities, identifiable by their Common Vulnerabilities and Exposures (CVEs), the advisory lists the top 30 vulnerabilities that are longstanding and were routinely exploited by malicious cyber actors in 2020. With some new additions to the list, those same vulnerabilities continue to be widely exploited into 2021.
In a sign of the times, the joint alert (AA21-209A) acknowledged that remote access to systems and data, so prevalent during the COVID-19 pandemic, was:
(a) a common target for attackers, and
(b) more vital than ever to businesses working remotely.
The advisory noted that:
“Four of the most targeted vulnerabilities in 2020 affected remote work, VPNs, or cloud-based technologies.”
Disappointingly, the advisory notes that with increased remote working, many already disclosed vulnerabilities continue to be used by adversaries to compromise unpatched systems.
“Many VPN gateway devices remained unpatched during 2020, with the growth of remote work options challenging the ability of organizations to conduct rigorous patch management.”
So, what did the attack surface popularity contest look like in 2020? The table below lists the CVE references and affected products in the advisory.
| Vendor | CVE | Vulnerability type |
| --- | --- | --- |
| Citrix | CVE-2019-19781 | arbitrary code execution |
| Pulse | CVE-2019-11510 | arbitrary file reading |
| Fortinet | CVE-2018-13379 | path traversal |
| F5 Big-IP | CVE-2020-5902 | remote code execution (RCE) |
| Microsoft | CVE-2020-0787 | elevation of privilege |
| Netlogon | CVE-2020-1472 | elevation of privilege |
The list of remediations makes sobering reading, not least because of the number of times the mitigation strategy advises: “deploy and install a patch” or “upgrade to the latest version”.
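That remediation advice boils down to knowing where the affected products sit unpatched in your estate. A minimal sketch of that check, assuming a hypothetical asset inventory that simply records whether each affected product has been patched (the product names and inventory format are invented for illustration):

```python
# CVEs from the advisory, keyed by affected product family.
ADVISORY_CVES = {
    "Citrix ADC": ["CVE-2019-19781"],
    "Pulse Connect Secure": ["CVE-2019-11510"],
    "Fortinet FortiOS": ["CVE-2018-13379"],
    "F5 BIG-IP": ["CVE-2020-5902"],
}

def exposed_cves(inventory):
    """Given a hypothetical inventory mapping product -> patched (bool),
    return the advisory CVEs still open in the estate."""
    findings = []
    for product, patched in inventory.items():
        if not patched:
            findings.extend(ADVISORY_CVES.get(product, []))
    return sorted(findings)
```

Even this toy version makes the point: the hard part is not the lookup, it is maintaining an inventory accurate enough to feed it.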
The importance of mitigating such vulnerabilities promptly is underlined in the discussion of a common VPN vulnerability:
“The CVE-2019-11510 vulnerability in Pulse Connect Secure VPN was also frequently targeted by nation-state APTs. Actors can exploit the vulnerability to steal the unencrypted credentials for all users on a compromised Pulse VPN server and retain unauthorized access after the system is patched unless all compromised credentials are changed.”
So, if an unpatched system is compromised, the attackers can harvest every username and password; even if the system is subsequently patched, those credentials still work and the attacker retains access long after the patch is applied. Unless the organisation also changes every user’s password, the system remains compromised. That is potentially a huge task – brought on purely by a delay in rolling out the patch as soon as it was available.
Clearly, keeping up to date with software vulnerabilities has never been more important. The obvious questions, when faced with the established knowledge that known, published vulnerabilities continue to be exploited, are: Why aren’t these holes being fixed faster? Why are operations teams, IT security teams and IT admins leaving themselves in this position? The implications for the business can be massive; so, who needs to take action within your organisation?
In light of these revelations, are senior managers and directors sufficiently aware of the state of their security defences and the levels of protection they have from attack?
For 2021, the advisory reiterates the 2020 list and adds several additional CVE references.
Cyber actors continued to target vulnerabilities in perimeter-type devices such as firewalls and VPN gateways. In addition to the 2020 list, organisations should prioritise patching for the following CVEs, which are known to have been exploited:
Once again, at the risk of repeating itself, the alert advises security teams to download and apply the patches, upgrade affected versions and check configurations.
There is a clear and recurring message here for both public and private enterprise, and it’s one the security agencies clearly want to emphasise. Organisations are continuing to leave themselves vulnerable to attack; and some exploits are so frequent, and so successful, that the authorities have published a “league table”.
“The advisory published today puts the power in every organisation’s hand to fix the most common vulnerabilities, such as unpatched VPN gateway devices,” remarked Paul Chichester from the UK NCSC.
Patching to stay on top of vulnerabilities is hard. No question. Some systems can be managed by central software management, but others can’t. There are always challenges finding time to patch and reboot systems, particularly those that operate 24-hours a day. With so many technologies and so many patches the work may feel never ending but, as this advisory highlights, the cost of not staying on top of your patching controls can seriously impact your operations.
The resultant risks to the business from these sorts of vulnerabilities are becoming so significant and the operational implications so great that senior executives and directors, responsible for the overall management of the business, urgently need better risk information. They need visibility of the state of their security controls and measures of any risk resulting from any vulnerabilities.
With objective measurement of the size of these risks, those responsible for their effective management can quickly understand the nature of their exposure and execute effective mitigation strategies. This, of course, is not the sole responsibility of the senior executive or director; however, as the accountable party, they can insist on clear oversight of their cyber risk environment. From SOC and IT teams up to executives and boards, there is an imperative to invest in technologies that provide clear visibility and accurate measurement of where patches are missing, or other unmitigated vulnerabilities exist, so that cyber risk can be managed just like any other risk the organisation faces.
Despite many of the disruptions caused by COVID-19 over the last 12 months, it remains vital that organisations maintain their cyber security governance. Maintaining security defences and avoiding security vulnerabilities will hopefully prevent the unwelcome attention of auditors and regulators; or worse, a successful attack by hackers.
Too many boards still lack visibility or understanding of the problems, while internal audit functions can lack the specialist skills to challenge boards and management to plug urgent gaps.
Geoff Summerhayes, APRA Executive Board Member
One of the challenges in cyber security is how to measure the status of security controls to quantify cyber risk – even controls that should be ubiquitous, baseline and foundational.
This problem has a number of dimensions – for example when looking at maturity it is often necessary to ensure that a technical control (which might be perfectly robust) is governed by a policy and actually generates audit information that enables it to be verifiable.
More common however, is the need to ensure that a technical configuration is (a) correct (i.e. matches policy, intent or compliance requirements) and (b) has been implemented (and to what degree).
The ability to measure this can be difficult in highly distributed environments. This can lead to readings being taken that are based on assumptions, and it is typically these assumptions that are found to be flawed when problems later emerge.
One way in which this can occur is in the configurations or versions of endpoint software on the network. An enterprise-wide Windows rollout, or browser update might have been implemented, but did it reach all the systems it was supposed to cover?
Old laptops bought for specific purposes, systems that control physical access and are never directly logged into, and contractors or other corporate guests using their own specialised equipment can all be skipped when changes are applied. These, then, are the very vulnerable systems that attackers aim to locate and target – not the several thousand well managed and well patched workstations.
To quantify this problem, there are now several service providers trying to gauge security performance using external sources of information and assumptive benchmarks. However, this in itself is counterintuitive – how can you derive internal network configuration information without looking inside the network itself?
The answer (in the example we are discussing) is that browsers reveal information when they connect to web pages hosted on a server. See https://www.whatismybrowser.com for an example. This blog post is being written on a system running:
That information wouldn’t be any use if you had to trawl every company the users had ever connected to in order to harvest their browser details. But when you visit a web page, the adverts displayed often come from a small set of web advertising companies – and so these sites do hold a vast number of end-user browser details across all users, irrespective of the actual web sites visited.
Secondly, there is information publicly available about which network addresses are owned and used by companies. This can be used to identify the organisation and geographic location. It’s often used in security to convert an IP address to a location but it can also be used to map the information above to an organisation and even a particular office.
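As a sketch of that second step, the IP-to-organisation mapping can be illustrated with Python’s standard `ipaddress` module (the address blocks and organisation names below are invented; real services build this from public registry and geo-IP data):

```python
import ipaddress

# Illustrative only: a toy mapping from published address blocks
# to the organisations and offices that own them.
IP_OWNERSHIP = {
    "198.51.100.0/24": "ExampleCorp, London office",
    "203.0.113.0/24": "ExampleCorp, Sydney office",
}

def organisation_for(ip):
    """Map an observed source IP to an owning organisation/office,
    as the external scoring services do with registry data."""
    addr = ipaddress.ip_address(ip)
    for cidr, owner in IP_OWNERSHIP.items():
        if addr in ipaddress.ip_network(cidr):
            return owner
    return None
```

Joining this mapping against the advertisers’ browser datasets is what produces the external “score” – and, as the next paragraphs show, it is exactly this join that smuggles in the flawed assumptions.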
On the surface this seems to answer the question – we have one dataset of browsers and OS versions linked to IP addresses and another linking IP addresses to companies and offices. But does this provide a reliable way to externally connect browser/OS versions and patching status to a specific organisation?
The reality is that it does give an answer, but that answer relies on assumptions, and quite often flawed ones. And that means that any decisions about the state of security controls are similarly flawed.
Some organisations allow visitors to connect to their network in a permitted way, which means that any Internet access from those browsers and workstations will cloud the organisation’s browser/OS version results: the connections to the outside world (and hence to the advertising browser databases) originate from the organisation’s external network address, just not from its own managed systems.
There are many such scenarios – guest WiFi networks, external users connecting devices for meetings, contractors using their own systems, employees with mobile devices permitted on corporate networks, guests in hotels. An organisation can easily appear to be bad at patching and software configuration just because it has hosted a major careers event for hundreds of students.
In addition, users that are part of the organisation might be out of the office, work from home or at customer sites or be in smaller offices where the IP/network provision is given to them by the telecoms provider to the building. These systems, being away from the corporate network, might be the ones that are not regularly updated or patched and hence the riskiest, but because they connect to the Internet from hotels, client sites, Starbucks branches or restaurants they are never associated with the enterprise risk scores that a limited external assessment produces.
In essence, we are making decisions based on results from an incomplete and polluted sample.
The solution is to look within the network itself, where the systems that we want to assess or measure can be directly examined. In the example we’ve been using the devices on the network are easily visible, and it is possible to discern their role or ownership much more easily.
It would be very simple to see a distinction between systems that are on a guest WiFi network (the student conference, the visiting business partners etc.) and the corporate network (where you have employees using IT-issued kit), and consequently include or exclude them in an assessment as appropriate.
If you are trying to validate operating system versions, patches, browsers that are in use, what update schedule they have as part of an audit activity or assess a third party supplier, having the ability to collect metrics on security controls from within the network is crucial.
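As an illustration of the kind of metric this internal view enables, here is a minimal sketch, assuming a hypothetical export from a central management console listing each managed device’s hostname and OS build:

```python
def patch_coverage(devices, required_build):
    """Return the fraction of devices at or above the required build,
    plus the lagging hostnames.

    'devices' is a hypothetical export from a central management
    console: a list of {"host": str, "os_build": int} records.
    This is the direct, per-device measurement that an external
    scan of advertising datasets cannot reliably produce.
    """
    lagging = [d["host"] for d in devices if d["os_build"] < required_build]
    covered = 1 - len(lagging) / len(devices) if devices else 0.0
    return covered, lagging
```

The point of the sketch is the provenance of the numbers, not the arithmetic: every record comes from a system you manage, so guest devices and conference visitors never pollute the denominator.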
One perceived problem with this approach is the level of intrusion that the data gathering involves (this is the rationale for using externally visible information). The concern is that if internal systems are being scanned, probed, connected to and interrogated directly, this could put a load on the network and could cause other problems, maybe disrupting activities or triggering security controls that aim to detect vulnerability probes or scans.
However, it is only by interrogating the central management systems that control security that you get an easy, single point of security control data around (in this case) patching and version information, backup schedules, malware defences and application usage.
The only exceptions are those systems that fall outside of that umbrella as a result of deliberate exclusion for operational reasons.
To measure cyber risk on a continuous (rather than one-off) basis, you need accurate information that is complete and trustworthy, and you need to be able to collect this from single points of reference rather than interrogating every individual device with a noisy scanning solution. You need to be able to work across network boundaries enabling complex business units to police themselves and large organisations to monitor their external third party supply chains, and you must focus on issues that provide the highest value in security risk terms (like patching and software/OS versions).
Relying on external repositories derived from datasets built on assumptions might seem easy, but it is a sure way to get flawed data. It’s not quite guesswork, but it risks providing a view, upon which decisions about risk are made, that is not valid and hence unsafe.
If the choice is a questionable external view or an internal control assessment, then data gathered from within the network itself will always be closer to the truth and the better source of information to use.
By objectively measuring cyber risk from the ‘inside-out’, operations and management teams can reliably verify their security posture and even manage it as part of an enterprise-wide improvement programme – the first step to taking control of your security posture.