How to deal with a ransomware attack is currently a matter of some debate.
There is a school of thought that paying the ransom is a bad idea because it rewards the criminals and can fund further attacks, possibly even on the same organisation. Many organisations run the counter-argument: to get systems and data back up quickly and resume services, paying the ransom is the cheapest way out of a bad situation.
There is an increasing number of stakeholders in this decision which complicates the matter enormously.
Firstly, there are increasing moves by governments and regulators to require the reporting of ransom payments, or even to make them illegal altogether. This fits well with an international desire to stamp out this type of transnational crime. It may, however, create existential considerations for some seriously affected organisations. Some insurers, like AXA SA, are now refusing to write policies that reimburse ransoms paid by their customers in France; so, in the absence of decryption keys, reinstatement efforts and a return to BAU look unlikely for many victims.
Secondly, there is the assumption that the decryption key or system the ransom pays for will, in fact, work. It probably will (although not always); in the case of Colonial Pipeline Co, however, the decryption process was so slow that the company had to switch to backups anyway.
Thirdly, and briefly returning to insurers’ roles in supporting commercial cyber risk management efforts, the increasing prevalence of ransomware is resulting in underwriters seeking more and more evidence of the use of security controls in the organisations they insure. It is of concern that in the absence of such controls, and in the event of a ransomware attack, insurers may refuse to pay either the claim or the ransom money.
The measure of success of any recovery from a ransomware attack is your ability to resume BAU as quickly and painlessly as possible. Again, there is no silver bullet, but the successful management of the two controls covered here – comprehensive, tested backups and a practised incident management plan – will significantly increase your likelihood of successfully reinstating your business and data systems. As a result, these controls will limit disruption and your potential losses as well as, hopefully, the need to pay a ransom.
With ransomware being the topic of international summits, the insurance industry in such flux and future regulatory challenges to be resolved it would seem smart to have a backup plan – how your business could survive in the event of the worst possible set of circumstances.
You need regular, comprehensive, tested and accessible backups that can form the basis of the reinstatement of your business. Obviously they need to be secured and safely isolated from the rest of your networks and systems, but they also need to be sufficiently accessible to enable the restoration of business systems and data and a timely return to BAU.
This includes backups of servers, file stores, workstations and, in particular, systems where the integrity of the platform itself is vital – like domain controllers. Losing one of those can result in a massive amount of re-work. Don’t assume, for example, that because systems are resilient or mirrored you will be OK. The ransomware might spread to all nodes in a cluster, or encrypted data could be replicated across all the same technologies you believed would save you.
Regular, comprehensive, reliable backups of every data store and enterprise system are still the best remedy for large-scale data loss or corruption. Having a secure and tested set of business system and data backups is the best form of insurance you can have.
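“Tested” is the operative word: a backup you have never verified is a hope, not a control. A minimal sketch of the kind of automated check worth running, assuming a hypothetical policy (a recorded checksum per backup and a 24-hour freshness window – both are illustrative, not prescriptive):

```python
import hashlib
import time
from pathlib import Path

# Hypothetical policy: a backup older than 24 hours counts as stale.
MAX_AGE_SECONDS = 24 * 60 * 60

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_backup(path, expected_digest, now=None):
    """Return a list of problems with a backup file (an empty list means OK)."""
    path = Path(path)
    if not path.exists():
        return ["missing: %s" % path]
    problems = []
    age = (time.time() if now is None else now) - path.stat().st_mtime
    if age > MAX_AGE_SECONDS:
        problems.append("stale: %s is %.1f hours old" % (path, age / 3600))
    if sha256_of(path) != expected_digest:
        problems.append("corrupt: %s does not match its recorded checksum" % path)
    return problems
```

A check like this still only proves the file survived intact; periodic full restore tests remain essential.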
Assume that you have the ransom money (and are allowed to pay it) OR a good set of backups. Is that all you need?
In short, no.
A ransomware outbreak requires solid management just like any other cyber security incident. Reinstating your business systems and data to support BAU without impacting your business operations and stakeholders is not easy.
There will be the effects of disruption to manage, systems affected by the malware itself and those which have been disconnected to protect them. There will be communications to customers, stakeholders, regulators, law enforcement, insurers and governments to manage. If paying a ransom, who will negotiate with the attacker, and who has the sign off for a multi-million-dollar payment? These are not routine activities.
The vulnerability or security weakness that was exploited will require investigation so it can be fixed, patched or corrected to avoid further infections. Infected systems must be isolated from the network so they don’t infect systems that have not yet been affected, or re-infect the ones you are gradually restoring and bringing back online.
Planning for how the incident will be managed is essential and, as with any other plan: identify who’s in charge, practise it, establish the prerequisites and dependencies, then test it again. It all takes time, but if the plan works it sets the platform for a successful reinstatement of BAU whether you pay the ransom or not.
The recovery from a ransomware attack has a number of moving parts, but you will need backups. They might save you; they might be the only thing that does – especially if you can’t pay the ransom, or the decryption solution isn’t workable.
The wider implications of a malware-infested environment, of disruption and losses of service, of needing to communicate and to arrange rapid access to funds, forensic teams or consultants all mean having a sound, and tested, incident management plan.
As we said in the first two blogs in this series (see here and here) – having controls and safeguards is important; making sure they all work effectively is equally vital. It’s too late to test your incident management plan and system and data backups after the fact.
In a previous blog, we talked about the rising threat of ransomware, how many solutions and approaches are geared towards detecting it, and the key things organisations can do to prevent a ransomware attack.
We spoke about some recommended prevention controls and their prospect of success. We also, however, cautioned that there are no silver bullets and that no defence on its own is perfect. It’s for that very reason that it is wise to make plans and have controls in place to ensure that if ransomware does get through, its spread and effect is limited. It’s all about the defence in depth that can be gained through the deployment of multiple security controls. Clearly, one infected workstation is bad, but a thousand is undeniably worse.
“Containing” ransomware (in fact, any attack or virus) is about limiting its ability to spread or to infect other systems and data – sometimes referred to as lateral movement. The four approaches below have been found to be the most useful defences against ransomware if you have been unlucky enough to find it on one of your systems.
In many respects they too are preventive controls, in that they are intended to limit the extent of an attack, but for this family of threats they are often containment countermeasures for “stage two” or “propagation” of an attack.
This comprises two aspects. The first is to minimise the number of people that have access to administrative accounts, and/or the amount of time they have access to them (e.g. for the duration of a change or a maintenance window). This is good practice – the principle of “least privilege”.
Secondly, limit the potential exposure to malware that people with admin accounts might have. This means turning off the most dangerous features and disabling the riskiest accesses that can be performed by those with admin credentials. For example, don’t give admin accounts an email address – if they need to use email, use their standard account. Don’t allow admin accounts to access the Internet, browse the web or access social media.
Admin accounts should only be used when access for maintenance is needed; if that’s the limit of their use, and someone using an admin account does stumble upon something malicious, it can’t penetrate the network using the very high-level access rights of an administrator.
Limit the use of administrator accounts as much as possible to reduce the risk of ransomware spreading across your systems.
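The audit behind this control is simple to automate. A hedged sketch: given an account inventory (the data structure and field names below are hypothetical – in practice this would come from a directory-service export), flag any admin account that holds capabilities an administrator should not need:

```python
# Hypothetical account inventory; in a real environment this would be
# exported from your directory service (e.g. Active Directory).
accounts = [
    {"name": "jsmith",     "is_admin": False, "has_mailbox": True,  "web_access": True},
    {"name": "jsmith-adm", "is_admin": True,  "has_mailbox": False, "web_access": False},
    {"name": "backup-adm", "is_admin": True,  "has_mailbox": True,  "web_access": True},
]

def risky_admin_accounts(accounts):
    """Flag admin accounts that hold a mailbox or web access --
    capabilities an administrative account should not need."""
    return [a["name"] for a in accounts
            if a["is_admin"] and (a["has_mailbox"] or a["web_access"])]
```

Run regularly, a check like this catches the “convenience” admin accounts that accumulate email and browsing rights over time.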
Typically for ransomware the initial vector of attack is a direct network connection or via a malicious attachment, email or web page containing the initial payload.
Once that initial infection has activated and self-installed, ransomware typically seeks to spread across the network from its initial point of entry. It doesn’t spread by sending follow-up emails to all the other people in the organisation; more likely it will try to connect from system to system directly – from one host to the next, unbeknownst to the users. This can occur through several means, but if there is an unpatched operating system vulnerability that the code can identify across multiple hosts, it is relatively easy, and likely to work on every system.
If the first host and system gets infected, ransomware can quickly propagate across the network by exploiting OS vulnerabilities on adjacent interconnected systems on the same network. Maintaining patched operating systems is therefore a very effective defensive control.
Multi-factor authentication (MFA) means that an attacker requires something other than a single stolen password, compromised account or other set of credentials to move the ransomware laterally from system to system or to gain escalated privileges. For normal users MFA can be a challenge with an operational overhead. Some systems may not support MFA at all.
When taking a risk-based approach, however, multi-factor authentication is a very effective way to protect more exposed access points such as remote access/VPN gateways (Colonial Pipeline was compromised using a single factor remote login at one such access point). MFA is invaluable for system administration accounts where the usage pattern is less frequent, but the impact of compromise can be significant.
Using MFA to protect sensitive or exposed access points and to control admin access puts operational barriers in the path of a ransomware attack.
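To make the “something other than a password” concrete: the most common second factor is a time-based one-time password (TOTP, RFC 6238), which an attacker cannot replay because it changes every 30 seconds. A minimal standard-library sketch of what the verifying side computes (illustrative only, not a recommendation of any product):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code depends on a shared secret *and* the current time window, a stolen password alone no longer grants the lateral movement or privilege escalation described above.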
Anti-virus and endpoint protection may seem like the place to start for ransomware attacks; in reality, however, all of these controls are baseline or foundational. Anti-virus and endpoint protection are key but, as with anything else, not a silver bullet – there are numerous accounts of successful attacks involving code, exploits or malware that have occurred despite that protection being operational.
Obviously, endpoint and anti-virus solutions should be current but even then, some malware and ransomware attacks seek to circumvent or disable the detection capabilities of anti-virus solutions; and it’s not unknown for attackers to undertake direct intrusions into the network, rather than seek to use malware code to gain access to a target.
Anti-virus solutions at the gateways and endpoints, however, provide significant protection against the spread of ransomware and other forms of viruses and malware. They must be regularly updated to be fully effective, and there are now emerging technologies that watch for suspicious behaviour on workstations as well as specific cases of known virus code.
Anti-virus solutions and endpoint protection limit the intrusion and spread of malware of all types, and are therefore another pivotal defence against ransomware propagation.
The four controls described in this blog are the major components of the containment controls needed to limit the spread of ransomware.
In the first blog of this series we looked at the ways organisations could defend themselves from the initial stage of attack and then, here, we have canvassed the ways that an attack can be contained. Of course all 10 of these controls act in concert to prevent and limit the spread of ransomware – but businesses need to defend patient “zero” as well as patient “one” onwards.
As we said in the first blog, having controls that you can trust and making them measurable and effective is key. A ransomware attack will highlight at least one of the weaknesses in your cyber security posture, but you need to find them all, preferably ahead of time, so you can avert potentially catastrophic losses.
It’s important to remember that auditing and assessing your security controls are regular and ongoing processes. Every vulnerability, every patch, every new admin account or newly provisioned server could introduce the weak link that allows access to a ransomware attack. Depending on the size and nature of your business operations, annual or even quarterly assessments may not be frequent enough to secure yourself in such a rapidly changing risk environment.
There is so much interest in ransomware at the moment that it almost feels like it’s the only cyber security problem we have to solve. While that certainly isn’t the case, there is undoubtedly a renewed importance in being able to deal with this increasingly debilitating threat.
Much time has been spent, as is often the case in cyber security, looking at how to detect it – mostly by monitoring networks and endpoints for host or session activity and indicators of compromise. Of course, you want to be able to detect a ransomware attack. But wouldn’t it be better to prevent it in the first place?
In a series of posts (this one being the first of three), we will look initially at ways to prevent ransomware attacks in the first place. Then we will move on to how to limit and contain their effects, if you do get infected. Finally, we’ll look at the recovery options if things just don’t go to plan.
In the vast majority of cases, ransomware attacks start in one of two ways: through a direct network connection (often via exposed remote access services) or through a malicious email attachment or web page. If you can cover both these bases, there is a good chance that an early “patient zero” infection can be avoided.
When we analyse these vectors, we can see that had better controls been in place, the attack could well have been evaded completely. The good news is that with little more than a handful of operational security controls these points of ransomware entry can be protected effectively.
From the cases we’ve seen (including here) and other research (such as this) there are six really good anti-ransomware defences to prevent attacks. In many cases these are focussed on stopping the initial malicious payload the attacker is seeking to deliver. You can, of course, add in more controls but these are the ones that are generally recommended to limit your risk of attack:
The settings for user applications, particularly Internet-facing ones such as browsers and email clients, can often be a major point of weakness – and often also the easiest things to set in a central policy (assuming that it is then universally applied).
The most obvious and pertinent examples are the ability for emails and web pages to run active local code – Java, Flash etc. Removing this can sometimes lessen website functionality but, importantly, it prevents attacks that rely on running code on the user’s machine.
In short, limit what external content is able to do on a user’s system when it is accessed from a web page or an email.
Most malware is received as an attachment or a download or at the end of a link, and will seek to self-install and run various bits of code. One way to prevent this is to control users’ abilities to install and execute their own software. This is not dissimilar to the types of policies that are often put in place anyway to prevent the installation of unlicenced software, or random applications that could expose data (for instance cloud storage applications).
If “normal users” cannot install and run other applications, then neither can the malware sender or ransomware creator. The result is that the attack is stopped in its tracks – even if the user is deceived into opening a malicious attachment in the first place.
The value of this control is increased further by its ability to limit the many data theft attacks that rely on installing software, possibly the cloud storage type mentioned above, or other file transfer utilities.
Preventing installation and execution of ransomware is a big enough reason to control applications and software in this way.
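The decision at the heart of application control is a default-deny lookup: only binaries on an approved list may execute. Real deployments enforce this at the OS level (application control policies), but the logic can be sketched in a few lines; the allowlist contents below are hypothetical:

```python
import hashlib

# Hypothetical allowlist of approved executables, keyed by SHA-256 digest.
# A real deployment would enforce this with an OS-level application control
# mechanism rather than a script, but the decision logic is the same.
APPROVED_HASHES = {
    hashlib.sha256(b"trusted-binary-contents").hexdigest(): "payroll.exe",
}

def may_execute(binary_contents):
    """Default-deny: only binaries whose hash is on the allowlist may run."""
    return hashlib.sha256(binary_contents).hexdigest() in APPROVED_HASHES
```

The key design point is the default: anything not explicitly approved, including a freshly dropped ransomware payload, fails the lookup and never runs.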
It is important to make sure OS patches are applied, although in the case of ransomware we have more commonly seen OS-level vulnerabilities used to spread the malware rather than to allow it entry in the first place.
Applications, however, are the more likely point of entry for ransomware attackers. The reason is that when content arrives (email, web page, document, PDF file) it is an application that loads it.
One example is Adobe Reader and PDF files, which have proven time and time again to be a common way in which malware is introduced. So closing this route of attack pays real dividends.
If applications have vulnerabilities that are not patched, there is a real danger that they can be exploited by any malicious file or document to allow ransomware to gain a foothold in your enterprise.
As with active code/embedded malware in web pages and emails, another vector for ransomware infection and ingress is from within document files – Word documents, Excel spreadsheets etc. These applications can contain macro code which can be turned against a user who has unwittingly opened an innocent looking word document or spreadsheet. This can happen easily and so Microsoft applications should be configured to block all but “trusted” macros.
Preventing macros (i.e. code) running within applications is another very good way to limit the risk of ransomware, and other forms of malicious content entering your environment.
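In Microsoft Office this control typically comes down to one policy value. To the best of our understanding, macro behaviour is governed by the `VBAWarnings` setting deployed via Group Policy (under a per-application key such as `HKCU\Software\Policies\Microsoft\Office\16.0\Word\Security` – verify the exact path against Microsoft’s policy templates for your Office version). A small sketch of the values and an acceptance check:

```python
# Meanings of the Office "VBAWarnings" policy value (per Microsoft's
# Group Policy templates; verify against your Office version).
VBA_WARNINGS = {
    1: "enable all macros (dangerous)",
    2: "disable all macros with notification (users can click through)",
    3: "disable all macros except digitally signed macros",
    4: "disable all macros without notification (strictest baseline)",
}

def macro_policy_ok(value):
    """Treat only 'signed macros only' or 'block all' as acceptable,
    matching the 'block all but trusted macros' advice."""
    return value in (3, 4)
```

Note that value 2, the common default posture, still lets a well-socially-engineered user click “Enable Content” – which is exactly the scenario described above.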
Cyber security awareness programmes are acknowledged as an important driver of cultural change and as a result are becoming more common. While they vary in quality, approach and even style of delivery, their ability to raise the level of cyber security knowledge is well-established.
The challenge with staff awareness, however, is that people can still be lured into making mistakes, and skilled social engineers can often entice quite capable people to do things they would not otherwise do. Adversaries can persuade even recently educated staff to believe that a malicious payload is in fact benign. Telling people to avoid clicking on suspicious links or unexpected and suspicious attachments only goes so far. If the attacker can induce the victim to click on a link or attachment, security teams need to rely on other technical controls as part of the defence in depth strategy.
Cyber security awareness programmes matter, but they are not a silver bullet. Refresher programmes are necessary, but they also need to be accompanied by other controls. You need a mitigation strategy in place to address the very real likelihood that someone will click on a link or allow an attachment to open and execute.
Lastly, or firstly depending on your point of view, is the network perimeter. The perimeter is a vital enforcement point as it is where access attempts are often targeted – as in the case of Travelex (out-of-date VPN devices) or Colonial Pipeline (single-factor authenticated access). Perimeter defences can also be equipped and configured to control the types of content users see.
If you have the ability to control access and prevent administrative users accessing the web, or if you can maintain a list of addresses with malicious content/bad reputations and filter the content or URLs that people access, then you can prevent a significant number of ransomware attacks.
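The filtering decision itself is a straightforward domain lookup. A minimal sketch, assuming a hypothetical blocklist (real gateways subscribe to commercial or community threat-intelligence feeds and match on much more than the hostname):

```python
from urllib.parse import urlparse

# Hypothetical reputation list; real gateways use subscribed threat feeds.
BLOCKED_DOMAINS = {"malicious.example", "bad-cdn.example"}

def allow_request(url):
    """Block a URL if its host, or any parent domain of it, is blocklisted."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Check every suffix, so cdn.malicious.example matches malicious.example.
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))
```

Checking parent domains matters: attackers routinely stand up fresh subdomains under a single malicious registration.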
Collectively these controls are highly effective. Of course, you want to detect ransomware, but preventing it in the first place is a better outcome. Putting up these barriers (which often only cost the time it takes to configure them) is a vital line of defence.
As with any risk management strategy, you must plan for the fact that sometimes defences like these will fail. This is the very essence of defence in depth and why, in the second blog in this series, we will look at how to deal with that circumstance when it occurs.
Once you have a set of controls in place, you can monitor these to ensure that they are working and correctly configured to provide an effective defence. This assurance is vital and forms a key part of a cyber security risk management process that will strengthen your oversight of your internal network as well as those of your 3rd party suppliers. Furthermore, cyber insurers are increasingly expecting organisations to have these basic “cyber hygiene” controls in place with evidence of their operation before taking on risks or paying out on policies.
As a starting point, these six preventive controls are simple, effective and widely recommended to assist in the fight against ransomware.
A significant cost of the last 18 months of turmoil for many organisations is revealed in a joint cyber security advisory published this week. Organisations everywhere have been challenged by cyber adversaries and their ongoing exploitation of a number of “reliable go to” security vulnerabilities. The rapid shift to remote working for many of us has challenged the ability of cyber professionals everywhere to maintain their defensive efforts; and those chickens are coming home to roost.
In the joint advisory, Cybersecurity and Infrastructure Security Agency (CISA) and FBI in the US, Australian Cyber Security Centre (ACSC) in Australia and National Cyber Security Centre (NCSC) in the UK have shed some light on how the criminal fraternity is adapting and using many of the core IT systems we have been increasingly reliant on, to further its own goals.
In a list of known information security vulnerabilities, identifiable by their Common Vulnerabilities and Exposures (CVE) references, the advisory lists the top 30 that are longstanding and were routinely exploited by malicious cyber actors in 2020. With some new additions, those same vulnerabilities continue to be widely exploited into 2021.
In a sign of the times, the joint alert (AA21-209A) acknowledged that remote access to systems and data, so prevalent during the COVID-19 pandemic, was:
(a) a common target for attackers, and
(b) more vital than ever to businesses working remotely.
The advisory noted that:
“Four of the most targeted vulnerabilities in 2020 affected remote work, VPNs, or cloud-based technologies.”
Disappointingly, the advisory notes that with increased remote working, many already disclosed vulnerabilities continue to be used by adversaries to compromise unpatched systems.
“Many VPN gateway devices remained unpatched during 2020, with the growth of remote work options challenging the ability of organizations to conduct rigorous patch management.”
So, what did the attack surface popularity contest look like in 2020? The table below lists the CVE references and affected products in the advisory.
Vendor       CVE              Vulnerability type
Citrix       CVE-2019-19781   arbitrary code execution
Pulse        CVE-2019-11510   arbitrary file reading
Fortinet     CVE-2018-13379   path traversal
F5 Big-IP    CVE-2020-5902    remote code execution (RCE)
Microsoft    CVE-2020-0787    elevation of privilege
Netlogon     CVE-2020-1472    elevation of privilege
The list of remediations provides sobering reading, not least because of the number of times the mitigation strategy advises: “deploy and install a patch” or “upgrade to the latest version”.
The importance of mitigating such vulnerabilities promptly is underlined in the discussion of a common VPN vulnerability:
“The CVE-2019-11510 vulnerability in Pulse Connect Secure VPN was also frequently targeted by nation-state APTs. Actors can exploit the vulnerability to steal the unencrypted credentials for all users on a compromised Pulse VPN server and retain unauthorized access after the system is patched unless all compromised credentials are changed.”
So, if an unpatched system is compromised, the attackers can obtain all the usernames and passwords; even if the system is subsequently patched, these credentials still work and the attacker retains access long after the patch is applied. Unless the organisation also changes all user passwords, the system will remain compromised. This is potentially a huge task – brought on purely by a delay in rolling out the patch as soon as it was available.
Clearly, keeping up to date with software vulnerabilities has never been more important. The obvious questions, when faced with the established knowledge that known, published vulnerabilities continue to be exploited, are: Why aren’t these holes being fixed faster? Why are operations teams, IT security teams and IT admins leaving themselves in this position? The implications for the business can be massive; so, who needs to take action within your organisation?
In light of these revelations, are senior managers and directors sufficiently aware of the state of their security defences and the levels of protection they have from attack?
For 2021, the advisory reiterates the 2020 list and adds several additional CVE references.
Cyber actors continued to target vulnerabilities in perimeter-type devices such as Firewalls, VPNs and others. In addition to the 2020 list, organisations should prioritise patching for the following CVEs that are known to have been exploited:
Once again, at the risk of repeating themselves, the alert advises security teams to download and apply the patches, upgrade affected versions and check configurations.
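That cross-referencing exercise – installed versions against a list of known-exploited CVEs – is easy to automate. A hedged sketch, in which the CVE-to-fixed-version data is purely illustrative (take the authoritative versions from the vendors’ own advisories, not from here):

```python
# Illustrative mapping of known-exploited CVEs to an affected product and a
# hypothetical first-fixed version; real data must come from vendor advisories.
KNOWN_EXPLOITED = [
    ("CVE-2019-11510", "pulse-connect-secure", (9, 1, 4)),
    ("CVE-2018-13379", "fortios", (6, 0, 5)),
]

def parse_version(v):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split("."))

def urgent_patches(inventory):
    """Given {product: installed_version}, return the CVEs that demand
    urgent patching because the installed version predates the fix."""
    return [cve for cve, product, fixed in KNOWN_EXPLOITED
            if product in inventory and parse_version(inventory[product]) < fixed]
```

Fed by an accurate asset inventory, this yields exactly the prioritised worklist the advisory keeps asking for.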
There is a clear and recurring message here for both public and private enterprise, and it’s one the security agencies clearly want to emphasise. Organisations are continuing to leave themselves vulnerable to attack; and some exploits are so frequent, and successful, that the authorities have published a “league table”.
“The advisory published today puts the power in every organisation’s hand to fix the most common vulnerabilities, such as unpatched VPN gateway devices,” remarked Paul Chichester from the UK NCSC.
Patching to stay on top of vulnerabilities is hard. No question. Some systems can be managed by central software management, but others can’t. There are always challenges finding time to patch and reboot systems, particularly those that operate 24-hours a day. With so many technologies and so many patches the work may feel never ending but, as this advisory highlights, the cost of not staying on top of your patching controls can seriously impact your operations.
The resultant risks to the business from these sorts of vulnerabilities are becoming so significant and the operational implications so great that senior executives and directors, responsible for the overall management of the business, urgently need better risk information. They need visibility of the state of their security controls and measures of any risk resulting from any vulnerabilities.
With objective measurement of the size of these risks, those responsible for their effective management can quickly get an understanding of the nature of their exposure and so execute effective mitigation strategies. This of course is not the sole responsibility of the senior executive or director, however, as the accountable party, they can insist on clear oversight of their cyber risk environment. From SOC and IT teams up to Executives and Boards, there is an imperative to invest in technologies that provide clear visibility and accurate measurement of where patches are missing, or other unmitigated vulnerabilities exist so they can manage cyber risk just like any other risk faced by their organisation.
Despite many of the disruptions caused by COVID-19 over the last 12 months it remains vital that organisations maintain their cyber security governance. Maintaining security defences and avoiding security vulnerabilities will hopefully prevent the unwelcome attention of auditors and regulators; or worse a successful attack by hackers.
Too many boards still lack visibility or understanding of the problems, while internal audit functions can lack the specialist skills to challenge boards and management to plug urgent gaps.
Geoff Summerhayes, APRA Executive Board Member
One of the challenges in cyber security is how to measure the status of security controls to quantify cyber risk – even controls that should be ubiquitous, baseline and foundational.
This problem has a number of dimensions – for example when looking at maturity it is often necessary to ensure that a technical control (which might be perfectly robust) is governed by a policy and actually generates audit information that enables it to be verifiable.
More common however, is the need to ensure that a technical configuration is (a) correct (i.e. matches policy, intent or compliance requirements) and (b) has been implemented (and to what degree).
The ability to measure this can be difficult in highly distributed environments. This can lead to readings being taken that are based on assumptions, and it is typically these assumptions that are found to be flawed when problems later emerge.
One way in which this can occur is in the configurations or versions of endpoint software on the network. An enterprise-wide Windows rollout, or browser update might have been implemented, but did it reach all the systems it was supposed to cover?
Old laptops that were bought for specific purposes, systems that control physical access and are never directly logged into, contractors using their own specialised equipment or other corporate guests can all have been skipped when changes have been applied. These then are the very vulnerable systems that attackers aim to locate and target, not the several thousand well managed and well patched workstations.
To quantify this problem, there are now several service providers trying to gauge security performance using external sources of information and assumptive benchmarks. This is, in itself, counterintuitive – how can you derive internal network configuration information without looking inside the network itself?
The answer (in the example we are discussing) is that browsers reveal information when they connect to web pages hosted on a server. See https://www.whatismybrowser.com for an example. This blog post is being written on a system running:
That information wouldn’t be any use if you had to trawl every company that the users have ever connected to in order to harvest their browser details. But often when you visit a web page, the adverts that are displayed come from a single set of web advertising companies – and so these sites do have a vast number of end-user browser details across all users and irrespective of the actual web sites the user visited.
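The browser-side half of that dataset is built by parsing User-Agent strings. A minimal sketch of the extraction (the patterns cover only a few browser families and are illustrative; real parsers handle hundreds of cases, and modern browsers are also reducing what the User-Agent reveals):

```python
import re

def browser_from_user_agent(ua):
    """Extract (browser, version) from a User-Agent string.
    Order matters: Edge and Opera UAs also contain a Chrome token."""
    for name, pattern in [
        ("Edge",    r"Edg/([\d.]+)"),
        ("Chrome",  r"Chrome/([\d.]+)"),
        ("Firefox", r"Firefox/([\d.]+)"),
        ("Safari",  r"Version/([\d.]+).*Safari/"),
    ]:
        m = re.search(pattern, ua)
        if m:
            return name, m.group(1)
    return "unknown", ""
```

Note what this does and does not tell you: it identifies the software that made the request, but says nothing about whether the device belongs to the organisation whose IP address it came from – which is precisely the flawed assumption discussed below.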
Secondly, there is information publicly available about which network addresses are owned and used by companies. This can be used to identify the organisation and geographic location. It’s often used in security to convert an IP address to a location but it can also be used to map the information above to an organisation and even a particular office.
On the surface this seems to answer the question – we have one dataset of browsers and OS versions linked to IP addresses and another linking IP addresses to companies and offices. But does this provide a reliable way to externally connect browser/OS versions and patching status to a specific organisation?
The reality is that it does give an answer, but that answer relies on assumptions, and quite often flawed ones. And that means that any decisions about the state of security controls are similarly flawed.
Some organisations allow guests and third parties to connect through their networks. Any Internet access from those visitors’ browsers and workstations then originates from the organisation’s external network address, even though the devices are not the organisation’s own managed systems – and that clouds the organisation’s apparent browser/OS version results.
There are many such scenarios: guest WiFi networks, external users connecting devices to networks for meetings, contractors using their own systems, employees with mobile devices that are permitted to connect to corporate networks, guests in hotels. An organisation can easily appear to be bad at patching and software configuration just because it has hosted a major careers event for hundreds of students.
In addition, users who are part of the organisation might be out of the office, working from home or at customer sites, or based in smaller offices where the IP/network provision comes from the building's telecoms provider. These systems, being away from the corporate network, may be the ones that are not regularly updated or patched, and hence the riskiest. Yet because they connect to the Internet from hotels, client sites, Starbucks branches or restaurants, they are never associated with the enterprise risk scores that a limited external assessment produces.
In essence, we are making decisions based on results from an incomplete and polluted sample.
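A toy calculation shows how badly a polluted sample can skew the result. Suppose, as in the careers-event scenario above, 60 unpatched visitor devices share the corporate egress IP with 40 fully patched corporate machines (all numbers invented for illustration):

```python
# Corporate fleet: fully patched, but only 40 devices seen externally.
corporate = [{"os": "Windows 10", "patched": True}] * 40
# Careers-event visitors: unpatched personal laptops on guest WiFi,
# appearing to come from the same corporate egress IP.
guests = [{"os": "Windows 7", "patched": False}] * 60

# An external observer cannot tell the two groups apart.
external_sample = corporate + guests
score = sum(d["patched"] for d in external_sample) / len(external_sample)
print(f"Apparent patch rate: {score:.0%}")  # 40%, though the fleet is 100%
```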
The solution is to look within the network itself, where the systems we want to assess or measure can be examined directly. In the example we've been using, the devices on the network are easily visible, and it is far easier to discern their role or ownership.
It would be very simple to see a distinction between systems that are on a guest WiFi network (the student conference, the visiting business partners etc.) and the corporate network (where you have employees using IT-issued kit), and consequently include or exclude them in an assessment as appropriate.
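From inside the network, that distinction is a simple address-range check. The subnets and device records below are assumptions for illustration:

```python
import ipaddress

# Assumed internal address plan (illustrative only).
CORPORATE = ipaddress.ip_network("10.0.0.0/16")     # IT-issued kit
GUEST_WIFI = ipaddress.ip_network("172.16.0.0/24")  # visitors, events

devices = [
    {"ip": "10.0.4.12", "os": "Windows 10", "patched": True},
    {"ip": "172.16.0.33", "os": "Windows 7", "patched": False},  # visitor laptop
]

# Include only corporate systems in the assessment scope.
in_scope = [d for d in devices
            if ipaddress.ip_address(d["ip"]) in CORPORATE]
print(in_scope)
```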
If you are trying to validate the operating system versions, patches and browsers in use, and their update schedules, whether as part of an audit activity or a third-party supplier assessment, the ability to collect metrics on security controls from within the network is crucial.
One perceived problem with this approach is the level of intrusion the data gathering involves (this is the rationale for using externally visible information instead). The concern is that if internal systems are scanned, probed, connected to and interrogated directly, the process could place load on the network and cause other problems, perhaps disrupting operations or triggering security controls designed to detect vulnerability probes or scans.
However, it is only by interrogating the central management systems that control security that you get an easy, single point of security control data covering (in this case) patching and version information, as well as backup schedules, malware defences and application usage.
The only exceptions are those systems that fall outside of that umbrella as a result of deliberate exclusion for operational reasons.
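As a sketch of what collecting from a single management point might look like, assume a console that can export its device inventory as JSON; the format and field names here are invented for illustration:

```python
import json

def summarise_patch_status(inventory_json: str) -> dict:
    """Summarise patch compliance from a management-console export,
    avoiding any direct scanning of the endpoints themselves."""
    devices = json.loads(inventory_json)
    total = len(devices)
    patched = sum(1 for d in devices if d["patch_level"] == "current")
    return {
        "total": total,
        "patched": patched,
        "compliance": patched / total if total else 0.0,
    }

# Invented example export, standing in for a real console's output.
export = json.dumps([
    {"host": "ws-001", "patch_level": "current"},
    {"host": "ws-002", "patch_level": "behind"},
    {"host": "ws-003", "patch_level": "current"},
])
print(summarise_patch_status(export))
```

One query against the management system yields a fleet-wide compliance figure, with no per-device probing and no load beyond a single API call.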
To measure cyber risk on a continuous (rather than one-off) basis, you need accurate information that is complete and trustworthy, and you need to collect it from single points of reference rather than interrogating every individual device with a noisy scanning solution. You need to work across network boundaries, enabling complex business units to police themselves and large organisations to monitor their external third-party supply chains, and you must focus on the issues that provide the highest value in security risk terms (like patching and software/OS versions).
Relying on external repositories derived from datasets built on assumptions might seem easy, but it is a sure way to get flawed data. It is not quite guesswork, but it risks providing a view, upon which decisions about risk are made, that is not valid and hence unsafe.
If the choice is a questionable external view or an internal control assessment, then data gathered from within the network itself will always be closer to the truth and the better source of information to use.
By objectively measuring cyber risk from the 'inside-out', operations and management teams can reliably verify their security posture and even manage it as part of an enterprise-wide improvement program. If you are interested in taking control of your security posture, there is more information here.
A recently discovered vulnerability in Microsoft's Netlogon authentication protocol (CVE-2020-1472) allows attackers to establish a vulnerable Netlogon secure channel connection to a domain controller. If an attacker successfully exploits this vulnerability, they can run specially crafted applications on the device and assume full administrative privileges.
This blog looks at how the MITRE ATT&CK matrix can be used to complement the work of your incident response team in the Security Operations Centre (SOC). It explores how it can help incident responders structure and streamline their investigations. You can read earlier MITRE ATT&CK posts here, here and here.
The perfect cyber security storm that COVID-19 created has ushered in new cyber security operating models for many businesses. Many organisations are now switching focus from network security risk to endpoint security as a result of the move to working from home.