Despite the many disruptions caused by COVID-19 over the last 12 months, it remains vital that organisations maintain their cyber security governance. Maintaining security defences and avoiding security vulnerabilities will hopefully prevent the unwelcome attention of auditors and regulators, or worse, a successful attack by hackers.
Too many boards still lack visibility or understanding of the problems, while internal audit functions can lack the specialist skills to challenge boards and management to plug urgent gaps.
Geoff Summerhayes, APRA Executive Board Member
One of the challenges in cyber security is how to measure the status of security controls to quantify cyber risk – even controls that should be ubiquitous, baseline and foundational.
This problem has several dimensions. For example, when assessing maturity it is often necessary to ensure that a technical control (which might be perfectly robust in itself) is governed by a policy and generates the audit information needed to verify it.
More common, however, is the need to ensure that a technical configuration (a) is correct (i.e. matches policy, intent or compliance requirements) and (b) has actually been implemented, and to what degree.
Measuring this can be difficult in highly distributed environments, which leads to readings based on assumptions; and it is typically these assumptions that are found to be flawed when problems later emerge.
One way in which this can occur is in the configurations or versions of endpoint software on the network. An enterprise-wide Windows rollout, or browser update might have been implemented, but did it reach all the systems it was supposed to cover?
Old laptops bought for specific purposes, systems that control physical access and are never directly logged into, contractors using their own specialised equipment, and other corporate guests can all be skipped when changes are applied. These are the very vulnerable systems that attackers aim to locate and target, not the several thousand well managed and well patched workstations.
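As a simple illustration of the coverage problem, a rollout gap is just the difference between the intended target list and what the deployment actually reported back. A minimal sketch (the hostnames and lists are hypothetical; in practice they would come from an asset register and the deployment tool's reporting):

```python
# Hypothetical data: an asset register vs. the deployment tool's report.
# The stragglers below are exactly the unmanaged systems an attacker
# would look for.
intended = {"ws-001", "ws-002", "ws-003", "door-ctrl-1", "lab-laptop-7"}
reported_updated = {"ws-001", "ws-002", "ws-003"}

# Set difference: devices the rollout was meant to cover but never reached.
missed = sorted(intended - reported_updated)
print(missed)
```

The point is not the arithmetic but where the inputs come from: without a trustworthy internal inventory, `intended` is itself a guess.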
To quantify this problem, several service providers now try to gauge security performance using external sources of information and assumptive benchmarks. However, this is in itself counterintuitive: how can you derive internal network configuration information without looking inside the network itself?
The answer (in the example we are discussing) is that browsers reveal information when they connect to web pages hosted on a server; see https://www.whatismybrowser.com for an example. This blog post is being written on a system running:

[screenshot of the system's reported browser and OS details]
That information wouldn’t be any use if you had to trawl every company the users have ever connected to in order to harvest their browser details. But often, when you visit a web page, the adverts displayed come from a common set of web advertising companies, and so those companies hold a vast number of end-user browser details across all users, irrespective of the actual web sites visited.
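The information a browser reveals arrives in the User-Agent header on every page load. A minimal sketch of extracting browser and OS hints from one, as an ad network's server could (the regular expressions are illustrative, not exhaustive):

```python
import re

def parse_user_agent(ua: str) -> dict:
    """Pull rough browser and OS hints out of a User-Agent string."""
    # Order matters: Chrome UAs also contain "Safari", so Chrome is
    # matched first as it appears earlier in the string.
    browser = re.search(r"(Firefox|Edg|Chrome|Safari)/([\d.]+)", ua)
    os_hint = re.search(r"\((Windows NT [\d.]+|Macintosh|X11; Linux)", ua)
    return {
        "browser": browser.group(1) if browser else "unknown",
        "version": browser.group(2) if browser else "unknown",
        "os": os_hint.group(1) if os_hint else "unknown",
    }

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36")
print(parse_user_agent(ua))
```

Multiply this by billions of ad impressions and you have the dataset these external assessment services rely on.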
Secondly, there is information publicly available about which network addresses are owned and used by companies. This can be used to identify the organisation and geographic location. It’s often used in security to convert an IP address to a location but it can also be used to map the information above to an organisation and even a particular office.
On the surface this seems to answer the question – we have one dataset of browsers and OS versions linked to IP addresses and another linking IP addresses to companies and offices. But does this provide a reliable way to externally connect browser/OS versions and patching status to a specific organisation?
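The join itself is trivial, which is part of the appeal. A sketch with invented data (the sightings, addresses and organisation names are all hypothetical) showing how every browser seen behind an egress address gets attributed to whoever owns that address:

```python
# Hypothetical ad-network sightings: one record per page load.
sightings = [
    {"ip": "203.0.113.10", "browser": "Chrome 88"},
    {"ip": "203.0.113.10", "browser": "Chrome 49"},  # a visitor's laptop?
    {"ip": "198.51.100.7", "browser": "Firefox 85"},
]

# Hypothetical public IP allocation data mapping addresses to owners.
ip_to_org = {
    "203.0.113.10": "Example Corp, Sydney office",
    "198.51.100.7": "Another Pty Ltd",
}

# The external assessment: attribute every sighting to the IP's owner.
by_org = {}
for s in sightings:
    org = ip_to_org.get(s["ip"], "unknown")
    by_org.setdefault(org, []).append(s["browser"])

print(by_org)
```

Note the hidden assumption: Example Corp now appears to run an ancient Chrome 49, even though that device may belong to a guest who merely borrowed its Internet connection.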
The reality is that it does give an answer, but that answer relies on assumptions, and quite often flawed ones. And that means that any decisions about the state of security controls are similarly flawed.
Some organisations allow visitors to connect to their network in a permitted way. Any Internet access from those browsers and workstations then originates from the organisation’s external network address, even though the devices are not the organisation’s own managed systems, and this clouds the browser/OS version results recorded by the advertising databases.
There are many such scenarios: guest WiFi networks, external users connecting devices for meetings, contractors using their own systems, employees with mobile devices permitted on corporate networks, guests in hotels. An organisation can easily appear to be bad at patching and software configuration simply because it has hosted a major careers event for hundreds of students.
In addition, users who are part of the organisation might be out of the office, working from home, at customer sites, or in smaller offices where the IP/network provision comes from the building’s telecoms provider. These systems, being away from the corporate network, might be the ones that are not regularly updated or patched, and hence the riskiest; but because they connect to the Internet from hotels, client sites, Starbucks branches or restaurants, they are never associated with the enterprise risk scores that a limited external assessment produces.
In essence, we are making decisions based on results from an incomplete and polluted sample.
The solution is to look within the network itself, where the systems we want to assess or measure can be directly examined. In the example we’ve been using, the devices on the network are easily visible, and it is possible to discern their role or ownership much more easily.
It would be very simple to see a distinction between systems that are on a guest WiFi network (the student conference, the visiting business partners etc.) and the corporate network (where you have employees using IT-issued kit), and consequently include or exclude them in an assessment as appropriate.
If you are trying to validate operating system versions, patches, the browsers in use, or their update schedules, whether as part of an audit activity or to assess a third-party supplier, the ability to collect metrics on security controls from within the network is crucial.
One perceived problem with this approach is the level of intrusion that the data gathering involves (this is the rationale for using externally visible information). The concern is that if internal systems are being scanned, probed, connected to and interrogated directly, this could put a load on the network and could cause other problems, maybe disrupting activities or triggering security controls that aim to detect vulnerability probes or scans.
However, it is only by interrogating the central management systems that control security that you get an easy, single point of security control data on (in this case) patching and version information, or backup schedules, malware defences and application usage.
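A hedged sketch of that approach: rather than scanning every endpoint, summarise patch status from a single export produced by a central management console. The record format, hostnames and dates here are assumptions for illustration, not the schema of any particular product:

```python
from collections import Counter
from datetime import date

# Hypothetical inventory export: one row per managed device.
inventory = [
    {"host": "ws-001", "os_build": "19042.804", "last_patched": date(2021, 2, 10)},
    {"host": "ws-002", "os_build": "19042.804", "last_patched": date(2021, 2, 11)},
    {"host": "door-ctrl-1", "os_build": "17763.107", "last_patched": date(2019, 6, 3)},
]

def patch_summary(rows, stale_after=date(2021, 1, 1)):
    """Count OS builds and flag hosts not patched since a cutoff date."""
    builds = Counter(r["os_build"] for r in rows)
    stale = [r["host"] for r in rows if r["last_patched"] < stale_after]
    return {"builds": dict(builds), "stale_hosts": stale}

print(patch_summary(inventory))
```

One query against one authoritative source, no network-wide probing, and the never-logged-into door controller surfaces immediately.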
The only exceptions are those systems that fall outside of that umbrella as a result of deliberate exclusion for operational reasons.
To measure cyber risk on a continuous (rather than one-off) basis, you need accurate information that is complete and trustworthy, collected from single points of reference rather than by interrogating every individual device with a noisy scanning solution. You need to work across network boundaries, enabling complex business units to police themselves and large organisations to monitor their external third-party supply chains, and you must focus on the issues that provide the highest value in security risk terms (such as patching and software/OS versions).
Relying on external repositories derived from datasets built on assumptions might seem easy, but it is a sure way to get flawed data. It’s not quite guesswork, but it risks providing a view, upon which decisions about risk are made, that is not valid and hence unsafe.
If the choice is a questionable external view or an internal control assessment, then data gathered from within the network itself will always be closer to the truth and the better source of information to use.
By objectively measuring cyber risk from the ‘inside-out’, operations and management teams can reliably verify their security posture and even manage it as part of an enterprise-wide improvement programme. If you are interested in taking control of your security posture, there is more information here.
A recently discovered vulnerability in Microsoft’s Netlogon authentication protocol (CVE-2020-1472) allows attackers to establish a vulnerable Netlogon secure channel connection to a domain controller. If an attacker successfully exploits this vulnerability, they can run specially crafted applications on the device and assume full administrative privileges.
This blog looks at how the MITRE ATT&CK matrix can be used to complement the work of your incident response team in the Security Operations Centre (SOC). It explores how it can help incident responders structure and streamline their investigations. You can read earlier MITRE ATT&CK posts here, here and here.
The perfect cyber security storm that COVID-19 created has ushered in new cyber security operating models for many businesses. Many organisations are now switching focus from network security risk to endpoint security as a result of the move to working from home.
Cyber security teams use threat modelling to represent the sets of adversary tactics and techniques that may be used to compromise their computer systems. These threat models contain representations of the ICT systems, networks and applications, combined with the techniques used to exploit each component, from initial access through to exfiltration (or an alternative malicious goal, such as denial of service). This blog looks at how Security Operations Centre (SOC) teams use threat models to create use cases, and how modelling specific sectors using the MITRE ATT&CK framework helps categorise threats and map controls, giving the SOC the insight needed to better defend the business.
The MITRE ATT&CK Framework of tactics and techniques used by attackers to probe and compromise systems is attracting a lot of attention. We’ve covered it in several blog posts here, here and here. But is it just a framework for enterprises to manage their own low-level “root and branch” technical security? Or can it be used by MSSPs who might not be involved in that more user- and workstation-centric end of cyber security monitoring?
There’s a lot of discussion about Australian cyber security right now: AustCyber has just released the Australian Digital Trust Report 2020, and the Australian Cyber Security Industry Advisory Panel will shortly hand down its recommendations to Government. This will be followed, very shortly, by the release of the much-anticipated Australian Cyber Security Strategy 2020. For the vast majority of Australian companies, the question is simply: what does this mean for me, and how can I measure and improve my cyber resilience?
The MITRE ATT&CK framework is a resource that security operations centre (SOC) teams can use to refine their detection rules against known attack profiles. Using ATT&CK allows them to build specific targeted defences against advanced persistent threats (APTs) that are tuned to their organisation’s context, while covering a broad range of different tactics, techniques and procedures (TTPs) used by sophisticated adversaries.