We were fortunate to attend the 2019 SANS DFIR Cyber Threat Intelligence Summit, which brings together some of the best and brightest in the Cyber Threat Intelligence (CTI) industry for a week of in-depth talks and training. The Summit began with two days of talks, with speakers selected by the CTI Summit advisory board, co-chaired by Rick Holland (@rickhholland) and Dragos’ Robert M. Lee (@RobertMLee).

Now that the dust has settled, we would like to present our summary of some of the topics discussed. This blog will describe some of the common CTI mistakes and how the industry is adapting to address them.


The perception of CTI

Not understanding the consumer of your intelligence products is one of the biggest mistakes you can make in any CTI program. This is a common issue for technical teams, who fail to know and understand their audience and therefore do not produce output that addresses the consumer’s interests. This process should be a feedback loop between CTI teams and their audience, ensuring CTI output is informed by business objectives and policy makers.

The public and wider perception of CTI comes largely from content that has been created purely as a marketing tool. This encourages a hit-and-run approach that extracts as much exposure from a topic as possible, with little incentive to provide actionable intelligence to consumers.

Many of the talks at the summit presented ways to improve CTI programs and demonstrate additional value to consumers.


“Measures of Effectiveness are more compelling to your boss’ boss”

Developing metrics to measure effectiveness can also help to demonstrate value. ThreatConnect’s Toni Gidwani (@t_gidwani) and Marika Chauvin (@MarSChauvin) took to the stage to talk about how CTI programs can create metrics for both performance and effectiveness that can then be shown to senior leaders. Not all metrics are relevant for every audience, and this talk discussed options for CTI programs at different levels of maturity to best highlight value depending on the specific audience and the organization’s needs and resources.
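To make the distinction concrete, here is a minimal and entirely hypothetical Python sketch contrasting a simple performance metric (how much was produced) with an effectiveness metric (how much of it drove action). The `IntelReport` fields and the example reports are invented for illustration and are not taken from the talk.

```python
from dataclasses import dataclass

@dataclass
class IntelReport:
    """A single finished-intelligence product delivered to a consumer (hypothetical)."""
    topic: str
    acted_on: bool          # did the consumer take a concrete action because of it?
    led_to_detection: bool  # did it contribute to detecting or blocking an intrusion?

def performance_metric(reports: list[IntelReport]) -> int:
    """Performance: raw output volume (easy to count, says little about impact)."""
    return len(reports)

def effectiveness_metric(reports: list[IntelReport]) -> float:
    """Effectiveness: share of reports that drove an action or a detection."""
    if not reports:
        return 0.0
    useful = sum(1 for r in reports if r.acted_on or r.led_to_detection)
    return useful / len(reports)

reports = [
    IntelReport("Phishing kit targeting finance team", acted_on=True, led_to_detection=True),
    IntelReport("Generic ransomware trend overview", acted_on=False, led_to_detection=False),
    IntelReport("New C2 infrastructure for tracked actor", acted_on=True, led_to_detection=False),
]
print(f"Reports produced: {performance_metric(reports)}")
print(f"Effectiveness:    {effectiveness_metric(reports):.0%}")
```

The point is not these particular fields, but that effectiveness ties output to outcomes the consumer cares about, which is what resonates with senior leaders.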


Models and frameworks – structuring your analysis

Both understanding your audience and demonstrating value through metrics can bolster existing CTI programs. Another way to improve your program is to use established models and frameworks so that teams can perform more rigorous and structured analysis.

The SANS presenters touched on numerous existing models and frameworks: VERIS, the Diamond Model, and the Lockheed Martin Cyber Kill Chain were among the favourites, with MITRE ATT&CK™ garnering significant attention from presenters and audience alike.

In his keynote talk, Jake Williams (@MalwareJake) described the importance of models and how they are a valuable tool for combating many of the issues described above. The talk revisited Kent’s Analytic Doctrine and how the lessons learnt from traditional intelligence analysis remain relevant to CTI.

You may ask why models matter in the first place. As Williams made clear, models provide:

  • Repeatability: Will the analyst, given identical data, reach the same conclusion again?
  • Consistency: Will multiple analysts, given the same data, reach the same conclusion?
  • Metrics: How well does the analysis adhere to the model?
  • Rigor/Credibility: How can you be sure the CTI team are not just making things up?

Models are not necessarily a solution to all our problems: by their very nature they are designed to capture complex and intricate subjects and distil them into a simplified representation. This simplification allows us to predict and explain, making models a very useful tool for structuring analysis, but it also means they have limitations. As George Box said, “all models are wrong, but some are useful”.

Not all indicators are created equal

One model that concisely describes the value of different types of indicators is David Bianco’s (@DavidJBianco) Pyramid of Pain. The Pyramid of Pain has been around since early 2013 but is still just as relevant today. Short-lived technical indicators (hashes, IPs and domains) are easy to utilise but have a lesser impact on adversary operations. The pinnacle of the Pyramid of Pain remains actor Tactics, Techniques & Procedures (TTPs); effective detection and remediation of these forces actors to modify their behaviour, which requires maximum effort from adversaries.

Figure 1: Pyramid of Pain model [Source: detect-respond.blogspot.com]
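To show the model in code, here is a minimal Python sketch that encodes the pyramid’s tiers and their relative “pain”. The numeric scores are arbitrary and only express ordering; they are not something Bianco prescribes.

```python
# Pyramid of Pain tiers, ordered from least to most painful for the adversary
# when defenders detect and deny them. Scores are illustrative only.
PYRAMID_OF_PAIN = [
    ("Hash values", 1),
    ("IP addresses", 2),
    ("Domain names", 3),
    ("Network/host artifacts", 4),
    ("Tools", 5),
    ("TTPs", 6),
]

def pain_score(indicator_type: str) -> int:
    """Return the relative pain a given indicator type inflicts on an adversary."""
    for name, score in PYRAMID_OF_PAIN:
        if name.lower() == indicator_type.lower():
            return score
    raise ValueError(f"Unknown indicator type: {indicator_type}")

print(pain_score("Domain names"))  # 3: trivially cheap for an adversary to replace
print(pain_score("TTPs"))          # 6: changing behaviour is costly
```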

Over-reliance on highly ephemeral technical indicators is a problem. Implementing controls that tackle the underlying issue is significantly more sustainable, transforming security programs from a reactive game of whack-a-mole into a more proactive process focused on remediating root causes. A big part of this is shifting threat intelligence sharing away from technical indicators towards something that can clearly describe actor TTPs; this is where ATT&CK comes into play.

David Bianco’s talk mapped the Target Corporation’s indicators of compromise (IOCs) to his very own Pyramid of Pain, as well as to the Cyber Kill Chain and ATT&CK. This allowed him to create a visualization of coverage, identify areas of strength and areas for improvement, and analyze IOC age and volatility (his slides are available on the SANS website).
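As a rough illustration of that mapping exercise (not Bianco’s actual data or tooling), the sketch below tallies a handful of invented IOCs by pyramid tier to produce a simple coverage view:

```python
from collections import Counter

# Pyramid of Pain tiers, least to most painful for the adversary.
TIERS = ["Hash values", "IP addresses", "Domain names",
         "Network/host artifacts", "Tools", "TTPs"]

# Hypothetical IOC entries tagged with a tier; the talk's real data is not reproduced here.
iocs = [
    {"value": "d41d8cd98f00b204e9800998ecf8427e", "tier": "Hash values"},
    {"value": "203.0.113.7", "tier": "IP addresses"},
    {"value": "update-check.example.com", "tier": "Domain names"},
    {"value": "Point-of-sale memory scraper", "tier": "Tools"},
    {"value": "Exfiltration over FTP to a staging server", "tier": "TTPs"},
]

coverage = Counter(ioc["tier"] for ioc in iocs)
for tier in TIERS:
    count = coverage.get(tier, 0)
    print(f"{tier:<25} {count:>2} {'#' * count}")
```

A skew towards the bottom tiers in a view like this is a quick signal that an intelligence feed is heavy on ephemeral indicators and light on durable, behaviour-level coverage.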

ATT&CK – scaling the Pyramid of Pain

ATT&CK garnered significant attention from speakers at the summit, and rightfully so. It gives blue teams across organizations a common language, particularly for describing actor behaviour. This lets CTI teams shift the focus away from sharing low-quality technical indicators (such as hashes, IPs, and domains) and towards a consistent method for defensive teams to describe actor TTPs. ATT&CK also allows defensive teams and organizations to share mitigations and remediations (check out some of our previous work on ATT&CK), which in turn forces adversaries to change their method of operation rather than simply recompiling tools or using alternate infrastructure. Forcing adversaries to learn new behaviours increases attacker costs significantly.
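As a small, hypothetical example of what sharing in ATT&CK terms can look like, the sketch below tags a detection with a technique ID instead of raw indicators. The rule name, data sources, and mitigations are invented for illustration, and technique IDs should always be checked against the current ATT&CK matrix.

```python
# A minimal sketch of sharing a detection in ATT&CK terms rather than as raw indicators.
detection = {
    "rule_name": "Suspicious LSASS memory access",   # hypothetical rule name
    "attack_technique": "T1003",                      # Credential Dumping
    "attack_tactic": "Credential Access",
    "data_sources": ["process access events"],
    "mitigations": ["Enable LSA protection", "Restrict administrative credentials"],
}

# Because the technique ID is stable across organisations, another blue team can
# map this straight onto their own coverage without caring which specific tool,
# hash, or IP address the original reporter happened to observe.
print(f"{detection['attack_technique']}: {detection['rule_name']}")
```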

Brian Beyer, CEO of Red Canary, and ATT&CK’s own Katie Nickels (@likethecoins) combined the MITRE and Red Canary datasets to identify the most commonly seen adversary techniques and offer practical remediation advice for each. This is a clear demonstration of how ATT&CK enables intelligence sharing across a variety of platforms and organisations, and it gives defenders a solid foundation for hardening their environments against some of the most commonly observed attacker behaviour.
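As a toy illustration of that kind of dataset combination (the real MITRE and Red Canary figures are not reproduced here), the sketch below merges two hypothetical technique-frequency counts and ranks the most common techniques:

```python
from collections import Counter

# Hypothetical counts of how often each ATT&CK technique was observed
# in two independent datasets.
dataset_a = Counter({"T1059": 120, "T1003": 45, "T1027": 80})   # e.g. vendor telemetry
dataset_b = Counter({"T1059": 95, "T1027": 60, "T1105": 30})    # e.g. incident reports

combined = dataset_a + dataset_b            # Counter addition merges the counts
for technique, count in combined.most_common(3):
    print(f"{technique}: seen {count} times")
```

Because both datasets speak in technique IDs rather than hashes or IPs, combining them is a one-line operation, and the resulting ranking is immediately actionable for prioritising detections.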

Ultimately, CTI is a product for the Blue Team, and by shifting focus from short-lived technical indicators to TTPs, organisations can implement meaningful and proactive security programmes. The continued adoption of frameworks like ATT&CK into offensive and defensive processes will be an important tool for cybersecurity professionals to better address gaps in defences and operational capabilities.

These are some of the most valuable takeaways that stood out to us at this year’s summit. We in the Digital Shadows Security Engineering team will of course aim to continue adhering to and building on the concepts put forward by our fellow practitioners. For those who could not make it, the complete list of talks and slides is available on the SANS website.

One final note: be on the lookout for the recorded presentations on the SANS Digital Forensics and Incident Response YouTube channel. The talks are typically made available about three months after the event.


To stay up to date with the latest digital risk and threat intelligence news, subscribe to our threat intelligence emails here.