My name is Ben and I’ve been working for Digital Shadows for nine months in the analysis team. In this article I’m going to talk a little about source evaluation and explain the processes we use to ensure that our threat intelligence is useful and, more crucially, reliable.
The failure to properly evaluate sources can lead to significant intelligence failures, and indeed has done so in the past. Consider the case of Rafid al-Janabi, better known as “Curveball”, who defected from Iraq claiming he had worked as a chemical engineer as part of Iraq’s weapons of mass destruction programme. Curveball’s story, which in 2011 was revealed to have been fabricated, was used as a justification for the Iraq War.
To organisations, threat intelligence is about understanding the threat landscape – the various actors and campaigns conducting cyber attacks – so that when an organisation is specifically targeted the attack can be detected, mitigations put in place, and the risk to the business reduced. Robust source evaluation minimises the chance of crying wolf, or of warning about the wrong threat entirely.
We’ve developed our own collection tool which searches a range of open sources, including millions of social media sites. Such sources, given their often-pseudonymous nature, are littered with pieces of information that cannot be independently verified. Nowhere is this more applicable than among many of the hacktivist and criminal threats we track. On a daily basis, we witness dozens upon dozens of claims of successful data leaks, DDoS attacks, and defacements, and as analysts it’s our job to determine which are credible and which are not.
On the face of it, it could seem very easy to assess the veracity of some claims – for instance, if an actor has claimed to have extracted data from a given website, and has seemingly posted the data on Pastebin or another of the many paste sites available, then surely the claim is genuine? Unfortunately, this isn’t always the case. We occasionally see groups reusing data leaks they have found on other paste sites in an attempt to pass them off as their own. As such, it is important to check that supposedly leaked data has not been leaked previously by a different group. In the case of credential compromises, we can check that the structure of the associated email addresses is consistent with the companies and organisations they purportedly come from, and that the passwords comply with the password policy of the targeted website.
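The consistency checks above can be sketched in code. This is a minimal illustration, not our actual tooling: the domain, the policy parameters, and the sample credentials are all hypothetical assumptions for the example.

```python
import re

# Hypothetical parameters for the targeted site - illustrative only.
EXPECTED_DOMAIN = "example.com"  # the breached organisation's mail domain
MIN_LENGTH = 8                   # assumed minimum password length
REQUIRES_DIGIT = True            # assumed policy: at least one digit

def plausible_credential(email: str, password: str) -> bool:
    """Return True if a leaked credential pair is at least consistent
    with the claimed organisation and its stated password policy."""
    # The email address must be well formed and use the organisation's domain
    if not re.fullmatch(r"[^@\s]+@" + re.escape(EXPECTED_DOMAIN), email):
        return False
    # The password must satisfy the site's password policy
    if len(password) < MIN_LENGTH:
        return False
    if REQUIRES_DIGIT and not any(c.isdigit() for c in password):
        return False
    return True

# Illustrative dump entries
print(plausible_credential("alice@example.com", "Winter2024"))  # True
print(plausible_credential("bob@gmail.com", "hunter2"))         # False
```

A passing check proves nothing on its own – genuine-looking credentials can be fabricated – but a failing one is a cheap early signal that a claimed leak does not match its supposed source.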
Another way we evaluate claims is to check for other similar posts on social media. This is perhaps best illustrated by DoS attacks, where a simple “tango down” Tweet will often be blindly retweeted and reposted by several groups who have not been so diligent in evaluating the claim itself, adding the illusion of credibility where there is none. A famous example of this phenomenon, known as circular reporting, saw Chinese news site People’s Daily publish a news article claiming that North Korea’s Kim Jong Un had been voted “The Sexiest Man Alive” in 2012 – their source? The satirical news website, The Onion.
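One simple way to guard against counting reposts as corroboration is to fingerprint each claim and collapse verbatim copies to a single source. The sketch below is a toy assumption of how this might work – the group names and posts are invented:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalise a post (case and whitespace) and hash it, so that
    verbatim reposts collapse to the same fingerprint."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

# Hypothetical stream of claims about the same alleged attack
posts = [
    ("GroupA", "TANGO DOWN  example.com #opFoo"),
    ("GroupB", "tango down example.com #opfoo"),   # a verbatim repost
    ("GroupC", "example.com defaced, proof inside"),
]

seen = {}
for group, text in posts:
    fp = fingerprint(text)
    if fp in seen:
        print(f"{group} repeats {seen[fp]} - not independent corroboration")
    else:
        seen[fp] = group
```

Real reposts are rarely verbatim, so in practice fuzzier matching would be needed, but the principle is the same: ten copies of one claim are still one claim.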
Of course, our previous assessments help to drive our evaluation of new incidents. If a group often posts claims that they have rendered websites unavailable through DoS attacks without providing proof of downtime, then it could suggest that they are prone to making false claims. That said, when taking this approach, we have to take care not to fall into the trap of the availability heuristic, whereby we rely solely on information that is readily available to us – more on heuristics and biases to come.
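A group's track record can be expressed as something as simple as the fraction of its past claims that were independently verified. The history below is invented for illustration, and – per the caution about the availability heuristic – such a score should inform, not replace, evaluation of each new claim on its own evidence:

```python
# Hypothetical claim histories: True = independently verified,
# False = never substantiated. Illustrative data only.
history = {
    "GroupA": [True, True, False, True],
    "GroupB": [False, False, False, True],
}

def substantiation_rate(group: str) -> float:
    """Fraction of a group's past claims that were independently
    verified. Returns 0.0 for groups with no track record, which
    should be read as 'unknown', not 'untrustworthy'."""
    claims = history.get(group, [])
    if not claims:
        return 0.0
    return sum(claims) / len(claims)

print(substantiation_rate("GroupA"))  # 0.75
print(substantiation_rate("GroupB"))  # 0.25
```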
It is all very well to know how to evaluate a source in theory, but the important part is to ensure that these techniques are being used in practice. Within the analysis team we peer review every incident we raise, and regularly take part in A and B Teaming, whereby two analysts investigate the same incident and compare their conclusions. By being constantly conscious of our source evaluation we minimise the risk of being led astray by exaggerated and false claims by hacktivists and other actors, which could result in intelligence failure.
Whilst it is easy to overlook, the reporting of events in cyberspace should be thought of in a similar way to the reporting of real-world events. There are few situations where several Tweets or Facebook posts would be classed as sufficient evidence that something truly happened – we only need to look at the ever-growing, and somewhat morbid, list of unfortunate celebrities who have fallen victim to death hoaxes to see how prone to fabrication social media truly is. As such, we have to be very careful when basing our threat intelligence on evidence gathered from social media.
It is always best to err on the side of caution when it comes to producing threat intelligence from social media sources. After all, it is the internet, where men are men, women are men, and Kim Jong Un is 2012’s sexiest man alive.