Artificial Intelligence (AI) is currently going through one of its regular hype bubbles. Another dawn of the super-intelligent machine is upon us. When they aren’t piloting driverless cars, they’re beating us at Jeopardy and now Go. Commentators are predicting that computers will take on more and more jobs traditionally done by humans. And not just clerical jobs, but those that require a high level of expertise.
Perhaps intelligence analysts should be thinking about finding a new career before it’s too late? Who needs them when there’s a super-smart computer around?
Well, don’t start editing your resume just yet. Hype bubbles like the ones surrounding AI always end in disappointment when over-inflated promises simply aren’t delivered. The hype subsides and progress returns to a more orderly ascent.
But please don’t think I’m an AI sceptic. Away from the hype, some genuinely useful spin-offs have emerged from AI research over the past few decades. One that is particularly relevant for 21st-century threat intelligence came out of research carried out back in the mid-1980s, when AI was in the news again and R&D funds had been topped up.
Behind the scenes, researchers spent a lot of time and effort lifting the lid on the nature of human expertise, and the result was an exhaustive set of detailed, generic problem-solving models. One of these, called “assessment”, has turned out to be something of a best-kept secret: a valuable blueprint for designing threat detection and intelligence systems.
The assessment model has given rise to the idea of broad hay removal, rather than focussed needle identification, when commencing the search for that elusive needle in a haystack. It also tells us the roles that machines and humans can play, as actors in a sociotechnical system, when searching for that needle. Smart intelligence operators know that it’s the hybrid approach that works best: computers and humans each performing the activities to which they are best suited.
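To make the hay-removal idea concrete, here is a minimal sketch in Python. The domains, fields and thresholds are hypothetical illustrations, not real detection rules: the point is that the machine discards everything it can confidently classify as hay, leaving a much smaller pile for the analyst.

```python
# Hypothetical allow-list of domains we treat as known-benign "hay".
KNOWN_BENIGN_DOMAINS = {"update.example-vendor.com", "cdn.example.org"}

def is_hay(event: dict) -> bool:
    """Return True for events we can confidently discard as benign."""
    if event.get("domain") in KNOWN_BENIGN_DOMAINS:
        return True
    if event.get("bytes_out", 0) < 1024:  # tiny transfers fall below our (illustrative) threshold
        return True
    return False

def remove_hay(events: list[dict]) -> list[dict]:
    """Broad hay removal: keep only the events that survive the benign filters."""
    return [e for e in events if not is_hay(e)]

events = [
    {"domain": "update.example-vendor.com", "bytes_out": 50_000},
    {"domain": "unknown-host.net", "bytes_out": 900},
    {"domain": "unknown-host.net", "bytes_out": 750_000},
]
suspects = remove_hay(events)
# Only the large transfer to the unknown host survives for human review.
```

Note that nothing here tries to recognise the needle itself; the filters only encode what is safely ignorable, which is usually far easier to state than what an attack looks like.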
AI’s assessment model tells us that analytical strategies can be data-driven (forward chaining) or hypothesis-driven (backward chaining), and that a cross-coupled combination of the two can be even more powerful. The key observations you make on the basis of raw data give you a clue as to which behavioural model may apply, and the selection of a particular behavioural model (hypothesis) gives you a good clue as to what kind of observations you need to make next. After several cross-coupled iterations you will be in a position to compare what you’re looking for with what you have, and to make a rational, evidence-based decision.
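The cross-coupled loop can be sketched in a few lines of Python. The two hypotheses and their required observations are invented for illustration; the shape of the loop is what matters: a forward-chaining step abstracts raw data into observations, and a backward-chaining step picks the best-matching hypothesis and reports which observations it still needs.

```python
# Hypothetical behavioural models: each hypothesis names the observations it requires.
HYPOTHESES = {
    "data_exfiltration": {"large_upload", "off_hours_activity"},
    "credential_stuffing": {"many_failed_logins", "many_source_ips"},
}

def forward_chain(raw_events, features_of_interest):
    """Data-driven step: abstract raw events into a set of named observations."""
    observed = set()
    for event in raw_events:
        for feature in features_of_interest:
            if event.get(feature):
                observed.add(feature)
    return observed

def backward_chain(observed):
    """Hypothesis-driven step: select the best-matching model and return
    the observations it still needs (i.e. what to go and look for next)."""
    best, needed, best_score = None, set(), 0
    for name, required in HYPOTHESES.items():
        score = len(required & observed)
        if score > best_score:
            best, best_score = name, score
            needed = required - observed
    return best, needed

raw = [{"large_upload": True}, {"off_hours_activity": True}]
observed = forward_chain(raw, {"large_upload", "off_hours_activity",
                               "many_failed_logins", "many_source_ips"})
hypothesis, still_needed = backward_chain(observed)
# Both required observations are present, so the loop converges on
# "data_exfiltration" with nothing left to gather.
```

In practice the `still_needed` set would drive the next round of data collection, and the loop would repeat until a hypothesis is confirmed or eliminated.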
Machines excel in a high-volume, number-crunching role during data-driven analysis, reducing a huge amount of input data down to a more manageable subset of abstracted information. Human analysts excel during hypothesis-driven analysis, applying the intuition, curiosity and imagination that machines still lack (and will lack for the foreseeable future), along with the critical judgement needed to lift the signal from the noise, strip out false alarms and find the priority cases.
This is why, at Digital Shadows, we adopt a hybrid, “human in the loop” approach. The trick is making sure the interface between human and machine (or analytical software) is as frictionless as possible.
So, while an all-thinking, all-knowing RoboAnalyst remains a distant possibility, AI has, away from the hype, given us some valuable techniques for enhancing our intelligence tradecraft. No doubt it will give us plenty more: not super smart, but just smart enough.