A Threat Intelligence Analyst's Guide to Today's Sources of Bias

Structured analytical techniques play an important role in CTI, but they are not always appropriate.

In an industry prone to going overboard with fear-based marketing, the cyber threat intelligence (CTI) community has a refreshing emphasis on questioning assumptions. CTI teams often deploy structured analytical techniques to ensure their assessments are as objective as possible. These range from analysis of competing hypotheses (ACH) to structured debates within teams and open brainstorming sessions designed to keep analysts' minds open.

However, while structured analytical techniques play an important role in CTI, they are not always appropriate.

For one, they are often too time-intensive.

Gathering a security team together to brainstorm competing hypotheses comes at a clear opportunity cost. The cyber security industry's chronic skill shortage means that analysts rarely have time for these exercises on a day-to-day basis. For most organizations, structured analytical techniques are a luxury rather than a necessity, reserved for answering the most important strategic questions. They are perfect for establishing whether a government is secretly hoarding nuclear weapons or whether a new APT campaign is linked to a foreign government. They are less appropriate, however, for unpacking the fifth data breach story of the day.
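For readers who have never run one of these exercises, the core mechanic of ACH is worth spelling out: rate each piece of evidence against every hypothesis, then favor the hypothesis with the fewest inconsistencies rather than the one with the most support. The sketch below (in Python) is purely illustrative; the hypotheses, evidence items, and scores are hypothetical.

    # Illustrative ACH-style matrix. Hypotheses, evidence, and scores are
    # hypothetical; the point is the mechanic, not the conclusion.
    hypotheses = ["H1: targeted state-sponsored campaign",
                  "H2: opportunistic, financially motivated crime"]

    # Each evidence item is rated against every hypothesis:
    # -1 = inconsistent, 0 = neutral, +1 = consistent.
    evidence = {
        "Commodity malware reused from public kits": {"H1": -1, "H2": 1},
        "Ransom note demanding cryptocurrency":      {"H1": -1, "H2": 1},
        "Working hours match a single timezone":     {"H1": 0,  "H2": 0},
    }

    # ACH emphasizes disproving hypotheses: the one with the fewest
    # inconsistencies survives scrutiny, not the one with the most support.
    inconsistencies = {h.split(":")[0]: 0 for h in hypotheses}
    for ratings in evidence.values():
        for label, score in ratings.items():
            if score < 0:
                inconsistencies[label] += 1

    for label, count in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
        print(f"{label}: {count} inconsistent item(s)")

Even this toy version hints at where the time goes: the value of the exercise lies in the team arguing over every cell of the matrix, not in the final arithmetic.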

Analytical techniques can cause plenty of headaches if used incorrectly. Source: Vinicius "amnx" Amano / Unsplash.

Likewise, the Internet is never short of conspiracy theories, and many claims about the threat landscape can be dismissed outright. Here, structured techniques are simply inefficient (e.g. you probably don't need an ACH matrix to dismiss the idea that Kylie Jenner is an agent of the Iranian government).

Many of these analytical techniques were also developed decades ago. That they are still referenced is largely to their credit, and highlights the perennial risks posed by phenomena such as groupthink and an analyst's inherent cognitive biases. Despite their continued relevance, however, there is far less consideration of how these sources of bias manifest today.

Crucially, cyber threat intelligence tradecraft therefore risks a disconnect. While there is a wealth of analytical techniques ideal for tackling some of the knottiest questions facing the industry, we are less prepared to understand and overcome day-to-day and nascent sources of bias. This blog seeks to rebalance intelligence tradecraft discussions by highlighting some of the less glamorous everyday sources of bias that are too often overlooked.

One: Social Media Bias

The online cyber security community is a tremendous resource for anyone involved in the industry. Social media provides a platform to share knowledge and is often a source of mentorship for junior analysts. Twitter can act as a great equalizer, giving anyone in the industry the opportunity to learn from some of the leaders in the field. Where else can a junior analyst learn from the likes of Alex Stamos (a former Facebook CISO), Katie Nickels (MITRE ATT&CK threat intelligence lead), and Rob Joyce (a senior National Security Agency figure and former White House cyber advisor)? Social media is also a useful tool for CTI itself: analysts can track developments and breaking stories as they unfold in real time (often accompanied by commentary from subject-matter experts), and many researchers share technical indicators with the broader community.

Yet, analysts basing their intelligence assessments on the hot takes of the Twitter and LinkedIn thought leader army play a dangerous game. While many cyber security professionals offer genuine added value through these platforms, there is also plenty of needless drama, hype, and even bigotry. It is also entirely possible for leading experts in the field to be wrong.

Don't believe everything you read on Infosec Twitter. Source: NeONBRAND / Unsplash.

During my time as an analyst at Digital Shadows, I have worked with colleagues who follow the InfoSec Twittersphere and colleagues who don't. I have often found that it is the analysts who aren't plugged into social media who come up with the most interesting and perceptive assessments. We therefore need to be acutely aware of the slippery groupthink dynamics that these platforms can bring.

Social Media Bias Mitigation Advice:

  • Be aware that social media is full of unverified claims. When possible, triangulate them with other sources.
  • Think critically about who you want to ‘influence’ you.
  • Be willing to challenge or doubt the views of experts and industry leaders (pro tip: this doesn’t mean you have to attack or harass them online).
  • Engage with analysts outside of the social media bubble.

Two: Novelty Bias

The constantly changing nature of the threat landscape is one of the things that makes CTI so interesting. Tracking and forecasting emerging threats is an essential component of the job, and allows organizations to anticipate and respond to tomorrow's attacks.

Yet there is also an inbuilt bias in the industry: public reporting naturally focuses on emerging threats because they are by far the most interesting and make ideal public relations fodder.

Analysts therefore risk placing far too much emphasis on what is new and exciting. For all the concern over AI-powered cyber attacks, deep fake social engineering, and quantum-powered attacks, there is very little evidence that these issues pose a substantial threat in practice. Hype can also distort the nature of the threat they do pose. For example, despite all the speculation about how deep fakes could be used to sway elections, the empirical evidence suggests the technology's actual victims are overwhelmingly women targeted with non-consensual pornography.

Source: Joseph Cox

Unlike in a James Bond film, the villains in cyber security are rarely exotic. While unpatched vulnerabilities, exposed ElasticSearch servers, and routine phishing emails might not offer quite the same pizzazz, they are far more likely to be the source of an organization's pain. There is therefore a need to reorientate CTI to focus on practical advice, no matter how repetitive and mundane.

Novelty Bias Mitigation Advice:

  • Bring empirical rigor into threat assessments. Novelty alone does not represent a significant threat.
  • Critically analyze the actual threats and risks posed by speculative attack models.
  • Focus on common attack vectors and provide practical advice.
  • Be aware of the marketing agendas that often sit behind reports on new tactics, techniques, and procedures (pro tip: organizations that still haven't patched a 2018 vulnerability should not be focusing on AI-enabled cyber attacks, and a deep fake defense strategy should not be prioritized over phishing protection).
  • End users should focus on working with vendors that offer solutions to real and practical problems.

Three: Headline Bias

There is often a significant difference between the cyber security stories reported in the news and those that represent a threat to organizations. This is understandable: journalists are not CTI analysts. They report stories of interest to the general public, not to those working in a security operations center.

Similar to the novelty bias outlined above, mainstream news outlets can therefore inject a great deal of noise into public reporting. A recent distributed denial of service (DDoS) attack on the UK's Labour Party was quickly shrugged off by the security community, but it was a story guaranteed to attract eyeballs and clicks from the public during a general election (who doesn't love a cyber attack election conspiracy, after all?).

CTI analysts must therefore guard against the trap of assuming that cyber attacks which make headline news necessarily pose a significant threat.

Yet it is intelligence consumers who are the most likely to fall for headline bias. Almost any CTI analyst will have responded to a request for intelligence after a senior executive has read a dubiously reported cyber story during their commute.

Although it is tempting to sneer at questions about the cyber threat posed by killer quantum blockchains, CTI analysts need to act with humility. They should accept that headlines can present a distorted view of the threat landscape and that many of their intelligence consumers know very little about cyber security. Educating non-specialists on the threat landscape is therefore an essential component of the job, and an area where analysts can be hugely influential in helping organizations to focus on the right problems.

Headline Bias Mitigation Advice:

  • As with novelty bias, focus on the practical threats and risks posed by the cyber security stories that make the news.
  • Embrace educating non-specialists, no matter the questions (pro tip: sniggering at intelligence requests is an excellent way to isolate the cyber security team from the broader organization).
  • Situate headline stories within a broader threat context. While your organization might not have been affected by the Capital One data breach, the incident can provide an opportunity to highlight broader themes such as cloud security, third-party risk, and supply-chain vulnerabilities.

Four: Take My Word For It Bias

Governments are now far less shy about calling out malicious cyber activity. Hardly a week goes by without a government indicting hackers or publishing their malware samples. The UK Government has even made naming and shaming cyber perpetrators a central pillar of its broader cyber deterrence strategy.

At the same time, many of these claims are made with little to no supporting evidence, and public attribution statements are often underpinned by political and economic agendas. CTI analysts should therefore be wary of taking them at face value.

Yet the issue presents a difficult balancing act: claims made without evidence should not be dismissed outright either. Ultimately, analysts should still be able to make judgment calls and treat these claims with an element of good faith. Many of the governments making these statements possess some of the most sophisticated signals intelligence agencies in the world, making them some of the meanest kids on the block. If anyone can get away with attributing cyber attacks without providing direct evidence, it is arguably them.

Take My Word For It Mitigation Advice:

  • Acknowledge any lack of evidence when citing these claims in your own reporting.
  • Question the timing. Governments will often have been sitting on this information for months, so analysts should always ask: why are they telling us this now? How might the statement itself impact the cyber threat landscape?
  • Put attribution statements into a broader geopolitical context: consider the tensions between the governments involved and whether there is a track record of cyber attacks between the two states.

Today’s sources of information throw up age-old problems with bias in intelligence. An analyst’s social media feed can easily sway them towards groupthink, while it is tempting to blindly trust attribution statements made by recognized authorities. Although many of these challenges take new forms, the structured analytical techniques developed in previous eras cannot solve them alone. Instead, there is a need to reorientate CTI towards a better understanding of how the day-to-day sources of information we interact with influence us.


This blog originally appeared on Digital Shadows.