Measure Threat-Informed Defense

Now that threat-informed defense has been defined, its importance emphasized, and its three dimensions covered, three key questions remain:

  1. What specific activities do I need to become threat-informed?

  2. How threat-informed is my security program now?

  3. What are the next steps I need to take to improve my level of threat-informed defense?

Each of these questions leads to a need to measure threat-informed defense, something that has not been done before. On the Key Components and Maturity Levels page, the three dimensions of threat-informed defense are further decomposed into an initial set of components, with measures for each component from least to most threat-informed. In the sections that follow, we explain the methodology that leverages those components to assess a security program, and then apply the methodology to a hypothetical organization.

Methodology

We developed the following steps for threat-informed defense measurement:

  • Each of the threat-informed defense dimensions is decomposed into components.

  • For each of those components, we developed levels of maturity.

  • The levels have varying weights depending on how important they are to threat-informed defense.

  • The dimensions are weighted as follows:

    • Cyber Threat Intelligence: 35%

    • Defensive Measures: 40%

    • Test & Evaluation: 25%

See Three Dimensions of Threat-Informed Defense for the defined components and maturity levels for all three dimensions of threat-informed defense. These components and maturity levels form the basis for the assessment and scoring described below.

Weighting and Scoring

This scoring system consists of three parts. Each of the three dimensions has components, and each component has levels. Scoring is accomplished primarily at the “level” portion of the framework.

Each level has a number of points associated with it, which are “earned” if the organization satisfies the requirements of that level. These points are calculated differently depending on the question type. For multiselect questions, maximum points for the component are only awarded if all options are selected. For radio-button questions, the points of all previous levels are automatically added into the score value. The score for a dimension is the percentage of possible points in the dimension that have been earned.
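As an illustration, the point-earning rules above might be sketched as follows. The question types, level names, and point values here are hypothetical placeholders, not values taken from the actual calculator.

```python
# Hypothetical sketch of per-component scoring. Level names and point
# values are illustrative; they are not taken from the real calculator.

def component_points(question_type, levels, selected):
    """Points earned for one component.

    levels   : list of (level_name, points) in ascending order
    selected : for "multiselect", the set of chosen level names;
               for "radio", the single chosen level name
    """
    if question_type == "multiselect":
        # Points accrue per option chosen; the component's maximum is
        # reached only when every option is selected.
        return sum(pts for name, pts in levels if name in selected)
    if question_type == "radio":
        # Selecting a level automatically includes all previous levels.
        total = 0
        for name, pts in levels:
            total += pts
            if name == selected:
                return total
    return 0

def dimension_score(earned, possible):
    """Dimension score = percentage of possible points satisfied."""
    return 100.0 * earned / possible
```

For example, selecting the second of three radio levels worth 1, 2, and 3 points earns 1 + 2 = 3 points, while a multiselect component only reaches its maximum when every option is chosen.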

Overall Threat-Informed Defense Scoring

The total score for an organization is computed as a weighted sum of the dimension scores. The logic behind this cumulative score is that taking defensive action is the most important dimension of a threat-informed defense. The importance of CTI is greater than that of test and evaluation based on the experience of CTID and its members. This final formula is not meant to be extremely precise, but rather reflects the “best engineering judgment” of the project team and participants. As with most other frameworks and maturity models, each organization can, and should, tune and tailor this formula based on their needs and constraints.
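Using the dimension weights stated earlier (35% CTI, 40% Defensive Measures, 25% Test & Evaluation), the weighted sum can be sketched as follows; the dimension scores in the example are invented for illustration.

```python
# Overall score as a weighted sum of dimension scores (0-100 each),
# using the dimension weights stated in the Methodology section.

WEIGHTS = {"cti": 0.35, "dm": 0.40, "te": 0.25}

def overall_score(dimension_scores):
    """dimension_scores: dict mapping dimension name -> score (0-100)."""
    return sum(WEIGHTS[dim] * score for dim, score in dimension_scores.items())

# Example with invented scores:
# round(overall_score({"cti": 80, "dm": 50, "te": 40}), 2) -> 58.0
```

As the text notes, organizations tailoring the formula would simply adjust the entries of `WEIGHTS` (keeping them summing to 1) to reflect their own priorities.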

Example Scoring

As a notional example of implementing this assessment and scoring approach, imagine a fictitious Company A. Company A has invested in their security program and met a minimum acceptable level of compliance with their industry standard. They are beginning to implement threat-informed defense but are doing so unevenly. Based on their current investments, the bullets below describe their current state of threat-informed defense in each dimension:

Company A: In-house implementation of a nascent threat-informed defense.

  • CTI: The organization has CTI on IOCs and software used across multiple ATT&CK Techniques. Analysts occasionally read freely available generic reports and disseminate IOCs to the rest of the team.

  • DM: Despite excellent CTI, the company has not leveraged that CTI effectively to prioritize their investments in Defensive Measures. They apply patches as needed, have identified critical assets, collect data per standard best practices, run a set of imported SIGMA rules, respond to alerts as needed, and do not conduct any deception operations.

  • T&E: The company is only minimally investing in Testing & Evaluation, limiting their current testing to an annual purple team exercise that is not tailored to any specific adversary or set of adversary behaviors. A report is generated after each exercise.

To leverage this methodology as an assessment, we have released an online assessment calculator. The screenshots below are taken from its Results tab.

[Figure: Overall Dimension and Component Scores]

[Figure: Impact Complexity Matrix]

After an organization conducts this initial assessment and understands the current status of their threat-informed defensive program, the scoring and impact complexity matrix highlight opportunities to improve their program. The section that follows will describe approaches to improving threat-informed defense maturity once an initial baseline is understood.

Threat-Informed Defense Maturity Levels

At the end of the assessment, each component and dimension is graded on a maturity level according to the following scale:

  • Level 1 (Initial): Activities are ad hoc, reactive, or informal. Processes are largely undocumented, and outcomes depend on individual effort rather than organizational practice.

  • Level 2 (Developing): Some repeatable practices exist, but implementation is inconsistent. Awareness is growing, and partial structures or tools are in place, though effectiveness is limited.

  • Level 3 (Defined): Processes are documented and applied with moderate consistency. The organization demonstrates a baseline capability and can reliably achieve expected outcomes.

  • Level 4 (Managed): Practices are measured, monitored, and integrated across teams. Data informs decisions, and continuous improvement is part of regular operations.

  • Level 5 (Optimized): Capabilities are proactive, adaptive, and continuously refined. Lessons learned and metrics drive innovation and sustainment across the organization.

For Components, Levels are determined based on what percentage of the points for each component have been achieved. For example, if an organization has achieved 75% of the points for a component, they would be rated at Level 4 (Managed) for that component.

For Dimensions, Levels are determined based on the weighted average of the component levels within that dimension. For example, if an organization has component levels of 3, 4, and 5 within a dimension, the weighted average would be calculated to determine the overall dimension level.
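As a sketch, assuming five evenly spaced 20% bands (which is consistent with the 75%-to-Level-4 example above, though the calculator's exact thresholds may differ), the level mappings could look like this:

```python
import math

# Hypothetical mapping from score percentage to a 1-5 maturity level,
# assuming five evenly spaced 20% bands; the calculator's exact
# thresholds may differ.

LABELS = {1: "Initial", 2: "Developing", 3: "Defined", 4: "Managed", 5: "Optimized"}

def component_level(pct):
    """Map a 0-100 component score percentage to a maturity level (1-5)."""
    return max(1, min(5, math.ceil(pct / 20)))

def dimension_level(component_levels, weights):
    """Weighted average of component levels, rounded to the nearest level."""
    avg = sum(lvl * w for lvl, w in zip(component_levels, weights)) / sum(weights)
    return round(avg)

# component_level(75) -> 4, i.e. LABELS[4] == "Managed"
# dimension_level([3, 4, 5], [1, 1, 1]) -> 4
```

With equal weights, the dimension example from the text (component levels 3, 4, and 5) averages to Level 4; unequal component weights would shift that result accordingly.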