This blog was cowritten by John Doyle, Gert-Jan Bruggink, Steven Savoldelli, and Callie Guenther.
Cyber threat intelligence (CTI) teams are frequently asked to provide metrics that illustrate for leadership how the team helps improve the organization’s security posture and reduce risk. However, most CTI programs, especially those just starting down this path, tend to create weak metrics centered solely on production throughput. These are often viewed as “low hanging fruit” and are frequently misaligned with the intent behind the request for metrics. While measuring throughput may serve as an initial step, CTI teams should ultimately aim to deliver meaningful insights that go beyond routine measurements. Metrics that genuinely reflect program impact and maturation require careful planning.
In this article, we examine why organizations struggle to conceptualize and develop effective metrics for CTI programs, then present a practical categorization guide that CTI teams can leverage to develop meaningful program measures. Throughout this blog, we use our categorization taxonomy to showcase examples of how CTI metrics align with actionable intelligence, risk reduction, and business impact. We conclude with parting thoughts and highlight previous research produced specifically around CTI metrics.
Before delving into the blog’s substance, we’d first like to extend many thanks to Callie Guenther, Braxton Scholz, Chandler McClellan, Rebecca Ford, Katie Nickels, Brandon Tirado, Greg Moore, Jonathan Lilly, and others who helped start us down this path at the SANS CTI Summit 2024 workshop we hosted on how to build an effective CTI program.
Demonstrating Value Through Metrics
Metrics are one modality that a cybersecurity function can, and often does, use to measure effectiveness and efficiency. Yet measuring the value cybersecurity provides to an organization is a daunting, cumbersome task with unclear benefits to many of the managers assigned to create such measures. Perhaps this is because of a lack of exposure, training, desire, aptitude, or perceived value, or any number of other reasons. That is, besides the glaring elephant in the room: it is challenging to show the value of improved decision-making, cost savings from mitigated incidents, and agility in responding to a dynamic threat landscape, especially in a quantifiable way.
To properly create metrics that showcase the value of CTI services, collaborative systems thinking is crucial. This approach should account for factors such as brand reputation, consumer trust, legal consequences, employee productivity, and morale, going beyond the shortcomings of relying solely on the traditional cybersecurity triad of confidentiality, integrity, and availability. The process becomes even more complex when developing meaningful metrics for programs like CTI, whose role is to inform and enhance decision-making among defenders, risk managers, and cybersecurity leadership. Throughout, we should strive to work with partner teams, our stakeholders, to understand the impact of our work in driving security outcomes for the organization. A unified story, often conveyed through metrics, can effectively demonstrate how the CTI function strengthens organizational resilience, reduces cyber risk, protects against regulatory fines, and safeguards brand reputation, thereby justifying the cost of staffing and maintaining a CTI program.
Purposeful Metrics: Setting Goals and Outcomes
Before cybersecurity and threat intelligence leadership rush to decree that all programs need metrics, a more effective starting point is to determine what the program wants to measure, why, and what outcomes capturing these measurements will drive. Metrics should serve as a means to an end. Organizations should first establish the purpose of the metrics they intend to capture and clarify how they plan to use this information to drive business decisions. For leadership and management, the purpose may be to measure and demonstrate value to those who have a say in the program’s funding and existence, highlighting its raison d'être. For peer teams, it may be to measure their expected outcomes and how CTI has helped them achieve these.
In our experience, CTI programs that embrace solely weak metrics often lack clearly defined objectives, have a limited understanding of how to connect CTI activities to business outcomes, or genuinely struggle with the inherently difficult task of quantifying the impact intelligence has on improving the organization’s security posture. This reliance on easily gathered data often fails to justify the program’s value to leadership, hindering its growth and potentially leading to misallocation of resources.
Developing metrics carries an administrative cost that often goes beyond merely collecting available data. For example, gathering relevant metric-supporting data may require building new technology, processes, and workflows to collect, store, and display metrics that serve a specific, exploratory purpose. Capturing metrics solely for their own sake is a misstep that can lead to wasted resources. Instead, data collection should support business outcomes and use cases. For example, analyzing the intelligence requirements the team serviced over a fixed period can help leadership determine whether out-of-band stakeholder re-engagement is needed: does the team still need to produce intelligence products on a given topic, or can it shift its focus to other pressing needs?
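As a minimal sketch of that requirement-servicing analysis (the requirement names and dates are hypothetical), tagging each product with the intelligence requirement it serviced makes stale requirements easy to surface for re-engagement:

```python
from collections import Counter
from datetime import date

# Each record: (intelligence requirement serviced, publication date).
# Requirement names and dates are illustrative placeholders.
products = [
    ("IR-01 ransomware targeting our sector", date(2024, 7, 3)),
    ("IR-01 ransomware targeting our sector", date(2024, 8, 19)),
    ("IR-02 credential phishing against staff", date(2024, 7, 22)),
    ("IR-03 third-party supplier compromise", date(2024, 4, 2)),
]

quarter_start = date(2024, 7, 1)
serviced = Counter(req for req, d in products if d >= quarter_start)

for req in {req for req, _ in products}:
    count = serviced.get(req, 0)
    flag = "  <- consider stakeholder re-engagement" if count == 0 else ""
    print(f"{req}: {count} product(s) this quarter{flag}")
```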
It is also worth noting that metrics are not the only method to showcase program success; qualitative accomplishments that highlight value across the organization should also be celebrated.
Building and Evaluating Metrics: A Taxonomy for CTI Metrics
CTI teams, like many stakeholder-driven functions, face challenges in creating universally transferable metrics, which can vary by organization, scale, and stakeholder involvement. Below, we offer a taxonomy for constructing and evaluating meaningful metrics within CTI programs. This taxonomy serves as a foundation to guide teams in building metrics from the ground up, allowing programs to adopt one or more of these framings.
By Function
CTI programs should think about metrics development as a way to drive action, but for those not trained in metrics design, or in the mission outcomes it enables through the CTI and cybersecurity lens, conceiving of starting points is often difficult. We propose that organizations think of metrics in three broad categories: Administrative, Performative, and Operational. Some overlap may exist between performative and operational metrics, especially in evaluating investments in security controls, tooling, and external datasets. Metrics in each category serve unique purposes, from cost planning to gauging resource utilization.
- Administrative Metrics: Administrative metrics capture cost expenditures on staff, conference and training budgets, software licensing, and data set procurement. They can help determine which data sets were used to support relevant intelligence requirements over a given time period, or enable comparative analyses such as evaluating the impact of adding headcount to the team.
- Performative Metrics: Performative metrics gauge throughput and measure the level of effort required to complete tasks, supporting capacity baselining. This is useful for resource planning, setting expectations by job role and level, evaluating aspects of performance, and establishing individual and program-wide goals. Counts of tickets created, active intelligence requirements, RFIs supported by type, the rate of proactive versus reactive delivery of threat activity information, and adherence to internally defined quality standards are all performative metrics. While these metrics provide a high-level overview, they may offer limited insight for driving specific actions or improvements; they are sometimes called “vanity metrics” and often require further analysis alongside other data to give a complete picture of workload or performance.
- Operational Metrics: Operational metrics focus on the impact on business operations and on the functions designed to support the overall growth strategy. In CTI, we can use this category to illustrate how the services we provided helped drive down risk, informed cybersecurity strategy and planning, and enabled cyber defense actions. These metrics are rarely owned exclusively by the CTI team and are often collaborative in nature, produced with teams that specialize in quantifying cost savings and with other stakeholders.
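One lightweight way to make these functional categories operational (a sketch; the classes and field names are our own, not a standard) is to tag every captured metric with its category so reports can be filtered by function:

```python
from dataclasses import dataclass
from enum import Enum

class MetricFunction(Enum):
    ADMINISTRATIVE = "administrative"
    PERFORMATIVE = "performative"
    OPERATIONAL = "operational"

@dataclass
class Metric:
    name: str
    function: MetricFunction
    value: float

# Hypothetical metrics a program might capture in a quarter.
captured = [
    Metric("licensed data sets used", MetricFunction.ADMINISTRATIVE, 4),
    Metric("RFIs closed", MetricFunction.PERFORMATIVE, 37),
    Metric("detections informed by CTI reporting", MetricFunction.OPERATIONAL, 12),
]

# Filter a report down to one functional category.
performative = [m for m in captured if m.function is MetricFunction.PERFORMATIVE]
print([m.name for m in performative])
```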
Arguably, there is enough overlap between administrative and performative metrics to merge them into one category. The value in collecting a variety of metric types is that it aids planning as a cost center, given the reality of finite resourcing. As noted, the method of reporting metrics should reflect the unique operational environment of an organization, so we encourage considering the other framings below as well.
By Audience or Stakeholder
When designing metrics, consider the intended audience and the outcomes the metrics aim to support. A given metric may serve various purposes, from helping CTI managers justify resource needs to highlighting areas of excellence or identifying routine concerns. Most often, the primary audience for metrics is senior leadership, but depending on industry or region, the audience may also be shaped by regulatory compliance, headcount scrutiny, or even audit.
To ensure relevance and impact, the metrics chosen for each audience should be directly aligned with key business outcomes. While tailored to different audiences, these metrics ultimately contribute to a unified understanding of how performance impacts business success.
These measures may then serve as fodder for a CTI manager to develop a business justification for additional headcount, highlight an area of excellence, or identify routine areas of concern to consumers. Where metrics are tied directly to intelligence requirements, it is prudent to capture the stakeholders, the desired impact, and the realized impact in support of stakeholder outcomes. Common implementations of this metric type are often limited to consumer feedback along the criteria of timeliness, completeness, and actionability, covering both immediate action and input into strategic planning.
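As a minimal sketch of that feedback implementation, assuming a simple 1-to-5 rating per criterion (the scale and field names are our own, illustrative choices):

```python
from statistics import mean

# Hypothetical consumer feedback, one dict per intelligence product review.
feedback = [
    {"timeliness": 5, "completeness": 4, "actionability": 3},
    {"timeliness": 4, "completeness": 4, "actionability": 5},
    {"timeliness": 3, "completeness": 5, "actionability": 4},
]

# Average score per criterion across all reviews.
for criterion in ("timeliness", "completeness", "actionability"):
    avg = mean(entry[criterion] for entry in feedback)
    print(f"{criterion}: {avg:.2f} / 5")
```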
Examples:
- The CTI team may keep an internal count or percentage of documented consumer workflows as an early metric for awareness and integration. They could also monitor the number of stored and reviewed Priority Intelligence Requirements (PIRs), reflecting a focus on CTI-owned processes.
- The CTI team may leverage ticketing platforms to track the volume of opened and resolved tickets, the frequency of direct requests, the teams driving ticket creation, the support provided, and the types of work required. This reflects a middle ground of both CTI-owned processes and the RFI processes of peer teams.
- CTI may track the rate at which threat actor information is delivered proactively versus reactively to demonstrate increasingly forward-looking analysis. The team may also know where and how to view consumer outcomes and metrics to determine how CTI impacted these operations. This reflects a fully integrated data collection and presentation method.
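The proactive-versus-reactive rate in the last example might be computed from a ticketing export along these lines (a minimal sketch; the delivery labels are hypothetical):

```python
from collections import Counter

# Hypothetical delivery labels pulled from a ticketing export.
deliveries = ["proactive", "reactive", "proactive", "proactive", "reactive"]

counts = Counter(deliveries)
total = sum(counts.values())
rate = counts["proactive"] / total
print(f"Proactive delivery rate: {rate:.0%} ({counts['proactive']} of {total})")
```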
By Organizational Reach
Effective CTI programs leverage regular cadence syncs and consumer group integration to raise stakeholder awareness, grow the brand, expand organizational reach, and ensure cross-functional cooperation. Understanding the workflows of stakeholder teams is vital to demonstrating CTI’s impact. Metrics can measure the frequency and timing of CTI interactions, correlate team utilization via requests for information (RFIs), and measure feedback loops, collaborations, and brand advocacy across the organization.
Early-stage CTI teams may operate with minimal integration, placing a heavy burden on team members to educate consumers about CTI’s role. As integration improves, the goal is to move towards seamless alignment. Teams looking to build metrics based on organizational reach can benefit from lessons learned by industry peers with similar team size, composition, or constraints.
CTI leadership can demonstrate incremental improvement in trust with natural stakeholders, especially where progress was previously stymied. This can be achieved by breaking down organizational silos, overcoming difficult personalities, and creating opportunities for joint work production, training, or cross-team exposure. Additionally, cybersecurity senior management can explicitly recognize improved intra-team collaboration.
Examples:
- The risk management team may proactively reach out to CTI leadership or team members to collaborate on short- or long-term assessments, seeking CTI as an input into their overall risk assessment. Subsequent outbriefs to cybersecurity and risk leadership should include representatives from both teams.
- The red team manager may want to know how often CTI supports their engagements to properly simulate realistic threats to the organization and how to improve internal processes for both teams to improve quality.
By Complexity
Metrics vary in complexity based on access to data and the need to engage partner teams. Low-complexity metrics are those that operate fully within the control of the CTI team, such as tracking the number of phishing emails reported by employees. High-complexity metrics, by contrast, rely on cross-team data, processes, and collaboration, which requires accounting for additional administrative overhead.
High-complexity tasks, and the metrics captured from them, may involve significant collaboration, implicit assumptions, and inadvertent bias, leading to potential cascading errors. This underscores the need for the CTI program lead to remain vigilant during metric creation and capture. Balancing effort across different metric types is essential to ensure that the CTI team is not overburdened by overly complex metrics while still capturing valuable insights that require cross-team collaboration. This balance allows for efficient resource allocation and maximizes the impact of the CTI program.
Examples:
- The CTI team’s count of data sources used during the production of intelligence over a given period of time may be wholly dependent on the CTI team’s operations and could be considered a low-complexity metric.
- The CTI team performing regular spot checks on whether intelligence produced adhered to internal quality standards, provided the proper substantive depth, and was written through the lens of business operations support would be a low- to moderately complex task, due to the resource commitment required.
- A metric built to identify cost savings through risks reduced, faster adversary discovery, strong detections written, and other use cases broken out per product produced across stakeholders would be considered high-complexity. This requires a clear understanding of consumer workflows, collaboration, and agreement over outcomes.
By Point-in-Time or Period of Time
Metrics may provide value either as point-in-time snapshots or as longitudinal data over extended periods. While extended time frames help identify trends and outliers, attributing outcomes to specific causes becomes more challenging, so it is critical to document factors that influence metric interpretation.
Note: structured data stored in a logical manner with proper tagging can be easily queried in spreadsheets or in more powerful central intelligence systems, such as The Vertex Project’s Synapse, to quickly create recurring reports and queries that surface longitudinal trends. The sketch below illustrates one way to store intrusion-related data in a structure that is easy to query.
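As a minimal, spreadsheet-equivalent sketch (the records, tags, and dates are hypothetical, and dedicated systems like Synapse offer far richer modeling and querying):

```python
from datetime import date

# Hypothetical tagged intrusion-related records.
records = [
    {"date": date(2024, 5, 2), "tags": {"phishing", "initial-access"}},
    {"date": date(2024, 6, 9), "tags": {"ransomware"}},
    {"date": date(2024, 9, 14), "tags": {"phishing"}},
]

def monthly_counts(records, tag):
    """Count records carrying a tag, grouped by year-month."""
    counts = {}
    for r in records:
        if tag in r["tags"]:
            key = r["date"].strftime("%Y-%m")
            counts[key] = counts.get(key, 0) + 1
    return dict(sorted(counts.items()))

print(monthly_counts(records, "phishing"))  # {'2024-05': 1, '2024-09': 1}
```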
Examples:
- If the team tracks volume of output and half of the production team was out for a given month, that context should accompany the data for as long as the month remains relevant to presentation, not live only as a footnote in a middle manager’s head. Displaying this type of information as a percentage or ratio should also follow statistical best practices so the data can be interpreted properly; see the normalization sketch after this list.
- Production or impact over a given month or year can be compared year-over-year to explore what could explain a shift and whether it reflects a new baseline, such as a strong-performing team member leaving the CTI program and creating a lapse in coverage until resourcing is addressed.
- Shifts in how CTI spends its time on consumer support can illustrate exploration into innovation and justify expanding the team’s remit, engineering development efforts, or tool procurement.
- Growth or decline in reliance on individual sources can pinpoint analytic dependencies, prompt revisiting the collection management plan, and drive evaluation of available data sources against their respective value propositions.
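For the staffing caveat in the first example above, one way to apply statistical normalization (a sketch, assuming analyst-days as the availability unit) is to report output per available analyst-day rather than raw counts, so a half-staffed month is not misread as a productivity drop:

```python
# Hypothetical monthly figures: products published and analyst-days available.
months = {
    "2024-06": {"products": 20, "analyst_days": 100},
    "2024-07": {"products": 11, "analyst_days": 52},  # half the team was out
}

for month, m in months.items():
    rate = m["products"] / m["analyst_days"]
    print(f"{month}: {m['products']} products, {rate:.2f} per analyst-day")
# Raw counts drop ~45% in July, but per-analyst-day output is roughly flat.
```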
Getting Started with Metrics: An Incremental Approach
CTI programs that are starting to build metrics should be intentional about their creation, clearly outlining the purpose behind each. Ultimately, metrics are a means to an end. Period. Hard stop. As programs mature, they can develop more strategic and complex metrics designed to unearth trends. However, experimentation is usually required to right-size these aspirational metrics.
There is a deeper discussion needed, however, about what to capture and when as the CTI program’s demand, capacity, and capability grow. Only once the specific threats, vulnerabilities, PIRs, or stakeholder demands are clarified and documented do you have a tangible CTI program to start with. Any metrics developed before this baseline level of maturity will fall victim to high noise, shifting contexts, and exceedingly fluid business processes.
As Rob Lee and Rebekah Brown emphasize in the SANS FOR578 course, the core metric for a CTI team is whether it meets stakeholder needs and demonstrates business impact. CTI teams should aim to provide straightforward answers to common program questions and establish a tangible program baseline, capturing specific threats, vulnerabilities, and stakeholder needs. As a customer service function, this is imperative to justify the CTI program’s existence.
The initial metrics a program creates should focus on data that is easily obtainable, minimally complex, and easy to interpret. As CTI teams develop familiarity with data collection, they can evolve through the taxonomy, capturing more nuanced and sophisticated metrics. Continuous improvement, supported by structured, repeatable data, is crucial for metrics-driven maturity.
Example:
- Start with year-over-year trend analysis to establish baselines. This gives stakeholders a clear view of how security posture evolves over time and helps inform strategic decisions.
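A minimal sketch of that year-over-year baseline (the yearly totals are hypothetical):

```python
# Hypothetical yearly totals for a chosen metric, e.g., RFIs serviced.
yearly = {"2022": 48, "2023": 61, "2024": 74}

years = sorted(yearly)
for prev, curr in zip(years, years[1:]):
    change = (yearly[curr] - yearly[prev]) / yearly[prev]
    print(f"{prev} -> {curr}: {change:+.0%}")
# 2022 -> 2023: +27%, 2023 -> 2024: +21%
```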
Every metric created should be evaluated for what it might imply to an uninformed consumer, so that CTI leadership can onboard staff to the pitfalls and errors analysts may encounter during routine delivery. The easiest way to do this is to consider the implications and inferences around causality, assumptions, and gaps. This becomes critical when conveying metrics over time, as people may erroneously fill in knowledge gaps in ways unsupported by evidence.
Examples:
- The volume of CTI products may dip over a given period, but if the depth of reporting has increased, it would be an error to assume the CTI team is less productive. Such an assumption represents an unsupported conclusion about causality.
- Metrics built around risk reduction will almost certainly require assumptions about value. For example, successfully mitigating damage to brand reputation is unlikely to yield a quantifiable figure. CTI practitioners must clearly communicate assumptions of this value proposition.
Building Stakeholder Engagement with Metrics
Metrics not only convey performance but also serve as tools to secure buy-in from critical stakeholders. Effective metrics enable CTI programs to demonstrate responsible investment in security resources, communicate the rationale behind metric selection, and reveal insights that support data-driven decision-making. By engaging stakeholders in this process, CTI teams increase the likelihood of program support and build trust in the data used to inform cybersecurity strategies.
Growing a CTI program involves more than tracking metrics; it requires actionable insights that drive program alignment with organizational goals. This alignment covers practical elements such as processes, deliverables, and integrations that allow for consistent measurement.
Example:
- If the organization uses JIRA, CTI teams can create deliverable metrics using JIRA’s built-in reporting, establishing a cost-effective program dashboard that tracks CTI engagement (a hedged sketch follows below).
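A hedged sketch of that JIRA approach, using JIRA Cloud’s classic REST search endpoint with a JQL query (the instance URL, credentials, and CTI project key are all hypothetical):

```python
import requests

JIRA_URL = "https://example.atlassian.net"  # hypothetical instance
AUTH = ("cti-bot@example.com", "api-token")  # JIRA Cloud uses email + API token

# JQL: tickets in a hypothetical CTI project resolved in the last 90 days.
jql = "project = CTI AND resolved >= -90d"

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "maxResults": 0},  # maxResults=0: we only want the total
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("Tickets resolved in the last 90 days:", resp.json()["total"])
```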
To close, the table below collects examples that have proven effective in supporting teams while also contributing to the organization’s internal maturity journey. It serves as a practical reference, illustrating various CTI metrics by role, audience, complexity, and time frame.
Table 1: Sample CTI Metrics Table
| Metric Type | Example | Role | Audience | Complexity | Time Frame |
|---|---|---|---|---|---|
| Report Utility | Share of reports using licensed data sources | Administrative | Senior Management | Low | Point-in-Time |
| Resource Allocation | CTI support to red team engagements | Performative | Red Team | Medium | Period of Time |
| Threat Reduction Impact | Measured decrease in identified risks | Operational | Risk Management | High | Period of Time |
| Consumer Feedback Rate | Frequency of RFI submissions | Integration | Consumer Teams | Low | Period of Time |
| Cost-Benefit Analysis | Estimated savings from CTI-led mitigations | Operational | Finance | High | Point-in-Time |
Closing Thoughts
CTI teams will continue to be asked to provide metrics that demonstrate their value to leadership, regardless of whether the request aligns with a well-defined purpose. CTI professionals should approach such requests with as little ambiguity as possible, seeking clarification on the desired outcomes and on leadership’s appetite to allocate additional resources should the intended goals exceed existing capabilities or capacity.
We would be remiss if we did not highlight existing CTI metrics resources:
- Gert-Jan’s Master CTI Metrics Matrix
- Marika Chauvin and Toni Gidwani's 2019 SANS CTI Summit talk, How to Get Promoted: Developing Metrics to Show How Threat Intel Works
- Metrics are the Drivers of CTI Value
- Freddy Murre's 2022 FIRST CTI Symposium presentation, Vanity Metrics - The BS of Cybersecurity
We stewed on this quite a bit, and hopefully the insights provided in this blog offer a solid starting point for approaching metrics generation categorically. Feel free to follow us on social media for more content on this and other CTI topics.
Special thanks to Freddy Murre and Nicole Hoffman for their thoughtful peer review, questions posed, and substantive suggestions that improved the blog’s quality.