“The human mind tends to see what it expects to see and to overlook the unexpected.
Change often happens so gradually that we do not see it or we rationalize it as not being of fundamental importance until it is too obvious to ignore.
Identification of indicators, signposts, and scenarios create an awareness that prepares the mind to recognize change.”
–The Dutton Institute
A Brief Recap
Cyber security leadership, the executive team, the board of directors, and risk management often seek our expert opinion on how an emerging technology or trend will impact the current and future state of business operations, what shifts it could cause to the organization’s risk posture, and whether existing security controls are sufficient to manage any new risk or whether additional resources or investment are required. Leadership is also interested in how emerging technologies and trends can provide efficiency gains to existing workflows.
The challenge we face when trying to assess the impact of any new advancement is that it often requires a broad, multidisciplinary knowledge base beyond just cyber security or threat actor tradecraft and trends. In the previous blog post, we introduced the concept of systems analytic thinking, the DIMEFIL framework, an approach to determine market viability, and how to present insights using a common frame of reference. In this blog post, we delve deeper and wider, building our base knowledge of approaches to evaluate emerging technologies and trends, introducing useful structured analytic techniques (SATs) designed to aid in forecasting, and offering considerations on how to craft assessments so that they resonate with leadership.
Perspective Matters, Link Back to Related Context
All stories require a starting point, a contextual backdrop that acts as a foundation the author can build upon. In part 1 of this blog series, we used cyber risk as a well-known reference point that organizational leadership understands. This is one reason why, when we craft analytic assessments, we focus on risk concepts like business impact, chance of occurrence, and coverage or gaps in existing security controls. However, this is not the only frame of reference worth using in CTI—we can also leverage shared experiences of lived-through events to draw anecdotal parallels when communicating analytic findings.
Each week there is no shortage of cyber events. Yet as we recount the news headlines in any given week, how many, if any, rise to the threshold of signifying an inflection point, some functional shift that changes the natural order? Inflection points can take different forms when we consider cyber innovation and advancements, ranging from deviations in a threat actor’s baseline operations, to a new technology platform expanding an organization’s attack surface, to a vendor introducing a security feature that hives off full classes of attacks, to the benign—perhaps naive—red teamer who uploads a new framework that hands adversaries yet another capability to integrate into their operational arsenal. Since potential inflection points span such a wide range, it’s best we create an organizational schema from which we can establish a baseline for identifying inflections.
For simplicity’s sake, we will stick with the same model from the first blog post, where we broke cyber risk into three categories: adversary threats and trends, integrated tech stack, and existing security controls. The following graphic presents an illustrative, non-exhaustive enumeration of “cyber” advancements over the past decade grouped into the three categories. Each advancement identified in one of the categories could arguably be considered an inflection point and act as a shared frame of reference that we can use for comparative purposes to make relative evaluations.
Figure 1: Illustrative Trends and Advancements Over the Past Decade
The interesting part about these advancements is that some of them happened in tandem or in short succession of one another. Some may have been spurred or influenced by broader macro-level events like the United States’ Comprehensive National Cybersecurity Initiative, or trends like the droves of talented cyber security personnel in the private sector today who largely came from intelligence community, military, or law enforcement cyber careers. In the past decade, we have seen cyber security as a field evolve, branching into burgeoning sub-disciplines.
This outgrowth and fragmentation is one of the reasons I believe newcomers to cyber security struggle at the outset – the core base knowledge requirements have grown commensurately. Individuals with prior backgrounds have incrementally built their knowledge and refactored their skills, and they have the depth of understanding to take interrelated and interconnected factors in stride. My buried bottom line here is that those coming into this field new have a steeper learning curve and often a longer journey to internalize the same lessons and understand those interrelated elements. Don’t worry, we’ll provide some guidance on this in the “Where Do I Go To Glean These Insights?” section.
Where to Start
Structured analytic techniques (SATs) are methodical processes designed to help one challenge judgments, create mental models, stimulate creativity, arrange and visualize data, manage uncertainty, and overcome biases inherent to the human mind, amongst other things. In total, there are over 60 different SATs available to assist analysts with critical thinking and problem solving across a range of problem sets. In their seminal publication, "Structured Analytic Techniques for Intelligence Analysis", Richards J. Heuer Jr. and Randolph H. Pherson identify six core families of SATs, which others—including the CIA—have distilled into three primary categories: Diagnostic, Imaginative, and Contrarian techniques.
Figure 2: Examples of SATs in Each Category
While this blog post won’t be able to do justice to the full gamut of SATs, a handful of these are particularly useful in helping frame the thought process when forecasting or evaluating potential future scenarios, which includes emerging technology threats and trends:
- Red Teaming. An attempt to emulate adversary behavior by replicating how the adversary would approach a particular situation. Because this SAT requires thinking like an adversary, understanding their motivations, how they operate, what they hold culturally significant, and similar factors is important. One of the biggest challenges with this SAT when applying it to cyber operations is breaking from mirror imaging using Western standards. Often if an analyst says “If I were [country], I would…”, whatever follows is likely a statement that mirror images our own perspective, ideals, or motivations. Chinese President Xi likely mirror imaged during the 2015 negotiations with then U.S. President Obama, in which both parties agreed that neither country’s government would conduct cyber theft of intellectual property with the intent of providing a competitive advantage to its commercial sector.
- Future Scenarios/Alternative Futures. According to the CIA’s A Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis, this SAT “systematically explores multiple ways a situation can develop when there is high complexity and uncertainty”. Four potential future scenarios are developed by identifying the two key variables driving the potential change. The output takes the form of a two-dimensional matrix. This technique pairs particularly well with the key assumptions check.
- Signposts of Change, Indicators and Warning (I&W), and Cone of Plausibility. Given the highly interrelated nature of these three SATs, it is easiest to explain them as a group. In Signposts of Change or I&W, an analyst creates a list of expected observable events that link to particular outcomes. These events, dubbed indicators or signposts, act as a baseline to help track, monitor, or evaluate changes over time. The Cone of Plausibility then uses these signposts, or key drivers and assumptions, to generate a range of plausible alternative scenarios to envision various futures and their implications.
- “What If?” Analysis. Holding aside any preconceived ideas, a future outcome or event is identified, and analysts work backwards to reconstruct the series of events that would have had to transpire for us to arrive at that outcome.
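To make the signpost-driven techniques above a little more concrete, the sketch below shows one minimal way an I&W checklist could be tracked mechanically. The indicator names and the 50% threshold are purely illustrative assumptions, not a vetted indicator set or methodology:

```python
# Minimal sketch of a Signposts of Change / I&W tracker.
# The indicator names and threshold below are hypothetical examples only.

def warning_level(indicators: dict, threshold: float = 0.5) -> str:
    """Return a coarse warning level based on the share of signposts observed."""
    observed = sum(indicators.values()) / len(indicators)
    return "elevated" if observed >= threshold else "baseline"

signposts = {
    "Proof-of-concept exploit published": True,
    "Underground forum chatter references the technology": True,
    "First in-the-wild abuse reported by a vendor": False,
    "Technique folded into a common offensive framework": False,
}

print(warning_level(signposts))  # 2 of 4 signposts observed -> "elevated"
```

In practice the value is less in the arithmetic than in forcing the analyst to write the expected observables down in advance, so that change is recognized against a recorded baseline rather than hindsight.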
Putting It All Together
One approach analysts can take when evaluating an emerging technology, advancement, or some other inflection point is to identify first-, second-, and third-order effects. In plain English: what is the immediate impact, followed by longer-term effects from downstream dependencies? Answering the five Ws and how is also usually a fruitful endeavor as part of this process.
Let’s use a historic, well-known cyber attack as an example to illustrate the various orders of effect it had before providing a technology and a trend focused example.
The 2017 NotPetya attack is a good example. In short, Russian military cyber operators deployed a wiper to M.E.Doc customers through the software’s update server. The attack was likely designed to impact only Ukrainian organizations. However, at least at the time, any organization with an operational footprint in Ukraine, or that worked with organizations in Ukraine, had to choose between M.E.Doc and one other government-approved accounting package for tax reporting purposes.
- The first-order effect was the destruction of data on impacted systems.
- The second-order effect was the unanticipated spread and destruction of data on systems outside of Ukraine that had M.E.Doc software installed.
- A third-order effect was the impact on vaccine production at one leading drug manufacturer, which had to pause distribution of its hepatitis A and B vaccines for several months. This led U.S. policymakers to question the fragility of certain sectors to cyber risks.
- A fourth-order effect was the question of whether cyber insurance would cover the cost of damage from a cyber attack if it is attributed to a state-sponsored actor. Total damage from this attack was estimated at approximately $10 billion.
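The cascade above can also be sketched mechanically: if we record each event's direct downstream effects, a breadth-first walk groups effects by order. The dependency map below is a hypothetical paraphrase of the NotPetya example, for illustration only:

```python
from collections import deque

# Hypothetical dependency map: each event lists its direct downstream effects.
# Entries loosely paraphrase the NotPetya example; they are illustrative only.
effects = {
    "wiper pushed via M.E.Doc update server": ["data destroyed on Ukrainian systems"],
    "data destroyed on Ukrainian systems": ["spread to multinationals running M.E.Doc"],
    "spread to multinationals running M.E.Doc": ["vaccine distribution paused"],
    "vaccine distribution paused": ["policy debate over sector fragility"],
}

def order_effects(root: str) -> dict:
    """Group downstream effects by order (depth) via breadth-first traversal."""
    orders = {}
    queue = deque([(root, 0)])
    while queue:
        event, depth = queue.popleft()
        for downstream in effects.get(event, []):
            orders.setdefault(depth + 1, []).append(downstream)
            queue.append((downstream, depth + 1))
    return orders

for order, events in sorted(order_effects("wiper pushed via M.E.Doc update server").items()):
    print(f"{order}-order: {events}")
```

Real-world effect chains branch and loop rather than forming a tidy line, so treat this as a thinking aid for enumerating downstream dependencies, not a predictive model.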
Figure 3: Refresher on the Functional Inputs
IaaS
- Intent. Does this technology—IaaS—change a threat actor’s intent or motivation? The answer will probably be no in most cases unless our company produces the technology, in which case we might ask whether any threat actor groups have targeted similar emerging technologies before and whether it aligns with a given country’s national priorities for something like technology transfer to bolster domestic industries.
- Capability. Does IaaS provide a threat actor with a new capability? If so, what advantage does it provide? Are there any barriers to adoption and operational use? What would drive the adversary to adopt it? What are some observables we should look for as indicators of adoption? If we do not independently have visibility into these signposts, who might, so that we could consider establishing an information-sharing partnership?
- Opportunity. Is our organization using IaaS, or are we considering moving to IaaS? If so, what are the use case and driving factors for adoption? What benefits does integrating it into the organization’s tech stack provide? In what capacity will it be deployed? Who will have access? What safeguards are being considered surrounding its deployment? Does it introduce new vulnerabilities that expand our existing attack surface (vendor-owned vs. on-prem)? Will we maintain the same visibility in our logging to detect threat activity? Are there other tradeoffs or blind spots to consider in fully or partially migrating to IaaS?
Edge Device Persistence
- Intent. Does edge device persistence change threat actor intent or motivation?
- Capability. Is this a new capability or has this been observed and reported on before? If it has been seen previously, which threat actor groups have employed it, how, and in what capacity? Has the technique been used against our industry vertical before? What was the series of events leading up to the adversary deploying the persistence mechanism and how was the access used afterward? Is this a brand new technique or is it present on the MITRE ATT&CK framework?
- Opportunity. Are we susceptible to this technique, i.e., is the affected technology deployed in our environment? Do we have coverage through our existing security controls to detect, contain, remediate, and recover? If not, how would we detect this type of activity? Do we need to partner with the affected vendor or engage a third-party security company? When was the last time the organization checked whether the firmware installed matches the firmware currently running on the devices? Which security or IT teams would be responsible? What other edge devices operating in the environment might require inspection beyond this particular affected make, model, and software version, and with what regularity do they undergo security vetting?
Hopefully these questions provide a cursory roadmap to get you started as an analyst. Now let’s pull the findings together.
Documenting Findings in a Compelling Manner
Throughout the intelligence curation process, the AIMS methodology provides a useful framework to conceptualize the story we plan to tell and how we will message the analytic findings. For those unfamiliar with AIMS, it is an acronym that stands for Audience, Issue, Message, and Storyline. There are alternative storytelling frameworks beyond AIMS you might consider like AIM or GAME, but this blog post will only focus on AIMS.
- Audience refers to identifying the end consumer of your insights. It involves being familiar with their role, responsibilities, background, current work priorities, schedule, existing demands, future aspirations, goals, personality, etc. Understanding the intended audience helps you as the author craft the narrative in a way that will best resonate with them, using analogies and other explanatory helpers that align with their expertise or other frames of reference. For emerging technology stories in particular, where a recommendation is made to shift resource investment, knowing when the audience will engage in budget decisions or attend risk planning committee meetings is important for aligning the delivery of the product with those discussions.
- Issue refers to the topic at hand and why we are highlighting it. The “I” in AIMS could also be viewed as standing for “intelligence question”, which is yet another way for us to frame what we are trying to address with our analysis. A general intelligence question I often use for emerging technology and trends is “What are the drivers of adoption for X, how could this spur a change in adversary behavior, does this fundamentally change the current state of offense (adversary ops and capabilities) vs. defense (cyber security product coverage and practices) beyond the margins, what indicators would we expect to see, and when do we expect this to take place?” Yes, that is a very loaded, multi-part intelligence question, but answering it—all of the sub-questions or elements of them—tees up a potential structure we can employ to communicate the “what, so what, and now what” in a finished intelligence product.
- Message refers to the bottom line: what we want to tell the audience and any associated gaps, limitations, or assumptions we are making with it. Will X disproportionately impact a certain industry for the next two years, such that we should consider shifting our mergers and acquisitions (M&A) strategy in kind? In the early days of the COVID-19 pandemic, as organizations shifted employees to a fully remote workforce, many moved to a business model they had not considered before, often without commensurate security measures to keep pace with business-enabling IT operations. Using the same series of intelligence questions posed above, we would expect to see—and did see—analysts from across several vendors assess that the rather sudden, unplanned shift to a fully remote workforce was likely to drive an uptick in cyber operations using COVID-19 guidance as spearphishing lures, capitalize on lax security practices in the short term, and raise hospitals’ susceptibility to ransomware targeting. In my opinion, those were all very effective messages to put forth, especially at the beginning of the pandemic, as organizations were grappling with whether there would be any cyber security implications associated with it.
- Storyline refers both to how we tell the story in a narrative format and to where we start the story; see frame of reference. If we are talking about an emerging technology that we expect our audience to be unfamiliar with, it will require some time upfront to lay the groundwork and provide a concise primer on what it is before delving into its implications. If we have previously written on the topic, we have the luxury of picking up where we left off to continue the story, providing a link back to the previous product that contains the foundational elements. Either way, we need to be mindful of the amount of content that can be consumed at any point in time, which challenges our ability to pull together a concise report, especially when there is a lot to unpack. One trick I learned during my days in government was to use graphics to represent timelines or to explain how a technology works, which saves a lot of space in these types of reports.
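One lightweight way to put AIMS into practice is to capture the four components as a pre-drafting checklist. The sketch below is purely illustrative; the structure and all field contents are hypothetical examples, not part of the AIMS methodology itself:

```python
from dataclasses import dataclass, field

# Illustrative sketch: capturing AIMS as a lightweight pre-drafting checklist.
# All field contents in the example instance below are hypothetical.

@dataclass
class AIMSPlan:
    audience: str       # who consumes the product and what they care about
    issue: str          # the intelligence question being addressed
    message: str        # the bottom line we want to land
    storyline: str      # where the narrative starts and how it unfolds
    assumptions: list = field(default_factory=list)  # gaps and caveats to flag

plan = AIMSPlan(
    audience="CISO preparing for next quarter's budget review",
    issue="Will technology X shift adversary capability beyond the margins?",
    message="Adoption is plausible within 18 months; current controls partially cover it.",
    storyline="Open with a one-paragraph primer, then link back to the prior product.",
    assumptions=["Vendor telemetry remains available", "No major regulatory change"],
)
print(plan.issue)
```

Filling out something like this before writing forces the author to confront the audience and bottom line first, rather than discovering them mid-draft.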
Whether analytic insights are provided in a short-form response or a longer-form narrative, a key element to include is how the subject relates to some advancement or innovation that has occurred before. Graphics are an excellent way to convey this type of information, whether on a timeline or in a compare-and-contrast graphic. Graphics give analysts the flexibility to create new mental models, challenge the presentation of an existing approach, or simply think creatively and out of the box. So how might we capture broadscale changes in technology platforms and service delivery, shifts in adversary tradecraft, advancements in cyber security, or perhaps even cyber policy decisions for our leadership team to think through?
One approach could be to create an ecosystem dichotomy outlining the roles and responsibilities of the various players based on the type of organization or the product or service it provides, akin to the table below. With this type of overview guide in place, we could add a layer on top of it, using a well-known industry schema like NIST’s Cybersecurity Framework (CSF) to illustrate specific examples of how these players have historically contributed. While there are other ways to approach the problem, in two graphics—albeit text-heavy graphics—we have created a foundational base of knowledge that we can build on for our analysis moving forward. The same could be done to show the usage of wipers over time, the evolution of adversary tradecraft, or the expansion of targets beyond a particular set of victims for a campaign, amongst others.
Figure 4: Illustrative Example of Roles
Figure 5: Layering Additional Context
Where Do I Go To Glean These Insights?
This is a bit of a chicken-or-the-egg problem. Developing strong critical thinking and problem solving ability is different from gaining a thorough understanding of the cyber threat landscape, which is different from proficiency in various cyber security technologies, laws, or policies. While a lot of this can be learned through self-study and research—and you will likely need to do some of this anyhow—one can shorten the time horizon for knowledge capture by seeking guidance, starting points, and other advice from mentors and other industry peers. Asking about their experiences or their take on a particular topic is another great way to garner their insights as field experts. Analysts, in particular, tend to enjoy pontificating about alternative realities or implications grounded in logic and their experiences.
Conclusion and Path Forward
While there is no magic-bullet solution for answering a question about emerging technologies and threat trends, this blog series hopefully provided practical guidance on how to frame research and present findings on the topic. In it, we covered concepts designed to improve critical thinking and strategic forecasting, examining multi-dimensional effects using systems analytic thinking and frameworks like DIMEFIL/PESTLE. A handful of SATs can help shape our thought process as we work through how we arrived at a particular event, what signposts or drivers got us there, and what indicators we would expect to see. We provided a series of questions that analysts may consider using when evaluating an emerging technology or trend to determine “should I care” and to what extent it shifts the status quo. We concluded by covering key considerations when conceiving and crafting the analytic assessment.
This blog series was designed as a primer to jumpstart analytic skills for junior analysts or those transitioning into CTI from another field. If you are looking for additional practice beyond the forecasting examples we covered here, I would encourage you to try to answer each of the questions about intent, capability, and opportunity for some of the other advancements listed in Figure 1. Thanks for taking the time to read this blog series.