A while ago, I was asked to assist in responding to a security problem on a client's network. A major vulnerability had been reported in a website: the primary authentication and access control mechanism had failed. So severe was the vulnerability that not only could one user view another's PII, but complete authentication circumvention was itself trivial! I was tasked with assessing what impact, if any, had resulted from this exposure. This probably sounds familiar to many security analysts: a vulnerability was discovered; what compromise resulted from it?
These cases turn classic incident response on its head. We are trained, and often work, on issues where a compromise is discovered and analysis of that compromise reveals a vulnerability. Here, we have the opposite. One immediate difference is clear: when there is a compromise, some vulnerability was necessarily exploited. The result of a vulnerability investigation, however, is not so clear, and our normal incident response techniques fall apart. I'm coining a term for this: uncident response. Below I outline a few properties of these investigations and some rules of thumb for dealing with them.
- Property 1 (by definition): Uncident response is analysis that begins with a vulnerability and attempts to discover a compromise, guided by how that vulnerability would be exploited (a minimal sketch of this starting point follows the list).
- Property 2: It is impossible to prove that no compromise occurred. Such conclusions can only be made through inductive reasoning, which generalizes from a set of facts. Discovering a compromise can be conclusive because, in that case, one uses deductive reasoning to show that, based on the empirical evidence collected, it logically follows that the vulnerability was exploited. This duality relates to the notion of innocent until proven guilty in U.S. law: only with complete omniscience about the actions that took place and the causality behind them could one conclusively say nothing happened. That is rarely, if ever, the case. Put another way, absence of evidence is not evidence of absence.
- Property 3: Conclusions will always be conjecture or, at best, made with a margin of error. This follows from Property 2. Even if numerous compromises are discovered, one cannot say with complete certainty that no additional compromises occurred. The assessment will nearly always be subjective, grounded in the professional experience of the analyst.
- Property 4: The confidence of conclusions will be limited by the data and time available to the analyst. No logs = no conclusions. Few logs = poor confidence in conclusions. Etc...
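To make Property 1 a little more concrete, here is a minimal sketch of what the starting point of such an analysis can look like in a web application case: searching whatever access logs exist for requests that match the way the known vulnerability would be exploited. Everything specific in it is a hypothetical stand-in, not the actual vulnerability from the case above; the log format, the `/account/profile` endpoint, and the assumption that a 200 response means PII was served are all invented for illustration.

```python
#!/usr/bin/env python3
"""Hypothetical starting point for an "uncident" investigation:
given a known authentication-bypass vulnerability in a web app,
search the available access logs for requests that match the way
the vulnerability would be exploited. The endpoint, log format,
and exploit signature below are invented for illustration only.

Usage: python find_candidates.py /path/to/access.log
"""
import re
import sys

# Assumed exploit signature: the vulnerable endpoint requested directly,
# with a 200 response (i.e., the PII page was actually served).
EXPLOIT_PATTERN = re.compile(r'"GET /account/profile\?user_id=\d+[^"]*" 200')

def find_candidate_hits(log_path):
    """Yield (line_number, line) for log entries matching the signature."""
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if EXPLOIT_PATTERN.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    hits = list(find_candidate_hits(sys.argv[1]))
    print(f"{len(hits)} candidate exploit requests found")
    for lineno, line in hits[:20]:  # show a sample, not everything
        print(f"  line {lineno}: {line}")
```

Note that a hit here is only a candidate; deductive work remains to tie it to an unauthorized party. And an empty result proves nothing at all (Property 2).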
Considering these properties of "uncident response," and based on my own experience, I offer the following suggestions so you can provide quality analysis and keep yourself out of trouble:
- Set expectations. Make sure all parties involved completely understand the properties above.
- Limit scope. In my case above, the vulnerability was in a web application that exposed PII. The digital forensic investigation the client demanded was wasteful and unnecessary; it illustrated either a failure on their part to understand the issue or a failure on mine/ours to communicate it.
- Set reasonable time limits for analysis. The analysis could in some cases go on forever. At the same time, confidence in the analysis will be artificially limited if insufficient time is available. Try to predict the point of diminishing returns, and wrap up analysis at that point.
- Clearly distinguish between deductive and inductive conclusions, the observed and the interpreted.
- Properly qualify all conclusions. Use words like "believe" and explicitly state confidence levels (a simple way to ground such a statement is sketched after this list). Exercise extreme caution with words like "fact," "true," and "false"; limit them to deductive reasoning. Absolutely avoid absolutes.
- Refuse to provide conclusions if insufficient time or data is available to make statements you are comfortable with. This is sometimes very difficult to do, because those in charge want conclusions and will exert pressure. Professional integrity, ethics, and the best interests of those potentially victimized dictate that you stand firm on this point, however. Providing conclusions that are nothing more than guesses puts you at risk. Responsible parties need to learn that sometimes the only correct response is "I do not know."
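As one illustration of what "properly qualified" can mean in practice, a statement like "the retained logs cover only part of the exposure window, and we found no evidence of exploitation in what was retained" is far more useful than a bare "we found nothing." The sketch below computes such a coverage figure. The exposure window and log retention spans are entirely invented, and this is not the analysis from my case (that comes in the later entry); it only shows one way to put a number behind a hedged conclusion.

```python
from datetime import datetime, timedelta

# Hypothetical exposure window: from when the vulnerable code was
# deployed until the fix went live. Dates are invented for illustration.
exposure_start = datetime(2011, 1, 10)
exposure_end = datetime(2011, 3, 15)

# Hypothetical, non-overlapping spans actually covered by retained logs.
log_coverage = [
    (datetime(2011, 1, 10), datetime(2011, 2, 1)),
    (datetime(2011, 2, 20), datetime(2011, 3, 15)),
]

exposure = exposure_end - exposure_start
covered = sum((end - start for start, end in log_coverage), timedelta())

print(f"Exposure window: {exposure.days} days")
print(f"Log coverage:    {covered.days} days ({covered / exposure:.0%} of the window)")
```

Hard numbers about what data you did and did not have are among the few things in an uncident report that can be stated as fact.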
In a later entry I will discuss how I analyzed the limited data available to me in the case above and made some measured statements that the client was happy with. The story ended well: with high confidence, I was able to conclude that no widespread exploitation of the vulnerability had occurred.
Michael is a senior member of Lockheed Martin's Computer Incident Response Team. He has lectured for various audiences including SANS, IEEE, and the annual DC3 CyberCrime Convention, and teaches an introductory class on cryptography. His current work consists of security intelligence analysis and development of new tools and techniques for incident response. Michael holds a BS in computer engineering, an MS in computer science, has earned GCIA (#592) and GCFA (#711) gold certifications alongside various others, and is a professional member of ACM and IEEE.