Dan Cornell has over fifteen years of experience architecting and developing web-based software systems. As CTO and Principal, he leads Denim Group's security research team in investigating the application of secure coding and development techniques to improve web-based software development methodologies. Dan was the founding coordinator and chairman for the Java Users Group of San Antonio (JUGSA) and currently serves as the OWASP San Antonio chapter leader, member of the OWASP Global Membership Committee and co-lead of the OWASP Open Review Project. Dan has spoken at such international conferences as RSA, OWASP AppSec USA, and OWASP EU Research in Greece.
The cost of fixing software bugs has been studied for decades, with experts like Capers Jones collecting data from development and maintenance projects around the world. But until recently there has been very little data available on the cost of remediating security vulnerabilities. Denim Group is one of the few companies doing analysis in this area, collecting and analyzing data from security remediation projects.
1. Can you please explain how much and what kind of data you've collected so far, and how long you have been collecting it? What makes this statistical data unique?
We do a lot of software security remediation work where we actually go into applications and fix vulnerabilities. What we started doing was collecting the level-of-effort data for those projects in a structured manner so that we could start to look at how long it was taking our folks to fix different types of vulnerabilities as well as what the composition of vulnerability resolution projects looks like. The data we have released reflects the results of looking at 15 different remediated applications and spans about a year and a half. I don't know of any other analysis that has been done on this topic. The folks at Veracode and WhiteHat have released some really cool data about the prevalence of different classes of application vulnerabilities as well as the calendar time they persist, but I haven't seen anything released about the level of effort required to fix those vulnerabilities.
We see organizations go through lots of scanning activities and that leads to giant piles of vulnerabilities that they then have to fix. You can benchmark yourself against the data from WhiteHat and Veracode to see if you have more or fewer vulnerabilities than other organizations and that's great. But it doesn't help you answer your next question: "How am I actually going to fix this stuff and what is it going to cost me?" Our data can be really useful for security analysts and development team leads who find themselves in this situation because we provide a framework for these remediation projects and some numbers that can be used to start roughing out project plans.
2. Based on this data, what are the easiest security problems to fix — the low-hanging fruit that people should go after first?
Based on our data, the three easiest types of issues to fix are: removing unused methods, fixing logging that goes to the system output stream, and addressing empty catch blocks. But let's look at this a little differently. Just because something is easy to fix doesn't mean you necessarily want to fix it first, or even fix it at all. You need to understand the risk associated with these vulnerabilities, as well as the budget and other resources available for remediation. Once you understand your constraints, you can start to ask, "How do I best apply these limited resources?" Our data lets teams run different scenarios and put together potential project plans.
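To make that concrete, two of those three fixes are mechanical enough to sketch in a few lines. The Java example below (class, method, and exception names are invented for illustration) shows the remediated patterns: console logging rerouted through java.util.logging instead of the system output stream, and an empty catch block replaced with logging and a visible failure.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    public void process(String orderId) {
        // Before: System.out.println("Processing " + orderId);
        // After: route diagnostics through a configurable logger instead of
        // the system output stream.
        LOG.info("Processing order " + orderId);

        try {
            submit(orderId);
        } catch (SubmissionException e) {
            // Before: catch (SubmissionException e) { } -- the failure was
            // silently swallowed. After: record the exception and fail visibly.
            LOG.log(Level.SEVERE, "Submission failed for order " + orderId, e);
            throw new IllegalStateException("Order submission failed", e);
        }
    }

    private void submit(String orderId) throws SubmissionException {
        // Placeholder for the real backend call.
    }

    /** Hypothetical checked exception, included so the sketch compiles. */
    static class SubmissionException extends Exception {}
}
```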
3. On the opposite extreme, what are the hardest problems to fix, the problems that have the most risk or cost the most to fix, and why?
Based on our data, the three issue types that took the longest to fix were SQL injection, reflected cross-site scripting and redundant null checks. But again you have to be careful how you look at this data. One of the things we were really surprised to see was the amount of time it took to address SQL injection issues - by far the longest of any vulnerability type, at 97.5 minutes per vulnerability. When we drilled into the projects a bit deeper, we found that the sample included some particularly nasty SQL injection issues. So that SQL injection number was far higher than it would have been with a larger sample of fixed vulnerabilities that more closely reflected the "average" SQL injection vulnerability. Also, just because something is "easy" or "hard" doesn't mean you should automatically fix it or leave it alone. You need to gauge the risk the vulnerability exposes your organization to and maximize the impact of your remediation investment.
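For context on what the straightforward end of SQL injection remediation looks like, the textbook fix is a parameterized query; the expensive cases described above are typically ones where the SQL is assembled dynamically and can't be converted this mechanically. A minimal JDBC sketch, with table and method names hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserDao {
    // Vulnerable pattern (string concatenation puts user input into the SQL):
    //   String sql = "SELECT 1 FROM users WHERE name = '" + name + "'";
    // Remediated pattern: bind the input as a parameter so the JDBC driver
    // treats it strictly as data, never as executable SQL.
    public boolean userExists(Connection conn, String name) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```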
4. Are you seeing any changes in this data over time? Are some problems getting simpler and easier to fix — or harder?
We haven't actually tracked this data over time, but we did track the composition of these remediation projects. People think about fixing vulnerabilities and expect it to be "easy," but in these projects we were fixing multiple instances of several different classes of vulnerabilities. These were software maintenance projects. And in a software maintenance project, you don't just spend "keyboard time" making changes - you also have to set up your development environments, verify that your security fix was successful and that the application still works, and deploy the remediated code into production. There's also the overhead and general friction that go along with any project in a large enterprise.
We looked at what percentage of each project was spent in each phase, and what we found was that the time spent actually fixing vulnerabilities ranged from 59% in the "best" case all the way down to 15% in the worst. I think that data is potentially more valuable than the vulnerability-specific data because it starts to provide insight into what you need to do to make the process of fixing vulnerabilities less expensive. One project spent 31% of its time just setting up the development environment. This was an end-of-lifed application everyone was scared to touch, but the organization had identified some vulnerabilities it could not live with. If the organization had maintained a proper run book or other deployment documentation, it would have been far cheaper to fix the identified vulnerabilities. Looking at the composition of the different projects shows how general development best practices like automated testing and scripted deployment reduce the cost of fixing vulnerabilities.
We see teams using the data in two ways - the vulnerability-specific data can be used to construct bottom-up project estimates for remediation projects and the project composition data can be used to sanity-check the final project plans to make sure that sufficient time has been included to accommodate all of the ancillary tasks that are absolutely necessary, but that teams tend to forget about when they're planning these projects.
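As a rough illustration of that two-step use of the data, the sketch below combines both: the 97.5-minute SQL injection figure and the 15%-59% fix-time range come from the data discussed above, while the vulnerability count is an invented input.

```java
public class RemediationEstimate {
    public static void main(String[] args) {
        // Bottom-up estimate: per-vulnerability fix time times the number of
        // findings. The 97.5-minute figure is from the data discussed above;
        // the count of 12 findings is hypothetical.
        int sqlInjectionFindings = 12;
        double minutesPerFix = 97.5;
        double keyboardMinutes = sqlInjectionFindings * minutesPerFix;

        // Sanity check: hands-on fixing was only 15%-59% of total project
        // time in the observed projects, so scale the keyboard time up to
        // account for environment setup, verification, and deployment.
        double bestCaseShare = 0.59;
        double worstCaseShare = 0.15;

        System.out.printf("Keyboard time: %.1f hours%n", keyboardMinutes / 60);
        System.out.printf("Total project (best case): %.1f hours%n",
                keyboardMinutes / bestCaseShare / 60);
        System.out.printf("Total project (worst case): %.1f hours%n",
                keyboardMinutes / worstCaseShare / 60);
    }
}
```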
My RSA presentation with the results of this research can be found online here: