A serious problem in many organizations is that the relationship between security and development is marred by blame, mistrust, evasion and a lack of understanding. One result is that development teams (and their business sponsors) don't take ownership of understanding and managing software security risks, and often try to ignore vulnerabilities or hide them.
Catch-22
Outside of high-assurance and some highly-regulated environments, security usually isn't an important requirement in building a system. Developers and their business sponsors are more focused on getting the system to work, and getting people to use it. They are driven by feature-set, time-to-market, usability, performance and cost. If the system gets delivered and enough people use it, then the business may have to take security concerns seriously because the risk profile has changed — the system, and the business that is relying on it, have now become a potentially valuable target.
But now it's too late to do things right — now it's going to cost more and take longer to figure out what's wrong and what needs to be fixed. You have to bring in experts to pen test and review the system, get tools to scan and test the code, go back through a lot of code to find and understand security vulnerabilities, and then spend time and money fixing code that is already working fine. Which means taking time and money away from something else, which doesn't make anyone happy. Without real pressure from the top, it's hard to convince developers and management that dealing with security vulnerabilities is a priority, because vulnerabilities aren't requirements or real problems — they are potential problems and risks that can be put off until later.
Vulnerabilities are bugs
This is wrong. Vulnerabilities found in pen testing and reviews and scans are either bugs — real problems in the code that should be fixed — or they are noise — false positives or motherhood that can be ignored. Treating vulnerabilities as something separate and managing them in a different way is a mistake. They should be managed and prioritized the same as other bugs, tracked in the same bug tracking system, and reviewed, fixed and tested like any other bug.
Development teams don't ignore bugs — at least good teams don't. They can't pretend that bugs don't exist and that they don't have to fix them. Fixing bugs is just part of their job. Whether a team is following Agile practices like XP (TDD, pairing, Continuous Integration and automated testing) or more heavyweight practices like TSP/PSP or CMMI, they will have ways to take care of bugs. Even teams taking an ad hoc approach to quick-and-dirty development or maintenance and support understand the importance of dealing responsibly with bugs.
Dealing with Bugs
The first step in dealing with a bug is agreeing that a bug is actually a bug. That it's not a misunderstanding or a configuration error or some other mistake — like a false positive finding by a security scanner. Then you need to look at the cost and risk to the business of not fixing the bug, and weigh it against the cost and technical risk of fixing it. I've talked about this before in the context of Zero Bug Tolerance but let's look at this from the perspective of fixing a security vulnerability.
The business cost and risk are measured by Severity and Frequency, of course:
Severity — if and when it happens, what is the impact? Is it visible to a large number of customers? Could the business lose data, or lose service to important customers or partners? What are the downstream effects: what other systems or partners could be impacted, and how quickly could the problem be contained or repaired? Could it violate regulatory or compliance requirements resulting in fines or penalties? Could it significantly damage the company's brand or relationships with partners or customers?
Frequency — how often could this happen in production?
Then there are the technical costs and risks to the development team for fixing the bug:
Cost — how much work is required to understand, reproduce and fix the bug, review the fix, test it (including regression testing) and roll it out? And what about Root Cause Analysis: how much work should we do to dig deep, find related problems and try to address the root cause?
Risk — what is the technical risk of making things worse by trying to fix it? How well do I understand the code? How complex is it? How much do I have to refactor it just to understand what the heck is going on? What is the history of changes and problems with this code — am I working in an error-prone part of the system? If I make a mistake, what could fail — is this a highly-used customer feature or API, or a core piece of plumbing? Is the code protected by a good set of tests to catch regressions? How well do I understand the fix? Am I confident that I know what to fix and how to test it? Can I back the fix out quickly from production if I make a mistake? Or do I have to change the software at all — can I get by with an operational workaround or maybe virtual patching with a WAF?
The decisions that need to be made are the same, whether the bug is a reliability problem or a functional defect or a security vulnerability. If a bug poses real risks to the business, and the costs and risks of fixing it aren't excessive, then it should be fixed.
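To make that trade-off concrete, here is a minimal sketch of the decision in Python. The 1-5 scales, the simple multiplication and the comparison are assumptions made up for illustration, not a standard methodology; the point is only that business risk and fix cost/risk can be weighed the same way for any kind of bug.

```python
# A rough sketch of the fix/no-fix trade-off described above. The 1-5 scales
# and the comparison are illustrative assumptions, not a standard methodology.

def business_risk(severity: int, frequency: int) -> int:
    """Impact if it happens (1-5) times how often it could happen in production (1-5)."""
    return severity * frequency

def fix_burden(cost: int, technical_risk: int) -> int:
    """Work to understand, fix, test and roll out (1-5), plus the technical
    risk of making things worse (1-5)."""
    return cost + technical_risk

def should_fix_now(severity: int, frequency: int, cost: int, technical_risk: int) -> bool:
    # Fix it if the risk to the business clearly outweighs the cost and risk
    # of making the change; the same test whether it's a functional defect,
    # a reliability problem or a security vulnerability.
    return business_risk(severity, frequency) >= fix_burden(cost, technical_risk)

# A high-impact, easy-to-hit vulnerability in well-understood, well-tested code:
print(should_fix_now(severity=5, frequency=4, cost=2, technical_risk=1))  # True
```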
Not all bugs are worth fixing, or are going to get fixed
But that doesn't mean that every bug — or every security vulnerability — will get fixed or can be fixed. For some bugs, the decision is dead easy: simple, stupid mistakes should be, and can be, fixed right away. Other bugs are bigger and scarier and more expensive, but they have a high payback: fixing them will make a lot of people happy, and it's worth the technical cost and risk.
There are other bugs that are hard to understand and hard to reproduce, which means that they may never be fixed — heisenbugs and ghosts and timing problems and other intermittent bugs that disappear when you try to test for them, so you may never be sure what the problem is or whether you have fixed it. Then there are bugs that are too expensive or too risky to fix — at least for now: fundamental architecture-breakers that can't be fixed without going back and reworking the design or changing out an underlying technology.
Ruthless prioritization
Security is just another stakeholder in software development and maintenance, like the customers of the system, marketing, compliance, Ops, QA and the development team itself. All of these stakeholders have things that they need done or want done to the system, including getting bugs fixed. They all compete for the same time and money, trading off time-to-market, business ROI, business risk, technical risk and cost. They all have to make a case for what's important and what's worth doing.
The problem with a lot of security findings is that it is difficult to separate the real risks, the high-severity bugs, from false positives and theoretical problems and motherhood. Static analysis scanners and fuzzers and black box scanners can all find real problems, but they also throw up a lot of other warnings, and you need to sift the real problems out of this noise. Bugs found by good pen testers are easier to explain and easier to make a case for, because they are real and reproducible — which is one of the reasons that pen testing is so effective, even if it is inefficient and expensive.
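As a sketch of what that triage can look like when the findings come from a tool, the snippet below filters a static analysis report in SARIF format down to the findings worth turning into bugs. The suppression list, the rule ID and the severity cut-off are made-up examples rather than the output of any particular scanner.

```python
# Sketch: separating high-severity scanner findings from the noise.
# Assumes a SARIF report from a static analysis tool; the suppression list
# of previously-reviewed false positives is a made-up example.
import json

KNOWN_FALSE_POSITIVES = {"rule/hardcoded-password:tests/fixtures.py"}

def triage(sarif_path: str, min_level: str = "error") -> list[dict]:
    with open(sarif_path) as f:
        report = json.load(f)

    real_problems = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            rule = result.get("ruleId", "")
            level = result.get("level", "warning")
            locations = result.get("locations", [])
            if not locations:
                continue  # nothing to point a developer at
            loc = locations[0]["physicalLocation"]
            path = loc["artifactLocation"]["uri"]

            if f"{rule}:{path}" in KNOWN_FALSE_POSITIVES:
                continue  # already reviewed, not a real problem
            if level != min_level:
                continue  # park lower-severity warnings for later review
            real_problems.append({
                "rule": rule,
                "file": path,
                "line": loc["region"]["startLine"],
                "message": result["message"]["text"],
            })
    return real_problems
```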
Even then, developers, program managers, product managers, sponsors and other decision makers have a hard time understanding security risks and vulnerabilities. They need to be convinced that the problem is real and the risk is high enough to take seriously. Bugs like race conditions and deadlocks, crashes, logic flaws, lousy password management and holes in authorization are all easy for programmers to understand and easy to make a case for. Other security problems like XSS, injection attacks or crypto mistakes may take more explanation and education: it can be hard for programmers and managers to understand that what look like small mistakes can be so damaging, so it's easy for some of these problems to be ignored.
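Injection is a good example of how small the mistake can look. A minimal sketch, using Python and sqlite3 purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name: str):
    # Looks harmless, but the input is pasted straight into the SQL.
    # name = "x' OR '1'='1" returns every row; worse payloads can do far more.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # The fix is a one-line change: let the driver bind the parameter.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # leaks the whole table
print(find_user_safe("x' OR '1'='1"))    # returns nothing
```

The vulnerable version and the fixed version differ by one line, which is exactly why these bugs are so easy to write, so easy to miss in review, and why the damage they can do takes some explaining.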
You need to make it clear which bugs are high priority and need attention. You can use Microsoft's DREAD risk assessment model or OWASP's risk rating methodology to prioritize security bugs based on damage and exploitability and other factors, but in the end you need to align your priorities with the same scheme that the development team uses for managing all of their other bugs. If a Severity 1 bug means that the system is down, then you have to be careful assigning Severity 1 to a security vulnerability. Be ruthless when it comes to prioritizing vulnerabilities. Because if you aren't ruthless about it, somebody else will be.
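As a sketch of what that alignment could look like, the snippet below reduces a DREAD-style rating to the severity levels the bug tracker already uses. The 1-10 scale and the cut-offs are assumptions for illustration; substitute whatever scheme your team already prioritizes bugs with.

```python
# Sketch: map a DREAD-style risk rating onto the team's existing bug severities.
# DREAD factors: Damage, Reproducibility, Exploitability, Affected users,
# Discoverability. The 1-10 scale and the cut-offs below are illustrative.

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    return sum(factors) / len(factors)

def bug_severity(score: float) -> str:
    if score >= 8:
        return "Severity 2: fix in the next release"   # reserve Sev 1 for "system down"
    if score >= 5:
        return "Severity 3: schedule and fix"
    return "Severity 4: fix if convenient, or accept the risk"

score = dread_score(damage=9, reproducibility=8, exploitability=7,
                    affected_users=9, discoverability=6)
print(score, bug_severity(score))  # 7.8 Severity 3: schedule and fix
```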
Making your bugs fixable
Testers, support staff, customers and security professionals all have a common problem — getting developers to take bug reports seriously, regardless of the type of bug. If you want a bug fixed, then take the time to package up the bug report properly and put it in language that developers understand. In The Art of Collecting Bug Reports from Making Software, Rahul Premraj and Thomas Zimmermann surveyed programmers and analyzed 150,000 bug reports in major Open Source projects to determine why some bugs get fixed and some don't, and which bugs get fixed faster. They found that good bug reports all include (a worked example follows the list):
- A clear explanation of why the bug is a bug — what the problem is, what the submitter saw happen vs what they expected to see happen.
- Detailed steps to reproduce the problem.
- Detailed error data or stack trace or maybe even a code sample — whatever you can provide to help the developer to understand where the problem happened and how to fix it.
- And not surprisingly, bug reports that were easy to understand and well-written got fixed faster.
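Put together, a well-packaged security bug report can be filed into the same tracker as every other bug. A minimal sketch, using the GitHub Issues API as one example of a tracker; the repository, token, labels and finding details are all made up:

```python
# Sketch: file a security finding as an ordinary bug, with the three things
# that get bugs fixed: what's wrong, how to reproduce it, and the evidence.
# The repository, token and finding details are made-up examples.
import requests

report = {
    "title": "Reflected XSS in /search (q parameter not encoded)",
    "body": "\n".join([
        "**What happens vs what should happen**",
        "The value of `q` is echoed into the results page without HTML encoding,",
        "so script supplied by an attacker runs in the victim's browser.",
        "Expected: the parameter is treated as plain text.",
        "",
        "**Steps to reproduce**",
        "1. Log in as any user.",
        "2. Open /search?q=<script>alert(document.cookie)</script>",
        "3. The script executes and shows the session cookie.",
        "",
        "**Evidence**",
        "Response snippet: <p>Results for <script>alert(document.cookie)</script></p>",
    ]),
    "labels": ["bug", "security", "severity-2"],
}

requests.post(
    "https://api.github.com/repos/example-org/example-app/issues",
    headers={"Authorization": "Bearer <token>"},
    json=report,
    timeout=10,
)
```

The developer who picks this up gets the same things they would expect from any other bug: what went wrong, how to see it happen, and where to start looking.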
Security vulnerabilities shouldn't be fixed just because they are security vulnerabilities, and they shouldn't be ignored just because they are security vulnerabilities. They should be fixed because they are high-risk and important bugs. Making the right case for security bugs will help get security problems fixed, and help bring security and developers closer together.