In Software Security: Building Security In, Cigital's Gary McGraw breaks software security problems down into roughly equal halves. One half are security design flaws: missing authorization, doing encryption wrong (or not using encryption at all when you are supposed to), not handling passwords properly, not auditing the right data, relying on client-side instead of server-side data validation, not managing sessions safely, not protecting against SQL injection, and so on. These are problems that require training and experience to understand and solve properly.
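To make one of these concrete: SQL injection usually comes down to building SQL statements out of raw user input instead of using parameterized queries. Here is a minimal Java sketch (the accounts table, column names and method names are invented for the example) showing the unsafe pattern next to the safer one:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // Vulnerable: concatenating user input straight into the SQL string lets
    // an attacker supply their own SQL (e.g. "' OR '1'='1") and change the query.
    ResultSet findAccountUnsafe(Connection db, String userId) throws SQLException {
        String sql = "SELECT id, balance FROM accounts WHERE owner = '" + userId + "'";
        return db.createStatement().executeQuery(sql);
    }

    // Safer: a parameterized query keeps the user input as data;
    // it can never become part of the SQL statement itself.
    ResultSet findAccount(Connection db, String userId) throws SQLException {
        PreparedStatement stmt =
                db.prepareStatement("SELECT id, balance FROM accounts WHERE owner = ?");
        stmt.setString(1, userId);
        return stmt.executeQuery();
    }
}
```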
The other half are security coding defects — basic mistakes in coding that attackers find ways to exploit. Focusing on preventing, finding and fixing these mistakes is a good place to start a software security program. It's something that developers and testers understand and something that they can take ownership of right away.
There is a tie-in between good code and secure code
Although high-quality code is not necessarily secure, you can't write secure code that isn't good code. There is a direct tie-in between the security of a system and the basic quality of its code:
"It has been shown that investments in software quality will reduce the incidence of computer security problems, regardless of whether security was a target of the quality program or not." - Ross Anderson, Security Engineering
This tie-in can be seen by looking at the MITRE Corporation's Common Weakness Enumeration (CWE), a comprehensive list of common software weaknesses found in the wild. Too many of these problems are caused by basic mistakes in coding: buffer overflows (still), integer overflows and other mistakes in arithmetic, string handling mistakes, mistakes in error handling and concurrency, resource leaks, and debugging code left turned on. These are mistakes that lead to major security and reliability problems.
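As a small illustration of how ordinary these mistakes are, here is a Java sketch of an integer overflow in a size calculation (the method names and numbers are made up for the example). The unchecked version silently wraps around; the checked version fails loudly instead:

```java
public class AllocationSize {

    // Buggy: with 32-bit int arithmetic, count * recordSize can silently wrap
    // around to a small (or negative) number, so the caller ends up working
    // with a much smaller size than it thinks it asked for.
    static int totalBytesUnsafe(int count, int recordSize) {
        return count * recordSize;
    }

    // Safer: Math.multiplyExact throws ArithmeticException on overflow instead
    // of silently wrapping, turning a latent security bug into an immediate error.
    static int totalBytes(int count, int recordSize) {
        return Math.multiplyExact(count, recordSize);
    }

    public static void main(String[] args) {
        System.out.println(totalBytesUnsafe(1 << 20, 4096)); // prints 0: silent wrap-around
        try {
            totalBytes(1 << 20, 4096);
        } catch (ArithmeticException e) {
            System.out.println("overflow caught: " + e.getMessage());
        }
    }
}
```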
Like a lot of software development, this isn't rocket science. There are only a few key things to stay focused on.
Program Defensively
Defensive Programming is programming carefully, thoughtfully and realistically — what the Pragmatic Programmer calls "Pragmatic Paranoia". Don't trust other code, including third-party libraries and the operating system, and never ever trust a human user. The basic rules of careful, defensive programming are listed below, with a short code sketch after the list:
- Check all input, including type, length and allowable values. Establish "safe zones" or "trust zones" in the code — everything on the inside is safe, as long as the code at the facing edge of the zone is checking for bad data. Deciding where the edge of the trust zones should be is a design problem, but most of the work is coding.
- Use a single, standard error-handling routine that is known to work.
- Use assertions and exception handlers for situations that "can't happen".
- Log enough information to understand what the heck is going on when things go bad.
- And for languages like C and C++, make only safe function calls.
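Here is a rough sketch of what a few of these rules look like in code. It is illustrative only: the validator class, the field being checked, the limits and the log messages are all made up for the example.

```java
import java.util.logging.Logger;
import java.util.regex.Pattern;

// A hypothetical "edge of the trust zone": everything past this class can
// assume the order quantity is a sane, validated value.
public class OrderRequestValidator {

    private static final Logger LOG = Logger.getLogger(OrderRequestValidator.class.getName());
    private static final Pattern DIGITS_ONLY = Pattern.compile("\\d{1,6}");
    private static final int MAX_QUANTITY = 10_000;

    // Check type, length and allowable values before anything inside
    // the trust zone sees the data.
    public int parseQuantity(String rawInput) {
        if (rawInput == null || !DIGITS_ONLY.matcher(rawInput).matches()) {
            // Log enough to understand what went wrong, without echoing untrusted input.
            LOG.warning("Rejected order quantity: not a 1-6 digit number");
            throw new IllegalArgumentException("quantity must be a positive number");
        }
        int quantity = Integer.parseInt(rawInput);
        if (quantity == 0 || quantity > MAX_QUANTITY) {
            LOG.warning("Rejected order quantity out of range: " + quantity);
            throw new IllegalArgumentException("quantity out of allowed range");
        }
        // "Can't happen" here: the regex has already ruled out negatives.
        assert quantity > 0 : "validated quantity should always be positive";
        return quantity;
    }
}
```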
Code reviews, pairing and static analysis
Use code reviews or pair programming to check for basic reliability problems and security issues (as you learn more about what mistakes to look out for) as well as ensuring that the code does what it is supposed to do. Lean on your IDE and your compilers — check all warnings and clean them up. Use static analysis tools to find security coding bugs and other coding mistakes and to highlight problem areas in the code, such as methods with high complexity. The more complex the code is, the more difficult it is to understand, the harder it is to fix or change it without missing something or making a mistake, and the harder it is to test. Highly complex code has more bugs and is more vulnerable to security attack. This is where you should focus your refactoring work and testing.
Adversarial Testing
Cigital's BSIMM software security maturity model is built on data collected from real-world software security programs in different companies: what real companies are doing today to build secure software. Their research shows that most companies start with a small set of common practices. One of these is to get testers to "go beyond functional testing to perform basic adversarial tests." Start with testing edge cases and boundary conditions. Then move on to destructive testing as described in books like How to Break Software and How to Break Software Security, using techniques like fuzzing and stress testing to push the code, try to break it, and see what happens when it does break. And finally, application pen testing: putting the system under security attack.
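A cheap way to start is a dumb random-input loop around a single parsing or validation routine. The sketch below reuses the hypothetical OrderRequestValidator from the defensive programming example above; the point is simply that bad input should only ever fail in the controlled, expected way, and anything else is a bug worth filing:

```java
import java.util.Random;

// A crude fuzz loop: throw random junk at a parser and make sure the only
// failures are the controlled, expected ones (here, IllegalArgumentException).
public class QuantityFuzzTest {

    public static void main(String[] args) {
        Random random = new Random(42); // fixed seed so any failure is reproducible
        OrderRequestValidator validator = new OrderRequestValidator();

        for (int i = 0; i < 100_000; i++) {
            String input = randomString(random);
            try {
                validator.parseQuantity(input);
            } catch (IllegalArgumentException expected) {
                // rejected cleanly: this is the behaviour we want for bad input
            } catch (RuntimeException unexpected) {
                System.err.println("Unexpected failure on input [" + input + "]: " + unexpected);
                throw unexpected;
            }
        }
        System.out.println("Fuzz run finished without unexpected failures");
    }

    private static String randomString(Random random) {
        int length = random.nextInt(20);
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append((char) random.nextInt(0x80)); // printable and control ASCII junk
        }
        return sb.toString();
    }
}
```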
Commit to fixing security and reliability problems
There's no point in finding problems through code reviews, static analysis and adversarial testing if the team doesn't commit to fixing them. And not just fixing them and moving on, but taking some time to learn from these mistakes, understanding what they missed and why, and finding ways to improve how they design, develop and test software to prevent more problems like this in the future.
This might require a kind of cultural change — getting developers (and managers, and the customer who is paying for the work) to understand that some bugs, even small ones — especially non-functional bugs that are invisible to the customer — can have serious consequences, and that they need to be taken as seriously as, or even more seriously than, major functional bugs.
This isn't just Zero Bug Tolerance. Nothing in the real world is that straightforward, that black-and-white. You need to decide what bugs can be fixed and should be fixed by trading off cost and risk, just like everything else. But getting the team and management and your customer to understand the risks, and to take responsibility for fixing these problems and trying to prevent them, will go a long way to improving the reliability and resilience and quality of your code. And it will take you halfway to solving your software security problems, while you learn how to deal with the other half.