When a development team first starts to take application security seriously, they'll end up with a list (probably a long one) of security bugs. It's useful to look at these bugs in a few different ways.
Design Flaws vs. Implementation Bugs
The first is to ask where each bug comes from: is it an architectural or design flaw, or a coding or implementation bug? As Gary McGraw explains in Software Security: Building Security In, architectural and design flaws are more fundamental and more expensive to change or fix, and they take training and help to understand. Coding and implementation bugs are easier to find through scanning and reviews; these bugs are smaller and easier to fix, but there tend to be a lot of them. Thinking of security bugs this way helps you understand where to focus your controls and your time in the SDLC, and where tools will help you and where they won't.
Focusing on High-Risk Security Bugs
Another important way to think about security bugs is to focus on the high-risk problems. This is what the OWASP Top 10 and the CWE/SANS Top 25 Most Dangerous Software Errors lists are about. These lists identify the most common serious security bugs for web apps (OWASP) and for all kinds of apps (CWE/SANS). They help you understand where you are most vulnerable and where to focus your testing and reviews. They also help you recognize when the threat landscape is changing (when the lists of most serious errors change) and build software security awareness with developers, testers and management: if you find security problems that fall into one (or both) of these lists, you know how serious those problems are.
Some teams start their appsec remediation programs by taking on these lists, finding and fixing every instance of one type of high-risk bug at a time. Finding and fixing all of the SQL injection vulnerabilities in a web app, for example, is a common way to start.
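For example, here is a minimal sketch of what that kind of fix looks like in Java with JDBC; the customers table and its columns are made up for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerLookup {

    // Vulnerable: user input is concatenated straight into the SQL string,
    // so input like "' OR '1'='1" changes the meaning of the query.
    ResultSet findCustomerUnsafe(Connection conn, String email) throws SQLException {
        String sql = "SELECT id, name FROM customers WHERE email = '" + email + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Fixed: a parameterized query keeps the input as data, never as SQL.
    ResultSet findCustomer(Connection conn, String email) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT id, name FROM customers WHERE email = ?");
        stmt.setString(1, email);
        return stmt.executeQuery();
    }
}
```

The fix is mostly mechanical, which is what makes working through one bug class at a time practical.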
Which security bugs are easy to find, and which aren't
If you depend a lot on tools, especially static analysis tools, to find security bugs, then you should consider the matrix that Jacob West and Alexander Hoole from HP Fortify presented at RSA 2012. They look at security bugs along two dimensions: whether a bug is generic or specific to the application, and whether it is explicit in the code or only implied by it:
<table border="1">
  <tr> <th> </th> <th>Explicit in Code</th> <th>Implied in Code</th> </tr>
  <tr> <th>Generic</th> <td>About 50% of security bugs; can be found by static analysis tools</td> <td>Can be found in pen testing or expert reviews</td> </tr>
  <tr> <th>Application-Specific</th> <td>Need to understand application patterns and requirements; custom rules and manual reviews</td> <td>Probably can't be found</td> </tr>
</table>
You're only going to find about half of your security bugs with out-of-the-box static analysis (the generic problems that are explicit in the code). To find most of the rest, you'll need pen testing, expert code and design reviews, and custom static analysis rules. Custom rules are expensive to write, because they require a deep understanding of your application and of the patterns and idioms that the team tried to follow, as well as a good understanding of application security problems and of how the tool works. And some problems (application-specific design bugs) probably won't be found in testing or reviews at all.
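To make the matrix concrete, here is a hypothetical Java snippet (all of the names are invented) contrasting the easiest quadrant with the hardest one:

```java
public class TransferService {

    // Generic, explicit in code: a hardcoded credential sits right in the
    // source, and out-of-the-box static analysis rules will flag it.
    private static final String DB_PASSWORD = "s3cret";

    // Application-specific, implied in code: nothing in this method says that
    // only the account owner may move funds. The missing authorization check
    // is a business rule that lives in the requirements, not in the code, so
    // no generic rule can know it should be here; only someone who knows the
    // application can spot its absence.
    public void transfer(String fromAccount, String toAccount, long amountCents) {
        // ... debit fromAccount, credit toAccount ...
    }
}
```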
Looking at security bugs from a software developer's point of view: soft and hard problems
Another way to think about security bugs is to ask whether they are soft problems or hard problems.
Some security problems are "soft" because they are part of standard software development. These include basic software quality issues, especially around defensive programming, and common mistakes that can be found by the IDE, by static analysis tools, or by another developer in a code review: buffer and integer overflows, string handling mistakes, bounds violations, leaving debugging code enabled, missing error handling and missing data validation. Or they are straightforward security requirements like access control and permissioning, auditing, and data privacy and confidentiality rules: requirements that are easy to understand, code and test, like any other part of the system.
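For example, here is a minimal sketch of a "soft" fix, plain defensive data validation at a trust boundary; the names and limits are invented:

```java
public final class OrderInput {

    private static final int MAX_QUANTITY = 1_000;

    // Ordinary defensive programming: reject bad input at the boundary
    // instead of letting it flow into business logic. Any code review or
    // static analysis tool can check for this kind of validation.
    public static int parseQuantity(String raw) {
        if (raw == null || raw.isBlank()) {
            throw new IllegalArgumentException("quantity is required");
        }
        final int quantity;
        try {
            quantity = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("quantity must be a number", e);
        }
        if (quantity < 1 || quantity > MAX_QUANTITY) {
            throw new IllegalArgumentException("quantity out of range");
        }
        return quantity;
    }
}
```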
Other problems are "hard" because they are hard for most developers to understand: geeky security stuff, wiring and plumbing work that developers probably aren't even aware of, so they can't fix these problems because they don't know that they need to. Things like crypto (keys and algorithm selection and signing and salts and how to manage secrets), secure session management and protocols, context-sensitive output encoding rules for XSS protection, and language-specific and platform-specific security quirks and hardening guidelines. These are the kinds of problems that need training and consulting help to understand, find and fix, or that should be taken care of by a good framework.
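Context-sensitive output encoding is a good illustration of why these problems are hard. Here is a sketch assuming the OWASP Java Encoder library (org.owasp.encoder) is on the classpath; the point is that the correct escaping depends on where in the page the value lands, which is exactly the kind of plumbing a vetted library should handle:

```java
import org.owasp.encoder.Encode;

public class ProfileView {

    // The same untrusted value needs different encoding in each output
    // context: HTML attribute, HTML body, and JavaScript string. Using one
    // encoder everywhere (or none) is the classic "hard" XSS mistake.
    public String render(String userName) {
        return "<div title=\"" + Encode.forHtmlAttribute(userName) + "\">"
                + Encode.forHtml(userName)
                + "</div>"
                + "<script>var name = '" + Encode.forJavaScript(userName) + "';</script>";
    }
}
```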
Understanding whether your security problems are soft or hard helps you decide where to spend your time and money: on improving the basic quality of your software development, on training and expert security help, or on both.