Late last year SANS conducted a survey on application security practices in enterprises. One of the questions asked in the survey was how often organizations are doing security testing. The responses were:
- No security testing policy for critical apps: 13.5%
- Only when applications are updated, patched or changed: 21.3%
- Annually: 14.3%
- Every 3 months: 18.0%
- Once a month: 9.5%
- Ongoing: 23.3%
What was most interesting to me is that almost ¼ of organizations are doing security testing on an ongoing, near-continuous basis — testing applications as they are being developed or changed.
The only way to test this frequently, and the only effective way to scale security testing in large enterprises with thousands of applications and hundreds of web sites, is to rely heavily on automation. That includes automated functional testing of security features (authentication, access control, password management, activity auditing, all of which can be unit tested like any other code), and Dynamic Application Security Testing (DAST) and/or Static Application Security Testing (SAST) tools.
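To make the unit-testing point concrete, here is a minimal sketch (JUnit 5, Java) of what a test for an access control rule might look like. `AccessPolicy` and `Role` are hypothetical stand-ins for whatever authorization component an application actually uses; the point is that positive, negative and fail-closed cases can all be checked automatically on every build.

```java
// A minimal sketch of unit-testing access control rules with JUnit 5.
// AccessPolicy and Role are hypothetical stand-ins for whatever
// authorization component the application actually uses.
import static org.junit.jupiter.api.Assertions.*;

import java.util.Map;
import java.util.Set;
import org.junit.jupiter.api.Test;

class AccessPolicyTest {

    enum Role { USER, ADMIN }

    // Toy policy: each role maps to the set of actions it may perform;
    // anything not listed is denied (fail closed).
    static class AccessPolicy {
        private final Map<Role, Set<String>> grants = Map.of(
                Role.USER,  Set.of("account/view"),
                Role.ADMIN, Set.of("account/view", "admin/deleteAccount"));

        boolean isAllowed(Role role, String action) {
            return grants.getOrDefault(role, Set.of()).contains(action);
        }
    }

    private final AccessPolicy policy = new AccessPolicy();

    @Test
    void ordinaryUserCannotReachAdminFunctions() {
        // Negative test: a regular user must be denied admin actions
        assertFalse(policy.isAllowed(Role.USER, "admin/deleteAccount"));
    }

    @Test
    void adminCanReachAdminFunctions() {
        // Positive test: the admin role is explicitly granted the action
        assertTrue(policy.isAllowed(Role.ADMIN, "admin/deleteAccount"));
    }

    @Test
    void unknownActionsAreDeniedByDefault() {
        // Fail closed: anything not explicitly granted should be refused
        assertFalse(policy.isAllowed(Role.ADMIN, "no/such/action"));
    }
}
```

Tests like these run in the same suite, and at the same speed, as the rest of the team's unit tests.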
Frequent, automated security testing is made easier through on-demand, testing-as-a-service offerings in the Cloud. These services, available from companies like NT OBJECTives, Qualys, Veracode, WhiteHat, as well as HP, IBM and others, are attractive to mid-size companies and even enterprises that don't have the skilled people and testing infrastructure to do the work themselves.
There's still an important place for manual security testing, including penetration testing. Even with proper setup and customization, dynamic testing tools won't find the kinds of problems that a good pen tester will, but automated testing will find important security problems quickly. Deeper and more expensive manual testing can then be done on a risk basis, focusing on critical apps and critical issues, which means that companies can get more value out of both kinds of testing.
Finding security bugs faster means that they will get fixed
For most organizations the challenge isn't finding all of the vulnerabilities, it's getting the ones that you already know about fixed and preventing them from happening in the future.
Continuous, or at least frequent, automated testing can make it easier for developers to understand and fix security bugs, by providing developers with fast feedback on their work. Fixing a bug in code that you just wrote is a no-brainer. It's much cheaper and easier to understand and fix this kind of bug than it is to figure out a bug in code that you wrote a month ago or somebody else wrote a year ago. Nobody needs to make a business case for opening up the code and figuring out what needs to be fixed. There's less chance of breaking something and introducing a regression if you're working on code that you know well. And with fast feedback, you can get confirmation quickly that you made the right change and fixed the problem properly, so that you can move on.
This means that security vulnerabilities are more likely to get fixed, and security vulnerability windows shortened. But even more important over the long term, if developers can get this kind of continuous feedback, if they can see the results of their work quickly, they will learn more and learn faster. If you write some code and find out quickly that there is a bug, it's easier to understand what kind of code you should be writing. You will be less likely to make the same mistakes going forward, which means that your code will be more secure. It's a self-reinforcing cycle, and a good one.
If automated security testing can be built in from the beginning, as software is being developed, then the returns can be significant. Developers will learn quickly and start writing more secure software naturally, especially if they are using Agile methods, building and delivering software continuously. Automated testing is what these teams already depend on, and it's the only way for appsec to keep up with the rapid pace of change in Agile development.
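As one hedged example of building security checks in from the start, the sketch below shows a security regression test that could run in every CI build: it fetches a page and fails the build if basic security headers are missing. The staging URL is a placeholder, and the specific headers are just an illustration of the kind of property worth locking down early and checking continuously.

```java
// A minimal sketch of a security regression test that could run in every CI
// build. The target URL is a placeholder; the header checks are one example
// of a security property that can be verified automatically on each build.
import static org.junit.jupiter.api.Assertions.*;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class SecurityHeadersTest {

    private static final String BASE_URL = "https://staging.example.com/"; // placeholder

    @Test
    void responsesCarryBasicSecurityHeaders() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL)).GET().build();
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());

        // Fail the build if basic browser protections are missing or weakened.
        assertEquals("nosniff",
                response.headers().firstValue("X-Content-Type-Options").orElse(""));
        assertTrue(response.headers().firstValue("Strict-Transport-Security").isPresent(),
                "HSTS header should be set on every response");
    }
}
```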
The challenges for automated security testing vendors
For this to really work, automated testing vendors have to offer solutions that:
- keep up with changing technology: RIA, HTML5, new protocols, Cloud and Mobile apps (for DAST) and new languages and frameworks (for SAST)
- provide high-quality feedback: low false positives, and clear context on what needs to be fixed and how severe the problem is
- integrate nicely into existing development workflows, into the testing platforms and bug reporting platforms that developers already use
- run fast, really fast. Static analysis can be added easily to Continuous Integration, which means that developers can get feedback the same day or the next day, but this isn't fast enough. What is really needed is an automated testing pipeline: incremental testing that gives immediate feedback on what a developer just changed, followed by less frequent but more comprehensive scanning (a rough sketch of this approach follows this list). Grammatech (a spin-off from Cornell), for example, does incremental static analysis, and some static analysis vendors such as Coverity and Klocwork are trying to provide close-to-immediate feedback directly in the developer's IDE as code is written. Dynamic testing takes more work to set up and is more expensive and time-consuming to run, but if it can be targeted and run incrementally based on what developers last changed, it can radically reduce the time and cost of fixing bugs.
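The sketch below illustrates the incremental-first idea under some loud assumptions: the `sast-scan` command line is hypothetical (substitute whatever SAST tool is in use), and the build is assumed to run from a git working copy. It scans only the source files touched by the most recent commit so developers get feedback in seconds, leaving the comprehensive scan to a nightly job.

```java
// A rough sketch of "incremental scan first, comprehensive scan later".
// The `sast-scan` command is hypothetical; any SAST tool that accepts a
// file list could be substituted.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class IncrementalScan {

    public static void main(String[] args) throws Exception {
        // Ask git which files were touched by the most recent commit.
        List<String> changed = run(List.of("git", "diff", "--name-only", "HEAD~1", "HEAD"));
        List<String> javaFiles = changed.stream().filter(f -> f.endsWith(".java")).toList();

        if (javaFiles.isEmpty()) {
            System.out.println("No changed source files; skipping incremental scan.");
            return;
        }

        // Hypothetical scanner invocation: fast, targeted feedback on the delta.
        List<String> cmd = new ArrayList<>(List.of("sast-scan", "--fail-on", "high"));
        cmd.addAll(javaFiles);
        int exit = new ProcessBuilder(cmd).inheritIO().start().waitFor();
        System.exit(exit); // break the build if the scan reports serious findings
    }

    private static List<String> run(List<String> command) throws Exception {
        Process p = new ProcessBuilder(command).start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) lines.add(line);
        }
        p.waitFor();
        return lines;
    }
}
```

In a real pipeline the same pattern could key off the files in a pull request rather than the last commit, with the full scan scheduled nightly or weekly.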
When security testing can be done automatically and continuously, it becomes just another part of the way that developers work. Security bugs are found and tracked in the same way as other bugs, and fixed along with them. Every step in this direction will help developers take ownership of software security, and change how we build secure software.