Excerpt from my latest blog post at Cigital…
More and more organizations are using static analysis tools to find security bugs and other quality issues in software long before the code is tested and released. This is a good thing. Despite well-known frustrations like high false positive rates and relatively slow speeds, these tools are helping improve the overall security of software. Unfortunately, they may also have a dangerous blind spot: they do not know modern frameworks as well as they know the base languages.
The blind spot
Frameworks are doing more and more of the basic work, providing the common functionality of an application. This is a fantastic leap forward in productivity and in the ability to release software faster, and it frees up more time to focus on the core business functionality of applications.
Sometimes these frameworks are clearly separate things (like Spring, for example) and sometimes they are a mix of basic functionality and advanced features (like the .NET Framework, where the tools understand some features but not others). These frameworks are virtually exploding around us, offering many options to take care of the basic drudge work of application writing.
This explosion is happening fast and it seems to be accelerating. New versions and even whole new frameworks are appearing faster than most can keep up with. Static analysis tools do a decent job keeping up with the base languages. However, there is almost no way they can keep up with all of these frameworks; handling even a few of them well is hard, let alone all of them. As the frameworks take care of more and more of the plumbing within applications, this inability to understand what they are doing creates a blind spot in which code gets scanned and nothing gets reported.
Frameworks create data flows that static analysis tools may be blind to. They introduce sources of tainted data that the tools know nothing about, so there is nothing to trace to the sinks in code where problems could occur. The frameworks may also introduce new sinks, but since the tools do not know of them, the sources in code cannot be traced to them either. And frameworks provide functionality behind the scenes that the static analysis tools do not see at all.
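To make the source-and-sink idea concrete, here is a deliberately tiny sketch of how a taint-tracking scanner decides whether to report a finding, and why an unrecognized framework entry point silences it. Every function name, rule list, and "framework" below is hypothetical, invented purely for illustration; real tools work on syntax trees and data-flow graphs, not flat call lists.

```python
# Hypothetical rule sets: the functions this imaginary scanner has been
# taught about. Anything outside these sets is invisible to it.
KNOWN_SOURCES = {"read_http_param"}   # returns attacker-controlled data
KNOWN_SINKS = {"run_sql"}             # dangerous if fed tainted data

def scan(call_chain):
    """Report a finding only when a known source flows into a known sink."""
    has_source = any(fn in KNOWN_SOURCES for fn in call_chain)
    has_sink = any(fn in KNOWN_SINKS for fn in call_chain)
    return has_source and has_sink

# A flow the tool understands end to end: finding reported.
print(scan(["read_http_param", "build_query", "run_sql"]))        # True

# The same tainted data arriving through a framework binding the tool
# has never heard of: the source is invisible, so nothing is traced
# to the sink and nothing is reported. That silence is the blind spot.
print(scan(["framework_bind_request", "build_query", "run_sql"]))  # False
```

The second scan is the dangerous case from the text: the code is just as vulnerable, but the report is clean.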
If the static analysis tools cannot see it, they cannot report it. If they do not report it, organizations are left feeling secure when they are not.
False positives are annoying. False negatives are dangerous.
Read more at the Cigital blog
My latest blog post on Cigital’s blog.
Analyzing source code for security bugs gets a lot of attention and focus these days because it is so easy to turn the job over to a static analysis tool that looks for the bugs for you. The tools are reasonably fast, efficient, and pretty good at what they do. Most can be automated like a lot of other testing, which makes them very easy to use. But if you are not careful, you may still be missing a lot of issues lurking in your code, or you may be wasting developers’ time on false positives the tool was not sure about. As good as these tools have become, they still need to be backstopped by manual code review undertaken by security experts who can overcome the tools’ limitations. Let’s examine how manual code review complements static analysis tools to achieve code review best practices.
Read the full thing on Cigital’s blog.
This isn’t the only place my blogging appears. The Benefits of Code Scanning on Cigital’s Blog:
“All software projects are guaranteed to have one artifact in common – source code. Because of this guarantee, it makes sense to center a software assurance activity around code itself.”
-Gary McGraw, Software Security: Building Security In
When an author sits down to write today, they have great tools available to automatically check their spelling and grammar. They no longer need somebody to tediously proofread their work for the mundane errors; the computer does that for them. Tools such as these won’t help if the author has nothing to say, but the simple errors that have long plagued writers are quickly found by modern tools. By the time a work is reviewed, the simple problems have been identified and resolved.
Modern developers have similar tools available to them to address common problems in code. Static analysis tools, also known as code scanners, rapidly look at code and find common errors that lead to security bugs. The tools identify the common problem patterns, alert developers to them and provide suggestions on how to fix the problems. These tools will not take care of underlying design flaws, but they often help developers avoid many security bugs in code long before that code is turned over to testers or is put into production.
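The pattern matching described above can be sketched in a few lines. This toy scanner (rule patterns and messages invented for illustration) only shows the idea of flagging common risky constructs and suggesting a fix; real static analysis tools parse the code and reason about data flow rather than matching text.

```python
import re

# Hypothetical rules: each pairs a risky C pattern with fix-it advice,
# mimicking the "identify the pattern, suggest the fix" behavior of
# real code scanners in miniature.
RULES = [
    (re.compile(r"\bstrcpy\s*\("),
     "Unbounded copy; consider strncpy or an explicit bounds check."),
    (re.compile(r"\bgets\s*\("),
     "gets() cannot limit input length; use fgets instead."),
]

def scan_line(lineno, line):
    """Return (line number, advice) for every rule the line trips."""
    return [(lineno, msg) for pattern, msg in RULES if pattern.search(line)]

code = [
    "strcpy(dest, user_input);",          # flagged
    "fgets(buf, sizeof(buf), stdin);",    # fine: the safe alternative
]
for i, line in enumerate(code, start=1):
    for lineno, msg in scan_line(i, line):
        print(f"line {lineno}: {msg}")
```

Running it flags only the first line, with the suggested remediation attached, which is exactly the developer-facing feedback loop the paragraph describes.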
One of the ironies of waiting for tools is that sometimes you simply can’t wait. No matter how happily the tools will take care of the grunt work without getting tired or bored, sometimes you can’t let them.
Time is rarely on your side when it comes to work, and sometimes the schedule just won’t allow waiting for the tools. When it comes to testing like dynamic analysis of a website, the schedule may restrict your time even further so you do not impact other dev or testing work. You probably do not have exclusive access to the testing environment, after all, and rapid, automated testing of a website can have a significant impact. You might be restricted to after-hours scanning. You will probably have to finish the testing by a certain date, and maybe the tool just can’t meet that deadline given the other constraints.
As good as software security tools can be, they still lack a lot of smarts and judgement. We humans know when we are approaching a deadline and can adjust our approach accordingly. We can change priorities. We can say we’ve seen enough of this and decide to concentrate on that. We can decide that if a certain type of testing is producing a lot of false positives, we should focus on testing that produces true positives. We can call an issue systemic if we see the same result over and over and over again; there is little reason to keep testing for it when we need to spend our limited time elsewhere.
The tools can’t do that. They just chug merrily along doing the drudge work. The same thing over and over and over no matter how long it takes. The very thing that makes them great at doing that tedious stuff can make them very bad at meeting a deadline.
So we have to do it for them. We have to provide that judgement for the tools.
We know the schedule and need to judge how the tool is progressing against it. If it isn’t going to make it, we need to make the changes the tool can’t make on its own. Lots of false positives on one test? Turn it off. Lots of true positives on a particular test? Call it a systemic problem and stop testing for it so there is time for other tests. Have a web page with a whole lot of fields to be tested? Exclude it from the main scan and test it separately so that one page doesn’t consume too much of the testing time. Maybe the website is on an under-powered server over a slow connection, and reducing the number of testing threads may actually speed things up. If the tool seems to think its session has expired and keeps logging on, maybe its in-session detection relies on a part of the GUI that the testing bypasses, and you need to switch to out-of-session detection so it stops wasting time logging in when it doesn’t have to.
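That judgment loop can be expressed as a simple triage sketch. The thresholds, test names, and statistics below are all invented for illustration; in practice these decisions are made by a human editing the tool’s scan configuration, not by code like this.

```python
# A sketch of the human triage described above: look at how each scan
# test is performing and decide whether it is still worth the time.
# Thresholds and test names are hypothetical.
def triage(test_stats, fp_limit=20, systemic_limit=50):
    """Decide which scan tests to keep running as the deadline nears."""
    decisions = {}
    for test, stats in test_stats.items():
        if stats["false_positives"] > fp_limit:
            # Mostly noise: turn it off rather than burn reviewer time.
            decisions[test] = "disable: mostly noise"
        elif stats["true_positives"] > systemic_limit:
            # Same real hit over and over: call it systemic and move on.
            decisions[test] = "stop: systemic, file one finding"
        else:
            decisions[test] = "keep running"
    return decisions

stats = {
    "sql_injection": {"false_positives": 3, "true_positives": 120},
    "header_checks": {"false_positives": 85, "true_positives": 1},
    "xss":           {"false_positives": 5, "true_positives": 12},
}
print(triage(stats))
```

The point is not the code but the shape of the decision: a scanner will happily run all three tests to completion, while a human reallocates the remaining hours.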
Sometimes we do not have the luxury of waiting for the tools to do their jobs. Since the tools don’t get bored, they don’t watch the clock. We can and we must. We know how to take shortcuts and can make the judgement call when we know it is necessary. We poor, easily bored humans need to provide the judgement the tools can’t.
Ah, the joys of waiting for tools to do their job. Set up a scan, either a static scan of an application’s source code or a dynamic scan of a website, click go, and wait and wait and wait and…
If you’re lucky, the progress indicator s l o w l y creeps along. And you wait and wait and wait and…
Of course you can go off and do other stuff while the computer chugs, like attend a North Alabama ISSA lunchtime meeting or write a blog post, but you still end up coming back, looking at the progress, hoping it has moved since the last time you looked, and you wait and wait and wait and…
As tedious as that is, it’s far better than the alternative. It is far more tedious to look at code line by line by line through thousands or hundreds of thousands of lines of code. Far more tedious to try to hand-jam parameter manipulation and send it all at a website over and over again. It’s far less tedious to periodically check that progress bar, fingers crossed, to see if it has advanced. As eager as you might be to get to triaging the results, letting the tool compile those results for you is far less tedious than doing it all yourself. The computer doesn’t get tired or bored, doesn’t need coffee, and it’s pretty good at grinding its way through finding things that would take us weeks or months. It doesn’t care that the work day is over and can happily chug along overnight. (If you’re lucky and it doesn’t hang!) It can tirelessly track a data flow from source to sink across call after call after call through complicated call stacks. Try doing that manually for each and every input and not grow old while doing it!
And when the tool is done, you get to spend your time looking at interesting things and diving deep on something rather than spending your time and your customer’s money on tediously finding everything the hard way. We get to have fun, the computer gets to do the drudge work.
As fun as waiting can be, it beats the alternative.
Sigh, still scanning but at least the bar is moving. Back to waiting and waiting and…
One of my favorite memories of living through Microsoft’s adoption of secure development was sitting in a hacking demo by Foundstone for a lot of the Windows development team. The network security team was invited over to watch as well. The Foundstone guys demonstrated hacking an unpatched Windows 2000 system. As they got into one part of the system, I watched a developer hang his head and say, “I wrote that code.” I think that moment probably did more for software security with that developer than any other single thing. From that moment on he knew security features did not make code secure. He was won over.
During my SAIC days, all our customers were developing software for DoD. One of the fun parts of being a security guy in a DoD world is the reaction of people when you arrive. There are so many IA guys (DoDspeak for security folks, IA stands for Information Assurance) who think their job is to always say no that security guys who arrive to help are viewed with suspicion until they show they are different. Showing that could be a lot of fun.
Early on, one of the teams we were called in to help was not too happy to see us. In the initial meeting, the dev lead sat in the back corner, leaning back with his arms crossed. He clearly didn’t want us there. The team happened to be wrestling with a memory leak they could not find, and we were able to find it with the static analysis tool we were using at the time. Solving that problem made them more receptive, but not yet welcoming. When we passed on the results, and there were a lot since there was a lot of long-lived legacy C/C++ code, things looked grim until we set priorities for them and waved them off worrying about fixing everything right away. Since their schedule permitted it, we sent them after some easy-to-fix low-hanging fruit that allowed a very large, quick win. We then sang their praises to their boss. From then on we were greeted pretty well as we periodically reviewed their changes each year. Continue reading “Winning Over Developers”