Secure Code: Why buffer overflows still matter

To secure code, software pros still test for buffer overflows -- even though these flaws occur only in non-memory-managed languages such as C and C++.

When it comes to secure code, buffer overflows don't top the list of concerns. The vulnerability occurs only in non-memory-managed languages such as C, C++ and assembler, and most applications developed today are written in more modern languages, such as C#, Java, Ruby, Perl and Python.

But there is one big reason buffer overflows still matter: they often lurk in legacy systems that are linked into applications in production today. The risk for most environments isn't the new code; it is legacy code, glue code and third-party libraries, any of which may be vulnerable.

To secure code effectively, software professionals still have to contend with buffer overflows, which can be exploited to take control of thousands of computers and, from those machines, launch attacks against other systems.

In this tip, I outline what buffer overflow vulnerability involves, show what it looks like, and explain how to defend applications from it.

The buffer overflow vulnerability

Fifteen years ago, I was a C programmer writing billing software for a successful telecommunications company. At one point, the software started crashing for no apparent reason.

After a lot of debugging, we found the culprit. The programming language used the same memory space for executing code and for data structures. The application also reserved a block of memory to accommodate thirty phone numbers. When we added an account with more than thirty phone lines, the program kept adding the extra numbers to the list, overwriting the program code, and causing corruption and loss.
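The original billing code is long gone, but the pattern looked something like this minimal sketch (the struct and function names here are hypothetical, not the originals):

#include <string.h>

#define MAX_PHONES 30                       /* room for thirty phone numbers */

struct account {
    char phones[MAX_PHONES][16];            /* fixed-size list of numbers */
    int  phone_count;
};

/* Appends a number without checking phone_count against MAX_PHONES;
   the thirty-first number is written past the end of the array,
   on top of whatever happens to sit there in memory. */
void add_phone(struct account *acct, const char *number)
{
    strcpy(acct->phones[acct->phone_count++], number);
}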

Now imagine that things had been a little different: that same program received input from the Web, and a hacker sent in a very long string for a phone number, not to cause a crash, but to insert new x86 program code into memory.

The program code the attacker inserts can be anything. It might be a keystroke logger that stores everything a user types (such as a bank URL, username and password). It might be a miniserver that grants the attacker shell access. Most commonly, it is a rootkit that gives the attacker ongoing access to the computer.

Here's a simple C code example of a buffer overflow vulnerability:

#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    char buffer[200];                 /* fixed-size buffer on the stack */
    strcpy(buffer, argv[1]);          /* attacker-controlled input, no length check */
    /* ... Time Passes ... */
    printf("Hello %s", buffer);
    return 0;
}

Here's the same example in C++, with the buffer allocated on the heap instead of the stack:

#include <cstdio>
#include <cstring>

int main(int argc, char* argv[])
{
    char* buffer = new char[200];     // fixed-size buffer on the heap
    strcpy(buffer, argv[1]);          // attacker-controlled input, no length check
    //... Time Passes ...
    printf("Hello %s", buffer);
    delete[] buffer;
    return 0;
}

Note that in both examples we set aside some memory, then copy input over it without ever checking the input's length.
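For contrast, here is a minimal sketch of the C example with the copy bounded. The standard snprintf never writes more than the size it is given, so an oversized argument is truncated instead of spilling over:

#include <stdio.h>

int main(int argc, char* argv[])
{
    char buffer[200];

    if (argc < 2)
        return 1;                     /* no input supplied */

    /* Writes at most sizeof(buffer) bytes, including the
       terminating null, no matter how long argv[1] is. */
    snprintf(buffer, sizeof(buffer), "%s", argv[1]);

    printf("Hello %s", buffer);
    return 0;
}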

How to deal with buffer overflows

It is unlikely that in 2013 development teams are writing Web applications in C or C++. But it is quite likely that the application is calling into a library or accessing legacy code that is written in those languages. It is also unlikely that the budget for rewriting those legacy applications will magically appear.

To find buffer overflow vulnerabilities, run a static analysis tool over the codebase. Lint and lint++ are two static analyzers that can find these weaknesses. But again, the vulnerabilities will probably be in wrapped code. If the external, world-facing code is written in Ruby, C# or Java and checks the length of input, cutting it off at a reasonable size, you can prevent the downstream system from ever seeing a problem.
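Whatever language that world-facing code is in, the gatekeeping pattern is the same. Here is a minimal sketch in C, with a hypothetical legacy_store_number() standing in for the wrapped legacy routine:

#include <string.h>

#define MAX_PHONE_LEN 32                    /* assumed limit for this field */

void legacy_store_number(const char *number);   /* hypothetical legacy code */

/* Rejects oversized or missing input before it can reach the
   legacy routine, so the downstream code never sees a string
   longer than it was written to handle. */
int store_number_checked(const char *input)
{
    if (input == NULL || strlen(input) > MAX_PHONE_LEN)
        return -1;
    legacy_store_number(input);
    return 0;
}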

The final concern around secure code is those pesky third-party applications, which could be vulnerable. The best bet there may be to maintain privileged servers free of third-party applications, while working with the security group to educate employees about risks.

Buffer overflows are unlikely in modern Web applications -- they do not currently appear on the OWASP Top Ten -- but they are still a risk for many shops, especially those with legacy Windows applications. Like any disease, you might not have it now, but education and prevention sure beat the alternative.


Join the conversation


Is your development team testing applications for code vulnerabilities?

There can be testing, and then there can be testing.

I once did a quick assessment of a critically important piece of software used in a government infrastructure. It had been scanned for security vulnerabilities with an enterprise-class tool; the architects reviewed the findings and deemed them low impact and low risk. The Product Owner agreed.

It didn't take me long, though, to identify a form-submission loophole and save a document "legally signed" in a "Galaxy Far, Far Away" by "Emperor Palpatine" as the judge. All I did was overwrite items in the dropdowns using built-in browser tools (press F12). Any user could do that.

The Product Owner was shocked! The architect was confused. And the tool was not to blame: the vulnerability was indeed in the tool's report, but it looked harmless, disguised by technical terms.

It's important to put security concerns in context to identify the problem and its threat level. We teach this skill in testing.

How about a better question: Is your development team including code security scanning as part of their build and deploy process? Do they scan dependencies for known vulnerabilities they may need to mitigate?
Do we perform these kinds of tests? Yes. Have we closed the loopholes on all of the findings we have? Nope. Part of the reason is that there is a continuum of code vulnerabilities. The more serious and obvious ones definitely get fixed, but many of the reports and findings turn out to be line noise, or are deemed unimportant or of vanishingly low probability of ever being an issue. It's an ongoing conversation.