Java security isn't well understood, even by those who create Java applications. Fortify chief scientist Brian Chess describes common security oversights programmers commit that lead to exploits such as XSS, session hijacking and SQL injection.
If you're like most programmers, you were not taught the value of security. For many, security problems are mere bugs to be fixed in the next release. Making matters worse, Java programmers are often lulled into the belief that Java is intrinsically more secure than C; therefore, the software they write is "secure enough."
That, says Brian Chess, chief scientist of Fortify Software, has to stop. At his session at last week's JavaOne conference in San Francisco, Chess argued that to properly secure software, security must be considered a first-class concern with its own "customer" or "actor" in the software development process responsible for ensuring security requirements are met.
Getting security right requires policy, process and tools. To illustrate that, Chess presented 12 common examples of security vulnerabilities in Java software, complete with code examples. (Chess' slides, including code examples, are available for download; authorization is required.)
- Injection attacks
SQL injection attacks occur when an attacker includes raw SQL as input to a text field in an application. If the application uses that input to dynamically generate an SQL statement without first applying proper input validation and representation, the attacker's SQL may be executed as written in the database.
Most experienced Java programmers know to avoid this vulnerability by using bind variables in SQL prepared statements. But as Chess pointed out, injection attacks can also occur as filename injection when writing to critical system files (a "../../" sequence in a filename is easy to overlook), as command injection, as XPath injection, or fundamentally in any string that may be interpreted by any part of the system at any time. The vulnerable input doesn't even have to be a String. Input validation, which accepts only inputs in a specified range of values, is therefore a must for any system.
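The bind-variable defense looks roughly like the following sketch (the table name, column names and username format are hypothetical); the whitelist helper illustrates the input-validation rule from the paragraph above.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.regex.Pattern;

public class SafeQuery {
    // Whitelist validation: only inputs in a known-good range are accepted.
    private static final Pattern USERNAME = Pattern.compile("[A-Za-z0-9_]{1,32}");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    // The bind variable keeps the attacker's input out of the SQL text itself.
    public static ResultSet findUser(Connection conn, String username) throws SQLException {
        if (!isValidUsername(username)) {
            throw new IllegalArgumentException("rejected input");
        }
        PreparedStatement ps = conn.prepareStatement(
            "SELECT id, name FROM users WHERE name = ?");
        ps.setString(1, username);  // never: "... WHERE name = '" + username + "'"
        return ps.executeQuery();
    }
}
```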
- Cross-site scripting
Cross-site scripting (XSS) occurs when an application echoes untrusted input back into a web page without encoding it, allowing an attacker's script to run in another user's browser, where it can steal session cookies or rewrite page content.
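XSS is defeated by encoding untrusted data before it reaches HTML output. A minimal output-encoding helper (hand-rolled here for illustration; production code typically uses a vetted encoding library) shows the idea:

```java
public class HtmlEncoder {
    // Encode the characters that let attacker input break out of HTML text.
    public static String encodeForHtml(String input) {
        if (input == null) return "";
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```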
- Bad credential management
This means keeping user names and passwords in code or configuration files as plain text, or obscuring them with a trivial Base64 encoding. (It's very common for JDBC connections.)
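One common remedy is to pull the secret from the environment (or a secrets manager) at startup instead of embedding it in source; the variable name in this sketch is hypothetical.

```java
public class DbConfig {
    // Fail fast if the secret was not supplied externally,
    // e.g. via an environment variable like DB_PASSWORD (hypothetical name).
    public static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```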
- Bad error handling
Displaying exception stack traces in a browser window provides the attacker with the tools to "debug" your program.
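A sketch of the fix: keep the full stack trace in the server log and hand the user only a generic message with an opaque incident reference.

```java
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SafeErrors {
    private static final Logger LOG = Logger.getLogger(SafeErrors.class.getName());

    // Returns a generic message; the stack trace stays server-side,
    // correlated by an incident ID the user can report to support.
    public static String handle(Exception e) {
        String incidentId = UUID.randomUUID().toString();
        LOG.log(Level.SEVERE, "Incident " + incidentId, e);
        return "An internal error occurred (reference " + incidentId + ").";
    }
}
```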
- Test code goes into production
Programmers often include hooks in their applications to support debugging. If they fail to remove all of that logic later, it can accidentally go into production, where it leaves the application open to attack.
- Native methods
All of Java's security guarantees are lost once Java calls native code. Buffer overflows and the entire set of vulnerabilities commonly exposed in C and C++ programs become possible in Java as a result of native method calls.
- Shared data and caching
Improperly written shared caches, or even simple member variables in servlets, can result in one user's data being displayed to another user. If this occurs once by accident, an attacker may latch onto the vulnerability until it exposes critical data, such as a credit card number.
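The servlet-field bug can be reproduced with plain threads; this sketch stands in for a servlet whose member variable is shared by concurrent requests (the latches exist only to make the race deterministic for demonstration).

```java
import java.util.concurrent.CountDownLatch;

public class SharedStateDemo {
    // BAD: an instance field shared by all "requests", like a servlet member variable.
    private String currentUser;

    String unsafeHandle(String user, CountDownLatch wrote, CountDownLatch proceed)
            throws InterruptedException {
        currentUser = user;   // this request stores its data in shared state
        wrote.countDown();    // signal that the write happened
        proceed.await();      // another request may overwrite currentUser meanwhile
        return currentUser;   // may now be another user's data
    }

    // GOOD: a local variable is confined to the current request/thread.
    public String safeHandle(String user) {
        String localUser = user;
        return localUser;
    }

    // Deterministically reproduces the leak: "alice"'s request ends up seeing "bob".
    public static String demoRace() {
        try {
            SharedStateDemo servlet = new SharedStateDemo();
            CountDownLatch aliceWrote = new CountDownLatch(1);
            CountDownLatch aliceProceed = new CountDownLatch(1);
            String[] aliceSees = new String[1];
            Thread alice = new Thread(() -> {
                try {
                    aliceSees[0] = servlet.unsafeHandle("alice", aliceWrote, aliceProceed);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            alice.start();
            aliceWrote.await();
            // "bob"'s request runs on the same instance and overwrites the field.
            servlet.unsafeHandle("bob", new CountDownLatch(1), new CountDownLatch(0));
            aliceProceed.countDown();
            alice.join();
            return aliceSees[0];
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```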
- Missing access control
Often programmers secure JSP pages by including security checks at the top of each page. Such a scheme is highly vulnerable because the checks are sprinkled throughout the code rather than contained in one central location. Even when access control is done in a servlet filter, the vulnerability can still occur if web.xml fails to map the filter at deployment. The development process needs hooks to ensure that access checks are continually verified and audited.
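A minimal sketch of centralizing the check (the paths and role names are hypothetical): every request consults one policy object, and anything without a matching rule is denied by default, instead of relying on per-page checks.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class AccessPolicy {
    // One central table of path prefix -> allowed roles, instead of
    // ad hoc checks scattered across individual JSP pages.
    private final Map<String, Set<String>> rules = new LinkedHashMap<>();

    public AccessPolicy allow(String pathPrefix, String... roles) {
        rules.put(pathPrefix, Set.of(roles));
        return this;
    }

    // Deny by default: a path with no matching rule is not accessible.
    public boolean isAllowed(String path, String role) {
        for (Map.Entry<String, Set<String>> rule : rules.entrySet()) {
            if (path.startsWith(rule.getKey())) {
                return rule.getValue().contains(role);
            }
        }
        return false;
    }
}
```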
- Bad session management
Session hijacking occurs anytime an attacker can obtain data from another user's session. Best practices for protecting a session from being hijacked include the following:
- Issuing a new session ID when transitioning from an unauthenticated session to an authenticated session (or back).
- Truly invalidating the current session upon logout. If the session isn't invalidated, the attacker can "log in" again by simply clicking the back button.
- Ensuring sufficient randomness of session IDs to reduce the risk of an attacker "guessing" another user's session ID.
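The randomness point can be sketched with java.security.SecureRandom (servlet containers normally generate session IDs for you; this only illustrates why java.util.Random or a simple counter would be guessable):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    // 128 bits of cryptographically strong randomness per ID makes
    // guessing another user's session ID computationally infeasible.
    public static String newSessionId() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```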
- Cookies and other headers
Cookies and headers are just as vulnerable to "injection" attacks as text fields in forms. Often, programmers and frameworks remember to validate form input, but they overlook validation of headers and cookies. Cookies, headers, text fields and hidden form variables all represent data sent by the browser and cannot be trusted by the server without proper validation.
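Header and cookie values deserve the same whitelist treatment as form fields; in particular, embedded CR/LF characters enable response-splitting attacks. A hypothetical validator:

```java
import java.util.regex.Pattern;

public class HeaderValidator {
    // Whitelist: printable ASCII only, bounded length; notably this
    // rejects CR/LF characters that could split the HTTP response.
    private static final Pattern SAFE_VALUE = Pattern.compile("[\\x20-\\x7E]{0,256}");

    public static boolean isSafeHeaderValue(String value) {
        return value != null && SAFE_VALUE.matcher(value).matches();
    }
}
```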
- Logging sensitive data
The programmer's mantra of "log everything" can result in disaster if credit card numbers or other sensitive data accidentally make their way into log files.
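One mitigation is to scrub likely card numbers from messages before they reach the log; the regex here is a rough illustration, not a complete card-number detector.

```java
import java.util.regex.Pattern;

public class LogScrubber {
    // Matches 16- to 19-digit runs and keeps only the last four digits.
    private static final Pattern PAN = Pattern.compile("\\b\\d{12,15}(\\d{4})\\b");

    public static String scrub(String message) {
        return PAN.matcher(message).replaceAll("****$1");
    }
}
```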
- Trusting the configuration
Configuration files from the local file system count as input data, too. These files should be checked for vulnerabilities and invalid input; otherwise, an attacker can modify a configuration file and inject malicious behavior into an application.
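Treating configuration as untrusted input means parsing it defensively and range-checking it like any other value; a sketch with a hypothetical port setting:

```java
import java.util.Properties;

public class ConfigValidator {
    // Config values are input: parse defensively and enforce a valid range.
    public static int requirePort(Properties props, String key) {
        String raw = props.getProperty(key);
        if (raw == null) {
            throw new IllegalArgumentException("Missing config key: " + key);
        }
        int port;
        try {
            port = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("Not a number: " + key + "=" + raw);
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("Port out of range: " + port);
        }
        return port;
    }
}
```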
"Security cannot be considered as simply a sequence of bugs to be fixed," Chess concluded. "Security requires changes to the way software is developed, using policy, process and tools to effectively tackle the challenge."
This article originally appeared on TheServerSide.COM.