No matter how much Web application vulnerability testing you do, there will always be weaknesses you overlook, regardless of your expertise and the quality of your tools. What makes true security professionals stand out is their ability to learn from their oversights and test with a sharper eye next time.
What are some common flaws in Web applications that are consistently overlooked during security assessments? Well, they're more basic than you'd think. But it's always the simple stuff that comes back to bite you.
The following are Web application vulnerabilities that we've all likely overlooked yet we can't afford to miss.
- Files that shouldn't be publicly accessible
Using a Web mirroring tool such as HTTrack, mirror your site(s) and manually browse the files and folders downloaded to your local system. Check for FTP log files, Web statistics (such as Webalizer) log files, and backup files containing source code and other comments that the world doesn't need to know about. You can also use Google hacking tools such as SiteDigger and Gooscan to look for sensitive information you may not have thought about. You'll likely find more files and information using manual scans than Google hacks, but do both to be sure.
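As a rough complement to the mirroring step, a short script can request a handful of commonly exposed paths directly and report which ones the server actually serves. The path list below is an illustrative assumption, not a definitive checklist -- extend it with anything suspicious you spot while browsing the mirror:

```python
import urllib.request
from urllib.parse import urljoin

# Paths that commonly leak information (illustrative, not exhaustive);
# add candidates discovered while browsing the HTTrack mirror.
SENSITIVE_PATHS = [
    "/backup.zip", "/site.bak", "/webalizer/", "/usage/",
    "/ftp.log", "/.git/config", "/web.config.bak",
]

def build_probe_urls(base_url):
    """Return the full URLs to request for a given site."""
    return [urljoin(base_url, p) for p in SENSITIVE_PATHS]

def probe(base_url):
    """Request each candidate path and report those that return HTTP 200.
    Only run this against sites you are authorized to test."""
    found = []
    for url in build_probe_urls(base_url):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    found.append(url)
        except Exception:
            pass  # 404s, timeouts and connection errors are expected
    return found
```

Anything `probe()` reports deserves a manual look -- a 200 response alone doesn't prove the content is sensitive.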
- Functionality that's browser specific
With all the "standards" that exist for HTTP, HTML and browser compatibility, you'll undoubtedly witness different application behavior using different browsers. I see things like form input, user authentication and error generation handled one way in Firefox and yet another in Internet Explorer. I've even seen different behavior among varying versions of the same browser.
I've also come across security issues when using an unsupported browser. Even if you're not supposed to use a certain browser, use it anyway and see what happens. So, when you're digging in and manually testing the application, be sure to use different browsers -- and browser versions if you can -- to uncover some "undocumented features".
- Flaws that are user-specific
It's imperative to go beyond what the outside world sees and test your Web applications as an authenticated user. In fact, you should use automated tools and manual checks across every role or group level whenever possible. I've found SQL injection, cross-site scripting (XSS), and other serious issues while logged in as one type of user that didn't appear at a lower privilege level and vice versa. You'll never know until you test.
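One way to keep cross-role testing honest is to record, for each role and URL tested, what the application actually returned, then compare that against what each role is supposed to reach. A minimal sketch, with hypothetical role names and paths standing in for your application's real ones:

```python
# Hypothetical allowed-access matrix: which paths each role SHOULD reach.
ALLOWED = {
    "admin":  {"/admin/users", "/reports", "/profile"},
    "editor": {"/reports", "/profile"},
    "viewer": {"/profile"},
}

def find_privilege_gaps(observed):
    """observed maps (role, path) -> HTTP status seen while testing
    logged in as that role. Returns (role, path) pairs that succeeded
    even though the matrix says they shouldn't have."""
    gaps = []
    for (role, path), status in observed.items():
        if status == 200 and path not in ALLOWED.get(role, set()):
            gaps.append((role, path))
    return gaps
```

Run your scanner and manual checks once per role, feed the results in, and every gap the function reports is a privilege-level flaw to investigate.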
- Operating system and Web server weaknesses
It's one thing to have a solid Web application, but keeping the bad guys out of the underlying operating system, Web server and supporting software is quite another. It's not enough to use automated Web vulnerability scanners and manual tests at the application layer. You've got to look at the foundation of the application and server as well. I often see missing patches, unhardened systems and general sloppiness flying under the radar of many security assessments. Use tools such as Nessus or QualysGuard to see what can be exploited in the OS, Web server or something as seemingly benign as your backup software. The last thing you want is someone breaking into your otherwise bulletproof Web application at a lower level -- obtaining a remote command prompt, for example -- and taking over the system that way.
- Form input handling
Forms are one area of Web applications where people rely too heavily on automated security scanning tools. The assumption is that automated tools can throw anything and everything at forms, testing every possible scenario of field manipulation, XSS and SQL injection. That's true, but what tools can't do is apply expertise and context to how the forms actually work and how a typical user can manipulate them.
Determining exactly what type of input specific fields will accept -- combined with the options presented in radio buttons and drop-down lists -- is something you can analyze only through manual assessment. The same goes for what happens once the form is submitted, such as the errors returned and delays in the application. This can prove very valuable in the context of typical Web application usage.
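A simple way to structure that manual work is to run a small set of hand-picked probe values through one field at a time and note how the status code, response size and latency differ. The probe list below is an illustrative starting point, not a complete test suite, and the form URL and field name are assumptions you'd replace:

```python
import time
import urllib.parse
import urllib.request

def field_probes(max_len=50):
    """Probe values for a single form field, each exercising a
    different handling path. Extend based on what the field
    claims to accept (lengths, character sets, formats)."""
    return [
        ("empty", ""),
        ("overlong", "A" * (max_len + 1)),
        ("xss", "<script>alert(1)</script>"),
        ("sqli", "' OR '1'='1"),
        ("numeric-only", "12345"),
    ]

def submit_probe(form_url, field, value):
    """POST one probe and record status, body length and latency --
    differences across probes often reveal how input is handled.
    Only run against applications you're authorized to test."""
    data = urllib.parse.urlencode({field: value}).encode()
    start = time.monotonic()
    with urllib.request.urlopen(form_url, data=data, timeout=10) as resp:
        body = resp.read()
    return resp.status, len(body), time.monotonic() - start
```

Comparing those three numbers across probes -- rather than eyeballing one response at a time -- is where the manual context the tools lack comes in.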
- Application logic
Similar to form manipulation, analyzing your Web application's logic by some basic poking and prodding will uncover as many, if not more, vulnerabilities than any automated testing tool. The possibilities are unlimited, but some weak areas I've found revolve around the creation of user accounts and account maintenance. What happens when you add a new user? What happens when you add that same user again with something slightly changed in one of the sign-up fields? How does the application respond when an unacceptable password length is entered after the account is created?
You should also check the headers of email messages the application sends to users. What can you discover? It's very likely the internal IP address -- or the addressing scheme of the entire internal network -- is divulged. That's not something you want outsiders knowing.
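Checking for that leak can be automated: send yourself a message through the application, then scan the raw headers (typically the Received: lines added by internal relays) for RFC 1918 addresses. A minimal sketch:

```python
import ipaddress
import re

# The three RFC 1918 private address blocks.
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def leaked_internal_ips(raw_headers):
    """Return private addresses found in raw e-mail headers --
    each one discloses part of the internal network layout."""
    leaked = []
    for candidate in re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", raw_headers):
        try:
            ip = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # e.g. "300.1.2.3" matched the regex but isn't an IP
        if any(ip in net for net in RFC1918):
            leaked.append(candidate)
    return leaked
```

Internal hostnames in the same headers are worth flagging too; this sketch only covers the IP side.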
Also, look at general application flows, including creation, storage and transmission of information. What's vulnerable that someone with malicious intent could exploit?
- Authentication weaknesses
It's easy to assume that basic form or built-in Web server authentication is going to protect the Web application, but that's hardly the case. Depending on the authentication coding and specific Web server versions, the application may behave in different ways when it's presented with login attacks -- both manual and automated.
How does the application respond when invalid user IDs and passwords are entered? Is the user specifically told what's incorrect? That response alone can give a malicious attacker a leg up, telling him whether to focus on attacking the user ID, the password, or both. What happens when nothing is entered? How does the authentication process behave when nothing but junk is entered? How do the application, server and Internet connection stand up when a dictionary attack is run using a tool such as Brutus? Do log files fill up? Does performance degrade? Do user accounts get locked after a number of failed attempts? Those are all things that affect the security -- and availability -- of your application and should be tested accordingly.
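The "is the user told what's incorrect" check is easy to script once you've collected the application's failure messages. The phrases below are illustrative assumptions -- substitute the strings your application actually returns:

```python
# Illustrative phrases that reveal WHICH credential was wrong;
# replace with the application's real error strings once observed.
REVEALING_PHRASES = (
    "no such user",
    "user not found",
    "unknown user",
    "password incorrect",
    "wrong password",
)

def leaks_which_credential(error_message):
    """True if a login-failure message tells the attacker whether
    the user ID or the password was the invalid half -- a generic
    'invalid credentials' message should return False."""
    msg = error_message.lower()
    return any(phrase in msg for phrase in REVEALING_PHRASES)
```

Run it over the responses to a few deliberate failures (bad user ID with a good password, good user ID with a bad password, both bad) and compare: any difference between those responses, even in timing or length, is information for an attacker.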
- Sensitive information transmitted in the clear
It seems simple enough to just install a digital certificate on the server and force everyone to use Secure Sockets Layer (SSL). But are all parts of your application using it? I've come across configurations where certain parts of applications used SSL but others did not. Lo and behold, the areas that weren't using SSL ended up transmitting login credentials, form input and other sensitive information in the clear for anyone to see. It's not a big deal until someone on your network loads up a network analyzer or a tool such as Cain, performs ARP poisoning and captures all HTTP traffic flowing across the network -- passwords, session information and more. There's also the inevitable scenario of employees working from home or a coffee shop on an unsecured wireless network. Anything transmitted via unsecured HTTP is fair game for abuse. Make sure everything in the application is protected via SSL -- not just the seemingly important areas.
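One quick way to spot the unprotected corners is to sweep the pages you mirrored earlier for form actions and resource links that point at plain http:// URLs. This is a regex-based sketch (a more robust pass could use Python's html.parser), and the markup it matches is an assumption about typical HTML:

```python
import re

def plain_http_references(html):
    """Find form actions, links and resources in a page that point at
    plain http:// URLs -- each one is a spot where form input or
    session material could travel in the clear."""
    pattern = r'(?:action|href|src)=["\'](http://[^"\']+)["\']'
    return re.findall(pattern, html, re.IGNORECASE)
```

Run it across every mirrored page; a single http:// form action on an otherwise all-SSL site is exactly the kind of gap described above.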
- "Possible" SQL injections
When using automated Web application vulnerability scanners, you may come across scenarios where "possible" SQL injections are discovered while logged in to the application. You may be inclined to stop or not know how to proceed, but I encourage you to dig deeper. The tool may have found something but wasn't able to actually verify the problem due to authentication limitations, session timeouts or other constraints. A good SQL injection testing tool will provide the ability to authenticate users and then perform its tests. If the application uses form-based authentication, don't fret. You can capture the original SQL injection request, paste the entire HTTP request into a Web proxy or HTTP editor, and submit it within a session you're already authenticated to. It's a little extra effort, but it works, and you may find your most serious vulnerabilities this way.
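The replay step can also be scripted: rebuild the captured request, attach the session cookie copied from your logged-in browser, and resend it. The URL, request body and cookie name below are placeholders for whatever your scanner and browser actually produced:

```python
import urllib.request

def build_replay_request(url, body, session_cookie):
    """Rebuild a captured 'possible SQL injection' request so it runs
    inside an authenticated session; pass the result to
    urllib.request.urlopen(). Only use this against applications
    you are authorized to test."""
    return urllib.request.Request(
        url,
        data=body.encode(),
        headers={
            "Cookie": session_cookie,  # copied from your logged-in browser
            "Content-Type": "application/x-www-form-urlencoded",
        },
        method="POST",
    )
```

Compare the response against the unauthenticated attempt -- a database error or changed page content where the scanner previously saw only a timeout is the verification you were missing.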
- False sense of firewall or IPS security
Many times a firewall or intrusion detection/prevention system (IPS) will block Web application attacks. Validating that this works is good, but you also need to test what happens when such controls aren't in place. Imagine the scenario where an administrator makes a "quick" firewall rule change, or the protective mechanisms are disabled or taken offline altogether. You've got to plan for the worst-case scenario. Disable your network application protection and/or set up permissive rules and see what happens. You may be surprised.
With all the complexities of our applications and networks, all it takes is one unintentional oversight for sensitive systems and information to be put in harm's way. Once you've exhausted your vulnerability search using automated tools and manual poking and prodding, look a little deeper. Check your Web applications with a malicious eye -- what would the bad guys do? Odds are there are some weaknesses you may not have thought about.
About the author: Kevin Beaver is an independent information security consultant, speaker, and expert witness with Atlanta-based Principle Logic, LLC. He has more than 19 years of experience in IT and specializes in performing information security assessments revolving around IT compliance. Kevin has authored/co-authored six books on information security including Hacking For Dummies and Hacking Wireless Networks For Dummies (Wiley) as well as The Practical Guide to HIPAA Privacy and Security Compliance (Auerbach). He's also the creator of the Security On Wheels series of audiobooks. Kevin can be reached at firstname.lastname@example.org.