Access control is made up of several components:
- Authentication (authC) -- Who are you?
- Authorization (authZ) -- Are you privileged enough to do X?
- Audit -- Secure logging of all access to determine what happened at a later point in time
Authentication strength depends initially on the evidence of identity and on the type of credential (passwords, tokens, etc.). Obviously, the higher the value of the transaction or the risk of fraud, the stronger the credential required. Strong credentials require server-side infrastructure, such as an access control server or a PKI for smart cards.
Authorization is essential. Many applications have poor-quality authorization matrices, which allow callers to perform privileged actions because no authorization check is made at all. If an Ajax application uses client-side authorization, this is a recipe for disaster, particularly if there are no server-side controls. The attacker can simply change or eliminate the authorization code and any associated security tokens in the DOM, such as an admin flag.
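A minimal sketch of moving that check to the server; the session store, role names and the `handle_delete_user` endpoint are my own illustrations, not from the article. Whatever admin flag the client may claim in the DOM is simply never consulted:

```python
# Hypothetical server-side session store: session id -> identity and role.
SESSIONS = {
    "abc123": {"user": "alice", "role": "admin"},
    "def456": {"user": "bob", "role": "user"},
}

def handle_delete_user(session_id, target_user):
    """Authenticate and authorize on the server; ignore any client-sent admin flag."""
    session = SESSIONS.get(session_id)
    if session is None:
        return "401 Unauthorized"   # authC failed: unknown session
    if session["role"] != "admin":
        return "403 Forbidden"      # authZ failed: not privileged enough
    # Audit: securely log who did what, for later review.
    print(f"AUDIT: {session['user']} deleted {target_user}")
    return "200 OK"
```

Because the decision lives entirely on the server, tampering with client-side code or tokens changes nothing about what the caller is allowed to do.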
The only safe path is for Ajax apps to use server-side authC/authZ checks and auditing. That way, if there is an Ajax path and a normal Web app path, there is no preferential treatment for either path, and security is maintained in one place rather than two.

What do developers need to know about state management and client-side storage of secure state?
There is no such thing as secure state on the client. You must revalidate the data every time you receive it on the server. Developers should not transmit sensitive state -- such as authorization tokens, database passwords or access levels -- to the client unless it is displayed for information purposes only. Returning such data is a bad idea, and the server should never act on it without validating it first.
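As a sketch of that rule, consider a price echoed back by the client. The price table and order handler below are illustrative assumptions, not from the article; the point is that the server recomputes the sensitive value rather than trusting the copy the client returns:

```python
# Hypothetical server-side catalogue; the authoritative source of prices.
PRICES = {"widget": 19.99, "gadget": 4.50}

def handle_order(item_id, client_total):
    """Revalidate client-returned state: the client's total is display-only."""
    if item_id not in PRICES:
        raise ValueError("unknown item")
    server_total = PRICES[item_id]
    if client_total != server_total:
        # Tampering or stale data: keep the server's value and log the mismatch.
        print(f"WARN: client total {client_total} != server total {server_total}")
    return server_total
```

Even if an attacker edits the DOM or the Ajax request to claim a total of one cent, the charge is computed from server-side state.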
Lastly, it has become increasingly common to send all data to the client and let the client choose the correct nodes using an XPath query. This is unacceptable in my view. Not only is it wasteful of bandwidth, but it is also a privacy concern, particularly if you can see other people's records. Do the XPath query on the server and send only the necessary and authorized data to the client.

What constitutes strong validation?
Most users aren't out to get you, but some are. Strong validation includes:
- For strings: maximum length, and a whitelist of permissible characters.
- For integers: validate for range and signedness. In general, do not use "int," but "unsigned int" by default.
- For arrays, such as pull-down lists and radio groups, use simple values ("1".."2".."3") for choices rather than the data itself. A perfect example is a pull-down menu containing credit card numbers. Do not use the credit card number as the return value; that allows trivial tampering. Instead, use a simple index value (1, 2, 3, etc.), and use the partial account number as the display option within the select.
- For checkboxes and boolean values, constrain the choices to just true/false.
- Create new compound types, such as Zip codes, that can be validated against their innate syntax rules using regular expressions (regexes).
Negative validation is exactly like maintaining virus definitions: it is an impossible task to keep up to date with new attack methods. Blacklisting has truly, spectacularly and continuously failed to protect information assets over the last 20-something years. It's vital that developers just say "NO!" to blacklists.
Whitelisting is the only validation method that might be safe. It is far better to test and reject data than to try to sanitize it with blacklists (negative validation) and accept possibly hostile input.

Will the steps you've outlined keep Ajax developers ahead of the bad guys?
By doing the basics -- data validation, good architecture, safer APIs (there's no excuse for SQL injection) and so on -- you are well on the way to being protected. Malicious exploiters are very opportunistic, and if it takes 20 times as long to develop an exploit for your app as for another app, attackers will target the other apps. I do not pretend for a second that any app is "safe." That is why the defender's role is so hard: we have to protect against everything, yet we do not have the budget or the time to code and test for every potential flaw known today, let alone flaws yet to be discovered.
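The "safer API" point can be illustrated with parameterized queries, which remove SQL injection as a class of bug. The table and data below are illustrative only:

```python
import sqlite3

# In-memory database with illustrative data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name):
    # The ? placeholder binds name as data, never as SQL text, so a
    # hostile value like "' OR '1'='1" is just an unmatched string.
    cur = conn.execute("SELECT name, role FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

With string concatenation the hostile input would rewrite the query; with a bound parameter it simply matches no rows.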
Many developers are simply unaware they need to learn this stuff. The people who attend OWASP local chapters are already converts to our cause, and many may know more than I do. Instead, I really want to speak to the business people, the architects and the engineers creating new software. They are the important blank canvases to inspire and convert to our cause.
Security researchers such as me do more basic research than the bad guys, but the bad guys are now being paid serious money to develop exploits for flaws in well-known software. They use many of the same techniques we use to identify and exploit these flaws, and in some cases they break new ground -- the SAMY worm is an example.
I expect more Ajax vulnerabilities and exploits to surface, and I expect researchers to come up with additional "new" flaws that need to be protected against. It's not that these flaws don't already exist; it's just that we haven't yet found and described them. Hopefully, we find them before the zero-day crowd does. There is a lag time between our research and when developers can -- or more likely, fail to -- implement the controls necessary for a safe application.
Andrew van der Stock is a leading Australian Web application researcher. He is the moderator of webappsec and helped organize the Melbourne OWASP chapter. Van der Stock is leading Version 3.0 of the OWASP Guide to Building Secure Web Applications, which includes a new chapter on Ajax.