LAS VEGAS -- Ignoring application security is no longer an option: Web sites are now Web applications, Michael Sutton told his audience at last week's Better Software conference. Sutton, a security evangelist at SPI Dynamics Inc. (soon to be acquired by HP), delivered the message in his session, "Is Web 2.0 a Hacker's Dream?"
Web 2.0 is a meaningless marketing term, but the threats are real
Despite the session title, Sutton rejected the term Web 2.0 as an ambiguous "catchall phrase for any new technology." Many of these supposed Web 2.0 technologies have long histories, he pointed out. As he stressed many times, the same vulnerabilities that afflict Web 1.0 applications also afflict Web feeds, Web services and Ajax applications.
However, these dynamic applications are harder to secure because they create more attack vectors. Sutton broke down the security situation of Web 2.0 into a simple equation:
Same vulnerabilities + additional input vectors = more complexity

That's the breakdown; here are the details.
Sutton identified three attack situations that could affect Web feeds:
- Compromised host: This attack is "certainly more likely," Sutton said. You're dealing with a legitimate site where a "trusted host has already taken care of generating traffic for the attacker." The challenge for the attacker is injecting code into the Web feed.
- Open content: "This is very likely," Sutton said. Sites such as MySpace that allow customers to be developers put themselves at risk. This user-provided content is then available through a Web feed. Some sites employ blacklisting to control content, using signatures and regular expressions. But this system can be broken, as was the case in the infamous Samy worm.
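The Samy worm is a textbook case of why signature-based blacklisting breaks down. A minimal sketch, assuming a hypothetical filter built from a couple of regex signatures (the patterns and payloads below are illustrative, not MySpace's actual rules), shows how an obfuscated payload slips past rules that catch the obvious one:

```python
import re

# Hypothetical naive blacklist: reject content matching known-bad signatures.
BLACKLIST = [
    re.compile(r"<script", re.IGNORECASE),
    re.compile(r"javascript:", re.IGNORECASE),
]

def blacklist_allows(content: str) -> bool:
    """Return True if the naive blacklist would let this content through."""
    return not any(sig.search(content) for sig in BLACKLIST)

# A straightforward payload is caught by the signatures...
assert not blacklist_allows('<script>alert(1)</script>')

# ...but an obfuscated one gets through: the Samy worm split the word
# "javascript" with a newline, which browsers of the era still executed.
assert blacklist_allows('<div style="background:url(java\nscript:alert(1))">')
```

Every new encoding trick requires a new signature, which is why blacklists lag behind attackers by design.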
"Web feeds can be used to do the same types of attacks [as Web sites] if we don't validate input," Sutton said. Risks in the remote zone include familiar vulnerabilities such as cross-site request forgery (CSRF), cross-site scripting (XSS) and distributed denial of service (DDOS).
The local zone is, generally speaking, safer, but it is still vulnerable. "Some readers convert a feed into an HTML file, store it locally and then render it in a Web browser," explained Sutton. "The client is treating it as a local zone, so different security standards apply." Access to XMLHttpRequest (XHR) and ActiveX objects may be compromised.
On the server side, those responsible should "restrict user input and user tags." However, blacklisting can be circumvented, Sutton warned. Whitelisting is better than blacklisting, he said, because its rules define exactly what is accepted and reject everything else by default, making it much stricter. For further protection, one may start with a whitelist and layer a blacklist on top.
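The whitelist-plus-blacklist layering Sutton described can be sketched roughly as follows. This is a hypothetical username validator of my own construction, not an example from the talk; the field name and rules are assumptions:

```python
import re

# Whitelist first: only explicitly allowed characters and lengths pass.
USERNAME_WHITELIST = re.compile(r"[A-Za-z0-9_]{1,32}")

# A blacklist layered on top for defense in depth: names that pass the
# character rules but are still unacceptable.
BLACKLIST_TERMS = {"admin", "root"}

def is_valid_username(name: str) -> bool:
    if not USERNAME_WHITELIST.fullmatch(name):
        return False  # reject anything not explicitly allowed
    return name.lower() not in BLACKLIST_TERMS  # extra blacklist layer

assert is_valid_username("alice_42")
assert not is_valid_username("<script>")  # fails the whitelist outright
assert not is_valid_username("admin")     # passes whitelist, caught by blacklist
```

Note the ordering: the whitelist does the heavy lifting, and the blacklist only trims edge cases, so a missed blacklist entry never reopens the injection surface.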
"There's not much you can do" for the client side, Sutton said. Try to secure the Web feed readers being used against feed injection, he recommended.
"Web services involve machine-to-machine communication," Sutton said. "That simple thing makes people forget about security." But they are vulnerable to all of the same exploits as Web applications. The difference is that in order to fight session hijacking and similar attacks in Web services, you need tools that are uniquely suited to Web services.
Companies often tack on Web services without thinking about security, Sutton said. "They forget about validation routines," he said. Simply remembering to implement security measures would offer a fair amount of protection for Web services.
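What "remembering to implement security measures" looks like in practice is a validation routine that runs before any backend logic. A minimal sketch for a hypothetical web service handler (the endpoint, parameter name, and rule are illustrative assumptions):

```python
def handle_get_order(params: dict) -> dict:
    """Hypothetical service handler: validate input before touching the backend."""
    order_id = params.get("order_id", "")
    # An order ID is strictly numeric; anything else is rejected up front.
    if not order_id.isdigit():
        return {"status": 400, "error": "invalid order_id"}
    return {"status": 200, "order_id": int(order_id)}

assert handle_get_order({"order_id": "1005"}) == {"status": 200, "order_id": 1005}

# A SQL-injection-style payload never reaches the database layer.
assert handle_get_order({"order_id": "1 OR 1=1"})["status"] == 400
```

The point is not the specific rule but its placement: machine-to-machine callers get the same distrust as browser users.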
There are several challenges to Ajax security. Some of the Ajax application's business logic may be stored on the client side, where it is quite vulnerable. Ajax "increases the surface area," Sutton told his audience, exposing attack vectors. In order to find these attack vectors, security tools must understand the XHR object.
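One way a scanner can "understand the XHR object" is by mining client-side JavaScript for the endpoints Ajax calls expose. A toy sketch of that idea, with an invented snippet of JavaScript and a deliberately simplistic pattern (real tools parse scripts rather than grep them):

```python
import re

# Invented client-side JavaScript containing an Ajax call.
JS_SOURCE = """
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/account?id=7", true);
xhr.send();
"""

# Naive pattern: pull method and URL out of xhr.open(...) calls.
ENDPOINT_RE = re.compile(r'xhr\.open\(\s*"(GET|POST)"\s*,\s*"([^"]+)"')

endpoints = [m.group(2) for m in ENDPOINT_RE.finditer(JS_SOURCE)]
assert endpoints == ["/api/account?id=7"]
```

Each URL recovered this way is an input vector that server-side validation must cover, which is exactly the "increased surface area" Sutton warned about.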
For Ajax security testing, Sutton recommended dynamic analysis and static analysis.