While the Web is still an insecure place, and most Web sites are still insecure, Web site owners now have the knowledge at hand to secure their Web sites should they choose to. Not totally secure, of course, but enough to reduce their "hackability." It would be nice if the bad guys had to work really hard to find that one fatal flaw. Right now it's just shooting fish in a barrel. We have the tools and the knowledge, and the methodologies and best practices are there. Now it's the job of the other side to implement them. On the security vendor side, our job is to make implementing those practices, or developing solutions around those practices, easier and cheaper. Why has it been so difficult to make progress?
There are so many Web sites and vulnerabilities that it's been a struggle to fix all the flaws, or to fix them quickly. The other issue is the transition to the Web 2.0 world. More and more of our important software, and of the things we do with our computers, is Web-based.
In the past, security researchers could find vulnerabilities in software and disclose those issues publicly so the software could be improved. The problem with Web 2.0 software is that the security researcher is not able to hunt for vulnerabilities with the same abandon. It's somebody else's machine, somebody else's software. It could be considered illegal, and there have been some cases to that effect. So what we lose is the good Samaritans out there finding vulnerabilities, and if they are finding them, they're not disclosing them. Only the bad guys are finding and exploiting vulnerabilities. Do Web site owners welcome so-called "white hat" hackers?
With the larger companies with a lot of traffic, everyone in the world is trying to break into their sites. Once they accept that, build process around it, and deal with it, it usually turns out really well for everybody. Those who aren't used to it are usually the smaller players: small e-commerce shops and universities. When someone comes to them with a vulnerability that exposes user data, it's a big shock. One of the immediate reactions is to go to law enforcement and try to silence that person.
Nobody wants to get their Web site hacked or be made to look the fool. People are figuring out that every site has vulnerabilities; nobody is expecting anybody to be perfect -- just find it and fix it as fast as you can. To suppress people who are finding vulnerabilities would be counterproductive, but I don't make the rules. Are organizations disclosing breaches more quickly now?
It depends on the organization. Right now they're at least legally obligated to inform the public of a massive disclosure. What isn't legally obligated is the how. It would be nice to know where the link in the chain broke so we can stop making these same mistakes. For most companies the priority will be damage control. That's troubling, but that trend will probably continue. Where does cross-site scripting rank among Web vulnerabilities?
It's right up there, and it's definitely the most prevalent vulnerability according to any report you read today. It's also been widely underestimated and misunderstood. In the early days the attack was mostly associated with cookie stealing and the like, and nobody paid much attention to it. But within the last two years, the power of this attack has grown. You can develop Web-based worms that can take down large Web sites. It's been used in very convincing phishing attacks, where the phishing page is hosted on the real Web site rather than a fake one. It also can be used to hack intranets. So the diversity of the attack has grown, and people are taking it seriously now. Why are cross-site scripting errors so easy to make?
It's usually because Web sites are taking a massive volume of user-supplied data -- search requests, user posts -- that is coming into the site from a variety of locations and exiting the site from a variety of locations. Any entry or exit point where a mistake is made is where you have a flaw. Cross-site scripting, technically, is fairly simple to eradicate. The difficult part is that you have to be diligent in hundreds if not thousands of spots throughout the Web site. By the time people started to figure out cross-site scripting was bad, there were 100 million Web sites out there that were vulnerable. So it's going to take some time to backtrack. What are some examples of the damage XSS flaws can cause?
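To make the entry/exit-point mistake concrete, here is a minimal sketch in Python; the handler name and page template are hypothetical illustrations, not taken from any real site:

```python
# A hypothetical search handler that reflects user input into HTML.
# The query arrives at an entry point (the URL) and leaves at an exit
# point (the rendered page) with no sanitization in between.

def render_search_results(query: str) -> str:
    # VULNERABLE: user-supplied data is interpolated into markup unescaped.
    return f"<html><body><h1>Results for: {query}</h1></body></html>"

# A request such as ?q=<script>alert(1)</script> produces a page whose
# script runs in the victim's browser under the site's own origin.
page = render_search_results("<script>alert(1)</script>")
print(page)
```

Every place a site emits user data like this, from search boxes to forum posts, is one of the hundreds or thousands of spots that must be checked.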
One that opened everybody's eyes was the Samy worm [on MySpace]. Within 24 hours he was able to amass 1 million friends, and it was growing at an exponential rate. MySpace, in a panic, decided the only way to stop the infection was to voluntarily take down its own Web site for a day to clean up the mess. For a site that makes all its money on advertising revenue, that's significant. Not to mention that Samy's code really had no payload; it was just designed to propagate. Had he decided to do something bad to 1 million browsers at the time, he could have. Have corporate sites and e-commerce sites also been victims of XSS?
They're starting to suffer more phishing attacks as a result of cross-site scripting exploits. Using cross-site scripting, it's entirely possible to host your phishing site on the real Web site, so the domain name will be correct, everything about it will be legitimate, but when the user types their username and password into the site it will be transported somewhere else and stolen.
In terms of the intranet, last year at Black Hat I demonstrated cross-site scripting's capability to bypass the firewall. Say you read my blog and you're sitting behind the corporate intranet, that code that you've downloaded off my blog is now in your browser behind the firewall. I demonstrated how you can hack DSL routers, firewalls, internal printers, or just about anything else your browser can reach. What are some tips your book offers on how developers can avoid XSS vulnerabilities?
The number one solution is to validate input. Make sure you're doing proper input validation and only accepting what you expect to receive. By far the more effective solution -- and you can combine the two -- is to do proper output filtering. Make sure your data, before you print it to the screen, is sanitized. The regular expression or filter to do so is available in a number of frameworks that are very good. Or, it would take one or two lines of regular expression code to fix the flaw; it just has to be done everywhere. Can automated tools help?
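The two measures just described, validating input and sanitizing output before it is printed to the screen, can be sketched briefly with Python's standard re and html modules; the whitelist pattern here is an assumed policy for a simple search field, not a universal rule:

```python
import html
import re

# Input validation: only accept what you expect to receive.
# Assumed policy: a search query of 1-100 word characters or spaces.
VALID_QUERY = re.compile(r"[\w ]{1,100}")

def is_valid_query(query: str) -> bool:
    return VALID_QUERY.fullmatch(query) is not None

# Output filtering: sanitize data before printing it to the screen.
# html.escape converts <, >, &, and quotes into harmless entities.
def render_results(query: str) -> str:
    return f"<h1>Results for: {html.escape(query)}</h1>"

print(is_valid_query("<script>alert(1)</script>"))  # False: rejected on entry
print(render_results("<script>alert(1)</script>"))  # payload neutralized on exit
```

Layering both checks means a mistake at one point is caught by the other, which is exactly why they combine well.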
The other option for companies is Web application firewalls that can provide a stopgap measure to get some relief soon while the code is fixed. You don't want to depend 100% [on a Web application firewall]. You need a layered defense, so should the Web application firewall not work or should the code not work, they're backing each other up. You also offer some tips in your book for how users can avoid being XSS victims?
Every user needs to be able to properly defend himself. The way I surf today, cross-site scripting aside, I do all my promiscuous Web browsing with one browser and load it up with extensions like NoScript and the Netcraft antiphishing toolbar, and make sure it's well patched. My important Web browsing, where my accounts are very important to me, I do in a separate browser, and I only visit those Web sites with that browser. Should my primary browser get hacked, they're not going to be able to access cookies or anything else that's in that other browser, because I've never been to those sites with it. I'm a fan of security with obscurity. What is the next area of application security to pay attention to?
I think the disclosure dilemma will impact the industry pretty hard. The other realization that will take place in the next year or two is that there are too many Web sites, too many vulnerabilities. At WhiteHat, we're finding when we scan staging or development systems side by side with production, their vulnerabilities don't always match. Actually, it's rare for the vulnerabilities to match, which means software vulnerabilities in development will make their way to production eventually, but not all vulnerabilities in production once existed in development. That seems counterintuitive, but configuration differences often creep in, and files such as log files and backup files are left on the production Web servers and are readily found.
We're going to need additional solutions beyond fixing code or developing security in the SDLC (software development life cycle). Secure configuration is going to have a place, and Web app firewalls will probably see significant growth as well.
Jeremiah Grossman, founder and chief technology officer of WhiteHat Security, is an expert in Web application security and a founding member of the Web Application Security Consortium (WASC). At WhiteHat, a provider of Web site security services, Grossman is responsible for Web application security R&D and industry evangelism. He co-wrote Cross Site Scripting Attacks: XSS Exploits and Defense (Syngress, May 2007) with Seth Fogie, Robert Hansen, Anton Rager and Petko D. Petkov.