BALTIMORE -- If you think like a hacker, you can better protect your software.
That was the main message at this week's Secure Software Summit. Avi Rubin, technical director of the Information Security Institute at Johns Hopkins University, in particular, emphasized that concept during his keynote address on breaking security systems.
"If you just make systems and don't break them, you don't know how to think like a hacker," Rubin said.
Rubin, author of the books Web Security Sourcebook, White-Hat Security Arsenal and the upcoming Brave New Ballot, told the nearly 100 conference attendees about the benefits of breaking systems. For him and many other security testers, the best part is that it's fun.
"It's interesting and challenging, and it teaches you how to build systems more securely," Rubin said. "People who make the thing think I'm a troublemaker and I'm just trying to make a name for myself."
In reality, however, it keeps companies honest, he said. They don't want to see their names in news reports.
Rubin acknowledged that it's significantly easier to break a system than to build a secure system. "A hacker has to be right only once. A builder has to anticipate all types of attacks," he said. "The consequences of building a bad system are worse than the consequences of a bad attack."
Knowing that, builders have to outline threats at the start of software development. "When building a system, security doesn't mean anything without a threat model," Rubin said.
In addition, many decisions need to be made: What protocols and algorithms should be used? What effect will security have on performance? How do you measure security?
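A threat model, in practice, is just a written answer to those questions: what are we protecting, from whom, and under what assumptions? The sketch below is my own illustration of that idea, not Rubin's methodology; the class and field names are hypothetical.

```python
# A minimal threat-model sketch (illustrative only, not Rubin's methodology):
# record what we protect, who might attack, what we trust -- and flag
# any asset for which no attack has been considered yet.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    assets: list        # what an attacker would want
    adversaries: list   # who might attack, and with what capabilities
    assumptions: list   # things we trust (and must justify)
    threats: dict = field(default_factory=dict)  # asset -> attacks considered

    def uncovered_assets(self):
        """Assets with no attacks considered -- gaps in the model."""
        return [a for a in self.assets if not self.threats.get(a)]

model = ThreatModel(
    assets=["customer records", "signing key"],
    adversaries=["remote attacker", "malicious insider"],
    assumptions=["TLS endpoints are configured correctly"],
)
model.threats["customer records"] = ["SQL injection", "stolen backup"]

print(model.uncovered_assets())  # the signing key is still unexamined
```

Even a toy structure like this makes Rubin's point concrete: "secure" is only meaningful relative to the listed adversaries and assumptions.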
Designing software securely is a challenge: even a system you believe is secure can contain implementation errors, and the threat model itself could be wrong, Rubin said.
Other ways things can go wrong:
- Bugs in the code
- Poor administration
- Malicious insider threats
- Unrealistic assumptions about attackers
If a vulnerability is discovered
In some cases, people discover vulnerabilities accidentally. In other cases, however, researchers consider it a challenge to break systems. Regardless, people need to report the issue to the vendor directly, Rubin said.
"It is the responsible thing to go to the vendors first," he said. "They may deny [the vulnerability], but you need to do that."
Rubin explained how he and his graduate students at Johns Hopkins discovered how to break the Texas Instruments Registration and Identification System (TIRIS), the RFID chips used as immobilizers in 150 million vehicle keys and in the Exxon Mobil Speedpass.
The students spent several months reverse-engineering the cipher. Once they had broken it, they were able to think of several ways to break the system, Rubin said. Among other things, they figured out how to start a car that requires a chip-enabled key using an ordinary key. They were also able to scan a person's Speedpass, capture that information and use it to purchase gas.
"Once they knew the key, they could fool a reader and spoof a valid tag to buy gas and start a car," he said.
Rubin told Texas Instruments and Exxon Mobil about the system break and that he wanted to publish a paper on it. The companies didn't believe him and wanted proof that it could be done. Rubin's team proved it and the paper was published.
Security breaks such as that prove that if an attacker has a will, he can find a way. "No useful system is really secure," Rubin said. "Some are just harder to break."