Think there's no need to secure open source code because the community vets it? Think again. OSS security flaws are abundant, and catching them calls for some DIY security.
The many-eyes theory implies that open source software is secure because scores of developers have tested it. Surprisingly, many enterprises believe the myth and leave OSS security to community developers -- in spite of newsmaking security breaches caused by flawed open source code.
Vulnerabilities in open source components or dependencies led to suspected or verified security breaches for 31% of the more than 2,000 respondents for the 2017 DevSecOps Community Survey. Only 14% reported OSS security incidents in the 2014 survey. Despite the growing threat, 62% of enterprises without DevOps practices don't place controls on open source and third-party components used in development. That's a huge oversight, considering that open source code is used in 96% of commercial applications, according to the 2017 Black Duck report on OSS risks.
Oddly enough, businesses employ IT security experts to secure their own original code, but they rarely vet the qualifications of the contributors to the OSS they use.
Most developers who contribute to projects aren't security experts, said Ben Lambert, engineering director and an instructor at Cloud Academy, a training organization in San Francisco. "Just in cryptography alone, the qualified pool of engineers who can effectively review the code for security flaws is shallow," Lambert said.
The reality that many people have access to open source code is both a strength and a weakness. Some contributors view and improve that code; others try to slip malware or other attack code into it.
"As is usually the case with things that are meant for good, certain entities have manipulated their privileges for malevolent purposes," said Eric Cole, CEO of Secure Anchor Consulting in Ashburn, Va. "This can lead to installing software, or even an entire operating system, that is a ticking time bomb."
One of the reasons the use of open source code is such risky business is the assumption that others will secure it.
Open source projects without strong security guidance often end up including unknown third-party libraries pulled from package managers, Lambert said. Many developers specify loose version ranges so that future patches are picked up automatically, but that also means unvetted code can flow in. The further removed a dependency is, the more attractive it becomes as an attack vector. For example, a dependency that's four projects removed is probably not well scrutinized. Publicly controlled package managers should be used with care, as their convenience can be outweighed by the security drawbacks.
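Dependency declarations make that tradeoff concrete. A hypothetical Python requirements file illustrates the two extremes -- the package versions shown are examples only, and the hash is a placeholder:

```text
# Open range: picks up future patches automatically, but also pulls in
# whatever the package manager serves tomorrow, unreviewed.
requests>=2.0

# Exact pin plus hash checking: the install fails if the published
# artifact ever changes out from under you.
requests==2.31.0 \
    --hash=sha256:...
```

Neither form is free: ranges trade review for patch velocity, while pins trade patch velocity for reproducibility and must be refreshed deliberately when security updates ship.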
Overreliance on third-party libraries isn't the only source of OSS security risk. Developers also make mistakes when re-engineering well-established open source services or functions. "Most developers are not security experts, but they know just enough to try to implement their own input sanitization code or something more advanced. This can create a false sense of risk mitigation," Lambert said.
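A sketch of the failure mode Lambert describes, using Python's built-in sqlite3 module: a query built with hand-rolled string formatting versus the parameterized form the driver already provides. The table and payload are purely illustrative.

```python
import sqlite3

# Hand-rolled query built with string formatting: classic injection risk.
def unsafe_lookup(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % username
    ).fetchall()

# Parameterized query: the driver treats the input strictly as data.
def safe_lookup(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(unsafe_lookup(conn, payload))  # [(1,)] -- injection leaks the row
print(safe_lookup(conn, payload))    # []     -- payload treated as a literal
```

The point is not that sanitization is impossible to write, but that a vetted mechanism already exists in the driver; reimplementing it by hand is where the false sense of risk mitigation creeps in.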
OSS security best practices
Get rid of the bystander apathy that passes the buck on OSS security, advised Kevin Beaver, an information security consultant at Principle Logic in Atlanta. Secure the open source code and software used in your business yourself, Beaver said, or hire a qualified security expert to do it.
A disciplined approach includes software and system security standards that apply across the board. Security testing should be integrated throughout development and QA lifecycles, as should practices for code reviews and fixes.
To Cole, it's important to write software that adheres to formal, recognized secure coding standards. Ensure that user input is validated and sanitized to prevent injection attacks. Build in parameter checking to prevent buffer overflows. Employ secure data handling practices, such as encryption. And, he urged, review OSS security prior to deployment.
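The parameter checking Cole recommends can be as simple as an allowlist gate at the boundary. A minimal sketch in Python -- the field name, length limit and character set are assumptions for illustration, not a standard:

```python
import re

MAX_NAME_LEN = 64  # hypothetical bound; pick limits that fit your data model
NAME_RE = re.compile(r"^[A-Za-z0-9_.-]+$")

def validate_username(raw):
    """Allowlist-based parameter check: reject anything outside the
    expected type, length and character set before it reaches the
    rest of the application."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    if not 1 <= len(raw) <= MAX_NAME_LEN:
        raise ValueError("username length out of bounds")
    if not NAME_RE.match(raw):
        raise ValueError("username contains disallowed characters")
    return raw

print(validate_username("build-agent_01"))  # passes through unchanged
```

Rejecting bad input at the edge, rather than trying to clean it up downstream, is what keeps the bounds checking enforceable.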
Don't focus only on application security. Be sure open source APIs, software development kits (SDKs) and code dependencies function as intended and do not introduce unchecked security risks. Cole said he has seen companies, including Facebook, experience data leakage, data exfiltration and other problems resulting from API and SDK security weaknesses.
Also, stay current on compliance and regulation requirements, which will grow only more stringent.
"Regulations like the European Union's GDPR place the burden of commercial and OSS security on the company and ultimately the developer," Cole said. General Data Protection Regulation (GDPR) rules that will impact DevOps practices include right to erasure, pseudonymization, secure processing and data protection by design and default.
Throw out the manual (practices)
Automation is a necessity for organizations that have short release cycles, wherein both security patches and new features are deployed quickly. Developers know that there's no time to audit every release. Using automated scanners, patch managers and real-time monitoring tools will catch many, but not necessarily all, of the problems, Lambert said.
"Patching often has the greatest security consequences, yet it's the most difficult thing to master," Beaver said. It's folly to do security updates without automated patch management tools, he added.
Use automated tests whenever possible to examine everything as if it were code. Compliance testing, for instance, can help ensure that standards are followed. Automate static code analysis in CI/CD processes, as well as unit, integration and dynamic security checks. Some organizations use dynamic application security testing (DAST) to get started with automation, Lambert said. DAST isn't comprehensive, but it catches obvious flaws and vulnerabilities.
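To make the static-analysis idea concrete, here is a toy check built on Python's standard ast module -- a sketch of the principle, not a substitute for real tools such as Bandit or a commercial SAST product. It walks a syntax tree and flags calls to eval and exec:

```python
import ast

# Hypothetical denylist for this sketch; real analyzers ship far richer rules.
RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source):
    """Return the line numbers of risky call sites in the given source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(node.lineno)
    return findings

sample = "x = 1\nresult = eval(user_input)\n"
print(flag_risky_calls(sample))  # [2]
```

Wired into a CI pipeline as a failing check, even a rule this crude runs on every commit -- which is the property manual review can't match.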
Let no open source component go unchecked, security experts say. After all, no one wants to face headlines about a breach that slipped in through the back door because of an OSS security lapse.