The risk of undermanaged open source software



There are a lot of myths surrounding open source software, but one that continues to permeate conversations is that open source is not as secure as proprietary offerings. At face value, the claim seems plausible: how do you secure the supply chain for a product created in an environment where anyone can contribute to it?

But perceptions are changing, as open source code now runs many of the most sophisticated computational workloads in existence. In fact, according to Red Hat's 2022 State of Enterprise Open Source report, 89% of respondents believe that enterprise open source software is as secure as, or more secure than, proprietary software.

Even if misplaced security concerns linger, it doesn’t seem to be slowing down open source adoption. Open source powers some of the world’s most recognizable companies that we rely on daily – from Netflix and Airbnb to Verizon and The American Red Cross. This usage continues to grow, with Forrester’s State of Application Security 2021 report indicating that 99% of audited codebases contain some amount of open source code. This wouldn’t be the case if the organizations deploying these solutions did not trust the security of the software used.

Relying on open source doesn't mean opening your organization up to vulnerabilities, as long as the code is reviewed for security concerns. Unlike proprietary software, open source code is fully viewable and therefore auditable. The key for enterprise use of open source, then, is to make sure you're not undermanaging it. But while the opportunity to audit is there, the expertise may not be: the auditability so often touted as an advantage of open source is not a practical advantage for every organization using it. Many users do not have the time, expertise or wherewithal to conduct security audits of the open source they use, so other avenues are needed to obtain similar assurances about that code. When sensitive workloads are deployed, trust alone is not enough; "trust but verify" is the mantra to keep in mind.

There is always going to be a certain amount of risk we take on when it comes to technology, and software in particular. But since software is deeply ingrained in everything we do, not using it isn’t an option; instead, we focus on risk mitigation. Knowing where you get your open source from is your first line of defense. 

When it comes to open source software, there are two primary options for organizations – curated (or downstream) and community (or upstream). Upstream in open source refers to the community and project where contributions happen and releases are made. One example is the Linux kernel, which serves as the upstream project for all Linux distributions. Vendors can take the unmodified kernel source, add patches, apply an opinionated configuration, and build the kernel with the options they want to offer their users. This then becomes a curated, downstream open source offering or product.
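
To make that curation step concrete, here is a minimal sketch, assuming a local, unmodified kernel source tree and hypothetical patch and configuration files; a real vendor build system is far more elaborate, but the shape is the same: patch, configure, build.

```python
# Hypothetical sketch of a vendor-style "curate, configure, build" step.
# Paths, patch names and the config fragment are placeholders, not a real
# vendor pipeline; it assumes an unmodified upstream source tree on disk.
import shutil
import subprocess
from pathlib import Path

SRC = Path("linux-6.1")                                # unmodified upstream source
PATCHES = [Path("patches/fix-some-cve.patch")]         # vendor-selected fixes
CONFIG_FRAGMENT = Path("configs/opinionated.config")   # vendor's chosen options

def run(cmd):
    """Run a build command inside the source tree, failing loudly on error."""
    subprocess.run(cmd, cwd=SRC, check=True)

# 1. Apply the curated patches on top of the upstream release.
for patch in PATCHES:
    run(["patch", "-p1", "-i", str(patch.resolve())])

# 2. Start from the opinionated configuration, then let the kernel's own
#    tooling fill in defaults for anything the fragment doesn't mention.
shutil.copy(CONFIG_FRAGMENT, SRC / ".config")
run(["make", "olddefconfig"])

# 3. Build the result: this artifact is what ships as the curated,
#    downstream offering.
run(["make", "-j4"])
```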

Some risks are the same regardless of whether solutions are built with vendor-curated or upstream software; what changes is who is responsible for maintaining and securing the code. Let's make some assumptions about a typical organization. That organization can identify where all of its open source comes from, and 85% of it comes from a major vendor it works with regularly. The other 15% consists of offerings not available from the vendor of choice and comes directly from upstream projects. For the 85% that comes from a vendor, any security concerns, security metadata, announcements and, most importantly, security patches come from that vendor. In this scenario, the organization has one place to get all of the needed security information and updates. It doesn't have to monitor the upstream code for newly discovered vulnerabilities; essentially, it only needs to monitor the vendor and apply any patches the vendor provides.

On the other hand, monitoring the security of the remaining 15% of open source code obtained directly from upstream is the organization's own responsibility. It needs to constantly watch those projects for information about newly discovered vulnerabilities, patches and updates, which can consume a significant amount of time and effort. Unless the organization can dedicate a team of people to this work, systems can be left vulnerable, which can have costly impacts. In this hypothetical scenario, uncurated open source is a much smaller percentage of the infrastructure, but the support burden for that 15% is decidedly higher than for the 85% provided by the vendor.
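
To give a sense of what that monitoring looks like in practice, the sketch below, assuming a placeholder component inventory, asks the public OSV.dev vulnerability API whether known advisories affect the upstream packages an organization consumes directly; a real process would also track project mailing lists and security announcements.

```python
# Minimal sketch: ask a public vulnerability database (OSV.dev) whether
# known advisories exist for the upstream components consumed directly.
# The component list is a placeholder for an organization's real inventory.
import json
import urllib.request

COMPONENTS = [
    # (ecosystem, package name, pinned version) -- illustrative only
    ("PyPI", "requests", "2.25.0"),
    ("npm", "lodash", "4.17.20"),
]

def known_vulns(ecosystem: str, name: str, version: str) -> list[dict]:
    """Query the OSV API for advisories affecting one pinned component."""
    query = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for eco, name, version in COMPONENTS:
    vulns = known_vulns(eco, name, version)
    ids = ", ".join(v["id"] for v in vulns) or "none known"
    print(f"{eco}/{name} {version}: {ids}")
```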

At first glance, it may seem that applying patches to upstream open source code takes the same effort as applying patches to vendor-supported open source code, but there can be important differences. Most upstream projects provide fixes by updating the code in the most recent version (or branch) of the project. Patching a vulnerability therefore requires updating to that most recent version, which can add risk: the newer version may contain changes that are incompatible with how the organization used the previous version, or it may include other issues that simply haven't been discovered yet because the code is newer.

Vendors that curate and support open source software often backport vulnerability fixes to older versions, essentially isolating the upstream change that fixes a particular issue and applying it to an earlier version. This provides a more stable foundation for applications consuming that software while still addressing the newly discovered vulnerability. Experience has shown that backporting reduces the risk of introducing undiscovered vulnerabilities, and that older software which is actively patched for security issues becomes more secure over time. Conversely, because new versions introduce new code, they carry a higher risk of introducing new security issues.
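
As a concrete illustration of what a backport looks like mechanically, the sketch below, assuming a git-managed project with a hypothetical maintenance branch and fix commit, replays a single upstream fix onto an older branch rather than upgrading to the newest release.

```python
# Minimal sketch of a backport: replay one upstream fix commit onto an
# older maintenance branch instead of upgrading to the newest release.
# The repo path, branch name and commit hash are hypothetical placeholders.
import subprocess

REPO = "project"                   # local clone of the upstream project
STABLE_BRANCH = "1.4-maintenance"  # the older version the organization ships
FIX_COMMIT = "abc1234"             # upstream commit that fixes the issue

def git(*args):
    """Run a git command against the local clone, failing loudly on error."""
    subprocess.run(["git", "-C", REPO, *args], check=True)

# Check out the older branch the organization actually runs ...
git("switch", STABLE_BRANCH)

# ... and apply only the isolated fix; -x records the upstream commit hash
# in the new commit message so the change's origin stays traceable.
git("cherry-pick", "-x", FIX_COMMIT)

# The result is the old, well-understood codebase plus exactly one change,
# rather than every new feature that landed since the fix was written.
```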

That’s not to say you shouldn’t use upstream open source. Organizations can, and do, consume software directly from upstream projects. There are many reasons for using upstream open source in production environments, including cost savings and access to the latest features. And no enterprise vendor can provide all of the open source that consumers may use. GitHub alone hosts millions of projects, making it impossible for any vendor to support them all. 

There will likely be some upstream open source that is consumed directly, and this code, along with any code written by the organization itself, is where the majority of a security team's time and effort should be focused. If that footprint is small, the cost and associated risk will be smaller as well. Every organization will likely consume some open source directly from upstream, and it needs to know what that code is, how and where it is used, and how to track upstream developments for potential security issues. Ideally, the bulk of an organization's open source will come from an enterprise vendor, which lowers the overall cost of consumption and decreases the associated risk of using it.

Securing the software supply chain

Knowing where your open source originates is the first step to decreasing exposure, but supply chain attacks are still growing rapidly. According to Sonatype's 2021 State of the Software Supply Chain report, 2021 saw a 650% increase in software supply chain attacks aimed at exploiting weaknesses in upstream open source ecosystems. One of the most publicized attacks had nothing to do with open source code itself; instead, it targeted the integrity of a company's patch delivery process. With the number of high-profile and costly attacks on organizations in the news over the past few years, increased attention and scrutiny is (rightly) being placed on supply chain security.

Different actions are required to prevent or mitigate different types of attacks. In all cases, the principle of “trust but verify” is relevant.

Organizations can address this in part by shifting security left in new ways. Historically, shifting security left has focused on adding vulnerability analysis to the CI/CD pipeline. This is a good "trust but verify" practice when using both vendor-provided and upstream code, but vulnerability analysis on its own is not enough. In addition to the binaries produced by the pipeline, application deployments require additional configuration data. For workloads deployed to Kubernetes platforms, configuration data may be provided through Kubernetes PodSecurityContexts, ConfigMaps, deployments, operators and/or Helm charts. That configuration data should also be scanned for potential risks such as excess privileges, including requests to access host volumes and host networks.
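
As an illustration of that kind of configuration scanning, the sketch below uses PyYAML to load Kubernetes manifests and flag a few of the riskier settings mentioned above (host networking, privileged containers, host volume mounts); real policy engines cover far more rules.

```python
# Sketch of a minimal configuration check: flag a few high-risk settings
# in Kubernetes manifests before they reach a cluster. Real policy engines
# enforce far more rules; this only illustrates the idea.
import sys
import yaml  # PyYAML

findings = []

def check_pod_spec(doc_name: str, pod_spec: dict) -> None:
    """Record obviously excessive privileges in a pod specification."""
    if pod_spec.get("hostNetwork"):
        findings.append(f"{doc_name}: requests access to the host network")
    for volume in pod_spec.get("volumes", []):
        if "hostPath" in volume:
            findings.append(f"{doc_name}: mounts host volume {volume['hostPath']}")
    for container in pod_spec.get("containers", []):
        security = container.get("securityContext", {})
        if security.get("privileged"):
            findings.append(f"{doc_name}: container '{container.get('name')}' runs privileged")

for path in sys.argv[1:]:
    with open(path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue
            name = f"{path}/{doc.get('metadata', {}).get('name', '?')}"
            # Deployments and similar objects nest the pod spec under
            # spec.template.spec; bare Pods keep it directly under spec.
            spec = doc.get("spec", {})
            pod_spec = spec.get("template", {}).get("spec", spec)
            check_pod_spec(name, pod_spec)

print("\n".join(findings) or "no obvious risky settings found")
```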

Additionally, organizations need to protect their supply chain from intrusion. To better support this effort, organizations are adopting new technologies in their software pipelines, such as Tekton Chains, which attests to the steps in the CI/CD pipeline, and Sigstore, which makes it easier to have artifacts signed in the pipeline itself rather than after the fact.

Sigstore is an open source project that enhances security for software supply chains in an open, transparent and accessible manner by making cryptographic signing easier. A digital signature effectively freezes an object in time, indicating that in its current state it is verified to be what it says it is and that it hasn't been altered in any way. By digitally signing the artifacts that make up applications, including the software bill of materials, component manifests, configuration files and the like, organizations gain insight into the chain of custody.
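
Sigstore's keyless signing and transparency log sit on top of ordinary digital signatures, so a stripped-down sketch with the Python cryptography library (an illustrative stand-in, not Sigstore's own tooling) is enough to show why a signature freezes an artifact in time: change one byte and verification fails.

```python
# Stripped-down illustration of what signing an artifact achieves: any later
# change to the bytes makes verification fail. Sigstore layers keyless
# certificates and a transparency log on top of this basic mechanism.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

artifact = b"contents of a component manifest, SBOM, or config file"

# Producer side: sign the artifact's bytes.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(artifact)
public_key = private_key.public_key()

# Consumer side: verification succeeds only if the bytes are unchanged.
try:
    public_key.verify(signature, artifact)
    print("artifact verified: unchanged since signing")
except InvalidSignature:
    print("artifact rejected: contents were altered")

# Tampering with even one byte breaks the chain of custody.
try:
    public_key.verify(signature, artifact + b"!")
except InvalidSignature:
    print("tampered artifact correctly rejected")
```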

Additionally, proposed standards for delivering software bills of materials (SBOMs) have been around for quite some time, but we've reached the point where every organization will need to figure out how to deliver one. Standards need to be set not only for the static information in an SBOM but also for corresponding, yet separate, dynamic information such as vulnerability data, where the software package hasn't changed but the vulnerabilities associated with that package have.
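
The sketch below illustrates that static/dynamic split with hypothetical file names and a minimal CycloneDX-style structure: the SBOM shipped with a release never changes, while the vulnerability feed joined against it does.

```python
# Sketch of the static/dynamic split: the SBOM lists what the software
# contains and does not change; vulnerability data about those same
# components changes constantly and is delivered separately.
import json

# Static: a CycloneDX-style SBOM shipped with a release (hypothetical file).
with open("app-1.0.cdx.json") as f:
    sbom = json.load(f)

components = {
    (c.get("name"), c.get("version"))
    for c in sbom.get("components", [])
}

# Dynamic: a vulnerability feed refreshed independently of the release
# (illustrative structure, not a real feed format).
with open("vuln-feed.json") as f:
    feed = json.load(f)  # e.g. [{"name": ..., "version": ..., "id": "CVE-..."}]

for entry in feed:
    key = (entry.get("name"), entry.get("version"))
    if key in components:
        print(f"{key[0]} {key[1]} in this SBOM is affected by {entry['id']}")
```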

While security may seem like a constantly moving target, the intense scrutiny around software security in the past several years means more strategies and tools to reduce risk are being developed and implemented every day. That said, addressing security effectively requires organizations to regularly review and iterate on their security policies and tool choices, and to ensure that everyone in the organization is engaged and educated in these processes.

Kirsten Newcomer is director of cloud and DevSecOps strategy at Red Hat.

Vincent Danen is VP of Product Security at Red Hat.

