
Software Supply Chain Management: The Newest Trend

Software Supply Chain Management is the newest shiny object in both the DevOps and the DevSecOps community discussions. But what does it mean in relation to software development?

Historically, Supply Chain Management is a ‘commerce’ term that refers to tracking the logistics of goods and services moving between producer and consumer. This includes the storage and flow of raw materials, with a focus on the channels that support the process.

When we look at this concept from the perspective of software development, we soon see that it is a new form of Software Configuration Management. Yes, the new SCM is SCM. The core of Supply Chain Management is the process of controlling changes in software. The new SCM expands our old practices to consider all of the ‘raw materials’ we manage as part of our software build and package step. Companies purchase and download open-source libraries and use them in their in-house software. Software Supply Chain Management includes the interrogation of these ‘raw materials’ as well as all of the internal code and libraries. We will soon be hearing more about Software Bill of Materials (SBOM) reports and difference reports, as they are critical for understanding the objects that flow through the software we create.

Historically, not much has been discussed around the software supply chain or the software configuration management practice. This area of expertise has never been ‘sexy.’ That is, until the SolarWinds ‘supply chain’ hack. The SolarWinds hack was a hard lesson to learn, but it taught us to take a closer look at how we consume our ‘raw materials’ and to be more serious about how we build and package our software.

According to FireEye, the firm that discovered the ‘backdoor’ supply chain hack:

“SolarWinds.Orion.Core.BusinessLayer.dll is a SolarWinds digitally-signed component of the Orion software framework that contains a backdoor that communicates via HTTP to third party servers. We are tracking the trojanized version of this SolarWinds Orion plug-in as SUNBURST.”

It was obvious that something went very wrong with the creation of that .dll. I don’t pretend to fully understand how this breach occurred, but I do understand the build process, and I have been talking about the potential for these types of breaches for over 25 years.

Software Supply Chain and the Build

There have always been big security gaps and vulnerabilities in our most basic software development practice – the compile/link process. Most build processes are imperatively defined using an Ant, Make or Maven script. Each time the build runs, it rebuilds all the binaries, even if the corresponding source code has not changed.

This is a problem. Instead, we need builds that are incremental. Incremental builds are difficult to script because they need more advanced logic, and most developers are not given the time to create a better build process, one that only rebuilds what has changed. This means it is next to impossible to audit why a binary was recompiled, because the answer is that everything is always recompiled. If we only recompiled the binaries with corresponding code updates, we could more carefully audit the code and make sure that only approved coding updates were included.
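To make the point concrete, here is a minimal sketch of an incremental, auditable build step. It is illustrative only: the src/ directory, the gcc compiler and the JSON hash manifest are assumptions made for the example, not any vendor’s implementation. The idea is that only sources whose content actually changed get recompiled, and the list of rebuilt files becomes a small difference report that can be checked against approved code changes.

```python
# Illustrative sketch: rebuild only the sources whose content changed,
# and record what was rebuilt so it can be audited.
# Assumptions: C sources under src/, compiled with gcc, with a JSON
# manifest of source hashes kept from the previous build.
import hashlib
import json
import subprocess
from pathlib import Path

SRC_DIR = Path("src")                    # hypothetical source directory
OBJ_DIR = Path("build/obj")              # compiled objects land here
MANIFEST = Path("build/manifest.json")   # source hashes from the last build


def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def incremental_build() -> list:
    OBJ_DIR.mkdir(parents=True, exist_ok=True)
    previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    current, rebuilt = {}, []
    for src in sorted(SRC_DIR.glob("*.c")):
        digest = sha256(src)
        current[src.name] = digest
        if previous.get(src.name) == digest:
            continue                     # unchanged source: reuse the existing object
        obj = OBJ_DIR / (src.stem + ".o")
        subprocess.run(["gcc", "-c", str(src), "-o", str(obj)], check=True)
        rebuilt.append(src.name)
    MANIFEST.write_text(json.dumps(current, indent=2))
    return rebuilt


if __name__ == "__main__":
    changed = incremental_build()
    # This list is the difference report: every entry should map to an approved change.
    print("Recompiled:", changed if changed else "nothing (no source changes)")
```

A compile that shows up in that report without a matching approved change is exactly the kind of anomaly a ‘clean all’ build can never surface.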

This may not have caused the SolarWinds attack, but it could have. Once the hackers have gotten past the firewall, they can find the build directory, read through the build scripts, and identify the perfect piece of code to update with their ‘bad function.’ The next time the build runs, a ‘clean all’ recompiles everything and, like magic, a SUNBURST is created.

In our not-too-distant past, the developer community had tools that controlled software builds beyond what an imperative script could do. OpenMake Software provided a build automation solution called Meister that made incremental builds easy. As part of the build, it generated Software Bill of Materials reports and difference reports, allowing teams to easily audit and compare the actual source changes to confirm that only the approved code updates were included. Rational Software’s ClearCase included a program called ClearMake that used a process called ‘winkin’ to reuse ‘derived’ objects that had not changed. This process created an incremental build that could be easily audited.

I realize that I am simplifying the overall problem in this case. But it is a good example of the basic Software Supply Chain practice and how potentially vulnerable our processes are.

Now is the time to think carefully about how we manage our ‘raw materials’ and our overall supply chain. I’m guessing that most monolithic practices will not see much change; it is too costly and disruptive to rewrite hundreds of CI pipelines. But that does not mean we can’t solve the supply chain puzzle as we shift into cloud-native development in a microservice architecture.

In a microservice application, we need to begin thinking about what the supply chain looks like. How do we track ‘raw materials’ that are now in the form of hundreds of small ‘bounded context’ functions? The good news is that there is an emerging microservice catalog market that can be part of the supply chain solution. A microservice catalog like DeployHub can map versions of services to versions of their consuming applications. Your new Software Bill of Materials report will have two levels: first, an SBOM for each microservice; second, an aggregation of that data at the application level. Remember, in this new architecture a full build is not done.
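As a rough illustration of that two-level idea, the sketch below rolls service-level SBOM data up to the application level. The structures and names are hypothetical; they are not DeployHub’s data model or a formal SBOM format such as SPDX or CycloneDX, just a picture of how an application version can aggregate the SBOMs of the exact service versions it consumes.

```python
# Illustrative sketch: aggregate per-microservice SBOMs into an
# application-level SBOM. All names and data are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ServiceSBOM:
    name: str
    version: str
    packages: dict            # e.g. {"spring-core": "5.3.20"}


@dataclass
class ApplicationSBOM:
    name: str
    version: str
    services: list = field(default_factory=list)

    def aggregate(self) -> dict:
        """Roll every service-level package up to the application level."""
        rollup = {}
        for svc in self.services:
            for pkg, ver in svc.packages.items():
                rollup.setdefault(pkg, set()).add(ver)
        return rollup


# Usage: two service versions consumed by one application version.
checkout = ServiceSBOM("checkout", "1.4.2", {"spring-core": "5.3.20"})
payments = ServiceSBOM("payments", "2.0.1", {"spring-core": "5.3.18", "bcprov": "1.70"})
app = ApplicationSBOM("storefront", "2022.06", [checkout, payments])

for pkg, versions in sorted(app.aggregate().items()):
    print(pkg, sorted(versions))   # flags packages pulled in at more than one version
```

Because each service is versioned independently, the application-level roll-up can be recalculated whenever a new service version is consumed, with no full rebuild required.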

It is important to understand that microservices are naturally ‘incremental’ and are independently built, packaged and deployed. Managing these small services opens the door to more closely auditing their source code before they are released. As we shift from monolithic development to microservices, we have an opportunity to incorporate supply chain and configuration management best practices. This includes being able to carefully audit the code before a release is done. And remember, we must do this as fast as possible. It sounds like a big task, but it is well worth the effort. Collectively we will find the answers as we explore new ways to manage our DevOps pipelines and incorporate configuration management and DevSecOps principles.

About the Author:

Tracy Ragan is CEO and Co-Founder of DeployHub. She is an expert in software configuration management and pipeline practices with a hyper focus on microservices. She currently serves as a board member of the Continuous Delivery Foundation (CDF) and is the Executive Director of the Ortelius Open Source Project incubating at the CDF. Tracy often speaks at industry conferences such as DevOps World and CDCon. She was recognized by TechBeacon as one of the top 100 DevOps visionaries in 2020.