
Federated Software Composition Analysis

Harvesting SCA Data for Comprehensive Composition Views

What is Software Composition Analysis?

Software Composition Analysis (SCA) is the process of uncovering the open-source packages and transitive dependencies used in your codebase. Initially, SCA was critical for identifying the license a particular package used, since some open-source licenses were not accepted by all organizations. SCA has since grown into a process of evaluating the security and code quality of open-source libraries and tools.

How does SCA Work?

SCA is an automated scanning step that is added to the DevOps Pipeline to interrogate source code, external object libraries, packages, manifest files, and other resources that make up a software system.

It generally occurs at the software build step, during the compile/link of binaries or the creation of the container image. Data gathered through this process is most commonly associated with a particular version, or ‘release’, of a complete software system. The SCA scan produces a Software Bill of Materials (SBOM). The data in the SBOM is then compared against a vulnerability database to identify known Common Vulnerabilities and Exposures (CVEs) and their severity scores.
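The comparison step can be sketched in a few lines. This is a minimal illustration, not a real SCA tool: the package names, versions, and CVE identifier below are hypothetical, and a production scanner would query a live vulnerability feed rather than a static dictionary.

```python
# Hypothetical sketch: matching SBOM entries against a vulnerability database.
# Package names, versions, and the CVE ID are illustrative, not real advisories.

sbom = [
    {"name": "libexample", "version": "1.2.0"},
    {"name": "fastjson-like", "version": "0.9.1"},
]

# A real SCA tool queries a live CVE/NVD feed; here it is a static dict
# keyed by (package, version).
vuln_db = {
    ("libexample", "1.2.0"): ["CVE-2024-0001"],
}

def find_vulnerabilities(sbom, vuln_db):
    """Return a report mapping each SBOM entry to its known CVEs."""
    report = {}
    for pkg in sbom:
        key = (pkg["name"], pkg["version"])
        report[key] = vuln_db.get(key, [])
    return report

report = find_vulnerabilities(sbom, vuln_db)
print(report)
```

In practice this lookup is done per release, so every version of the software system gets its own SBOM and CVE report.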


A SCA solution allows for the secure risk management of open source use throughout the software supply chain, allowing the security teams, developers, and organizations to:

  • Set and enforce policies.
  • Discover and track all open-source usage.
  • Enable proactive and continuous monitoring.
  • Seamlessly integrate open-source code scanning into the build environment.
  • Achieve a quicker, safer time-to-market.
  • Reduce unknown business risks.

SCA Challenges in a Cloud-Native Environment

Microservices and cloud-native environments have changed the way we design software. We no longer build software systems based on a single codebase scanned for Software Composition Analysis (SCA) and compiled into a set of binaries that are released together.

Instead, our monolithic systems are decoupled into smaller, reusable functions deployed independently. Each function, or microservice, goes through its own build process and pipeline. Each microservice is scanned independently, with its own Software Bill of Materials (SBOM) and Common Vulnerabilities and Exposures (CVE) report for every released version.

The result is hundreds of SBOMs, CVE reports, and other critical DevOps intelligence data spread across the software supply chain. What is needed is a way to report on the ‘logical’ application in a decoupled architecture the same way we reported on our monolithic applications.

Federating SCA to the Organizational Level

Managing the relationships between services and ‘logical’ cloud-native Applications is required to begin the process of federating Software Composition Analysis and supply chain data to higher organizational levels. In this context, we define these terms as follows:

Components: Units of the supply chain, such as a microservice, open-source package, database SQL update, infrastructure change, file update, etc.
Applications: A collection of Components.
Domains: A collection of Applications.
Environments: Run-time locations where Components and Applications are installed.

An update to a Component immediately impacts all applications, domains, and environments that consume it. For example, a new component version released into a cluster automatically creates a new version of the consuming ‘logical’ applications. No application build was required.

In fact, a single microservice update could cause multiple ‘logical’ application versions to be created. For each new ‘logical’ application version, new Software Composition Analysis reporting must be done. This means that tracking the many-to-many relationships between components and ‘logical’ applications must be automated and stored in a central location.
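The many-to-many relationship described above can be sketched as a simple data model. This is an illustrative sketch, not DeployHub's implementation; the service and application names are hypothetical.

```python
# Hypothetical sketch: tracking many-to-many Component-to-Application
# relationships and auto-creating new 'logical' application versions
# when a component is updated. All names are illustrative.

# Which logical applications consume each component (many-to-many).
consumers = {
    "checkout-service": ["store-app", "partner-app"],
    "search-service": ["store-app"],
}

# Current version number of each logical application.
app_versions = {"store-app": 3, "partner-app": 7}

def release_component(component, consumers, app_versions):
    """Bump the version of every application impacted by the update."""
    impacted = consumers.get(component, [])
    for app in impacted:
        # A new logical application version is created automatically;
        # no application build is required.
        app_versions[app] += 1
    return impacted

impacted = release_component("checkout-service", consumers, app_versions)
print(impacted, app_versions)
```

Here a single update to `checkout-service` produces new versions of two ‘logical’ applications at once, which is why the relationship tracking must be automated and stored centrally rather than maintained by hand.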

In a decoupled architecture, several important questions need answers, such as:

  • “Why is there a new version of my ‘logical’ Application?”
  • “What Component caused the change?”
  • “Where are the new SBOM and CVE reports for the new Application version?”
  • “Is the new updated Component running on all Clusters, or do I have a drift problem?”
  • “Who released this microservice update?”

Centralizing Software Composition Analysis

Centralizing and automating the collection of component-level Software Composition Analysis is an essential tool in the DevSecOps toolbox for cloud-native architectures. Without a central ‘evidence store’ of this data, it is nearly impossible to determine if the ‘logical’ application delivered to end users is safe for consumption. Unlike a monolith, where each team carefully manages the release of new application versions, a ‘logical’ application can be impacted without the developers ever knowing that a change occurred.

Using a central governance catalog provides a metadata store that exposes how the ‘logical’ applications are configured and provides stakeholders with insights into the system, including Software Bill of Materials and vulnerability reporting.

DeployHub’s Evidence Store of Component to Application Data

DeployHub provides a central ‘Evidence Store’ of supply chain data, including the Component to Application dependencies across the entire supply chain. It consumes Software Composition Analysis data and tracks it for each Component version.


Once DeployHub detects that a new Component version is available, it automatically creates new versions of each ‘logical’ application that has been impacted, aggregating all of the SCA data to the highest level. DeployHub hooks into the DevOps Pipeline to collect component metadata.

A Baseline of the Logical Application

A ‘logical’ view of the application is critical to understanding the changing supply chain in a decoupled architecture. DeployHub defines a ‘baseline’ application package as a starting point. With DeployHub, development teams define their ‘Application Baseline Package’ by providing a Component .toml file, or by using a drag-and-drop ‘designer’ process. The Application Baseline is then used to track and progressively version application changes over time, based on changes to the underlying Components.

Each time a Component is updated, all Supply Chain intelligence is gathered. DeployHub automatically creates a new Application version for any ‘logical’ Application that was impacted by the change and aggregates the Component Software Composition Analysis to the ‘logical’ application levels.
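The aggregation step, rolling component-level SCA data up to the ‘logical’ application level, can be sketched as a union of the component SBOMs. This is an illustrative sketch under assumed names, not DeployHub's actual aggregation logic.

```python
# Hypothetical sketch: aggregating per-component SBOM data up to the
# 'logical' application level. Component and package names are illustrative.

component_sboms = {
    "checkout-service": [("openssl", "3.0.2"), ("requests", "2.31.0")],
    "search-service": [("openssl", "3.0.2"), ("lucene-like", "9.4")],
}

def application_sbom(components, component_sboms):
    """Union the SBOMs of all components that make up one application."""
    packages = set()
    for component in components:
        packages.update(component_sboms.get(component, []))
    return sorted(packages)

# A logical application composed of two independently released services.
sbom = application_sbom(["checkout-service", "search-service"], component_sboms)
print(sbom)
```

Note that shared dependencies (here, `openssl`) appear once in the aggregated view, so a single vulnerable package surfaces at the application level no matter how many components carry it.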

Conclusion: Software Composition Analysis

While we may no longer need monolithic applications, we must still understand and secure the software systems delivered to end users. DeployHub’s software supply chain management catalog provides the insights needed to allow Developers, DevOps, and Security Engineers to reason about the systems they are creating and delivering to customers around the world.

Suggested Reading

Collecting and organizing evidence is required for a comprehensive view of your organization’s supply chain and risk. Learn how a software supply chain management catalog can aggregate this level of data across organizational silos, serving IT teams with different data requirements.

Get the Whitepaper


Further Reading

Get Started Today - Signup for DeployHub Team For Free

Signup for DeployHub Team, the free SaaS supply chain management catalog. You will need a Company and Project Name to get started. The Company Name you enter will be created as your company’s private domain, referred to as your Global Domain. Your Project Name will be used under your company Domain. You will also receive an email from DeployHub. The email will contain your unique user ID and client ID, links, and useful information. Learn More

Get started federating all security and DevOps data with DeployHub Team SaaS Catalog.

Got questions? Join our Discord channel and start a discussion, or open an issue on GitHub.

DeployHub Team is based on the Ortelius Open-Source project incubating at the Continuous Delivery Foundation.