Application Security Best Practices for Your DevOps Pipeline


What is Application Security?

Application security can be defined as building security into your software starting at the earliest point – the code. This practice includes adding logic to implement and test security features and to prevent security vulnerabilities. Application security also includes writing code that fortifies user access and protects application input, along with encryption and threat modeling.

Application security has long been recognized as a set of best practices for developers; in recent years, however, the DevOps community has come to understand that it is also responsible for implementing application security best practices around the supply chain and the DevOps pipeline.

While the work developers have done around their coding practices remains essential, additional application security best practices are needed to strengthen the life cycle process, particularly in a cloud-native architecture. These practices include the automation of code signing, security analysis (SBOMs, CVEs, SCA, Signatures), repo scanning, and data consumption.

Modern DevOps Pipeline Challenges with Application Security

Historically, our DevOps pipeline managed a complete software solution delivered to end users. In a cloud-native, microservices architecture, each service is built and deployed independently. SCA, SBOMs, and CVEs are reported at the microservice level, with no insight into the application as a complete system. A change to a single microservice can impact multiple applications, creating a new release configuration, new release version, new SBOM, and new CVE report.

However, because the change impact of an updated microservice is not reported across the entire organization, an application team’s solution gets updated without the team’s knowledge and without the associated security reporting.

While application security best practices remain similar at the microservice code level, our DevOps pipelines are currently ill-equipped to deliver the same application-level reporting in a microservice architecture. New DevOps application security best practices are essential.

Top 4 Most Common Pipeline Application Security Best Practices

By now, most companies have built DevOps pipelines that address some level of application security, such as:

  • version control
  • code scanning for common security errors
  • automated security and pen testing
  • restricted access controls for deployments

These top 4 DevOps application security best practices must continue as we manage both traditional and microservice architectures. So, if you have not implemented them, get started. If you have, there is still more work to do. Other DevOps pipeline activities and configurations still need to be addressed to fortify our software development practices from coding through production release.

Additional Activities Required to Harden the Pipeline

New security measures across the DevOps pipeline can improve your overall application security. Each phase of the pipeline will require updates to achieve that goal. Looking across the pipeline, five phases need to be updated:

  • Code and Pre-build – Critical security steps include code signing, scanning an entire codebase for vulnerabilities, and scanning individual files for code weaknesses.
  • Build – These actions include generating an image SBOM, image signing, and pre-package verification.
  • Post-Build – If the build step above does not include generating an image SBOM, a post-build step is needed to generate a complete SBOM of the entire build image.
  • Publish – Store and share containers, generate container CVEs, and collect security evidence to show an organization’s security profile.
  • Audit – Beyond adding security to the phases of the pipeline, auditing the pipeline itself further hardens the application life cycle process.
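One way to reason about these phases is as a checklist the pipeline must satisfy before a release is allowed. The sketch below is illustrative only: the phase names and evidence labels are assumptions for this example, not a standard.

```python
# Illustrative only: required security evidence per pipeline phase.
# The phase names and evidence labels are assumptions, not a standard.
REQUIRED = {
    "pre-build": {"code-signature", "codebase-scan", "file-scan"},
    "build": {"image-sbom", "image-signature"},
    "publish": {"container-cve-report", "evidence-record"},
}

def missing_evidence(collected: dict) -> dict:
    """Compare the evidence a pipeline actually gathered against REQUIRED
    and return, per phase, what is still missing."""
    return {
        phase: sorted(required - collected.get(phase, set()))
        for phase, required in REQUIRED.items()
        if required - collected.get(phase, set())
    }
```

A pipeline gate could then fail the run whenever `missing_evidence` returns a non-empty result.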

Each phase presents a different level of difficulty to update. The build step is often the most critical and complex; the software build process is probably the least understood and most vulnerable step in the DevOps pipeline.

A software build compiles code and creates a release package. In traditional software, the build produces a monolithic set of binary objects and adds them to a container for eventual release. In a microservices architecture, a build is completed for each service.

The build process relies on version control solutions to pass the correct code. Application security best practices depend on version control to provide insight into what configuration changes occurred during the build. The build script pulls code from the version control solution into a ‘build directory.’ It is from this unsecured ‘build directory’ that the build reads its source input. The same directory is where output is stored and the final release package is created.
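One common mitigation for an unsecured build directory is to snapshot a digest of every input file right after checkout and verify it again just before packaging. A minimal sketch, assuming a simple `{path: sha256}` manifest (not any specific tool’s format):

```python
import hashlib
from pathlib import Path

def digest_tree(root: str) -> dict:
    """Return {relative_path: sha256_hex} for every file under root."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()).hexdigest()
    return digests

def verify_tree(root: str, manifest: dict) -> list:
    """Return the relative paths added, removed, or modified since the
    manifest was recorded (empty list means the tree is unchanged)."""
    current = digest_tree(root)
    changed = {path for path, _ in set(manifest.items()) ^ set(current.items())}
    return sorted(changed)
```

Running `digest_tree` after checkout and `verify_tree` before packaging surfaces any object that was slipped into the directory mid-build.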

In general, our traditional build process is vulnerable to a supply chain breach, where nefarious objects could be copied to the working directory and end up in the final package delivered to end users. The now-famous SolarWinds hack is an example of a supply chain breach.

New security tooling is now available to harden the build step of the DevOps pipeline. From building on a secure hosted cloud build system to SBOM creation and decentralized package networks, we now have solutions to make this process much safer.

Once the build is addressed, the next steps include registering your container image to an OCI registry and gathering the security data into an evidence catalog. OCI registries provide a safe place for deployment staging. Evidence catalogs provide an end-to-end, comprehensive view of the organization’s security profile. In addition, both registries and evidence catalogs provide continuous updates of your common vulnerabilities and exposures (CVEs).

[Figure: DevSecOps pipeline for application security]

How to Add Application Security Tooling to the DevOps Pipeline


Application Security Best Practices in a Cloud-Native Microservices Environment

A cloud-native microservices architecture adds complexity to the DevOps pipeline due to decoupling. With microservices, hundreds of independent updates are moving across your pipeline, which means you have many ‘logical’ applications being updated all day. Tracking what is being sent to end users becomes more difficult.

Track Security Insights for the Complete Logical Application

Application security best practices in a cloud-native microservice environment are far more complex than traditional, monolithic development. A decoupled architecture requires the tracking of a logical application and the aggregation of all security insights.

In traditional development, we execute a build that statically assembles sources and libraries. We generate the application Software Bill of Materials (SBOM) and assign a release number to the build. The SBOM provides a comprehensive list of all artifacts used. CVEs are tracked based on the application build. We can compare two application releases and understand what changed.
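The release comparison described above can be illustrated with a toy diff over two simplified SBOMs, modeled here as plain `{component: version}` maps rather than real SPDX or CycloneDX documents:

```python
def diff_sboms(old: dict, new: dict) -> dict:
    """Compare two simplified SBOMs ({component: version}) and report
    which components were added, removed, or changed version."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "upgraded": sorted(c for c in set(old) & set(new) if old[c] != new[c]),
    }
```

A real pipeline would run the same comparison over full SBOM documents, but the principle is identical: two releases, one explainable difference report.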

In a decoupled architecture, builds are dispersed across the microservices. The application becomes a logical composition of the required microservices. A microservice update is equivalent to running a traditional build, but it goes unnoticed. Suddenly the logical application is different, but there is no release number, SBOM, CVE report, or difference report to show what changed or why end users are now experiencing issues.

Application SBOMs

The most difficult application security challenge for the DevOps pipeline is automatically capturing when the logical application has been impacted by an updated microservice. A microservice update must be propagated to the logical application level, creating a new release number, SBOM, and CVE report to reflect the change.

To meet the Biden Administration’s 2022 SBOM order, teams will need to deliver an SBOM that aggregates all microservice SBOMs to the logical application level whenever a change is delivered. Achieving this level of SBOM reporting means the DevOps pipeline will need to automatically track the logical application and create an application release version, SBOM, and CVE report for each change. And remember, a single microservice update can impact multiple logical applications. Not only that, but microservices are delivered all day long. Automating microservice-to-application updates will be an essential component of your application security best practices in a microservices architecture.
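As a rough sketch of that aggregation, the example below rolls per-microservice SBOMs (again simplified to `{component: version}` maps) up to a logical application and derives a content-based release identifier. The data shapes are assumptions for illustration:

```python
import hashlib
import json

def aggregate_app_sbom(app_name: str, service_sboms: dict) -> dict:
    """Roll per-microservice SBOMs ({service: {component: version}}) up
    into one logical-application SBOM, with a content-derived release
    identifier so any microservice update yields a new app version."""
    merged = {}
    for service, components in sorted(service_sboms.items()):
        for comp, ver in components.items():
            merged.setdefault(comp, set()).add(ver)
    fingerprint = hashlib.sha256(
        json.dumps(service_sboms, sort_keys=True).encode()).hexdigest()[:12]
    return {
        "application": app_name,
        "release": fingerprint,
        "components": {c: sorted(v) for c, v in sorted(merged.items())},
    }
```

Because the release identifier is derived from the aggregated content, an updated microservice automatically produces a new application-level release on which to hang the new SBOM and CVE report.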

Consolidate Comprehensive, End-to-End Insights

Creating tamper-proof software requires generating various types of security insights. Most of those insights, like SBOMs, are generated as part of the DevOps pipeline but left under the hood, sitting in a log or displayed across various dashboards in the environment. In a cloud-native environment, hundreds of these logs and reports are generated for each new build of a microservice. The data is essential.

Generating security logs such as SBOMs and CVE results is the first step, but consuming the data and building actionable insights is the basis for strong security policies, including zero-trust. What is required is a consolidation of the information, which provides a comprehensive, end-to-end understanding of the organization’s security profile. For example, you need one place to find where ‘log4j’ is being consumed and running, one location to view historical data and determine levels of exposure, and one location to understand who owns a microservice, what its impact is, and what versions are running across all clusters. Security starts and ends with knowing what software you are running across your enterprise. Consolidating the data allows you to build strong security policies with immediate insights, so you can take fast, accurate action across your environments.
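As a toy illustration of the “one place to look” idea, the query below scans a consolidated evidence catalog (a hypothetical list of per-service records; the field names are assumptions) for every service and cluster running a given component such as ‘log4j’:

```python
def find_component(catalog: list, component: str) -> list:
    """Search a consolidated evidence catalog for every service/cluster
    pair that is running a given component, returning its version and
    owner. Catalog records are hypothetical {service, cluster, owner,
    components} dicts for illustration."""
    return [
        {
            "service": entry["service"],
            "cluster": entry["cluster"],
            "version": entry["components"][component],
            "owner": entry.get("owner"),
        }
        for entry in catalog
        if component in entry["components"]
    ]
```

With the evidence consolidated, answering “where is log4j running and who owns it?” becomes a single query rather than a hunt across dashboards.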

Data gathering becomes increasingly relevant as we build our application security profile. New tooling and open-source projects are being developed to improve the security of individual artifacts and open-source packages and to audit the DevOps workflow. Interesting projects to watch:

  • Ortelius – A central evidence store of all security results, with application-level aggregation of the data for comprehensive, end-to-end insights and history. Incubating at the Continuous Delivery Foundation (CDF).
  • Pyrsia – Creates a decentralized package network with build consensus. By building across multiple nodes, Pyrsia can compare results and immediately notify you when a build has been compromised. Incubating at the Continuous Delivery Foundation (CDF).
  • Tekton – An event-based CI/CD engine built for Kubernetes. Also includes Tekton Chains for auditing the pipeline itself. Incubating at the Continuous Delivery Foundation (CDF).
  • CDEvents – A critical piece in the overall pipeline puzzle. CDEvents is a Continuous Delivery Foundation (CDF) community effort to define standards for a CD events framework. CDEvents will simplify and standardize CI/CD workflows, eliminating much of the one-off scripting and creating an audit trail of what is occurring in the pipeline. An event-based CI/CD pipeline makes it easy to add and update pipeline activities without touching hundreds of pipeline workflows.
  • Keptn – An event-based, cloud-native CI/CD engine for orchestrating your application life cycle. Designed to include observability and remediation with a GitOps approach. Incubating at the Cloud Native Computing Foundation (CNCF).
  • Alpha-Omega – The goal is first (Alpha) to work with the most popular open-source projects to find and fix vulnerabilities, and second (Omega) to provide over 10,000 open-source projects with automated security analysis. An Open Source Security Foundation (OpenSSF) community project.
  • Sigstore – Container signing, verification, and storage in an OCI registry. Provides a historical record of changes and allows searching of that record. An Open Source Security Foundation (OpenSSF) community project.
  • Syft – A CLI tool and Go library for generating a Software Bill of Materials (SBOM) from container images and filesystems. Managed by Anchore.
  • Apko – Build and publish OCI container images built from Alpine Package Keeper packages. A safer way to create containers. Managed by ChainGuard.


If you have worked hard to define solid application security best practices for your development teams, it is time to turn your attention to your DevOps pipeline. As you move away from monolithic practices to a decoupled, cloud-native microservices architecture, there will be new problems to address:

  1. More security data must be gathered, consumed, and acted upon for each object managed across your pipelines. At a minimum, each version of your microservices should have an associated SBOM and CVE scan.
  2. The ability to track your microservices, with their security profile aggregated up to the application level, will be essential. You cannot produce a logical application’s security profile without knowing the lower-level dependency data and reporting on it as a complete solution.
  3. Continue to add new and more secure ways of managing the creation of your containers.

Verifying content is the most crucial step in the entire process.

We are just getting started defining and implementing application security best practices. A considerable amount of work is yet to be done, and the open-source community wants to ensure it is ready and able to solve these challenging tasks.

Where Does DeployHub Fit?

DeployHub consumes and aggregates security and DevOps intelligence, providing comprehensive, end-to-end insights: microservice and logical-application relationships, consolidated microservice and application security reports, service version drift across clusters, and overall application supply chain usage. DeployHub plugs into your CI/CD pipeline to automate the collection and aggregation of this data through a simple command-line interface, which can also add SBOM generation to the process if you have not already done so.