
Comparing a Monolithic Pipeline to a Microservice Pipeline

Microservice pipelines are different from traditional pipelines. As the saying goes…

“The more things change, the more they stay the same.”

As with every step in the evolution of software development, our basic software practices are changing with Kubernetes and microservices. But the basic requirements of moving software from design to release remain the same. Their look may change, but all the steps are still there. To adapt to a microservices architecture, DevOps teams simply need to understand how the underlying pipeline practices must shift and change shape.

Understanding Why Microservice Pipelines are Different

The key to understanding microservices and microservice pipelines is to think ‘functions.’ In a microservice environment, the concept of an ‘application’ goes away. It is replaced by a grouping of loosely coupled services connected via APIs at runtime, running inside of containers, nodes, and pods. Microservices are reused across teams, increasing the need for better organization through Domain-Driven Design, which facilitates collaboration, communication, reuse, and visibility.

The biggest change in microservices architecture is having a single microservice used by multiple application teams independently moving through the microservice pipeline. Again, one must stop thinking ‘application’ and instead think ‘functions’ to fully appreciate the oncoming shift. And remember, multiple versions of a microservice could be running in your environments at the same time.

Microservices are immutable. You don’t ‘copy over’ the old one, you deploy a new version. When you deploy a microservice, you create a Kubernetes deployment YAML file that defines the Label and the version of the image.

Example Kubernetes Deployment file
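A Deployment manifest of this kind might look like the following sketch. The dh-ms-general label comes from the example discussed below; the registry, image tag, replica count, and port are made-up values for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dh-ms-general
  labels:
    app: dh-ms-general          # the Label referenced in the text
spec:
  replicas: 2                   # illustrative replica count
  selector:
    matchLabels:
      app: dh-ms-general
  template:
    metadata:
      labels:
        app: dh-ms-general
    spec:
      containers:
        - name: dh-ms-general
          # hypothetical registry and version tag -- deploying a new
          # version means pointing this field at a new immutable image
          image: example.io/dh-ms-general:1.4.2
          ports:
            - containerPort: 8080
```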

In the above example, our Label is dh-ms-general. When a microservice Label is reused for a new container image, Kubernetes stops using the old image. But in some cases, a second Label may be used allowing both services to be running at the same time. This is controlled by the configuration of your ingresses. Our new microservice pipeline process must incorporate these new features of our modern architecture.

Comparing Monolithic to Microservice Pipelines

What does your CD pipeline look like when you manage small functions instead of monolithic applications running in a modern architecture? Below is a comparison of each category and its potential shift in supporting a microservice pipeline.

Change Request


Logging a user problem ticket, enhancement request, or anomaly based on an application.


This process will remain relatively unchanged in the microservice pipeline. Users will continue to open tickets for bugs and enhancements. The difference will be sorting out which microservice needs the update, and which version of the microservice the ticket was opened against. Because a microservice can be used by multiple applications, dependency management and impact analysis will become more critical for determining where the issue lies.

Version Control


Tracking changes in source code content.  Branching and merging updates allow multiple developers to work on a single file.


While versioning your microservice source code will still be done, your source code will be smaller: 100–300 lines of code versus 1,000–3,000 lines of code. This reduces the need for branching and merging. The concept of merging ‘back to the trunk’ is a monolithic concept, not a microservice concept. And how often will you branch code that is only a few hundred lines long?

Artifact Repository


Originally built around Maven, an artifact repository provides a central location for publishing jar files, Node.js packages, JavaScript packages, Docker images, and Python modules. At the point in time when you run your build, your package manager (Maven, npm, pip) performs the dependency management for tracking transitive dependencies.


Again, these tools supported monolithic builds and solved dependency management for the compile/link steps. We move away from monolithic builds, but we still need to build our container and resolve our dependencies in the microservice pipeline. These tools will help us build containers by determining the transitive dependencies needed for the container to run.



Build


Executes a serial process for calling compilers and linkers to translate source code into binaries (.jar, .war, .ear, .exe, .dll, Docker images). Common tools that support build logic include Make, Ant, Maven, Meister, npm, pip, and Docker Build. The build calls on artifact repositories to perform dependency management based on the library versions specified by the build script.


For the most part, builds will look very different. A build of a microservice will involve creating a container image and resolving the dependencies needed for the container to run. You can think of a container image as our new binary. This will be a relatively simple step and will not involve a monolithic compile/link of an entire application. It will only involve a single microservice. Linking is done at runtime via the RESTful API calls coded into the microservice itself.
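As a sketch of what such a build step can shrink to, here is a minimal Dockerfile for a hypothetical Java-based microservice. The base image, jar name, and port are illustrative assumptions, not a prescribed setup.

```dockerfile
# Hypothetical microservice build: package one service's jar into an image.
# There is no monolithic compile/link of an entire application here.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/dh-ms-general.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The ‘build’ then reduces to something like `docker build -t example.io/dh-ms-general:1.4.2 .`, producing the immutable image that the Deployment manifest references.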

Software Configuration Management (SCM)


The build process is the central tool for performing configuration management. Developers set up their build scripts (POM files) to define what versions of external libraries they want to include in the compile/link process. The build performs configuration management by pulling code from version control based on a ‘trunk’ or ‘branch.’ A Software Bill of Materials can be created to show all artifacts that were used to create the application.


Much of what we used to do for configuring our application occurred at the software ‘build.’  But ‘builds’ as we know them go away in a microservice pipeline.  This is where we made very careful decisions about what versions of source code and libraries we would use to build a version of our monolithic application.  For the most part, the version and build configuration shift to run-time with microservices.  While the container image has a configuration, the broader picture of the configuration happens at run-time in the cluster via the APIs.

The configuration management shift

In addition, our SCM will begin to bring in the concept of Domain-Driven Design, where you manage an architecture based on the microservice ‘problem space.’ New tooling will enter the market to help with managing your Domains, your logical view of your application, and to track versions of applications to versions of services. In general, SCM will become more challenging as we move away from resolving all dependencies at the compile/link step and must track more of them across the pipeline. Learn more about Software Supply Chain Management, a new kind of SCM.

Continuous Integration (CI)


Continuous Integration is the triggered process of pulling code and libraries from version control and executing a Build based on a defined ‘quiet time.’  This process improved development by ensuring that code changes were integrated as frequently as possible to prevent broken builds, thus the term continuous integration.


Continuous Integration was originally adopted to keep us re-compiling and linking our code as frequently as possible in order to prevent the build from breaking. The goal was to get to a clean ’10-minute build’ or less.  With microservices, you are only building a single ‘function.’

This means that a monolithic integration build is no longer needed. CI as we know it will eventually go away, but the process of managing a continuous delivery pipeline will remain important, including the step that creates the container.


Code Scanning


Code scanners have evolved from looking at coding techniques for memory issues and bugs to scanning for open-source library usage, licenses, and security problems.


Code scanners will continue to be important but will shift to scanning the container image more than the source. Some will be used during the container build, focusing on scanning for open-source libraries and licensing, while others will focus more on security issues, with scanning done at runtime.

Continuous Testing 


Continuous testing was born out of test automation tooling.  These tools allow you to perform an automated test on your entire application including timings for database transactions. The goal of these tools is to improve both the quality and speed of the testing efforts driven by your CD workflow.


Testing will always be an important part of the life cycle process. The difference with microservices will be understanding impact and risk levels. Testers will need to know what applications depend on a version of a microservice and what level of testing should be done across applications. Test automation tools will need to understand microservice relationships and their impact. Testing will grow beyond testing a single application and instead will shift to testing service configurations in a cluster.
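To make the dependency question concrete, here is a minimal Python sketch of answering “which applications consume this microservice version and therefore need retesting.” The application names, service names, and versions are all made up for illustration.

```python
# Map each 'logical' application to the microservice versions it consumes.
# In practice this data would come from pipeline tooling; these entries
# are illustrative only.
app_dependencies = {
    "store-front": {"checkout": "1.2.0", "catalog": "2.0.1"},
    "partner-portal": {"checkout": "1.2.0", "reporting": "3.1.0"},
    "mobile-api": {"checkout": "1.1.5", "catalog": "2.0.1"},
}

def impacted_apps(service: str, version: str) -> list[str]:
    """Return the applications that consume the given microservice version."""
    return sorted(
        app for app, deps in app_dependencies.items()
        if deps.get(service) == version
    )

# A change to checkout 1.2.0 impacts two applications; 1.1.5 impacts one.
print(impacted_apps("checkout", "1.2.0"))  # ['partner-portal', 'store-front']
print(impacted_apps("checkout", "1.1.5"))  # ['mobile-api']
```

Even this toy lookup shows why test automation tools need the service-to-application relationship: without it, there is no way to scope the testing effort for a single microservice update.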

Analytics and Monitoring


Tracking the health of both your network and deployed applications is a key function of these solutions.  In the early days monitoring the network was essential, and then we learned it was just as important to monitor the applications themselves. Application Performance Monitoring is a key player in this market.


In a Kubernetes architecture, monitoring shifts from physical servers to Nodes and Pods. Applications shift from one .jar or .war file to hundreds of microservices. ‘Observability’ becomes a key feature in solutions that fit into this category.

Understanding how an update to a microservice impacts the node and how it impacts the consuming ‘logical’ application become one and the same. Pushing that information back to the developers will be key, allowing them to understand how their change impacted the cluster.



Security


Security solutions allow you to define or follow a specific set of standards. They include code scanning, container scanning, and monitoring. This field has grown into the DevSecOps movement, where more of the security activities are driven by Continuous Delivery.


Security solutions will shift further ‘left,’ adding more scanning around the creation of containers. As containers are deployed, security tools will begin to focus on vulnerabilities in the Kubernetes infrastructure as they relate to the contents of the containers.

Continuous Delivery Orchestration (CD)


Continuous Delivery is the evolution of continuous integration, triggering ‘build jobs’ or ‘workflows’ based on a software application. It automatically executes workflow processes between development, testing, and production, orchestrating external tools to get the job done. Continuous Delivery calls on all players in the lifecycle process to execute in the correct order and centralizes their logs.


Let’s start with the first and most obvious difference between microservice pipelines and monolithic pipelines.  Because microservices are independently deployed, most organizations moving to a microservice architecture tell us they use a single pipeline workflow for each microservice.

Also, most companies tell us that they start with 6-10 microservices and grow to 20-30 microservices per traditional application. This means you are going to have hundreds if not thousands of workflows.

CD tools will need to include the ability to template workflows, allowing a fix in a shared template to be applied to all child workflows. Managing hundreds of individual workflows is not practical. In addition, plug-ins need to be containerized and decoupled from the version of the CD tool. And finally, look for actions to be event-driven, with the ability for the CD engine to listen for multiple events, run them in parallel, and process thousands of microservices through the pipeline.
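There is no standard template syntax across CD tools, so purely as an illustration, a shared template parameterized per microservice might look like this hypothetical YAML: fix the template once, and every child workflow that extends it picks up the change.

```yaml
# Hypothetical shared pipeline template (syntax is illustrative, not
# tied to any specific CD tool).
template: microservice-pipeline
parameters:
  - service_name
  - image_registry
stages:
  - name: build-container
    steps:
      - docker build -t {{ image_registry }}/{{ service_name }}:{{ git_sha }} .
  - name: scan-image
    steps:
      - scan {{ image_registry }}/{{ service_name }}:{{ git_sha }}
  - name: deploy
    steps:
      - kubectl apply -f deployment.yaml
---
# One small child workflow per microservice, instead of hundreds of
# hand-maintained copies.
extends: microservice-pipeline
with:
  service_name: dh-ms-general
  image_registry: example.io
```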

Continuous Deployments


This is the process of moving artifacts (binaries, containers, scripts, etc.) to the physical runtime environments on a high-frequency basis. In addition, deployment tools track where an artifact was deployed along with audit information (who, where, what) providing core data for value stream management. Continuous deployment is also referred to as Application Release Automation.


The concept of deploying an entire application will simply go away. Instead, deployments will be a mix of tracking the Kubernetes deployment YAML file and managing the application’s configuration each time a new microservice is introduced to the cluster. What will become important is the ability to track the ‘logical’ view of an application by associating which versions of the microservices make up an application.

This is a big shift. Deployment tools will begin generating the Kubernetes YAML file removing it from the developer’s to-do list. Deployment tools will automate the tracking of versions of the microservice source to the container image to the cluster and associated applications to provide the required value stream reporting and management.
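In sketch form, a deployment tool that generates the Kubernetes YAML might do something like the following. The manifest shape is the standard apps/v1 Deployment; the function name and inputs are made up for illustration.

```python
def render_deployment(service: str, registry: str,
                      version: str, replicas: int = 1) -> str:
    """Generate a minimal Kubernetes Deployment manifest for one
    microservice, taking the YAML off the developer's to-do list while
    recording which image version is being deployed."""
    return f"""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {service}
  labels:
    app: {service}
spec:
  replicas: {replicas}
  selector:
    matchLabels:
      app: {service}
  template:
    metadata:
      labels:
        app: {service}
    spec:
      containers:
        - name: {service}
          image: {registry}/{service}:{version}
"""

manifest = render_deployment("dh-ms-general", "example.io", "1.4.2", replicas=2)
print(manifest)
```

Because the tool renders the manifest from the same data it uses for tracking, the source version, container image, and cluster deployment stay associated for value stream reporting.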


As we shift from managing monolithic applications to microservices, we will create a new microservice pipeline. From the need to manage hundreds of workflows in our CD pipeline to the need for versioning microservices and their consuming application versions, much will be different. While there are changes, the core competencies we have defined in traditional CD will remain important even if it is just a simple function that we are now pushing independently across the microservice pipeline.
