Microservice Pipelines – A Disruption in CI/CD
A microservice pipeline is designed to handle hundreds of rapidly changing parts in a shared architecture. When you originally purchased your CI/CD tools and built your pipeline, you probably did not think about a process that looks like a forever-changing Rubik’s cube.
Instead, you defined your pipeline to handle an extensive compile process followed by a significant deployment. Infrastructure and database changes may not have been part of the discussion. But today, we are faced with something completely different.
A microservice pipeline needs to be highly flexible. It should address the entire technology stack. It must be aware of all the independently deployed parts for debugging and overall visibility.
I like to think of building a new microservice pipeline as an opportunity to fix all of the processes that kept us from becoming truly agile. With cloud-native architecture and microservices, we can finally build a pipeline that releases only incremental updates. A microservice architecture lets us travel the last mile of agile. The problem is that our pipeline needs to be rebuilt.
You might be thinking that you can hang on to some of the work you have already done in your CI/CD pipelines. That would be a mistake. Adapting to a component-driven architecture requires a fresh eye and an open mind. To help you get there, let me cover what I believe will change in our primary DevOps pipeline. Breaking it down into five significant pipeline challenges will help you understand the task at hand:
- CI/CD Pipeline Workflows – while a few dozen workflows served us in the past, each independently deployed component will need a private workflow. This potentially means thousands of workflows.
- Version control – version control will not disappear, but the need for branching and merging strategies will. After all, how often will you need to branch or merge a 300-line Python module?
- Small builds with no links – Our CI build process as we know it goes away. We no longer statically compile and link our application release. Compiling may still happen, but instead we create a container image and defer linking to runtime via APIs.
- Software Supply and Impact Analysis – If we think about this new world differently, we see that the ‘bounded context’ functions become our raw material or supply chain. A change to one service can potentially impact hundreds of dependent services.
- Managing the full stack – Infrastructure and data should be treated the same as any component and considered a part of the overall application package. After all, parts are parts.
Lots of Microservice Pipelines
Let’s start with the first and most apparent difference between monolithic and microservice pipelines. In a monolith model, you may have managed one workflow per application release. Microservices are independently deployed, and most organizations moving to microservices tell us that each one gets its own pipeline. Most companies also tell us that they start with 6-10 microservices and grow to 40-60 per traditional application.
Now, if you add different releases for each microservice, you may quickly end up with thousands of workflows. Most enterprise companies we talk to believe their cluster will eventually manage 3,000–4,000 microservices. We call this a ‘death star.’
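To make the scale concrete, here is a minimal sketch of per-component workflow generation. The service names and pipeline stages are illustrative assumptions, not from any real CI/CD tool; the point is that every independently deployed service gets its own private workflow, so the workflow count grows with the service count.

```python
# Hypothetical sketch: one workflow definition per independently deployed
# microservice. Service names and stage names are illustrative only.

def generate_workflows(services, stages=("build-image", "scan", "deploy")):
    """Return a minimal private workflow definition for each service."""
    return {
        name: {"trigger": f"push to {name}", "stages": list(stages)}
        for name in services
    }

# Three services yield three workflows; 4,000 services would yield 4,000.
workflows = generate_workflows(["checkout", "cart", "payment"])
print(len(workflows))
print(workflows["payment"]["stages"])
```

With a handful of services this is manageable by hand; at ‘death star’ scale, generating and managing these definitions has to be automated.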
Microservices Pipeline and Version Control
In traditional development, version control has played a vital role in managing code. Branching and merging became critical in supporting many developers working on different versions of a single piece of code.
With microservices, you will have a single developer working on a small piece of code. Your version control tool will continue to serve as a repository. Less branching and merging will make version control less critical.
The Microservice ‘Build’
Of the traditional CI/CD practices, continuous integration is the step most drastically impacted by the new microservice pipeline. With microservices, we no longer need to perform a gigantic build that statically links the entire application into a set of binaries. Those days are numbered.
Instead, your microservice continuous integration step will create and register a container image; dependency and security scanning will still be performed. But once the container image is registered, it’s ready to go.
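The CI step above can be sketched as follows. This is a simplified illustration, not a real registry API: the registry here is an in-memory dictionary, and the tag scheme (registry host, digest length) is an assumption made for the example.

```python
import hashlib

# Illustrative sketch of a microservice CI step: instead of a monolithic
# compile-and-link, we derive an immutable image tag and record it in a
# registry. The registry is an in-memory stand-in, not a real service.

def image_tag(service: str, commit: str) -> str:
    """Derive an immutable image tag from the service name and commit id."""
    digest = hashlib.sha256(commit.encode()).hexdigest()[:12]
    return f"registry.example.com/{service}:{digest}"

def register_image(registry: dict, service: str, commit: str) -> str:
    """Register the image; once registered, it is ready to deploy."""
    tag = image_tag(service, commit)
    registry.setdefault(service, []).append(tag)
    return tag

registry = {}
tag = register_image(registry, "cart", "4f2a9c1")
print(tag)
```

Linking to the rest of the application happens later, at runtime via APIs, which is exactly why the build-time supply chain visibility discussed next becomes a problem.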
The continuous integration shift
An Unknown Supply Chain and Lack of Impact Analysis
Part of the problem with losing the full build is that we also lose the build’s essential work products. First, we lose the concept of a full application and application releases based on a ‘build number.’ Without the build, we cannot track the application’s supply chain or its changes.
In addition, we lose the application’s Software Bill of Materials (SBOM) and difference reports. Your microservice pipeline still needs to understand the application and its changes, yet you cannot easily see how an application release is packaged with microservices. Without an application package, how do we even know what version of the software the end users are using?
Remember that in a traditional build, developers control their software supply chain through the build script. Because microservices and other components are loosely coupled, that build-step control is removed.
Developers, DevOps engineers, and SREs have less control over the services their applications consume. They are not making those supply chain decisions at build time. A Kubernetes microservice architecture takes dynamic linking to an entirely new level, making SBOMs and impact analysis harder to track.
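The impact-analysis problem can be made concrete with a small sketch: given a map of which services consume which, walk the graph to find every service affected by a change. The dependency data below is invented for illustration; in practice this information is exactly what goes missing without a build step, which is why it must be collected elsewhere.

```python
from collections import deque

# Hypothetical impact-analysis sketch: `consumers` maps each service to the
# services that call it. The graph below is illustrative data only.

consumers = {
    "auth":     ["cart", "orders"],
    "cart":     ["checkout"],
    "orders":   ["checkout", "reports"],
    "checkout": [],
    "reports":  [],
}

def blast_radius(changed: str) -> set:
    """Transitively collect every consumer impacted by a change to `changed`."""
    impacted, queue = set(), deque([changed])
    while queue:
        svc = queue.popleft()
        for dependent in consumers.get(svc, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# A change to "auth" ripples out to cart, orders, checkout, and reports.
print(sorted(blast_radius("auth")))
```

One small change to a low-level service like `auth` touches nearly everything downstream, which is the “hundreds of dependent services” problem in miniature.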
The Full Application Stack
In monolithic pipelines, most companies manage just the application binaries. Infrastructure and database changes are usually not included in the continuous delivery process. Something or someone else manages those pieces.
Parts are parts. In our new cloud-native approach, an application stack includes all the services it consumes. Those services could have a domain layer, a data layer, and an infrastructure layer. Our microservice pipeline must manage the entire stack, not just the application layer.
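A minimal sketch of the “parts are parts” idea: the application package is a uniform list of components, whether each part is a service, a database change, or an infrastructure change. The component names, kinds, and versions below are illustrative assumptions.

```python
from dataclasses import dataclass

# "Parts are parts": the application package treats services, database
# changes, and infrastructure uniformly. All names below are illustrative.

@dataclass(frozen=True)
class Component:
    name: str
    kind: str      # "service", "database", or "infrastructure"
    version: str

app_package = [
    Component("checkout-service", "service",        "1.4.2"),
    Component("orders-schema",    "database",       "0.9.0"),
    Component("cluster-ingress",  "infrastructure", "2.1.0"),
]

# The pipeline can version and track every layer of the stack the same way.
kinds = {c.kind for c in app_package}
print(sorted(kinds))
```

Because every part carries the same shape of metadata, the pipeline does not need a separate, out-of-band process for infrastructure and data changes.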
Microservice Domain Layers
New Microservice Pipeline Tooling
From the CD orchestration level down to deployments, our existing pipeline tools were designed around a monolithic architecture. New tooling is required to build a component-driven microservice pipeline. There is no way around it. Holding on to old technology will not help you adapt to this modern architecture. OK – we got that out of the way.
There are tools that can exist in both worlds – scanning for transitive dependencies when your container image is being created comes to mind. Versioning your Python script is another. But most of our new microservice pipelines will be different. New tooling is needed to get you there.
Event-Based Microservice Pipelines
The CD orchestration engine itself needs a significant disruption. New orchestration tooling is required to support thousands of pipelines. Solutions like Jenkins X are an excellent start at solving this problem. Even more critical for microservice pipelines is event-based CI/CD.
Events with a standard listener will provide the sustainability to manage thousands of microservice pipelines. This is the focus of event-based systems such as Keptn. The Continuous Delivery Foundation is working on event-based CI/CD standards for listeners through its CDEvents project; it’s worth keeping your thumb on its pulse. CDEvents will provide a declarative CD method that takes us to the next level of CD orchestration and interoperability.
Ultimately, we want to stop imperatively defining workflows and instead declare them; this is the only road forward for a microservice implementation. In addition, we may be adding infrastructure updates and data to the pipeline, and these changes have their own needs. Remember, not every component is equal: different pipelines are needed for different component types.
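The declarative, event-based approach can be sketched as a standard listener that routes each event to the pipeline declared for that component type. The event shape, stage names, and routing table here are assumptions for illustration; they are not CDEvents specification types.

```python
# Illustrative sketch of event-based pipeline dispatch: a single listener
# looks up the declared workflow for a component type instead of each team
# imperatively scripting its own. All names are assumptions, not a real spec.

pipelines_by_type = {
    "service":        ["build-image", "scan", "deploy"],
    "infrastructure": ["plan", "apply"],
    "database":       ["validate-migration", "apply-migration"],
}

def on_event(event: dict) -> list:
    """Return the declared pipeline stages for the event's component type."""
    return pipelines_by_type[event["component_type"]]

# A service change and a database change trigger different declared pipelines.
stages = on_event({"component": "cart", "component_type": "service"})
print(stages)
```

Note how “not every component is equal” falls out naturally: adding a new component type means declaring one more entry in the table, not writing another bespoke workflow.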
Version Control as a Single Source of Truth
Versioning and ‘storing’ our microservice and component code will continue to be an essential benefit of version control. In fact, in a potential declarative model, your CD workflow declarations can be stored in version control. In addition, managing deployment .yaml files and interacting with IaC and GitOps operators will become the new focus of versioning solutions.
Microservice Pipelines and Component Catalogs – the data of DevOps
New microservice pipeline tooling has been born to replace what we have lost in monolithic builds. The microservice ‘service’ catalog will become a critical data store of everything we need to know about a particular component. Unified microservice catalogs will replace our old ‘build’ process, providing visibility into usage, dependencies, and incident response information. The trick will be getting microservice developers to register their ‘initial’ component details in the catalog.
In addition, microservice catalogs will provide a ‘logical’ view of the application. This logical view is created when consumers of services create a ‘baseline’ of their ‘logical’ application package. Automation into the CD pipeline will trigger the catalog to version new services when an update to the container registry has been detected.
A new version of a service creates a new version of all ‘logical’ applications that consume it. If the service is pushed to an endpoint, the service’s inventory locations are recorded in the catalog. This shows what versions of services and versions of ‘logical’ applications are running across all endpoints.
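The versioning behavior described above can be sketched in a few lines: publishing a new service version bumps the version of every ‘logical’ application that consumes it. The application names, service names, and integer versioning scheme are illustrative assumptions, not any particular catalog’s data model.

```python
# Hypothetical sketch of a microservice catalog's "logical application"
# versioning: a new service version creates a new version of every logical
# application consuming it. All names and the version scheme are illustrative.

logical_apps = {                 # logical app -> services it consumes
    "storefront":  {"cart", "checkout"},
    "back-office": {"orders"},
}
app_versions = {name: 1 for name in logical_apps}

def publish_service(service: str) -> list:
    """Bump the version of every logical application consuming `service`."""
    bumped = []
    for app, services in logical_apps.items():
        if service in services:
            app_versions[app] += 1
            bumped.append(app)
    return bumped

# Only "storefront" consumes cart, so only its version is bumped.
print(publish_service("cart"))
print(app_versions["storefront"])
```

In a real catalog the same trigger would also record which endpoint each service version was pushed to, giving the per-cluster inventory view described next.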
The microservice catalog is how we can maintain the ability to understand the impact, view the supply chain (SBOM), know when to deprecate, and track which version of your application your end users are running in any cluster. Over time, this DevOps data will be essential for building more intelligent microservice pipelines. Imagine having a pipeline that can determine the risk value of a component and automatically assign the pipeline activities needed. That is where we are headed.
An Example of a Microservice Catalog
DeployHub’s unified microservice catalog maps your services to their consumers, tools, version history, and the teams that support them to achieve a DevOps breakthrough. Its unique features include the ability to version services, track changes in ‘logical’ applications, provide support teams with microservice ownership information, and display SBOMs, difference maps, and the blast radius of any update. It can support any component, from microservices to database updates and infrastructure changes.
A microservice pipeline
DeployHub Blast Radius Maps
New supporting tools will be required to build your microservice pipeline. The introduction of Event-based CI/CD pipeline management and a unified microservice catalog for collecting data will be essential. An update to the orchestration engine itself may also be in order.
A microservice catalog will restore insights delivered by the monolithic build system, plus microservice inventory. The catalog will store:
- software supply chain
- versions of services and ‘logical’ applications
- cluster inventory
- service dependencies
- microservice ownership
A microservice implementation will remain complex without a central datastore, making our DevOps jobs difficult.
DeployHub is a unified catalog designed to fit into the new microservice pipeline. It gathers and leverages the deployment data needed to make microservices easy. Without this core pipeline datastore, microservice pipelines will fail to provide the automation and insights required to create a consistent yet agile software assembly line.