Kubernetes pipelines are about to change the way we do things. Kubernetes and microservices will drive us away from our old pipeline models toward a much leaner method of software development. When I say a Kubernetes pipeline, I'm not referring to a particular set of tools; instead, I'm highlighting the difference between a monolithic pipeline process and a Kubernetes one. That said, I'm not convinced the Kubernetes pipeline has been solved. We still have some work to do.
Kubernetes and microservices are not just disrupting how we create, manage, and access software; they are obliterating the old model. We haven't seen a change like this since the enterprise transformation from mainframes to PCs. This latest move is being driven by two primary features of the Kubernetes architecture: fault tolerance and high availability with auto scaling. Both are required for creating modern software that can satisfy consumers' demand for more data, faster. IoT, big data, machine learning, and AI all require the greater processing power, stability, and responsiveness that Kubernetes offers.
When I say Kubernetes is a shift as big as the move from the mainframe to distributed platforms, I’m not exaggerating.
Let's first look at a monolithic pipeline, because it's very different from a Kubernetes one. When we develop software in today's pipeline environment, we pull together custom code, shared internal libraries, open source objects, and database components. The application is written with a specific infrastructure in mind, such as a particular version of Oracle and Tomcat. The first step in our monolithic pipeline habit is to compile the application, often called 'the build.' Our build scripts perform the magic of creating the binaries: they pull source code into a local build directory, point to compiler libraries and the directories of other required libraries, compile the objects, and create the .jar, .war, and .exe files to be deployed. In some cases, we must run multiple builds to update configuration information for each specific environment (Dev, Test, Prod).
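Stripped to its essentials, that build habit can be sketched as a script that gathers every module's sources and bundles them into one deployable artifact. This is a minimal illustrative sketch, not a real build tool: the module names, directory layout, and artifact name are all hypothetical.

```python
from pathlib import Path

# Hypothetical module layout of a monolithic application.
SOURCE_DIRS = ["orders", "billing", "inventory"]
BUILD_DIR = Path("build")

def monolithic_build() -> Path:
    """Gather ALL sources, compile them, and emit ONE deployable artifact."""
    BUILD_DIR.mkdir(exist_ok=True)
    steps = []
    for module in SOURCE_DIRS:
        # In a real pipeline this would copy the source tree and invoke a
        # compiler (javac, mvn, etc.); here we only record each step.
        steps.append(f"compile {module} -> {BUILD_DIR}/{module}.class")
    # Every module lands in a single .war: one build, one deployment unit.
    artifact = BUILD_DIR / "app.war"
    artifact.write_text("\n".join(steps))
    return artifact

if __name__ == "__main__":
    print(monolithic_build())  # path to the single deployable artifact
```

The point of the sketch is the coupling: changing one module still produces, and later deploys, the entire artifact.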
Image 1: Traditional Continuous Delivery Pipeline
When we release, we ship all the objects from the build. When the build is monolithic, the deployment is monolithic: we update physical servers, VM images, cloud images, or a Docker container with the entire set of objects from the build. Our continuous delivery pipeline orchestrates this build-and-release process, centralizing the logs and pushing the process from Dev to Test and from Test to Prod.
A Kubernetes pipeline looks very different. Even the concepts of Dev, Test and Prod environments go away.
Ok – did your head just explode?
Good, because now you have an open mind.
Microservices offer a completely new way of approaching software design and deployment, one that is truly continuous. This process fits extremely well into an agile methodology where dev, test, and prod specialists are all on the same team, and the goal is to drive innovation via smaller incremental updates that happen daily. This is the goal of a Kubernetes pipeline. The concepts of siloed environments and our old monolithic pipeline habits begin to fade as microservices move us into a truly continuous practice. Our monolithic application is broken into individual, independently deployable services. A single microservice performs only a highly specialized function of the application. A collection, or package, of microservices becomes the equivalent of your monolithic application, but you rarely release in terms of the entire application.
In some cases, teams moving to microservices will still manage applications in a monolithic style. Each version of the application will be deployed to a siloed Kubernetes cluster. While this may work in the beginning, it does not take full advantage of the power of Kubernetes.
Image 2: Monolithic Applications Running in Individual Kubernetes Clusters
As teams developing on Kubernetes gain more experience, they will begin moving away from these silos. A service mesh will become part of the Kubernetes pipeline and will perform the request routing that controls user access to microservices. All end users will be defined to use a single cluster, which will contain multiple versions of each microservice to serve dev, test, and prod end users. A new version of an application will be deployed, one that brings with it ONLY a new version of a single microservice, v2 for example. The Kubernetes pipeline will instruct the service mesh to route some users (dev or test) to the application that includes v2 of the microservice. Once approved for release, the service mesh will be updated to give all users access to v2.
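The routing idea can be sketched in a few lines. This is not a real service mesh API; in practice the rule would live in mesh configuration such as an Istio VirtualService. The service name `catalog` and the group labels are hypothetical.

```python
# Hypothetical routing table: which version of the 'catalog' microservice
# each user group sees. In a real mesh this is declarative config; here it
# is just a dict keyed by user group.
routes = {"dev": "catalog-v2", "test": "catalog-v2", "prod": "catalog-v1"}

def route(user_group: str) -> str:
    """Return the microservice version that serves this group's requests."""
    # Unknown groups fall back to the stable production version.
    return routes.get(user_group, routes["prod"])

print(route("dev"))   # dev users exercise the new v2
print(route("prod"))  # everyone else stays on the stable v1
```

Dev and test traffic exercises v2 inside the same cluster that serves production, which is what lets the siloed environments disappear.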
Image 3: Software package (application) using independently deployable microservices
When this process becomes the norm, teams can quickly perform an incremental update of an application and deploy it just once. Once the new microservice (which creates a new version of the application) is ready for prime time, the Kubernetes pipeline will instruct the service mesh to route all users of the application to v2 of the microservice. Now you can see how our pipeline (dev, test, prod) becomes a bit blurred, or maybe just a bit more continuous, and with that more agile. A rollback is simply an instruction to the service mesh to re-route users; a roll forward is the same. Builds and releases become much smaller as we focus on the microservice rather than the monolithic application.
There are so many questions that still need to be answered. For example, will each microservice have its own workflow? How do you aggregate the monolithic equivalent of an application? How do we version a microservice? How do we track relationships?
I know we have spent a lot of time sorting out a life cycle process to manage our monolithic pipeline, but eventually it must go. Kubernetes and the new Kubernetes pipeline are ushering in a much better way. Our job is to begin seeing this new way, and thereby to begin breaking down the old patterns so a new, faster method of creating software can emerge. My advice, then, is to keep your mind open, and together we will create a Kubernetes pipeline process that drives innovation to end users faster than ever before. It's about time.
- Solving the Kubernetes Pipeline Challenges
- DeployHub’s Version Engine
- Domain Driven Design for microservices
- Microservices and Components
- Drive your Deployment Process using the Jenkins Continuous Deployment Plug-in
- Track Component to Endpoint with a Feedback Loop
- DeployHub and Jenkins – This Demo shows how DeployHub interacts with the Jenkins pipeline (including the blue ocean visualization).
- DeployHub Team Sign-up – The hosted team version can be used to deploy to unlimited endpoints by unlimited users.
- Get Involved in the OS Project – Help us create the best, open source continuous deployment platform available.