Getting to Know Kubernetes Minikube

Kubernetes Minikube requires some know-how to get working. I have been working with Docker containers while building out our SaaS version of DeployHub. The first step was to get the individual components running in their own Docker containers. I used DeployHub to build the containers as part of my CI/CD process. Deployment into a container happens during the docker build command: the files are copied into the image along with any configuration data, and the result is saved as a Docker image on my Docker host. DeployHub enables the CI process to run on a different server than your Docker host. DeployHub pulled my war files from the Jenkins repository, pushed them to the Docker host, and then ran a docker build. DeployHub maintains a full trace of what went where for easy viewing of the full process.
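As a rough sketch of the per-component build step described above (the image name, Dockerfile contents, and file paths here are illustrative placeholders, not the actual DeployHub configuration):

```shell
# Write a minimal Dockerfile that bakes the war file and its config
# into a Tomcat base image (names and paths are hypothetical).
cat > Dockerfile <<'EOF'
FROM tomcat:8-jre8
COPY myservice.war /usr/local/tomcat/webapps/
COPY myservice.conf /usr/local/tomcat/conf/
EOF

# The "deploy" happens here: docker build copies the files into the
# image layers and saves the image on the Docker host.
docker build -t myorg/myservice:1.0 .
```

The key point is that the deployment of the war file is part of the image build itself, which is why the CI/CD process hands the files to the Docker host before running the build.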

At this point, I had my war files deployed into Docker images, with the images stored in my local Docker repo. Using the docker run command, I spun the images up into running containers, then logged into the containers with a bash command to run some localized test cases. I repeated this process for each of my microservices. I then created an application in DeployHub to package together all of the component microservices for easy deployments.
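The run-and-inspect cycle above looks roughly like this; the image and container names are placeholders, not the actual DeployHub services:

```shell
# Start a container from the freshly built image.
docker run -d --name myservice -p 8080:8080 myorg/myservice:1.0

# Open a shell inside the running container to run localized
# test cases by hand.
docker exec -it myservice bash

# Tear the container down when finished.
docker rm -f myservice
```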

The next step was to introduce nginx to handle the routing of the REST API calls. I created an image for nginx containing the configuration files needed for the REST API routing. Again, I used DeployHub to handle the config files. Creating a repeatable process for nginx, as well as for my Java microservices, was key to a long-term sustainable solution. I then tested a single instance of each container, all hooked together including nginx. This single-instance setup gave me the opportunity to run basic transactions against each microservice and make sure that all of the plumbing was working correctly.
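A minimal sketch of the kind of routing configuration involved, assuming two hypothetical services reachable by hostname; the upstream names, paths, and ports are illustrative, not the actual DeployHub config:

```shell
# Write a minimal nginx config that routes REST API paths to backend
# containers (service names and ports are hypothetical).
cat > default.conf <<'EOF'
server {
    listen 80;

    # Route /api/users/... to the user microservice container.
    location /api/users/ {
        proxy_pass http://userservice:8080;
    }

    # Route /api/orders/... to the order microservice container.
    location /api/orders/ {
        proxy_pass http://orderservice:8080;
    }
}
EOF

# The config then gets baked into an nginx image, e.g.:
#   docker build -t myorg/nginx-router:1.0 .
# with a Dockerfile along the lines of:
#   FROM nginx
#   COPY default.conf /etc/nginx/conf.d/default.conf
```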

So let’s scale the solution…Kubernetes to the rescue

It took some time to figure out which Kubernetes objects to use for each container (I will post the Kubernetes config in a separate post), but with some trial and error I got a first pass. In order to run Kubernetes locally, you need Kubernetes Minikube. My Minikube setup used Oracle VirtualBox to host the Kubernetes cluster. It was pretty painless once I realized that Kubernetes Minikube requires a VM host machine. The minikube start command took care of getting the cluster running in a new VM instance. From there I ran kubectl create -f all.yaml to create my containers in Kubernetes. It failed miserably. Kubernetes Minikube has no access to my local Docker repo since it's running in a separate VM. Dockerhub was a quick way to solve that problem. I tagged my images with the right format and pushed them to the dockerhub repo. Kubernetes Minikube looks at dockerhub by default, so it was an easy fit. I ran kubectl create again, and everything was up and running as expected with one exception: accessing my application from outside of Minikube. I could connect to the containers and run local commands, but the networking between my host machine and the nginx container was baffling.
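The workaround above can be sketched as follows; the Docker Hub account and image names are hypothetical:

```shell
# Start the local cluster in a VirtualBox VM.
minikube start

# Minikube can't see the images in the local Docker repo, so re-tag
# them for Docker Hub and push (account/image names are hypothetical).
docker tag myorg/myservice:1.0 mydockerhubuser/myservice:1.0
docker push mydockerhubuser/myservice:1.0

# With the images now pullable from Docker Hub, create the Kubernetes
# objects from the combined manifest.
kubectl create -f all.yaml
```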

It turns out that Minikube puts its VM in “host-only” networking mode. “Host-only” means the VM can only be accessed from the machine running the VM instance. Manual network routes would need to be added to get past this limitation. Needless to say, I skipped the fancy network routing and tested end-to-end transactions from my host machine.
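For reference, here is one way to reach a service from the host without adding custom routes, assuming the nginx container is exposed through a NodePort service (the service name here is a hypothetical stand-in):

```shell
# The VM's host-only address is visible from the host machine.
minikube ip

# For a NodePort service, minikube can print a host-reachable URL
# for it directly.
minikube service nginx-router --url

# An end-to-end transaction from the host might then look like:
#   curl http://$(minikube ip):30080/api/users/
```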

SaaS version on Google

Kubernetes on Google is very similar to Minikube, if not easier. Google Container Engine requires you to run a minimum of 3 instances of a container. Google, like Minikube, has a cluster, but the Google cluster is based on 3 VM instances instead of 1. Sign in to the Google Compute Engine website and create a project, followed by creating a cluster. On your host side, install the gcloud SDK. Once the cluster is created, you will get information on how to connect to it. Run that connect command back on your host, making sure you shut down Minikube first. Google can use the dockerhub repos, but it's hard to configure, so instead I just re-tagged my images to use the Google repo and pushed them to Google. Finally, after connecting, I ran kubectl create again, and Google Container Engine spun everything up in the Google cluster. I did my end-to-end testing against Google, and everything worked, although not on the first try; I had some of my nginx configuration messed up. Also, the IP address for nginx ends up being found under the VPC group in the Google Console.
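The Google side can be sketched like this, with a hypothetical project name, cluster name, and zone standing in for the real ones:

```shell
# Point kubectl at the Google cluster; this is the "connect" command
# the console gives you (cluster name and zone are hypothetical).
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Re-tag the images for the Google Container Registry and push.
docker tag myorg/myservice:1.0 gcr.io/my-project/myservice:1.0
gcloud docker -- push gcr.io/my-project/myservice:1.0

# Create the same Kubernetes objects in the Google cluster.
kubectl create -f all.yaml
```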

All of the functionality of Kubernetes Minikube is there in Google Container Engine; the commands are just slightly different. gcloud prefaces some of the Google commands, and regular docker commands are still used, such as in the tagging step.