Hello Minikube - Walkthrough and Tutorial

By Robert Young • 31st July 2020 • 50 min read


Managing applications is hard. Scaling them is even harder. Thankfully there are a number of tools available to help us with this challenge, utilising things such as containers and the cloud.

Let's start with a scenario, where Bob works for CompanyX and is currently running a web application on EC2s within AWS. The website is an e-commerce site where customers purchase SOMETHING. The infrastructure consists of 2 instances sitting behind an Application Load Balancer (ALB), each with 3 services running on different ports. Because the application is written by multiple teams, 2 of the services are written in Node.js and the other is in Golang. To deploy changes, he SSHes onto each of the instances, does a git pull, installs dependencies, compiles the service and then stops and starts the service.

I know what you’re thinking… What can go wrong? Well, let's talk about some of these issues:

  1. What if the build fails? You have just installed new dependencies, so if you restart the existing Node server, it may fail.
  2. Bob manages to update Server 1, but Server 2 fails. Rolling back is slow and tedious.
  3. While Bob is installing dependencies and compiling the service, he is using resources of the instance which may be required if it’s a busy period.
  4. Bob deploys a version but users are reporting that the checkout isn’t working. Rolling back takes as long as it did to deploy, which could lead to loss of sales.
  5. If the application is getting hammered by users, it needs to scale. The only way this can currently scale is if Bob manually provisions a new instance, deploys the application and adds it to the ALB.

A common theme here is that there is a lot of room for error, mainly caused by human intervention. Automation is key in this scenario to help fix some of these issues.

Some of you may be thinking, “Why not utilise Auto Scaling Groups (ASG)?”. Even if the instances are in an ASG, you may only need to scale one of the 3 services. For example, ServiceA can’t handle any more requests, so it needs another instance of that service. The ASG triggers a new instance, however now you have also scaled out ServiceB and ServiceC, consuming resources that weren’t needed. We also still have the issue of manual deployments.

These issues can be prevented in a number of different ways, using tools such as Apache Mesos, Elastic Container Service (ECS), Docker Swarm or OpenShift, but today we are going to cover how to do it using Kubernetes. I have chosen Kubernetes for a number of reasons: the community behind it, the fact that it is backed and battle-tested by Google, and that it is part of the Cloud Native Computing Foundation (CNCF), so it can be used on multiple cloud providers.

Primarily in this blog post, we will be getting up and running with Minikube on your local machine. The best way to describe Kubernetes is as a cluster of computers all working together to keep the desired state active and happy, with the Operating System (OS) operations abstracted away for simplicity. There are many different resources within Kubernetes, some of which we’ll touch on today, to help you get applications running in a declarative way.

To demonstrate Kubernetes, we’ll start by setting up a single service running within Docker and deployed to Minikube, so let’s dive right into the fun stuff.

For this demonstration, I will be using macOS Catalina, so some of the commands won’t work on other platforms, but I will post links where I can for the Windows equivalents.

Installing Go

There are a number of ways to download Go, so head to the Golang Downloads Page to find a solution that fits your needs. In my case, I downloaded go1.14.4.darwin-amd64.pkg and ran it, but there is also the option of downloading it via Homebrew if you are on macOS:

brew install go

In most cases, installing is as easy as downloading and running the bundled installer, which will extract the binaries and put them on your PATH, or installing via a package manager.

Once complete, you can validate the installation by running:

go version

This should print out something like:

go version go1.14.4 darwin/amd64

Building Our Go Application

We’ll create a web server in Go that has 3 very simple paths:

  1. / - returns whether the server is running
  2. /health - returns the health of the server
  3. /echo - returns the hostname of the machine running the web server

We’ll start by creating the directory where the source code will live. It’s good practice in Go to put it under:

~/go/src/SOURCE_CONTROL_PROVIDER/YOUR_USERNAME/PROJECT_NAME

So in our case it will be:

mkdir -p ~/go/src/github.com/cloudgineers/hello-minikube
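Then change into it, since the Go commands that follow assume you are inside the project directory:

cd ~/go/src/github.com/cloudgineers/hello-minikube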

Create a main.go file with:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"

    "github.com/joho/godotenv"
)

// EchoResponse represents the response to return
type EchoResponse struct {
    Hostname string `json:"hostname"`
    Path     string `json:"path"`
}

func init() {
    // Load .env file
    if err := godotenv.Load(); err != nil {
        panic(err)
    }
}

func main() {
    port := os.Getenv("PORT")

    fmt.Printf("Listening on port " + port + "\n")

    http.HandleFunc("/", handler)
    http.HandleFunc("/echo", echoHandler)
    http.HandleFunc("/health", healthHandler)
    http.ListenAndServe(":"+port, nil)
}

// Default handler
func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Server is running!")
}

// Return information for the running application
func echoHandler(w http.ResponseWriter, r *http.Request) {
    hostname, err := os.Hostname()

    if err != nil {
        panic(err)
    }

    data := EchoResponse{
        Path:     "/" + r.URL.Path[1:],
        Hostname: hostname,
    }

    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    json.NewEncoder(w).Encode(data)
}

// Health check for the application. Should return 200
func healthHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Passed")
}

And create a .env file with:

# Default environment variables for the project to start
PORT=8080

The first thing we need to do is install the dependencies:

go get

This should have downloaded the dependency required for the project and put it here:

ls ~/go/src/github.com/joho/godotenv

We can now build the service:

go build

This will create an executable binary in your current working directory. Let’s start it:

./hello-minikube

The server should have started on http://localhost:8080, assuming you haven’t changed the port in the .env file. View the other paths to make sure everything is working as expected. A path to note is /echo, which will return your machine’s name:

curl http://localhost:8080/echo
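You can hit the other two paths in the same way; both should respond if the server started cleanly:

curl http://localhost:8080/
curl http://localhost:8080/health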

Now let’s do what any good developer does and add some tests. Create a main_test.go file with:

package main

import (
    "encoding/json"
    "io/ioutil"
    "net/http"
    "net/http/httptest"
    "os"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestHandlerReturnsServerIsRunning(t *testing.T) {
    res := httptest.NewRecorder()
    req, _ := http.NewRequest("GET", "/", nil)

    handler(res, req)

    assert.Equal(t, "Server is running!", readBody(res))
}

func TestHandlerEchosInformation(t *testing.T) {
    var response EchoResponse
    expectedHostname, _ := os.Hostname()
    expectedPath := "/testPath"

    res := httptest.NewRecorder()
    req, _ := http.NewRequest("GET", expectedPath, nil)

    echoHandler(res, req)
    json.Unmarshal([]byte(readBody(res)), &response)

    assert.Equal(t, expectedHostname, response.Hostname)
    assert.Equal(t, expectedPath, response.Path)
}

func TestHandlerPassesHealthCheck(t *testing.T) {
    res := httptest.NewRecorder()
    req, _ := http.NewRequest("GET", "/", nil)

    healthHandler(res, req)

    assert.Equal(t, "Passed", readBody(res))
}

func readBody(res *httptest.ResponseRecorder) string {
    content, _ := ioutil.ReadAll(res.Body)
    return string(content)
}

These tests check that each endpoint returns the correct information.

Let's install the assertion dependency to run the tests:

go get github.com/stretchr/testify

And run these tests:

go test

You should see:

PASS
ok      github.com/cloudgineers/hello-minikube    0.014s

Our next step is to run this web server in a Docker container.

Building it in a Docker image

Docker can run on many different operating systems. To find the most suitable instructions, I would start here: https://docs.docker.com/get-docker/

Let's verify that you have Docker running:

docker -v

I am currently running:

Docker version 19.03.8, build afacb8b

Let's start by building a Docker image for this Go application. Create a Dockerfile with:

# Use version 1.13 golang base image
FROM golang:1.13

ENV DIR /go/src/github.com/cloudgineers/hello-minikube

# Use the project directory as the working directory
WORKDIR $DIR

# Copy the source code into the working directory
COPY . $DIR

# Build the application
RUN go get

RUN go build

# Expose the port that the application is running on
EXPOSE 8080

# Start the application
CMD ["./hello-minikube"]

Let's build it with the following command, which will build it in the current context, name it hello-minikube, and tag it with test:

docker build -t hello-minikube:test .

Once that has completed successfully, let's run it! The command below will:

  • Run a container from the image we’ve just built
  • Port map the web server to your host machine
  • Remove the container once exited

docker run --rm -p 8080:8080 hello-minikube:test

Navigate to http://localhost:8080 and you should see a running server! Let's check to see what the hostname is, now that it’s running in a container.

curl http://localhost:8080/echo

This should now return the container ID instead, as the application is isolated from your host machine. You can confirm the ID of the running container with:

docker ps --filter "ancestor=hello-minikube:test" | awk 'FNR == 2 { print $1 }'

Now we have a container running with our Go server inside, it’s time to try and deploy this to Kubernetes.

Running the Go webapp in Minikube

The first step is to install kubectl. Don’t ask me the correct way to pronounce this, as it seems no one can agree, but I always seem to say Kube Control in my head whenever I see it. You can find install instructions here. You can verify it has installed correctly by running:

kubectl version

Don’t worry if it doesn’t say anything about your server version, we will get to that bit soon.
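If you just want to confirm the client is installed, and skip the attempt to contact a cluster, the --client flag does exactly that:

kubectl version --client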

The next step is to install Minikube. You can find install instructions here. But for macOS, do:

brew install minikube

You can verify you have installed Minikube by running:

minikube version

Depending on how you have installed Minikube and which hypervisor you have chosen, the following command may differ. Because I am using macOS and Docker Desktop for Mac, which has HyperKit as a core component, I can run:

minikube start --kubernetes-version 1.18.3

If you were using VirtualBox for example, you can run:

minikube start --kubernetes-version 1.18.3 --driver virtualbox

Now that Minikube is up and running, it will automatically switch your kubectl context to minikube. This can be confirmed by running:

kubectl config current-context

Now if you check the kubectl version, you should see that the server version is populated also:

kubectl version

Outputs:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Let's confirm that Minikube's node is up and running. You should see that the status is Ready.

kubectl get nodes

If it’s not, then pause this walkthrough and try to get it into a working state. Running the following may give you some clues:

kubectl describe nodes

When you start Minikube, it creates its own Docker daemon, so the image that we built before won’t be recognised. Alternatively, you can push to a Docker registry, such as Docker Hub or the GitLab Registry, or if you really want a challenge, you can try to host your own registry using Harbor, but for demonstration purposes, I’m going to do it all locally. To build the image in Minikube's Docker daemon, we need to switch to it using environment variables. Luckily, Minikube has a built-in command to do this for us!
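For reference, pushing the image to Docker Hub instead would look something like the following, where your-dockerhub-username is a placeholder and the image field in the k8s.yaml we create later would then need to reference the pushed tag:

docker tag hello-minikube:test your-dockerhub-username/hello-minikube:test
docker push your-dockerhub-username/hello-minikube:test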

Let's view the environment variables:

minikube docker-env

As you can see, these variables are read by the docker CLI so it knows where the Docker daemon is running. Let's export these variables in the terminal:

eval $(minikube -p minikube docker-env)

Listing images now returns the Kubernetes images that are currently being used by Minikube:

docker images

NOTE: If you’re thinking about how to get your original Docker environment back, don't worry. Opening a new terminal will reset it to how it was before.

Let's build the image again:

docker build -t hello-minikube:test .

Now it’s time to create and have a look at our k8s.yaml file:

apiVersion: v1
kind: Namespace
metadata:
  name: hello-minikube
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-minikube-app
  namespace: hello-minikube
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-minikube-app
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-minikube-app
    spec:
      containers:
        - name: hello-minikube-app
          image: hello-minikube:test
          imagePullPolicy: Never # building docker image inside minikube. not for production use
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: hello-minikube-app
  namespace: hello-minikube
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31995
  selector:
    app: hello-minikube-app
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}

Let's break this down.

Namespace

Separating resources into namespaces is good practice within Kubernetes. Some clusters have more than one environment running, or even completely different applications with hundreds of resources. Can you imagine trying to manage a cluster like that without namespaces? More info on best practices can be found here. In our case, we gave our namespace the name hello-minikube.
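If you are curious which namespaces already exist in a fresh Minikube cluster before we add ours, you can list them:

kubectl get namespaces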

Deployment

The deployment resource gives Kubernetes a definition of how our application should run within the cluster.

  • .metadata.name = the name of the deployment, which will be used by the service so it knows where to send traffic to.

  • .metadata.namespace = which namespace the deployment resource should be deployed to, which is hello-minikube in our case.

  • .spec.replicas = the desired number of pods that should always be running.

  • .spec.selector = how the deployment identifies which pods belong to this deployment.

  • .spec.strategy.rollingUpdate = the rolling update settings for the deployment strategy, which will start new pods, then kill the old ones once the new ones report they are healthy.

  • .spec.strategy.rollingUpdate.maxUnavailable = the number of pods that can be made unavailable during a new deployment.

  • .spec.strategy.rollingUpdate.maxSurge = the maximum number of additional pods that can be created for a new deployment.

  • .spec.template.metadata.labels = should match spec.selector.matchLabels.

So in our case, when we deploy an update, assuming everything is healthy, the sequence will be:

  1. v1.0.0, v1.0.0, v1.0.0
  2. v1.0.0, v1.0.0, v1.0.0, v1.0.1
  3. v1.0.0, v1.0.0, v1.0.1
  4. v1.0.0, v1.0.0, v1.0.1, v1.0.1
  5. v1.0.0, v1.0.1, v1.0.1
  6. v1.0.0, v1.0.1, v1.0.1, v1.0.1
  7. v1.0.1, v1.0.1, v1.0.1

  • .spec.template.spec.containers = the container definition for the pod. Note that pods can contain more than one container, however most of the time they will only contain one.
  • .spec.template.spec.containers.name = the name of the container.
  • .spec.template.spec.containers.image = the location of the docker image.
  • .spec.template.spec.containers.imagePullPolicy = this is only set for minikube and our walkthrough, to tell Kubernetes not to pull the image, as we had already built it locally. Normally, this wouldn’t be set as Kubernetes would need to pull from a registry.
  • .spec.template.spec.containers.livenessProbe = defines a health check for the container. If this check fails, Kubernetes will kill the container and start a new one. In this scenario, it will check the HTTP status code of /health on port 8080 of the container. It will wait 10 seconds before sending the first probe.
  • .spec.template.spec.containers.readinessProbe = tells Kubernetes whether it is ready to receive requests, so in our case, we have set it to the same definition as liveness probe.
  • .spec.template.spec.containers.ports = the port that the web server is listening on and should be exposed.
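Once the manifest has been applied (we'll do that shortly), you can watch a rollout like the sequence above in real time with:

kubectl rollout status deployment/hello-minikube-app --namespace hello-minikube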

Service

  • .metadata.name = the name of the service.
  • .metadata.namespace = which namespace the service resource should be deployed to.
  • .spec.ports = a list of port mappings. We have one mapping defined: the port the service listens on (8080), the port the pods are listening on (targetPort 8080), and the port exposed on each node (nodePort 31995).
  • .spec.selector.app = the pod selector, so the service knows where to send traffic to.
  • .spec.type = the type of service; LoadBalancer in our case, which exposes the service externally.
  • .spec.sessionAffinity = None disables sticky sessions, so connections are spread across the pods rather than the same client always being sent to the same pod.
  • .spec.externalTrafficPolicy = Cluster routes traffic across all nodes, at the cost of not preserving the client's source IP.

Let's apply these resources to Minikube:

kubectl apply -f k8s.yaml

If all was successful you should see:

namespace/hello-minikube created
deployment.apps/hello-minikube-app created
service/hello-minikube-app created

Let's view our resources:

kubectl get all

Oh no, nothing is there! That’s because we have created our resources in a different namespace. There are a number of ways to fix this. We can either add the namespace flag to the command:

kubectl get all --namespace hello-minikube

But this can get very tedious if you’re running lots of commands. Or, my preferred way is to set it in the context:

kubectl config set-context minikube --namespace hello-minikube
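To double-check that the namespace has stuck to the context, one option (assuming a standard kubeconfig) is:

kubectl config view --minify | grep namespace: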

Now viewing all resources should return everything we’ve just deployed:

kubectl get all

You should see something similar to the following:

NAME                                     READY   STATUS    RESTARTS   AGE
pod/hello-minikube-app-868457696-nq9hp   1/1     Running   0          25m
pod/hello-minikube-app-868457696-pd5fz   1/1     Running   0          25m
pod/hello-minikube-app-868457696-vfqf4   1/1     Running   0          25m

NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/hello-minikube-app   LoadBalancer   10.99.43.111   <pending>     8080:31995/TCP   25m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-minikube-app   3/3     3            3           25m

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-minikube-app-868457696   3         3         3       25m

One thing to note is that the service external IP is pending. We can fix that by running a tunnel, which again, Minikube has provided for us. Open a new terminal and run:

minikube tunnel

This should now have given the service an external IP address, which you can see again with kubectl get svc. Find the external IP address and run the following a few times. You should see that the hostname changes and matches the name of a pod. This shows that the service is working and distributing traffic across all 3 pods.
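If you would rather not copy the IP by hand, a quick way to capture it into a shell variable for the curl below is kubectl's jsonpath output:

EXTERNAL_IP=$(kubectl get service hello-minikube-app --namespace hello-minikube --output jsonpath='{.status.loadBalancer.ingress[0].ip}')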

curl $EXTERNAL_IP:8080/echo

Magic.

The last thing I would like to show you is the dashboard that comes bundled with Minikube. This isn’t specific to Minikube and can also be installed on any Kubernetes cluster. In a new terminal, run:

minikube dashboard

This will open up a dashboard in your browser.

Changing the namespace in the left-hand panel to hello-minikube will give us a nice overview of our application. All the information you can see here is also available through the kubectl CLI; however, it certainly helps to view things visually sometimes!

Final Words

Let's go back to our scenario and see if we have managed to solve any of the issues Bob was having.

What if the build fails? You have just installed new dependencies, so if you restart the existing Node server, it may fail.

Now it doesn’t matter if the build fails; the application won’t be affected, as we have moved the build off the machine where the application is running: we build Docker images and push them to a registry instead. We can still restart the pods as many times as we want while this is happening.

Bob manages to update Server 1, but Server 2 fails. Rolling back is slow and tedious.

In our Kubernetes scenario, if a pod fails to start, it won’t affect the running of the application, as Kubernetes won’t kill the old container until the new one has reported that it’s healthy.

While Bob is installing dependencies and compiling the service, he is using resources of the instance which may be required if it’s a busy period.

The build of the application has been offloaded and containerised.

Bob deploys a version but users are reporting that the checkout isn’t working. Rolling back takes as long as it did to deploy, which could lead to loss of sales.

While we haven’t discussed it in this walkthrough, Kubernetes supports the functionality to roll back to a previous deployment. The cluster nodes should already have the Docker image pulled onto them, so starting it again should be very quick.
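For example, rolling back the deployment from this walkthrough to its previous revision would be a single command, assuming a previous revision exists:

kubectl rollout undo deployment/hello-minikube-app --namespace hello-minikube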

If the application is getting hammered by users, it needs to scale. The only way this can currently scale is if Bob manually provisions a new instance, deploys the application and adds it to the ALB.

Now, this is something we haven’t solved in this walkthrough as we were using Minikube. However, in a real Kubernetes cluster, and depending on how it has been configured, the nodes can scale out based on triggers, such as CPU or network requests, but that is a whole other blog post.
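As a taster, pod-level scaling can be declared with a Horizontal Pod Autoscaler. A minimal sketch for our deployment might look like the following, assuming the metrics server addon is enabled and the containers declare CPU requests (ours currently don't):

kubectl autoscale deployment hello-minikube-app --namespace hello-minikube --cpu-percent=70 --min=3 --max=10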

Tidy Up

Keeping an unused Kubernetes cluster running on your machine is a huge waste of resources, so let's delete it:

minikube delete

I hope that this walkthrough has been a good starting point for getting started with Minikube and Kubernetes. Let us know how you got on or any challenges you faced.


Written by
Robert Young

Rob is a Co-Founder at Cloudgineers and a deeply motivated and hardworking AWS Certified Professional with a history of front-end, back-end, database, DevOps and infrastructure experience who has a passion for designing, developing and maintaining highly available and scalable applications.

About Cloudgineers

Cloudgineers is a private community and hiring platform for certified cloud engineers: developers, architects and DevOps practitioners.
