Alpha Coder

Deploy microservices on Kubernetes

Kubernetes (a.k.a. k8s) has gained widespread adoption in recent years as a platform for microservices, thanks to its ability to automate app deployment at scale. Pinterest, for instance, runs a suite of over 1000 microservices to power its “discovery engine”. Imagine having to configure and manage the servers for these services by hand. It’s an engineer’s nightmare, to say the least.

Kubernetes bills itself as “a portable, extensible open-source platform for managing containerized workloads and services”. In simple terms, Kubernetes helps to automate the deployment and management of containerized applications. This means we can package an app (code, dependencies and config) in a container and hand it over to Kubernetes to deploy and scale without worrying about our infrastructure. Under the hood, Kubernetes decides where to run what, monitors the systems and fixes things if something goes wrong.

Microservice architecture

I particularly like James Lewis and Martin Fowler’s definition of microservices, as it points out why Kubernetes is such a good fit for the architecture: “The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies”. Kubernetes is, in effect, exactly that “fully automated deployment machinery”, providing a powerful abstraction layer atop server infrastructure.

Common Kubernetes terms

Below are some common terms associated with Kubernetes that we’ll use throughout this tutorial.

Cluster: a set of machines (nodes) managed by Kubernetes for running containerized applications.

Node: a worker machine (a VM or physical server) in a cluster. On GKE, nodes are Compute Engine VM instances.

Pod: the smallest deployable unit in Kubernetes; one or more containers that share storage and a network address.

Deployment: a declarative description of a desired workload state (image, number of replicas, update strategy) which Kubernetes works to maintain.

Service: a stable network endpoint that load-balances traffic across a set of pods.

Secret: an object for storing sensitive data such as passwords and API keys.

kubectl: the command-line tool for interacting with a Kubernetes cluster.

Yoloo

In this tutorial, we’ll be deploying a simple microservices app called Yoloo on Google Kubernetes Engine (GKE), a managed Kubernetes service on Google Cloud Platform. Yoloo uses a pre-trained YOLO (You Only Look Once) model to detect common objects such as bottles and humans in an image. It comprises two microservices: detector and viewer. The detector service is a Python/Flask app which takes an image and passes it through the YOLO model to identify the objects in it. The viewer service is a PHP app that acts as a front-end, providing a user interface for uploading and viewing images. The app uses two external managed services: Cloudinary for image hosting and Redis for data storage. The source code is available on my GitHub.

Download or clone the project with Git:

git clone https://github.com/nicholaskajoh/microservices.git

Detector service Dockerfile.

FROM python:3.6-stretch
EXPOSE 8080

RUN mkdir /www
WORKDIR /www
COPY requirements.txt /www/
RUN pip install -r requirements.txt

ENV PYTHONUNBUFFERED 1

COPY . /www/
CMD gunicorn --bind 0.0.0.0:8080 wsgi

Viewer service Dockerfile.

FROM php:7.2-apache
EXPOSE 80

COPY . /var/www/html/

Download the YOLO weights in the detector directory.

cd detector/ && wget https://pjreddie.com/media/files/yolov3.weights

Change directory to viewer/ and install the PHP dependencies. You need to have PHP and Composer installed.

composer install

Google Cloud Platform (GCP)

Visit https://console.cloud.google.com/home/dashboard and create a new project. You need to have a Google account.

NB: Google offers a free trial with $300 in credit (valid for 1 year) which you can use on any GCP product.

Create a new project on Google Cloud Platform

NB: Make sure you enable billing for your project.

Go to the Kubernetes section of GCP and create a new standard cluster. GKE uses VM instances on Google Compute Engine as nodes in the cluster.

Create a new k8s cluster

Google Container Registry (GCR)

Kubernetes uses container images to launch pods. Images need to be stored in a registry where they can be pulled from. GCP provides a registry, the Google Container Registry, which can be used to store Docker images. Let’s build the images for the detector and viewer services and push them to GCR.
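Assuming you have Docker and the gcloud CLI installed, the build-and-push steps might look like the sketch below, where {PROJECT_ID} is your GCP project ID. The image names must match those referenced in the deployment files.

```shell
# Authenticate Docker with Google Container Registry (one-time setup)
gcloud auth configure-docker

# Build and push the detector service image
docker build -t gcr.io/{PROJECT_ID}/detector-svc detector/
docker push gcr.io/{PROJECT_ID}/detector-svc

# Build and push the viewer service image
docker build -t gcr.io/{PROJECT_ID}/viewer-svc viewer/
docker push gcr.io/{PROJECT_ID}/viewer-svc
```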

Deployments

The detector and viewer services contain deployment files, detector-deployment.yaml and viewer-deployment.yaml respectively, which tell k8s what workloads we want to run.

Detector service deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: detector-svc-deployment
spec:
  replicas: 3
  minReadySeconds: 15
  selector:
    matchLabels:
      app: detector-svc
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: detector-svc
    spec:
      containers:
        - image: gcr.io/{PROJECT_ID}/detector-svc
          imagePullPolicy: Always
          name: detector-svc
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: detector-svc-secrets

In this deployment, we want to run 3 copies (replicas: 3) of the detector service (image: gcr.io/{PROJECT_ID}/detector-svc) for availability and scalability. We label the pods (app: detector-svc) so that they can easily be referenced as a group. We also choose rolling updates (type: RollingUpdate) as our redeployment strategy. A rolling update means we can update the app without any downtime: k8s gradually replaces pods in the deployment so that the application remains available to consumers or clients even while an update is taking place.
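Once this deployment is applied (we’ll do that in the kubectl section below), a rolling update can be watched and, if necessary, rolled back with standard kubectl commands. A quick sketch:

```shell
# Watch a rolling update until it completes
kubectl rollout status deployment/detector-svc-deployment

# Trigger a fresh rollout after pushing an updated image
# (imagePullPolicy: Always makes new pods pull the latest image)
kubectl rollout restart deployment/detector-svc-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/detector-svc-deployment
```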

Viewer service deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: viewer-svc-deployment
spec:
  replicas: 2
  minReadySeconds: 15
  selector:
    matchLabels:
      app: viewer-svc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: viewer-svc
    spec:
      containers:
        - image: gcr.io/{PROJECT_ID}/viewer-svc
          imagePullPolicy: Always
          name: viewer-svc
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: viewer-svc-secrets

We choose a different redeployment strategy (type: Recreate) for the viewer service. This strategy destroys the existing pods and recreates them with the updated image, which means a brief period of downtime during redeploys. Also, we’re going with 2 replicas here.

Notice envFrom under containers in both deployments? We’ll be loading our environment variables from a k8s Secret which we’ll create soon.

Services

The k8s services (not to be confused with microservices) in detector and viewer, detector-service.yaml and viewer-service.yaml, share traffic among a set of replicas and provide an interface for other applications to access them. The detector service uses the ClusterIP service type, which exposes the app on a cluster-internal IP. This means detector is only reachable from within the cluster. The viewer service uses the LoadBalancer service type, which exposes it to the outside world via an external load balancer.

Detector k8s service.

apiVersion: v1
kind: Service
metadata:
  name: detector-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: detector-svc

Viewer k8s service.

apiVersion: v1
kind: Service
metadata:
  name: viewer-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: viewer-svc

NB: We use the labels (app: detector-svc and app: viewer-svc) to select the groups of pods created by the detector and viewer deployments, and make both services available on port 80.
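Once the services are applied (see the kubectl section below), you can verify that the label selectors actually match pods by checking the service endpoints. A quick sketch:

```shell
# List services with their cluster IPs / external IPs
kubectl get services

# Show which pod IPs back each service; an empty list
# usually means a selector doesn't match any pod labels
kubectl get endpoints detector-service viewer-service
```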

Cloudinary and Redis

As mentioned earlier, Yoloo depends on Cloudinary and Redis. Cloudinary is a cloud-based image/video hosting service and Redis is an in-memory key-value database.

Create an account on Cloudinary and on Redis Labs (a managed Redis hosting service with a free tier).

Cloudinary console

Redis Labs configuration

Create .env files from the example env files in both services (.env.example) and populate them with your Cloudinary and Redis credentials.

Detector service .env

FLASK_APP=detector.py
FLASK_ENV=production
CLOUDINARY_CLOUD_NAME=somethingawesome
CLOUDINARY_API_KEY=0123456789876543210
CLOUDINARY_API_SECRET=formyappseyesonly

Viewer service .env

CLOUDINARY_CLOUD_NAME=somethingawesome
CLOUDINARY_API_KEY=0123456789876543210
CLOUDINARY_API_SECRET=formyappseyesonly
DETECTOR_SVC_URL=http://detector-service
REDIS_URL=redis://:password@127.0.0.1:6379

Notice the URL in DETECTOR_SVC_URL? Kubernetes creates DNS records within the cluster, mapping service names to their IP addresses. This means we can use http://detector-service without having to worry about which IP the service actually uses.
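You can see this in-cluster DNS in action by resolving and curling the detector service from a throwaway pod. A sketch, assuming the deployments and services from this tutorial have already been applied:

```shell
# Launch a temporary pod and resolve the detector service by name
kubectl run -it --rm dns-test --image=busybox --restart=Never \
  -- nslookup detector-service

# Or hit the service over HTTP (busybox ships with wget)
kubectl run -it --rm http-test --image=busybox --restart=Never \
  -- wget -qO- http://detector-service
```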

kubectl

kubectl is a CLI tool for running commands against Kubernetes clusters. To get Kubernetes to run our microservices, we need to apply our deployments and services on the cluster. Outlined below are the steps involved.

Install kubectl CLI for your OS.

Set your Yoloo GCP project as default on the gcloud CLI.

gcloud config set project {PROJECT_ID}

Set the default compute zone or region of your cluster. You can find this in the cluster details page on your GCP dashboard.

gcloud config set compute/zone {COMPUTE_ZONE}

or

gcloud config set compute/region {COMPUTE_REGION}

Generate a kubeconfig entry to run kubectl commands against your GKE cluster.

gcloud container clusters get-credentials {CLUSTER_NAME}    

NB: If you use minikube, you can use the following command to switch back to your local cluster.

kubectl config use-context minikube

Create k8s Secrets from the .env files in both services.

kubectl create secret generic detector-svc-secrets --from-env-file=detector/.env
kubectl create secret generic viewer-svc-secrets --from-env-file=viewer/.env

You can use the following commands to update the secrets.

kubectl create secret generic detector-svc-secrets --from-env-file=detector/.env --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic viewer-svc-secrets --from-env-file=viewer/.env --dry-run=client -o yaml | kubectl apply -f -

Visit your GKE cluster dashboard on GCP and check the Configuration section. You should see the detector and viewer service secrets.

GKE cluster Config showing detector and viewer service secrets

NB: If you want to view the secrets on your k8s cluster (e.g. when debugging), you can install the jq utility (https://stedolan.github.io/jq/) and run the following, where my-secrets is the name of your k8s secret.

kubectl get secret my-secrets -o json | jq '.data | map_values(@base64d)'
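The @base64d filter simply reverses the base64 encoding Kubernetes applies to secret values. A minimal sketch of the round trip, using a made-up value:

```shell
# Kubernetes stores secret values base64-encoded...
encoded=$(printf 'formyappseyesonly' | base64)
echo "$encoded"

# ...and @base64d (or base64 -d) decodes them back
printf '%s' "$encoded" | base64 -d
```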

Create the deployments.

kubectl apply -f detector/detector-deployment.yaml
kubectl apply -f viewer/viewer-deployment.yaml

Check the Workloads section of the dashboard. You should see the detector and viewer service deployments.

GKE cluster Workloads showing the microservice deployments

Create the services.

kubectl apply -f detector/detector-service.yaml
kubectl apply -f viewer/viewer-service.yaml

The detector and viewer k8s services can be found in the Services section of the dashboard.

GKE cluster Services showing the k8s services for Yoloo
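At this point everything should be up. A couple of sanity checks from the command line:

```shell
# All pods should reach the Running state
kubectl get pods

# The viewer service should eventually get an external IP
# (it shows <pending> while GCP provisions the load balancer)
kubectl get service viewer-service
```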

To visit the application, go to the viewer service page on the dashboard and locate the External endpoints IP address.

Viewer service external IP address


UI of the Yoloo app

Sample output image from Yoloo

