
How to deploy a RESTful API Application in Kubernetes


Introduction

Kubernetes is a powerful DevOps tool for deploying scalable software applications. It automates deploying multiple instances of an application, scaling them, rolling out updates, and monitoring the health of deployments.

 

This article will help you deploy your REST API in Kubernetes. First, you’ll need to set up a local Kubernetes cluster, then create a simple API to deploy.

 

 

Set Up Local Kubernetes

There are a couple of options for running Kubernetes locally; the most popular include minikube, k3s, kind, and microk8s. Any of these will work for this guide, but we will use k3s because of its lightweight installation.

 

Install k3d, a utility for running k3s. k3s will run in Docker, so make sure Docker is installed as well.

 

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
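After installing k3d, create a cluster. A minimal sketch, using a hypothetical cluster name (dev-cluster) and mapping the load balancer's port 80 to localhost:8081 (both the name and the port mapping are assumptions, not requirements):

k3d cluster create dev-cluster -p "8081:80@loadbalancer"

The port mapping makes the cluster's built-in Traefik ingress controller, which listens on port 80, reachable at localhost:8081. This will matter later when testing the Ingress.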

 



 

        

Create a Simple API

Create a simple API using Express.js.

 

mkdir my-backend-api && cd my-backend-api

touch server.js

npm init

npm i express --save

// server.js
const express = require("express");
const app = express();

// Return a mock user for the requested id
app.get("/user/:id", (req, res) => {
  const id = req.params.id;
  res.json({
    id,
    name: `John Doe #${id}`
  });
});

app.listen(80, () => {
  console.log("Server running on port 80");
});
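Before containerizing the app, you can sanity-check it locally (binding to port 80 may require elevated privileges on some systems):

node server.js
curl localhost:80/user/1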



Deploy

Now, deploy the image to your local Kubernetes cluster. Use the default namespace.
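The deployment below pulls a prebuilt image (andyy5/my-backend-api) from Docker Hub. If you want to build your own, a minimal Dockerfile sketch for the Express app might look like this (the node:18-alpine base image is an assumption):

# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY server.js .
EXPOSE 80
CMD ["node", "server.js"]

Build the image and push it to a registry the cluster can reach, replacing the placeholder with your own repository:

docker build -t <your-registry>/my-backend-api .
docker push <your-registry>/my-backend-api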

 

Create a deployment:

 

kubectl create deploy my-backend-api --image=andyy5/my-backend-api
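The Ingress created later routes traffic to a Service named my-backend-api on port 80, so the deployment also needs to be exposed as a ClusterIP Service. One way to do that is:

kubectl expose deployment my-backend-api --port=80 --target-port=80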

 

Check that everything was created and the pod is running:

kubectl get deploy -A

kubectl get svc -A

kubectl get pods -A

 

Once the pod is running, the API is accessible only from within the cluster. A quick way to verify the deployment from localhost is port forwarding:

 

Replace the pod name below with the one in your cluster:

kubectl port-forward my-backend-api-84bb9d79fc-m9ddn 3000:80

 

Now, you can send a curl request from your machine:

curl localhost:3000/user/123
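Given the handler in server.js, the response should look roughly like this:

{"id":"123","name":"John Doe #123"}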

Manage external access in a cluster

To correctly manage external access to the services in a cluster, we need to use an Ingress. Stop the port forwarding, and let's expose our API by creating an Ingress resource.

 

An ingress controller is also required, but by default k3d deploys the cluster with the Traefik ingress controller (listening on port 80).
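To confirm the controller is present, you can list the pods in the kube-system namespace, where k3s runs Traefik:

kubectl get pods -n kube-system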



Create an Ingress resource with the following YAML file, then apply it:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-backend-api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /user/
        pathType: Prefix
        backend:
          service:
            name: my-backend-api
            port:
              number: 80

kubectl create -f ingress.yaml
kubectl get ing -A
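With the Ingress in place, traffic reaches the API through Traefik instead of a port-forward. Assuming the cluster's port 80 was mapped to localhost:8081 at creation time (as in the earlier k3d sketch), a request looks like:

curl localhost:8081/user/123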



Final thoughts

As you can see, it is quite quick to let traffic into the cluster, or even to deploy an autoscaling cluster on Google Cloud Platform. There are some caveats, mostly resulting from the structural differences between a production cluster and the cluster you run locally to test your applications. Without an ingress controller such as NGINX, there are a number of extra steps to bear in mind: with minikube clusters you need to enable the ingress addon, and with Docker Desktop you must make sure the Service resource is of type LoadBalancer. Using an ingress controller seems to be the most suitable option, as there is little overhead when moving from local deployments to production.

 

Ivelin Belchev
About author

A technology enthusiast focused on data manipulation using advanced technologies in data mining, web design, and database administration. I love working with Python as a cross-platform solution for varied projects and sysadmin-related tasks. Expert in AWS technologies, Django, Scrapy, HTML, CSS, JavaScript, jQuery, and Selenium.