In this step-by-step blog post, we will integrate a Python Flask application with Docker and run it in a Kubernetes cluster, covering the following topics:
Before proceeding, make sure that your environment satisfies these requirements. Start by installing the following dependencies on your machine.
from flask import Flask
import requests

app = Flask(__name__)

API_KEY = "b6907d289e10d714a6e88b30761fae22"


@app.route('/')
def index():
    return 'App Works!'


@app.route('/<string:city>/<string:country>/')
def weather_by_city(country, city):
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    return data


if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
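The Dockerfile below copies and installs a requirements.txt file, so the project needs one. Assuming the app only depends on Flask and requests, a minimal version could look like this (versions left unpinned for simplicity; pin them as needed for reproducible builds):

```text
Flask
requests
```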
Dockerizing Python applications is a straightforward task. To do this, we need to add the following files to the project:
- Use python:3 as a base image for our application.
- Create the working directory inside the image and copy the requirements file. (This step helps in optimizing the Docker build time.)
- Install all the dependencies using pip.
- Copy the rest of the application files.
- Expose port 5000 and set the default command (CMD) for the image.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "python", "app.py" ]
We can now build the Docker image of our application using the below command:
$> docker build -t weather:v1.0 .
Running the application
We can run the application locally using Docker CLI as shown below:
$> docker run -dit --rm -p 5000:5000 --name weather weather:v1.0
Or we can use a Docker Compose file to manage the build and deployment of the application in a local development environment. For instance, the below Compose file will take care of building the Docker image for the application and deploying it.
version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
Running the application using docker-compose can be done using:
$> docker-compose up
Once the application is running, a curl command can be used to retrieve the weather data for London, for instance:
$> curl http://0.0.0.0:5000/london/uk/
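The JSON returned by the OpenWeatherMap sample endpoint reports temperatures in Kelvin. As a sketch of how a client might post-process that response, the snippet below uses a hard-coded, trimmed sample payload instead of a live call, and the `summarize` helper is a hypothetical name introduced here for illustration:

```python
# Sketch: extract a readable summary from an OpenWeatherMap-style response.
# The payload below is a trimmed sample; a real response has many more fields.

def kelvin_to_celsius(kelvin):
    """Convert Kelvin (OpenWeatherMap's default unit) to Celsius."""
    return round(kelvin - 273.15, 2)

def summarize(data):
    """Build a short human-readable summary from the weather payload."""
    temp_c = kelvin_to_celsius(data["main"]["temp"])
    description = data["weather"][0]["description"]
    return f"{data['name']}: {description}, {temp_c}°C"

sample = {
    "name": "London",
    "weather": [{"description": "light intensity drizzle"}],
    "main": {"temp": 280.32},
}

print(summarize(sample))  # London: light intensity drizzle, 7.17°C
```

The same keys (`main.temp`, `weather[0].description`, `name`) are what the curl call above returns, so the helper can be applied directly to `response.json()` in a real client.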
Running services directly with the docker command, or even with docker-compose, is not recommended for production because these are not production-ready tools: they will neither ensure that your application runs in a highly available mode nor help you scale it.
To illustrate the last point better: Compose is limited to a single Docker host and does not support running services across a cluster of machines.
As a result, there is a need for other solutions that provide such features. One of the best-known and most widely used is Kubernetes, an open-source project for automating the deployment, scaling, and management of containerized applications. It is widely adopted by companies and individuals around the world for the following reasons:
- It is free: The project is open-source and maintained by the CNCF.
- It is adopted by big companies such as Google, AWS, Microsoft, and many others.
- There are many cloud systems that offer Kubernetes managed services such as AWS, Google Cloud, and DigitalOcean.
- There are many plugins and tools developed by the Kubernetes community to make managing Kubernetes easier and more productive.
Creating a Kubernetes Cluster for your Development Environment
Kubernetes is a distributed system that integrates several components and binaries. This makes production clusters challenging to build, and running a full Kubernetes cluster in a development environment would consume most of the machine's resources. Furthermore, it would be difficult for developers to maintain such a local cluster.
This is why there is a real need to run Kubernetes locally in an easy and smooth way, with a tool that lets developers keep focusing on development rather than on maintaining clusters.
There are several options for achieving this; below are the top three:
- Docker for Mac: If you have a MacBook, you can install Docker for Mac and enable Kubernetes from the application's settings, as shown in the image below. You will then have a Kubernetes cluster deployed locally.
- Microk8s: A single-package, fully conformant, lightweight Kubernetes that works on 42 flavours of Linux. It can be used to run Kubernetes locally on Linux systems, including the Raspberry Pi. MicroK8s can be installed on CentOS using the snap command line, as shown in the snippet below.
- Minikube: Implements a local Kubernetes cluster on macOS, Linux, and Windows and supports most Kubernetes features. Depending on your operating system, you can select the commands needed to install it from this page. For instance, to install and start it on macOS, you can use the commands below.
$> sudo yum install epel-release
$> sudo yum install snapd
$> sudo systemctl enable --now snapd.socket
$> sudo ln -s /var/lib/snapd/snap /snap
$> sudo snap install microk8s --classic
$> brew install minikube
$> minikube start
Once Minikube, Microk8s, or Docker for Mac is installed and running, you can start using the Kubernetes command line to interact with the Kubernetes cluster.
The above tools can be used easily to bootstrap a development environment and test your Kubernetes Deployments locally. However, they do not implement all features supported by Kubernetes, and not all of them are designed to support multi-node clusters.
Minikube, Microk8s, and Docker for Mac are great tools for local development. However, for testing and staging environments, highly available clusters are needed to simulate and test the application in a production-like environment.
On the other hand, running Kubernetes clusters 24/7 for a testing environment can be very expensive. Make sure to run your cluster only when needed, shut it down when it's no longer required, and recreate it when it's needed again.
In part II of this series, we are going to show how to deploy our application to a Kubernetes testing cluster. We will create a Kubernetes Deployment for our Flask application and use Traefik to manage our Ingress and expose the application to external traffic.
(Disclaimer: The author is the Founder and CEO at Cloudplex)