In recent years, you may have heard about Docker, Docker Swarm, Kubernetes, DevOps and so on. Instead of asking what they are, we should ask ourselves why we use them. Of course, I will not cover all of them in depth, because that is a really big topic. The most important thing is how we can benefit from them. Thus, I will show some hands-on configuration and let you see how to use these tools instead of just paper talk. Docker and Kubernetes will be our main focus, and I will show how to deploy them on Google Cloud. But before we talk about Kubernetes, we should know what Docker is. Yes, this part is entirely about Docker; if you want to learn Kubernetes only, you may have to wait for a later article.
Why Should We Use Docker?
One word: fast. By using Docker containers, we can develop faster, build our applications faster, deploy them faster and update them faster. A Docker container lets us package up an application with all of its required components, such as libraries and other dependencies.
In fact, system admins spend most of their time maintaining the current system and applications. Docker can cut out many maintenance tasks and let us put more time into developing new things.
In the Google Cloud console, we choose Cloud Shell and then click the editor mode so that we can easily review our code.
Then we have to set up our environment, e.g. define the project, zone and region:
gcloud config set project <your-project-id>
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1
Please download my prepared files from GitHub:
git clone https://github.com/manbobo2002/docker.git
Traditional Method to Deploy an Application
I think many of you may still be confused, so let's start with a very, very simple application instead of talking too much. We first host a server and curl it; it will respond with a string.
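The server itself is a small Node app. As a hedged sketch (the real app.js comes from the cloned repo, so this exact content is an assumption), you could create it like this:

```shell
# A minimal stand-in for app.js; treat the content as an assumption,
# the actual file is in the cloned repo.
cat > app.js <<'EOF'
const http = require('http');
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World');
}).listen(8080);
EOF
```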
If we want to implement it on a VM (in this tutorial we will use Cloud Shell directly), we just type the command below:
node app.js
The application is now listening on localhost port 8080.
We open a second terminal and curl localhost on port 8080; it gives us "Hello World" as the result. It works fine, right?
Use Docker to Deploy an Application
So, how do we implement it with Docker? Actually, we just need a file called Dockerfile to build up our environment.
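As a hedged sketch, such a Dockerfile could look like the following, written here as a shell heredoc so you can create it directly; the node:8 base tag is an assumption, so use whatever version the repo pins:

```shell
# Sketch of a five-step Dockerfile; the node:8 tag is an assumption.
cat > Dockerfile <<'EOF'
# 1. pull the Node image at a specific version
FROM node:8
# 2. set the working directory inside the container
WORKDIR /app
# 3. copy the current directory (".") into the container
COPY . /app
# 4. open container port 8080
EXPOSE 8080
# 5. start the application with node
CMD ["node", "app.js"]
EOF
```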
The Dockerfile consists of five steps. First of all, pull the Docker image with the specified version. A Docker image is nothing but an image that contains the application. In fact, it is like using different OS images, but this time you treat them as application versions. Secondly, set the working directory in the container. Thirdly, copy the current directory into the container. Note that "." means the current directory. Then we open port 8080 of the container. And finally we run the node command to kick-start the app.js application. When the Dockerfile is ready, just run:
docker build -t node-app:1.0 .
Now we can see Docker building the application step by step, following what is stated in our Dockerfile. The -t flag names and tags an image, i.e. name:tag. By default, the tag will be "latest" if we don't specify one.
Then we type:
docker images
It will show all the existing images. We can see that the node image is what we refer to in the Dockerfile, while node-app is the image we just built. If we want to remove the node image, we have to remove the node-app image first, because node-app builds on the node image. Now that we have built our image, the next step is to run it.
docker run -p 5000:8080 --name app-demo node-app:1.0
Then we open the second terminal to curl it:
curl http://localhost:5000
Yes, we successfully curled it. Let me briefly explain the command. First of all, -p maps localhost port 5000 to container port 8080, i.e. localhost port:container port. Then the --name flag gives the container a name.
What is Docker?
Now you can clearly see how the traditional approach and Docker each deploy an application. Some may say Docker is a micro VM, but that is not accurate. A Docker container is nothing but a process. You can run many Docker containers within a single VM.
Here you may think the traditional approach seems much simpler. Yes, if your application is not complicated, just keep it. But once your application grows larger and larger, Docker will help you a lot.
Some of you may be confused about the difference between an image and a container. When we simply say Docker, it usually refers to a Docker container. An image is nothing but a snapshot of a Docker container. As I mentioned before, you can treat a Docker image as the application-level counterpart of an OS image like a CentOS or Ubuntu image, and a Docker container is like an instance of Compute Engine.
Update the Application
Imagine that the application we just built is in production now, but I have to do some editing without any impact on production. In the traditional approach, we may have to make a copy of the previous app.js file and run it on another port in a second terminal, just like below.
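Those manual steps can be sketched roughly as follows; the stand-in app.js content and the target port 9000 are assumptions (any free port works), and the sed pattern assumes the port number appears literally in the file:

```shell
# Recreate a minimal stand-in app.js for illustration (an assumption).
cat > app.js <<'EOF'
require('http').createServer((req, res) => res.end('Hello World')).listen(8080);
EOF
# Traditional update: copy the file and move the copy to another port.
cp app.js app2.js
sed -i 's/8080/9000/' app2.js
# In a second terminal, run the copy: node app2.js
```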
Then we curl localhost and see the result.
As we can see, we have to perform manual operations like copy and paste. If the application were larger, we would have to use another VM and separate things into a development server and a production server. Then we would need to maintain two VM servers.
How about Docker? What if we want to update our application in Docker? Let’s try it out.
First of all, we still keep running the original application in the first terminal.
Secondly, we edit the app.js file directly and change the port in the Dockerfile to 9000. Then we build another image, tagged version 1.1, in the second terminal:
docker build -t node-app:1.1 .
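For reference, the edits made before this build can be sketched like this; minimal stand-in files are recreated here so the sketch is self-contained, and the sed pattern assumes the port appears literally in each file:

```shell
# Minimal stand-ins for app.js and the Dockerfile (assumptions, for illustration).
cat > app.js <<'EOF'
require('http').createServer((req, res) => res.end('Hello World')).listen(8080);
EOF
printf 'FROM node:8\nWORKDIR /app\nCOPY . /app\nEXPOSE 8080\nCMD ["node", "app.js"]\n' > Dockerfile
# Switch both the app and the Dockerfile to port 9000.
sed -i 's/8080/9000/' app.js
sed -i 's/8080/9000/' Dockerfile
```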
This time the build finishes almost immediately, because Docker reuses the cached node base image from the previous build. Finally we run the new image:
docker run -p 9000:9000 --name app-demo2 node-app:1.1
And test them in the third terminal:
curl http://localhost:5000
curl http://localhost:9000
It works! Here we can see that we do NOT have to do any copy-and-paste operations; we just edit the file directly.
Enter a Docker Container
As I mentioned before, some may say Docker is a micro VM. That is not accurate, but not totally wrong either. In fact, we can "enter" a container we created. Suppose we want to enter the app-demo2 container; we type:
docker exec -it [container_id] bash
In my case, container_id is 7749489a5441.
Then we are inside the container and can use commands like "ls" or "cat" just as in a normal Linux shell; type "exit" to leave the container.
We could clean up all the containers by typing:
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
Removing containers does not remove images. We will have to publish our images later, so just keep them.
Push Images to Docker Hub
If you are not using Google Cloud, you can push your image to Docker Hub. Please register a new account if you do not have one. In fact, Docker Hub is just like GitHub: it stores tens of thousands of images, both official and unofficial.
Click “Repositories” and then click “Create Repository”.
Input the image name and then click create.
A friendly reminder: your images are open to the public by default, and you only get one private repository for free. You have to pay if you want more private images.
Back in Cloud Shell, we log in to Docker first and then push our images to Docker Hub:
docker login
docker image tag node-app:1.1 [username]/node-app:1.1
docker push [username]/node-app
Then we could find our images on Docker Hub now.
Push Images to Container Registry
Since we are using GCP, we can in fact push directly to the GCP service called Container Registry.
docker image tag node-app:1.1 gcr.io/[project-id]/node-app:1.1
docker push gcr.io/[project-id]/node-app:1.1
It gives a similar result to Docker Hub.
We can also find our image in Container Registry. Now let's delete all the created images:
docker rmi -f $(docker images -aq)
If you want to delete the image in Container Registry, please check the image and click “DELETE”.
Without doubt, Docker has many benefits that I have not mentioned; I hope to share them with you in a later article. There are also many Docker commands not discussed here, so please see the cheat sheet below.