Introduction
Containers are great for DevOps, and for development environments in general.
Photo by Pixabay from Pexels
In this article I want to take the opportunity to share with you the following:
- A brief overview of three popular container technologies: Docker, Docker Swarm, and Kubernetes
- Differences between Docker Swarm and Kubernetes
- Examples of Docker and Kubernetes to give you hands-on experience
A container is a runtime environment that lets you separate applications from the infrastructure they run on. This decoupling allows you to deliver applications at a faster pace.
With containers, you can package an application together with its dependencies in one self-contained unit. That package can easily be shared with other developers on the team, irrespective of the OS they are using. This gets developers set up faster and ready to contribute to the software being developed, maintained, or supported.
Diagram 1
Just as they benefit the development process, containers also simplify the code pipeline (build/test/deploy) for DevOps.
If set up well, DevOps for containerized applications provides two immediate benefits:
- Velocity: you build and ship faster.
- Cost control: you can scale up or down, adjust elements of your CI (Continuous Integration) quickly, and provision resources effectively, sometimes at a lower cost than a traditional setup.
Diagram 2
Brief overview of Docker, Docker Swarm and Kubernetes
Docker: “Docker is an open platform for developing, shipping, and running applications. Docker provides the ability to package and run an application in a loosely isolated environment called a container...” (source: Docker docs; see the link in the reference section).
Docker Swarm: Docker's native clustering solution for the containers you produce. It lets you administer containers running across a cluster of machines; the worker nodes in a swarm are instances of the Docker Engine that run the containers.
Kubernetes: Created by Google, Kubernetes is an “open-source system for automating deployment, scaling, and management of containerized applications” (source: Kubernetes; see the link in the reference section). Nodes are virtual or physical machines, depending on the cluster, on which the containers (grouped into Pods) run. Within the nodes you will also find components that live outside the containers, such as the kubelet.
Note
If you want to know more about terms such as Nodes, Pods, PodSpec, the usage of YAML or JSON within a Kubernetes setup, Docker Compose, the Docker CLI, etc., we invite you to visit the reference section, which links to external resources that will assist you on your journey.
Difference between Kubernetes and Docker Swarm
Usually when the question “Docker or Kubernetes?” arises, authors are really referring to Docker Swarm vs Kubernetes: they are wondering which way to go for the orchestration of the containers, meaning their management and distribution.
When containerization becomes the path for your infrastructure, you gain many benefits, such as deploying and scaling applications quickly. This has an immediate benefit to the business, as it allows you to modernize and adapt at a faster pace in a competitive market.
As your business grows, the demand on the infrastructure supporting it will increase. The more you scale to adapt to business needs, the more you will need a way to coordinate, schedule, and monitor the containers' services, to ensure they stay in sync and work in harmony.
Comparing the container orchestration technologies tackled in this article, it is my opinion that Docker gets you started faster.
Having said that, the technologies are NOT mutually exclusive. In fact, they are more powerful when used together: as your infrastructure evolves, Kubernetes offers better features for large-scale management that benefit the enterprise. Therefore, using Kubernetes for the orchestration and management of the containers, while using Docker as your container solution, makes a good formula for those more complex, enterprise setups. For regular setups the two are very competitive, and the choice will come down to the architect's preference and level of comfort.
Below you will find a brief summary of the differences between Docker Swarm and Kubernetes. If you are looking for a table that portrays a list of key differences, I recommend visiting the guru99 and phoenixNap links in our reference section.
| Docker Swarm | Kubernetes |
| --- | --- |
| No auto-scaling, but easy to set up | Automated scheduling and auto-scaling |
| Automatic load balancing | Load balancing requires manual configuration |
| Scaling up is faster | Scaling up is slower than Docker Swarm, but the cluster is said to be stronger |
| Good community | Good community |
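To make the scaling rows concrete, here is a hedged sketch of what scaling looks like on each side. The service/deployment name `web` and all the numbers are hypothetical, and each command assumes you already have a running swarm or Kubernetes cluster:

```shell
# Docker Swarm: scaling is manual, but a single command
docker service scale web=5

# Kubernetes: manual scaling of a deployment...
kubectl scale deployment web --replicas=5

# ...or automatic scaling via a HorizontalPodAutoscaler
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```

Both are one-liners for the manual case; the difference is that Kubernetes can also adjust the replica count on its own once the autoscaler is in place.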
Info Note block:
You may have noticed the strikethrough on “Tolerance ratio”. The reason is that, while some may claim Kubernetes has higher tolerance, I would argue it depends on the additional tools or combination of technologies used to make your containerized setup more or less tolerant, as well as the investment behind it.
Fault tolerance is how a system continues its operations without interruption (no downtime) when a component (or a set of components) fails. People sometimes confuse “fault tolerance” with “high availability” (e.g. 99.99% uptime), but they are slightly different, and both are important. A system can be highly available but not fault-tolerant, or it can be both. To clarify: if a system is considered fault-tolerant, then it is also highly available.
| High availability | Fault tolerance |
| --- | --- |
| x% uptime (e.g. 99.99%) | 100% uptime |
| Light redundancy | Strong redundancy |
| Lower cost ($) | Higher cost ($$$) |
Back to tolerance. The closer we try to get to 100% fault tolerance, the more expensive it gets. Assuming fail-silent (or fail-stop) failures, a system is k-fault tolerant if it can withstand k faults. Therefore, with k + 1 components, k components can fail while 1 keeps working. A practical way to see it, in the context of this article: how many nodes can the cluster afford to lose while maintaining operations? And can the cluster identify and fix itself, or does it need manual intervention?
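As a quick sketch of that arithmetic (assuming fail-stop failures and the k + 1 rule above; the quorum note is an addition of mine, not from the tables):

```shell
# Tolerating k simultaneous fail-stop failures needs k + 1 components.
k=3
nodes_needed=$((k + 1))
echo "To tolerate $k failed nodes you need at least $nodes_needed nodes"

# Quorum-based components (e.g. etcd, which backs the Kubernetes control
# plane) need a majority alive, so tolerating k failures needs 2k + 1 members.
etcd_members=$((2 * k + 1))
echo "An etcd cluster tolerating $k failures needs $etcd_members members"
```

So a 4-node worker pool can survive 3 lost nodes, but a control plane built on quorum needs 7 members to make the same claim, which is one reason tolerance gets expensive.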
Therefore, depending on how you set your architecture, you can achieve similar fault-tolerance with Kubernetes or Docker Swarm.
End of Info Note block
Docker and Kubernetes examples
Usage of Docker - example building app
In the following example you will use a docker-compose file to spin up two containers connected to each other: (1) an nginx web server, and (2) a postgres DB (database).
Take the following docker-compose.yml sample file:
```yaml
services:
  web:
    image: nginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_DB=dbtest
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
    ports:
      - 5432:5432
    volumes:
      - ./.docker/conf/postgres/:/docker-entrypoint-initdb.d/
```
Based on this file, take the following steps:
- Create a directory of your choice.
- Place the sample docker-compose.yml in the newly created directory.
- Note: you could also create a Dockerfile. While docker-compose is used to define and run the containers, a Dockerfile contains the commands to assemble the images that run inside the containers. In this case we are not using a Dockerfile.
- Run the command `docker-compose up`.
- Run `docker images` to see your images.
- Run `docker ps` to see your containers in a running state; you should see the web and db containers listed.
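Put together, the commands from the steps above look like this (run from the directory containing docker-compose.yml, with Docker and Docker Compose already installed):

```shell
# Start both services in the background
docker-compose up -d

# List the images that were pulled (you should see nginx and postgres)
docker images

# List the running containers (you should see the web and db services)
docker ps
```

The `-d` flag is optional; without it, `docker-compose up` stays in the foreground and streams the containers' logs to your terminal.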
If you want to get a shell inside one of your containers (e.g. the nginx one), run `docker exec -it [name of the container] bash`.
If you want to see the nginx page in a browser:
- In my case I am using Docker Machine on Windows, so I get the IP via `docker-machine ip`.
- Open the browser at http://[machine IP]:[port].
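For example (the container name is a placeholder; yours will differ, and the `docker-machine` step only applies if you run Docker inside a VM as I do):

```shell
# Open an interactive shell inside the nginx container
docker exec -it <name-of-the-web-container> bash

# Find the Docker Machine VM's IP...
docker-machine ip

# ...then fetch the nginx page on the mapped port (80, per the compose file)
curl http://<machine-ip>:80
```

On a native Linux install of Docker there is no VM, so `http://localhost:80` works directly.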
Kubernetes example - converting docker-compose
Kubernetes has a great community. The Kubernetes documentation is good, and it provides a nice playground, using minikube (and Katacoda) within a terminal running in the browser, to get you started.
- Try minikube at https://kubernetes.io/docs/tutorials/hello-minikube/
- Install kompose, e.g. with `snap install kompose`.
- Take the same docker-compose file presented in the previous example.
- Create a test folder.
- Move the sample docker-compose.yml to the test folder.
- Run the command `kompose convert`.
- Multiple files will get created. Run `kubectl apply` on the generated files to apply them all.
- Now that everything is running on Kubernetes, let us take a quick look at the services.
- If you run `kubectl describe svc web`, you will get details such as the IP.
- To validate the test, run `curl http://[IP]`; you should see the nginx default page.
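The whole kompose workflow above can be sketched as follows (assuming a cluster such as minikube is already running; the service name `web` comes from the compose file, and the generated filenames are typical kompose output, not guaranteed):

```shell
# Convert the compose file into Kubernetes manifests
kompose convert

# Apply the generated manifests (list the files kompose actually created)
kubectl apply -f web-deployment.yaml -f web-service.yaml -f db-deployment.yaml

# Inspect the resulting services and grab the web service's IP
kubectl get svc
kubectl describe svc web

# Validate: you should get the nginx default page back
curl http://<service-ip>
```

If the service IP is only reachable inside the cluster, `minikube service web --url` is a common way to get a reachable URL instead.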
Container Orchestration Solutions
If you want to know about the offerings from Amazon AWS and Microsoft Azure, you could always search for them, but we placed the links here to make your life easier.
Writing about these options and their benefits is out of scope for this article. If you want to know more about their offerings, please refer to the links within the CTAs below:
AWS ECS
To recap
- I presented reasons why containers can be useful for setting up development environments, and for DevOps in general, and went over Docker, Docker Swarm, and Kubernetes.
- I outlined differences between Docker Swarm and Kubernetes.
- I provided Docker and Kubernetes examples to help you get familiar with both.
References:
- Docker docs > docker-what-is-a-container.
- Docker docs > Docker Swarm > Swarm-tutorial
- Kubernetes > Kubernetes-io
Articles with similar subject:
- Sumo logic > kubernetes-vs-docker
- Microsoft > containers-foundation-for-devops-collaboration
- PhoenixNap > kubernetes-vs-docker-swarm
- Guru99 > kubernetes-vs-docker
- Kompose > translate-compose-kubernetes
- Kubernetes tutorials > hello-minikube
- Docker Swarm > single node with Docker Swarm
- End-to-end docker nodejs build > docker-and-nodejs
---