
Get containerized: Docker and Kubernetes

[PLACEHOLDER]

Introduction


Containers are great for DevOps, and as a development environment in general.

Photo by Pixabay from Pexels

In this article I want to take the opportunity to share the following with you:

  1. A brief overview of three popular container technologies: Docker, Docker Swarm and Kubernetes
  2. Differences between Docker Swarm and Kubernetes
  3. Examples of Docker and Kubernetes to give you hands-on experience

A container is a runtime environment. It allows you to separate applications from the infrastructure, and this decoupling lets you deliver applications at a faster pace.

With containers, you can have an application and its dependencies in one self-contained package. That package can easily be shared with other developers on the team, irrespective of the OS they are using. This gets developers set up faster and ready to contribute to the software being developed, maintained, or supported.
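As a sketch of what that self-contained package looks like, here is a minimal Dockerfile for a hypothetical Node.js app (the app name, port, and files are assumptions for illustration only):

```dockerfile
# Base image provides the runtime; nothing needs to be installed on the host
FROM node:20-alpine
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
# Copy the application code itself
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Anyone on the team can then build and run it with `docker build -t myapp .` and `docker run -p 3000:3000 myapp`, regardless of their host OS.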

Diagram 1

 

Just as they benefit the development process, containers also simplify the code pipeline (build/test/deploy) for DevOps.

If set up well, DevOps for containerized applications provides two immediate benefits:

  • Velocity: you build and ship faster.
  • Cost control: you can scale up or down, adjust elements of your CI (Continuous Integration) quickly, and provision resources effectively, sometimes at a lower cost than a traditional setup.
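As an illustration of such a containerized CI step, a GitHub Actions-style job might look like the following sketch (the registry address and image name are hypothetical):

```yaml
# Hypothetical CI job: build and push the application image on every push
name: build-image
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the container image
        run: docker build -t registry.example.com/myapp:${{ github.sha }} .
      - name: Push the image to the registry
        run: docker push registry.example.com/myapp:${{ github.sha }}
```

Because the build runs inside a disposable container-based runner, scaling the pipeline up or down is a matter of provisioning more runners, which is where the cost control comes from.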

Diagram 2

 

Brief overview of Docker, Docker Swarm and Kubernetes

 

Docker: “Docker is an open platform for developing, shipping, and running applications. Docker provides the ability to package and run an application in a loosely isolated environment called a container...” (source: the Docker docs; see the link in the reference section). Worker nodes are instances of the Docker Engine that run the containers.



Docker Swarm: Docker's native clustering solution for the containers produced. It allows you to administer containers running across clusters.


Kubernetes: Created by Google, Kubernetes is an “open-source system for automating deployment, scaling, and management of containerized applications.” (source: Kubernetes; see the link in the reference section). Nodes are virtual or physical machines, depending on the cluster, where the containers (grouped into Pods) run. Within the nodes you will also find elements outside the containers, such as the kubelet.
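To make the Pod concept concrete, a minimal Pod manifest looks like the following sketch (the names are illustrative):

```yaml
# A minimal Pod running a single nginx container
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` schedules the Pod onto a node, where the kubelet starts the container.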



 

Note

If you want to know more about terms such as Nodes, Pods, PodSpec, the usage of YAML or JSON within a Kubernetes setup, Docker Compose, the Docker CLI, etc., then we invite you to go to our reference section, which will take you to external links that will assist you in your journey. Image by Kubernetes tutorials.

Difference between Kubernetes and Docker Swarm

Usually when the question “Docker or Kubernetes?” arises, authors are referring to Docker Swarm vs Kubernetes, as they are most likely wondering which way to go for the orchestration of containers, meaning their management and distribution.

When containerization becomes the path for your infrastructure, you get many benefits, such as deploying and scaling applications quickly. This has an immediate benefit to the business, as it allows you to modernize and adapt at a faster pace in this competitive market.

As your business grows, the demand on the infrastructure supporting it will increase. The more you scale to adapt to business needs, the more you will need a way to coordinate, schedule, and monitor the containers' services, to ensure they stay in sync and work in harmony.

When it comes to comparing the container orchestration technologies tackled in this article, it is my opinion that Docker gets you started faster.

Having said that, the technologies are NOT mutually exclusive. In fact, they are more powerful when used together: as your infrastructure evolves, Kubernetes offers better features for larger-scale management that can benefit the enterprise. Therefore, using Kubernetes for the orchestration and management of the containers, while using Docker as your container solution, makes a good formula for those more complex, enterprise setups. For regular setups they are very competitive, and in that scenario it will come down to the architect's choice and level of comfort.

Below you will find a brief summary of the differences between Docker Swarm and Kubernetes. If you are looking for a table that portrays a list of key differences, then I recommend visiting the guru99 and phoenixNap links in our reference section.

 

| Docker Swarm | Kubernetes |
| --- | --- |
| No auto-scaling, but easy to set up | Automated scheduling and auto-scaling |
| Automatic load balancing | Load balancing requires manual configuration |
| ~~Tolerance ratio~~ | ~~Tolerance ratio~~ |
| Scaling up is faster | Scaling up is slower than Docker Swarm, but its cluster is said to be stronger |
| Good community | Good community |

Info Note block:
You may have noticed the strikethrough on “Tolerance ratio”. The reason is that while some may claim Kubernetes has higher tolerance, I would argue it depends on the additional tools or combination of technologies used to make your containerized setup more or less tolerant, as well as the investment behind it.

Fault tolerance is how a system continues its operations without interruption (no downtime) when a component (or a set of them) fails. People sometimes confuse “fault tolerance” with “high availability” (i.e. 99.99% uptime), but they are slightly different, and both are important. A system can be highly available but not fault-tolerant, or it can be both. To clarify: if a system is considered fault-tolerant, then it is also highly available.

| High availability | Fault tolerance |
| --- | --- |
| x% uptime (e.g. 99.99%) | 100% uptime |
| Light redundancy | Strong redundancy |
| Lower cost ($) | High cost ($$$) |

Back to tolerance. The closer we try to get to 100% fault tolerance, the more expensive it gets. For fail-silent (or fail-stop) failures, a system is k-fault tolerant if it can withstand k faults. Therefore, with k + 1 components, k components can fail while 1 still works. A practical way to see it, in the context of this article, is how many nodes the cluster can allow to fail while maintaining operations. Also, can the system identify and fix the failure itself, or does it need manual intervention?

Therefore, depending on how you set your architecture, you can achieve similar fault-tolerance with Kubernetes or Docker Swarm.
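As a sketch of that idea in Kubernetes terms, a Deployment with k + 1 = 3 replicas can lose k = 2 of them and still serve traffic (the names here are illustrative):

```yaml
# 3 replicas: the service survives the loss of any 2 of them (k = 2)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
```

Kubernetes will also recreate failed replicas automatically, which is the "fix itself" behavior mentioned above; Docker Swarm offers a comparable mechanism via `docker service create --replicas 3`.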

End of Info Note block


Docker and Kubernetes examples

Usage of Docker - example building app

In the following example you will get a docker-compose file that spins up two containers, connected to each other: (1) nginx, and (2) a Postgres DB (database).

Take the following docker-compose.yml sample file.

version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    restart: always
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_DB=dbtest
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
    ports:
      - "5432:5432"
    volumes:
      - ./.docker/conf/postgres/:/docker-entrypoint-initdb.d/

Based on this file, take the following steps:

  1. Create a directory of your choice
  2. Place the sample docker-compose file in the newly created directory
    1. Note: you could create a Dockerfile. While docker-compose is used to define and run the containers, the Dockerfile contains the commands to assemble the images you will have in the containers. In this case we are not using a Dockerfile.
  3. Run the command “docker-compose up”
  4. Run “docker images” to see your images
  5. Run “docker ps” to see your containers in a running state. You should see something similar to the below:

 

If you want to go into one of your containers (e.g. the nginx one), then run “docker exec -it [name of the container] bash”.

If you want to see the nginx page in a browser:

  1. In my case I am using Docker Machine on Windows. Therefore, get the IP via “docker-machine ip”
  2. Open the browser at http://[Machine IP]:[PORT]

If using Docker native or Docker Desktop for Mac or Windows, then simply open your browser of preference and go to http://127.0.0.1:[PORT] or http://localhost:[PORT]

Kubernetes example - converting docker-compose

Kubernetes has a great community. The Kubernetes documentation is good, and it provides a nice playground, using minikube (and Katacoda) within a terminal running in the browser, to get you started.

  • Minikube at https://kubernetes.io/docs/tutorials/hello-minikube/
  • Run the command "snap install kompose"
  • Get the same docker-compose file presented in the previous example.
  • Create a test folder
  • Move the sample docker-compose.yml to the test folder.
  • Run the command "kompose convert" (see image below)

  • Multiple files will get created. Run kubectl to apply all the created files:
kubectl apply -f db-service.yaml,web-service.yaml,db-deployment.yaml,db-claim0-persistentvolumeclaim.yaml,web-deployment.yaml
  • The output:


  • Now that all is running on Kubernetes, let us take a quick look at the services
  • If you execute "kubectl describe svc web", you will get details such as the IP
  • To validate the test, run "curl http://[IP]" and you should see the nginx default page:
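For reference, the kompose-generated web-service.yaml typically looks something like the following sketch (the labels and exact fields can differ by kompose version):

```yaml
# Sketch of a kompose-generated Service for the "web" compose service
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    io.kompose.service: web
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: web
```

The selector ties the Service to the Pods created by the matching web-deployment.yaml, which is why `kubectl describe svc web` shows the endpoint IPs.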

Container Orchestration Solutions

If you want to know about the offerings from Amazon AWS and Microsoft Azure, you can always search for them yourself, but we placed the links here to make your life easier.

Writing about these options and their benefits is out of scope for this article. If you want to know more about their offerings, please refer to the links within the CTAs below:



To recap

  1. I presented reasons why containers can be useful for setting up development environments, and for DevOps in general, and went over Docker, Docker Swarm and Kubernetes.
  2. I outlined differences between Docker Swarm and Kubernetes.
  3. I provided Docker and Kubernetes examples to help you familiarize yourself with them.

References:

 

