
Get containerized: Docker and Kubernetes



Containers are great for DevOps, and as a development environment in general.

Photo by Pixabay from Pexels

In this article I want to share the following with you:

  1. A brief overview of three popular container technologies: Docker, Docker Swarm and Kubernetes
  2. Differences between Docker Swarm and Kubernetes
  3. Examples of Docker and Kubernetes to give you hands-on experience

A container is a runtime environment that allows you to separate your applications from the infrastructure they run on. This decoupling lets you deliver applications at a faster pace.

With containers, you can have the applications, and their dependencies, in one self-contained package. That package can be easily shared with other developers on the team irrespective of the OS they are using. This gets developers set up faster and ready to contribute to the software being developed, maintained or supported.
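To illustrate the self-contained package idea, here is a sketch of a Dockerfile (a hypothetical Python web app; the base image, file names and start command are assumptions for illustration, not from this article) declaring an app and its dependencies in one place:

```dockerfile
# Hypothetical example: package an app and its dependencies together.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so they are cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself
COPY . .
CMD ["python", "app.py"]
```

Any teammate, on any OS with Docker installed, can then build and run the same environment with "docker build -t myapp ." and "docker run myapp".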

Diagram 1


Similar to its benefits for the development process, containers also simplify the code pipeline (build/test/deploy) for DevOps.

If set up well, DevOps for containerized applications provides two (2) immediate benefits:

  • Velocity: building faster.
  • Cost control: you can scale up or down, adjust elements of your CI (Continuous Integration) pipeline quickly, and provision resources effectively, sometimes at a lower cost than a traditional setup.

Diagram 2


Brief overview of Docker, Docker Swarm and Kubernetes


Docker: “Docker is an open platform for developing, shipping, and running applications. Docker provides the ability to package and run an application in a loosely isolated environment called a container...”. Worker nodes are instances of the Docker Engine used for running the containers. (Source: the Docker docs; read more via the link in the reference section.)

Docker Swarm: Docker's native clustering tool for the containers produced. It allows you to administer containers running across a cluster of machines.

Kubernetes: Created by Google, Kubernetes is an “open-source system for automating deployment, scaling, and management of containerized applications.” (Source: Kubernetes; read more via the link in the reference section.) The nodes are virtual or physical machines, depending on the cluster, where the containers (grouped into Pods) run. Each node also runs components outside the containers, such as the kubelet.



If you want to know more about terms such as Nodes, Pods, PodSpec, the use of YAML or JSON within a Kubernetes setup, Docker Compose, the Docker CLI, etc., we invite you to visit our reference section, whose external links will assist you in your journey. Image by Kubernetes tutorials.

Difference between Kubernetes and Docker Swarm

Usually when the question “Docker or Kubernetes?” arises, authors are referring to Docker Swarm vs Kubernetes, as most likely they are wondering which way to go for the orchestration of containers, meaning their management and distribution.

When containerization becomes the path for your infrastructure, you get many benefits, such as deploying and scaling applications quickly. This has an immediate benefit to the business, as it allows you to modernize and adapt at a faster pace to the latest and greatest in this competitive market.

As your business grows, the demand on the infrastructure supporting it will increase. The more you scale to adapt to business needs, the more you will need a way to coordinate, schedule and monitor the containers' services, to ensure the services stay in sync and work in harmony.

When it comes to comparing the container orchestration technologies tackled in this article, it is my opinion that Docker Swarm gets you started faster.

Having said that, the technologies are NOT mutually exclusive. In fact, they are more powerful when used together: as your infrastructure evolves, Kubernetes offers better features for large-scale management that can benefit the enterprise. Therefore, using Kubernetes for the orchestration and management of the containers, while using Docker as your container solution, makes a good formula for those more complex, enterprise setups. For regular setups they are very competitive, and in that scenario it will come down to the architect's choice and level of comfort.

Below you will find a brief summary of the differences between Docker Swarm and Kubernetes. If you are looking for a table portraying a list of key differences, I recommend visiting the guru99 and phoenixNap links in our reference section.


Docker Swarm                                  | Kubernetes
----------------------------------------------|----------------------------------------------
No auto-scaling, but easy to set up           | Automated scheduling and auto-scaling
Automatic load balancing                      | Load balancing requires manual configuration
Tolerance ratio **                            | Tolerance ratio **
Scaling up is faster                          | Scaling up is slower than Docker Swarm, but its clusters are said to be stronger
Good community                                | Good community

Info Note block:
You may have noticed the strikethrough on “Tolerance ratio”. While some claim Kubernetes has higher tolerance, I would argue it depends on the additional tools or combination of technologies used to make your containerized setup more or less tolerant, as well as the investment behind it.

Fault tolerance is how a system continues its operations without interruption (no downtime) when a component (or a set of them) fails. People sometimes confuse “fault tolerance” with “high availability” (i.e. 99.99% uptime), but they are slightly different, and both are important. A system can be highly available but not fault-tolerant, or it can be both. To clarify: if a system is considered fault-tolerant, then it also has high availability.

High availability             | Fault tolerance
------------------------------|------------------
x% uptime (e.g. 99.99%)       | 100% uptime
Light redundancy              | Strong redundancy
Lower cost ($)                | Higher cost ($$$)
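To make the uptime percentages concrete, here is a small sketch (plain arithmetic, not from the article) converting an uptime percentage into the downtime budget it allows per year:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def yearly_downtime_minutes(uptime_percent: float) -> float:
    """Minutes of downtime per year allowed by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100.0)

# "Four nines" (99.99%) allows roughly 52.6 minutes of downtime per year,
# while true fault tolerance targets 100% uptime, i.e. zero.
print(round(yearly_downtime_minutes(99.99), 1))
```

This is why the cost curve in the table above is so steep: each extra "nine" cuts the allowed downtime by a factor of ten.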

Back to tolerance. The closer we try to get to 100% fault tolerance, the more expensive it gets. Under a fail-silent (or fail-stop) failure model, a system is k-fault tolerant if it can withstand k faults. Therefore, with k + 1 components, k can fail while 1 keeps working. A practical way to see it, in the context of this article, is how many nodes the cluster can allow to fail while maintaining operations. Also: can it self-identify and fix itself, or does it need manual intervention for the fix?
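As a concrete tie-in to Kubernetes (a sketch of standard quorum arithmetic, not specific to this article's setup): the Kubernetes control plane commonly stores its state in etcd, a Raft-based store, where a cluster of n members needs a majority to operate and therefore tolerates floor((n - 1) / 2) member failures:

```python
def tolerated_failures(n_members: int) -> int:
    """Node failures a majority-quorum (Raft-style) cluster of n members survives."""
    # A majority of members must remain, so floor((n - 1) / 2) may fail.
    return (n_members - 1) // 2

# A 3-member cluster tolerates 1 failure; 5 members tolerate 2.
print(tolerated_failures(3), tolerated_failures(5))
```

Note that even-sized clusters add cost without adding tolerance (4 members still tolerate only 1 failure), which is why odd cluster sizes are the usual recommendation.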

Therefore, depending on how you set your architecture, you can achieve similar fault-tolerance with Kubernetes or Docker Swarm.

End of Info Note block

Docker and Kubernetes examples

Using Docker - example of building an app

In the following example you will get a docker-compose file that spins up two containers connected to each other: (1) an nginx web server, and (2) a postgres DB (database).

Take the following docker-compose.yml sample file.

version: '3'
services:
  web:
    image: nginx
    ports:
      - 80:80
    restart: always
    depends_on:
      - db
  db:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_DB=dbtest
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin # the postgres image requires a password to start
    ports:
      - 5432:5432
    volumes:
      - ./.docker/conf/postgres/:/docker-entrypoint-initdb.d/

Based on this file, take the following steps:

  1. Create a directory of your choice
  2. Place the sample docker-compose file in the newly created directory
    1. Note: you could also create a Dockerfile. While docker-compose is used to define and run the containers, a Dockerfile contains the commands to assemble the images that run in the containers. In this case we are not using a Dockerfile.
  3. Run the command "docker-compose up"
  4. Run "docker images" to see your images
  5. Run "docker ps" to see your containers in a running state. You should see something similar to the below:


If you want to go into one of your containers (e.g. the nginx one), run "docker exec -it [name of the container] bash"

If you want to see the nginx page on browser then:

  1. In my case I am using Docker Machine on Windows, so I get the IP via "docker-machine ip"
  2. Open the browser at http://[Machine IP]:[PORT]
If you are using native Docker or Docker Desktop for Mac or Windows, then simply open your browser of preference and go to http://127.0.0.1:[PORT] or http://localhost:[PORT]

Kubernetes example - converting docker-compose

Kubernetes has a great community. The Kubernetes documentation is good, and it provides a nice playground, using minikube (and Katacoda) within a terminal running in the browser, to get you started.

  • Install Minikube (link in the reference section)
  • Run the command "snap install kompose"
  • Get the same docker-compose file presented in the previous example.
  • Create a test folder
  • Move the sample docker-compose.yml to the test folder.
  • Run the command "kompose convert" (see image below)

  • Multiple files will get created. Run kubectl to apply all the files created:
kubectl apply -f db-service.yaml,web-service.yaml,db-deployment.yaml,db-claim0-persistentvolumeclaim.yaml,web-deployment.yaml
  • The output:

  • Now that everything is running on Kubernetes, let us take a quick look at the services
  • If you execute "kubectl describe svc web" you will get details, such as the IP
  • To validate the test, run "curl http://[IP]" and you should see the nginx default page:
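For reference, the files kompose generates are standard Kubernetes manifests. As a rough sketch, web-deployment.yaml might look like the following (label names and field values depend on your compose file and kompose version, so treat this as an illustration, not exact output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  template:
    metadata:
      labels:
        io.kompose.service: web
    spec:
      containers:
        - name: web
          image: nginx        # taken from the compose file's image key
          ports:
            - containerPort: 80
```

Because these are ordinary manifests, you can edit them (replicas, resource limits, probes) before running kubectl apply, which is where Kubernetes starts to pay off over the plain compose file.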

Container Orchestration Solutions

If you want to know about the offerings from Amazon AWS and Microsoft Azure, you can always search for them, but we placed the links here to make your life easier.

Writing about these options, and their benefits, is out of the scope of this article. If you want to know more about their offerings, please refer to the links within the CTAs below:

To recap

  1. I presented reasons why containers are useful for setting up development environments, and for DevOps in general, and went over Docker, Docker Swarm and Kubernetes.
  2. I outlined differences between Docker Swarm and Kubernetes.
  3. I provided Docker and Kubernetes examples to help you get familiar with both.


