Edmund Eryuba
Dive Into Containerization, Docker & Docker Compose

Modern software systems are expected to run consistently across multiple environments such as development laptops, testing servers, cloud platforms and production infrastructure.

One of the major challenges developers and data engineers face is ensuring that applications behave the same way regardless of where they are deployed. This challenge led to the rise of containerization, a technology that packages applications together with their dependencies into isolated, portable environments called containers.

Among the most widely used containerization tools today are Docker and Docker Compose. These tools simplify application deployment, improve scalability and reduce environment-related issues.

What is Containerization?

Containerization is the process of packaging an application together with everything it needs to run: the application's source code, runtime environment, libraries, dependencies and configuration files. The result is a lightweight unit called a container.

A container can run consistently on a developer's laptop, in virtual machines (VMs), on on-premise servers and on cloud infrastructure. Containers isolate applications from the underlying system while still sharing system resources efficiently.

Unlike traditional virtual machines, containers share the host operating system kernel, making them lightweight, fast and efficient.

Why are containers useful?

  • Portability – the isolated environment that containers provide decouples them from the environment in which they run. Because a container carries its own dependencies, it can run across many different operating systems and hardware platforms without modification.

  • Consistency – since containers are decoupled from their host environment, they behave the same regardless of where they are deployed. The isolated environment they provide is identical across development, testing and production.

  • Fast deployment – containers start within seconds and can be deployed rapidly across environments, which supports continuous integration (CI), continuous deployment (CD) and agile development.

  • Resource Efficiency - Containers consume fewer resources compared to virtual machines because they share the host operating system kernel. This reduces infrastructure cost and memory usage.


Docker

Docker is an open-source platform that allows you to build, deploy and manage containerized applications.

There are alternative containerization platforms, such as Podman; however, Docker is the leading player in this space. There is also Docker Inc., the company that sells the commercial version of Docker. Docker ships with a command line interface (CLI) through which you can perform all of the operations the platform provides.

Docker allows developers to create container images, run containers, share applications easily and automate deployments.

Core Docker components

  • Docker Images: Read-only blueprints from which containers are created. An image contains the application code, dependencies and configuration that define the isolated environment.

  • Docker Containers: Running instances of a Docker image; the container is what actually runs the application.

  • Dockerfile: A text file containing instructions to build a Docker image.

  • Docker Hub: The default cloud registry for storing and sharing Docker images. Teams can also run their own private registries from which images are pulled.

  • Docker Compose: A tool used to manage multiple containers using a single YAML configuration file. Instead of starting containers one by one, Docker Compose allows developers to define services, configure networks, manage volumes and start entire applications with one command.
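The components above map directly onto everyday CLI commands. The following sketch shows a typical image-to-container workflow, assuming Docker is installed and the daemon is running; the container name `demo_db` and the password are illustrative:

```shell
docker pull postgres:15            # download an image from Docker Hub
docker image ls                    # list images stored locally

docker run -d --name demo_db \
  -e POSTGRES_PASSWORD=secret \
  postgres:15                      # start a container from the image

docker ps                          # list running containers
docker stop demo_db                # stop the container
docker rm demo_db                  # remove it once stopped
```

Note how the image (`postgres:15`) is reused: you can start, stop and remove as many containers from it as you like without downloading it again.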

How Similar Is Docker to a Virtual Machine?

Docker and virtual machines (VMs) may seem similar because they both provide isolated environments for applications, but they differ fundamentally in how they achieve this isolation.

  • Virtual Machines: A VM includes a full copy of an operating system, the application, necessary binaries, and libraries—making it heavier and requiring more resources. VMs run on a hypervisor that emulates hardware for each VM, allowing multiple VMs to run on a single physical machine. This offers strong isolation but at the cost of performance and efficient resource usage.

  • Docker Containers: In contrast, Docker containers share the host system's OS kernel, making them much lighter and faster to start. Containers package the application and its dependencies, but they do not include an entire OS, relying instead on features provided by the host OS. This results in faster performance and more efficient resource utilization compared to VMs.

Thus, while both technologies allow for application isolation, Docker containers provide a more lightweight and efficient solution, especially suited for cloud-native and distributed applications where quick scaling and portability across environments are critical.
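The kernel-sharing point is easy to verify yourself. A container has no kernel of its own, so it reports the host's kernel version (a quick check assuming Docker and a Linux host; on macOS or Windows, the "host" kernel is that of Docker Desktop's Linux VM):

```shell
uname -r                          # kernel version of the host
docker run --rm alpine uname -r   # the same kernel version, reported from inside a container
```

If containers bundled their own OS the way VMs do, the two commands could report different kernels; because they share the host kernel, they match.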


Setting Up Docker

To begin working with Docker on your machine, you need to install Docker Desktop.

This comprehensive tool includes Docker Engine, Docker CLI, Docker Compose, and other essential components, providing everything you need to develop, test, and manage containers seamlessly. By installing Docker Desktop, you'll be equipped with:

  • The ability to use the Docker Command Line Interface (CLI) to manage containers, images, and networks from your terminal.
  • Access to a user-friendly graphical interface to easily monitor and manage your Docker setup.
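Once Docker Desktop is installed, a few commands confirm that everything is wired up correctly (the exact version numbers printed will vary):

```shell
docker --version            # confirm the Docker CLI is installed
docker compose version      # confirm Docker Compose is available
docker run --rm hello-world # pull and run a test container end to end
```

If `hello-world` prints its greeting, the CLI, the daemon and the registry connection are all working.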

Interacting with Docker: Engine, Daemon, and CLI

To effectively work with Docker, it's important to understand its architecture and how its main components interact. Here’s how these pieces fit together:

  • Docker Engine: This is the core part of Docker—a client-server application that enables you to build and run containers. The Docker Engine consists of two main components: the Docker Daemon and the Docker CLI, which communicate via a REST API.

  • Docker Daemon: Also known as dockerd, the Docker Daemon is a background service that manages Docker objects such as images, containers, networks, and volumes. It listens for API requests and performs the actions needed to build, run, and manage containers. The Daemon is responsible for the actual work of creating and running containers.

  • Docker CLI: The Docker Command Line Interface (CLI) is the tool you use to interact with Docker from your terminal. When you type a command starting with docker, the CLI sends your request to the Docker Daemon via the REST API. The CLI acts as the user-facing entry point for managing containers, images, and other Docker resources.

The Docker Engine is the overall system that powers Docker, made up of the Docker Daemon (the server) and the Docker CLI (the client). When you run a Docker command, the CLI communicates with the Daemon, which then carries out the requested operation. This architecture allows you to efficiently manage containers and images.
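You can see this client-server split directly from the terminal:

```shell
docker version   # prints separate "Client" and "Server" sections,
                 # reflecting the CLI and the daemon as distinct components
docker info      # daemon-side details: storage driver, running containers, images
```

If the daemon is not running, `docker version` still prints the Client section but reports an error for the Server, which is a useful way to diagnose "is the daemon up?" problems.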


Practical Use of Docker & Docker Compose

This example demonstrates a simple but realistic multi-container application using three services: a Python Flask application, a PostgreSQL database and pgAdmin (a database management UI).

Project Structure

docker-demo/
│
├── docker-compose.yml
├── Dockerfile
├── requirements.txt
├── app.py
└── .env

1. Create the Python Application (app.py)

Contents:

from flask import Flask
import psycopg2
import os

app = Flask(__name__)

DB_NAME = os.getenv("POSTGRES_DB")
DB_USER = os.getenv("POSTGRES_USER")
DB_PASSWORD = os.getenv("POSTGRES_PASSWORD")
DB_HOST = os.getenv("POSTGRES_HOST")

@app.route("/")
def home():
    """Attempt a database connection and report the result."""
    try:
        conn = psycopg2.connect(
            dbname=DB_NAME,
            user=DB_USER,
            password=DB_PASSWORD,
            host=DB_HOST
        )
        conn.close()  # release the connection once the check succeeds

        return "Application connected to PostgreSQL successfully!"

    except Exception as e:
        return f"Database connection failed: {e}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

2. Create Requirements File (requirements.txt)

Contents:

flask
psycopg2-binary

3. Create Dockerfile (Dockerfile)

Contents:

# Use lightweight Python image
FROM python:3.11-slim

# Set working directory inside container
WORKDIR /app

# Copy dependency file
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application files
COPY . .

# Expose Flask port
EXPOSE 5000

# Start application
CMD ["python", "app.py"]

4. Create Environment Variables File (.env)

Contents:

POSTGRES_DB=demo_db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres123
POSTGRES_HOST=postgres_db

5. Create Docker Compose File (docker-compose.yml)

Contents:

services: # Defines all application containers

  # PostgreSQL database service
  postgres_db:
    image: postgres:15
    container_name: postgres_container
    restart: unless-stopped
    env_file:
      - .env
    ports:
      - "5432:5432"
    volumes:
      # Persist database data
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      # Checks whether PostgreSQL is ready
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Python flask application
  flask_app:
    build: .
    container_name: flask_container
    restart: unless-stopped
    depends_on:
      postgres_db:
        condition: service_healthy
    env_file:
      - .env
    ports:
      - "5000:5000"
    healthcheck:
      # Checks if the application responds (uses Python rather than curl,
      # since curl is not installed in the python:3.11-slim image)
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5000')"]
      interval: 30s
      timeout: 10s
      retries: 3

  # pgAdmin database UI
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin_container
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: admin123
    ports:
      - "5050:80"
    depends_on:
      postgres_db:
        condition: service_healthy

# Persistent storage
volumes:
  postgres_data:

6. Build and Start Containers

Run:

docker compose up --build

Explanation:

  • up → creates and starts the containers
  • --build → rebuilds the images before starting
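In everyday use you will often run the stack in the background and watch individual services. A few commonly used variations, assuming you are in the project directory:

```shell
docker compose up -d --build       # -d runs all services detached (in the background)
docker compose logs -f flask_app   # follow the Flask container's logs
docker compose ps                  # show service status, including health checks
```

`docker compose ps` is particularly useful here, since it reports whether the `postgres_db` health check has passed and therefore whether `flask_app` has been allowed to start.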

7. Verify Running Containers

Run:

docker ps

Expected containers:

postgres_container
flask_container
pgadmin_container

8. Access the Application

Flask Application

Open browser: http://localhost:5000

pgAdmin

Open: http://localhost:5050

Login:
Email: admin@example.com
Password: admin123
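Besides the browser, you can exercise both published ports from the command line. This assumes the stack is up and that a psql client is installed on the host (it is not required for the containers themselves):

```shell
curl http://localhost:5000   # should return the success message from app.py

# Connect to the database through the published port; the credentials
# come from the .env file (password: postgres123)
psql -h localhost -p 5432 -U postgres -d demo_db
```

Inside pgAdmin, register a new server using host `postgres_db` (the Compose service name), not `localhost`, because pgAdmin runs in its own container on the Compose network.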

9. Stop Containers

Run:

docker compose down

To also remove database storage:

docker compose down -v

In Summary

When the docker compose up --build command is executed, Docker Compose reads the docker-compose.yml file and automatically creates the three services defined in the project: the PostgreSQL database, the Flask application and pgAdmin.

Docker first builds the Flask application image using the instructions inside the Dockerfile, installs the required Python dependencies and then starts each container in the correct order. The PostgreSQL container initializes the database and stores its data in a persistent Docker volume so that the data is not lost when the container stops.

The Flask application container waits until the database passes its health check before attempting to connect, ensuring reliable startup. Once connected, the Flask app becomes accessible through port 5000 on the local machine, while pgAdmin becomes available on port 5050 for database management through a browser interface.

Docker automatically creates an internal network that allows the containers to communicate using service names such as postgres_db instead of IP addresses. Together, Docker and Docker Compose simplify deployment by orchestrating all services, networking, storage, startup dependencies and health monitoring using a single configuration file.
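You can observe this service-name networking directly. From inside the Flask container, the hostname `postgres_db` resolves to a container IP on the Compose-managed network (assuming the stack is running):

```shell
# Resolve the postgres_db service name from inside the Flask container
docker compose exec flask_app python -c \
  "import socket; print(socket.gethostbyname('postgres_db'))"

# List the networks Docker knows about; Compose creates one per project
docker network ls
```

This is why the `.env` file sets `POSTGRES_HOST=postgres_db` rather than an IP address: the name stays stable even though the container's IP changes between runs.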
