Docker: A Practical Guide for Developers

D. Rout

March 11, 2026 · 14 min read

If you've spent any time in a development team, you've almost certainly heard the phrase that haunts engineers everywhere: "It works on my machine."

It's usually delivered with a shrug, a nervous laugh, and the quiet dread of a production outage caused by some invisible difference between one developer's laptop and the server running the app. Different OS versions, different library versions, different environment variables — the list of ways your software can silently break when it moves between environments is staggering.

Docker was built to kill that problem dead.

In this tutorial, we'll cover:

  • What Docker is and the core problem it solves
  • Key concepts: images, containers, volumes, and networks
  • How to install Docker
  • Writing your first Dockerfile
  • Running, stopping, and managing containers
  • Using Docker Compose for multi-container apps
  • Best practices and where to go next

By the end, you'll have a real mental model of Docker and a working containerized application.


What Problem Does Docker Solve?

The Environment Hell Problem

Imagine you're building a web app. It runs Node.js 20, uses a PostgreSQL 15 database, and depends on a handful of native C libraries. On your laptop, everything is perfect. Then:

  • Your colleague checks out the code. They're running Node.js 18. Things break.
  • You push to the staging server. It's running Ubuntu 20.04; your laptop is on macOS. A native dependency behaves differently.
  • You push to production. The ops team installed a system package last week that conflicts with yours.

This is environment drift — the slow divergence between the environments your code is expected to run in. It wastes hours, causes subtle bugs, and erodes trust between development and ops teams.

The Old Solution: Virtual Machines

Before Docker, the gold standard for environment consistency was the Virtual Machine (VM). A VM runs a full operating system on top of your physical hardware, giving you a sandboxed, reproducible environment.

The problem? VMs are heavy. Each one bundles an entire OS kernel, which can take gigabytes of disk space and minutes to boot. Running a few VMs on a developer laptop is feasible; running thousands on a server cluster is expensive.

The Docker Solution: Containers

Docker introduced containers — a lightweight alternative to VMs. Instead of virtualizing hardware and running a full OS per environment, containers share the host machine's OS kernel and isolate only the application and its dependencies.

The result:

              Virtual Machines   Docker Containers
Boot time     Minutes            Seconds (often milliseconds)
Size          Gigabytes          Megabytes
Isolation     Full OS            Process-level
Portability   Moderate           Excellent
Overhead      High               Low

A container packages your app, its runtime, its libraries, and its configuration into a single, portable unit. That unit runs identically on your laptop, your colleague's laptop, a CI server, and a cloud VM.

"Works on my machine" becomes "runs in a container" — and containers run everywhere.


Core Concepts

Before writing any code, let's nail down the vocabulary.

Images

A Docker image is a read-only blueprint for a container. Think of it like a class in object-oriented programming, or a snapshot of a filesystem at a point in time.

Images are built in layers. Each instruction in a Dockerfile (more on this shortly) adds a layer on top of the previous one. Layers are cached and reused, which makes builds fast and storage efficient.

You can:

  • Build your own images with a Dockerfile
  • Pull pre-built images from Docker Hub, the public registry of images

Containers

A container is a running instance of an image. Using the class analogy: if an image is a class, a container is an object — an actual, live thing created from that blueprint.

You can run multiple containers from the same image simultaneously. They're isolated from each other and from the host machine, but you can explicitly connect them via networks and volumes.

Dockerfile

A Dockerfile is a plain-text file containing instructions for building a Docker image. It defines the base OS, copies in your application code, installs dependencies, and specifies how the app starts.

Volumes

Containers are ephemeral by default — when a container stops, any data it wrote to its filesystem is gone. Volumes are Docker's mechanism for persisting data beyond the lifetime of a container.

Volumes are stored on the host filesystem and mounted into the container at a specified path. They're essential for databases and any service that needs to retain state.

Networks

By default, Docker containers are isolated from each other. Networks let containers communicate. When you use Docker Compose (covered below), containers on the same Compose project are automatically networked together and can reach each other by service name.


Installing Docker

Docker is available for macOS, Windows, and Linux via Docker Desktop — a GUI application that bundles the Docker engine, CLI, and useful tooling. (On Linux servers, you can also install the Docker Engine directly, without the Desktop GUI.)

Head to the official installation page and follow the instructions for your OS:

→ Install Docker Desktop

Once installed, verify it's working:

docker --version
# Docker version 26.x.x, build ...
 
docker run hello-world
# Should print a "Hello from Docker!" message

If you see that message, you're ready to go.


Your First Dockerfile

Let's containerize a simple Node.js web server. Create a new directory and add these files:

app.js

const http = require('http');
 
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Docker!\n');
});
 
server.listen(3000, () => {
  console.log('Server running on port 3000');
});

package.json

{
  "name": "docker-demo",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  }
}

Now create the Dockerfile (no file extension):

# 1. Base image — start from an official Node.js image
FROM node:20-alpine
 
# 2. Set the working directory inside the container
WORKDIR /app
 
# 3. Copy dependency files first (for better layer caching)
COPY package*.json ./
 
# 4. Install dependencies
RUN npm install
 
# 5. Copy the rest of the application code
COPY . .
 
# 6. Expose the port the app listens on
EXPOSE 3000
 
# 7. Command to run the app when the container starts
CMD ["node", "app.js"]

Let's unpack each instruction:

  • FROM node:20-alpine — Every image starts from a base. node:20-alpine is an official Node.js 20 image built on Alpine Linux, a tiny distribution (~5 MB). Using Alpine keeps your images lean.
  • WORKDIR /app — Sets /app as the current directory for all subsequent commands. If it doesn't exist, Docker creates it.
  • COPY package*.json ./ — Copies package.json (and package-lock.json if it exists) before the rest of the code. This is a caching strategy: if your dependencies haven't changed, Docker reuses the cached layer from a previous build and skips npm install.
  • RUN npm install — Runs inside the image during build time. The resulting node_modules are baked into the image.
  • COPY . . — Copies all remaining project files into the container's /app directory.
  • EXPOSE 3000 — Documents that the container listens on port 3000. This is metadata — it doesn't actually publish the port (you do that at runtime).
  • CMD ["node", "app.js"] — The default command to run when the container starts. Use the JSON array form (exec form) — it avoids spawning a shell and handles signals correctly.
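The difference between the two CMD forms is worth seeing side by side — a sketch:

```dockerfile
# Shell form — Docker wraps the command in /bin/sh -c, so the shell
# becomes PID 1 and SIGTERM from `docker stop` may never reach node:
# CMD node app.js

# Exec form — node itself is PID 1 and receives signals directly:
CMD ["node", "app.js"]
```

With the shell form, `docker stop` often waits out its grace period and then SIGKILLs the container, because the intermediate shell doesn't forward the signal.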

Building and Running Your Container

Build the Image

From the directory containing your Dockerfile:

docker build -t my-node-app .

  • -t my-node-app — Tags the image with the name my-node-app
  • . — The build context (current directory)

You'll see Docker pulling the base image and executing each layer. On subsequent builds, unchanged layers are served from cache — very fast.

Verify the image exists:

docker images
# REPOSITORY      TAG       IMAGE ID       CREATED         SIZE
# my-node-app     latest    abc123...      5 seconds ago   130MB

Run a Container

docker run -d -p 3000:3000 --name my-app my-node-app

Flags explained:

  • -d — Detached mode. Runs the container in the background and prints its ID.
  • -p 3000:3000 — Maps port 3000 on your host to port 3000 in the container. Format: -p HOST_PORT:CONTAINER_PORT.
  • --name my-app — Gives the container a human-readable name.
  • my-node-app — The image to run.

Now visit http://localhost:3000 in your browser. You should see "Hello from Docker!".

Useful Container Commands

# List running containers
docker ps
 
# List all containers (including stopped)
docker ps -a
 
# View logs from a container
docker logs my-app
 
# Stream logs (follow mode)
docker logs -f my-app
 
# Execute a command inside a running container
docker exec -it my-app sh
 
# Stop a container
docker stop my-app
 
# Remove a container
docker rm my-app
 
# Remove an image
docker rmi my-node-app

The docker exec -it my-app sh command is especially useful for debugging — it opens an interactive shell inside the running container so you can inspect the filesystem, environment variables, and running processes.


The .dockerignore File

Just like .gitignore tells Git which files to skip, .dockerignore tells Docker which files to exclude from the build context.

Create a .dockerignore file:

node_modules
npm-debug.log
.git
.env
*.md

This is important for two reasons:

  1. Speed — The build context is sent to the Docker daemon before the build starts. Excluding node_modules (which can be huge) speeds this up significantly.
  2. Security — Prevents accidentally copying .env files or credentials into your image.

Docker Compose: Multi-Container Applications

Real applications rarely consist of a single service. A typical web app might have a Node.js backend, a PostgreSQL database, and a Redis cache. Managing these individually with docker run commands becomes unwieldy fast.

Docker Compose lets you define and run multi-container applications with a single YAML file.

Example: Node.js + PostgreSQL

Create a docker-compose.yml in your project root:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/mydb
    depends_on:
      - db
    volumes:
      - .:/app
      - /app/node_modules
 
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
 
volumes:
  postgres_data:

Key things to note:

  • services — Defines each container. Here we have app (our Node.js server) and db (PostgreSQL).
  • build: . — Tells Compose to build the image from the Dockerfile in the current directory, rather than pulling a pre-built image.
  • depends_on — Ensures db starts before app. Note: this only waits for the container to start, not for Postgres to be ready. For production, use a health check or a wait script.
  • environment — Sets environment variables inside the container.
  • volumes — The postgres_data named volume persists the database across container restarts. The .:/app mount syncs your local code into the container (useful for development hot-reloading).
  • db:5432 — In the DATABASE_URL, db is the hostname. Compose automatically creates a network where services can reach each other by their service name.
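Inside the app container, the connection string Compose injects can be parsed with Node's built-in WHATWG URL class — a sketch, assuming the DATABASE_URL format used in the Compose file above:

```javascript
// Parse the connection string injected via DATABASE_URL.
// Falls back to the value from docker-compose.yml for illustration.
const raw = process.env.DATABASE_URL
  ?? 'postgres://user:password@db:5432/mydb';

const dbUrl = new URL(raw);

const config = {
  host: dbUrl.hostname,             // "db" — the Compose service name
  port: Number(dbUrl.port),         // 5432
  user: dbUrl.username,             // "user"
  password: dbUrl.password,         // "password"
  database: dbUrl.pathname.slice(1) // "mydb" — strip the leading "/"
};

console.log(config.host, config.port, config.database);
```

The key point is the hostname: `db` resolves only on the Compose network, which is exactly why the same string fails when you run the app outside Docker.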

Running with Compose

# Start all services (build if needed)
docker compose up
 
# Start in detached mode
docker compose up -d
 
# Stop all services
docker compose down
 
# Stop and remove volumes (wipes the database!)
docker compose down -v
 
# Rebuild images before starting
docker compose up --build
 
# View logs for a specific service
docker compose logs app
 
# Open an interactive shell inside a running service
docker compose exec app sh

Best Practices

1. Use Specific Base Image Tags

Avoid FROM node:latest. The latest tag can change, breaking your builds unexpectedly. Pin to a specific version:

FROM node:20.12-alpine3.19

2. Leverage Layer Caching

Order your Dockerfile instructions from least-to-most frequently changed. Copy dependency files and install them before copying application code. This way, npm install only re-runs when package.json changes.
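A sketch of why ordering matters — the cache-unfriendly version re-runs npm install on every code change, while the cache-friendly version re-runs it only when the dependency files change:

```dockerfile
# Cache-unfriendly: any code change invalidates the npm install layer
# COPY . .
# RUN npm install

# Cache-friendly: npm install re-runs only when package*.json change
COPY package*.json ./
RUN npm install
COPY . .
```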

3. Run as a Non-Root User

By default, Docker containers run as root, which is a security risk. Create and switch to a non-root user:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
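The addgroup/adduser flags above are BusyBox syntax specific to Alpine-based images. On Debian-based images such as node:20-slim, a roughly equivalent sketch uses the standard utilities:

```dockerfile
RUN groupadd --system appgroup && useradd --system --gid appgroup appuser
USER appuser
```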

4. Use Multi-Stage Builds

For compiled languages (Go, Java, TypeScript), use multi-stage builds to keep your final image lean — build in one stage, copy only the output to the final stage:

# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
 
# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]

5. Never Store Secrets in Images

Don't COPY .env into your image or hardcode credentials in your Dockerfile. Use environment variables passed at runtime, Docker secrets, or an external secrets manager like HashiCorp Vault.
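One way to keep credentials out of the image is to inject them at runtime from a file that is listed in both .dockerignore and .gitignore — a sketch using Compose's env_file option:

```yaml
# docker-compose.yml (fragment)
services:
  app:
    build: .
    env_file:
      - .env   # read at container start; never baked into the image
```

The plain-CLI equivalent is `docker run --env-file .env my-node-app`. Either way, the values exist only in the running container, not in the image layers.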

6. Scan Your Images for Vulnerabilities

Use docker scout (built into Docker Desktop) or tools like Trivy to scan images for known CVEs before deploying.

docker scout cves my-node-app

Common Gotchas

Container exits immediately after starting? The process defined in CMD probably exited. Check logs with docker logs <container>. Make sure your app doesn't exit on startup (e.g., no uncaught exceptions).

Cannot connect to the Docker daemon? Docker Desktop isn't running. Start it from your applications.

Port already in use? Something on your host is already using the port you're mapping to. Change the host-side port: -p 3001:3000.

Changes to code not reflected? If you're not using a volume mount, you need to rebuild the image and restart the container. Add a volume in Compose for development (.:/app).

Postgres container healthy but app can't connect? depends_on only waits for the container to start, not for Postgres to be ready to accept connections. Add a health check or use a wait wrapper like wait-for-it.sh.
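With Compose, the readiness problem can also be handled declaratively — a sketch using a Postgres health check (pg_isready ships with the postgres image) and a conditional depends_on:

```yaml
services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
```

With `condition: service_healthy`, Compose delays starting app until the health check passes, rather than merely until the db container exists.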


Further Learning

You've covered the fundamentals. Here's where to go next:

  • Official documentation — the Docker docs cover every CLI command, Dockerfile instruction, and Compose option in depth.
  • Going deeper on containers — learn how images, layers, namespaces, and the container runtime work under the hood.
  • Orchestration — once you're comfortable with Docker, the natural next step is Kubernetes, for orchestrating containers at scale.
  • Registries — publishing and pulling images from Docker Hub and private registries.
  • Security — image scanning, least-privilege containers, and secrets management.


Recap

Here's what we covered:

  1. The problem — Environment inconsistency causes bugs, wasted time, and deployment failures.
  2. What Docker is — A containerization platform that packages apps and their dependencies into portable, isolated units.
  3. Core concepts — Images (blueprints), containers (running instances), volumes (persistence), networks (communication).
  4. Dockerfile — The recipe for building an image, with layer caching as a core optimization.
  5. Docker CLI — build, run, ps, logs, exec, stop, rm.
  6. Docker Compose — Multi-container orchestration with a YAML file.
  7. Best practices — Pin tags, leverage caching, don't run as root, use multi-stage builds, never bake in secrets.

Docker has fundamentally changed how software is built, shipped, and run. Once you internalize the container mental model, you'll find it hard to imagine going back to bare-metal deployments.

Now go containerize something.
