
Immutable infrastructure with Docker explained

Immutable infrastructure is a deployment philosophy: once a runtime instance is deployed, it is never modified in place. Updates, patches, and config changes happen by building a new instance (a new image, a new VM, a new container) and replacing the old one. Docker enforces this naturally because the image is read-only and containers are cheap to recreate. The result is reproducible, predictable infrastructure.

Theory

TL;DR

  • Mutable: patch a running server in place (SSH, apt upgrade, edit configs). Each server's state diverges over time. "Snowflake servers."
  • Immutable: build the new artifact (image), deploy, retire old. Each instance is identical. "Phoenix servers."
  • Docker makes immutable easy: image = artifact, container = instance, redeploy = stop+start.
  • State must be externalized: DB on a managed service or persistent volume, files in object storage, sessions in Redis. The container itself is disposable.
  • Rollback is just "deploy the previous tag."
  • Combines naturally with blue-green, canary, rolling updates.

The mental shift

Mutable mindset:

  • "Server prod-1 has been running for 3 years. We have applied 47 patches."
  • Configs live on disk; everyone has SSH access; over time, no two servers are identical.
  • New version = SSH in, run installer, restart service.
  • Drift between staging and production is a constant fight.

Immutable mindset:

  • "This release is image tag 1.7.3. Every running container is from the same digest."
  • Configs come from environment variables or mounted files at start time.
  • New version = new image, new container, same image runs in dev/staging/prod.
  • Drift is impossible because nothing modifies the running instance.

What goes outside the container

For immutable to work, anything that needs to change at runtime must live outside:

  • Application binary, libs, runtime: inside the image (immutable).
  • Static configs that vary by env: env vars or mounted files.
  • Secrets: mounted from a secret manager (Vault, AWS Secrets Manager, Docker Secrets).
  • User data, DB state: external DB / managed service / persistent volume.
  • Uploaded files: object storage (S3) or shared volume.
  • Logs: streamed to a log aggregator (Loki, ELK, CloudWatch).
  • Sessions: Redis, JWT, or signed cookies; never in-memory tied to a single container.

If any of these is on the container's writable layer, immutable is broken: replacing the container loses data.
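One way to enforce this discipline is to make the container's filesystem read-only, so any accidental write fails immediately instead of silently accumulating state. A minimal Compose sketch (the service name and tmpfs path are illustrative):

```yaml
services:
  app:
    image: myorg/app:1.0
    read_only: true      # root filesystem is immutable at runtime
    tmpfs:
      - /tmp             # scratch space that is explicitly ephemeral
```

With this in place, a write anywhere outside /tmp is an error at runtime, which surfaces hidden state dependencies early.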

Why immutable wins

  1. Reproducibility. A given image tag, run anywhere, behaves the same way. Dev = staging = prod.
  2. Easy rollback. "Deploy 1.7.2 again." One command. No restoration of system state.
  3. No config drift. Every container of the same tag is bit-identical.
  4. Auditability. What is running? docker inspect shows the image digest. Trace it back to a git SHA via build labels.
  5. Forces good hygiene. You cannot SSH in and "just fix it" — every fix goes through the image build, which makes it permanent.
  6. Pairs with deployment patterns. Blue-green, canary, rolling — all assume identical replicas you can swap.
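Point 4 can be made concrete with build labels. A hypothetical Dockerfile fragment, assuming CI passes the commit as a build arg:

```dockerfile
# CI invokes: docker build --build-arg GIT_SHA=$(git rev-parse HEAD) ...
ARG GIT_SHA=unknown
LABEL org.opencontainers.image.revision=$GIT_SHA
```

Later, `docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.revision" }}' <image>` traces any running container back to the exact commit.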

Trade-offs

  • Cost of state externalization. Setting up Postgres, Redis, S3, etc. is more upfront work than slapping data on the local disk.
  • Build pipeline must be solid. If your CI is flaky, every change costs you.
  • Image size. A 2 GB image is fine in mutable land but slows immutable redeploys. Optimize image size aggressively.
  • Cold-start cost. A new container starts cold; warm caches are gone. Mitigate with startup warmup or readiness probes.
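The readiness-probe mitigation can be sketched with a Compose healthcheck, so a replacement container only receives traffic after its warmup completes (the port and /healthz path are assumptions):

```yaml
services:
  app:
    image: myorg/app:1.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 30s   # grace period while caches warm up
```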

Examples

Mutable vs Immutable: same change

Fix: bump the API's max body size from 1 MB to 5 MB.

Mutable workflow:

bash
ssh prod-1 "sudo vi /etc/api/config.yaml"   # edit max_body_size: 5MB
ssh prod-1 "sudo systemctl restart api"
# Repeat on prod-2, prod-3...
# Forget prod-4. Now prod-4 has a 1 MB limit. Drift.

Immutable workflow:

bash
# Edit config in repo
git checkout -b bump-body-size
vi config/api.yaml        # max_body_size: 5MB
git commit -am "Bump body size" && git push

# CI builds image myorg/api:1.7.4

# Deploy:
kubectl set image deployment/api api=myorg/api:1.7.4
# Or:
docker compose pull && docker compose up -d

# Every container picks up the new image. No drift possible.

Externalizing state

Good:

yaml
# docker-compose.yaml
services:
  app:
    image: myorg/app:1.0
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
      S3_BUCKET: my-uploads
# Note: no "db" service. DB is RDS/Cloud SQL/etc.

Bad (state inside container):

yaml
services:
  app:
    image: myorg/app:1.0
    # No volumes, no external DB: state lives on the writable layer.
    # If we lose this container, we lose data!

Immutable + blue-green

bash
# Image v1 is running
docker run -d --name app-blue myorg/app:1.0

# Build and push v2
docker build -t myorg/app:2.0 .
docker push myorg/app:2.0

# Bring up green from the new image
docker run -d --name app-green myorg/app:2.0

# Flip traffic via the load balancer
# Confirm v2 is healthy

# Remove blue
docker stop app-blue && docker rm app-blue

Blue and green are both immutable: blue runs 1.0, green runs 2.0, neither is mutated mid-flight.

Immutable + canary

bash
# 9 copies of v1, 1 copy of v2 — load balancer sends 10% to v2
# Monitor metrics. If good, gradually replace more v1 with v2.

Each copy is immutable; the rollout is a series of replacements, not in-place upgrades.

Immutable + rolling update (Swarm)

bash
docker service update --image myorg/app:2.0 myapp
# Swarm replaces tasks one at a time; each task is a fresh container from the new image.

Pinning by digest for true immutability

yaml
image: myorg/app@sha256:abc123def456...

A tag like myorg/app:1.0 can be re-pushed (someone could overwrite it). A digest cannot. For maximum reproducibility (and security), pin by digest in production.

bash
# Find the repo digest after push (note: .Id is the local image ID, not the registry digest)
docker inspect --format='{{index .RepoDigests 0}}' myorg/app:1.0
# myorg/app@sha256:abc...

Build systems can lock the digest into the deploy manifest:

yaml
image: myorg/app:1.0@sha256:abc123...
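A deploy pipeline can do this pinning mechanically. A minimal shell sketch, assuming the digest has already been captured from `docker inspect` (the value below is a placeholder, not a real digest):

```shell
#!/bin/sh
set -eu
# In a real pipeline this would come from:
#   docker inspect --format '{{index .RepoDigests 0}}' myorg/app:1.0
DIGEST="sha256:abc123def456"

# Start from the tag-only manifest line...
printf 'image: myorg/app:1.0\n' > deploy.yaml

# ...and lock it to the digest before deploying.
sed -i "s|image: myorg/app:1.0$|image: myorg/app:1.0@${DIGEST}|" deploy.yaml
cat deploy.yaml
```

The manifest now reads `image: myorg/app:1.0@sha256:abc123def456` — the tag stays for readability, but the digest is what the runtime resolves.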

Real-world usage

  • Microservices on Kubernetes: every Pod is a container from a specific image; no kubectl exec to fix bugs in prod.
  • Serverless / containers-as-a-service (Cloud Run, Fargate): platform enforces immutability — you cannot SSH in.
  • Compose-managed dev environments: redeploy on every change; the container is throwaway.
  • CI build agents: each job runs in a fresh container.
  • Hardened production environments: container images are signed and scanned, deployed via GitOps (Argo CD, Flux). The whole pipeline is immutable end-to-end.

Anti-patterns to avoid

SSH-ing into running containers to fix things

bash
docker exec -it api bash
# vi /etc/api/config.yaml   ← BAD

The fix lives until the container restarts, then is gone. Worse, it diverges from your image. Push the fix into the repo, rebuild, redeploy.

Storing data in the container's writable layer

When the container is replaced, that data is gone. Always use volumes or external services.
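A minimal Compose sketch of the named-volume approach (the volume name and mount path are illustrative):

```yaml
services:
  app:
    image: myorg/app:1.0
    volumes:
      - uploads:/var/lib/app/uploads   # survives container replacement

volumes:
  uploads:                             # named volume, managed by Docker
```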

Re-using the same tag for new builds

bash
docker build -t myorg/app:latest .
docker push myorg/app:latest
# Now what is ":latest"? Yesterday's? Today's?

Use meaningful tags (semver, git SHA, timestamp). Reserve latest for dev convenience; never deploy from latest in prod.
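A hypothetical tagging scheme combining semver and a UTC build timestamp, as a shell sketch:

```shell
#!/bin/sh
set -eu
VERSION="1.7.4"                      # from the release branch / changelog
STAMP="$(date -u +%Y%m%dT%H%M%SZ)"   # build time, UTC
TAG="myorg/app:${VERSION}-${STAMP}"
echo "$TAG"
```

Each build gets a unique, sortable tag (e.g. `myorg/app:1.7.4-20240101T120000Z`), so "what is running?" always has an unambiguous answer.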

Treating the host as immutable but containers as mutable

If you docker exec into containers to patch them, you have not gained much. The whole stack should follow the discipline.

Common mistakes

Forgetting that volumes break the model

A volume is mutable state. That is fine — it is the externalized data. But understand: when you redeploy, the new container reuses the same volume. If a deploy migration corrupts data, the next deploy inherits the corruption. Treat volumes (and DB schemas) with care.

Configs baked into the image

dockerfile
COPY config/prod.yaml /etc/api/config.yaml

Now the same image is not portable across environments. Mount or env-inject configs at run time.

Long-lived containers with hot-reload

If your dev workflow is docker exec api npm run reload, you have made the container mutable. For dev, that is fine. For prod, never.

Follow-up questions

Q: Is immutable infrastructure only achievable with containers?


A: No. AMIs, Packer-built images, and Terraform-recreated servers all enable it. Containers make it cheaper because rebuild + redeploy is fast.

Q: What about secrets that change without redeploy?


A: Mount them from a secrets manager (Vault, AWS, GCP). The container reads them on startup or refreshes them periodically. The image itself contains no secrets.
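With Docker Swarm, for example, a secret can be mounted without ever touching the image. A minimal Compose sketch (the secret name is illustrative):

```yaml
services:
  app:
    image: myorg/app:1.0
    secrets:
      - db_password      # appears at /run/secrets/db_password in the container

secrets:
  db_password:
    external: true       # created out-of-band: docker secret create db_password -
```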

Q: Does immutable mean I can never restart a container?


A: A restart is fine — it is not modifying the image, just rerunning it. "Immutable" refers to the image and on-disk filesystem of the running container, not its lifecycle.

Q: (Senior) How do you reason about long-lived in-memory state (cache warmups, leader-election) under immutable?


A: Two strategies. (1) Make state externally durable: leader-election via a shared service (etcd, ZooKeeper); cache backed by Redis. (2) Embrace cold starts: each new container warms up from cold. Combine with rolling deploys so traffic shifts gradually, no thundering herd. Real systems use both: warm caches when it matters, externalize when correctness depends on it.

Q: (Senior) How does immutable infrastructure interact with regulated environments (PCI, HIPAA)?


A: Beautifully. Auditors love it. Every running version maps back to a signed image (cosign/Notary), which maps back to a CI build, which maps back to a git commit. Provenance is end-to-end. Patching is a code change with a PR, review, and CI artifact, not someone SSH-ing into prod and editing a config. Most compliance frameworks now explicitly favor immutable + GitOps as a stronger control surface than mutable patch-management.
