
How to update a Docker container without losing data?

Updating a Docker container without losing data is the everyday workflow that most often trips up new users. The trick is realizing that the container is meant to be disposable; the volume is the part you protect.

Theory

TL;DR

  • A container's writable layer is disposable. State that matters lives in volumes (or external services).
  • Update flow: stop old → rm old → run new with same volumes and new image tag.
  • For Compose: docker compose pull && docker compose up -d recreates only changed services.
  • For zero-downtime: blue-green deployment, or a rolling update (Swarm/K8s) using the --update-* flags with health gating.
  • DB version bumps need extra care: Postgres 16 → 17 requires pg_upgrade or pg_dumpall. Volume preservation alone is not enough across major versions.

The simple update pattern

```bash
# Original
docker run -d --name api \
  -v api_data:/var/lib/myapp \
  -p 80:80 \
  --restart=unless-stopped \
  myapp:1.0

# Update to 1.1
docker pull myapp:1.1
docker stop api && docker rm api
docker run -d --name api \
  -v api_data:/var/lib/myapp \
  -p 80:80 \
  --restart=unless-stopped \
  myapp:1.1
# SAME volume (api_data), NEW image (myapp:1.1)
```

The data in api_data survives because volumes are separate from containers. The container is just the "runtime" that points at the data.
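A quick way to convince yourself of this is a throwaway volume (the volume name is illustrative; the guard makes the block a no-op on machines without a Docker daemon):

```shell
if command -v docker >/dev/null 2>&1; then
  # Write a marker through a first, disposable container...
  docker run --rm -v demo_data:/data alpine sh -c 'echo v1 > /data/marker'
  # ...that container is gone, but a brand-new container still sees the data:
  docker run --rm -v demo_data:/data alpine cat /data/marker
  docker volume rm demo_data >/dev/null
fi
```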

Compose: the cleaner version

```yaml
services:
  api:
    image: myapp:1.0
    volumes:
      - api_data:/var/lib/myapp
    ports: ["80:80"]
    restart: unless-stopped

volumes:
  api_data:
```

```bash
# Update workflow
sed -i 's/myapp:1.0/myapp:1.1/' compose.yaml
docker compose pull
docker compose up -d
# Compose detects the image change, stops + recreates ONLY api,
# leaves the volume untouched, leaves other services running.
```

No manual stop/rm/run. Compose handles the recreation while preserving volumes.

What survives, what does not

On `docker rm`:

| What | Survives? |
| --- | --- |
| Named volumes | YES (separate from container) |
| Anonymous volumes (`docker rm` without `-v`) | YES |
| Anonymous volumes (`docker rm -v`, or `docker run --rm`) | NO (deleted) |
| Bind mounts (host paths) | YES (live on host) |
| Container's writable layer | NO (deleted with container) |
| Container's network alias | NO (recreated for new container) |
| Logs in `/var/lib/docker/containers/<id>/` | NO (deleted) |

Rule: anything in a volume or bind mount survives. Anything else dies with the container.

The DB version-bump caveat

Keeping the volume across docker rm works for compatible image versions. Postgres 16 → 16.5 uses the same on-disk data format, so the same volume works. Postgres 16 → 17 uses a different on-disk format, so the old volume cannot simply be reattached.

```bash
# This will fail:
docker stop pg && docker rm pg
docker run -d --name pg -v pgdata:/var/lib/postgresql/data postgres:17
# postgres:17 sees /var/lib/postgresql/data initialized by version 16
# and refuses to start.
```

For major DB version upgrades:

  1. Backup first. Always.
  2. Use pg_dumpall from old container, restore into new container with fresh volume.
  3. OR use pg_upgrade in a transitional container that has both versions.
  4. Test in staging before production.

Same applies to MySQL, Mongo, Elasticsearch — major version bumps need migration logic, not just a tag change.
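Before bumping a database tag, it helps to check which major version the volume was initialized with. For Postgres, the data directory records this in a PG_VERSION file. A sketch (volume name is illustrative; the guard makes it a no-op without a Docker daemon):

```shell
if command -v docker >/dev/null 2>&1; then
  # Prints the major version, e.g. "16"; if the new image's major
  # version differs, plan a migration instead of a plain tag bump.
  docker run --rm -v pgdata:/data alpine cat /data/PG_VERSION
fi
```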

Zero-downtime updates

The basic stop+rm+run flow has a brief outage (a few seconds while the new container starts). For zero-downtime:

Compose with health gating

```yaml
services:
  api:
    image: myapp:1.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 5s
```

```bash
# Rolling-style update with two replicas behind a load balancer
docker compose up -d --no-deps --scale api=2 api   # spin up new alongside old
# Wait for health
# Remove old replicas
```

For reliable zero-downtime in Compose, you usually add a reverse proxy (Traefik/nginx) that watches healthchecks.
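A hedged sketch of that setup using Traefik's Docker provider (router name, hostname, and port are all illustrative; this leans on Traefik not routing to containers whose healthcheck reports unhealthy):

```yaml
services:
  proxy:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports: ["80:80"]
    volumes: ["/var/run/docker.sock:/var/run/docker.sock:ro"]
  api:
    image: myapp:1.0
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.rule=Host(`api.localhost`)
      - traefik.http.services.api.loadbalancer.server.port=3000
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 5s
```

The idea is that a newly scaled-up api container only joins the proxy's backend pool once its healthcheck passes, so requests keep flowing to the old one in the meantime.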

Swarm rolling update

```bash
docker service update --image myapp:1.1 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action rollback \
  api
# Updates one replica at a time, waits 10s, checks health,
# continues or rolls back.
```

Swarm handles rolling update natively. If health checks fail, automatic rollback.

Blue-green

Run green (new) alongside blue (old). Switch the load balancer when green is healthy. Tear down blue.

```bash
# Old: myapp:1.0 listening on 8081, LB routes here
# Start new: myapp:1.1 on 8082
docker run -d --name api-green -p 8082:3000 -v api_data:/data myapp:1.1

# Wait for /health to be 200
# Update load balancer config: route to 8082
# Drain old:
docker stop api-blue; docker rm api-blue
```

Double resources during transition; instant cutover; easy rollback (flip the LB back).
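The "wait for /health" step above is just a bounded retry loop. A runnable sketch with the probe stubbed out (in real use, probe would be `curl -fs http://localhost:8082/health`; here it fails twice and then succeeds so the gate logic runs anywhere):

```shell
attempts=0
probe() {
  # Stub for the real health check: healthy on the third try.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

tries=0
until probe; do
  tries=$((tries + 1))
  if [ "$tries" -ge 30 ]; then
    echo "green never became healthy; keep routing to blue" >&2
    exit 1
  fi
  sleep 1
done
echo "green healthy; safe to switch the load balancer"
```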

Common mistakes

Forgetting the same volume mount on the new container

```bash
# WRONG: new container has no volume; data "missing"
docker run -d --name api myapp:1.1

# RIGHT: same volume mount as before
docker run -d --name api -v api_data:/var/lib/myapp myapp:1.1
```

The data is still in the volume; the new container just is not connected to it. Re-run with the right -v.

Using anonymous volumes

bash
docker run -d --name api -v /var/lib/myapp myapp:1.0 # Anonymous volume (auto-named UUID) docker rm -fv api # -v wipes anonymous volume → DATA LOSS

Always use named volumes (-v api_data:/var/lib/myapp). Named volumes survive docker rm -v; anonymous volumes are deleted by it.
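To check whether a running container is using anonymous volumes before you remove it, something like this works (container name is illustrative; guarded so it is a no-op without a Docker daemon and an `api` container):

```shell
if command -v docker >/dev/null 2>&1 && docker inspect api >/dev/null 2>&1; then
  # Anonymous volumes show up with long random names; named ones keep their name.
  docker inspect api --format \
    '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{println}}{{end}}'
  # Volumes attached to no container at all are deletion candidates:
  docker volume ls --filter dangling=true
fi
```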

Major DB version upgrade without migration

Covered above. The fix is pg_dump+restore or pg_upgrade, not just bumping the tag.

Updating during peak traffic without health gating

```bash
# WRONG during prod traffic
docker stop api && docker rm api
# 5-30 second window of 502s
docker run -d --name api -v api_data:/data myapp:1.1
```

For user-facing services, use Compose's recreate, Swarm rolling update, or blue-green. The basic stop+run is fine for off-hours updates only.

Forgetting the network

```bash
# Old container was on a custom network
docker run -d --name api --network appnet -v api_data:/data myapp:1.0

# Recreated without --network → on default bridge → cannot reach db, redis, etc.
docker run -d --name api -v api_data:/data myapp:1.1
# api now isolated from the rest of the stack.
```

Reproduce ALL flags from the original docker run, not just the volume.
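Rather than reproducing flags from memory, you can read them back from the old container before removing it. A sketch (container name is illustrative; guarded so it is a no-op without a Docker daemon or an `api` container):

```shell
if command -v docker >/dev/null 2>&1 && docker inspect api >/dev/null 2>&1; then
  docker inspect api --format 'Image:    {{.Config.Image}}'
  docker inspect api --format 'Restart:  {{.HostConfig.RestartPolicy.Name}}'
  docker inspect api --format \
    'Mounts:   {{range .Mounts}}{{.Source}} -> {{.Destination}}  {{end}}'
  docker inspect api --format \
    'Networks: {{range $name, $cfg := .NetworkSettings.Networks}}{{$name}} {{end}}'
fi
```

Saving the full `docker inspect api` output to a file before the update also gives you everything needed for a manual rollback.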

Real-world usage

  • Single-container service: docker stop && docker rm && docker run with correct -v. Brief outage, simple.
  • Compose stacks: docker compose pull && docker compose up -d. Compose computes the diff and recreates only what changed.
  • Production with traffic: Compose + reverse proxy with health checks, OR Swarm/K8s with native rolling update.
  • DB upgrades: dump → fresh volume → restore. Schedule downtime, test in staging.

Follow-up questions

Q: What if the new image needs a different volume mount path?


A: Mount the same volume at the new path: -v api_data:/new/path/in/v1.1/myapp. Or run a one-time cp job inside a temporary container to move data within the volume.

Q: Does docker update change the image?


A: No. docker update modifies runtime parameters (memory, CPU, restart policy) of an existing container WITHOUT changing the image. To use a new image, you must recreate.

Q: Is docker compose restart enough to apply image changes?


A: No. restart just stops+starts the SAME container. To apply image changes, use docker compose up -d.

Q: How do I roll back a bad update?


A: With Compose, edit back to the old tag and docker compose up -d. With Swarm, docker service rollback <name>. With manual docker run, you must save the old container's command first, then re-run it.

Q: (Senior) How do you handle a stateful update that requires a schema migration?


A: Three-step pattern: (1) deploy migration job (separate container, runs against the volume, exits 0 on success). (2) deploy new app version that expects the new schema. (3) keep migration idempotent so reruns are safe. In Compose, use a one-shot migrate service with restart: "no" and depends_on: db: service_healthy. In K8s, use a Job + readiness gate on the deployment. The hard part is making migrations backward-compatible (old app must keep working during the rolling update); that requires expand-then-contract schema changes.
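That one-shot migrate pattern can be sketched in Compose like this (image names and the migrate command are illustrative; `service_completed_successfully` requires a reasonably recent Compose):

```yaml
services:
  db:
    image: postgres:16
    environment: ["POSTGRES_PASSWORD=devpass"]
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
  migrate:
    image: myorg/api-migrations:1.1   # hypothetical one-shot image
    command: ["./migrate", "up"]      # must be idempotent
    restart: "no"
    depends_on:
      db:
        condition: service_healthy
  api:
    image: myorg/api:1.1
    depends_on:
      migrate:
        condition: service_completed_successfully
```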

Examples

Compose update with one bumped service

```yaml
services:
  web:
    image: nginx:1.27-alpine
  api:
    image: myorg/api:1.0   # bump this
    volumes: ["api_data:/var/lib/myapp"]
    depends_on: [db]
  db:
    image: postgres:16
    volumes: ["pgdata:/var/lib/postgresql/data"]

volumes:
  api_data:
  pgdata:
```

```bash
# Edit compose.yaml: api becomes myorg/api:1.1
sed -i 's|myorg/api:1.0|myorg/api:1.1|' compose.yaml
docker compose pull
docker compose up -d
# [+] Running 3/3
#  ✔ Container web  Running    (unchanged)
#  ✔ Container db   Running    (unchanged)
#  ✔ Container api  Recreated  (image changed)
```

Only api recreated. db and web untouched. api_data survives.

Postgres major version upgrade

```bash
# Step 1: backup
$ docker exec pg pg_dumpall -U postgres > backup.sql

# Step 2: stop and remove the old container
#         (the old `pgdata` volume stays behind as a fallback)
$ docker stop pg && docker rm pg

# Step 3: start a fresh postgres:17 with a NEW, empty volume
$ docker volume create pgdata-pg17
$ docker run -d --name pg \
    -v pgdata-pg17:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD=devpass \
    postgres:17

# Step 4: restore the dump into the new server
$ cat backup.sql | docker exec -i pg psql -U postgres

# Step 5: verify the data, then delete the old `pgdata` volume
$ docker volume rm pgdata
```

Major version bumps cannot just "point at the volume". Dump + restore is the safe path. Alternatively, run pg_upgrade from an image that ships both major versions' binaries (the stock postgres image only ships one).

Swarm rolling update with rollback on failure

```bash
docker service update \
  --image myorg/api:1.1 \
  --update-parallelism 1 \
  --update-delay 30s \
  --update-failure-action rollback \
  --update-monitor 30s \
  --rollback-parallelism 2 \
  api
```

One replica at a time, 30s monitor, automatic rollback if health checks fail. Production-ready zero-downtime update for Swarm.
