
What is the Docker container lifecycle?

The Docker container lifecycle describes how a container moves between states from creation to removal. Understanding the diagram (and what each command does to it) is what separates someone who runs containers from someone who debugs them.

Theory

TL;DR

  • Six states: created, running, paused, restarting, exited, dead — plus removed, the implicit terminal state.
  • docker run = docker create + docker start (created → running in one shot).
  • docker stop sends SIGTERM, waits 10 seconds (by default), then SIGKILL. The container moves to exited.
  • An exited container is still on disk. docker start revives it; docker rm finally deletes it.
  • The exit code (visible in docker ps -a's Status column) tells you why: 0 = clean exit, 137 = SIGKILL, 143 = SIGTERM, 1 = error from the app.
  • Restart policy (--restart) automates parts of this: unless-stopped means "if it exits, the daemon restarts it (unless you manually stopped it)."

State diagram

 docker create                  docker start
       │                             │
       ▼                             ▼
 ┌─────────┐    docker run      ┌─────────┐   docker stop / kill    ┌────────┐
 │ created │ ─────────────────► │ running │ ──────────────────────► │ exited │
 └─────────┘                    └─────────┘   app exits on its own  └────────┘
                                  │   ▲   ▲                          │    │
                     docker pause │   │   │      docker start        │    │ docker rm
                                  │   │   └─────── (revive) ─────────┘    │
                                  ▼   │ docker unpause                    ▼
                              ┌────────┐                            ┌─────────┐
                              │ paused │                            │ removed │
                              └────────┘                            └─────────┘

 ┌────────────┐
 │ restarting │   (transient — the restart policy is firing)
 └────────────┘

 ┌──────┐
 │ dead │   (terminal — the daemon could not clean up)
 └──────┘

Each state, with the commands that put you there

created

The container exists in the daemon's database with config + filesystem prepared, but no process is running.

bash
$ docker create --name foo alpine sleep 3600
2a3b4c5d...
$ docker ps -a --filter name=foo
CONTAINER ID   STATUS    NAMES
2a3b4c5d...    Created   foo

Useful when you want to inspect/edit the config before starting (docker network connect, attaching volumes, setting env). Most people skip this and use docker run directly.

running

The container's main process (PID 1 inside) is alive. CPU is consumed, memory allocated, network active.

bash
$ docker start foo                    # from created
$ docker run -d alpine sleep 3600    # straight to running (skips manual create)

paused

The container's processes are frozen via the kernel's freezer cgroup — they exist in memory but are not scheduled. When unpaused, they resume exactly where they were.

bash
$ docker pause foo
$ docker ps --filter name=foo
STATUS
Up 5 minutes (Paused)
$ docker unpause foo

Rare in practice; sometimes used for snapshotting or quick CPU offloading.
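The same "frozen but still in memory" behavior can be seen at the plain-process level. This sketch uses SIGSTOP/SIGCONT as an analogue only — Docker's pause uses the freezer cgroup, not signals — but the observable effect is the same: the process keeps its memory yet is no longer scheduled.

```shell
# "Pause" an ordinary process with SIGSTOP (analogy, not Docker's mechanism)
sleep 30 &
pid=$!
kill -STOP "$pid"                           # freeze, like docker pause
state=$(ps -o stat= -p "$pid" | tr -d ' ')  # ps state flag
echo "while stopped: state=$state"          # T = stopped, not on the runqueue
kill -CONT "$pid"                           # thaw, like docker unpause
kill -KILL "$pid" 2>/dev/null               # clean up the helper process
```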

exited

The main process has stopped (cleanly or otherwise). Container still exists on disk; its writable layer is intact. Can be restarted.

bash
$ docker stop foo
$ docker ps -a --filter name=foo
STATUS                     NAMES
Exited (0) 3 seconds ago   foo    # 0 = clean exit
$ docker start foo                # back to running

Exit codes you will see in real life:

Code   Meaning
0      Clean exit (your app returned 0)
1      App-level error (your app threw and exited 1)
137    SIGKILL (128+9) — usually docker kill, the OOM killer, or docker stop after the grace period
143    SIGTERM (128+15) — the app was terminated by SIGTERM without handling it
139    SIGSEGV (128+11) — your binary segfaulted
130    SIGINT (128+2) — interactive run interrupted with Ctrl+C
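The 128+signal rule is not Docker-specific — any POSIX shell reports a signal-killed child the same way, so you can reproduce the famous codes without a container:

```shell
# A child killed by signal N exits with status 128+N, same as docker ps shows
sleep 10 &
pid=$!
kill -KILL "$pid"
wait "$pid"
kcode=$?
echo "SIGKILL -> $kcode"   # 137 = 128 + 9

sleep 10 &
pid=$!
kill -TERM "$pid"
wait "$pid"
tcode=$?
echo "SIGTERM -> $tcode"   # 143 = 128 + 15
```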

restarting

Transient state. The restart policy fired and the daemon is starting the container again.

Status: Restarting (1) 4 seconds ago

If you see this for long, your app is in a crash loop (starts β†’ exits non-zero β†’ restart kicks in β†’ repeat). Check docker logs for the cause.

dead

Terminal failure state. Daemon could not move the container to a clean state (corrupt filesystem layer, kernel issue). Rare.

Status: Dead

Usually docker rm -f and recreate. If it keeps happening, look at daemon logs.

removed

Not a daemon state β€” once docker rm runs, the container is gone from the daemon entirely. docker ps -a no longer lists it.

What docker stop actually does

t=0       daemon sends SIGTERM to PID 1 inside the container
t=0..10   your app should catch SIGTERM and shut down cleanly
t=10      if still running, daemon sends SIGKILL → exited

The 10-second grace period is configurable: docker stop -t 30 mycontainer for a longer wait. Apps that ignore SIGTERM get killed hard β€” exit code 137, no chance to flush state. Always trap SIGTERM in production apps.
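What "trap SIGTERM" looks like in practice: a minimal entrypoint-style sketch (file paths are arbitrary). The `sleep 1 & wait` pattern matters — a trap on a foreground sleep only fires after the sleep finishes, while an interrupted `wait` lets the handler run immediately:

```shell
# Write a tiny PID-1-style script that shuts down cleanly on SIGTERM
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
cleanup() { echo "flushing state"; exit 0; }
trap cleanup TERM INT
while :; do sleep 1 & wait $!; done
EOF

sh /tmp/entrypoint.sh > /tmp/entrypoint.log &
pid=$!
sleep 0.3
kill -TERM "$pid"    # what `docker stop` sends at t=0
wait "$pid"
code=$?
echo "exit=$code"    # 0: the trap ran; no SIGKILL, no exit 137
```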

Restart policies as automation

The --restart flag tells the daemon to apply transitions for you:

bash
--restart=no              # default: do nothing on exit
--restart=on-failure      # restart only if exit code != 0
--restart=always          # restart no matter what (even after host reboot)
--restart=unless-stopped  # restart unless YOU manually stopped it

With unless-stopped, the daemon revives crashed containers automatically; a manual docker stop keeps them down (the daemon respects the user's intent).
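The on-failure logic reduces to a small retry loop. This is a toy model only — `fake_app` stands in for the container process, and the real daemon also inserts an increasing backoff delay between restarts:

```shell
# Toy model of --restart=on-failure:5
max=5
attempts=0
fake_app() { return 1; }               # stand-in container: always exits 1

while :; do
  fake_app
  code=$?
  [ "$code" -eq 0 ] && break           # clean exit: policy stays quiet
  attempts=$((attempts + 1))
  [ "$attempts" -ge "$max" ] && break  # retry cap reached: daemon gives up
done
echo "attempts=$attempts last_exit=$code"
```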

Common mistakes

Thinking docker stop deletes the container

bash
$ docker stop web
$ docker run -d --name web nginx:1.27
# error: name already in use

stop only halts the process. The container's name and writable layer are still there. To delete, use docker rm (or docker rm -f, which force-kills and removes in one step).

Letting exited containers pile up

bash
$ docker ps -a | wc -l
247

By default, exited containers stay forever. Either use --rm on docker run (auto-clean on exit) or periodically docker container prune.

Misreading exit code 137 as "daemon killed it"

137 = 128 + 9 (SIGKILL). The cause might be:

  • Your docker kill or docker stop (after timeout)
  • The Linux OOM killer (out-of-memory cgroup limit hit)
  • An external supervisor (systemd, Kubernetes, or another process manager) force-killing the container

Check docker inspect <container> --format '{{.State.OOMKilled}}' β€” true means OOM, not external kill.
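If you have the inspect output saved rather than a live daemon, the flag is easy to pull out with plain shell. The JSON below is a hand-written sample shaped like a `.State` blob, not real daemon output:

```shell
# Extract the OOMKilled flag from a saved inspect blob (sample data)
state_json='{"Status":"exited","OOMKilled":true,"ExitCode":137,"Error":""}'
oom=$(printf '%s' "$state_json" | grep -o '"OOMKilled":[a-z]*' | cut -d: -f2)
echo "oom=$oom"   # true -> the kernel OOM killer, not an external kill
```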

Confusing crash-looping with restart-policy spam

Status: Restarting (1) 2 seconds ago # repeatedly

Restart policies will keep retrying until you stop them. If your app is broken, you get an infinite loop with logs growing fast. docker logs --tail 50 <container> to see what is failing.

Real-world usage

  • CI builds: docker run --rm for the build container. State explicitly designed to be "run, exit, gone".
  • Long-lived services: docker run -d --restart=unless-stopped. The container cycles through running → restarting → running over its life as the app crashes and recovers.
  • Debugging: docker exec -it <name> sh while running, or docker start -ai <name> to revive an exited container with stdin/stdout attached.
  • Cleanup hooks: docker container prune --filter 'until=24h' removes stopped containers created more than 24 hours ago.

Follow-up questions

Q: Is a paused container using CPU?


A: Effectively no β€” its processes are not on the runqueue. It is using memory (the process state is preserved). Pause is closer to "frozen" than "asleep".

Q: What is the difference between docker stop and docker kill?


A: stop is graceful: SIGTERM, 10-second grace, then SIGKILL. kill is immediate: SIGKILL straight away (or whatever signal you specify with --signal). Stop for clean shutdowns; kill for hung processes.

Q: Can I see the lifecycle history of a container?


A: docker inspect <container> --format '{{json .State}}' shows current state details (started/finished timestamps, exit code, OOM flag, error message). For full event history, docker events shows live transitions; some setups log this.

Q: What happens to writable-layer changes when a container exits and restarts?


A: They persist. The writable layer survives until you docker rm. Restarting an exited container picks up exactly where it left off (filesystem-wise; in-memory state is gone, of course).

Q: (Senior) How does Kubernetes' pod lifecycle relate to Docker's?


A: Kubernetes wraps Docker (or any OCI runtime) with its own state machine: Pending → Running → Succeeded/Failed/Unknown. Each container inside a pod still has Docker's container lifecycle underneath. K8s controllers (Deployment, StatefulSet) watch the pod state and replace failed pods; Docker's restart policy is sometimes used too, but K8s usually owns that decision via probes and the pod-level restart policy.

Examples

Walking a container through every state

bash
$ docker create --name demo alpine sleep 60   # Status: Created
$ docker start demo                           # Status: Up 2 seconds
$ docker pause demo                           # Status: Up 5 seconds (Paused)
$ docker unpause demo                         # Status: Up 8 seconds
$ docker stop demo                            # Status: Exited (137) 0 seconds ago
                                              # (sleep ignored SIGTERM; killed after 10s)
$ docker start demo                           # exited → running again
                                              # Status: Up 1 second
$ docker rm -f demo                           # container gone from docker ps -a

The whole graph in seven commands.

Using exit codes to debug

bash
$ docker run --name flake myapp
$ docker ps -a --filter name=flake --format '{{.Status}}'
Exited (137) 12 seconds ago
$ docker inspect flake --format '{{.State.OOMKilled}}'
true
# OOM killer hit. Memory limit (or no limit + greedy app) is the cause.
# vs
$ docker inspect flake --format '{{.State.OOMKilled}} {{.State.Error}}'
false
# Not OOM. Probably an explicit kill or another external cause.

Exit code + OOM flag + error message together usually nail down what happened.

Restart policy in action

bash
# Container that crashes immediately
$ docker run -d --name flapper --restart=on-failure:5 alpine sh -c 'exit 1'
$ docker ps -a --filter name=flapper
STATUS                                  NAMES
Restarting (1) Less than a second ago   flapper
# After 5 attempts, the daemon gives up
$ docker ps -a --filter name=flapper
STATUS                      NAMES
Exited (1) 30 seconds ago   flapper

on-failure:5 caps retries at 5. Without the cap, the daemon would retry forever.
