docker stop vs docker kill: what is the difference?

docker stop and docker kill are both ways to halt a running container, but they differ in how aggressive they are. The choice between them is mostly about whether the app inside needs time to shut down cleanly.

Theory

TL;DR

  • docker stop = graceful. Sends SIGTERM, waits N seconds (default 10), sends SIGKILL if still running.
  • docker kill = immediate. Sends SIGKILL by default, or any signal with --signal.
  • Both ultimately make the container exit. Difference is whether the app gets a chance to clean up.
  • For databases, services with state, anything with open connections — use stop.
  • For hung processes, app with unresponsive PID 1, debugging — kill.
  • docker rm -f = docker kill + docker rm in one step.

What docker stop does

t=0     daemon sends SIGTERM to PID 1 inside the container
t=0..N  the app should handle SIGTERM:
          - finish in-flight requests
          - flush logs / WAL / cache
          - close DB connections
          - exit cleanly with code 0
t=N     if still running, daemon sends SIGKILL
        (where N = --time, default 10 seconds)
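The TERM-then-KILL sequence can be simulated with plain shell — a sketch of the daemon's behavior, not Docker internals (the 2-second grace period and the stubborn loop are illustrative):

```shell
#!/bin/sh
# Simulate docker stop's escalation against a local process.

GRACE=2

# A process that ignores SIGTERM, like a hung or badly behaved app.
sh -c 'trap "" TERM; while true; do sleep 1; done' &
PID=$!

kill -TERM "$PID"              # t=0: send SIGTERM
sleep "$GRACE"                 # wait out the grace period
if kill -0 "$PID" 2>/dev/null; then
    kill -KILL "$PID"          # t=N: still alive, escalate to SIGKILL
fi
wait "$PID"
STATUS=$?
echo "exit status: $STATUS"    # 137 = 128 + 9 (SIGKILL)
```

Because the process ignores SIGTERM, it survives the grace period and ends with exit status 137 — exactly what docker stop reports for an app that never handled the signal.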

The grace period is configurable per stop:

bash
docker stop -t 30 mycontainer   # 30 seconds
docker stop -t 0 mycontainer    # 0 = SIGKILL immediately (same as kill)

Or per container at run time:

bash
docker run --stop-timeout 60 myapp

What docker kill does

bash
docker kill <name>                    # SIGKILL by default
docker kill --signal=SIGTERM <name>   # explicit SIGTERM (no grace period)
docker kill --signal=SIGUSR1 <name>   # any signal
docker kill -s 9 <name>               # numeric signal

The --signal flag is what makes docker kill more flexible than its name suggests. It is also the way to send a non-fatal signal to a running app — for example, telling nginx to reload its config:

bash
docker kill --signal=SIGHUP nginx-container
# nginx reloads its config without restarting.

Despite the name, kill does not always kill: SIGKILL always terminates the container's PID 1, but any other signal only has whatever effect the app's signal handlers give it.
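What a non-fatal signal delivery looks like from the app's side can be simulated without Docker — a sketch of what `docker kill --signal=USR1 app` delivers to PID 1 (the log message and temp file are illustrative):

```shell
#!/bin/sh
# An "app" that reopens its log files on SIGUSR1 and keeps running.

LOG=$(mktemp)
sh -c 'trap "echo reopening log files" USR1; sleep 3' > "$LOG" &
PID=$!

sleep 1                # give the trap time to install
kill -USR1 "$PID"      # the docker kill --signal=USR1 equivalent
wait "$PID"            # the app keeps running until its work is done
MSG=$(cat "$LOG")
echo "$MSG"
rm -f "$LOG"
```

The process handles the signal and continues; nothing terminates. That is exactly the nginx SIGHUP pattern above.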

Side-by-side

                                     docker stop                      docker kill
Default signal                       SIGTERM, then SIGKILL            SIGKILL
Grace period                         yes (default 10s, configurable)  none
Custom signal                        no                               yes (--signal)
Use for clean shutdown               YES                              no
Use for hung processes               no                               YES
Use for sending signals (HUP, USR1)  no                               YES

Why graceful matters

bash
# Postgres under memory pressure in the background
docker stop pg
# SIGTERM → postgres flushes WAL, closes connections, exits 0 → all good
docker kill pg
# SIGKILL → postgres dies mid-write → on next start, WAL replay → potentially slow recovery

For databases: stop, never kill (unless deliberately).

bash
# Web service with in-flight requests
docker stop api
# SIGTERM → api stops accepting new requests, finishes existing ones, exits → no client errors
docker kill api
# SIGKILL → connections drop mid-flight → 502s for the user

For user-facing services: stop. Always.

When kill is right

  • Hung process that ignores SIGTERM and is blocking everything. Stop has been waiting; kill ends it.
  • Sending control signals like SIGHUP (reload), SIGUSR1 (rotate logs), SIGUSR2 (debug dump) to a running app.
  • Tests / disposable containers where you do not care about graceful shutdown.
  • docker rm -f uses kill internally — fine for cleaning up already-stopped or never-mattered containers.

Common mistakes

Killing a database mid-write

bash
# WRONG
docker kill postgres-prod
# Sometimes recovers fine; sometimes corrupted indexes;
# sometimes WAL replay takes 30 minutes.

Use docker stop. Even better: shut down the app cleanly first, then stop the container.

App ignores SIGTERM, then gets SIGKILL after 10s

bash
$ docker stop web
# Looks slow: takes the full 10 seconds; container exits with code 137.

An app that does not handle SIGTERM gets a dirty shutdown after the grace period. Fix the app: trap SIGTERM and exit cleanly. Test with docker stop and confirm exit code 0 (or whatever your clean code is) within a couple of seconds.
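A minimal SIGTERM-aware entrypoint, sketched in POSIX sh (the script is written to a temp file here only so the demo can run it; in a real image this would be your ENTRYPOINT script):

```shell
#!/bin/sh
# Demo: an entrypoint that traps SIGTERM and exits 0 quickly,
# so docker stop never has to escalate to SIGKILL.

ENTRY=$(mktemp)
cat > "$ENTRY" <<'EOF'
#!/bin/sh
cleanup() {
    echo "caught SIGTERM, shutting down cleanly"
    exit 0                # clean exit: no SIGKILL needed
}
trap cleanup TERM
while true; do
    sleep 1 &             # sleep in the background so the trap
    wait $!               # fires immediately, not after sleep ends
done
EOF

sh "$ENTRY" &
PID=$!
sleep 1                   # let it reach the wait
kill -TERM "$PID"         # what docker stop sends at t=0
wait "$PID"
STATUS=$?
echo "exit status: $STATUS"   # 0 = clean shutdown
rm -f "$ENTRY"
```

The `sleep 1 & wait $!` idiom matters: a trap only runs after the current foreground command finishes, so sleeping in the background lets the handler fire immediately instead of up to a second later.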

Using kill for --signal=SIGTERM instead of just stop

bash
docker kill --signal=SIGTERM web
# Sends SIGTERM once; no grace window, no SIGKILL follow-up.

This sends SIGTERM but does NOT follow with SIGKILL after a grace period — if the app ignores the signal, the container keeps running. Usually you want docker stop, which DOES follow up with SIGKILL.

Forgetting that PID 1 has special signal semantics

On Linux, PID 1 gets special treatment: signals with default dispositions are ignored unless the process installs a handler. Since your app is PID 1 in a container (unless you use an init process), you must trap SIGTERM in code:

js
// Node.js example
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});

Without this trap, docker stop waits the full grace period and sends SIGKILL.

Real-world usage

  • Production deploys: docker stop (or stack-level orchestrator equivalent) for graceful shutdown.
  • CI cleanup: docker rm -f $(docker ps -aq) (kill + rm) for tests where graceful does not matter.
  • nginx config reload: docker kill --signal=SIGHUP nginx.
  • Log rotation: docker kill --signal=USR1 myapp to make the app reopen log files.
  • Hung container debug: docker kill after docker stop waited the full grace period unsuccessfully.

Follow-up questions

Q: What is the default grace period and how do I change it?


A: 10 seconds. Override per-stop with docker stop -t N, or per-container with docker run --stop-timeout N, or in Compose with stop_grace_period: 30s.

Q: Does the daemon retry SIGTERM during the grace period?


A: No. SIGTERM is sent once. If the app ignores it, the daemon waits and then SIGKILLs.

Q: What does exit code 137 vs 143 mean?


A: 137 = 128 + 9 (SIGKILL). 143 = 128 + 15 (SIGTERM). The container exited because of the corresponding signal. If you docker stop and see 137, the app did not handle SIGTERM and was SIGKILLed after the grace period.
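The 128 + signal-number convention is easy to verify locally without Docker:

```shell
#!/bin/sh
# Exit codes of signal-terminated processes: 128 + signal number.

KILL_CODE=0
sh -c 'kill -KILL $$' || KILL_CODE=$?   # child SIGKILLs itself (9)
echo "SIGKILL exit code: $KILL_CODE"    # 137 = 128 + 9

TERM_CODE=0
sh -c 'kill -TERM $$' || TERM_CODE=$?   # child SIGTERMs itself, no trap (15)
echo "SIGTERM exit code: $TERM_CODE"    # 143 = 128 + 15
```

Note the child here is not PID 1, so the default SIGTERM disposition (terminate) applies; inside a container, PID 1 would ignore the unhandled SIGTERM instead.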

Q: Can I send a signal that the running process will receive?


A: Yes — docker kill --signal=<SIG> <container> sends the signal to PID 1 inside. Whether the app does anything with it depends on the app's code (signal handlers).

Q: (Senior) How do you ensure your dockerized app handles SIGTERM correctly?


A: Three checks. (1) Code: install signal handlers in your app's entrypoint that trap SIGTERM and start graceful shutdown. (2) Init: avoid wrapping in /bin/sh -c shell form — that makes sh PID 1 and your app a child that does not get the signal directly. Use exec form (CMD ["node", "server.js"]) or tini via --init. (3) Test: docker stop your container and assert it exits within ~1 second with code 0 (or your designated clean code), not after the full grace period.
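The exec-vs-shell-form difference from check (2), as a Dockerfile sketch (server.js and the base behavior described in comments are illustrative):

```dockerfile
# Shell form: /bin/sh -c becomes PID 1. SIGTERM goes to sh, which does
# not forward it to node — docker stop waits the full grace period,
# then SIGKILLs.
# CMD node server.js

# Exec form: node itself is PID 1 and receives SIGTERM directly.
CMD ["node", "server.js"]
```

Alternatively, `docker run --init` injects a tiny init (tini) as PID 1 that forwards signals to your app and reaps zombies.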

Examples

Graceful stop with extended grace period

bash
# Production DB needs time to flush
$ docker stop -t 60 postgres-prod
postgres-prod
# Postgres handles SIGTERM, runs checkpoint, exits cleanly. Took ~12 seconds.

An extended grace period for a stateful service: with the default 10s timeout, the ~12-second shutdown would have been cut short by SIGKILL.

Reload nginx config without restart

bash
$ docker kill --signal=SIGHUP nginx-prod
nginx-prod
# nginx receives HUP, reloads config from /etc/nginx/nginx.conf, keeps running.
# All existing connections continue uninterrupted.

Classic use of docker kill to send a non-fatal control signal.

Diagnose a hung container

bash
# App not responding to anything
$ docker stop -t 5 hung-container
# Waits 5s, sends SIGKILL.
# Or skip the wait entirely:
$ docker kill hung-container

For a container that is stuck, kill is the right tool — graceful is a wasted wait.

Compose grace-period setting

yaml
services:
  api:
    image: myapp
    stop_grace_period: 30s   # docker compose stop will wait up to 30s
    stop_signal: SIGUSR1     # use SIGUSR1 instead of SIGTERM (rare)

Compose lets you customize per service. For most apps, the default is fine; for slow-shutting-down DBs or workers, extend it.
