
How to handle signals in containers and what is the init process?

Signal handling and the init process in containers are among those topics that do not matter until they do, and then they matter a lot. Apps that ignore SIGTERM lose data on every deploy. Apps that fork without an init process (or their own reaping) leak zombies until the container runs out of PIDs.

Theory

TL;DR

  • Inside a container, PID 1 is your app (whatever you put in CMD/ENTRYPOINT).
  • PID 1 has special semantics inherited from Unix:
    1. Ignores most signals by default unless explicitly trapped.
    2. Responsible for reaping zombies — child processes that exited but whose status has not been collected.
  • docker stop sends SIGTERM to PID 1, waits 10s grace, then SIGKILL.
  • Without trapping SIGTERM, your app gets SIGKILLed after the grace period — no graceful shutdown, no flush, no clean exit.
  • Without a proper init process (your app or a wrapper), zombie children accumulate. For apps that fork (Python multiprocessing, certain Node patterns), this is a real issue.
  • Solution: trap signals in your code AND/OR use tini (--init flag) as PID 1.

Why PID 1 is special

Linux PID 1 (the init process, usually systemd on a host) has two responsibilities:

  1. Signal handling. Most signals (SIGTERM, SIGINT, SIGUSR1) are ignored unless PID 1 explicitly registers a handler. The kernel does this to protect init from being accidentally killed.
  2. Reaping zombies. When a process exits, its parent must call wait() to collect the exit status. If the parent does not, the child becomes a zombie (visible in ps as <defunct>). When a parent dies, its orphaned children are reparented to PID 1, which is expected to reap them.

In a container, your app is unexpectedly cast in this role. If your app does not handle these duties, surprises follow.
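You can watch this behavior directly. A minimal demo, assuming a stock alpine image (sleep registers no handlers, so as PID 1 it ignores SIGTERM; sig-demo is a throwaway name):

bash
docker run -d --name sig-demo alpine sleep 300
docker kill --signal=TERM sig-demo   # delivered to PID 1, which has no handler, so it is dropped
docker ps --filter name=sig-demo     # still running: the SIGTERM was ignored
docker rm -f sig-demo                # cleanup; SIGKILL cannot be ignored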

Signal handling: SIGTERM

Docker's docker stop flow:

t=0      daemon sends SIGTERM to PID 1 inside the container
t=0..N   PID 1 should:
           - finish in-flight work
           - flush state (DB, logs)
           - close connections
           - exit cleanly
t=N      if still running, daemon sends SIGKILL (default N=10s)

If your app does not register a SIGTERM handler, the kernel silently discards the signal (PID 1 gets no default terminate action). docker stop sees no exit, waits 10s, sends SIGKILL. Result: dirty shutdown, exit code 137.

Fix in code:

js
// Node.js
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(() => {
    db.disconnect();
    process.exit(0);
  });
});
python
# Python
import signal, sys

def handle_sigterm(signum, frame):
    print('SIGTERM received')
    cleanup()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
go
// Go
package main

import (
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
    go func() {
        <-sigChan
        log.Println("SIGTERM received")
        // graceful cleanup goes here
        os.Exit(0)
    }()
    http.ListenAndServe(":8080", nil)
}

Zombies and the init process

python
# Python parent forks children but never waits
import os

for i in range(100):
    pid = os.fork()
    if pid == 0:
        # child does work and exits
        os._exit(0)

# Parent never calls os.wait() → 100 zombies accumulate

ps -ef inside the container shows them as <defunct>. Each zombie holds a PID slot and a small amount of kernel memory. Long-running apps that fork can exhaust the PID space (default kernel limit 32768, see /proc/sys/kernel/pid_max).

The real fix is in the app: always reap. But if the app cannot be fixed, an init process can do it.
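For the in-app fix, here is a minimal sketch in Python (names are illustrative): install a SIGCHLD handler that reaps every already-exited child without blocking:

python
import os, signal

def reap_children(signum, frame):
    # Collect exit statuses of all children that have already exited
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break            # no children at all
        if pid == 0:
            break            # children exist, but none have exited yet

signal.signal(signal.SIGCHLD, reap_children)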

tini and --init

bash
docker run --init myapp

The --init flag wraps your app in tini. tini becomes PID 1; your app becomes PID 2 (and any zombies it creates are reparented to tini, which reaps them properly). tini also forwards signals to your app correctly.

Without --init:

PID 1: your-app

With --init:

PID 1: /sbin/docker-init (tini)
PID 2: your-app

tini's responsibilities:

  • Forward SIGTERM, SIGINT etc. from PID 1 to your app (PID 2).
  • Reap any zombies that get reparented to it.
  • Exit when your app exits, with the same exit code.

Shell vs exec form revisited

This is the most common cause of signal problems:

dockerfile
# WRONG: shell form (sh becomes PID 1, your app is PID 2)
CMD nginx -g "daemon off;"

# RIGHT: exec form (your app IS PID 1)
CMD ["nginx", "-g", "daemon off;"]

With shell form, Docker wraps the command in /bin/sh -c '...'. So sh is PID 1; nginx is PID 2. docker stop sends SIGTERM to sh, which ignores it (sh does not forward signals to children). After 10s, SIGKILL — and nginx never knew it should shut down.

Always use exec form (["prog", "arg"]) for production CMD/ENTRYPOINT.
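One quick way to check which form an image ended up with (web is a hypothetical running container):

bash
docker exec web cat /proc/1/cmdline | tr '\0' ' '; echo
# shell form prints:  /bin/sh -c nginx -g "daemon off;"
# exec form prints:   nginx -g daemon off;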

When to use tini / --init

Yes:

  • Your app forks children and does not reap them (Python multiprocessing without proper join, Node child_process without exit handling).
  • You use bash/sh as part of a complex entrypoint script.
  • You see <defunct> processes in docker top or ps aux inside the container.

No (do not need it):

  • Your app is a single process that handles its own signals (most Go, Rust services).
  • Your image's entrypoint is already an init-like wrapper (some official images, like postgres' docker-entrypoint.sh, do exec at the end so the real binary is PID 1).

Shell entrypoint scripts: the right way

Many images use a shell script as entrypoint for setup:

bash
#!/bin/sh
# docker-entrypoint.sh
run-setup-tasks
exec "$@"   # ← critical: replaces sh with the real command

The exec "$@" replaces the shell process with the actual command. Your app becomes PID 1 (after the shell's brief existence); signals work; no shell wrapper.

Without exec:

bash
#!/bin/sh
run-setup-tasks
"$@"   # spawns a child; sh stays as PID 1

Now sh is PID 1, ignores signals, and your app does not get SIGTERM. Bug.

Common mistakes

Shell form CMD breaking docker stop

Covered. Always exec form.

Missing exec in shell entrypoint

bash
# WRONG
#!/bin/sh
run-init
/usr/bin/myapp

# RIGHT
#!/bin/sh
run-init
exec /usr/bin/myapp

Trap added but app does not actually shut down

python
signal.signal(signal.SIGTERM, lambda s, f: print('SIGTERM'))
# Logs the signal but does not exit. Container still SIGKILLed at grace period.

The handler must actually exit (and finish cleanup before exiting).

Forgetting --init for fork-heavy apps

Symptom: lots of <defunct> entries in ps and eventual PID exhaustion that looks like resource starvation on long-running containers. Add --init or fix the app's fork/wait logic.

Treating exit code 137 as fine

Exit 137 (SIGKILL) means the app did NOT shut down gracefully — it was force-killed. If you see 137 on docker stop, your signal handling is broken.
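A quick check after any stop or deploy (137 = 128 + 9, i.e. killed by signal 9, SIGKILL; web is a placeholder name):

bash
docker stop web
docker inspect --format '{{.State.ExitCode}}' web   # 0 = clean shutdown, 137 = force-killed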

Real-world impact

  • Database containers (Postgres, MySQL): if they get SIGKILLed, WAL/journal replay on next start. Sometimes minutes of recovery time.
  • Worker containers: in-flight jobs lost or duplicated; downstream impact depends on idempotency.
  • Web servers: in-flight requests dropped; clients see 502/connection-reset.
  • High-fork-rate containers (Python multiprocessing, parallel test runners): zombie accumulation can crash the container.

Follow-up questions

Q: What signals does docker stop send?


A: SIGTERM by default. Override with --stop-signal=SIGUSR1 (per-container) or STOPSIGNAL in Dockerfile. Useful for apps that use SIGUSR1 for graceful shutdown.
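For example (assuming an app that treats SIGUSR1 as its graceful-shutdown signal):

bash
docker run --stop-signal=SIGUSR1 myapp
# or bake it into the image with a Dockerfile line: STOPSIGNAL SIGUSR1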

Q: Why does my Node.js app exit immediately on Ctrl+C but not on docker stop?


A: Ctrl+C sends SIGINT (not SIGTERM). Node has a default SIGINT handler that exits; it does not have a default SIGTERM handler. Add process.on('SIGTERM', ...) explicitly.

Q: Should I always use --init?


A: Adding it is harmless. For fork-heavy or shell-heavy entrypoints, use it. For single-process apps in exec-form CMD, it is unnecessary but not harmful.

Q: What is the difference between tini, dumb-init, and s6?


A: tini (what --init uses) is minimal: signal forwarding plus zombie reaping. dumb-init is similar to tini with slightly different defaults. s6 and runit are full process supervisors (multi-process, restart on crash). For a single app, tini is enough; for multi-process containers (an anti-pattern, but sometimes necessary), use s6.

Q: (Senior) How do you debug whether your app is properly handling SIGTERM?


A: Send SIGTERM and time the exit: time docker stop --time 30 mycontainer. A clean app exits in 1-2 seconds with code 0. A dirty app hangs for the full grace period and exits 137. Inside the container, log signal-handler activity ("received SIGTERM at...") to confirm the handler actually fires. For deeper inspection, strace -p <PID> -e trace=signal from outside (with --cap-add SYS_PTRACE) shows raw signal delivery.
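Putting that together, a sketch of the check (mycontainer is a placeholder name):

bash
time docker stop --time 30 mycontainer                        # clean app: ~1-2s real time
docker inspect --format '{{.State.ExitCode}}' mycontainer     # 0 = graceful, 137 = SIGKILLed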

Examples

Node.js graceful shutdown

js
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('hello');
});
server.listen(3000);

let shuttingDown = false;
process.on('SIGTERM', () => {
  if (shuttingDown) return;
  shuttingDown = true;
  console.log('SIGTERM received, draining...');
  server.close((err) => {
    if (err) console.error(err);
    console.log('Server closed; exiting');
    process.exit(0);
  });
  // Force exit after 25s if cleanup hangs
  setTimeout(() => process.exit(1), 25000).unref();
});
bash
docker run -d --name web myapp
docker stop web
# 'SIGTERM received, draining...'
# 'Server closed; exiting'
# Exit code 0; took ~1 second.

Entrypoint with exec

bash
#!/bin/sh
# docker-entrypoint.sh
set -e

# Migrate DB if first run
if [ ! -f /var/lib/myapp/.initialized ]; then
    myapp migrate
    touch /var/lib/myapp/.initialized
fi

# Replace shell with real command
exec "$@"
dockerfile
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["myapp", "server"]

exec "$@" is the magic line. After init tasks, the shell is replaced with myapp server. The myapp binary is now PID 1 (or PID 2 if --init). Signals reach it.

Detecting zombie accumulation

bash
# Inside the container
$ ps -ef | grep defunct | wc -l
42
# 42 zombies. Either fix the app's fork/wait logic or add --init.

# Re-run with --init
docker run --init myapp

# Inside:
$ ps -ef | grep defunct
# (none: tini reaps them as they exit)
