# What is a Docker container?

## Short answer

**A Docker container** is a running instance of an image. It is a process on the host kernel, isolated from other processes by Linux namespaces and cgroups, with its own filesystem view, network, and resource limits.

```bash
$ docker run -d --name web -p 80:80 nginx:1.27
# image (template) → container (running process)
$ docker ps
CONTAINER ID   IMAGE        STATUS         PORTS                NAMES
a3f9d2b8c1e4   nginx:1.27   Up 2 seconds   0.0.0.0:80->80/tcp   web
```

**Key:** the image is the blueprint, the container is the running thing. A container is replaceable - anything written to its filesystem is lost when the container is removed, unless it is stored in a volume.

## Answer

**A Docker container** is a running instance of an image. Mechanically, it is a process on the host's Linux kernel that gets its own isolated view of the system from namespaces and its resource limits from cgroups.

## Theory

### TL;DR

- Container = host process + isolation primitives (Linux namespaces + cgroups). Not a tiny VM, not a separate OS.
- One image can spawn many containers. Each starts with the image's filesystem and adds a thin writable layer on top.
- **Lifecycle:** `created` → `running` → `paused` / `exited` → `removed`. You move between states with `docker run`, `docker stop`, `docker start`, `docker rm`.
- Containers are **ephemeral**. Anything you write to the writable layer disappears on `docker rm`. Use volumes for state that should survive.
- Default isolation: PID, mount, network, IPC, UTS, user, and cgroup namespaces. The container thinks it has its own PID 1, its own filesystem root, its own network interfaces.

### Quick example

```bash
# Start a container in the background
$ docker run -d --name web -p 8080:80 nginx:1.27-alpine
a3f9d2b8c1e4f3a2...

# It is running
$ docker ps
CONTAINER ID   IMAGE               STATUS         PORTS                  NAMES
a3f9d2b8c1e4   nginx:1.27-alpine   Up 3 seconds   0.0.0.0:8080->80/tcp   web

# From inside, it sees its own world
$ docker exec web ps aux
USER   PID   COMMAND
root     1   nginx: master process nginx -g daemon off;
nginx   29   nginx: worker process
# PID 1 is nginx - the container has its own process tree.
```

That container is a process on your host (you can find it via `ps aux | grep nginx` from outside), but inside its PID namespace it thinks it is process 1.

### Lifecycle states

A container moves through a small set of states:

```
 docker create                  docker start
      |                             |
      v                             v
 +---------+    docker run    +---------+    docker stop    +--------+
 | created | ---------------> | running | ----------------> | exited |
 +---------+                  +---------+    docker kill    +--------+
                                |     ^                          |
                   docker pause |     | docker unpause           | docker rm
                                v     |                          v
                              +--------+                    +---------+
                              | paused |                    | removed |
                              +--------+                    +---------+
```

- `docker run` is shorthand for `docker create` + `docker start`.
- `docker stop` sends SIGTERM, waits 10 seconds, then SIGKILL. `docker kill` skips straight to SIGKILL.
- An exited container still exists - you can restart it. It is gone only after `docker rm`.

### How isolation actually works

A container is just a process. The Linux kernel makes it feel like its own machine through seven namespaces:

| Namespace | What it isolates |
|---|---|
| `pid` | Process IDs - the container has its own PID 1 |
| `mnt` | Mount points - the container has its own filesystem view |
| `net` | Network interfaces, routing tables, sockets |
| `ipc` | Shared memory, semaphores, message queues |
| `uts` | Hostname and domain name |
| `user` | User and group IDs (UID/GID remapping) |
| `cgroup` | Cgroup root view |

On top of namespaces, **cgroups** account for and limit CPU, memory, block I/O, and process counts. When you run `docker run --memory=512m`, the daemon sets a memory cgroup limit that the kernel enforces. Exceed it and the kernel OOM-kills your process inside the container.
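None of this is Docker magic - it is plain kernel machinery you can inspect on any Linux host, with or without containers. Each process's namespace memberships are exposed as symlinks under `/proc/<pid>/ns`. A short sketch (Linux only; the inode numbers shown are illustrative):

```bash
# Every process belongs to exactly one namespace of each type.
# /proc exposes them as symlinks naming the type and an inode ID.
readlink /proc/self/ns/pid   # e.g. pid:[4026531836]
readlink /proc/self/ns/net   # e.g. net:[4026531840]
readlink /proc/self/ns/mnt   # e.g. mnt:[4026531841]

# Two processes share a namespace exactly when these IDs match.
# A process inside a container reports different IDs than your
# host shell for every namespace the runtime unshared.
```

Comparing these IDs between a host shell and a `docker exec` shell is a quick way to see exactly which namespaces a container actually got.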
All of this is enforced by one kernel - yours. There is no guest kernel inside the container.

### Key commands

```bash
docker run              # create + start in one step
docker ps               # list running containers
docker ps -a            # list all containers (including exited)
docker stop <id>        # graceful stop (SIGTERM, then SIGKILL after 10s)
docker kill <id>        # immediate stop (SIGKILL)
docker rm <id>          # delete a container (must be stopped, or use -f)
docker exec <id> <cmd>  # run a command inside a running container
docker logs <id>        # see what stdout/stderr the container has produced
docker inspect <id>     # full JSON dump of container state
```

### Common mistakes

**Treating a container like a long-lived server**

```bash
# WRONG: shelling in to install or configure things
$ docker exec -it web apk add vim
# Gone the moment the container is recreated.

# RIGHT: bake it into the image
FROM nginx:1.27-alpine
RUN apk add --no-cache vim
```

A container is meant to be replaceable. Configure through the Dockerfile or environment variables, not by editing the running container.

**Losing data because state was not in a volume**

```bash
# WRONG: data dies with the container
$ docker run --name pg postgres:16
$ docker rm -f pg
# All your tables, gone.

# RIGHT: persist state in a volume
$ docker run -d --name pg \
    -v pgdata:/var/lib/postgresql/data \
    postgres:16
```

The writable layer is for temporary scratch state, not real data. Volumes survive container removal.

**Running multiple processes in one container**

The convention is **one main process per container**. Pack a database, a web server, and a job runner into one container and you lose the ability to scale them independently, log them separately, or restart them in isolation. Use `docker compose` to orchestrate multiple single-process containers instead.

**Forgetting that PID 1 has special semantics**

In a container, PID 1 is the process you specified as `CMD` / `ENTRYPOINT`.
PID 1 ignores signals it has not installed handlers for, and is responsible for reaping zombie children. If your app forks subprocesses but never reaps them, you accumulate zombies. Tools like `tini` or `dumb-init` handle this for you (`docker run --init` adds `tini` automatically).

### Real-world usage

- **CI/CD jobs:** GitHub Actions runners, GitLab CI, Jenkins agents - each pipeline step often runs in its own short-lived container. The container starts, runs the step, exits, and is removed.
- **Local dev stacks:** Postgres, Redis, MinIO, RabbitMQ - a dev brings up a half-dozen containers via `docker compose up` and tears them all down with `docker compose down`.
- **Production microservices:** each service runs as one or more containers behind a load balancer. Kubernetes schedules them across hosts; the container is the smallest deploy unit.
- **Functions-as-a-service:** AWS Lambda (which runs containers inside Firecracker microVMs), Google Cloud Run, Azure Container Apps - all surface a container interface to the developer.

### Follow-up questions

**Q:** What happens when I run `docker stop`?
**A:** The daemon sends SIGTERM to PID 1 inside the container. PID 1 should handle it and shut down gracefully. After a 10-second grace period (configurable with `--time`), the daemon sends SIGKILL. Apps that ignore SIGTERM get a hard kill - which is fine for stateless services, terrible for databases.

**Q:** Can two containers from the same image interfere with each other?
**A:** Not by default. Each gets its own namespaces, its own writable layer, its own network. They share read-only image layers on disk and in memory but cannot see each other's processes or files. If you want them to share something explicitly, use a named volume or a Docker network.

**Q:** What is the difference between `docker stop` and `docker rm`?
**A:** Stop transitions a running container to exited; the container still exists and can be restarted with `docker start`.
Remove deletes the container entirely - its writable layer, its config, its name. A typical CI cleanup is `docker rm -f $(docker ps -aq)` to nuke everything.

**Q:** Why do my files have UID 1000 on the host even though the container ran as a different user?
**A:** Without user namespace remapping, UIDs inside the container map directly to UIDs on the host. If your container's `node` user has UID 1000, files it writes to a bind mount appear owned by host UID 1000 - which may or may not be a real user. Use `--user $(id -u):$(id -g)` to align them, or set up user namespaces for full remapping.

**Q:** (Senior) How is a Docker container different from a Kubernetes pod?
**A:** A pod is a wrapper around one or more containers that share network and storage namespaces. The pod has one IP; containers inside it share `localhost`. Use a multi-container pod when you need a tightly coupled sidecar pattern (e.g., app container + log shipper). For 99 percent of cases, one container per pod is the norm. Docker has no built-in pod concept; it has `docker compose` for multi-container apps but no shared-network grouping like a pod.

## Examples

### Inspecting a running container

```bash
$ docker run -d --name api node:22-alpine sleep 3600
$ docker inspect api --format '{{.State.Status}} (PID {{.State.Pid}})'
running (PID 84231)

# That PID exists on your host
$ ps -p 84231 -o pid,user,comm
  PID USER  COMMAND
84231 root  sleep

# Inside the container, the same process is PID 1
$ docker exec api ps aux
USER   PID   COMMAND
root     1   sleep 3600
```

One process, two views. The host sees PID 84231; inside its PID namespace, the container sees PID 1. That namespace remapping is the heart of container isolation.

### Container lifecycle in one session

```bash
$ docker run -d --name worker alpine sleep 1000

$ docker ps --filter name=worker
STATUS         NAMES
Up 5 seconds   worker

$ docker stop worker

$ docker ps -a --filter name=worker
STATUS                       NAMES
Exited (137) 2 seconds ago   worker
# 137 = 128 + SIGKILL(9). The container ignored SIGTERM until the timeout.

$ docker start worker   # back to running
$ docker rm -f worker   # gone
```

Note the exit code 137. `sleep` did not handle SIGTERM, so after the grace period the daemon killed it. Real apps should trap SIGTERM and shut down cleanly.
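The difference between a clean exit and a 137 can be sketched without Docker at all. Below is a minimal, hypothetical `entrypoint.sh` that traps SIGTERM the way a well-behaved PID 1 should; run under `docker stop`, a container with this entrypoint would record exit code 0 within the grace period instead of being SIGKILLed after it:

```bash
#!/bin/sh
# Hypothetical entrypoint: shut down cleanly on SIGTERM so that
# `docker stop` records exit code 0 instead of 137.
shutdown() {
  echo "caught SIGTERM, shutting down"
  kill "$child" 2>/dev/null   # stop the workload
  exit 0
}
trap shutdown TERM

echo "service up (pid $$)"
sleep 1000 &                  # stand-in for the real workload
child=$!
wait "$child"                 # returns early when SIGTERM arrives
```

Untrapped, SIGTERM ends a process with status 143 (128 + 15); a SIGKILL after the grace period yields 137 (128 + 9). The `wait`-on-a-background-child pattern matters: a shell blocked in a foreground command does not run its traps until that command finishes.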