How does Docker networking work and what network types exist?
Docker networking is the layer that decides how containers reach each other and the outside world. There are six built-in drivers, but most production setups use just two: a user-defined bridge for single-host apps, and overlay for multi-host clusters.
Theory
TL;DR
- Docker creates virtual networks; containers attach to them and get an internal IP.
- Six built-in drivers cover every common case. Pick by topology, not preference.
- bridge (default): isolated virtual network on one host; used by docker run when you do not specify a network. User-defined bridges are like the default bridge but add DNS resolution by container name, so they are almost always preferred.
- host: container shares the host's network stack (no isolation, fastest).
- none: no network at all (rare, for very locked-down workloads).
- overlay: spans multiple Docker hosts via VXLAN. Required for Swarm services.
- macvlan/ipvlan: container appears as a real device on the physical LAN (rare, for legacy or specific routing needs).
Quick example
# List the default networks Docker creates
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4f06b3e2c0c1   bridge    bridge    local
8a3f2d1c9b8e   host      host      local
1234567890ab   none      null      local
# Create a user-defined bridge (the right default for app stacks)
$ docker network create mynet
# Run two containers on it; they can ping each other by name
$ docker run -d --name api --network mynet myapp
$ docker run -d --name db --network mynet postgres:16
$ docker exec api ping -c 1 db
# Resolves db → 172.18.0.2 via Docker's embedded DNS; ICMP works.

User-defined bridges give name-based DNS automatically; the default bridge does not. That alone is the reason to prefer them.
The drivers in detail
bridge (default) — single host
A virtual L2 bridge on the host. Containers attached to it get an IP on a private subnet (e.g., 172.17.0.0/16 for the default bridge).
- Default bridge network (created by Docker on install): no DNS by name, only by --link (deprecated). Avoid.
- User-defined bridges (docker network create mynet): containers resolve each other by container name. Always prefer.
On bridge networks, iptables NAT rules forward published ports (-p) from the host to the container's IP.
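You can see that NAT path directly. A minimal sketch, assuming a running Docker daemon, root access for iptables, and the stock nginx image; the container IP in the rule will differ on your host:

```shell
# Publish container port 80 on host port 8080 over the default bridge
docker run -d --name web -p 8080:80 nginx

# Show the DNAT rule Docker inserted for the published port (root needed);
# the rule's target is the container's private bridge IP
sudo iptables -t nat -L DOCKER -n | grep 8080
```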
host — no isolation
docker run --network host nginx
# nginx listens on the host's port 80 directly. No -p needed.

The container shares the host's network namespace. No NAT, no port mapping, no separate IP. Performance-wise the fastest (no overhead); security-wise the loosest (the container sees every host interface).
Use when: you want raw network performance, or you need to bind to specific host interfaces. Avoid in shared / multi-tenant setups.
none — isolated
docker run --network none alpine ip a
# Only the loopback interface (lo). No external network at all.

The container has its own network namespace but no interfaces beyond lo. Use it for batch jobs that should not touch the network, or for setting up custom networking manually after the fact.
overlay — multi-host (Swarm)
Spans Docker hosts in a Swarm cluster via VXLAN encapsulation. Containers on host A and host B can talk to each other as if on the same LAN.
# On a Swarm manager
$ docker network create --driver overlay --attachable myoverlay

Used for Swarm services. Without Swarm, you almost never reach for overlay. Kubernetes has its own equivalent (CNI plugins) and does not use Docker overlay.
macvlan — container as a real device on LAN
Container gets its own MAC address on the physical network. Looks like a separate device to your router, DHCP server, etc.
$ docker network create --driver macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
pubnet
$ docker run --network pubnet --ip 192.168.1.50 myapp

Use when legacy software expects to bind to a real LAN interface, or when you need each container on the same subnet as your office network. Caveat: most cloud providers (AWS, GCP, Azure) block MAC addresses they did not assign, so macvlan does not work cleanly in the cloud.
ipvlan — similar to macvlan, IP-level
Like macvlan but containers share the host's MAC. Better for environments where MACs are restricted (cloud, switch port-security). Less commonly used than macvlan.
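Creation mirrors the macvlan example. A sketch assuming a host NIC named eth0 and a free address on 192.168.1.0/24 (adjust both to your LAN; myapp is a placeholder image):

```shell
# ipvlan in L2 mode: containers share eth0's MAC but get their own IPs
docker network create --driver ipvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 \
  ipvnet

# The container appears on the LAN at .60, behind the host's MAC
docker run -d --name sensor --network ipvnet --ip 192.168.1.60 myapp
```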
How container DNS works
User-defined bridges and overlay networks include Docker's embedded DNS server at 127.0.0.11. From inside a container:
# Container 'api' on 'mynet' resolving sibling 'db'
$ docker exec api cat /etc/resolv.conf
nameserver 127.0.0.11
$ docker exec api nslookup db
Server: 127.0.0.11
Name: db
Address 1: 172.18.0.2

The DNS server knows about all containers on the same network. Names resolve to current IPs (container IPs can change on restart; DNS keeps up).
Network management commands
docker network ls # list
docker network inspect <name> # full details (subnet, attached containers)
docker network create [opts] <name> # create
docker network rm <name> # delete (must be empty)
docker network connect <net> <container> # attach a running container
docker network disconnect <net> <container> # detach
docker network prune # delete unused networks

Common mistakes
Using the default bridge and wondering why DNS fails
# WRONG: containers on the default bridge cannot resolve each other by name
$ docker run -d --name db postgres:16
$ docker run -d --name api myapp # both on default bridge
$ docker exec api ping db # name resolution fails

Fix: create a user-defined bridge.
$ docker network create mynet
$ docker run -d --name db --network mynet postgres:16
$ docker run -d --name api --network mynet myapp
$ docker exec api ping db # works

Compose creates a user-defined bridge per project automatically; that is why DNS "just works" in Compose.
Publishing ports while on host network
# WRONG: -p is ignored on host network
$ docker run --network host -p 8080:80 nginx
# nginx still listens on host:80 (its real port); -p does nothing

host mode and -p are mutually exclusive. Pick one.
Using localhost to connect to other containers
Classic mistake. localhost inside a container is the container itself. Use the service name (on a user-defined bridge or Compose project) or the container's IP (fragile, IPs change).
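Assuming the api/db pair on mynet from the earlier example (and an image that ships nc), the difference is easy to demonstrate:

```shell
# WRONG: localhost inside 'api' is api's own loopback; nothing listens on 5432 there
docker exec api nc -z localhost 5432

# RIGHT: the sibling's name resolves via Docker's embedded DNS on the shared bridge
docker exec api nc -z db 5432
```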
Overlay without Swarm
$ docker network create --driver overlay test
Error response from daemon: This node is not a swarm manager.

Overlay requires a Swarm cluster (run docker swarm init first). Outside Swarm, use bridge.
Real-world usage
- Local dev with Compose: every project gets its own user-defined bridge automatically. Service-name DNS works out of the box.
- Single-host production: explicit user-defined bridge per app stack. Internal services unpublished, web/proxy published.
- Swarm clusters: overlay networks for service-to-service traffic across nodes. Encrypted by default in modern Swarm.
- Performance-critical workloads on bare metal: --network host to skip the bridge and NAT overhead.
- Special LAN integration (legacy, IoT, home labs): macvlan to get containers "on the LAN" with their own IP/MAC.
Follow-up questions
Q: Why do containers on a user-defined bridge have DNS but the default bridge does not?
A: Historical. The default bridge network is Docker's original implementation; user-defined bridges came later with DNS-by-name baked in. The default bridge was kept for backward compatibility but is essentially deprecated for new use.
Q: Can a container belong to multiple networks?
A: Yes. docker run --network net1 ... && docker network connect net2 container. Useful for separating concerns: a web container on frontend and backend, db only on backend.
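A sketch of that frontend/backend split (myapp is a placeholder image):

```shell
docker network create frontend
docker network create backend

# db lives only on backend; web only on frontend
docker run -d --name db  --network backend -e POSTGRES_PASSWORD=devpass postgres:16
docker run -d --name web --network frontend nginx

# api starts on backend, then is attached to frontend as well
docker run -d --name api --network backend myapp
docker network connect frontend api

# api can now reach db (backend) and be reached by web (frontend);
# web cannot reach db at all
```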
Q: What is the difference between bridge and --network=host?
A: bridge gives the container its own network namespace and IP, with NAT translating to/from the host. host makes the container share the host's network namespace — same IP, same interfaces, no NAT.
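You can see the difference with a throwaway alpine container:

```shell
# bridge (default): the container has its own namespace and a private eth0
docker run --rm alpine ip -4 addr show eth0

# host: the same command lists the host's real interfaces instead
docker run --rm --network host alpine ip -4 addr
```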
Q: How does Kubernetes networking relate to Docker's?
A: Kubernetes does not use Docker's networking. K8s has the CNI (Container Network Interface) plugin model: each pod gets its own network namespace, but connectivity is provided by a CNI plugin (Calico, Flannel, Cilium, etc.), not Docker. Underneath, the runtime (containerd) still creates the network namespace; the CNI plugin assigns the IP and sets up routing.
Q: (Senior) When would you pick macvlan over a bridge with -p mapping?
A: When the container's traffic must originate from a real LAN address (legacy services that whitelist by IP, license servers tied to a MAC, network equipment management). Bridge + -p always shows the host's IP from outside; macvlan gives the container its own IP. Avoid in cloud — major cloud providers reject unfamiliar MACs at the hypervisor.
Examples
Manually creating a user-defined bridge for a stack
$ docker network create --driver bridge \
--subnet 172.20.0.0/24 \
--gateway 172.20.0.1 \
appnet
$ docker run -d --name db --network appnet \
-e POSTGRES_PASSWORD=devpass postgres:16
$ docker run -d --name api --network appnet \
-e DATABASE_URL=postgres://postgres:devpass@db:5432/app myapp
$ docker run -d --name web --network appnet -p 80:80 nginx
# web reaches api by name; api reaches db by name; only web is exposed to host.

Production-shape pattern: explicit network, services unpublished except the entry point.
Inspecting a network
$ docker network inspect appnet --format '{{json .Containers}}' | jq
{
"a3f9d2b8c1e4": {
"Name": "db",
"IPv4Address": "172.20.0.2/24"
},
"b7e1f4d6a2b8": {
"Name": "api",
"IPv4Address": "172.20.0.3/24"
},
...
}

Who is on the network, with what IP. Useful for debugging communication issues.