<!--[metadata]>
+++
title = "Docker container networking"
description = "How do we connect Docker containers within and across hosts?"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight = -5
+++
<![end-metadata]-->

# Understand Docker container networks

To build web applications that act in concert but do so securely, use the Docker
networks feature. Networks, by definition, provide complete isolation for
containers, so it is important to have control over the networks your
applications run on. Docker container networks give you that control.

This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the types of networks created by default
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.

## Default Networks

When you install Docker, it creates three networks automatically. You can list
these networks using the `docker network ls` command:

```
$ docker network ls
NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
```

Historically, these three networks are part of Docker's implementation. When
you run a container, you can use the `--net` flag to specify which network you
want the container to join. These three networks are still available to you.

The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
--net=<NETWORK>` option, the Docker daemon connects containers to this network
by default. You can see this bridge as part of a host's network stack by using
the `ifconfig` command on the host.

```
$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
```

The `none` network adds a container to a container-specific network stack. That
container lacks a network interface. Attaching to such a container and looking
at its stack, you see this:

```
$ docker attach nonenetcontainer

root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
root@0cb243cd1293:/# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@0cb243cd1293:/#
```
>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.

The `host` network adds a container to the host's network stack. You'll find
that the network configuration inside the container is identical to the host's.
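
You can verify this by starting a container on the `host` network and listing
its interfaces from inside it; the output should match what `ifconfig` reports
on the host itself. The container name `hostnetcontainer` and the `busybox`
image below are only illustrative:

```
$ docker run -itd --net=host --name=hostnetcontainer busybox

$ docker exec hostnetcontainer ifconfig
```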

With the exception of the `bridge` network, you really don't need to interact
with these default networks. While you can list and inspect them, you cannot
remove them. They are required by your Docker installation. However, you can
add your own user-defined networks and remove them when you no longer need
them. Before you learn more about creating your own networks, it is worth
looking at the default `bridge` network a bit.

### The default bridge network in detail
The default `bridge` network is present on all Docker hosts. The `docker network inspect`
command returns information about a network:

```
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        }
    }
]
```
The Engine automatically creates a `Subnet` and `Gateway` for the network.
The `docker run` command automatically adds new containers to this network.

```
$ docker run -itd --name=container1 busybox
3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c

$ docker run -itd --name=container2 busybox
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```

Inspecting the `bridge` network again after starting two containers shows both
newly launched containers in the network. Their IDs show up in the "Containers"
section of `docker network inspect`:

```
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        }
    }
]
```

The `docker network inspect` command above shows all the connected containers
and their network resources on a given network. Containers in this default
network are able to communicate with each other using IP addresses.
Docker does not support automatic service discovery on the default bridge
network. If you want containers to be able to resolve each other by name on
this default bridge network, you must connect them using the legacy `docker run
--link` option.

You can `attach` to a running container and investigate its configuration:

```
$ docker attach container1

root@0cb243cd1293:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KiB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

Then use `ping` for about three seconds to test connectivity between the
containers on this `bridge` network.

```
root@0cb243cd1293:/# ping -w3 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms

--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.083/0.096 ms
```

Finally, use the `cat` command to check the `container1` network configuration:

```
root@0cb243cd1293:/# cat /etc/hosts
172.17.0.2  3386a527aa08
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
```
To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then,
attach to `container2` and repeat these three commands.

```
$ docker attach container2

root@0cb243cd1293:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1166 (1.1 KiB)  TX bytes:1026 (1.0 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@0cb243cd1293:/# ping -w3 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms

--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.071/0.075 ms
/ # cat /etc/hosts
172.17.0.3  94447ca47985
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
```

The default `docker0` bridge network supports the use of port mapping and
`docker run --link` to allow communications between containers in the `docker0`
network. These techniques are cumbersome to set up and prone to error. While
they are still available to you as techniques, it is better to avoid them and
define your own bridge networks instead.

## User-defined networks

You can create your own user-defined networks that better isolate containers.
Docker provides some default **network drivers** for creating these
networks. You can create a new **bridge network** or **overlay network**. You
can also create a **network plugin** or **remote network** written to your own
specifications.

You can create multiple networks. You can add containers to more than one
network. Containers can communicate within networks but not across networks. A
container attached to two networks can communicate with member containers in
either network. When a container is connected to multiple networks, its
external connectivity is provided via the first non-internal network, in
lexical order.

The next few sections describe each of Docker's built-in network drivers in
greater detail.

### A bridge network

The easiest user-defined network to create is a `bridge` network. This network
is similar to the historical, default `docker0` network. There are some added
features and some old features that aren't available.

```
$ docker network create --driver bridge isolated_nw
1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network ls
NETWORK ID          NAME                DRIVER
9f904ee27bf5        none                null
cf03ee007fb4        host                host
7fca4eb8c647        bridge              bridge
c5ee82f76de3        isolated_nw         bridge

```

After you create the network, you can launch containers on it using the `docker run --net=<NETWORK>` option.

```
$ docker run --net=isolated_nw -itd --name=container3 busybox
8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

The containers you launch into this network must reside on the same Docker host.
Each container in the network can immediately communicate with other containers
in the network. The network itself, however, isolates the containers from
external networks.

Within a user-defined bridge network, linking is not supported. You can
expose and publish container ports on containers in this network. This is useful
if you want to make a portion of the `bridge` network available to an outside
network.

A bridge network is useful in cases where you want to run a relatively small
network on a single host. You can, however, create significantly larger networks
by creating an `overlay` network.

### An overlay network

Docker's `overlay` network driver supports multi-host networking natively
out-of-the-box. This support is accomplished with the help of `libnetwork`, a
built-in VXLAN-based overlay network driver, and Docker's `libkv` library.

The `overlay` network requires a valid key-value store service. Currently,
Docker's `libkv` supports Consul, Etcd, and ZooKeeper (Distributed store).
Before creating a network, you must install and configure your chosen key-value
store service. The Docker hosts that you intend to network and the service must
be able to communicate.

Each host in the network must run a Docker Engine instance. The easiest way to
provision the hosts is with Docker Machine.

You should open the following ports between each of your hosts.

| Protocol | Port | Description           |
|----------|------|-----------------------|
| udp      | 4789 | Data plane (VXLAN)    |
| tcp/udp  | 7946 | Control plane         |

Your key-value store service may require additional ports.
Check your vendor's documentation and open any required ports.
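
If you use Docker Machine to provision the hosts, you can pass the daemon's
clustering options (described in the table below) when you create each machine.
The following is only a sketch: the `virtualbox` driver, the machine name
`mh-host-1`, and the address of a Consul key-value store are assumptions for
illustration.

```
$ docker-machine create -d virtualbox \
    --engine-opt="cluster-store=consul://<consul-ip>:8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    mh-host-1
```

Repeat this for every host that you want to participate in the overlay network.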

Once you have several machines provisioned, you can use Docker Swarm to quickly
form them into a swarm, which includes a discovery service as well.

To create an overlay network, you configure options on the `daemon` on each
Docker Engine that will use the `overlay` network. There are three options to
set:

<table>
    <thead>
    <tr>
        <th>Option</th>
        <th>Description</th>
    </tr>
    </thead>
    <tbody>
    <tr>
        <td><pre>--cluster-store=PROVIDER://URL</pre></td>
        <td>Describes the location of the KV service.</td>
    </tr>
    <tr>
        <td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
        <td>The IP address or interface of the HOST used for clustering.</td>
    </tr>
    <tr>
        <td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
        <td>Options such as a TLS certificate or tuning discovery timers.</td>
    </tr>
    </tbody>
</table>

Create an `overlay` network on one of the machines in the Swarm.

    $ docker network create --driver overlay my-multi-host-network

This results in a single network spanning multiple hosts. An `overlay` network
provides complete isolation for the containers.

Then, on each host, launch containers, making sure to specify the network name.

    $ docker run -itd --net=my-multi-host-network busybox

Once connected, each container has access to all the containers in the network
regardless of which Docker host the container was launched on.

If you would like to try this for yourself, see the [Getting started for
overlay](get-started-overlay.md).

### Custom network plugin

If you like, you can write your own network driver plugin. A network
driver plugin makes use of Docker's plugin infrastructure. In this
infrastructure, a plugin is a process running on the same Docker host as the
Docker `daemon`.

Network plugins follow the same restrictions and installation rules as other
plugins. All plugins make use of the plugin API. They have a lifecycle that
encompasses installation, starting, stopping, and activation.

Once you have created and installed a custom network driver, you use it like the
built-in network drivers. For example:

    $ docker network create --driver weave mynet

You can inspect it, connect containers to it and disconnect them, and so forth.
Of course, different plugins may make use of different technologies or
frameworks. Custom networks can include features not present in Docker's
default networks. For more information on writing plugins, see
[Extending Docker](../../extend/index.md) and
[Writing a network driver plugin](../../extend/plugins_network.md).

### Docker embedded DNS server

The Docker daemon runs an embedded DNS server to provide automatic service
discovery for containers connected to user-defined networks. Name resolution
requests from the containers are handled first by the embedded DNS server. If
the embedded DNS server is unable to resolve the request, it is forwarded to
any external DNS servers configured for the container. To facilitate this, when
the container is created, only the embedded DNS server, reachable at
`127.0.0.11`, is listed in the container's `resolv.conf` file.
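
As a quick check of this behavior, you can start two containers on a
user-defined network and resolve one from the other by name. The container
names `app1` and `app2` below are only examples; `isolated_nw` is the
user-defined network created earlier. The `resolv.conf` inside each container
should list only the embedded DNS server at `127.0.0.11`, and the ping by name
should succeed:

```
$ docker run -itd --net=isolated_nw --name=app1 busybox
$ docker run -itd --net=isolated_nw --name=app2 busybox

$ docker exec app2 cat /etc/resolv.conf
$ docker exec app2 ping -w3 app1
```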

More information on the embedded DNS server in user-defined networks can be
found in [embedded DNS server in user-defined networks](configure-dns.md).

## Links

Before the Docker network feature, you could use the Docker link feature to
allow containers to discover each other. With the introduction of Docker
networks, containers can be discovered by name automatically. You can still
create links, but they behave differently when used in the default `docker0`
bridge network compared to user-defined networks. For more information, refer
to [Legacy Links](default_network/dockerlinks.md) for the link feature in the
default `bridge` network, and to
[linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks)
for link functionality in user-defined networks.

## Related information

- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
- [Managing Data in Containers](../../tutorials/dockervolumes.md)
- [Docker Machine overview](https://docs.docker.com/machine)
- [Docker Swarm overview](https://docs.docker.com/swarm)
- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)