<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to
interact with Docker networks and the containers in them. The commands are
available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](dockernetworks.md) before trying the examples in this section. The
examples here rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network see
the [Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or
`overlay` network.

A `bridge` network resides on a single host running an instance of Docker
Engine. An `overlay` network can span multiple hosts running their own engines.
If you run `docker network create` and supply only a network name, it creates a
bridge network for you.
```bash
$ docker network create simple-network

69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a

$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `dockerd` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, that you install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
specify a single subnet. An `overlay` network supports multiple subnets.

> **Note**: It is highly recommended to use the `--subnet` option when creating
> a network. If the `--subnet` is not specified, the docker daemon automatically
> chooses and assigns a subnet for the network and it could overlap with another subnet
> in your infrastructure that is not managed by docker.
> Such overlaps can cause
> connectivity issues or failures when containers are connected to that network.

In addition to the `--subnet` option, you can also specify the `--gateway`,
`--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 \
  --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 \
  --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, the network create
fails and Engine returns an error.

When creating a custom network, the default network driver (i.e. `bridge`) has
additional options that can be passed. The following are those options and the
equivalent docker daemon flags used for the docker0 bridge:

| Option                                           | Equivalent  | Description                                           |
|--------------------------------------------------|-------------|-------------------------------------------------------|
| `com.docker.network.bridge.name`                 | -           | bridge name to be used when creating the Linux bridge |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` | Enable IP masquerading                                |
| `com.docker.network.bridge.enable_icc`           | `--icc`     | Enable or Disable Inter Container Connectivity        |
| `com.docker.network.bridge.host_binding_ipv4`    | `--ip`      | Default IP when binding container ports               |
| `com.docker.network.mtu`                         | `--mtu`     | Set the containers network MTU                        |

The following arguments can be passed to `docker network create` for any network driver.
| Argument     | Equivalent | Description                              |
|--------------|------------|------------------------------------------|
| `--internal` | -          | Restricts external access to the network |
| `--ipv6`     | `--ipv6`   | Enable IPv6 networking                   |

For example, let's use the `-o` or `--opt` option to specify an IP address binding when publishing ports:

```bash
$ docker network create -o "com.docker.network.bridge.host_binding_ipv4"="172.23.0.1" my-network

b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a

$ docker network inspect my-network

[
    {
        "Name": "my-network",
        "Id": "b1a086897963e6a2e7fc6868962e55e746bee8ad0c97b54a5831054b5f62672a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.23.0.0/16",
                    "Gateway": "172.23.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.host_binding_ipv4": "172.23.0.1"
        }
    }
]

$ docker run -d -P --name redis --network my-network redis

bafb0c808c53104b2c90346f284bda33a69beadcab4fc83ab8f2c5a4410cd129

$ docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                        NAMES
bafb0c808c53        redis               "/entrypoint.sh redis"   4 seconds ago       Up 3 seconds        172.23.0.1:32770->6379/tcp   redis
```

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.
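If cross-host communication does not work, a useful first check is whether each daemon was actually started with the cluster flags listed earlier. A minimal sketch, assuming daemons configured with `--cluster-store` and `--cluster-advertise` (the exact wording of the `docker info` fields may vary by Engine version):

```bash
# Cluster configuration, when present, is reported by `docker info`.
# Run this on each host that should participate in the overlay network.
$ docker info | grep -i cluster
```

If the command prints nothing, that daemon is not configured for multi-host networking.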
Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox

18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox

498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated, `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw

06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify
the connection:

```bash
$ docker network connect isolated_nw container2

$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that the Engine automatically assigns an IP address to `container2`.
Because we specified a `--subnet` when creating the network, Engine picked
an address from that same subnet.
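If you only want the assigned address rather than the full JSON, you can filter the `inspect` output with a Go template. A small sketch using the names from this example:

```bash
# Print only container2's address on the isolated_nw network.
$ docker inspect \
  --format '{{(index .NetworkSettings.Networks "isolated_nw").IPAddress}}' \
  container2
```

The same template works for any network the container is attached to; swap in the network name you are interested in.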
Now, start a third container and connect it to
the network on launch using the `docker run` command's `--network` option:

```bash
$ docker run --network=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox

467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container. As
long as the network to which the container is connecting was created with a
user-specified subnet, you can select the IPv4 and/or IPv6
address(es) for your container when executing the `docker run` and `docker network
connect` commands, by passing the `--ip` flag for IPv4 and the `--ip6` flag for
IPv6. The selected IP address is part of the container networking
configuration and will be preserved across container reloads. The feature is
only available on user-defined networks, because they guarantee their subnet
configuration does not change across daemon reloads.

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3

{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can pretty print the output.
```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool

{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find that `container2` belongs to two networks: the `bridge` network,
which it joined by default when you launched it, and `isolated_nw`, to which you
later connected it.

In the case of `container3`, you connected it through `docker run` to
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet
interfaces, one for the default bridge network and one for the `isolated_nw`
network.
```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:15:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the `isolated_nw`, which was user defined, the Docker embedded DNS server
enables name resolution for other containers in the network. Inside of
`container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network.
Both `container2` and
`container1` are connected to the default bridge network. Docker does not
support automatic service discovery on this network. For this reason, pinging
`container1` by name fails as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker
run --link` command and that would enable the two containers to interact by name
as well as IP.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3

/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.
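One way to see this behavior, sketched here with a hypothetical container named `parked` (any name works), is to connect a created-but-not-started container:

```bash
# `docker create` prepares a container without starting it.
$ docker create --name parked busybox

$ docker network connect isolated_nw parked

# `parked` holds a place on the network, but it is not listed under
# "Containers" in the inspect output until it is started.
$ docker network inspect isolated_nw
$ docker start parked
```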
### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name
automatically in the user-defined network `isolated_nw`, but the name
resolution did not succeed automatically in the default `bridge` network. This
is expected in order to maintain backward compatibility with [legacy
link](default_network/dockerlinks.md).

The `legacy link` provided four major functionalities to the default `bridge`
network:

* name resolution
* name alias for the linked container using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

Compared with these four functionalities, user-defined
networks such as `isolated_nw` in this example provide, without any additional
configuration:

* automatic name resolution using DNS
* automatic secured isolated environment for the containers in a network
* ability to dynamically attach and detach to multiple networks
* support for the `--link` option to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in
`isolated_nw` with `--link` to provide additional name resolution using an
alias for another container in the same network.

```bash
$ docker run --network=isolated_nw -itd --name=container4 --link container5:c5 busybox

01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5` using
the aliased name `c5` as well.

Please note that while creating `container4`, we linked to a container named
`container5` which is not created yet. That is one of the differences in
behavior between the *legacy link* in the default `bridge` network and the new
*link* functionality in user-defined networks.
The *legacy link* is static in
nature: it hard-binds the container with the alias and it doesn't tolerate
linked container restarts. The new *link* functionality in user-defined
networks is dynamic in nature and supports linked container restarts, including
tolerating IP-address changes on the linked container.

Now let us launch another container named `container5`, linking `container4` to
the alias `c4`.

```bash
$ docker run --network=isolated_nw -itd --name=container5 --link container4:c4 busybox

72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its
container name and its alias `c5`, and `container5` will be able to reach
`container4` by its container name and its alias `c4`.

```bash
$ docker attach container4

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5

/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a
container and the aliased name has no meaning outside of the container using
the `--link`.

Also, it is important to note that if a container belongs to multiple networks,
the linked alias is scoped within a given network. Hence the containers can be
linked to different aliases in different networks.
Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then let us connect `container4` and `container5` to the new network
`local_alias`:

```bash
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
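To confirm how the aliases are scoped, you can inspect a container's endpoint settings on a single network. A sketch using the names above; the per-network `Links` and `Aliases` fields reported by `docker inspect` are assumed to reflect what was passed at connect time:

```bash
# Show container4's endpoint configuration on local_alias only.
$ docker inspect \
  --format '{{json (index .NetworkSettings.Networks "local_alias")}}' \
  container4 | python -m json.tool
```

Any `container5:foo` link reported here lives only on `local_alias`; the `container5:c5` link from earlier is carried on `container4`'s `isolated_nw` endpoint instead.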
Let us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results:

```bash
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

In conclusion, the new link functionality in user-defined networks provides all
the benefits of legacy links while avoiding most of the well-known issues with
*legacy links*.

One notable missing functionality compared to *legacy links* is the injection
of environment variables. Though very useful, environment variable injection is
static in nature and must happen when the container is started. One cannot
inject environment variables into a running container without significant
effort, and hence it is not compatible with `docker network`, which provides a
dynamic way to connect containers to and disconnect them from a network.

### Network-scoped alias

While *links* provide private name resolution that is localized within a
container, the network-scoped alias provides a way for a container to be
discovered by an alternate name by any other container within the scope of a
particular network. Unlike the *link* alias, which is defined by the consumer
of a service, the network-scoped alias is defined by the container that is
offering the service to the network.

Continuing with the above example, create another container in `isolated_nw`
with a network alias.
```bash
$ docker run --network=isolated_nw -itd --name=container6 --network-alias app busybox

8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now let us connect `container6` to the `local_alias` network with a different
network-scoped alias.

```bash
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example now is aliased as `app` in network `isolated_nw`
and as `scoped-app` in network `local_alias`.

Let's try to reach these aliases from `container4` (which is connected to both
these networks) and `container5` (which is connected only to `isolated_nw`).
```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'
```

As you can see, the alias is scoped to the network it is defined on, and hence
only those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same
network-scoped alias within the same network. For example, let's launch
`container7` in `isolated_nw` with the same alias as `container6`:

```bash
$ docker run --network=isolated_nw -itd --name=container7 --network-alias app busybox

3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, name resolution of that alias
happens to one of the containers (typically the first container that was given
the alias). When the container that backs the alias goes down or is disconnected
from the network, the next container that backs the alias resolves it.

Let us ping the alias `app` from `container4` and bring down `container6` to
verify that `container7` is resolving the `app` alias.
```bash
$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4

/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.
```bash
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool

{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}

$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

Once a container is disconnected from a network, it can no longer communicate with
other containers connected to that network. In this example, `container2` can
no longer talk to `container3` on the `isolated_nw` network.
```bash
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

`container2` still has full connectivity to the `bridge` network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

There are certain scenarios, such as ungraceful docker daemon restarts in a
multi-host network, where the daemon is unable to clean up stale connectivity
endpoints. Such stale endpoints may cause an error, `container already connected
to network`, when a new container with the same name as the stale endpoint is
connected to that network. To clean up these stale endpoints, first
remove the container and force disconnect (`docker network disconnect -f`) the
endpoint from the network.
Once the endpoint is cleaned up, the container can
be connected to the network.

```bash
$ docker run -d --name redis_db --network multihost redis

ERROR: Cannot start container bc0b19c089978f7845633027aa3435624ca3d12dd4f4f764b61eac4c0610f32e: container already connected to network multihost

$ docker rm -f redis_db

$ docker network disconnect -f multihost redis_db

$ docker run -d --name redis_db --network multihost redis

7d986da974aeea5e9f7aca7e510bdb216d58682faa83a9040c2f2adc0544795a
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can
remove a network.

```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```bash
$ docker network ls

NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)