<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to
interact with Docker networks and the containers in them. The commands are
available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](dockernetworks.md) before trying the examples in this section. The
examples here rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network, see
[Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or
`overlay` network.

A `bridge` network resides on a single host running an instance of Docker
Engine. An `overlay` network can span multiple hosts, each running its own
engine. If you run `docker network create` and supply only a network name, it
creates a `bridge` network for you.
```bash
$ docker network create simple-network
69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a
$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "69568e6336d8c96bbf57869030919f7c69524f71183b44d80948bd3927c87f6a",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing
conditions before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper
  (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `docker daemon` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, to install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
create a single subnet. An `overlay` network supports multiple subnets.

In addition to the `--subnet` option, you can also specify the `--gateway`,
`--ip-range`, and `--aux-address` options.
```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, the network create
fails and Engine returns an error.

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.

Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated, `bridge` network to test with.
```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify
the connection:

```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that the Engine automatically assigns an IP address to `container2`.
Given we specified a `--subnet` when creating the network, Engine picked
an address from that same subnet. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:

```bash
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container.
As long as the network to which the container is connecting was created with
a user-specified subnet, you can select the IPv4 and/or IPv6 address(es)
for your container when executing `docker run` and `docker network connect`
commands.
The selected IP address is part of the container networking configuration and
will be preserved across container reload. The feature is only available on
user-defined networks, because they guarantee their subnet configuration does
not change across daemon reload.

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can
pretty-print the output.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "NetworkID":"1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find `container2` belongs to two networks.
It joined the `bridge` network by default when you launched it, and you later
connected it to `isolated_nw`.

In the case of `container3`, you connected it through `docker run` to
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet
interfaces, one for the default `bridge` network and one for the `isolated_nw`
network.

```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:15:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the user-defined `isolated_nw`, the Docker embedded DNS server enables name
resolution for other containers in the network.
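You can see where this resolution comes from by checking the container's
resolver configuration from inside `container2`. On user-defined networks,
Engine points the container at its embedded DNS server; the `127.0.0.11`
nameserver address shown below is what Engine 1.10 configures, but treat the
exact value as illustrative since it may differ between releases:

```bash
/ # cat /etc/resolv.conf
nameserver 127.0.0.11
```

On the default `bridge` network, by contrast, the container simply inherits a copy of the host's DNS configuration, which is why container names are not resolvable there.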
Inside of `container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This is not the case for the default `bridge` network. Both `container2` and
`container1` are connected to the default `bridge` network. Docker does not
support automatic service discovery on this network. For this reason, pinging
`container1` by name fails as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the
`docker run --link` command, and that would enable the two containers to
interact by name as well as by IP address.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate.
Test this now by attaching to `container3` and attempting to ping `container1`
by IP address.

```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.

### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name
automatically in the user-defined network `isolated_nw`, but name resolution
did not succeed automatically in the default `bridge` network. This is
expected in order to maintain backward compatibility with
[legacy links](default_network/dockerlinks.md).

A `legacy link` provided four major functionalities to the default `bridge`
network:

* name resolution
* a name alias for the linked container using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

Comparing these four functionalities with non-default, user-defined networks
such as `isolated_nw` in this example, without any additional configuration,
`docker network` provides:

* automatic name resolution using DNS
* an automatically secured, isolated environment for the containers in a network
* the ability to dynamically attach to and detach from multiple networks
* support for the `--link` option to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in
`isolated_nw` with `--link` to provide additional name resolution using an
alias for other containers in the same network.
```bash
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5`
using the aliased name `c5` as well.

Note that while creating `container4`, we linked to a container named
`container5` which is not created yet. That is one of the differences in
behavior between a `legacy link` in the default `bridge` network and the new
`link` functionality in user-defined networks. A `legacy link` is static in
nature: it hard-binds the container with the alias and does not tolerate linked
container restarts. The new `link` functionality in user-defined networks is
dynamic in nature: it supports linked container restarts, including tolerating
IP-address changes on the linked container.

Now launch another container, named `container5`, linking `container4` with the
alias `c4`.

```bash
$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its
container name and its alias `c5`, and `container5` will be able to reach
`container4` by its container name and its alias `c4`.
```bash
$ docker attach container4
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a
container, and the aliased name has no meaning outside of the container using
the `--link`.

Also, it is important to note that if a container belongs to multiple networks,
the linked alias is scoped within a given network. Hence, containers can be
linked to different aliases in different networks.

Extending the example, create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Then connect `container4` and `container5` to the new network `local_alias`:

```
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
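If you prefer not to attach to each container interactively, the same alias
checks can be scripted from the host with `docker exec`. This is a sketch: it
assumes the `container4` and `container5` containers from this example are
still running, and the `bar` alias used below comes from the
`--link container4:bar` connect command above.

```bash
# Resolvable from container4: 'foo' is its link alias for
# container5 on the local_alias network.
docker exec container4 ping -c 1 -w 4 foo

# Also resolvable from container4: 'c5' is its link alias for
# container5 on the isolated_nw network.
docker exec container4 ping -c 1 -w 4 c5

# Not resolvable from container4: 'bar' is container5's alias
# for container4, so only container5 can use it.
docker exec container4 ping -c 1 -w 4 bar \
  || echo "bar is not resolvable from container4"
```

Because link aliases are private to the consuming container, the last command is expected to fail, illustrating the scoping rule described above.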
Let us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results:

```
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

In conclusion, the new link functionality in user-defined networks provides all
the benefits of legacy links while avoiding most of the well-known issues with
`legacy links`.

One notable functionality missing compared to `legacy links` is the injection
of environment variables. Though very useful, environment variable injection is
static in nature and must happen when the container is started. One cannot
inject environment variables into a running container without significant
effort, and hence it is not compatible with `docker network`, which provides a
dynamic way to connect containers to and disconnect them from a network.

### Network-scoped alias

While `links` provide private name resolution that is localized within a
container, the network-scoped alias provides a way for a container to be
discovered by an alternate name by any other container within the scope of a
particular network. Unlike the `link` alias, which is defined by the consumer
of a service, the network-scoped alias is defined by the container that is
offering the service to the network.

Continuing with the above example, create another container in `isolated_nw`
with a network alias.
```bash
$ docker run --net=isolated_nw -itd --name=container6 --net-alias app busybox
8ebe6767c1e0361f27433090060b33200aac054a68476c3be87ef4005eb1df17
```

```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container6
PING container6 (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- container6 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Now connect `container6` to the `local_alias` network with a different
network-scoped alias.

```
$ docker network connect --alias scoped-app local_alias container6
```

`container6` in this example is now aliased as `app` in network `isolated_nw`
and as `scoped-app` in network `local_alias`.

Try to reach these aliases from `container4` (which is connected to both these
networks) and `container5` (which is connected only to `isolated_nw`).
```bash
$ docker attach container4

/ # ping -w 4 scoped-app
PING scoped-app (172.26.0.5): 56 data bytes
64 bytes from 172.26.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.5: seq=3 ttl=64 time=0.097 ms

--- scoped-app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker attach container5

/ # ping -w 4 scoped-app
ping: bad address 'scoped-app'
```

As you can see, the alias is scoped to the network it is defined on, and hence
only those containers that are connected to that network can access the alias.

In addition to the above features, multiple containers can share the same
network-scoped alias within the same network. For example, launch `container7`
in `isolated_nw` with the same alias as `container6`:

```bash
$ docker run --net=isolated_nw -itd --name=container7 --net-alias app busybox
3138c678c123b8799f4c7cc6a0cecc595acbdfa8bf81f621834103cd4f504554
```

When multiple containers share the same alias, name resolution for that alias
happens against one of the containers (typically the first container that was
aliased). When the container that backs the alias goes down or is disconnected
from the network, the next container that backs the alias resolves the name
instead.

Ping the alias `app` from `container4`, then bring down `container6` to verify
that `container7` now resolves the `app` alias.
```bash
$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.6): 56 data bytes
64 bytes from 172.25.0.6: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.6: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.6: seq=3 ttl=64 time=0.097 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

$ docker stop container6

$ docker attach container4
/ # ping -w 4 app
PING app (172.25.0.7): 56 data bytes
64 bytes from 172.25.0.7: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.25.0.7: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.25.0.7: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.25.0.7: seq=3 ttl=64 time=0.101 ms

--- app ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.
```
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "NetworkID":"7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812",
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

Once a container is disconnected from a network, it cannot communicate with
other containers connected to that network. In this example, `container2` can
no longer talk to `container3` on the `isolated_nw` network.
```
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

However, `container2` still has full connectivity to the `bridge` network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can
remove the network.
```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```
$ docker network ls
NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)