
<!--[metadata]>
+++
title = "Work with network commands"
description = "How to work with docker networks"
keywords = ["commands, Usage, network, docker, cluster"]
[menu.main]
parent = "smn_networking"
weight=-4
+++
<![end-metadata]-->

# Work with network commands

This article provides examples of the network subcommands you can use to interact with Docker networks and the containers in them. The commands are available through the Docker Engine CLI. These commands are:

* `docker network create`
* `docker network connect`
* `docker network ls`
* `docker network rm`
* `docker network disconnect`
* `docker network inspect`

While not required, it is a good idea to read [Understanding Docker
network](dockernetworks.md) before trying the examples in this section. The
examples in this section rely on a `bridge` network so that you can try them
immediately. If you would prefer to experiment with an `overlay` network see
the [Getting started with multi-host networks](get-started-overlay.md) instead.

## Create networks

Docker Engine creates a `bridge` network automatically when you install Engine.
This network corresponds to the `docker0` bridge that Engine has traditionally
relied on. In addition to this network, you can create your own `bridge` or `overlay` network.

A `bridge` network resides on a single host running an instance of Docker Engine. An `overlay` network can span multiple hosts running their own engines. If you run `docker network create` and supply only a network name, it creates a `bridge` network for you.

```bash
$ docker network create simple-network
de792b8258895cf5dc3b43835e9d61a9803500b991654dacb1f4f0546b1c88f8
$ docker network inspect simple-network
[
    {
        "Name": "simple-network",
        "Id": "de792b8258895cf5dc3b43835e9d61a9803500b991654dacb1f4f0546b1c88f8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {},
        "Options": {}
    }
]
```

Unlike `bridge` networks, `overlay` networks require some pre-existing conditions
before you can create one. These conditions are:

* Access to a key-value store. Engine supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the swarm.

The `docker daemon` options that support the `overlay` network are:

* `--cluster-store`
* `--cluster-store-opt`
* `--cluster-advertise`

It is also a good idea, though not required, to install Docker Swarm
to manage the cluster. Swarm provides sophisticated discovery and server
management that can assist your implementation.
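
For example, with a Consul key-value store reachable on your cluster, each host's daemon might be started with flags along these lines. The store address and the advertised interface below are placeholders for your own environment:

```
$ docker daemon \
  --cluster-store=consul://192.168.50.10:8500 \
  --cluster-advertise=eth0:2376
```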

When you create a network, Engine creates a non-overlapping subnetwork for the
network by default. You can override this default and specify a subnetwork
directly using the `--subnet` option. On a `bridge` network you can only
create a single subnet. An `overlay` network supports multiple subnets.

In addition to the `--subnet` option, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` options.

```bash
$ docker network create -d overlay \
  --subnet=192.168.0.0/16 --subnet=192.170.0.0/16 \
  --gateway=192.168.0.100 --gateway=192.170.0.100 \
  --ip-range=192.168.1.0/24 \
  --aux-address a=192.168.1.5 --aux-address b=192.168.1.6 \
  --aux-address a=192.170.1.5 --aux-address b=192.170.1.6 \
  my-multihost-network
```

Be sure that your subnetworks do not overlap. If they do, network creation fails and Engine returns an error.
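
As a minimal illustration on a single Engine, a second network whose subnet falls inside an existing one is rejected. The network names here are only examples:

```
$ docker network create --subnet=192.168.0.0/16 net-a
$ docker network create --subnet=192.168.10.0/24 net-b   # fails: overlaps net-a's subnet
```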

## Connect containers

You can connect containers dynamically to one or more networks. These networks
can be backed by the same or different network drivers. Once connected, the
containers can communicate using another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host
connectivity, containers connected to the same multi-host network but launched
from different hosts can also communicate in this way.

Create two containers for this example:

```bash
$ docker run -itd --name=container1 busybox
18c062ef45ac0c026ee48a83afa39d25635ee5f02b58de4abc8f467bcaa28731

$ docker run -itd --name=container2 busybox
498eaaaf328e1018042c04b2de04036fc04719a6e39a097a4f4866043a2c2152
```

Then create an isolated `bridge` network to test with.

```bash
$ docker network create -d bridge --subnet 172.25.0.0/16 isolated_nw
06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8
```

Connect `container2` to the network and then `inspect` the network to verify the connection:

```
$ docker network connect isolated_nw container2
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16"
                }
            ]
        },
        "Containers": {
            "90e1f3ec71caf82ae776a827e0712a68a110a3f175954e5bd4222fd142ac9428": {
                "Name": "container2",
                "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

You can see that the Engine automatically assigns an IP address to `container2`.
Because you specified a `--subnet` when creating the network, Engine picked
an address from that same subnet. Now, start a third container and connect it to
the network on launch using the `docker run` command's `--net` option:

```bash
$ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551
```

As you can see, you were able to specify the IP address for your container.
As long as the network to which the container is connecting was created with
a user-specified subnet, you can select the IPv4 and/or IPv6 address(es)
for your container when executing the `docker run` and `docker network connect` commands.
The selected IP address is part of the container's networking configuration and is
preserved across container reloads. The feature is only available on user-defined networks,
because they guarantee their subnet configuration does not change across daemon reloads.
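
The same applies when connecting an already running container. For example, because `isolated_nw` was created with a user-specified subnet, you could attach `container1` to it with an address of your choosing; the address below is simply an unused one from the network's subnet:

```
$ docker network connect --ip 172.25.3.10 isolated_nw container1
```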

Now, inspect the network resources used by `container3`.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container3
{"isolated_nw":{"IPAMConfig":{"IPv4Address":"172.25.3.3"},"EndpointID":"dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103","Gateway":"172.25.0.1","IPAddress":"172.25.3.3","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:19:03:03"}}
```

Repeat this command for `container2`. If you have Python installed, you can pretty-print the output.

```bash
$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "EndpointID": "0099f9efb5a3727f6a554f176b1e96fca34cae773da68b3b6a26d046c12cb365",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    },
    "isolated_nw": {
        "EndpointID": "11cedac1810e864d6b1589d92da12af66203879ab89f4ccd8c8fdaa9b1c48b1d",
        "Gateway": "172.25.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAMConfig": null,
        "IPAddress": "172.25.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:19:00:02"
    }
}
```

You should find `container2` belongs to two networks: the `bridge` network,
which it joined by default when you launched it, and the `isolated_nw` network, to which you
later connected it.

![](images/working.png)

In the case of `container3`, you connected it through `docker run` to
`isolated_nw`, so that container is not connected to `bridge`.

Use the `docker attach` command to connect to the running `container2` and
examine its networking stack:

```bash
$ docker attach container2
```

If you look at the container's network stack you should see two Ethernet interfaces, one for the default `bridge` network and one for the `isolated_nw` network.

```bash
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:19:00:02
          inet addr:172.25.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe19:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

On the user-defined `isolated_nw` network, the Docker embedded DNS server enables name resolution for the other containers in the network. Inside of `container2` it is possible to ping `container3` by name.

```bash
/ # ping -w 4 container3
PING container3 (172.25.3.3): 56 data bytes
64 bytes from 172.25.3.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.3.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.3.3: seq=3 ttl=64 time=0.097 ms

--- container3 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

This isn't the case for the default `bridge` network. Both `container2` and `container1` are connected to the default `bridge` network. Docker does not support automatic service discovery on this network. For this reason, pinging `container1` by name fails, as you would expect based on the `/etc/hosts` file:

```bash
/ # ping -w 4 container1
ping: bad address 'container1'
```

A ping using the `container1` IP address does succeed though:

```bash
/ # ping -w 4 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms
64 bytes from 172.17.0.2: seq=3 ttl=64 time=0.101 ms

--- 172.17.0.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.085/0.101 ms
```

If you wanted, you could connect `container1` to `container2` with the `docker run
--link` command, and that would enable the two containers to interact by name
as well as by IP address.

Detach from `container2` and leave it running using `CTRL-p CTRL-q`.

In this example, `container2` is attached to both networks and so can talk to
`container1` and `container3`. But `container3` and `container1` are not in the
same network and cannot communicate. Test this now by attaching to
`container3` and attempting to ping `container1` by IP address.

```bash
$ docker attach container3
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 packets received, 100% packet loss
```

You can connect both running and non-running containers to a network. However,
`docker network inspect` only displays information on running containers.
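
For example, you can create a container without starting it and connect it to the network before its first start; `container6` below is just an illustrative name:

```
$ docker create -it --name=container6 busybox
$ docker network connect isolated_nw container6
$ docker start container6
```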

### Linking containers in user-defined networks

In the above example, `container2` was able to resolve `container3`'s name automatically
in the user-defined network `isolated_nw`, but name resolution does not happen
automatically in the default `bridge` network. This is expected in order to maintain
backward compatibility with [legacy links](default_network/dockerlinks.md).

Legacy links provide four major functionalities to the default `bridge` network:

* name resolution
* name alias for the linked container using `--link=CONTAINER-NAME:ALIAS`
* secured container connectivity (in isolation via `--icc=false`)
* environment variable injection

Comparing these four functionalities with non-default user-defined networks such as
`isolated_nw` in this example, without any additional configuration, `docker network` provides:

* automatic name resolution using DNS
* an automatically secured, isolated environment for the containers in a network
* the ability to dynamically attach to and detach from multiple networks
* support for the `--link` option to provide a name alias for the linked container

Continuing with the above example, create another container, `container4`, in `isolated_nw`
with `--link` to provide additional name resolution using an alias for another container in
the same network.

```bash
$ docker run --net=isolated_nw -itd --name=container4 --link container5:c5 busybox
01b5df970834b77a9eadbaff39051f237957bd35c4c56f11193e0594cfd5117c
```

With the help of `--link`, `container4` will be able to reach `container5` using the
aliased name `c5` as well.

Note that while creating `container4`, we linked to a container named `container5`
which is not created yet. That is one of the differences in behavior between
legacy links in the default `bridge` network and the new link functionality in user-defined
networks. Legacy links are static in nature: they hard-bind the container to the
alias and do not tolerate linked container restarts. The new link functionality
in user-defined networks is dynamic in nature and supports linked container restarts,
including tolerating IP address changes on the linked container.

Now let us launch another container named `container5`, linking `container4` with the alias `c4`.

```bash
$ docker run --net=isolated_nw -itd --name=container5 --link container4:c4 busybox
72eccf2208336f31e9e33ba327734125af00d1e1d2657878e2ee8154fbb23c7a
```

As expected, `container4` will be able to reach `container5` by both its container name and
its alias `c5`, and `container5` will be able to reach `container4` by its container name and
its alias `c4`.

```bash
$ docker attach container4
/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 container5
PING container5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- container5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

```bash
$ docker attach container5
/ # ping -w 4 c4
PING c4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- c4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms

/ # ping -w 4 container4
PING container4 (172.25.0.4): 56 data bytes
64 bytes from 172.25.0.4: seq=0 ttl=64 time=0.065 ms
64 bytes from 172.25.0.4: seq=1 ttl=64 time=0.070 ms
64 bytes from 172.25.0.4: seq=2 ttl=64 time=0.067 ms
64 bytes from 172.25.0.4: seq=3 ttl=64 time=0.082 ms

--- container4 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.065/0.070/0.082 ms
```

Similar to the legacy link functionality, the new link alias is localized to a container,
and the aliased name has no meaning outside of the container using the `--link`.

Also, it is important to note that if a container belongs to multiple networks, the
linked alias is scoped within a given network. Hence the container can be linked to
different aliases in different networks.

Extending the example, let us create another network named `local_alias`:

```bash
$ docker network create -d bridge --subnet 172.26.0.0/24 local_alias
76b7dc932e037589e6553f59f76008e5b76fa069638cd39776b890607f567aaa
```

Now let us connect `container4` and `container5` to the new network `local_alias`:

```
$ docker network connect --link container5:foo local_alias container4
$ docker network connect --link container4:bar local_alias container5
```

```bash
$ docker attach container4

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms

/ # ping -w 4 c5
PING c5 (172.25.0.5): 56 data bytes
64 bytes from 172.25.0.5: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.25.0.5: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.25.0.5: seq=3 ttl=64 time=0.097 ms

--- c5 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

Note that the ping succeeds for both aliases, but on different networks.
Let us conclude this section by disconnecting `container5` from `isolated_nw`
and observing the results.

```
$ docker network disconnect isolated_nw container5

$ docker attach container4

/ # ping -w 4 c5
ping: bad address 'c5'

/ # ping -w 4 foo
PING foo (172.26.0.3): 56 data bytes
64 bytes from 172.26.0.3: seq=0 ttl=64 time=0.070 ms
64 bytes from 172.26.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=2 ttl=64 time=0.080 ms
64 bytes from 172.26.0.3: seq=3 ttl=64 time=0.097 ms

--- foo ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.070/0.081/0.097 ms
```

In conclusion, the new link functionality in user-defined networks provides all the
benefits of legacy links while avoiding most of the well-known issues with legacy links.

One notable missing functionality compared to legacy links is the injection of
environment variables. Though very useful, environment variable injection is static
in nature and must happen when the container is started. One cannot inject
environment variables into a running container without significant effort, and hence
it is not compatible with `docker network`, which provides a dynamic way to connect
containers to and disconnect them from a network.
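
To see what a legacy link injects, you can start a throwaway container linked to `container1` and print its environment. With no exposed ports on `container1` you would mainly see the alias's `C1_NAME` variable; containers that expose ports also get `C1_PORT_*` entries:

```
$ docker run --rm --link container1:c1 busybox env
```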

## Disconnecting containers

You can disconnect a container from a network using the `docker network
disconnect` command.

```
$ docker network disconnect isolated_nw container2

$ docker inspect --format='{{json .NetworkSettings.Networks}}' container2 | python -m json.tool
{
    "bridge": {
        "EndpointID": "9e4575f7f61c0f9d69317b7a4b92eefc133347836dd83ef65deffa16b9985dc0",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.3",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:03"
    }
}

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16"
                }
            ]
        },
        "Containers": {
            "467a7863c3f0277ef8e661b38427737f28099b61fa55622d6c30fb288d88c551": {
                "Name": "container3",
                "EndpointID": "dffc7ec2915af58cc827d995e6ebdc897342be0420123277103c40ae35579103",
                "MacAddress": "02:42:ac:19:03:03",
                "IPv4Address": "172.25.3.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

Once a container is disconnected from a network, it cannot communicate with
other containers connected to that network. In this example, `container2` can no longer talk to `container3` on the `isolated_nw` network.

```
$ docker attach container2

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # ping container3
PING container3 (172.25.3.3): 56 data bytes
^C
--- container3 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

`container2` still has full connectivity to the default `bridge` network:

```bash
/ # ping container1
PING container1 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.119 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.174 ms
^C
--- container1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.119/0.146/0.174 ms
/ #
```

## Remove a network

When all the containers in a network are stopped or disconnected, you can remove the network.

```bash
$ docker network disconnect isolated_nw container3
```

```bash
$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "06a62f1c73c4e3107c0f555b7a5f163309827bfbbf999840166065a8f35455a8",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.25.0.0/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network rm isolated_nw
```

List all your networks to verify that `isolated_nw` was removed:

```
$ docker network ls
NETWORK ID          NAME                DRIVER
72314fa53006        host                host
f7ab26d71dbd        bridge              bridge
0f32e83e61ac        none                null
```

## Related information

* [network create](../../reference/commandline/network_create.md)
* [network inspect](../../reference/commandline/network_inspect.md)
* [network connect](../../reference/commandline/network_connect.md)
* [network disconnect](../../reference/commandline/network_disconnect.md)
* [network ls](../../reference/commandline/network_ls.md)
* [network rm](../../reference/commandline/network_rm.md)