
<!--[metadata]>
+++
aliases=[
"/engine/userguide/networking/dockernetworks/"
]
title = "Docker container networking"
description = "How do we connect docker containers within and across hosts?"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
identifier="networking_index"
parent = "smn_networking"
weight = -5
+++
<![end-metadata]-->

# Understand Docker container networks

This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the types of networks created by default
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.

## Default Networks

When you install Docker, it creates three networks automatically. You can list
these networks using the `docker network ls` command:

```
$ docker network ls

NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host
```

Historically, these three networks are part of Docker's implementation. When
you run a container, you can use the `--network` flag to specify which network you
want to run the container on. These three networks are still available to you.

The `bridge` network represents the `docker0` network present in all Docker
installations. Unless you specify otherwise with the `docker run
--network=<NETWORK>` option, the Docker daemon connects containers to this network
by default. You can see this bridge as part of a host's network stack by using
the `ifconfig` command on the host.

```
$ ifconfig

docker0   Link encap:Ethernet  HWaddr 02:42:47:bc:3a:eb
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:47ff:febc:3aeb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1100 (1.1 KB)  TX bytes:648 (648.0 B)
```

The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack, you see this:

```
$ docker attach nonenetcontainer

root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
root@0cb243cd1293:/# ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@0cb243cd1293:/#
```
>**Note**: You can detach from the container and leave it running with `CTRL-p CTRL-q`.
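
If you want to create such a container yourself, start it with the `--network=none`
flag. This is a minimal sketch; the image and the container name (chosen to match
the attach example above) are only illustrative:

```
$ docker run --network=none -itd --name=nonenetcontainer busybox
```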

The `host` network adds a container on the host's network stack. You'll find the
network configuration inside the container is identical to the host.
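
You can verify this quickly with a throwaway container; this is only a sketch, and
the `busybox` image is an arbitrary choice:

```
$ docker run --rm --network=host busybox ifconfig
```

The interfaces listed inside the container match the host's, because the container
shares the host's network stack.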

With the exception of the `bridge` network, you really don't need to
interact with these default networks. While you can list and inspect them, you
cannot remove them. They are required by your Docker installation. However, you
can add your own user-defined networks, and you can remove those when you no
longer need them. Before you learn more about creating your own networks, it is
worth looking at the default `bridge` network a bit.
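
For instance, the following sketch creates a user-defined network and removes it
again (the network name is arbitrary); trying the same `docker network rm` on
`bridge`, `none`, or `host` fails because those networks are required:

```
$ docker network create my-network
$ docker network rm my-network
```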


### The default bridge network in detail

The default `bridge` network is present on all Docker hosts. The `docker network inspect`
command returns information about a network:

```
$ docker network inspect bridge

[
   {
       "Name": "bridge",
       "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
       "Scope": "local",
       "Driver": "bridge",
       "IPAM": {
           "Driver": "default",
           "Config": [
               {
                   "Subnet": "172.17.0.1/16",
                   "Gateway": "172.17.0.1"
               }
           ]
       },
       "Containers": {},
       "Options": {
           "com.docker.network.bridge.default_bridge": "true",
           "com.docker.network.bridge.enable_icc": "true",
           "com.docker.network.bridge.enable_ip_masquerade": "true",
           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
           "com.docker.network.bridge.name": "docker0",
           "com.docker.network.driver.mtu": "9001"
       }
   }
]
```
The Engine automatically creates a `Subnet` and `Gateway` for the network.
The `docker run` command automatically adds new containers to this network.

```
$ docker run -itd --name=container1 busybox

3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c

$ docker run -itd --name=container2 busybox

94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```

Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their IDs show up in the "Containers" section of `docker network inspect`:

```
$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        }
    }
]
```

The `docker network inspect` command above shows all the connected containers and their network resources on a given network. Containers in this default network can communicate with each other using IP addresses. Docker does not support automatic service discovery on the default bridge network. If you want containers to communicate with each other by name on this default bridge network, you must connect them via the legacy `docker run --link` option.

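For example, a legacy link gives one container a name-based alias for another on the default bridge. This is only a sketch; the container name `container4` and the alias `c1` are illustrative, while `container1` is the container started above:

```
$ docker run -itd --name=container4 --link container1:c1 busybox
$ docker exec container4 ping -w3 c1
```

Linking in this legacy way works only on the default bridge network.
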
You can `attach` to a running container and investigate its configuration:

```
$ docker attach container1

root@0cb243cd1293:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KiB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```

Then use `ping` to send three ICMP requests and test the connectivity of the
containers on this `bridge` network.

```
root@0cb243cd1293:/# ping -w3 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.080 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.074 ms

--- 172.17.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.083/0.096 ms
```

Finally, use the `cat` command to check the `container1` network configuration:

```
root@0cb243cd1293:/# cat /etc/hosts

172.17.0.2	3386a527aa08
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
```

To detach from `container1` and leave it running, use `CTRL-p CTRL-q`. Then, attach to `container2` and repeat these three commands.

```
$ docker attach container2

root@0cb243cd1293:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1166 (1.1 KiB)  TX bytes:1026 (1.0 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@0cb243cd1293:/# ping -w3 172.17.0.2

PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.072 ms

--- 172.17.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.071/0.075 ms
/ # cat /etc/hosts
172.17.0.3	94447ca47985
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
```

The default `docker0` bridge network supports the use of port mapping and `docker run --link` to allow communication between containers in the `docker0` network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.

## User-defined networks

You can create your own user-defined networks that better isolate containers.
Docker provides some default **network drivers** for creating these networks.
You can create a new **bridge network**, **overlay network** or **MACVLAN
network**. You can also create a **network plugin** or **remote network**
written to your own specifications.

You can create multiple networks. You can add containers to more than one
network. Containers can communicate within a network, but not across networks.
A container attached to two networks can communicate with member containers in
either network. When a container is connected to multiple networks, its external
connectivity is provided via the first non-internal network, in lexical order.
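
For example, an existing container can be attached to an additional network with
`docker network connect`; the network and container names in this sketch are
placeholders:

```
$ docker network connect my-second-network my-container
```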

The next few sections describe each of Docker's built-in network drivers in
greater detail.

### A bridge network

The easiest user-defined network to create is a `bridge` network. This network
is similar to the historical, default `docker0` network. It has some added
features, and some old features are not available.

```
$ docker network create --driver bridge isolated_nw
1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b

$ docker network inspect isolated_nw

[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1/16"
                }
            ]
        },
        "Containers": {},
        "Options": {}
    }
]

$ docker network ls

NETWORK ID          NAME                DRIVER
9f904ee27bf5        none                null
cf03ee007fb4        host                host
7fca4eb8c647        bridge              bridge
c5ee82f76de3        isolated_nw         bridge

```

After you create the network, you can launch containers on it using the `docker run --network=<NETWORK>` option.

```
$ docker run --network=isolated_nw -itd --name=container3 busybox

8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c

$ docker network inspect isolated_nw
[
    {
        "Name": "isolated_nw",
        "Id": "1196a4c5af43a21ae38ef34515b6af19236a3fc48122cf585e3f3054d509679b",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "8c1a0a5be480921d669a073393ade66a3fc49933f08bcc5515b37b8144f6d47c": {
                "EndpointID": "93b2db4a9b9a997beb912d28bcfc117f7b0eb924ff91d48cfa251d473e6a9b08",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
```

The containers you launch into this network must reside on the same Docker host.
Each container in the network can immediately communicate with other containers
in the network. However, the network itself isolates the containers from external
networks.

![An isolated network](images/bridge_network.png)

Within a user-defined bridge network, linking is not supported. You can
expose and publish container ports on containers in this network. This is useful
if you want to make a portion of the `bridge` network available to an outside
network.

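For instance, a container on this network can still publish a port to the host;
the container name, image, and host port in this sketch are illustrative:

```
$ docker run --network=isolated_nw -itd --name=webapp -p 8080:80 nginx
```
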
![Bridge network](images/network_access.png)

A bridge network is useful in cases where you want to run a relatively small
network on a single host. You can, however, create significantly larger networks
by creating an `overlay` network.


### An overlay network with Docker Engine swarm mode

You can create an overlay network on a manager node running in swarm mode
without an external key-value store. The swarm makes the overlay network
available only to nodes in the swarm that require it for a service. When you
create a service that uses the overlay network, the manager node automatically
extends the overlay network to nodes that run service tasks.

To learn more about running Docker Engine in swarm mode, refer to the
[Swarm mode overview](../../swarm/index.md).

The example below shows how to create a network and use it for a service from a manager node in the swarm:

```bash
# Create an overlay network `my-multi-host-network`.
$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  my-multi-host-network

400g6bwzd68jizzdx5pgyoe95

# Create an nginx service and extend the my-multi-host-network to nodes where
# the service's tasks run.
$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx

716thylsndqma81j6kkkb5aus
```

Overlay networks for a swarm are not available to containers started with
`docker run` that don't run as part of a swarm mode service. For more
information refer to [Docker swarm mode overlay network security model](overlay-security-model.md).

See also [Attach services to an overlay network](../../swarm/networking.md).

### An overlay network with an external key-value store

If you are not using Docker Engine in swarm mode, the `overlay` network requires
a valid key-value store service. Supported key-value stores include Consul,
Etcd, and ZooKeeper (Distributed store). Before creating a network on this
version of the Engine, you must install and configure your chosen key-value
store service. The Docker hosts that you intend to network and the service must
be able to communicate.

>**Note:** Docker Engine running in swarm mode is not compatible with networking
> with an external key-value store.

![Key-value store](images/key_value.png)

Each host in the network must run a Docker Engine instance. The easiest way to
provision the hosts is with Docker Machine.

![Engine on each host](images/engine_on_net.png)

You should open the following ports between each of your hosts.

| Protocol | Port | Description           |
|----------|------|-----------------------|
| udp      | 4789 | Data plane (VXLAN)    |
| tcp/udp  | 7946 | Control plane         |

Your key-value store service may require additional ports.
Check your vendor's documentation and open any required ports.

Once you have several machines provisioned, you can use Docker Swarm to quickly
form them into a swarm, which includes a discovery service as well.

To create an overlay network, you configure options on the `daemon` on each
Docker Engine for use with the `overlay` network. There are three options to set:

<table>
    <thead>
    <tr>
        <th>Option</th>
        <th>Description</th>
    </tr>
    </thead>
    <tbody>
    <tr>
        <td><pre>--cluster-store=PROVIDER://URL</pre></td>
        <td>Describes the location of the KV service.</td>
    </tr>
    <tr>
        <td><pre>--cluster-advertise=HOST_IP|HOST_IFACE:PORT</pre></td>
        <td>The IP address or interface of the HOST used for clustering.</td>
    </tr>
    <tr>
        <td><pre>--cluster-store-opt=KEY-VALUE OPTIONS</pre></td>
        <td>Options such as TLS certificates or tuning discovery timers.</td>
    </tr>
    </tbody>
</table>

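As an illustration only, the daemon on each host might be started with flags along
these lines; the Consul address and the interface name are placeholders for your
environment:

```
$ dockerd --cluster-store=consul://<CONSUL_HOST>:8500 --cluster-advertise=eth0:2376
```
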
Create an `overlay` network on one of the machines in the swarm.

    $ docker network create --driver overlay my-multi-host-network

This results in a single network spanning multiple hosts. An `overlay` network
provides complete isolation for the containers.

![An overlay network](images/overlay_network.png)

Then, on each host, launch containers making sure to specify the network name.

    $ docker run -itd --network=my-multi-host-network busybox

Once connected, each container has access to all the containers in the network
regardless of which Docker host the container was launched on.

![Published port](images/overlay-network-final.png)

If you would like to try this for yourself, see the [Getting started for
overlay](get-started-overlay.md).

### Custom network plugin

If you like, you can write your own network driver plugin. A network
driver plugin makes use of Docker's plugin infrastructure. In this
infrastructure, a plugin is a process running on the same Docker host as the
Docker `daemon`.

Network plugins follow the same restrictions and installation rules as other
plugins. All plugins make use of the plugin API. They have a lifecycle that
encompasses installation, starting, stopping and activation.

Once you have created and installed a custom network driver, you use it like the
built-in network drivers. For example:

    $ docker network create --driver weave mynet

You can inspect it, add containers to it and remove them, and so forth. Of course,
different plugins may make use of different technologies or frameworks. Custom
networks can include features not present in Docker's default networks. For more
information on writing plugins, see [Extending Docker](../../extend/legacy_plugins.md) and
[Writing a network driver plugin](../../extend/plugins_network.md).

### Docker embedded DNS server

The Docker daemon runs an embedded DNS server to provide automatic service discovery
for containers connected to user-defined networks. Name resolution requests from
the containers are handled first by the embedded DNS server. If the embedded DNS
server is unable to resolve the request, it is forwarded to any external DNS
servers configured for the container. To facilitate this, when the container is
created, only the embedded DNS server reachable at `127.0.0.11` is listed
in the container's `resolv.conf` file. More information on the embedded DNS server
on user-defined networks can be found in
[embedded DNS server in user-defined networks](configure-dns.md).
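
As a quick, illustrative check (reusing `isolated_nw` and `container3` from the
earlier examples; the name `container5` is hypothetical), containers on the same
user-defined network can reach each other by name through the embedded DNS server:

```
$ docker run --network=isolated_nw -itd --name=container5 busybox
$ docker exec container5 ping -w3 container3
```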

## Links

Before the Docker network feature, you could use the Docker link feature to
allow containers to discover each other. With the introduction of Docker networks,
containers can discover each other by name automatically. You can still create
links, but they behave differently when used in the default `docker0` bridge network
compared to user-defined networks. For more information, refer to
[Legacy Links](default_network/dockerlinks.md) for the link feature in the default `bridge` network
and [linking containers in user-defined networks](work-with-networks.md#linking-containers-in-user-defined-networks) for link
functionality in user-defined networks.

## Related information

- [Work with network commands](work-with-networks.md)
- [Get started with multi-host networking](get-started-overlay.md)
- [Managing Data in Containers](../../tutorials/dockervolumes.md)
- [Docker Machine overview](https://docs.docker.com/machine)
- [Docker Swarm overview](https://docs.docker.com/swarm)
- [Investigate the LibNetwork project](https://github.com/docker/libnetwork)