<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->

# Get started with multi-host networking

This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through the
`overlay` network driver. Unlike `bridge` networks, overlay networks require
some pre-existing conditions before you can create one:

* [Docker Engine running in swarm mode](#overlay-networking-and-swarm-mode)

OR

* [A cluster of hosts using a key-value store](#overlay-networking-with-an-external-key-value-store)

## Overlay networking and swarm mode

Using Docker Engine running in [swarm mode](../../swarm/swarm-mode.md), you can create an overlay network on a manager node.

The swarm makes the overlay network available only to nodes in the swarm that
require it for a service. When you create a service that uses an overlay
network, the manager node automatically extends the overlay network to nodes
that run service tasks.

To learn more about running Docker Engine in swarm mode, refer to the
[Swarm mode overview](../../swarm/index.md).

The example below shows how to create a network and use it for a service from a manager node in the swarm:

```bash
# Create an overlay network `my-multi-host-network`.
$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  my-multi-host-network

400g6bwzd68jizzdx5pgyoe95

# Create an nginx service and extend the my-multi-host-network to nodes where
# the service's tasks run.
$ docker service create --replicas 2 --network my-multi-host-network --name my-web nginx

716thylsndqma81j6kkkb5aus
```

Overlay networks for a swarm are not available to unmanaged containers. For more information refer to [Docker swarm mode overlay network security model](overlay-security-model.md).

See also [Attach services to an overlay network](../../swarm/networking.md).
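If you want to confirm what the swarm created, two standard commands are enough. This is an optional check rather than part of the procedure, and the exact output columns vary with your Engine version:

```bash
# List services and confirm that 2/2 replicas of `my-web` are running.
$ docker service ls

# List networks on the manager; `my-multi-host-network` appears with the
# overlay driver. On a worker node it only appears once a task for the
# service has been scheduled there.
$ docker network ls
```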
## Overlay networking with an external key-value store

To use Docker Engine with an external key-value store, you need the
following:

* Access to the key-value store. Docker supports Consul, Etcd, and ZooKeeper
(Distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.
* Hosts within the cluster must have unique hostnames because the key-value
store uses the hostnames to identify cluster members.

Though Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking with a key-value store, this example uses them to
illustrate how they are integrated. You'll use Machine to create both the
key-value store server and the host cluster. This example creates a swarm
cluster.

> **Note:** Docker Engine running in swarm mode is not compatible with networking
> with an external key-value store.

### Prerequisites

Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed on a Mac or Windows with Docker Toolbox, you
have all of these installed already.

If you have not already done so, make sure you upgrade Docker Engine and Docker
Machine to the latest versions.


### Set up a key-value store

An overlay network requires a key-value store. The key-value store holds
information about the network state which includes discovery, networks,
endpoints, IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper
key-value stores. This example uses Consul.

1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.

2. Provision a VirtualBox machine called `mh-keystore`.

        $ docker-machine create -d virtualbox mh-keystore

    When you provision a new machine, the process adds Docker Engine to the
    host. This means rather than installing Consul manually, you can create an
    instance using the [consul image from Docker
    Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the next step.

3. Set your local environment to the `mh-keystore` machine.

        $ eval "$(docker-machine env mh-keystore)"

4. Start a `progrium/consul` container running on the `mh-keystore` machine.

        $ docker run -d \
            -p "8500:8500" \
            -h "consul" \
            progrium/consul -server -bootstrap

    This command starts a container from the `progrium/consul` image on the
    `mh-keystore` machine. The server is named `consul` and is listening on
    port `8500`.

5. Run the `docker ps` command to see the `consul` container.

        $ docker ps

        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
        4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini

    Keep your terminal open and move on to the next step. If you want to confirm
    that the key-value store is answering requests first, see the optional check
    below.
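The following is a minimal sketch of such a check, assuming the container published Consul's standard HTTP API on port `8500` as in step 4. It is not required for the rest of the walkthrough:

```bash
# Ask the Consul agent for its current leader. A non-empty address in the
# response means the key-value store is up and has elected a leader.
$ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"
```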
### Create a Swarm cluster

In this step, you use `docker-machine` to provision the hosts for your network.
At this point, you won't actually create the network. You'll create several
machines in VirtualBox. One of the machines will act as the swarm master;
you'll create that first. As you create each host, you'll pass the Engine on
that machine options that are needed by the `overlay` network driver.

1. Create a swarm master.

        $ docker-machine create \
            -d virtualbox \
            --swarm --swarm-master \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo0

    At creation time, you supply the Engine `daemon` with the `--cluster-store` option. This option tells the Engine the location of the key-value store for the `overlay` network. The bash expansion `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul server you created in the previous section. The `--cluster-advertise` option advertises the machine on the network.

2. Create another host and add it to the swarm cluster.

        $ docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo1

3. List your machines to confirm they are all up and running.

        $ docker-machine ls

        NAME          ACTIVE   DRIVER       STATE     URL                         SWARM
        default       -        virtualbox   Running   tcp://192.168.99.100:2376
        mh-keystore   *        virtualbox   Running   tcp://192.168.99.103:2376
        mhs-demo0     -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
        mhs-demo1     -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0

At this point you have a set of hosts running on your network. You are ready to create a multi-host network for containers using these hosts.

Leave your terminal open and go on to the next step.

### Create the overlay network

To create an overlay network:

1. Set your Docker environment to the swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

    Using the `--swarm` flag with `docker-machine` restricts the `docker` commands to swarm information alone.

2. Use the `docker info` command to view the swarm.

        $ docker info

        Containers: 3
        Images: 2
        Role: primary
        Strategy: spread
        Filters: affinity, health, constraint, port, dependency
        Nodes: 2
         mhs-demo0: 192.168.99.104:2376
          └ Containers: 2
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
         mhs-demo1: 192.168.99.105:2376
          └ Containers: 1
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
        CPUs: 2
        Total Memory: 2.043 GiB
        Name: 30438ece0915

    From this information, you can see that you are running three containers and two images on the Master.

3. Create your `overlay` network.

        $ docker network create --driver overlay --subnet=10.0.9.0/24 my-net

    You only need to create the network on a single host in the cluster. In this case, you used the swarm master but you could easily have run it on any host in the cluster.

    > **Note**: It is highly recommended to use the `--subnet` option when creating
    > a network. If the `--subnet` is not specified, the docker daemon automatically
    > chooses and assigns a subnet for the network and it could overlap with another subnet
    > in your infrastructure that is not managed by docker. Such overlaps can cause
    > connectivity issues or failures when containers are connected to that network.

4. Check that the network is running:

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        412c2496d0eb        mhs-demo1/host      host
        dd51763e6dd2        mhs-demo0/bridge    bridge
        6b07d0be843f        my-net              overlay
        b4234109bd9b        mhs-demo0/none      null
        1aeead6dd890        mhs-demo0/host      host
        d0bb78cbe7bd        mhs-demo1/bridge    bridge
        1c0eb8f69ebb        mhs-demo1/none      null

    As you are in the swarm master environment, you see all the networks on all
    the swarm agents: the default networks on each engine and the single overlay
    network. Notice that each `NETWORK ID` is unique.

5. Switch to each swarm agent in turn and list the networks.

        $ eval $(docker-machine env mhs-demo0)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host

        $ eval $(docker-machine env mhs-demo1)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        6b07d0be843f        my-net              overlay

Both agents report they have the `my-net` network with the `6b07d0be843f` ID.
You now have a multi-host container network running!
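Optionally, you can inspect the new network from either host to see the details the key-value store holds for it. This sketch uses the standard `docker network inspect` command; the exact JSON fields vary with your Engine version:

```bash
# Inspect the overlay network. Expect "Driver": "overlay" and the
# 10.0.9.0/24 subnet in the IPAM section; once containers attach, they are
# listed under "Containers" with their overlay IP addresses.
$ docker network inspect my-net
```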
### Run an application on your network

Once your network is created, you can start a container on any of the hosts and it automatically is part of the network.

1. Point your environment to the swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

2. Start an Nginx web server on the `mhs-demo0` instance.

        $ docker run -itd --name=web --network=my-net --env="constraint:node==mhs-demo0" nginx

3. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page. The BusyBox container reaches the web server by its container name, `web`; a quick name-resolution check is sketched after this list.

        $ docker run -it --rm --network=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web

        Unable to find image 'busybox:latest' locally
        latest: Pulling from library/busybox
        ab2b8a86ca6c: Pull complete
        2c5ac3f849df: Pull complete
        Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
        Status: Downloaded newer image for busybox:latest
        Connecting to web (10.0.0.2:80)
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        -                    100% |*******************************|   612   0:00:00 ETA
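Containers on a user-defined network resolve each other's names through Docker's embedded DNS server, which is what lets BusyBox find `web` above. If the page does not come back, an optional way to isolate the problem is to test name resolution on its own, for example:

```bash
# Resolve the `web` container's name from another container on `my-net`.
# The reply should be an address on the overlay network's subnet (10.0.9.0/24).
$ docker run -it --rm --network=my-net busybox nslookup web
```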
### Check external connectivity

As you've seen, Docker's built-in overlay network driver provides out-of-the-box
connectivity between the containers on multiple hosts within the same network.
Additionally, containers connected to the multi-host network are automatically
connected to the `docker_gwbridge` network. This network allows the containers
to have external connectivity outside of their cluster.

1. Change your environment to the swarm agent.

        $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        e1dbd5dff8be        docker_gwbridge     bridge

3. Repeat steps 1 and 2 on the swarm master.

        $ eval $(docker-machine env mhs-demo0)

        $ docker network ls

        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        97102a22e8d2        docker_gwbridge     bridge

4. Check the Nginx container's network interfaces.

        $ docker exec web ip addr

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
            link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
            inet 10.0.9.3/24 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fe80::42:aff:fe00:903/64 scope link
               valid_lft forever preferred_lft forever
        24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
            inet 172.18.0.2/16 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::42:acff:fe12:2/64 scope link
               valid_lft forever preferred_lft forever

    The `eth0` interface represents the container interface that is connected to
    the `my-net` overlay network, while the `eth1` interface represents the
    container interface that is connected to the `docker_gwbridge` network.

### Extra Credit with Docker Compose

Please refer to the networking feature introduced with the
[Compose V2 format](https://docs.docker.com/compose/networking/) and execute the
multi-host networking scenario in the swarm cluster used above.

## Related information

* [Understand Docker container networks](index.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)