<!--[metadata]>
+++
title = "Get started with multi-host networking"
description = "Use overlay for multi-host networking"
keywords = ["Examples, Usage, network, docker, documentation, user guide, multihost, cluster"]
[menu.main]
parent = "smn_networking"
weight=-3
+++
<![end-metadata]-->

# Get started with multi-host networking

This article uses an example to explain the basics of creating a multi-host
network. Docker Engine supports multi-host networking out-of-the-box through
the `overlay` network driver. Unlike `bridge` networks, overlay networks
require some pre-existing conditions before you can create one. These
conditions are:

* Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper (distributed store) key-value stores.
* A cluster of hosts with connectivity to the key-value store.
* A properly configured Engine `daemon` on each host in the cluster.

Though Docker Machine and Docker Swarm are not mandatory to experience Docker
multi-host networking, this example uses them to illustrate how they are
integrated. You'll use Machine to create both the key-value store server and
the host cluster; the hosts you create form a Swarm cluster.

## Prerequisites

Before you begin, make sure you have a system on your network with the latest
version of Docker Engine and Docker Machine installed. The example also relies
on VirtualBox. If you installed Docker Toolbox on Mac or Windows, you have all
of these installed already.

If you have not already done so, make sure you upgrade Docker Engine and Docker
Machine to the latest versions.


## Step 1: Set up a key-value store

An overlay network requires a key-value store. The key-value store holds
information about the network state, including discovery, networks, endpoints,
IP addresses, and more. Docker supports Consul, Etcd, and ZooKeeper key-value
stores. This example uses Consul.

1. Log into a system prepared with the prerequisite Docker Engine, Docker Machine, and VirtualBox software.

2. Provision a VirtualBox machine called `mh-keystore`.

        $ docker-machine create -d virtualbox mh-keystore

    When you provision a new machine, the process adds Docker Engine to the
    host. This means that rather than installing Consul manually, you can
    create an instance using the [consul image from Docker
    Hub](https://hub.docker.com/r/progrium/consul/). You'll do this in the next step.

3. Start a `progrium/consul` container running on the `mh-keystore` machine.

        $ docker $(docker-machine config mh-keystore) run -d \
            -p "8500:8500" \
            -h "consul" \
            progrium/consul -server -bootstrap

    The bash command substitution `$(docker-machine config mh-keystore)` passes
    the connection configuration for the `mh-keystore` machine to the
    `docker run` command. The container runs from the `progrium/consul` image
    with the hostname `consul` and publishes the Consul service on port `8500`.

4. Set your local environment to the `mh-keystore` machine.

        $ eval "$(docker-machine env mh-keystore)"

5. Run the `docker ps` command to see the `consul` container.

        $ docker ps
        CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                                            NAMES
        4d51392253b3        progrium/consul     "/bin/start -server -"   25 minutes ago      Up 25 minutes       53/tcp, 53/udp, 8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 8400/tcp, 8301-8302/udp   admiring_panini

Keep your terminal open and move on to the next step.
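Before moving on, you can optionally confirm that the key-value store is
reachable from your local system. This quick sanity check is not part of the
original walkthrough; it assumes `curl` is installed locally and uses Consul's
standard `/v1/status/leader` HTTP endpoint.

    # Ask the Consul server for its current leader; a non-empty address
    # such as "172.17.0.2:8300" means the store is up and answering queries.
    $ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"

If the command returns an empty string or fails to connect, make sure the
`progrium/consul` container from the previous step is still running.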
## Step 2: Create a Swarm cluster

In this step, you use `docker-machine` to provision the hosts for your network.
At this point, you won't actually create the network. You'll create several
machines in VirtualBox. One of the machines will act as the Swarm master;
you'll create that first. As you create each host, you'll pass the Engine on
that machine the options that the `overlay` network driver requires.

1. Create a Swarm master.

        $ docker-machine create \
            -d virtualbox \
            --swarm --swarm-master \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo0

    At creation time, you supply the Engine `daemon` with the `--cluster-store`
    option. This option tells the Engine the location of the key-value store
    for the `overlay` network. The bash command substitution
    `$(docker-machine ip mh-keystore)` resolves to the IP address of the Consul
    server you created in Step 1. The `--cluster-advertise` option advertises
    this machine's Engine to the other members of the cluster.

2. Create another host and add it to the Swarm cluster.

        $ docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            mhs-demo1

3. List your machines to confirm they are all up and running.

        $ docker-machine ls
        NAME         ACTIVE   DRIVER       STATE     URL                         SWARM
        default      -        virtualbox   Running   tcp://192.168.99.100:2376
        mh-keystore  *        virtualbox   Running   tcp://192.168.99.103:2376
        mhs-demo0    -        virtualbox   Running   tcp://192.168.99.104:2376   mhs-demo0 (master)
        mhs-demo1    -        virtualbox   Running   tcp://192.168.99.105:2376   mhs-demo0

At this point you have a set of hosts running on your network. You are ready to
create a multi-host network for containers using these hosts.

Leave your terminal open and go on to the next step.
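If you want to double-check that each Engine actually picked up the discovery
options, you can query the Engines directly. The sketch below is an optional
check, not part of the original walkthrough; it assumes that this Engine
version reports its cluster configuration in the `docker info` output (the
exact label text may vary between releases).

    # Inspect each Engine daemon's view of its cluster configuration.
    # The `docker-machine config` substitution targets one specific host.
    $ docker $(docker-machine config mhs-demo0) info | grep -i cluster
    $ docker $(docker-machine config mhs-demo1) info | grep -i cluster

Each command should mention the `consul://` store URL and the `eth1:2376`
advertise address you passed via `--engine-opt`.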
## Step 3: Create the overlay network

To create an overlay network:

1. Set your docker environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

    Using the `--swarm` flag with `docker-machine env` directs your `docker`
    commands at the Swarm cluster rather than at an individual Engine.

2. Use the `docker info` command to view the Swarm.

        $ docker info
        Containers: 3
        Images: 2
        Role: primary
        Strategy: spread
        Filters: affinity, health, constraint, port, dependency
        Nodes: 2
         mhs-demo0: 192.168.99.104:2376
          └ Containers: 2
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
         mhs-demo1: 192.168.99.105:2376
          └ Containers: 1
          └ Reserved CPUs: 0 / 1
          └ Reserved Memory: 0 B / 1.021 GiB
          └ Labels: executiondriver=native-0.2, kernelversion=4.1.10-boot2docker, operatingsystem=Boot2Docker 1.9.0-rc1 (TCL 6.4); master : 4187d2c - Wed Oct 14 14:00:28 UTC 2015, provider=virtualbox, storagedriver=aufs
        CPUs: 2
        Total Memory: 2.043 GiB
        Name: 30438ece0915

    From this information, you can see that you are running three containers
    and two images on the master.

3. Create your `overlay` network.

        $ docker network create --driver overlay my-net

    You only need to create the network on a single host in the cluster. In
    this case, you used the Swarm master, but you could easily have run the
    command on any host in the cluster.

4. Check that the network is running:

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        412c2496d0eb        mhs-demo1/host      host
        dd51763e6dd2        mhs-demo0/bridge    bridge
        6b07d0be843f        my-net              overlay
        b4234109bd9b        mhs-demo0/none      null
        1aeead6dd890        mhs-demo0/host      host
        d0bb78cbe7bd        mhs-demo1/bridge    bridge
        1c0eb8f69ebb        mhs-demo1/none      null

    As you are in the Swarm master environment, you see all the networks on
    all the Swarm agents: the default networks on each Engine and the single
    overlay network. Notice that each `NETWORK ID` is unique.

5. Switch to each Swarm agent in turn and list the networks.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        $ eval $(docker-machine env mhs-demo1)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        6b07d0be843f        my-net              overlay

    Both agents report they have the `my-net` network with the `6b07d0be843f`
    ID. You now have a multi-host container network running!
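To see which subnet the overlay network was given and, later, which containers
are attached to it, you can inspect the network from any host in the cluster.
This is a standard `docker network inspect` call; Docker picks the subnet
automatically unless you pass `--subnet` when creating the network, so the
value you see will vary.

    # Show the network's driver, IPAM configuration, and attached containers.
    $ docker network inspect my-net

The output is a JSON document; look for the `IPAM` section to find the subnet
that container addresses on `my-net` will be drawn from.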
## Step 4: Run an application on your network

Once your network is created, you can start a container on any of the hosts
and it automatically is part of the network.

1. Point your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

2. Start an Nginx web server on the `mhs-demo0` instance.

        $ docker run -itd --name=web --net=my-net --env="constraint:node==mhs-demo0" nginx

3. Run a BusyBox instance on the `mhs-demo1` instance and get the contents of the Nginx server's home page.

        $ docker run -it --rm --net=my-net --env="constraint:node==mhs-demo1" busybox wget -O- http://web
        Unable to find image 'busybox:latest' locally
        latest: Pulling from library/busybox
        ab2b8a86ca6c: Pull complete
        2c5ac3f849df: Pull complete
        Digest: sha256:5551dbdfc48d66734d0f01cafee0952cb6e8eeecd1e2492240bf2fd9640c2279
        Status: Downloaded newer image for busybox:latest
        Connecting to web (10.0.0.2:80)
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        -                    100% |*******************************|   612   0:00:00 ETA

## Step 5: Check external connectivity

As you've seen, Docker's built-in overlay network driver provides out-of-the-box
connectivity between the containers on multiple hosts within the same network.
Additionally, containers connected to the multi-host network are automatically
connected to the `docker_gwbridge` network. This network allows the containers
to have external connectivity outside of their cluster.

1. Change your environment to the Swarm agent.

        $ eval $(docker-machine env mhs-demo1)

2. View the `docker_gwbridge` network by listing the networks.

        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        dd51763e6dd2        bridge              bridge
        b4234109bd9b        none                null
        1aeead6dd890        host                host
        e1dbd5dff8be        docker_gwbridge     bridge

3. Repeat steps 1 and 2 on the Swarm master.

        $ eval $(docker-machine env mhs-demo0)
        $ docker network ls
        NETWORK ID          NAME                DRIVER
        6b07d0be843f        my-net              overlay
        d0bb78cbe7bd        bridge              bridge
        1c0eb8f69ebb        none                null
        412c2496d0eb        host                host
        97102a22e8d2        docker_gwbridge     bridge

4. Check the Nginx container's network interfaces.

        $ docker exec web ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
               valid_lft forever preferred_lft forever
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever
        22: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
            link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
            inet 10.0.9.3/24 scope global eth0
               valid_lft forever preferred_lft forever
            inet6 fe80::42:aff:fe00:903/64 scope link
               valid_lft forever preferred_lft forever
        24: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
            inet 172.18.0.2/16 scope global eth1
               valid_lft forever preferred_lft forever
            inet6 fe80::42:acff:fe12:2/64 scope link
               valid_lft forever preferred_lft forever

    The `eth0` interface is the container interface connected to the `my-net`
    overlay network, while the `eth1` interface is the container interface
    connected to the `docker_gwbridge` network.
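You can also confirm which path the container's outbound traffic takes. The
check below is an optional addition to the walkthrough; it assumes the `ip`
utility is available in the container, which the `ip addr` step above already
demonstrated.

    # Show the container's routing table. The default route should point at
    # the docker_gwbridge gateway on the 172.18.0.0/16 subnet, which is how
    # traffic leaves the cluster, while the 10.0.9.0/24 route stays on eth0
    # for container-to-container traffic over the overlay.
    $ docker exec web ip route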
## Step 6: Extra credit with Docker Compose

You can try starting a second network on your existing Swarm cluster using Docker Compose.

1. If you haven't already, install Docker Compose.

2. Change your environment to the Swarm master.

        $ eval $(docker-machine env --swarm mhs-demo0)

3. Create a `docker-compose.yml` file.

4. Add the following content to the file.

        web:
          image: bfirsh/compose-mongodb-demo
          environment:
            - "MONGO_HOST=counter_mongo_1"
            - "constraint:node==mhs-demo0"
          ports:
            - "80:5000"
        mongo:
          image: mongo

5. Save and close the file.

6. Start the application with Compose.

        $ docker-compose --x-networking --project-name=counter up -d

7. Get the Swarm master's IP address.

        $ docker-machine ip mhs-demo0

8. Enter the IP address into your web browser.

    Upon success, the browser should display the web application.

## Related information

* [Understand Docker container networks](dockernetworks.md)
* [Work with network commands](work-with-networks.md)
* [Docker Swarm overview](https://docs.docker.com/swarm)
* [Docker Machine overview](https://docs.docker.com/machine)