# Experimental: Compose, Swarm and Multi-Host Networking

The [experimental build of Docker](https://github.com/docker/docker/tree/master/experimental) has an entirely new networking system, which enables secure communication between containers on multiple hosts. In combination with Docker Swarm and Docker Compose, you can now run multi-container apps on multi-host clusters with the same tooling and configuration format you use to develop them locally.

> Note: This functionality is in the experimental stage, and contains some hacks and workarounds which will be removed as it matures.

## Prerequisites

Before you start, you’ll need to install the experimental build of Docker, and the latest versions of Machine and Compose.

- To install the experimental Docker build on a Linux machine, follow the instructions [here](https://github.com/docker/docker/tree/master/experimental#install-docker-experimental).

- To install the experimental Docker build on a Mac, run these commands:

        $ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
        $ chmod +x /usr/local/bin/docker

- To install Machine, follow the instructions [here](http://docs.docker.com/machine/).

- To install Compose, follow the instructions [here](http://docs.docker.com/compose/install/).

You’ll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account.

## Set up a swarm with multi-host networking

Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to a valid Digital Ocean API token, which you can generate in the [API panel](https://cloud.digitalocean.com/settings/applications).
    export DIGITALOCEAN_ACCESS_TOKEN=abc12345

Start a consul server:

    docker-machine --debug create \
        -d digitalocean \
        --engine-install-url="https://experimental.docker.com" \
        consul

    docker $(docker-machine config consul) run -d \
        -p "8500:8500" \
        -h "consul" \
        progrium/consul -server -bootstrap

(In a real world setting you’d set up a distributed consul, but that’s beyond the scope of this guide!)

Create a Swarm token:

    export SWARM_TOKEN=$(docker run swarm create)

Next, create a Swarm master with Machine:

    docker-machine --debug create \
        -d digitalocean \
        --digitalocean-image="ubuntu-14-10-x64" \
        --engine-install-url="https://experimental.docker.com" \
        --engine-opt="default-network=overlay:multihost" \
        --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
        --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
        swarm-0

Usually Machine can create Swarms for you, but it doesn’t yet fully support multi-host networks, so you’ll have to start up the Swarm manually:

    docker $(docker-machine config swarm-0) run -d \
        --restart="always" \
        --net="bridge" \
        swarm:latest join \
        --addr "$(docker-machine ip swarm-0):2376" \
        "token://$SWARM_TOKEN"

    docker $(docker-machine config swarm-0) run -d \
        --restart="always" \
        --net="bridge" \
        -p "3376:3376" \
        -v "/etc/docker:/etc/docker" \
        swarm:latest manage \
        --tlsverify \
        --tlscacert="/etc/docker/ca.pem" \
        --tlscert="/etc/docker/server.pem" \
        --tlskey="/etc/docker/server-key.pem" \
        -H "tcp://0.0.0.0:3376" \
        --strategy spread \
        "token://$SWARM_TOKEN"

Create a Swarm node:

    docker-machine --debug create \
        -d digitalocean \
        --digitalocean-image="ubuntu-14-10-x64" \
        --engine-install-url="https://experimental.docker.com" \
        --engine-opt="default-network=overlay:multihost" \
        --engine-opt="kv-store=consul:$(docker-machine ip consul):8500" \
        --engine-label="com.docker.network.driver.overlay.bind_interface=eth0" \
        --engine-label="com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0)" \
        swarm-1

    docker $(docker-machine config swarm-1) run -d \
        --restart="always" \
        --net="bridge" \
        swarm:latest join \
        --addr "$(docker-machine ip swarm-1):2376" \
        "token://$SWARM_TOKEN"

You can create more Swarm nodes if you want; it’s best to give them sensible names (swarm-2, swarm-3, and so on).

Finally, point Docker at your swarm:

    export DOCKER_HOST=tcp://"$(docker-machine ip swarm-0):3376"
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/swarm-0"

## Run containers and get them communicating

Now that you’ve got a swarm up and running, you can create containers on it just like a single Docker instance:

    $ docker run busybox echo hello world
    hello world

If you run `docker ps -a`, you can see what node that container was started on by looking at its name (here it’s swarm-3):

    $ docker ps -a
    CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS                      PORTS               NAMES
    41f59749737b        busybox             "echo hello world"   15 seconds ago      Exited (0) 13 seconds ago                       swarm-3/trusting_leakey

As you start more containers, they’ll be placed on different nodes across the cluster, thanks to Swarm’s default “spread” scheduling strategy.

Every container started on this swarm will use the “overlay:multihost” network by default, meaning they can all intercommunicate. Each container gets an IP address on that network, and an `/etc/hosts` file which will be updated on-the-fly with every other container’s IP address and name. That means that if you have a running container named ‘foo’, other containers can access it at the hostname ‘foo’.

Let’s verify that multi-host networking is functioning.
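The name-based discovery described above boils down to a plain name-to-IP table in each container’s `/etc/hosts`. As a rough illustration of that lookup (the names and addresses below are invented examples; in reality libc consults the file for you, this is not Docker code):

```python
# Illustrative /etc/hosts content of the kind the overlay network
# maintains inside each container (addresses are invented examples).
HOSTS = "172.21.0.5\tweb\n172.21.0.6\tlong-running\n"

def resolve(name, hosts=HOSTS):
    """Look up a hostname the way /etc/hosts is consulted: first match wins."""
    for line in hosts.splitlines():
        fields = line.split()
        if len(fields) >= 2 and name in fields[1:]:
            return fields[0]
    return None

print(resolve('long-running'))  # -> 172.21.0.6
```

Because the file is rewritten as containers come and go, the mapping stays current without any extra service-discovery machinery in your application.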
Start a long-running container:

    $ docker run -d --name long-running busybox top
    <container id>

If you start a new container and inspect its `/etc/hosts` file, you’ll see the long-running container in there:

    $ docker run busybox cat /etc/hosts
    ...
    172.21.0.6	long-running

Verify that connectivity works between containers:

    $ docker run busybox ping long-running
    PING long-running (172.21.0.6): 56 data bytes
    64 bytes from 172.21.0.6: seq=0 ttl=64 time=7.975 ms
    64 bytes from 172.21.0.6: seq=1 ttl=64 time=1.378 ms
    64 bytes from 172.21.0.6: seq=2 ttl=64 time=1.348 ms
    ^C
    --- long-running ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 1.140/2.099/7.975 ms

## Run a Compose application

Here’s an example of a simple Python + Redis app using multi-host networking on a swarm.

Create a directory for the app:

    $ mkdir composetest
    $ cd composetest

Inside this directory, create two files.

First, create `app.py` - a simple web app that uses the Flask framework and increments a value in Redis:

    from flask import Flask
    from redis import Redis

    app = Flask(__name__)
    redis = Redis(host='composetest_redis_1', port=6379)

    @app.route('/')
    def hello():
        redis.incr('hits')
        return 'Hello World! I have been seen %s times.' % redis.get('hits')

    if __name__ == "__main__":
        app.run(host="0.0.0.0", debug=True)

Note that we’re connecting to a host called `composetest_redis_1` - this is the name of the Redis container that Compose will start.

Second, create a Dockerfile for the app container:

    FROM python:2.7
    RUN pip install flask redis
    ADD . /code
    WORKDIR /code
    CMD ["python", "app.py"]

Build the Docker image and push it to the Hub (you’ll need a Hub account).
Replace `<username>` with your Docker Hub username:

    $ docker build -t <username>/counter .
    $ docker push <username>/counter

Next, create a `docker-compose.yml`, which defines the configuration for the web and redis containers. Once again, replace `<username>` with your Hub username:

    web:
      image: <username>/counter
      ports:
        - "80:5000"
    redis:
      image: redis

Now start the app:

    $ docker-compose up -d
    Pulling web (username/counter:latest)...
    swarm-0: Pulling username/counter:latest... : downloaded
    swarm-2: Pulling username/counter:latest... : downloaded
    swarm-1: Pulling username/counter:latest... : downloaded
    swarm-3: Pulling username/counter:latest... : downloaded
    swarm-4: Pulling username/counter:latest... : downloaded
    Creating composetest_web_1...
    Pulling redis (redis:latest)...
    swarm-2: Pulling redis:latest... : downloaded
    swarm-1: Pulling redis:latest... : downloaded
    swarm-3: Pulling redis:latest... : downloaded
    swarm-4: Pulling redis:latest... : downloaded
    swarm-0: Pulling redis:latest... : downloaded
    Creating composetest_redis_1...

Swarm has created containers for both web and redis, and placed them on different nodes, which you can check with `docker ps`:

    $ docker ps
    CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                    NAMES
    92faad2135c9        redis               "/entrypoint.sh redi   43 seconds ago      Up 42 seconds                                swarm-2/composetest_redis_1
    adb809e5cdac        username/counter    "/bin/sh -c 'python    55 seconds ago      Up 54 seconds       45.67.8.9:80->5000/tcp   swarm-1/composetest_web_1

You can also see that the web container has exposed port 80 on its swarm node. If you curl that IP, you’ll get a response from the container:

    $ curl http://45.67.8.9
    Hello World! I have been seen 1 times.
If you hit it repeatedly, the counter will increment, demonstrating that the web and redis containers are communicating:

    $ curl http://45.67.8.9
    Hello World! I have been seen 2 times.
    $ curl http://45.67.8.9
    Hello World! I have been seen 3 times.
    $ curl http://45.67.8.9
    Hello World! I have been seen 4 times.
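The incrementing responses come straight from the view function in `app.py`. To see that logic in isolation, here is a minimal sketch with a small in-memory class standing in for the Redis client (`FakeRedis` is an invention for illustration; the real app talks to the `composetest_redis_1` container over the overlay network):

```python
class FakeRedis:
    """In-memory stand-in for the Redis client used by app.py."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        # Redis INCR treats a missing key as 0, then increments it.
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

redis = FakeRedis()

def hello():
    # Mirrors the '/' view in app.py: bump the counter, then report it.
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

print(hello())  # Hello World! I have been seen 1 times.
print(hello())  # Hello World! I have been seen 2 times.
```

Because the counter lives in the redis container rather than in the web container, every request, no matter which node serves it, sees and updates the same value.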