<!--[metadata]>
+++
title = "Docker run reference"
description = "Configure containers at runtime"
keywords = ["docker, run, configure, runtime"]
[menu.main]
parent = "mn_reference"
+++
<![end-metadata]-->

<!-- TODO (@thaJeztah) define more flexible table/td classes -->
<style>
.content-body table .no-wrap {
    white-space: nowrap;
}
</style>
# Docker run reference

Docker runs processes in isolated containers. A container is a process
which runs on a host. The host may be local or remote. When an operator
executes `docker run`, the container process that runs is isolated in
that it has its own file system, its own networking, and its own
isolated process tree separate from the host.

This page details how to use the `docker run` command to define the
container's resources at runtime.

## General form

The basic `docker run` command takes this form:

    $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

The `docker run` command must specify an [*IMAGE*](glossary.md#image)
to derive the container from. An image developer can define image
defaults related to:

 * detached or foreground running
 * container identification
 * network settings
 * runtime constraints on CPU and memory
 * privileges and LXC configuration

With the `docker run [OPTIONS]`, an operator can add to or override the
image defaults set by a developer. Additionally, operators can
override nearly all the defaults set by the Docker runtime itself. The
operator's ability to override image and Docker runtime defaults is why
[*run*](commandline/run.md) has more options than any
other `docker` command.

To learn how to interpret the types of `[OPTIONS]`, see [*Option
types*](commandline/cli.md#option-types).

> **Note**: Depending on your Docker system configuration, you may be
> required to preface the `docker run` command with `sudo`. To avoid
> having to use `sudo` with the `docker` command, your system
> administrator can create a Unix group called `docker` and add users to
> it. For more information about this configuration, refer to the Docker
> installation documentation for your operating system.

## Operator exclusive options

Only the operator (the person executing `docker run`) can set the
following options.

 - [Detached vs foreground](#detached-vs-foreground)
     - [Detached (-d)](#detached-d)
     - [Foreground](#foreground)
 - [Container identification](#container-identification)
     - [Name (--name)](#name-name)
     - [PID equivalent](#pid-equivalent)
 - [IPC settings (--ipc)](#ipc-settings-ipc)
 - [Network settings](#network-settings)
 - [Restart policies (--restart)](#restart-policies-restart)
 - [Clean up (--rm)](#clean-up-rm)
 - [Runtime constraints on resources](#runtime-constraints-on-resources)
 - [Runtime privilege, Linux capabilities, and LXC configuration](#runtime-privilege-linux-capabilities-and-lxc-configuration)

## Detached vs foreground

When starting a Docker container, you must first decide if you want to
run the container in the background in a "detached" mode or in the
default foreground mode:

    -d=false: Detached mode: Run container in the background, print new container id

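For example, using the stock `busybox` image purely as an illustration, the
same image behaves quite differently in the two modes:

    # Detached: print the new container ID and return to the shell immediately
    $ docker run -d busybox top

    # Foreground: stay attached and stream the command's output to the terminal
    $ docker run busybox echo "hello from the foreground"
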
### Detached (-d)

To start a container in detached mode, use the `-d=true` or just the `-d`
option. By design, containers started in detached mode exit when the root
process used to run the container exits. A container in detached mode cannot
be automatically removed when it stops; this means you cannot use the `--rm`
option together with the `-d` option.

Do not pass a `service x start` command to a detached container. For example, this
command attempts to start the `nginx` service.

    $ docker run -d -p 80:80 my_image service nginx start

This succeeds in starting the `nginx` service inside the container. However,
it defeats the detached container paradigm: the root process (`service nginx
start`) returns and the detached container stops as designed. As a result, the
`nginx` service is started but cannot be used. Instead, to start a process
such as the `nginx` web server do the following:

    $ docker run -d -p 80:80 my_image nginx -g 'daemon off;'

To do input/output with a detached container, use network connections or shared
volumes. These are required because the container is no longer listening to the
command line where `docker run` was run.

To reattach to a detached container, use the `docker`
[*attach*](commandline/attach.md) command.

### Foreground

In foreground mode (the default when `-d` is not specified), `docker
run` can start the process in the container and attach the console to
the process's standard input, output, and standard error. It can even
pretend to be a TTY (this is what most command line executables expect)
and pass along signals. All of that is configurable:

    -a=[]           : Attach to `STDIN`, `STDOUT` and/or `STDERR`
    -t=false        : Allocate a pseudo-tty
    --sig-proxy=true: Proxy all received signals to the process (non-TTY mode only)
    -i=false        : Keep STDIN open even if not attached

If you do not specify `-a` then Docker will [attach all standard
streams](https://github.com/docker/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797).
You can specify to which of the three standard streams (`STDIN`, `STDOUT`,
`STDERR`) you'd like to connect instead, as in:

    $ docker run -a stdin -a stdout -i -t ubuntu /bin/bash

For interactive processes (like a shell), you must use `-i -t` together in
order to allocate a tty for the container process. `-i -t` is often written `-it`
as you'll see in later examples. Specifying `-t` is forbidden when the client
standard output is redirected or piped, such as in:
`echo test | docker run -i busybox cat`.

>**Note**: A process running as PID 1 inside a container is treated
>specially by Linux: it ignores any signal with the default action.
>So, the process will not terminate on `SIGINT` or `SIGTERM` unless it is
>coded to do so.

## Container identification

### Name (--name)

The operator can identify a container in three ways:

- UUID long identifier
  ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil_ptolemy")

The UUID identifiers come from the Docker daemon. If you do not
assign a name to the container with `--name` then the daemon
generates a random string name for it. A name can be a handy way to
add meaning to a container since you can use it when defining
[*links*](../userguide/dockerlinks.md) (or in any
other place you need to identify a container). This works for both
background and foreground Docker containers.

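For example, a container started with an explicit name can be referred to by
that name in later commands (the `.State.Running` field shown is as reported
by `docker inspect` in this release):

    $ docker run -d --name my-top busybox top
    $ docker inspect -f "{{ .State.Running }}" my-top
    true
    $ docker stop my-top
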
### PID equivalent

Finally, to help with automation, you can have Docker write the
container ID out to a file of your choosing. This is similar to how some
programs might write out their process ID to a file (you've seen them as
PID files):

    --cidfile="": Write the container ID to the file

### Image[:tag]

While not strictly a means of identifying a container, you can specify a version of an
image you'd like to run the container with by adding `image[:tag]` to the command. For
example, `docker run ubuntu:14.04`.

### Image[@digest]

Images using the v2 or later image format have a content-addressable identifier
called a digest. As long as the input used to generate the image is unchanged,
the digest value is predictable and referenceable.

## PID settings (--pid)

    --pid=""  : Set the PID (Process) Namespace mode for the container,
                 'host': use the host's PID namespace inside the container

By default, all containers have the PID namespace enabled.

The PID namespace provides separation of processes. It removes the
view of the system processes, and allows process IDs to be reused,
including PID 1.

In certain cases you want your container to share the host's process namespace,
basically allowing processes within the container to see all of the processes
on the system. For example, you could build a container with debugging tools
like `strace` or `gdb`, but want to use these tools when debugging processes
within the container.

    $ docker run --pid=host rhel7 strace -p 1234

This command would allow you to use `strace` inside the container on pid 1234 on
the host.

## UTS settings (--uts)

    --uts=""  : Set the UTS namespace mode for the container,
                 'host': use the host's UTS namespace inside the container

The UTS namespace is for setting the hostname and the domain that is visible
to running processes in that namespace. By default, all containers, including
those with `--net=host`, have their own UTS namespace. The `host` setting will
result in the container using the same UTS namespace as the host.

You may wish to share the UTS namespace with the host if you would like the
hostname of the container to change as the hostname of the host changes. A
more advanced use case would be changing the host's hostname from a container.

> **Note**: `--uts="host"` gives the container full access to change the
> hostname of the host and is therefore considered insecure.

## IPC settings (--ipc)

    --ipc=""  : Set the IPC mode for the container,
                 'container:<name|id>': reuses another container's IPC namespace
                 'host': use the host's IPC namespace inside the container

By default, all containers have the IPC namespace enabled.

The IPC (POSIX/SysV IPC) namespace provides separation of named shared memory
segments, semaphores and message queues.

Shared memory segments are used to accelerate inter-process communication at
memory speed, rather than through pipes or through the network stack. Shared
memory is commonly used by databases and custom-built (typically C/OpenMPI,
C++/using boost libraries) high performance applications for scientific
computing and financial services industries. If these types of applications
are broken into multiple containers, you might need to share the IPC mechanisms
of the containers.

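A minimal sketch of that pattern, with placeholder image and command names:
the first container owns the IPC namespace and the second one joins it.

    $ docker run -d --name ipc-server my_image server
    $ docker run --rm --ipc container:ipc-server my_image client
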
## Network settings

    --dns=[]         : Set custom dns servers for the container
    --net="bridge"   : Connects a container to a network
                        'bridge': creates a new network stack for the container on the docker bridge
                        'none': no networking for this container
                        'container:<name|id>': reuses another container network stack
                        'host': use the host network stack inside the container
                        'NETWORK': connects the container to a user-created network using the `docker network create` command
    --add-host=""    : Add a line to /etc/hosts (host:IP)
    --mac-address="" : Sets the container's Ethernet device's MAC address

By default, all containers have networking enabled and they can make any
outgoing connections. The operator can completely disable networking
with `docker run --net none` which disables all incoming and outgoing
networking. In cases like this, you would perform I/O through files or
`STDIN` and `STDOUT` only.

Publishing ports and linking to other containers will not work
when `--net` is anything other than the default (bridge).

Your container will use the same DNS servers as the host by default, but
you can override this with `--dns`.

By default, the MAC address is generated using the IP address allocated to the
container. You can set the container's MAC address explicitly by providing a
MAC address via the `--mac-address` parameter (format: `12:34:56:78:9a:bc`).

Supported networks:

<table>
  <thead>
    <tr>
      <th class="no-wrap">Network</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td class="no-wrap"><strong>none</strong></td>
      <td>
        No networking in the container.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>bridge</strong> (default)</td>
      <td>
        Connect the container to the bridge via veth interfaces.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>host</strong></td>
      <td>
        Use the host's network stack inside the container.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>container</strong>:&lt;name|id&gt;</td>
      <td>
        Use the network stack of another container, specified via
        its *name* or *id*.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>NETWORK</strong></td>
      <td>
        Connects the container to a user-created network (using the `docker network create` command)
      </td>
    </tr>
  </tbody>
</table>

#### Network: none

When the network is set to `none`, a container will not have
access to any external routes. The container will still have a
`loopback` interface enabled in the container but it does not have any
routes to external traffic.

#### Network: bridge

With the network set to `bridge`, a container will use Docker's
default networking setup. A bridge is set up on the host, commonly named
`docker0`, and a pair of `veth` interfaces will be created for the
container. One side of the `veth` pair will remain on the host attached
to the bridge while the other side of the pair will be placed inside the
container's namespaces in addition to the `loopback` interface. An IP
address will be allocated for containers on the bridge's network and
traffic will be routed through this bridge to the container.

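For example, you can look up the address allocated to a bridge-mode container
(the `.NetworkSettings.IPAddress` field is as reported by `docker inspect` in
this release; the address shown is illustrative):

    $ docker run -d --name bridged busybox top
    $ docker inspect -f "{{ .NetworkSettings.IPAddress }}" bridged
    172.17.0.5
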
#### Network: host

With the network set to `host`, a container will share the host's
network stack and all interfaces from the host will be available to the
container. The container's hostname will match the hostname on the host
system. Note that `--add-host`, `--hostname`, `--dns`, `--dns-search`,
`--dns-opt` and `--mac-address` are invalid in `host` netmode.

Compared to the default `bridge` mode, the `host` mode gives *significantly*
better networking performance since it uses the host's native networking stack
whereas the bridge has to go through one level of virtualization through the
docker daemon. It is recommended to run containers in this mode when their
networking performance is critical, for example, a production load balancer
or a high performance web server.

> **Note**: `--net="host"` gives the container full access to local system
> services such as D-bus and is therefore considered insecure.

#### Network: container

With the network set to `container`, a container will share the
network stack of another container. The other container's name must be
provided in the format of `--net container:<name|id>`. Note that `--add-host`,
`--hostname`, `--dns`, `--dns-search`, `--dns-opt` and `--mac-address` are
invalid in `container` netmode, and `--publish`, `--publish-all` and `--expose`
are also invalid in `container` netmode.

For example, run a Redis container with Redis binding to `localhost`, then
run the `redis-cli` command and connect to the Redis server over the
`localhost` interface:

    $ docker run -d --name redis example/redis --bind 127.0.0.1
    $ # use the redis container's network stack to access localhost
    $ docker run --rm -it --net container:redis example/redis-cli -h 127.0.0.1

#### Network: User-Created NETWORK

In addition to all the above special networks, a user can create a network using
their favorite network driver or an external plugin. The driver used to create the
network takes care of all the network plumbing requirements for the container
connected to that network.

The following example creates a network using the built-in overlay network
driver and runs a container in that network:

```
$ docker network create -d overlay multi-host-network
$ docker run --net=multi-host-network -itd --name=container3 busybox
```

### Managing /etc/hosts

Your container will have lines in `/etc/hosts` which define the hostname of the
container itself as well as `localhost` and a few other common things. The
`--add-host` flag can be used to add additional lines to `/etc/hosts`.

    $ docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
    172.17.0.22    09d03f76bf2c
    fe00::0        ip6-localnet
    ff00::0        ip6-mcastprefix
    ff02::1        ip6-allnodes
    ff02::2        ip6-allrouters
    127.0.0.1      localhost
    ::1            localhost ip6-localhost ip6-loopback
    86.75.30.9     db-static

## Restart policies (--restart)

Using the `--restart` flag with `docker run`, you can specify a restart policy
that controls whether and how a container should be restarted on exit.

When a restart policy is active on a container, it will be shown as either `Up`
or `Restarting` in [`docker ps`](commandline/ps.md). It can also be
useful to use [`docker events`](commandline/events.md) to see the
restart policy in effect.

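You can also check which policy a container was started with; the
`.HostConfig.RestartPolicy.Name` field shown here is as reported by
`docker inspect` in this release, for a container started with
`--restart=always`:

    $ docker inspect -f "{{ .HostConfig.RestartPolicy.Name }}" my-container
    # always
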
Docker supports the following restart policies:

<table>
  <thead>
    <tr>
      <th>Policy</th>
      <th>Result</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>no</strong></td>
      <td>
        Do not automatically restart the container when it exits. This is the
        default.
      </td>
    </tr>
    <tr>
      <td>
        <span style="white-space: nowrap">
          <strong>on-failure</strong>[:max-retries]
        </span>
      </td>
      <td>
        Restart only if the container exits with a non-zero exit status.
        Optionally, limit the number of restart retries the Docker
        daemon attempts.
      </td>
    </tr>
    <tr>
      <td><strong>always</strong></td>
      <td>
        Always restart the container regardless of the exit status.
        When you specify always, the Docker daemon will try to restart
        the container indefinitely. The container will also always start
        on daemon startup, regardless of the current state of the container.
      </td>
    </tr>
    <tr>
      <td><strong>unless-stopped</strong></td>
      <td>
        Always restart the container regardless of the exit status, but
        do not start it on daemon startup if the container has been put
        to a stopped state before.
      </td>
    </tr>
  </tbody>
</table>

An ever-increasing delay (double the previous delay, starting at 100
milliseconds) is added before each restart to prevent flooding the server.
This means the daemon will wait for 100 ms, then 200 ms, 400 ms, 800 ms,
1600 ms, and so on, until either the `on-failure` limit is hit, or you
`docker stop` or `docker rm -f` the container.

If a container is successfully restarted (the container is started and runs
for at least 10 seconds), the delay is reset to its default value of 100 ms.

You can specify the maximum number of times Docker will try to restart the
container when using the **on-failure** policy. The default is that Docker
will try forever to restart the container. The number of (attempted) restarts
for a container can be obtained via
[`docker inspect`](commandline/inspect.md). For example, to get the number of
restarts for container "my-container":

    $ docker inspect -f "{{ .RestartCount }}" my-container
    # 2

Or, to get the last time the container was (re)started:

    $ docker inspect -f "{{ .State.StartedAt }}" my-container
    # 2015-03-04T23:47:07.691840179Z

You cannot set any restart policy in combination with
["clean up (--rm)"](#clean-up-rm). Setting both `--restart` and `--rm`
results in an error.

### Examples

    $ docker run --restart=always redis

This will run the `redis` container with a restart policy of **always**
so that if the container exits, Docker will restart it.

    $ docker run --restart=on-failure:10 redis

This will run the `redis` container with a restart policy of **on-failure**
and a maximum restart count of 10. If the `redis` container exits with a
non-zero exit status more than 10 times in a row, Docker will abort trying to
restart the container. Providing a maximum restart limit is only valid for the
**on-failure** policy.

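Similarly, to keep a container running across daemon restarts unless it has
been explicitly stopped, use the **unless-stopped** policy:

    $ docker run -d --restart=unless-stopped redis
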
## Clean up (--rm)

By default, a container's file system persists even after the container
exits. This makes debugging a lot easier (since you can inspect the
final state) and you retain all your data by default. But if you are
running short-term **foreground** processes, these container file
systems can really pile up. If instead you'd like Docker to
**automatically clean up the container and remove the file system when
the container exits**, you can add the `--rm` flag:

    --rm=false: Automatically remove the container when it exits (incompatible with -d)

> **Note**: When you set the `--rm` flag, Docker also removes the volumes
> associated with the container when the container is removed. This is similar
> to running `docker rm -v my-container`.

## Security configuration

    --security-opt="label:user:USER"   : Set the label user for the container
    --security-opt="label:role:ROLE"   : Set the label role for the container
    --security-opt="label:type:TYPE"   : Set the label type for the container
    --security-opt="label:level:LEVEL" : Set the label level for the container
    --security-opt="label:disable"     : Turn off label confinement for the container
    --security-opt="apparmor:PROFILE"  : Set the apparmor profile to be applied
                                         to the container

You can override the default labeling scheme for each container by specifying
the `--security-opt` flag. For example, you can specify the MCS/MLS level, a
requirement for MLS systems. Specifying the level in the following command
allows you to share the same content between containers.

    $ docker run --security-opt label:level:s0:c100,c200 -i -t fedora bash

An MLS example might be:

    $ docker run --security-opt label:level:TopSecret -i -t rhel7 bash

To disable the security labeling for this container versus running with the
`--permissive` flag, use the following command:

    $ docker run --security-opt label:disable -i -t fedora bash

If you want a tighter security policy on the processes within a container,
you can specify an alternate type for the container. You could run a container
that is only allowed to listen on Apache ports by executing the following
command:

    $ docker run --security-opt label:type:svirt_apache_t -i -t centos bash

> **Note**: You would have to write policy defining a `svirt_apache_t` type.

## Specifying custom cgroups

Using the `--cgroup-parent` flag, you can pass a specific cgroup to run a
container in. This allows you to create and manage cgroups on your own. You can
define custom resources for those cgroups and put containers under a common
parent group.

## Runtime constraints on resources

The operator can also adjust the performance parameters of the
container:

| Option                     | Description                                                                                 |
|----------------------------|---------------------------------------------------------------------------------------------|
| `-m`, `--memory=""`        | Memory limit (format: `<number>[<unit>]`, where unit = b, k, m or g)                        |
| `--memory-swap=""`         | Total memory limit (memory + swap, format: `<number>[<unit>]`, where unit = b, k, m or g)   |
| `--memory-reservation=""`  | Memory soft limit (format: `<number>[<unit>]`, where unit = b, k, m or g)                   |
| `--kernel-memory=""`       | Kernel memory limit (format: `<number>[<unit>]`, where unit = b, k, m or g)                 |
| `-c`, `--cpu-shares=0`     | CPU shares (relative weight)                                                                |
| `--cpu-period=0`           | Limit the CPU CFS (Completely Fair Scheduler) period                                        |
| `--cpuset-cpus=""`         | CPUs in which to allow execution (0-3, 0,1)                                                 |
| `--cpuset-mems=""`         | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. |
| `--cpu-quota=0`            | Limit the CPU CFS (Completely Fair Scheduler) quota                                         |
| `--blkio-weight=0`         | Block IO weight (relative weight), accepts a weight value between 10 and 1000.              |
| `--oom-kill-disable=false` | Whether to disable the OOM Killer for the container or not.                                 |
| `--memory-swappiness=""`   | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.        |

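Several of these flags can be combined in a single `docker run`; the values
below are arbitrary examples rather than recommendations:

    $ docker run -ti -m 300M -c 512 --blkio-weight 300 ubuntu:14.04 /bin/bash
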
### User memory constraints

We have four ways to set user memory usage:

<table>
  <thead>
    <tr>
      <th>Option</th>
      <th>Result</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td class="no-wrap">
          <strong>memory=inf, memory-swap=inf</strong> (default)
      </td>
      <td>
        There is no memory limit for the container. The container can use
        as much memory as needed.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>memory=L&lt;inf, memory-swap=inf</strong></td>
      <td>
        (specify memory and set memory-swap as <code>-1</code>) The container is
        not allowed to use more than L bytes of memory, but can use as much swap
        as is needed (if the host supports swap memory).
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>memory=L&lt;inf, memory-swap=2*L</strong></td>
      <td>
        (specify memory without memory-swap) The container is not allowed to
        use more than L bytes of memory, and combined swap *plus* memory usage
        is limited to double that amount.
      </td>
    </tr>
    <tr>
      <td class="no-wrap">
          <strong>memory=L&lt;inf, memory-swap=S&lt;inf, L&lt;=S</strong>
      </td>
      <td>
        (specify both memory and memory-swap) The container is not allowed to
        use more than L bytes of memory, and combined swap *plus* memory usage
        is limited by S.
      </td>
    </tr>
  </tbody>
</table>

Examples:

    $ docker run -ti ubuntu:14.04 /bin/bash

We set nothing about memory; this means the processes in the container can use
as much memory and swap memory as they need.

    $ docker run -ti -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash

We set a memory limit and disabled the swap memory limit; this means the
processes in the container can use 300M of memory and as much swap memory as
they need (if the host supports swap memory).

    $ docker run -ti -m 300M ubuntu:14.04 /bin/bash

We set a memory limit only; this means the processes in the container can use
300M of memory and 300M of swap memory. By default, the total virtual memory
size (`--memory-swap`) is set to double the memory limit; in this case,
memory + swap would be 2*300M, so processes can use 300M of swap memory as
well.

    $ docker run -ti -m 300M --memory-swap 1G ubuntu:14.04 /bin/bash

We set both memory and swap memory, so the processes in the container can use
300M of memory and 700M of swap memory.

Memory reservation is a kind of memory soft limit that allows for greater
sharing of memory. Under normal circumstances, containers can use as much of
the memory as needed and are constrained only by the hard limits set with the
`-m`/`--memory` option. When memory reservation is set, Docker detects memory
contention or low memory and forces containers to restrict their consumption to
a reservation limit.

Always set the memory reservation value below the hard limit, otherwise the hard
limit takes precedence. A reservation of 0 is the same as setting no
reservation. By default (without a reservation set), memory reservation is the
same as the hard memory limit.

Memory reservation is a soft-limit feature and does not guarantee the limit
won't be exceeded. Instead, the feature attempts to ensure that, when memory is
heavily contended for, memory is allocated based on the reservation hints/setup.

The following example limits the memory (`-m`) to 500M and sets the memory
reservation to 200M.

```bash
$ docker run -ti -m 500M --memory-reservation 200M ubuntu:14.04 /bin/bash
```

Under this configuration, when the container consumes memory more than 200M and
less than 500M, the next system memory reclaim attempts to shrink container
memory below 200M.

The following example sets the memory reservation to 1G without a hard memory
limit.

```bash
$ docker run -ti --memory-reservation 1G ubuntu:14.04 /bin/bash
```

The container can use as much memory as it needs. The memory reservation setting
ensures the container doesn't consume too much memory for a long time, because
every memory reclaim shrinks the container's consumption to the reservation.

By default, the kernel kills processes in a container if an out-of-memory (OOM)
error occurs. To change this behaviour, use the `--oom-kill-disable` option.
Only disable the OOM killer on containers where you have also set the
`-m/--memory` option. If the `-m` flag is not set, this can result in the host
running out of memory and require killing the host's system processes to free
memory.

The following example limits the memory to 100M and disables the OOM killer for
this container:

    $ docker run -ti -m 100M --oom-kill-disable ubuntu:14.04 /bin/bash

The following example illustrates a dangerous way to use the flag:

    $ docker run -ti --oom-kill-disable ubuntu:14.04 /bin/bash

The container has unlimited memory which can cause the host to run out of
memory and require killing system processes to free memory.

### Kernel memory constraints

Kernel memory is fundamentally different from user memory as kernel memory can't
be swapped out. The inability to swap makes it possible for the container to
block system services by consuming too much kernel memory. Kernel memory includes:

 - stack pages
 - slab pages
 - sockets memory pressure
 - tcp memory pressure

You can set up a kernel memory limit to constrain these kinds of memory. For example,
every process consumes some stack pages. By limiting kernel memory, you can
prevent new processes from being created when the kernel memory usage is too high.

Kernel memory is never completely independent of user memory. Instead, you limit
kernel memory in the context of the user memory limit. Assume "U" is the user memory
limit and "K" the kernel limit. There are three possible ways to set limits:

<table>
  <thead>
    <tr>
      <th>Option</th>
      <th>Result</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td class="no-wrap"><strong>U != 0, K = inf</strong> (default)</td>
      <td>
        This is the standard memory limitation mechanism already present before using
        kernel memory. Kernel memory is completely ignored.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>U != 0, K &lt; U</strong></td>
      <td>
        Kernel memory is a subset of the user memory. This setup is useful in
        deployments where the total amount of memory per-cgroup is overcommitted.
        Overcommitting kernel memory limits is definitely not recommended, since the
        box can still run out of non-reclaimable memory.
        In this case, you can configure K so that the sum of all groups is
        never greater than the total memory, and then freely set U at the expense
        of the system's service quality.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>U != 0, K &gt; U</strong></td>
      <td>
        Kernel memory charges are also fed to the user counter, and reclamation
        is triggered for the container for both kinds of memory. This configuration
        gives the admin a unified view of memory. It is also useful for people
        who just want to track kernel memory usage.
      </td>
    </tr>
  </tbody>
</table>

Examples:

    $ docker run -ti -m 500M --kernel-memory 50M ubuntu:14.04 /bin/bash

We set both memory and kernel memory, so the processes in the container can use
500M of memory in total, of which at most 50M can be kernel memory.

    $ docker run -ti --kernel-memory 50M ubuntu:14.04 /bin/bash

We set kernel memory without **-m**, so the processes in the container can
use as much memory as they want, but they can only use 50M of kernel memory.

### Swappiness constraint

By default, a container's kernel can swap out a percentage of anonymous pages.
To set this percentage for a container, specify a `--memory-swappiness` value
between 0 and 100. A value of 0 turns off anonymous page swapping. A value of
100 sets all anonymous pages as swappable. By default, if you are not using
`--memory-swappiness`, the memory swappiness value is inherited from the parent.

For example, you can set:

    $ docker run -ti --memory-swappiness=0 ubuntu:14.04 /bin/bash

Setting the `--memory-swappiness` option is helpful when you want to retain the
container's working set and to avoid swapping performance penalties.

### CPU share constraint

By default, all containers get the same proportion of CPU cycles. This proportion
can be modified by changing the container's CPU share weighting relative
to the weighting of all other running containers.

To modify the proportion from the default of 1024, use the `-c` or `--cpu-shares`
flag to set the weighting to 2 or higher. If 0 is set, the system will ignore the
value and use the default of 1024.

The proportion will only apply when CPU-intensive processes are running.
When tasks in one container are idle, other containers can use the
left-over CPU time. The actual amount of CPU time will vary depending on
the number of containers running on the system.

For example, consider three containers: one has a cpu-share of 1024 and
the other two have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.5%, 16.5% and 33% of the CPU.

On a multi-core system, the shares of CPU time are distributed over all CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.

For example, consider a system with more than three cores. If you start one
container `{C0}` with `-c=512` running one process, and another container
`{C1}` with `-c=1024` running two processes, this can result in the following
division of CPU shares:

    PID    container    CPU    CPU share
    100    {C0}         0      100% of CPU0
    101    {C1}         1      100% of CPU1
    102    {C1}         2      100% of CPU2

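To set a container's relative weight explicitly, pass the `-c`/`--cpu-shares`
flag on the command line; for example, the following starts a container with
half the default weighting:

    $ docker run -ti -c 512 ubuntu:14.04 /bin/bash
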
### CPU period constraint

The default CPU CFS (Completely Fair Scheduler) period is 100ms. You can use
`--cpu-period` to set the period of CPUs to limit the container's CPU usage.
`--cpu-period` is usually used together with `--cpu-quota`.

Examples:

    $ docker run -ti --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash

If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.

For more information, see the [CFS documentation on bandwidth limiting](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt).

### Cpuset constraint

You can set the CPUs in which to allow execution for containers.

Examples:

    $ docker run -ti --cpuset-cpus="1,3" ubuntu:14.04 /bin/bash

This means processes in the container can be executed on cpu 1 and cpu 3.

    $ docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash

This means processes in the container can be executed on cpu 0, cpu 1 and cpu 2.

You can also set the memory nodes (mems) in which to allow execution for
containers. This is only effective on NUMA systems.

Examples:

    $ docker run -ti --cpuset-mems="1,3" ubuntu:14.04 /bin/bash

This example restricts the processes in the container to only use memory from
memory nodes 1 and 3.

    $ docker run -ti --cpuset-mems="0-2" ubuntu:14.04 /bin/bash

This example restricts the processes in the container to only use memory from
memory nodes 0, 1 and 2.

### CPU quota constraint

The `--cpu-quota` flag limits the container's CPU usage. The default 0 value
allows the container to take 100% of a CPU resource (1 CPU). The CFS (Completely Fair
Scheduler) handles resource allocation for executing processes and is the default
Linux scheduler used by the kernel. Set this value to 50000 to limit the container
to 50% of a CPU resource. For multiple CPUs, adjust the `--cpu-quota` as necessary.
For more information, see the [CFS documentation on bandwidth limiting](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt).

### Block IO bandwidth (Blkio) constraint

By default, all containers get the same proportion of block IO bandwidth
(blkio). This proportion is 500. To modify this proportion, change the
container's blkio weight relative to the weighting of all other running
containers using the `--blkio-weight` flag.

The `--blkio-weight` flag can set the weighting to a value between 10 and 1000.
For example, the commands below create two containers with different blkio
weights:

    $ docker run -ti --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
    $ docker run -ti --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash

If you do block IO in the two containers at the same time, for example by
running:

    $ time dd if=/mnt/zerofile of=test.out bs=1M count=1024 oflag=direct

You'll find that the proportion of time is the same as the proportion of blkio
weights of the two containers.

> **Note:** The blkio weight setting is only available for direct IO. Buffered IO
> is not currently supported.

## Additional groups

    --group-add: Add additional groups to join

By default, the docker container process runs with the supplementary groups
looked up for the specified user. If you want to add more to that list of
groups, you can use this flag:

    $ docker run -ti --rm --group-add audio --group-add dbus --group-add 777 busybox id
    uid=0(root) gid=0(root) groups=10(wheel),29(audio),81(dbus),777

## Runtime privilege, Linux capabilities, and LXC configuration

    --cap-add: Add Linux capabilities
    --cap-drop: Drop Linux capabilities
    --privileged=false: Give extended privileges to this container
    --device=[]: Allows you to run devices inside the container without the --privileged flag.
    --lxc-conf=[]: Add custom lxc options

By default, Docker containers are "unprivileged" and cannot, for
example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see [lxc-template.go](
https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go)
and documentation on [cgroups devices](
https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).

When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host as well as set some configuration
in AppArmor or SELinux to allow the container nearly all the same access to the
host as processes running outside containers on the host. Additional
information about running with `--privileged` is available on the
[Docker Blog](http://blog.docker.com/2013/09/docker-can-now-run-within-docker/).

If you want to limit access to a specific device or devices, you can use
the `--device` flag. It allows you to specify one or more devices that
will be accessible within the container.

    $ docker run --device=/dev/snd:/dev/snd ...

By default, the container will be able to `read`, `write`, and `mknod` these devices.
This can be overridden using a third `:rwm` set of options to each `--device` flag:

    $ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc

    Command (m for help): q
    $ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc
    You will not be able to write the partition table.

    Command (m for help): q

    $ docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc
        crash....

    $ docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc
    fdisk: unable to open /dev/xvdc: Operation not permitted

In addition to `--privileged`, the operator can have fine-grained control over the
capabilities using `--cap-add` and `--cap-drop`. By default, Docker keeps a default
list of capabilities. The following table lists the Linux capability options which
can be added or dropped.

| Capability Key   | Capability Description |
| ---------------- | ---------------------- |
| SETPCAP          | Modify process capabilities. |
| SYS_MODULE       | Load and unload kernel modules. |
| SYS_RAWIO        | Perform I/O port operations (iopl(2) and ioperm(2)). |
| SYS_PACCT        | Use acct(2), switch process accounting on or off. |
| SYS_ADMIN        | Perform a range of system administration operations. |
| SYS_NICE         | Raise process nice value (nice(2), setpriority(2)) and change the nice value for arbitrary processes. |
| SYS_RESOURCE     | Override resource limits. |
| SYS_TIME         | Set system clock (settimeofday(2), stime(2), adjtimex(2)); set real-time (hardware) clock. |
| SYS_TTY_CONFIG   | Use vhangup(2); employ various privileged ioctl(2) operations on virtual terminals. |
| MKNOD            | Create special files using mknod(2). |
| AUDIT_WRITE      | Write records to kernel auditing log. |
| AUDIT_CONTROL    | Enable and disable kernel auditing; change auditing filter rules; retrieve auditing status and filtering rules. |
| MAC_OVERRIDE     | Allow MAC configuration or state changes. Implemented for the Smack LSM. |
| MAC_ADMIN        | Override Mandatory Access Control (MAC). Implemented for the Smack Linux Security Module (LSM). |
| NET_ADMIN        | Perform various network-related operations. |
| SYSLOG           | Perform privileged syslog(2) operations. |
| CHOWN            | Make arbitrary changes to file UIDs and GIDs (see chown(2)). |
| NET_RAW          | Use RAW and PACKET sockets. |
| DAC_OVERRIDE     | Bypass file read, write, and execute permission checks. |
| FOWNER           | Bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file. |
| DAC_READ_SEARCH  | Bypass file read permission checks and directory read and execute permission checks. |
| FSETID           | Don't clear set-user-ID and set-group-ID permission bits when a file is modified. |
| KILL             | Bypass permission checks for sending signals. |
| SETGID           | Make arbitrary manipulations of process GIDs and supplementary GID list. |
| SETUID           | Make arbitrary manipulations of process UIDs. |
| LINUX_IMMUTABLE  | Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags. |
| NET_BIND_SERVICE | Bind a socket to internet domain privileged ports (port numbers less than 1024). |
| NET_BROADCAST    | Make socket broadcasts, and listen to multicasts. |
| IPC_LOCK         | Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)). |
| IPC_OWNER        | Bypass permission checks for operations on System V IPC objects. |
| SYS_CHROOT       | Use chroot(2), change root directory. |
| SYS_PTRACE       | Trace arbitrary processes using ptrace(2). |
| SYS_BOOT         | Use reboot(2) and kexec_load(2), reboot and load a new kernel for later execution. |
| LEASE            | Establish leases on arbitrary files (see fcntl(2)). |
| SETFCAP          | Set file capabilities. |
| WAKE_ALARM       | Trigger something that will wake up the system. |
| BLOCK_SUSPEND    | Employ features that can block system suspend. |

Further reference information is available on the
[capabilities(7) - Linux man page](http://linux.die.net/man/7/capabilities).

Both flags support the value `ALL`, so if the
operator wants to have all capabilities but `MKNOD` they could use:

    $ docker run --cap-add=ALL --cap-drop=MKNOD ...

For interacting with the network stack, instead of using `--privileged` the
operator should use `--cap-add=NET_ADMIN` to modify the network interfaces.


    $ docker run -t -i --rm ubuntu:14.04 ip link add dummy0 type dummy
    RTNETLINK answers: Operation not permitted
    $ docker run -t -i --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy

To mount a FUSE based filesystem, you need to combine both `--cap-add` and
`--device`:

    $ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt
    fuse: failed to open /dev/fuse: Operation not permitted
    $ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt
    fusermount: mount failed: Operation not permitted
    $ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs
    # sshfs sven@10.10.10.20:/home/sven /mnt
    The authenticity of host '10.10.10.20 (10.10.10.20)' can't be established.
    ECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6.
    Are you sure you want to continue connecting (yes/no)? yes
    sven@10.10.10.20's password:
    root@30aa0cfaf1b5:/# ls -la /mnt/src/docker
    total 1516
    drwxrwxr-x 1 1000 1000   4096 Dec  4 06:08 .
    drwxrwxr-x 1 1000 1000   4096 Dec  4 11:46 ..
    -rw-rw-r-- 1 1000 1000     16 Oct  8 00:09 .dockerignore
    -rwxrwxr-x 1 1000 1000    464 Oct  8 00:09 .drone.yml
    drwxrwxr-x 1 1000 1000   4096 Dec  4 06:11 .git
    -rw-rw-r-- 1 1000 1000    461 Dec  4 06:08 .gitignore
    ....

If the Docker daemon was started using the `lxc` exec-driver
(`docker daemon --exec-driver=lxc`) then the operator can also specify LXC options
using one or more `--lxc-conf` parameters. These can be new parameters or
override existing parameters from the [lxc-template.go](
https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go).
Note that in the future, a given host's docker daemon may not use LXC, so this
is an implementation-specific configuration meant for operators already
familiar with using LXC directly.

> **Note:**
> If you use `--lxc-conf` to modify a container's configuration which is also
> managed by the Docker daemon, then the Docker daemon will not know about this
> modification, and you will need to manage any conflicts yourself. For example,
> you can use `--lxc-conf` to set a container's IP address, but this will not be
> reflected in the `/etc/hosts` file.

## Logging drivers (--log-driver)

The container can have a different logging driver than the Docker daemon. Use
the `--log-driver=VALUE` option with the `docker run` command to configure the
container's logging driver. The following options are supported:

| Driver      | Description                                                                                                                   |
|-------------|-------------------------------------------------------------------------------------------------------------------------------|
| `none`      | Disables any logging for the container. `docker logs` won't be available with this driver.                                     |
| `json-file` | Default logging driver for Docker. Writes JSON messages to file. No logging options are supported for this driver.             |
| `syslog`    | Syslog logging driver for Docker. Writes log messages to syslog.                                                                |
| `journald`  | Journald logging driver for Docker. Writes log messages to `journald`.                                                          |
| `gelf`      | Graylog Extended Log Format (GELF) logging driver for Docker. Writes log messages to a GELF endpoint like Graylog or Logstash. |
| `fluentd`   | Fluentd logging driver for Docker. Writes log messages to `fluentd` (forward input).                                            |
| `awslogs`   | Amazon CloudWatch Logs logging driver for Docker. Writes log messages to Amazon CloudWatch Logs.                                |

The `docker logs` command is available only for the `json-file` and `journald`
logging drivers. For detailed information on working with logging drivers, see
[Configure a logging driver](logging/overview.md).

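For example, to send a container's output to the host's syslog instead of the
default JSON file (the image name is a placeholder):

    $ docker run -d --log-driver=syslog my_image
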
## Overriding Dockerfile image defaults

When a developer builds an image from a [*Dockerfile*](builder.md)
or when she commits it, the developer can set a number of default parameters
that take effect when the image starts up as a container.

Four of the Dockerfile commands cannot be overridden at runtime: `FROM`,
`MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding override
in `docker run`. We'll go through what the developer might have set in each
Dockerfile instruction and how the operator can override that setting.

 - [CMD (Default Command or Options)](#cmd-default-command-or-options)
 - [ENTRYPOINT (Default Command to Execute at Runtime)](
    #entrypoint-default-command-to-execute-at-runtime)
 - [EXPOSE (Incoming Ports)](#expose-incoming-ports)
 - [ENV (Environment Variables)](#env-environment-variables)
 - [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
 - [USER](#user)
 - [WORKDIR](#workdir)

### CMD (default command or options)

Recall the optional `COMMAND` in the Docker
commandline:

    $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

This command is optional because the person who created the `IMAGE` may
have already provided a default `COMMAND` using the Dockerfile `CMD`
instruction. As the operator (the person running a container from the
image), you can override that `CMD` instruction just by specifying a new
`COMMAND`.

If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND`
get appended as arguments to the `ENTRYPOINT`.

### ENTRYPOINT (default command to execute at runtime)

    --entrypoint="": Overwrite the default entrypoint set by the image

The `ENTRYPOINT` of an image is similar to a `COMMAND` because it
specifies what executable to run when the container starts, but it is
(purposely) more difficult to override. The `ENTRYPOINT` gives a
container its default nature or behavior, so that when you set an
`ENTRYPOINT` you can run the container *as if it were that binary*,
complete with default options, and you can pass in more options via the
`COMMAND`. But, sometimes an operator may want to run something else
inside the container, so you can override the default `ENTRYPOINT` at
runtime by using a string to specify the new `ENTRYPOINT`. Here is an
example of how to run a shell in a container that has been set up to
automatically run something else (like `/usr/bin/redis-server`):

    $ docker run -i -t --entrypoint /bin/bash example/redis

or two examples of how to pass more parameters to that ENTRYPOINT:

    $ docker run -i -t --entrypoint /bin/bash example/redis -c ls -l
    $ docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help

### EXPOSE (incoming ports)

The following `run` command options work with container networking:

    --expose=[]: Expose a port or a range of ports inside the container.
                 These are additional to those exposed by the `EXPOSE` instruction
    -P=false   : Publish all exposed ports to the host interfaces
    -p=[]      : Publish a container's port or a range of ports to the host
                   format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
                   Both hostPort and containerPort can be specified as a
                   range of ports. When specifying ranges for both, the
                   number of container ports in the range must match the
                   number of host ports in the range, for example:
                       -p 1234-1236:1234-1236/tcp

                   When specifying a range for hostPort only, the
                   containerPort must not be a range. In this case the
                   container port is published somewhere within the
                   specified hostPort range. (e.g., `-p 1234-1236:1234/tcp`)

                   (use 'docker port' to see the actual mapping)

    --link=""  : Add link to another container (<name or id>:alias or <name or id>)

With the exception of the `EXPOSE` directive, an image developer doesn't have
much control over networking. The `EXPOSE` instruction defines the
initial incoming ports that provide services. These ports are available
to processes inside the container. An operator can use the `--expose`
option to add to the exposed ports.

To expose a container's internal port, an operator can start the
container with the `-P` or `-p` flag. The exposed port is accessible on
the host and the ports are available to any client that can reach the
host.

The `-P` option publishes all the ports to the host interfaces. Docker
binds each exposed port to a random port on the host. The range of
ports is within an *ephemeral port range* defined by
`/proc/sys/net/ipv4/ip_local_port_range`. Use the `-p` flag to
explicitly map a single port or range of ports.

The port number inside the container (where the service listens) does
not need to match the port number exposed on the outside of the
container (where clients connect). For example, inside the container an
HTTP service is listening on port 80 (and so the image developer
specifies `EXPOSE 80` in the Dockerfile). At runtime, the port might be
bound to 42800 on the host. To find the mapping between the host ports
and the exposed ports, use `docker port`.

If the operator uses `--link` when starting a new client container,
then the client container can access the exposed port via a private
networking interface. Docker will set some environment variables in the
client container to help indicate which interface and port to use.
For more information on linking, see [the guide on linking containers
together](../userguide/dockerlinks.md).

### ENV (environment variables)

When a new container is created, Docker will set the following environment
variables automatically:

<table>
 <tr>
  <th>Variable</th>
  <th>Value</th>
 </tr>
 <tr>
  <td><code>HOME</code></td>
  <td>
    Set based on the value of <code>USER</code>
  </td>
 </tr>
 <tr>
  <td><code>HOSTNAME</code></td>
  <td>
    The hostname associated with the container
  </td>
 </tr>
 <tr>
  <td><code>PATH</code></td>
  <td>
    Includes popular directories, such as:<br>
    <code>/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</code>
  </td>
 </tr>
 <tr>
  <td><code>TERM</code></td>
  <td><code>xterm</code> if the container is allocated a pseudo-TTY</td>
 </tr>
</table>

The container may also include environment variables defined
as a result of the container being linked with another container. See
the [*Container Links*](../userguide/dockerlinks.md#connect-with-the-linking-system)
section for more details.

Additionally, the operator can **set any environment variable** in the
container by using one or more `-e` flags, even overriding those mentioned
above, or already defined by the developer with a Dockerfile `ENV`:

    $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
    declare -x HOME="/"
    declare -x HOSTNAME="85bc26a0e200"
    declare -x OLDPWD
    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    declare -x PWD="/"
    declare -x SHLVL="1"
    declare -x container="lxc"
    declare -x deep="purple"

Similarly, the operator can set the **hostname** with `-h`.

`--link <name or id>:alias` also sets environment variables, using the *alias* string to
define environment variables within the container that give the IP and PORT
information for connecting to the service container. Let's imagine we have a
container running Redis:

    # Start the service container, named redis-name
    $ docker run -d --name redis-name dockerfiles/redis
    4241164edf6f5aca5b0e9e4c9eccd899b0b8080c64c0cd26efe02166c73208f3

    # The redis-name container exposed port 6379
    $ docker ps
    CONTAINER ID        IMAGE                      COMMAND                CREATED             STATUS              PORTS               NAMES
    4241164edf6f        dockerfiles/redis:latest   /redis-stable/src/re   5 seconds ago       Up 4 seconds        6379/tcp            redis-name

    # Note that there are no public ports exposed since we didn't use -p or -P
    $ docker port 4241164edf6f 6379
    2014/01/25 00:55:38 Error: No public port '6379' published for 4241164edf6f

Yet we can get information about the Redis container's exposed ports
with `--link`. Choose an alias that will form a
valid environment variable!


    $ docker run --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c export
    declare -x HOME="/"
    declare -x HOSTNAME="acda7f7b1cdc"
    declare -x OLDPWD
    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    declare -x PWD="/"
    declare -x REDIS_ALIAS_NAME="/distracted_wright/redis"
    declare -x REDIS_ALIAS_PORT="tcp://172.17.0.32:6379"
    declare -x REDIS_ALIAS_PORT_6379_TCP="tcp://172.17.0.32:6379"
    declare -x REDIS_ALIAS_PORT_6379_TCP_ADDR="172.17.0.32"
    declare -x REDIS_ALIAS_PORT_6379_TCP_PORT="6379"
    declare -x REDIS_ALIAS_PORT_6379_TCP_PROTO="tcp"
    declare -x SHLVL="1"
    declare -x container="lxc"

And we can use that information to connect from another container as a client:

    $ docker run -i -t --rm --link redis-name:redis_alias --entrypoint /bin/bash dockerfiles/redis -c '/redis-stable/src/redis-cli -h $REDIS_ALIAS_PORT_6379_TCP_ADDR -p $REDIS_ALIAS_PORT_6379_TCP_PORT'
    172.17.0.32:6379>

Docker will also map the private IP address to the alias of a linked
container by inserting an entry into `/etc/hosts`. You can use this
mechanism to communicate with a linked container by its alias:

    $ docker run -d --name servicename busybox sleep 30
    $ docker run -i -t --link servicename:servicealias busybox ping -c 1 servicealias

If you restart the source container (`servicename` in this case), the recipient
container's `/etc/hosts` entry will be automatically updated.

> **Note**:
> Unlike host entries in the `/etc/hosts` file, IP addresses stored in the
> environment variables are not automatically updated if the source container is
> restarted. We recommend using the host entries in `/etc/hosts` to resolve the
> IP address of linked containers.

### VOLUME (shared filesystems)

    -v=[]: Create a bind mount with: [host-dir:]container-dir[:<options>], where
           options are comma delimited and selected from [rw|ro] and [z|Z].
           If 'host-dir' is missing, then docker creates a new volume.
           If neither 'rw' nor 'ro' is specified then the volume is mounted
           in read-write mode.
    --volumes-from="": Mount all volumes from the given container(s)

> **Note**:
> The auto-creation of the host path has been [*deprecated*](../misc/deprecated.md#auto-creating-missing-host-paths-for-bind-mounts).

The volumes commands are complex enough to have their own documentation
in section [*Managing data in
containers*](../userguide/dockervolumes.md). A developer can define
one or more `VOLUME`s associated with an image, but only the operator
can give access from one container to another (or from a container to a
volume mounted on the host).

The `container-dir` must always be an absolute path such as `/src/docs`.
The `host-dir` can either be an absolute path or a `name` value. If you
supply an absolute path for the `host-dir`, Docker bind-mounts to the path
you specify. If you supply a `name`, Docker creates a named volume by that `name`.

A `name` value must start with an alphanumeric character,
followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
An absolute path starts with a `/` (forward slash).

For example, you can specify either `/foo` or `foo` for a `host-dir` value.
If you supply the `/foo` value, Docker creates a bind-mount. If you supply
the `foo` specification, Docker creates a named volume.

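For example, the first command below bind-mounts a host directory read-only,
while the second uses a named volume called `foo`; the paths are only
illustrative:

    $ docker run --rm -v /src/docs:/docs:ro busybox ls /docs
    $ docker run --rm -v foo:/data busybox touch /data/hello
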
### USER

`root` (id = 0) is the default user within a container. The image developer can
create additional users. Those users are accessible by name. When passing a numeric
ID, the user does not have to exist in the container.

The developer can set a default user to run the first process with the
Dockerfile `USER` instruction. When starting a container, the operator can override
the `USER` instruction by passing the `-u` option.

    -u="": Username or UID

> **Note:** If you pass a numeric uid, it must be in the range of 0-2147483647.

### WORKDIR

The default working directory for running binaries within a container is the
root directory (`/`), but the developer can set a different default with the
Dockerfile `WORKDIR` command. The operator can override this with:

    -w="": Working directory inside the container

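For example, both of these defaults can be overridden at runtime; the
throwaway containers below report the effective working directory and user ID:

    $ docker run --rm -w /tmp ubuntu:14.04 pwd
    /tmp
    $ docker run --rm -u 1000 ubuntu:14.04 id -u
    1000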