github.com/robertojrojas/docker@v1.9.1/docs/reference/run.md (about) 1 <!--[metadata]> 2 +++ 3 title = "Docker run reference" 4 description = "Configure containers at runtime" 5 keywords = ["docker, run, configure, runtime"] 6 [menu.main] 7 parent = "mn_reference" 8 +++ 9 <![end-metadata]--> 10 11 <!-- TODO (@thaJeztah) define more flexible table/td classes --> 12 <style> 13 .content-body table .no-wrap { 14 white-space: nowrap; 15 } 16 </style> 17 # Docker run reference 18 19 Docker runs processes in isolated containers. A container is a process 20 which runs on a host. The host may be local or remote. When an operator 21 executes `docker run`, the container process that runs is isolated in 22 that it has its own file system, its own networking, and its own 23 isolated process tree separate from the host. 24 25 This page details how to use the `docker run` command to define the 26 container's resources at runtime. 27 28 ## General form 29 30 The basic `docker run` command takes this form: 31 32 $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...] 33 34 The `docker run` command must specify an [*IMAGE*](glossary.md#image) 35 to derive the container from. An image developer can define image 36 defaults related to: 37 38 * detached or foreground running 39 * container identification 40 * network settings 41 * runtime constraints on CPU and memory 42 * privileges and LXC configuration 43 44 With the `docker run [OPTIONS]` an operator can add to or override the 45 image defaults set by a developer. And, additionally, operators can 46 override nearly all the defaults set by the Docker runtime itself. The 47 operator's ability to override image and Docker runtime defaults is why 48 [*run*](commandline/run.md) has more options than any 49 other `docker` command. 50 51 To learn how to interpret the types of `[OPTIONS]`, see [*Option 52 types*](commandline/cli.md#option-types). 53 54 > **Note**: Depending on your Docker system configuration, you may be 55 > required to preface the `docker run` command with `sudo`. To avoid 56 > having to use `sudo` with the `docker` command, your system 57 > administrator can create a Unix group called `docker` and add users to 58 > it. For more information about this configuration, refer to the Docker 59 > installation documentation for your operating system. 60 61 62 ## Operator exclusive options 63 64 Only the operator (the person executing `docker run`) can set the 65 following options. 66 67 - [Detached vs foreground](#detached-vs-foreground) 68 - [Detached (-d)](#detached-d) 69 - [Foreground](#foreground) 70 - [Container identification](#container-identification) 71 - [Name (--name)](#name-name) 72 - [PID equivalent](#pid-equivalent) 73 - [IPC settings (--ipc)](#ipc-settings-ipc) 74 - [Network settings](#network-settings) 75 - [Restart policies (--restart)](#restart-policies-restart) 76 - [Clean up (--rm)](#clean-up-rm) 77 - [Runtime constraints on resources](#runtime-constraints-on-resources) 78 - [Runtime privilege, Linux capabilities, and LXC configuration](#runtime-privilege-linux-capabilities-and-lxc-configuration) 79 80 ## Detached vs foreground 81 82 When starting a Docker container, you must first decide if you want to 83 run the container in the background in a "detached" mode or in the 84 default foreground mode: 85 86 -d=false: Detached mode: Run container in the background, print new container id 87 88 ### Detached (-d) 89 90 To start a container in detached mode, you use `-d=true` or just `-d` option. 
By design, containers started in detached mode exit when the root process used to run the container exits. A container in detached mode cannot be automatically removed when it stops, which means you cannot use the `--rm` option with the `-d` option.

Do not pass a `service x start` command to a detached container. For example, this command attempts to start the `nginx` service.

    $ docker run -d -p 80:80 my_image service nginx start

This succeeds in starting the `nginx` service inside the container. However, it defeats the detached container paradigm: the root process (`service nginx start`) returns, and the detached container stops as designed. As a result, the `nginx` service is started but cannot be used. Instead, to start a process such as the `nginx` web server, do the following:

    $ docker run -d -p 80:80 my_image nginx -g 'daemon off;'

To do input/output with a detached container, use network connections or shared volumes. These are required because the container is no longer listening to the command line where `docker run` was run.

To reattach to a detached container, use the `docker` [*attach*](commandline/attach.md) command.

### Foreground

In foreground mode (the default when `-d` is not specified), `docker run` can start the process in the container and attach the console to the process's standard input, output, and standard error. It can even pretend to be a TTY (this is what most command line executables expect) and pass along signals. All of that is configurable:

    -a=[]           : Attach to `STDIN`, `STDOUT` and/or `STDERR`
    -t=false        : Allocate a pseudo-tty
    --sig-proxy=true: Proxy all received signals to the process (non-TTY mode only)
    -i=false        : Keep STDIN open even if not attached

If you do not specify `-a` then Docker will [attach all standard streams](https://github.com/docker/docker/blob/75a7f4d90cde0295bcfb7213004abce8d4779b75/commands.go#L1797). You can specify to which of the three standard streams (`STDIN`, `STDOUT`, `STDERR`) you'd like to connect instead, as in:

    $ docker run -a stdin -a stdout -i -t ubuntu /bin/bash

For interactive processes (like a shell), you must use `-i -t` together in order to allocate a tty for the container process. `-i -t` is often written `-it`, as you'll see in later examples. Specifying `-t` is forbidden when the client's standard output is redirected or piped, such as in: `echo test | docker run -i busybox cat`.

>**Note**: A process running as PID 1 inside a container is treated
>specially by Linux: it ignores any signal with the default action.
>So, the process will not terminate on `SIGINT` or `SIGTERM` unless it is
>coded to do so.

## Container identification

### Name (--name)

The operator can identify a container in three ways:

- UUID long identifier ("f78375b1c487e03c9438c729345e54db9d20cfa2ac1fc3494b6eb60872e74778")
- UUID short identifier ("f78375b1c487")
- Name ("evil_ptolemy")

The UUID identifiers come from the Docker daemon. If you do not assign a container name with the `--name` option, then the daemon generates a random string name for you. Defining a `name` can be a handy way to add meaning to a container. If you specify a `name`, you can use it when referencing the container within a Docker network. This works for both background and foreground Docker containers.
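For example, the following sketch starts a named container on a user-defined network and then refers to it by that name, both from a second container and from other `docker` commands. The network name `isolated_nw` and the `redis` image are only illustrative:

    $ docker network create -d bridge isolated_nw
    $ docker run -d --name my-redis --net isolated_nw redis
    $ docker run --rm --net isolated_nw busybox ping -c 1 my-redis
    $ docker stop my-redis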
163 164 **Note**: Containers on the default bridge network must be linked to communicate by name. 165 166 ### PID equivalent 167 168 Finally, to help with automation, you can have Docker write the 169 container ID out to a file of your choosing. This is similar to how some 170 programs might write out their process ID to a file (you've seen them as 171 PID files): 172 173 --cidfile="": Write the container ID to the file 174 175 ### Image[:tag] 176 177 While not strictly a means of identifying a container, you can specify a version of an 178 image you'd like to run the container with by adding `image[:tag]` to the command. For 179 example, `docker run ubuntu:14.04`. 180 181 ### Image[@digest] 182 183 Images using the v2 or later image format have a content-addressable identifier 184 called a digest. As long as the input used to generate the image is unchanged, 185 the digest value is predictable and referenceable. 186 187 ## PID settings (--pid) 188 189 --pid="" : Set the PID (Process) Namespace mode for the container, 190 'host': use the host's PID namespace inside the container 191 192 By default, all containers have the PID namespace enabled. 193 194 PID namespace provides separation of processes. The PID Namespace removes the 195 view of the system processes, and allows process ids to be reused including 196 pid 1. 197 198 In certain cases you want your container to share the host's process namespace, 199 basically allowing processes within the container to see all of the processes 200 on the system. For example, you could build a container with debugging tools 201 like `strace` or `gdb`, but want to use these tools when debugging processes 202 within the container. 203 204 $ docker run --pid=host rhel7 strace -p 1234 205 206 This command would allow you to use `strace` inside the container on pid 1234 on 207 the host. 208 209 ## UTS settings (--uts) 210 211 --uts="" : Set the UTS namespace mode for the container, 212 'host': use the host's UTS namespace inside the container 213 214 The UTS namespace is for setting the hostname and the domain that is visible 215 to running processes in that namespace. By default, all containers, including 216 those with `--net=host`, have their own UTS namespace. The `host` setting will 217 result in the container using the same UTS namespace as the host. 218 219 You may wish to share the UTS namespace with the host if you would like the 220 hostname of the container to change as the hostname of the host changes. A 221 more advanced use case would be changing the host's hostname from a container. 222 223 > **Note**: `--uts="host"` gives the container full access to change the 224 > hostname of the host and is therefore considered insecure. 225 226 ## IPC settings (--ipc) 227 228 --ipc="" : Set the IPC mode for the container, 229 'container:<name|id>': reuses another container's IPC namespace 230 'host': use the host's IPC namespace inside the container 231 232 By default, all containers have the IPC namespace enabled. 233 234 IPC (POSIX/SysV IPC) namespace provides separation of named shared memory 235 segments, semaphores and message queues. 236 237 Shared memory segments are used to accelerate inter-process communication at 238 memory speed, rather than through pipes or through the network stack. Shared 239 memory is commonly used by databases and custom-built (typically C/OpenMPI, 240 C++/using boost libraries) high performance applications for scientific 241 computing and financial services industries. 
If these types of applications are broken into multiple containers, you might need to share the IPC mechanisms of the containers.

## Network settings

    --dns=[]         : Set custom dns servers for the container
    --net="bridge"   : Connects a container to a network
                        'bridge': creates a new network stack for the container on the docker bridge
                        'none': no networking for this container
                        'container:<name|id>': reuses another container network stack
                        'host': use the host network stack inside the container
                        'NETWORK': connects the container to user-created network using `docker network create` command
    --add-host=""    : Add a line to /etc/hosts (host:IP)
    --mac-address="" : Sets the container's Ethernet device's MAC address

By default, all containers have networking enabled and they can make any outgoing connections. The operator can completely disable networking with `docker run --net none` which disables all incoming and outgoing networking. In cases like this, you would perform I/O through files or `STDIN` and `STDOUT` only.

Publishing ports and linking to other containers only works with the default (bridge) network. The linking feature is a legacy feature. You should always prefer using Docker network drivers over linking.

Your container will use the same DNS servers as the host by default, but you can override this with `--dns`.

By default, the MAC address is generated using the IP address allocated to the container. You can set the container's MAC address explicitly by providing a MAC address via the `--mac-address` parameter (format: `12:34:56:78:9a:bc`).

Supported networks:

<table>
  <thead>
    <tr>
      <th class="no-wrap">Network</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td class="no-wrap"><strong>none</strong></td>
      <td>
        No networking in the container.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>bridge</strong> (default)</td>
      <td>
        Connect the container to the bridge via veth interfaces.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>host</strong></td>
      <td>
        Use the host's network stack inside the container.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>container</strong>:&lt;name|id&gt;</td>
      <td>
        Use the network stack of another container, specified via
        its <em>name</em> or <em>id</em>.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>NETWORK</strong></td>
      <td>
        Connects the container to a user-created network (using the <code>docker network create</code> command)
      </td>
    </tr>
  </tbody>
</table>

#### Network: none

With the network set to `none` a container will not have access to any external routes. The container will still have a `loopback` interface enabled in the container but it does not have any routes to external traffic.

#### Network: bridge

With the network set to `bridge` a container will use Docker's default networking setup. A bridge is set up on the host, commonly named `docker0`, and a pair of `veth` interfaces will be created for the container. One side of the `veth` pair will remain on the host attached to the bridge while the other side of the pair will be placed inside the container's namespaces in addition to the `loopback` interface. An IP address will be allocated for containers on the bridge's network and traffic will be routed through this bridge to the container.
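A quick way to see which address a container received on the default bridge is to read it back with `docker inspect`; the container name below is just an example and the address shown will differ on your host:

    $ docker run -d --name bridge-test busybox top
    $ docker inspect -f '{{ .NetworkSettings.IPAddress }}' bridge-test
    172.17.0.5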
Containers can communicate via their IP addresses by default. To communicate by name, they must be linked.

#### Network: host

With the network set to `host` a container will share the host's network stack and all interfaces from the host will be available to the container. The container's hostname will match the hostname on the host system. Note that `--add-host` `--hostname` `--dns` `--dns-search` `--dns-opt` and `--mac-address` are invalid in `host` netmode.

Compared to the default `bridge` mode, the `host` mode gives *significantly* better networking performance since it uses the host's native networking stack whereas the bridge has to go through one level of virtualization through the docker daemon. It is recommended to run containers in this mode when their networking performance is critical, for example, a production Load Balancer or a High Performance Web Server.

> **Note**: `--net="host"` gives the container full access to local system
> services such as D-bus and is therefore considered insecure.

#### Network: container

With the network set to `container` a container will share the network stack of another container. The other container's name must be provided in the format of `--net container:<name|id>`. Note that `--add-host` `--hostname` `--dns` `--dns-search` `--dns-opt` and `--mac-address` are invalid in `container` netmode, and `--publish` `--publish-all` `--expose` are also invalid in `container` netmode.

The following example runs a Redis container with Redis binding to `localhost`, then runs the `redis-cli` command and connects to the Redis server over the `localhost` interface.

    $ docker run -d --name redis example/redis --bind 127.0.0.1
    $ # use the redis container's network stack to access localhost
    $ docker run --rm -it --net container:redis example/redis-cli -h 127.0.0.1

#### User-defined network

You can create a network using a Docker network driver or an external network driver plugin. You can connect multiple containers to the same network. Once connected to a user-defined network, the containers can communicate easily using only another container's IP address or name.

For `overlay` networks or custom plugins that support multi-host connectivity, containers connected to the same multi-host network but launched from different Engines can also communicate in this way.

The following example creates a network using the built-in `bridge` network driver and runs a container in the created network:

```
$ docker network create -d bridge my-net
$ docker run --net=my-net -itd --name=container3 busybox
```

### Managing /etc/hosts

Your container will have lines in `/etc/hosts` which define the hostname of the container itself as well as `localhost` and a few other common things. The `--add-host` flag can be used to add additional lines to `/etc/hosts`.
396 397 $ docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts 398 172.17.0.22 09d03f76bf2c 399 fe00::0 ip6-localnet 400 ff00::0 ip6-mcastprefix 401 ff02::1 ip6-allnodes 402 ff02::2 ip6-allrouters 403 127.0.0.1 localhost 404 ::1 localhost ip6-localhost ip6-loopback 405 86.75.30.9 db-static 406 407 If a container is connected to the default bridge network and `linked` 408 with other containers, then the container's `/etc/hosts` file is updated 409 with the linked container's name. 410 411 If the container is connected to user-defined network, the container's 412 `/etc/hosts` file is updated with names of all other containers in that 413 user-defined network. 414 415 > **Note** Since Docker may live update the container’s `/etc/hosts` file, there 416 may be situations when processes inside the container can end up reading an 417 empty or incomplete `/etc/hosts` file. In most cases, retrying the read again 418 should fix the problem. 419 420 ## Restart policies (--restart) 421 422 Using the `--restart` flag on Docker run you can specify a restart policy for 423 how a container should or should not be restarted on exit. 424 425 When a restart policy is active on a container, it will be shown as either `Up` 426 or `Restarting` in [`docker ps`](commandline/ps.md). It can also be 427 useful to use [`docker events`](commandline/events.md) to see the 428 restart policy in effect. 429 430 Docker supports the following restart policies: 431 432 <table> 433 <thead> 434 <tr> 435 <th>Policy</th> 436 <th>Result</th> 437 </tr> 438 </thead> 439 <tbody> 440 <tr> 441 <td><strong>no</strong></td> 442 <td> 443 Do not automatically restart the container when it exits. This is the 444 default. 445 </td> 446 </tr> 447 <tr> 448 <td> 449 <span style="white-space: nowrap"> 450 <strong>on-failure</strong>[:max-retries] 451 </span> 452 </td> 453 <td> 454 Restart only if the container exits with a non-zero exit status. 455 Optionally, limit the number of restart retries the Docker 456 daemon attempts. 457 </td> 458 </tr> 459 <tr> 460 <td><strong>always</strong></td> 461 <td> 462 Always restart the container regardless of the exit status. 463 When you specify always, the Docker daemon will try to restart 464 the container indefinitely. The container will also always start 465 on daemon startup, regardless of the current state of the container. 466 </td> 467 </tr> 468 <tr> 469 <td><strong>unless-stopped</strong></td> 470 <td> 471 Always restart the container regardless of the exit status, but 472 do not start it on daemon startup if the container has been put 473 to a stopped state before. 474 </td> 475 </tr> 476 </tbody> 477 </table> 478 479 An ever increasing delay (double the previous delay, starting at 100 480 milliseconds) is added before each restart to prevent flooding the server. 481 This means the daemon will wait for 100 ms, then 200 ms, 400, 800, 1600, 482 and so on until either the `on-failure` limit is hit, or when you `docker stop` 483 or `docker rm -f` the container. 484 485 If a container is successfully restarted (the container is started and runs 486 for at least 10 seconds), the delay is reset to its default value of 100 ms. 487 488 You can specify the maximum amount of times Docker will try to restart the 489 container when using the **on-failure** policy. The default is that Docker 490 will try forever to restart the container. The number of (attempted) restarts 491 for a container can be obtained via [`docker inspect`](commandline/inspect.md). 
For example, to get the number of restarts 492 for container "my-container"; 493 494 $ docker inspect -f "{{ .RestartCount }}" my-container 495 # 2 496 497 Or, to get the last time the container was (re)started; 498 499 $ docker inspect -f "{{ .State.StartedAt }}" my-container 500 # 2015-03-04T23:47:07.691840179Z 501 502 You cannot set any restart policy in combination with 503 ["clean up (--rm)"](#clean-up-rm). Setting both `--restart` and `--rm` 504 results in an error. 505 506 ### Examples 507 508 $ docker run --restart=always redis 509 510 This will run the `redis` container with a restart policy of **always** 511 so that if the container exits, Docker will restart it. 512 513 $ docker run --restart=on-failure:10 redis 514 515 This will run the `redis` container with a restart policy of **on-failure** 516 and a maximum restart count of 10. If the `redis` container exits with a 517 non-zero exit status more than 10 times in a row Docker will abort trying to 518 restart the container. Providing a maximum restart limit is only valid for the 519 **on-failure** policy. 520 521 ## Clean up (--rm) 522 523 By default a container's file system persists even after the container 524 exits. This makes debugging a lot easier (since you can inspect the 525 final state) and you retain all your data by default. But if you are 526 running short-term **foreground** processes, these container file 527 systems can really pile up. If instead you'd like Docker to 528 **automatically clean up the container and remove the file system when 529 the container exits**, you can add the `--rm` flag: 530 531 --rm=false: Automatically remove the container when it exits (incompatible with -d) 532 533 > **Note**: When you set the `--rm` flag, Docker also removes the volumes 534 associated with the container when the container is removed. This is similar 535 to running `docker rm -v my-container`. 536 537 ## Security configuration 538 --security-opt="label:user:USER" : Set the label user for the container 539 --security-opt="label:role:ROLE" : Set the label role for the container 540 --security-opt="label:type:TYPE" : Set the label type for the container 541 --security-opt="label:level:LEVEL" : Set the label level for the container 542 --security-opt="label:disable" : Turn off label confinement for the container 543 --security-opt="apparmor:PROFILE" : Set the apparmor profile to be applied 544 to the container 545 546 You can override the default labeling scheme for each container by specifying 547 the `--security-opt` flag. For example, you can specify the MCS/MLS level, a 548 requirement for MLS systems. Specifying the level in the following command 549 allows you to share the same content between containers. 550 551 $ docker run --security-opt label:level:s0:c100,c200 -i -t fedora bash 552 553 An MLS example might be: 554 555 $ docker run --security-opt label:level:TopSecret -i -t rhel7 bash 556 557 To disable the security labeling for this container versus running with the 558 `--permissive` flag, use the following command: 559 560 $ docker run --security-opt label:disable -i -t fedora bash 561 562 If you want a tighter security policy on the processes within a container, 563 you can specify an alternate type for the container. You could run a container 564 that is only allowed to listen on Apache ports by executing the following 565 command: 566 567 $ docker run --security-opt label:type:svirt_apache_t -i -t centos bash 568 569 > **Note**: You would have to write policy defining a `svirt_apache_t` type. 
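The `apparmor` option from the list above works in a similar way on hosts that use AppArmor rather than SELinux. As a sketch, assuming a profile named `docker-nginx-profile` has already been loaded on the host (for example with `apparmor_parser`), you could apply it to a container like this:

    $ docker run --security-opt apparmor:docker-nginx-profile -i -t ubuntu bash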
570 571 ## Specifying custom cgroups 572 573 Using the `--cgroup-parent` flag, you can pass a specific cgroup to run a 574 container in. This allows you to create and manage cgroups on their own. You can 575 define custom resources for those cgroups and put containers under a common 576 parent group. 577 578 ## Runtime constraints on resources 579 580 The operator can also adjust the performance parameters of the 581 container: 582 583 | Option | Description | 584 |----------------------------|---------------------------------------------------------------------------------------------| 585 | `-m`, `--memory="" ` | Memory limit (format: `<number>[<unit>]`, where unit = b, k, m or g) | 586 | `--memory-swap=""` | Total memory limit (memory + swap, format: `<number>[<unit>]`, where unit = b, k, m or g) | 587 | `--memory-reservation=""` | Memory soft limit (format: `<number>[<unit>]`, where unit = b, k, m or g) | 588 | `--kernel-memory=""` | Kernel memory limit (format: `<number>[<unit>]`, where unit = b, k, m or g) | 589 | `-c`, `--cpu-shares=0` | CPU shares (relative weight) | 590 | `--cpu-period=0` | Limit the CPU CFS (Completely Fair Scheduler) period | 591 | `--cpuset-cpus="" ` | CPUs in which to allow execution (0-3, 0,1) | 592 | `--cpuset-mems=""` | Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems. | 593 | `--cpu-quota=0` | Limit the CPU CFS (Completely Fair Scheduler) quota | 594 | `--blkio-weight=0` | Block IO weight (relative weight) accepts a weight value between 10 and 1000. | 595 | `--oom-kill-disable=false` | Whether to disable OOM Killer for the container or not. | 596 | `--memory-swappiness="" ` | Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100. | 597 598 ### User memory constraints 599 600 We have four ways to set user memory usage: 601 602 <table> 603 <thead> 604 <tr> 605 <th>Option</th> 606 <th>Result</th> 607 </tr> 608 </thead> 609 <tbody> 610 <tr> 611 <td class="no-wrap"> 612 <strong>memory=inf, memory-swap=inf</strong> (default) 613 </td> 614 <td> 615 There is no memory limit for the container. The container can use 616 as much memory as needed. 617 </td> 618 </tr> 619 <tr> 620 <td class="no-wrap"><strong>memory=L<inf, memory-swap=inf</strong></td> 621 <td> 622 (specify memory and set memory-swap as <code>-1</code>) The container is 623 not allowed to use more than L bytes of memory, but can use as much swap 624 as is needed (if the host supports swap memory). 625 </td> 626 </tr> 627 <tr> 628 <td class="no-wrap"><strong>memory=L<inf, memory-swap=2*L</strong></td> 629 <td> 630 (specify memory without memory-swap) The container is not allowed to 631 use more than L bytes of memory, swap *plus* memory usage is double 632 of that. 633 </td> 634 </tr> 635 <tr> 636 <td class="no-wrap"> 637 <strong>memory=L<inf, memory-swap=S<inf, L<=S</strong> 638 </td> 639 <td> 640 (specify both memory and memory-swap) The container is not allowed to 641 use more than L bytes of memory, swap *plus* memory usage is limited 642 by S. 643 </td> 644 </tr> 645 </tbody> 646 </table> 647 648 Examples: 649 650 $ docker run -ti ubuntu:14.04 /bin/bash 651 652 We set nothing about memory, this means the processes in the container can use 653 as much memory and swap memory as they need. 
    $ docker run -ti -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash

We set a memory limit and disabled the swap memory limit, which means the processes in the container can use 300M memory and as much swap memory as they need (if the host supports swap memory).

    $ docker run -ti -m 300M ubuntu:14.04 /bin/bash

We set a memory limit only, which means the processes in the container can use 300M memory and 300M swap memory. By default, the total virtual memory size (`--memory-swap`) will be set as double of memory; in this case, memory + swap would be 2*300M, so processes can use 300M swap memory as well.

    $ docker run -ti -m 300M --memory-swap 1G ubuntu:14.04 /bin/bash

We set both memory and swap memory, so the processes in the container can use 300M memory and 700M swap memory.

Memory reservation is a kind of memory soft limit that allows for greater sharing of memory. Under normal circumstances, containers can use as much of the memory as needed and are constrained only by the hard limits set with the `-m`/`--memory` option. When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.

Always set the memory reservation value below the hard limit, otherwise the hard limit takes precedence. A reservation of 0 is the same as setting no reservation. By default (without reservation set), memory reservation is the same as the hard memory limit.

Memory reservation is a soft-limit feature and does not guarantee the limit won't be exceeded. Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation hints/setup.

The following example limits the memory (`-m`) to 500M and sets the memory reservation to 200M.

```bash
$ docker run -ti -m 500M --memory-reservation 200M ubuntu:14.04 /bin/bash
```

Under this configuration, when the container consumes more than 200M and less than 500M of memory, the next system memory reclaim attempts to shrink container memory below 200M.

The following example sets the memory reservation to 1G without a hard memory limit.

```bash
$ docker run -ti --memory-reservation 1G ubuntu:14.04 /bin/bash
```

The container can use as much memory as it needs. The memory reservation setting ensures the container doesn't consume too much memory for a long time, because every memory reclaim shrinks the container's consumption to the reservation.

By default, the kernel kills processes in a container if an out-of-memory (OOM) error occurs. To change this behaviour, use the `--oom-kill-disable` option. Only disable the OOM killer on containers where you have also set the `-m/--memory` option. If the `-m` flag is not set, this can result in the host running out of memory and require killing the host's system processes to free memory.

The following example limits the memory to 100M and disables the OOM killer for this container:

    $ docker run -ti -m 100M --oom-kill-disable ubuntu:14.04 /bin/bash

The following example illustrates a dangerous way to use the flag:

    $ docker run -ti --oom-kill-disable ubuntu:14.04 /bin/bash

The container has unlimited memory which can cause the host to run out of memory and require killing system processes to free memory.
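To confirm which memory limits were actually applied to a running container, you can read them back with `docker inspect`; the container name `mem-test` is just an example, and the values are reported in bytes:

    $ docker run -d -ti -m 300M --memory-swap 1G --name mem-test ubuntu:14.04 /bin/bash
    $ docker inspect -f '{{ .HostConfig.Memory }} {{ .HostConfig.MemorySwap }}' mem-test
    314572800 1073741824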
### Kernel memory constraints

Kernel memory is fundamentally different from user memory as kernel memory can't be swapped out. The inability to swap makes it possible for the container to block system services by consuming too much kernel memory. Kernel memory includes:

 - stack pages
 - slab pages
 - sockets memory pressure
 - tcp memory pressure

You can set up a kernel memory limit to constrain these kinds of memory. For example, every process consumes some stack pages. By limiting kernel memory, you can prevent new processes from being created when the kernel memory usage is too high.

Kernel memory is never completely independent of user memory. Instead, you limit kernel memory in the context of the user memory limit. Assume "U" is the user memory limit and "K" the kernel limit. There are three possible ways to set limits:

<table>
  <thead>
    <tr>
      <th>Option</th>
      <th>Result</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td class="no-wrap"><strong>U != 0, K = inf</strong> (default)</td>
      <td>
        This is the standard memory limitation mechanism already present before using
        kernel memory. Kernel memory is completely ignored.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>U != 0, K &lt; U</strong></td>
      <td>
        Kernel memory is a subset of the user memory. This setup is useful in
        deployments where the total amount of memory per-cgroup is overcommitted.
        Overcommitting kernel memory limits is definitely not recommended, since the
        box can still run out of non-reclaimable memory.
        In this case, you can configure K so that the sum of all groups is
        never greater than the total memory. Then, freely set U at the expense of
        the system's service quality.
      </td>
    </tr>
    <tr>
      <td class="no-wrap"><strong>U != 0, K &gt; U</strong></td>
      <td>
        Kernel memory charges are also fed to the user counter, and reclamation
        is triggered for the container for both kinds of memory. This configuration
        gives the admin a unified view of memory. It is also useful for people
        who just want to track kernel memory usage.
      </td>
    </tr>
  </tbody>
</table>

Examples:

    $ docker run -ti -m 500M --kernel-memory 50M ubuntu:14.04 /bin/bash

We set memory and kernel memory, so the processes in the container can use 500M of memory in total; within this 500M, at most 50M can be kernel memory.

    $ docker run -ti --kernel-memory 50M ubuntu:14.04 /bin/bash

We set kernel memory without **-m**, so the processes in the container can use as much memory as they want, but they can only use 50M of kernel memory.

### Swappiness constraint

By default, a container's kernel can swap out a percentage of anonymous pages. To set this percentage for a container, specify a `--memory-swappiness` value between 0 and 100. A value of 0 turns off anonymous page swapping. A value of 100 sets all anonymous pages as swappable. By default, if you are not using `--memory-swappiness`, the memory swappiness value will be inherited from the parent.

For example, you can set:

    $ docker run -ti --memory-swappiness=0 ubuntu:14.04 /bin/bash

Setting the `--memory-swappiness` option is helpful when you want to retain the container's working set and to avoid swapping performance penalties.
813 814 ### CPU share constraint 815 816 By default, all containers get the same proportion of CPU cycles. This proportion 817 can be modified by changing the container's CPU share weighting relative 818 to the weighting of all other running containers. 819 820 To modify the proportion from the default of 1024, use the `-c` or `--cpu-shares` 821 flag to set the weighting to 2 or higher. If 0 is set, the system will ignore the 822 value and use the default of 1024. 823 824 The proportion will only apply when CPU-intensive processes are running. 825 When tasks in one container are idle, other containers can use the 826 left-over CPU time. The actual amount of CPU time will vary depending on 827 the number of containers running on the system. 828 829 For example, consider three containers, one has a cpu-share of 1024 and 830 two others have a cpu-share setting of 512. When processes in all three 831 containers attempt to use 100% of CPU, the first container would receive 832 50% of the total CPU time. If you add a fourth container with a cpu-share 833 of 1024, the first container only gets 33% of the CPU. The remaining containers 834 receive 16.5%, 16.5% and 33% of the CPU. 835 836 On a multi-core system, the shares of CPU time are distributed over all CPU 837 cores. Even if a container is limited to less than 100% of CPU time, it can 838 use 100% of each individual CPU core. 839 840 For example, consider a system with more than three cores. If you start one 841 container `{C0}` with `-c=512` running one process, and another container 842 `{C1}` with `-c=1024` running two processes, this can result in the following 843 division of CPU shares: 844 845 PID container CPU CPU share 846 100 {C0} 0 100% of CPU0 847 101 {C1} 1 100% of CPU1 848 102 {C1} 2 100% of CPU2 849 850 ### CPU period constraint 851 852 The default CPU CFS (Completely Fair Scheduler) period is 100ms. We can use 853 `--cpu-period` to set the period of CPUs to limit the container's CPU usage. 854 And usually `--cpu-period` should work with `--cpu-quota`. 855 856 Examples: 857 858 $ docker run -ti --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash 859 860 If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms. 861 862 For more information, see the [CFS documentation on bandwidth limiting](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt). 863 864 ### Cpuset constraint 865 866 We can set cpus in which to allow execution for containers. 867 868 Examples: 869 870 $ docker run -ti --cpuset-cpus="1,3" ubuntu:14.04 /bin/bash 871 872 This means processes in container can be executed on cpu 1 and cpu 3. 873 874 $ docker run -ti --cpuset-cpus="0-2" ubuntu:14.04 /bin/bash 875 876 This means processes in container can be executed on cpu 0, cpu 1 and cpu 2. 877 878 We can set mems in which to allow execution for containers. Only effective 879 on NUMA systems. 880 881 Examples: 882 883 $ docker run -ti --cpuset-mems="1,3" ubuntu:14.04 /bin/bash 884 885 This example restricts the processes in the container to only use memory from 886 memory nodes 1 and 3. 887 888 $ docker run -ti --cpuset-mems="0-2" ubuntu:14.04 /bin/bash 889 890 This example restricts the processes in the container to only use memory from 891 memory nodes 0, 1 and 2. 892 893 ### CPU quota constraint 894 895 The `--cpu-quota` flag limits the container's CPU usage. The default 0 value 896 allows the container to take 100% of a CPU resource (1 CPU). 
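To see the relative weighting described above in isolation, you can pin two CPU-bound containers to the same core and give them different share values. The sketch below uses a simple shell busy-loop as the workload; with both containers contending for cpu 0, the second should receive roughly twice the CPU time of the first (as reported by `docker stats` or `top` on the host):

    $ docker run -d --name low -c 512 --cpuset-cpus=0 ubuntu:14.04 sh -c "while true; do :; done"
    $ docker run -d --name high -c 1024 --cpuset-cpus=0 ubuntu:14.04 sh -c "while true; do :; done"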
The CFS (Completely Fair Scheduler) handles resource allocation for executing processes and is the default Linux scheduler used by the kernel. Set this value to 50000 to limit the container to 50% of a CPU resource. For multiple CPUs, adjust the `--cpu-quota` as necessary. For more information, see the [CFS documentation on bandwidth limiting](https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt).

### Block IO bandwidth (Blkio) constraint

By default, all containers get the same proportion of block IO bandwidth (blkio). This proportion is 500. To modify this proportion, change the container's blkio weight relative to the weighting of all other running containers using the `--blkio-weight` flag.

The `--blkio-weight` flag can set the weighting to a value between 10 and 1000. For example, the commands below create two containers with different blkio weights:

    $ docker run -ti --name c1 --blkio-weight 300 ubuntu:14.04 /bin/bash
    $ docker run -ti --name c2 --blkio-weight 600 ubuntu:14.04 /bin/bash

If you do block IO in the two containers at the same time, for example:

    $ time dd if=/mnt/zerofile of=test.out bs=1M count=1024 oflag=direct

You'll find that the proportion of time is the same as the proportion of blkio weights of the two containers.

> **Note:** The blkio weight setting is only available for direct IO. Buffered IO
> is not currently supported.

## Additional groups

    --group-add: Add additional groups to join

By default, the docker container process runs with the supplementary groups looked up for the specified user. If one wants to add more to that list of groups, then one can use this flag:

    $ docker run -ti --rm --group-add audio --group-add dbus --group-add 777 busybox id
    uid=0(root) gid=0(root) groups=10(wheel),29(audio),81(dbus),777

## Runtime privilege, Linux capabilities, and LXC configuration

    --cap-add: Add Linux capabilities
    --cap-drop: Drop Linux capabilities
    --privileged=false: Give extended privileges to this container
    --device=[]: Allows you to run devices inside the container without the --privileged flag.
    --lxc-conf=[]: Add custom lxc options

By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. This is because by default a container is not allowed to access any devices, but a "privileged" container is given access to all devices (see [lxc-template.go](https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go) and documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).

When the operator executes `docker run --privileged`, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with `--privileged` is available on the [Docker Blog](http://blog.docker.com/2013/09/docker-can-now-run-within-docker/).

If you want to limit access to a specific device or devices you can use the `--device` flag. It allows you to specify one or more devices that will be accessible within the container.

    $ docker run --device=/dev/snd:/dev/snd ...
964 965 By default, the container will be able to `read`, `write`, and `mknod` these devices. 966 This can be overridden using a third `:rwm` set of options to each `--device` flag: 967 968 $ docker run --device=/dev/sda:/dev/xvdc --rm -it ubuntu fdisk /dev/xvdc 969 970 Command (m for help): q 971 $ docker run --device=/dev/sda:/dev/xvdc:r --rm -it ubuntu fdisk /dev/xvdc 972 You will not be able to write the partition table. 973 974 Command (m for help): q 975 976 $ docker run --device=/dev/sda:/dev/xvdc:w --rm -it ubuntu fdisk /dev/xvdc 977 crash.... 978 979 $ docker run --device=/dev/sda:/dev/xvdc:m --rm -it ubuntu fdisk /dev/xvdc 980 fdisk: unable to open /dev/xvdc: Operation not permitted 981 982 In addition to `--privileged`, the operator can have fine grain control over the 983 capabilities using `--cap-add` and `--cap-drop`. By default, Docker has a default 984 list of capabilities that are kept. The following table lists the Linux capability options which can be added or dropped. 985 986 | Capability Key | Capability Description | 987 | -------------- | ---------------------- | 988 | SETPCAP | Modify process capabilities. | 989 | SYS_MODULE| Load and unload kernel modules. | 990 | SYS_RAWIO | Perform I/O port operations (iopl(2) and ioperm(2)). | 991 | SYS_PACCT | Use acct(2), switch process accounting on or off. | 992 | SYS_ADMIN | Perform a range of system administration operations. | 993 | SYS_NICE | Raise process nice value (nice(2), setpriority(2)) and change the nice value for arbitrary processes. | 994 | SYS_RESOURCE | Override resource Limits. | 995 | SYS_TIME | Set system clock (settimeofday(2), stime(2), adjtimex(2)); set real-time (hardware) clock. | 996 | SYS_TTY_CONFIG | Use vhangup(2); employ various privileged ioctl(2) operations on virtual terminals. | 997 | MKNOD | Create special files using mknod(2). | 998 | AUDIT_WRITE | Write records to kernel auditing log. | 999 | AUDIT_CONTROL | Enable and disable kernel auditing; change auditing filter rules; retrieve auditing status and filtering rules. | 1000 | MAC_OVERRIDE | Allow MAC configuration or state changes. Implemented for the Smack LSM. | 1001 | MAC_ADMIN | Override Mandatory Access Control (MAC). Implemented for the Smack Linux Security Module (LSM). | 1002 | NET_ADMIN | Perform various network-related operations. | 1003 | SYSLOG | Perform privileged syslog(2) operations. | 1004 | CHOWN | Make arbitrary changes to file UIDs and GIDs (see chown(2)). | 1005 | NET_RAW | Use RAW and PACKET sockets. | 1006 | DAC_OVERRIDE | Bypass file read, write, and execute permission checks. | 1007 | FOWNER | Bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file. | 1008 | DAC_READ_SEARCH | Bypass file read permission checks and directory read and execute permission checks. | 1009 | FSETID | Don't clear set-user-ID and set-group-ID permission bits when a file is modified. | 1010 | KILL | Bypass permission checks for sending signals. | 1011 | SETGID | Make arbitrary manipulations of process GIDs and supplementary GID list. | 1012 | SETUID | Make arbitrary manipulations of process UIDs. | 1013 | LINUX_IMMUTABLE | Set the FS_APPEND_FL and FS_IMMUTABLE_FL i-node flags. | 1014 | NET_BIND_SERVICE | Bind a socket to internet domain privileged ports (port numbers less than 1024). | 1015 | NET_BROADCAST | Make socket broadcasts, and listen to multicasts. | 1016 | IPC_LOCK | Lock memory (mlock(2), mlockall(2), mmap(2), shmctl(2)). 
| 1017 | IPC_OWNER | Bypass permission checks for operations on System V IPC objects. | 1018 | SYS_CHROOT | Use chroot(2), change root directory. | 1019 | SYS_PTRACE | Trace arbitrary processes using ptrace(2). | 1020 | SYS_BOOT | Use reboot(2) and kexec_load(2), reboot and load a new kernel for later execution. | 1021 | LEASE | Establish leases on arbitrary files (see fcntl(2)). | 1022 | SETFCAP | Set file capabilities.| 1023 | WAKE_ALARM | Trigger something that will wake up the system. | 1024 | BLOCK_SUSPEND | Employ features that can block system suspend. | 1025 1026 Further reference information is available on the [capabilities(7) - Linux man page](http://linux.die.net/man/7/capabilities) 1027 1028 Both flags support the value `ALL`, so if the 1029 operator wants to have all capabilities but `MKNOD` they could use: 1030 1031 $ docker run --cap-add=ALL --cap-drop=MKNOD ... 1032 1033 For interacting with the network stack, instead of using `--privileged` they 1034 should use `--cap-add=NET_ADMIN` to modify the network interfaces. 1035 1036 $ docker run -t -i --rm ubuntu:14.04 ip link add dummy0 type dummy 1037 RTNETLINK answers: Operation not permitted 1038 $ docker run -t -i --rm --cap-add=NET_ADMIN ubuntu:14.04 ip link add dummy0 type dummy 1039 1040 To mount a FUSE based filesystem, you need to combine both `--cap-add` and 1041 `--device`: 1042 1043 $ docker run --rm -it --cap-add SYS_ADMIN sshfs sshfs sven@10.10.10.20:/home/sven /mnt 1044 fuse: failed to open /dev/fuse: Operation not permitted 1045 $ docker run --rm -it --device /dev/fuse sshfs sshfs sven@10.10.10.20:/home/sven /mnt 1046 fusermount: mount failed: Operation not permitted 1047 $ docker run --rm -it --cap-add SYS_ADMIN --device /dev/fuse sshfs 1048 # sshfs sven@10.10.10.20:/home/sven /mnt 1049 The authenticity of host '10.10.10.20 (10.10.10.20)' can't be established. 1050 ECDSA key fingerprint is 25:34:85:75:25:b0:17:46:05:19:04:93:b5:dd:5f:c6. 1051 Are you sure you want to continue connecting (yes/no)? yes 1052 sven@10.10.10.20's password: 1053 root@30aa0cfaf1b5:/# ls -la /mnt/src/docker 1054 total 1516 1055 drwxrwxr-x 1 1000 1000 4096 Dec 4 06:08 . 1056 drwxrwxr-x 1 1000 1000 4096 Dec 4 11:46 .. 1057 -rw-rw-r-- 1 1000 1000 16 Oct 8 00:09 .dockerignore 1058 -rwxrwxr-x 1 1000 1000 464 Oct 8 00:09 .drone.yml 1059 drwxrwxr-x 1 1000 1000 4096 Dec 4 06:11 .git 1060 -rw-rw-r-- 1 1000 1000 461 Dec 4 06:08 .gitignore 1061 .... 1062 1063 1064 If the Docker daemon was started using the `lxc` exec-driver 1065 (`docker daemon --exec-driver=lxc`) then the operator can also specify LXC options 1066 using one or more `--lxc-conf` parameters. These can be new parameters or 1067 override existing parameters from the [lxc-template.go]( 1068 https://github.com/docker/docker/blob/master/daemon/execdriver/lxc/lxc_template.go). 1069 Note that in the future, a given host's docker daemon may not use LXC, so this 1070 is an implementation-specific configuration meant for operators already 1071 familiar with using LXC directly. 1072 1073 > **Note:** 1074 > If you use `--lxc-conf` to modify a container's configuration which is also 1075 > managed by the Docker daemon, then the Docker daemon will not know about this 1076 > modification, and you will need to manage any conflicts yourself. For example, 1077 > you can use `--lxc-conf` to set a container's IP address, but this will not be 1078 > reflected in the `/etc/hosts` file. 
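Going in the other direction from the `--cap-add=ALL --cap-drop=MKNOD` example above, you can drop all capabilities and add back only what the workload needs. For example, a server that merely has to bind a privileged port could be run roughly as follows; this is a sketch, `my_image` and `my_server` are placeholders, and a real application may need additional capabilities:

    $ docker run -d -p 80:80 --cap-drop=ALL --cap-add=NET_BIND_SERVICE my_image my_server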
## Logging drivers (--log-driver)

The container can have a different logging driver than the Docker daemon. Use the `--log-driver=VALUE` with the `docker run` command to configure the container's logging driver. The following options are supported:

| Driver      | Description                                                                                                                    |
|-------------|--------------------------------------------------------------------------------------------------------------------------------|
| `none`      | Disables any logging for the container. `docker logs` won't be available with this driver.                                      |
| `json-file` | Default logging driver for Docker. Writes JSON messages to file. No logging options are supported for this driver.              |
| `syslog`    | Syslog logging driver for Docker. Writes log messages to syslog.                                                                 |
| `journald`  | Journald logging driver for Docker. Writes log messages to `journald`.                                                           |
| `gelf`      | Graylog Extended Log Format (GELF) logging driver for Docker. Writes log messages to a GELF endpoint like Graylog or Logstash.   |
| `fluentd`   | Fluentd logging driver for Docker. Writes log messages to `fluentd` (forward input).                                             |
| `awslogs`   | Amazon CloudWatch Logs logging driver for Docker. Writes log messages to Amazon CloudWatch Logs.                                 |

The `docker logs` command is available only for the `json-file` and `journald` logging drivers. For detailed information on working with logging drivers, see [Configure a logging driver](logging/overview.md).


## Overriding Dockerfile image defaults

When a developer builds an image from a [*Dockerfile*](builder.md) or when she commits it, the developer can set a number of default parameters that take effect when the image starts up as a container.

Four of the Dockerfile commands cannot be overridden at runtime: `FROM`, `MAINTAINER`, `RUN`, and `ADD`. Everything else has a corresponding override in `docker run`. We'll go through what the developer might have set in each Dockerfile instruction and how the operator can override that setting.

 - [CMD (Default Command or Options)](#cmd-default-command-or-options)
 - [ENTRYPOINT (Default Command to Execute at Runtime)](#entrypoint-default-command-to-execute-at-runtime)
 - [EXPOSE (Incoming Ports)](#expose-incoming-ports)
 - [ENV (Environment Variables)](#env-environment-variables)
 - [VOLUME (Shared Filesystems)](#volume-shared-filesystems)
 - [USER](#user)
 - [WORKDIR](#workdir)

### CMD (default command or options)

Recall the optional `COMMAND` in the Docker commandline:

    $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

This command is optional because the person who created the `IMAGE` may have already provided a default `COMMAND` using the Dockerfile `CMD` instruction. As the operator (the person running a container from the image), you can override that `CMD` instruction just by specifying a new `COMMAND`.

If the image also specifies an `ENTRYPOINT` then the `CMD` or `COMMAND` get appended as arguments to the `ENTRYPOINT`.

### ENTRYPOINT (default command to execute at runtime)

    --entrypoint="": Overwrite the default entrypoint set by the image

The `ENTRYPOINT` of an image is similar to a `COMMAND` because it specifies what executable to run when the container starts, but it is (purposely) more difficult to override.
The `ENTRYPOINT` gives a 1143 container its default nature or behavior, so that when you set an 1144 `ENTRYPOINT` you can run the container *as if it were that binary*, 1145 complete with default options, and you can pass in more options via the 1146 `COMMAND`. But, sometimes an operator may want to run something else 1147 inside the container, so you can override the default `ENTRYPOINT` at 1148 runtime by using a string to specify the new `ENTRYPOINT`. Here is an 1149 example of how to run a shell in a container that has been set up to 1150 automatically run something else (like `/usr/bin/redis-server`): 1151 1152 $ docker run -i -t --entrypoint /bin/bash example/redis 1153 1154 or two examples of how to pass more parameters to that ENTRYPOINT: 1155 1156 $ docker run -i -t --entrypoint /bin/bash example/redis -c ls -l 1157 $ docker run -i -t --entrypoint /usr/bin/redis-cli example/redis --help 1158 1159 ### EXPOSE (incoming ports) 1160 1161 The following `run` command options work with container networking: 1162 1163 --expose=[]: Expose a port or a range of ports inside the container. 1164 These are additional to those exposed by the `EXPOSE` instruction 1165 -P=false : Publish all exposed ports to the host interfaces 1166 -p=[] : Publish a container᾿s port or a range of ports to the host 1167 format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort 1168 Both hostPort and containerPort can be specified as a 1169 range of ports. When specifying ranges for both, the 1170 number of container ports in the range must match the 1171 number of host ports in the range, for example: 1172 -p 1234-1236:1234-1236/tcp 1173 1174 When specifying a range for hostPort only, the 1175 containerPort must not be a range. In this case the 1176 container port is published somewhere within the 1177 specified hostPort range. (e.g., `-p 1234-1236:1234/tcp`) 1178 1179 (use 'docker port' to see the actual mapping) 1180 1181 --link="" : Add link to another container (<name or id>:alias or <name or id>) 1182 1183 With the exception of the `EXPOSE` directive, an image developer hasn't 1184 got much control over networking. The `EXPOSE` instruction defines the 1185 initial incoming ports that provide services. These ports are available 1186 to processes inside the container. An operator can use the `--expose` 1187 option to add to the exposed ports. 1188 1189 To expose a container's internal port, an operator can start the 1190 container with the `-P` or `-p` flag. The exposed port is accessible on 1191 the host and the ports are available to any client that can reach the 1192 host. 1193 1194 The `-P` option publishes all the ports to the host interfaces. Docker 1195 binds each exposed port to a random port on the host. The range of 1196 ports are within an *ephemeral port range* defined by 1197 `/proc/sys/net/ipv4/ip_local_port_range`. Use the `-p` flag to 1198 explicitly map a single port or range of ports. 1199 1200 The port number inside the container (where the service listens) does 1201 not need to match the port number exposed on the outside of the 1202 container (where clients connect). For example, inside the container an 1203 HTTP service is listening on port 80 (and so the image developer 1204 specifies `EXPOSE 80` in the Dockerfile). At runtime, the port might be 1205 bound to 42800 on the host. To find the mapping between the host ports 1206 and the exposed ports, use `docker port`. 
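For example, assuming an image whose Dockerfile contains `EXPOSE 80` (the image name `my_web_image` is a placeholder), you can publish all exposed ports to random host ports and then look up the mapping; the host port shown will differ on your system:

    $ docker run -d -P --name web my_web_image
    $ docker port web 80
    0.0.0.0:32768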
If the operator uses `--link` when starting a new client container, then the client container can access the exposed port via a private networking interface. Linking is a legacy feature that is only supported on the default bridge network. You should prefer the Docker networks feature instead. For more information on this feature, see the [*Docker network overview*](../userguide/networking/index.md).

### ENV (environment variables)

When a new container is created, Docker will set the following environment variables automatically:

<table>
 <tr>
  <th>Variable</th>
  <th>Value</th>
 </tr>
 <tr>
  <td><code>HOME</code></td>
  <td>
    Set based on the value of <code>USER</code>
  </td>
 </tr>
 <tr>
  <td><code>HOSTNAME</code></td>
  <td>
    The hostname associated with the container
  </td>
 </tr>
 <tr>
  <td><code>PATH</code></td>
  <td>
    Includes popular directories, such as:<br>
    <code>/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</code>
  </td>
 </tr>
 <tr>
  <td><code>TERM</code></td>
  <td><code>xterm</code> if the container is allocated a pseudo-TTY</td>
 </tr>
</table>

Additionally, the operator can **set any environment variable** in the container by using one or more `-e` flags, even overriding those mentioned above, or already defined by the developer with a Dockerfile `ENV`:

    $ docker run -e "deep=purple" --rm ubuntu /bin/bash -c export
    declare -x HOME="/"
    declare -x HOSTNAME="85bc26a0e200"
    declare -x OLDPWD
    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    declare -x PWD="/"
    declare -x SHLVL="1"
    declare -x container="lxc"
    declare -x deep="purple"

Similarly the operator can set the **hostname** with `-h`.

### VOLUME (shared filesystems)

    -v=[]: Create a bind mount with: [host-dir:]container-dir[:<options>], where
           options are comma delimited and selected from [rw|ro] and [z|Z].
           If 'host-dir' is missing, then docker creates a new volume.
           If neither 'rw' or 'ro' is specified then the volume is mounted
           in read-write mode.
    --volumes-from="": Mount all volumes from the given container(s)

> **Note**:
> The auto-creation of the host path has been [*deprecated*](../misc/deprecated.md#auto-creating-missing-host-paths-for-bind-mounts).

The volumes commands are complex enough to have their own documentation in section [*Managing data in containers*](../userguide/dockervolumes.md). A developer can define one or more `VOLUME`s associated with an image, but only the operator can give access from one container to another (or from a container to a volume mounted on the host).

The `container-dir` must always be an absolute path such as `/src/docs`. The `host-dir` can either be an absolute path or a `name` value. If you supply an absolute path for the `host-dir`, Docker bind-mounts to the path you specify. If you supply a `name`, Docker creates a named volume by that `name`.

A `name` value must start with an alphanumeric character, followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen). An absolute path starts with a `/` (forward slash).

For example, you can specify either `/foo` or `foo` for a `host-dir` value.
1294 If you supply the `/foo` value, Docker creates a bind-mount. If you supply 1295 the `foo` specification, Docker creates a named volume. 1296 1297 ### USER 1298 1299 `root` (id = 0) is the default user within a container. The image developer can 1300 create additional users. Those users are accessible by name. When passing a numeric 1301 ID, the user does not have to exist in the container. 1302 1303 The developer can set a default user to run the first process with the 1304 Dockerfile `USER` instruction. When starting a container, the operator can override 1305 the `USER` instruction by passing the `-u` option. 1306 1307 -u="": Username or UID 1308 1309 > **Note:** if you pass a numeric uid, it must be in the range of 0-2147483647. 1310 1311 ### WORKDIR 1312 1313 The default working directory for running binaries within a container is the 1314 root directory (`/`), but the developer can set a different default with the 1315 Dockerfile `WORKDIR` command. The operator can override this with: 1316 1317 -w="": Working directory inside the container
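Putting the last few overrides together, a common pattern is to mount the current host directory into the container, make it the working directory, and run as a non-root UID; the UID `1000` below is just an example:

    $ docker run --rm -v "$(pwd)":/src -w /src -u 1000 ubuntu:14.04 ls -l /src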