---
title: "service create"
description: "The service create command description and usage"
keywords: "service, create"
---

# service create

```Markdown
Usage: docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

Create a new service

Options:
      --cap-add list                       Add Linux capabilities
      --cap-drop list                      Drop Linux capabilities
      --config config                      Specify configurations to expose to the service
      --constraint list                    Placement constraints
      --container-label list               Container labels
      --credential-spec credential-spec    Credential spec for managed service account (Windows only)
  -d, --detach                             Exit immediately instead of waiting for the service to converge (default true)
      --dns list                           Set custom DNS servers
      --dns-option list                    Set DNS options
      --dns-search list                    Set custom DNS search domains
      --endpoint-mode string               Endpoint mode (vip or dnsrr) (default "vip")
      --entrypoint command                 Overwrite the default ENTRYPOINT of the image
  -e, --env list                           Set environment variables
      --env-file list                      Read in a file of environment variables
      --generic-resource list              User defined resources request
      --group list                         Set one or more supplementary user groups for the container
      --health-cmd string                  Command to run to check health
      --health-interval duration           Time between running the check (ms|s|m|h)
      --health-retries int                 Consecutive failures needed to report unhealthy
      --health-start-period duration       Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
      --health-timeout duration            Maximum time to allow one check to run (ms|s|m|h)
      --help                               Print usage
      --host list                          Set one or more custom host-to-IP mappings (host:ip)
      --hostname string                    Container hostname
      --init bool                          Use an init inside each service container to forward signals and reap processes
      --isolation string                   Service container isolation mode
  -l, --label list                         Service labels
      --limit-cpu decimal                  Limit CPUs
      --limit-memory bytes                 Limit Memory
      --limit-pids int                     Limit maximum number of processes (default 0 = unlimited)
      --log-driver string                  Logging driver for service
      --log-opt list                       Logging driver options
      --max-concurrent                     Number of job tasks to run at once (default equal to --replicas)
      --mode string                        Service mode (replicated, global, replicated-job, or global-job) (default "replicated")
      --mount mount                        Attach a filesystem mount to the service
      --name string                        Service name
      --network network                    Network attachments
      --no-healthcheck                     Disable any container-specified HEALTHCHECK
      --no-resolve-image                   Do not query the registry to resolve image digest and supported platforms
      --placement-pref pref                Add a placement preference
  -p, --publish port                       Publish a port as a node port
  -q, --quiet                              Suppress progress output
      --read-only                          Mount the container's root filesystem as read only
      --replicas uint                      Number of tasks
      --replicas-max-per-node uint         Maximum number of tasks per node (default 0 = unlimited)
      --reserve-cpu decimal                Reserve CPUs
      --reserve-memory bytes               Reserve Memory
      --restart-condition string           Restart when condition is met ("none"|"on-failure"|"any") (default "any")
      --restart-delay duration             Delay between restart attempts (ns|us|ms|s|m|h) (default 5s)
      --restart-max-attempts uint          Maximum number of restarts before giving up
      --restart-window duration            Window used to evaluate the restart policy (ns|us|ms|s|m|h)
      --rollback-delay duration            Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s)
      --rollback-failure-action string     Action on rollback failure ("pause"|"continue") (default "pause")
      --rollback-max-failure-ratio float   Failure rate to tolerate during a rollback (default 0)
      --rollback-monitor duration          Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (default 5s)
      --rollback-order string              Rollback order ("start-first"|"stop-first") (default "stop-first")
      --rollback-parallelism uint          Maximum number of tasks rolled back simultaneously (0 to roll back all at once) (default 1)
      --secret secret                      Specify secrets to expose to the service
      --stop-grace-period duration         Time to wait before force killing a container (ns|us|ms|s|m|h) (default 10s)
      --stop-signal string                 Signal to stop the container
      --sysctl list                        Sysctl options
  -t, --tty                                Allocate a pseudo-TTY
      --ulimit ulimit                      Ulimit options (default [])
      --update-delay duration              Delay between updates (ns|us|ms|s|m|h) (default 0s)
      --update-failure-action string       Action on update failure ("pause"|"continue"|"rollback") (default "pause")
      --update-max-failure-ratio float     Failure rate to tolerate during an update (default 0)
      --update-monitor duration            Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s)
      --update-order string                Update order ("start-first"|"stop-first") (default "stop-first")
      --update-parallelism uint            Maximum number of tasks updated simultaneously (0 to update all at once) (default 1)
  -u, --user string                        Username or UID (format: <name|uid>[:<group|gid>])
      --with-registry-auth                 Send registry authentication details to swarm agents
  -w, --workdir string                     Working directory inside the container
```

## Description

Creates a service as described by the specified parameters.

> **Note**
>
> This is a cluster management command, and must be executed on a swarm
> manager node. To learn about managers and workers, refer to the
> [Swarm mode section](https://docs.docker.com/engine/swarm/) in the
> documentation.

## Examples

### Create a service

```console
$ docker service create --name redis redis:3.0.6

dmu1ept4cxcfe8k8lhtux3ro3

$ docker service create --mode global --name redis2 redis:3.0.6

a8q9dasaafudfs8q8w32udass

$ docker service ls

ID            NAME    MODE        REPLICAS  IMAGE
dmu1ept4cxcf  redis   replicated  1/1       redis:3.0.6
a8q9dasaafud  redis2  global      1/1       redis:3.0.6
```

### <a name="with-registry-auth"></a> Create a service using an image on a private registry (--with-registry-auth)

If your image is available on a private registry which requires login, use the
`--with-registry-auth` flag with `docker service create`, after logging in. If
your image is stored on `registry.example.com`, which is a private registry, use
a command like the following:

```console
$ docker login registry.example.com

$ docker service create \
  --with-registry-auth \
  --name my_service \
  registry.example.com/acme/my_image:latest
```

This passes the login token from your local client to the swarm nodes where the
service is deployed, using the encrypted WAL logs. With this information, the
nodes are able to log into the registry and pull the image.
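
If the registry credentials are rotated after the service has been created, the
token stored in the swarm can be refreshed by running `docker service update`
with the same flag. A minimal sketch, reusing the `my_service` name from the
example above:

```console
$ docker login registry.example.com

$ docker service update --with-registry-auth my_service
```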

### <a name="replicas"></a> Create a service with 5 replica tasks (--replicas)

Use the `--replicas` flag to set the number of replica tasks for a replicated
service. The following command creates a `redis` service with `5` replica tasks:

```console
$ docker service create --name redis --replicas=5 redis:3.0.6

4cdgfyky7ozwh3htjfw0d12qv
```

The above command sets the *desired* number of tasks for the service. Even
though the command returns immediately, actual scaling of the service may take
some time. The `REPLICAS` column shows both the *actual* and *desired* number
of replica tasks for the service.

In the following example the desired state is `5` replicas, but the current
number of `RUNNING` tasks is `3`:

```console
$ docker service ls

ID            NAME   MODE        REPLICAS  IMAGE
4cdgfyky7ozw  redis  replicated  3/5       redis:3.0.6
```

Once all the tasks are created and `RUNNING`, the actual number of tasks is
equal to the desired number:

```console
$ docker service ls

ID            NAME   MODE        REPLICAS  IMAGE
4cdgfyky7ozw  redis  replicated  5/5       redis:3.0.6
```

### <a name="secret"></a> Create a service with secrets (--secret)

Use the `--secret` flag to give a container access to a
[secret](secret_create.md).

Create a service specifying a secret:

```console
$ docker service create --name redis --secret secret.json redis:3.0.6

4cdgfyky7ozwh3htjfw0d12qv
```

Create a service specifying the secret, target, user/group ID, and mode:

```console
$ docker service create --name redis \
    --secret source=ssh-key,target=ssh \
    --secret source=app-key,target=app,uid=1000,gid=1001,mode=0400 \
    redis:3.0.6

4cdgfyky7ozwh3htjfw0d12qv
```

To grant a service access to multiple secrets, use multiple `--secret` flags.

Secrets are located in `/run/secrets` in the container. If no target is
specified, the name of the secret is used as the name of the in-memory file in
the container. If a target is specified, that is used as the filename. In the
example above, two files are created: `/run/secrets/ssh` and
`/run/secrets/app` for the two secret targets specified.

### <a name="config"></a> Create a service with configs (--config)

Use the `--config` flag to give a container access to a
[config](config_create.md).

Create a service with a config. The config will be mounted into `redis-config`,
be owned by the user who runs the command inside the container (often `root`),
and have file mode `0444` or world-readable. You can specify the `uid` and `gid`
as numerical IDs or names. When using names, the provided group/user names must
pre-exist in the container. The `mode` is specified as a 4-number sequence such
as `0755`.

```console
$ docker service create --name=redis --config redis-conf redis:3.0.6
```

Create a service with a config and specify the target location and file mode:

```console
$ docker service create --name redis \
    --config source=redis-conf,target=/etc/redis/redis.conf,mode=0400 redis:3.0.6
```

To grant a service access to multiple configs, use multiple `--config` flags.

Configs are located in `/` in the container if no target is specified. If no
target is specified, the name of the config is used as the name of the file in
the container. If a target is specified, that is used as the filename.
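
To combine these options, you can repeat the `--config` flag, once per config.
The following sketch attaches two configs to the same service; the second
config name (`redis-users`) and its target path are hypothetical:

```console
$ docker service create --name redis \
    --config source=redis-conf,target=/etc/redis/redis.conf,mode=0400 \
    --config source=redis-users,target=/etc/redis/users.acl,mode=0400 \
    redis:3.0.6
```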

### <a name="update-delay"></a> Create a service with a rolling update policy

```console
$ docker service create \
  --replicas 10 \
  --name redis \
  --update-delay 10s \
  --update-parallelism 2 \
  redis:3.0.6
```

When you run a [service update](service_update.md), the scheduler updates a
maximum of 2 tasks at a time, with `10s` between updates. For more information,
refer to the [rolling updates
tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/rolling-update/).

### <a name="env"></a> Set environment variables (-e, --env)

This sets an environment variable for all tasks in a service. For example:

```console
$ docker service create \
  --name redis_2 \
  --replicas 5 \
  --env MYVAR=foo \
  redis:3.0.6
```

To specify multiple environment variables, specify multiple `--env` flags, each
with a separate key-value pair.

```console
$ docker service create \
  --name redis_2 \
  --replicas 5 \
  --env MYVAR=foo \
  --env MYVAR2=bar \
  redis:3.0.6
```

### <a name="hostname"></a> Create a service with specific hostname (--hostname)

This option sets the hostname of the service's containers to a specific string.
For example:

```console
$ docker service create --name redis --hostname myredis redis:3.0.6
```

### <a name="label"></a> Set metadata on a service (-l, --label)

A label is a `key=value` pair that applies metadata to a service. To label a
service with two labels:

```console
$ docker service create \
  --name redis_2 \
  --label com.example.foo="bar" \
  --label bar=baz \
  redis:3.0.6
```

For more information about labels, refer to [apply custom
metadata](https://docs.docker.com/config/labels-custom-metadata/).

### <a name="mount"></a> Add bind mounts, volumes or memory filesystems (--mount)

Docker supports four different kinds of mounts, which allow containers to read
from or write to files or directories, either on the host operating system, or
on memory filesystems. These types are _data volumes_ (often referred to simply
as volumes), _bind mounts_, _tmpfs_, and _named pipes_.

A **bind mount** makes a file or directory on the host available to the
container it is mounted within. A bind mount may be either read-only or
read-write. For example, a container might share its host's DNS information by
means of a bind mount of the host's `/etc/resolv.conf`, or a container might
write logs to its host's `/var/log/myContainerLogs` directory. If you use
bind mounts and your host and containers have different notions of permissions,
access controls, or other such details, you will run into portability issues.

A **named volume** is a mechanism for decoupling persistent data needed by your
container from the image used to create the container and from the host machine.
Named volumes are created and managed by Docker, and a named volume persists
even when no container is currently using it. Data in named volumes can be
shared between a container and the host machine, as well as between multiple
containers. Docker uses a _volume driver_ to create, manage, and mount volumes.
You can back up or restore volumes using Docker commands.

A **tmpfs** mounts a tmpfs inside a container for volatile data.

A **npipe** mounts a named pipe from the host into the container.
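
A tmpfs mount is declared with the same `--mount` flag as the other mount
types. The following is a minimal sketch; the target path and size are
illustrative:

```console
$ docker service create \
  --name my-service \
  --mount type=tmpfs,destination=/scratch,tmpfs-size=104857600 \
  nginx:alpine
```

Here `/scratch` receives an in-memory filesystem capped at roughly 100MB, and
its contents are discarded when the task stops.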

Consider a situation where your image starts a lightweight web server. You could
use that image as a base image, copy in your website's HTML files, and package
that into another image. Each time your website changed, you'd need to update
the new image and redeploy all of the containers serving your website. A better
solution is to store the website in a named volume which is attached to each of
your web server containers when they start. To update the website, you just
update the named volume.

For more information about named volumes, see
[Data Volumes](https://docs.docker.com/storage/volumes/).

The following table describes options which apply to both bind mounts and named
volumes in a service:

<table>
  <tr>
    <th>Option</th>
    <th>Required</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><b>type</b></td>
    <td></td>
    <td>
      <p>The type of mount, can be either <tt>volume</tt>, <tt>bind</tt>, <tt>tmpfs</tt>, or <tt>npipe</tt>. Defaults to <tt>volume</tt> if no type is specified.</p>
      <ul>
        <li><tt>volume</tt>: mounts a <a href="https://docs.docker.com/engine/reference/commandline/volume_create/">managed volume</a> into the container.</li>
        <li><tt>bind</tt>: bind-mounts a directory or file from the host into the container.</li>
        <li><tt>tmpfs</tt>: mounts a tmpfs in the container.</li>
        <li><tt>npipe</tt>: mounts a named pipe from the host into the container (Windows containers only).</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td><b>src</b> or <b>source</b></td>
    <td>for <tt>type=bind</tt> and <tt>type=npipe</tt></td>
    <td>
      <ul>
        <li>
          <tt>type=volume</tt>: <tt>src</tt> is an optional way to specify the name of the volume (for example, <tt>src=my-volume</tt>).
          If the named volume does not exist, it is automatically created. If no <tt>src</tt> is specified, the volume is
          assigned a random name which is guaranteed to be unique on the host, but may not be unique cluster-wide.
          A randomly-named volume has the same lifecycle as its container and is destroyed when the <i>container</i>
          is destroyed (which is upon <tt>service update</tt>, or when scaling or re-balancing the service).
        </li>
        <li>
          <tt>type=bind</tt>: <tt>src</tt> is required, and specifies an absolute path to the file or directory to bind-mount
          (for example, <tt>src=/path/on/host/</tt>). An error is produced if the file or directory does not exist.
        </li>
        <li>
          <tt>type=tmpfs</tt>: <tt>src</tt> is not supported.
        </li>
      </ul>
    </td>
  </tr>
  <tr>
    <td><p><b>dst</b> or <b>destination</b> or <b>target</b></p></td>
    <td>yes</td>
    <td>
      <p>Mount path inside the container, for example <tt>/some/path/in/container/</tt>.
      If the path does not exist in the container's filesystem, the Engine creates
      a directory at the specified location before mounting the volume or bind mount.</p>
    </td>
  </tr>
  <tr>
    <td><p><b>readonly</b> or <b>ro</b></p></td>
    <td></td>
    <td>
      <p>The Engine mounts binds and volumes <tt>read-write</tt> unless the <tt>readonly</tt> option
      is given when mounting the bind or volume. Note that setting <tt>readonly</tt> for a
      bind-mount does not make its submounts <tt>readonly</tt> on the current Linux implementation.
      See also <tt>bind-nonrecursive</tt>.</p>
      <ul>
        <li><tt>true</tt> or <tt>1</tt> or no value: Mounts the bind or volume read-only.</li>
        <li><tt>false</tt> or <tt>0</tt>: Mounts the bind or volume read-write.</li>
      </ul>
    </td>
  </tr>
</table>
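
As a quick illustration of the options above, the following sketch mounts a
named volume read-only; the volume name and target path are illustrative:

```console
$ docker service create \
  --name my-service \
  --mount type=volume,src=my-volume,dst=/srv/www,readonly \
  nginx:alpine
```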

#### Options for Bind Mounts

The following options can only be used for bind mounts (`type=bind`):

<table>
  <tr>
    <th>Option</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><b>bind-propagation</b></td>
    <td>
      <p>See the <a href="#bind-propagation">bind propagation section</a>.</p>
    </td>
  </tr>
  <tr>
    <td><b>consistency</b></td>
    <td>
      <p>The consistency requirements for the mount; one of</p>
      <ul>
        <li><tt>default</tt>: Equivalent to <tt>consistent</tt>.</li>
        <li><tt>consistent</tt>: Full consistency. The container runtime and the host maintain an identical view of the mount at all times.</li>
        <li><tt>cached</tt>: The host's view of the mount is authoritative. There may be delays before updates made on the host are visible within a container.</li>
        <li><tt>delegated</tt>: The container runtime's view of the mount is authoritative. There may be delays before updates made in a container are visible on the host.</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td><b>bind-nonrecursive</b></td>
    <td>
      By default, submounts are recursively bind-mounted as well. However, this behavior can be confusing when a
      bind mount is configured with the <tt>readonly</tt> option, because submounts are not mounted as read-only.
      Set <tt>bind-nonrecursive</tt> to disable recursive bind-mount.<br />
      <br />
      A value is optional:<br />
      <br />
      <ul>
        <li><tt>true</tt> or <tt>1</tt>: Disables recursive bind-mount.</li>
        <li><tt>false</tt> or <tt>0</tt>: Default if you do not provide a value. Enables recursive bind-mount.</li>
      </ul>
    </td>
  </tr>
</table>

##### Bind propagation

Bind propagation refers to whether or not mounts created within a given
bind mount or named volume can be propagated to replicas of that mount. Consider
a mount point `/mnt`, which is also mounted on `/tmp`. The propagation settings
control whether a mount on `/tmp/a` would also be available on `/mnt/a`. Each
propagation setting has a recursive counterpart. In the case of recursion,
consider that `/tmp/a` is also mounted as `/foo`. The propagation settings
control whether `/mnt/a` and/or `/tmp/a` would exist.

The `bind-propagation` option defaults to `rprivate` for both bind mounts and
volume mounts, and is only configurable for bind mounts. In other words, named
volumes do not support bind propagation.

- **`shared`**: Sub-mounts of the original mount are exposed to replica mounts,
                and sub-mounts of replica mounts are also propagated to the
                original mount.
- **`slave`**: similar to a shared mount, but only in one direction. If the
               original mount exposes a sub-mount, the replica mount can see it.
               However, if the replica mount exposes a sub-mount, the original
               mount cannot see it.
- **`private`**: The mount is private. Sub-mounts within it are not exposed to
                 replica mounts, and sub-mounts of replica mounts are not
                 exposed to the original mount.
- **`rshared`**: The same as shared, but the propagation also extends to and from
                 mount points nested within any of the original or replica mount
                 points.
- **`rslave`**: The same as `slave`, but the propagation also extends to and from
                mount points nested within any of the original or replica mount
                points.
- **`rprivate`**: The default. The same as `private`, meaning that no mount points
                  anywhere within the original or replica mount points propagate
                  in either direction.
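
For example, a bind mount with shared recursive propagation could be declared
as follows; the host and container paths are hypothetical:

```console
$ docker service create \
  --name my-service \
  --mount type=bind,src=/mnt/data,dst=/data,bind-propagation=rshared \
  nginx:alpine
```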

For more information about bind propagation, see the
[Linux kernel documentation for shared subtree](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).

#### Options for named volumes

The following options can only be used for named volumes (`type=volume`):

<table>
  <tr>
    <th>Option</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><b>volume-driver</b></td>
    <td>
      <p>Name of the volume-driver plugin to use for the volume. Defaults to
      <tt>"local"</tt>, to use the local volume driver to create the volume if the
      volume does not exist.</p>
    </td>
  </tr>
  <tr>
    <td><b>volume-label</b></td>
    <td>
      One or more custom metadata ("labels") to apply to the volume upon
      creation. For example,
      <tt>volume-label=mylabel=hello-world,my-other-label=hello-mars</tt>. For more
      information about labels, refer to
      <a href="https://docs.docker.com/config/labels-custom-metadata/">apply custom metadata</a>.
    </td>
  </tr>
  <tr>
    <td><b>volume-nocopy</b></td>
    <td>
      By default, if you attach an empty volume to a container, and files or
      directories already existed at the mount-path in the container (<tt>dst</tt>),
      the Engine copies those files and directories into the volume, allowing
      the host to access them. Set <tt>volume-nocopy</tt> to disable copying files
      from the container's filesystem to the volume and mount the empty volume.<br />
      <br />
      A value is optional:<br />
      <br />
      <ul>
        <li><tt>true</tt> or <tt>1</tt>: Default if you do not provide a value. Disables copying.</li>
        <li><tt>false</tt> or <tt>0</tt>: Enables copying.</li>
      </ul>
    </td>
  </tr>
  <tr>
    <td><b>volume-opt</b></td>
    <td>
      Options specific to a given volume driver, which will be passed to the
      driver when creating the volume. Options are provided as a comma-separated
      list of key/value pairs, for example,
      <tt>volume-opt=some-option=some-value,volume-opt=some-other-option=some-other-value</tt>.
      For available options for a given driver, refer to that driver's
      documentation.
    </td>
  </tr>
</table>

#### Options for tmpfs

The following options can only be used for tmpfs mounts (`type=tmpfs`):

<table>
  <tr>
    <th>Option</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><b>tmpfs-size</b></td>
    <td>Size of the tmpfs mount in bytes. Unlimited by default in Linux.</td>
  </tr>
  <tr>
    <td><b>tmpfs-mode</b></td>
    <td>File mode of the tmpfs in octal. (e.g. <tt>"700"</tt> or <tt>"0700"</tt>.) Defaults to <tt>"1777"</tt> in Linux.</td>
  </tr>
</table>

#### Differences between "--mount" and "--volume"

The `--mount` flag supports most options that are supported by the `-v`
or `--volume` flag for `docker run`, with some important exceptions:

- The `--mount` flag allows you to specify a volume driver and volume driver
  options *per volume*, without creating the volumes in advance.
  In contrast, `docker run` allows you to specify a single volume driver which
  is shared by all volumes, using the `--volume-driver` flag.

- The `--mount` flag allows you to specify custom metadata ("labels") for a volume,
  before the volume is created.

- When you use `--mount` with `type=bind`, the host-path must refer to an *existing*
  path on the host. The path will not be created for you and the service will fail
  with an error if the path does not exist.

- The `--mount` flag does not allow you to relabel a volume with `Z` or `z` flags,
  which are used for `selinux` labeling.

#### Create a service using a named volume

The following example creates a service that uses a named volume:

```console
$ docker service create \
  --name my-service \
  --replicas 3 \
  --mount type=volume,source=my-volume,destination=/path/in/container,volume-label="color=red",volume-label="shape=round" \
  nginx:alpine
```

For each replica of the service, the engine requests a volume named "my-volume"
from the default ("local") volume driver where the task is deployed. If the
volume does not exist, the engine creates a new volume and applies the "color"
and "shape" labels.

When the task is started, the volume is mounted on `/path/in/container/` inside
the container.

Be aware that the default ("local") volume driver is locally scoped. This means
that depending on where a task is deployed, either that task gets a *new* volume
named "my-volume", or shares the same "my-volume" with other tasks of the same
service. Multiple containers writing to a single shared volume can cause data
corruption if the software running inside the container is not designed to
handle concurrent processes writing to the same location. Also take into account
that containers can be re-scheduled by the Swarm orchestrator and be deployed on
a different node.

#### Create a service that uses an anonymous volume

The following command creates a service with three replicas with an anonymous
volume on `/path/in/container`:

```console
$ docker service create \
  --name my-service \
  --replicas 3 \
  --mount type=volume,destination=/path/in/container \
  nginx:alpine
```

In this example, no name (`source`) is specified for the volume, so a new volume
is created for each task. This guarantees that each task gets its own volume,
and volumes are not shared between tasks. Anonymous volumes are removed after
the task using them is complete.

#### Create a service that uses a bind-mounted host directory

The following example bind-mounts a host directory at `/path/in/container` in
the containers backing the service:

```console
$ docker service create \
  --name my-service \
  --mount type=bind,source=/path/on/host,destination=/path/in/container \
  nginx:alpine
```

### Set service mode (--mode)

The service mode determines whether this is a _replicated_ service or a _global_
service. A replicated service runs as many tasks as specified, while a global
service runs on each active node in the swarm.
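
Replicated is the default mode. To set it explicitly, and to set the number of
tasks at the same time, a command like the following sketch can be used (the
service name is illustrative):

```console
$ docker service create \
  --name redis_repl \
  --mode replicated \
  --replicas 3 \
  redis:3.0.6
```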

The following command creates a global service:

```console
$ docker service create \
  --name redis_2 \
  --mode global \
  redis:3.0.6
```

### <a name="constraint"></a> Specify service constraints (--constraint)

You can limit the set of nodes where a task can be scheduled by defining
constraint expressions. Constraint expressions can either use a _match_ (`==`)
or _exclude_ (`!=`) rule. Multiple constraints find nodes that satisfy every
expression (AND match). Constraints can match node or Docker Engine labels as
follows:

| node attribute       | matches                        | example                                       |
|----------------------|--------------------------------|-----------------------------------------------|
| `node.id`            | Node ID                        | `node.id==2ivku8v2gvtg4`                      |
| `node.hostname`      | Node hostname                  | `node.hostname!=node-2`                       |
| `node.role`          | Node role (`manager`/`worker`) | `node.role==manager`                          |
| `node.platform.os`   | Node operating system          | `node.platform.os==windows`                   |
| `node.platform.arch` | Node architecture              | `node.platform.arch==x86_64`                  |
| `node.labels`        | User-defined node labels       | `node.labels.security==high`                  |
| `engine.labels`      | Docker Engine's labels         | `engine.labels.operatingsystem==ubuntu-22.04` |

`engine.labels` apply to Docker Engine labels like operating system, drivers,
etc. Swarm administrators add `node.labels` for operational purposes by using
the [`docker node update`](node_update.md) command.

For example, the following limits tasks for the `redis` service to nodes where
the node label `type` equals `queue`:

```console
$ docker service create \
  --name redis_2 \
  --constraint node.platform.os==linux \
  --constraint node.labels.type==queue \
  redis:3.0.6
```

If the service constraints exclude all nodes in the cluster, a message is printed
that no suitable node is found, but the scheduler will start a reconciliation
loop and deploy the service once a suitable node becomes available.

In the example below, no node satisfying the constraint was found, causing the
service to not reconcile with the desired state:

```console
$ docker service create \
  --name web \
  --constraint node.labels.region==east \
  nginx:alpine

lx1wrhhpmbbu0wuk0ybws30bc
overall progress: 0 out of 1 tasks
1/1: no suitable node (scheduling constraints not satisfied on 5 nodes)

$ docker service ls
ID             NAME   MODE         REPLICAS   IMAGE          PORTS
lx1wrhhpmbbu   web    replicated   0/1        nginx:alpine
```

After adding the `region=east` label to a node in the cluster, the service
reconciles, and the desired number of replicas are deployed:

```console
$ docker node update --label-add region=east yswe2dm4c5fdgtsrli1e8ya5l
yswe2dm4c5fdgtsrli1e8ya5l

$ docker service ls
ID             NAME   MODE         REPLICAS   IMAGE          PORTS
lx1wrhhpmbbu   web    replicated   1/1        nginx:alpine
```

### <a name="placement-pref"></a> Specify service placement preferences (--placement-pref)

You can set up the service to divide tasks evenly over different categories of
nodes. One example of where this can be useful is to balance tasks over a set
of datacenters or availability zones.
The example below illustrates this:

```console
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref spread=node.labels.datacenter \
  redis:3.0.6
```

This uses `--placement-pref` with a `spread` strategy (currently the only
supported strategy) to spread tasks evenly over the values of the `datacenter`
node label. In this example, we assume that every node has a `datacenter` node
label attached to it. If there are three different values of this label among
nodes in the swarm, one third of the tasks will be placed on the nodes
associated with each value. This is true even if there are more nodes with one
value than another. For example, consider the following set of nodes:

- Three nodes with `node.labels.datacenter=east`
- Two nodes with `node.labels.datacenter=south`
- One node with `node.labels.datacenter=west`

Since we are spreading over the values of the `datacenter` label and the
service has 9 replicas, 3 replicas will end up in each datacenter. There are
three nodes associated with the value `east`, so each one will get one of the
three replicas reserved for this value. There are two nodes with the value
`south`, and the three replicas for this value will be divided between them,
with one receiving two replicas and another receiving just one. Finally, `west`
has a single node that will get all three replicas reserved for `west`.

If the nodes in one category (for example, those with
`node.labels.datacenter=south`) can't handle their fair share of tasks due to
constraints or resource limitations, the extra tasks will be assigned to other
nodes instead, if possible.

Both engine labels and node labels are supported by placement preferences. The
example above uses a node label, because the label is referenced with
`node.labels.datacenter`. To spread over the values of an engine label, use
`--placement-pref spread=engine.labels.<labelname>`.

It is possible to add multiple placement preferences to a service. This
establishes a hierarchy of preferences, so that tasks are first divided over
one category, and then further divided over additional categories. One example
of where this may be useful is dividing tasks fairly between datacenters, and
then splitting the tasks within each datacenter over a choice of racks. To add
multiple placement preferences, specify the `--placement-pref` flag multiple
times. The order is significant, and the placement preferences will be applied
in the order given when making scheduling decisions.

The following example sets up a service with multiple placement preferences.
Tasks are spread first over the various datacenters, and then over racks
(as indicated by the respective labels):

```console
$ docker service create \
  --replicas 9 \
  --name redis_2 \
  --placement-pref 'spread=node.labels.datacenter' \
  --placement-pref 'spread=node.labels.rack' \
  redis:3.0.6
```

When updating a service with `docker service update`, `--placement-pref-add`
appends a new placement preference after all existing placement preferences.
`--placement-pref-rm` removes an existing placement preference that matches the
argument.
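
For example, to append a rack-level preference to the existing `redis_2`
service from the example above, a sketch might look like this:

```console
$ docker service update \
  --placement-pref-add 'spread=node.labels.rack' \
  redis_2
```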

### <a name="reserve-memory"></a> Specify memory requirements and constraints for a service (--reserve-memory and --limit-memory)

If your service needs a minimum amount of memory in order to run correctly,
you can use `--reserve-memory` to specify that the service should only be
scheduled on a node with this much memory available to reserve. If no node is
available that meets the criteria, the task is not scheduled, but remains in a
pending state.

The following example requires that 4GB of memory be available and reservable
on a given node before scheduling the service to run on that node.

```console
$ docker service create --reserve-memory=4GB --name=too-big nginx:alpine
```

The managers won't schedule a set of containers on a single node whose combined
reservations exceed the memory available on that node.

After a task is scheduled and running, `--reserve-memory` does not enforce a
memory limit. Use `--limit-memory` to ensure that a task uses no more than a
given amount of memory on a node. This example limits the amount of memory used
by the task to 4GB. The task will be scheduled even if each of your nodes has
only 2GB of memory, because `--limit-memory` is an upper limit.

```console
$ docker service create --limit-memory=4GB --name=too-big nginx:alpine
```

Using `--reserve-memory` and `--limit-memory` does not guarantee that Docker
will not use more memory on your host than you want. For instance, you could
create many services, the sum of whose memory usage could exhaust the available
memory.

You can prevent this scenario from exhausting the available memory by taking
into account other (non-containerized) software running on the host as well. If
`--reserve-memory` is greater than or equal to `--limit-memory`, Docker won't
schedule a service on a host that doesn't have enough memory. `--limit-memory`
will limit the service's memory to stay within that limit, so if every service
has a memory reservation and limit set, Docker services will be less likely to
saturate the host. Other non-service containers or applications running directly
on the Docker host could still exhaust memory.

There is a downside to this approach. Reserving memory also means that you may
not make optimum use of the memory available on the node. Consider a service
that under normal circumstances uses 100MB of memory, but depending on load can
"peak" at 500MB. Reserving 500MB for that service (to guarantee it can have
500MB for those "peaks") results in 400MB of memory being wasted most of the
time.

In short, you can take a more conservative or more flexible approach:

- **Conservative**: reserve 500MB, and limit to 500MB. Basically you're now
  treating the service containers as VMs, and you may be losing a big advantage
  of containers, which is greater density of services per host.

- **Flexible**: limit to 500MB in the assumption that if the service requires
  more than 500MB, it is malfunctioning. Reserve something between the 100MB
  "normal" requirement and the 500MB "peak" requirement. This assumes that when
  this service is at "peak", other services or non-container workloads probably
  won't be.

The approach you take depends heavily on the memory-usage patterns of your
workloads. You should test under normal and peak conditions before settling
on an approach.
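
As an illustration, the flexible approach described above might translate into
flags like the following; the service name and the exact values are
hypothetical:

```console
$ docker service create \
  --name my-web \
  --reserve-memory=200MB \
  --limit-memory=500MB \
  nginx:alpine
```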

On Linux, you can also limit a service's overall memory footprint on a given
host at the level of the host operating system, using `cgroups` or other
relevant operating system tools.

### <a name="replicas-max-per-node"></a> Specify maximum replicas per node (--replicas-max-per-node)

Use the `--replicas-max-per-node` flag to set the maximum number of replica tasks that can run on a node.
The following command creates an nginx service with 2 replica tasks but only one replica task per node.

One example where this can be useful is to balance tasks over a set of data centers together with `--placement-pref`
and let the `--replicas-max-per-node` setting make sure that replicas are not migrated to another datacenter during
maintenance or datacenter failure.

The example below illustrates this:

```console
$ docker service create \
  --name nginx \
  --replicas 2 \
  --replicas-max-per-node 1 \
  --placement-pref 'spread=node.labels.datacenter' \
  nginx
```

### <a name="network"></a> Attach a service to an existing network (--network)

You can use overlay networks to connect one or more services within the swarm.

First, create an overlay network on a manager node using the `docker network create`
command:

```console
$ docker network create --driver overlay my-network

etjpu59cykrptrgw0z0hk5snf
```

After you create an overlay network in swarm mode, all manager nodes have
access to the network.

You can then create a service and pass the `--network` flag to attach the
service to the overlay network:

```console
$ docker service create \
  --replicas 3 \
  --network my-network \
  --name my-web \
  nginx

716thylsndqma81j6kkkb5aus
```

The swarm extends `my-network` to each node running the service.

Containers on the same network can access each other using
[service discovery](https://docs.docker.com/network/overlay/#container-discovery).

The long form syntax of `--network` allows you to specify a list of aliases and
driver options:
`--network name=my-network,alias=web1,driver-opt=field1=value1`

### <a name="publish"></a> Publish service ports externally to the swarm (-p, --publish)

You can publish service ports to make them available externally to the swarm
using the `--publish` flag. The `--publish` flag can take two different styles
of arguments. The short version is positional, and allows you to specify the
published port and target port separated by a colon (`:`).

```console
$ docker service create --name my_web --replicas 3 --publish 8080:80 nginx
```

There is also a long format, which is easier to read and allows you to specify
more options. The long format is preferred. You cannot specify the service's
mode when using the short format.
Here is an example of using the long format for the same service as above:

```console
$ docker service create --name my_web --replicas 3 --publish published=8080,target=80 nginx
```

The options you can specify are:

<table>
  <thead>
    <tr>
      <th>Option</th>
      <th>Short syntax</th>
      <th>Long syntax</th>
      <th>Description</th>
    </tr>
  </thead>
  <tr>
    <td>published and target port</td>
    <td><tt>--publish 8080:80</tt></td>
    <td><tt>--publish published=8080,target=80</tt></td>
    <td><p>
      The target port within the container and the port to map it to on the
      nodes, using the routing mesh (<tt>ingress</tt>) or host-level networking.
      More options are available later in this table. The key-value syntax is
      preferred, because it is somewhat self-documenting.
    </p></td>
  </tr>
  <tr>
    <td>mode</td>
    <td>Not possible to set using short syntax.</td>
    <td><tt>--publish published=8080,target=80,mode=host</tt></td>
    <td><p>
      The mode to use for binding the port, either <tt>ingress</tt> or <tt>host</tt>.
      Defaults to <tt>ingress</tt> to use the routing mesh.
    </p></td>
  </tr>
  <tr>
    <td>protocol</td>
    <td><tt>--publish 8080:80/tcp</tt></td>
    <td><tt>--publish published=8080,target=80,protocol=tcp</tt></td>
    <td><p>
      The protocol to use, <tt>tcp</tt>, <tt>udp</tt>, or <tt>sctp</tt>. Defaults to
      <tt>tcp</tt>. To bind a port for both protocols, specify the <tt>-p</tt> or
      <tt>--publish</tt> flag twice.
    </p></td>
  </tr>
</table>

When you publish a service port using `ingress` mode, the swarm routing mesh
makes the service accessible at the published port on every node regardless of
whether there is a task for the service running on the node. If you use `host`
mode, the port is only bound on nodes where the service is running, and a given
port on a node can only be bound once. You can only set the publication mode
using the long syntax. For more information refer to
[Use swarm mode routing mesh](https://docs.docker.com/engine/swarm/ingress/).

### <a name="credentials-spec"></a> Provide credential specs for managed service accounts (--credential-spec)

This option is only used for services using Windows containers. The
`--credential-spec` must be in the format `file://<filename>` or
`registry://<value-name>`.

When using the `file://<filename>` format, the referenced file must be
present in the `CredentialSpecs` subdirectory in the docker data directory,
which defaults to `C:\ProgramData\Docker\` on Windows. For example,
specifying `file://spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.

When using the `registry://<value-name>` format, the credential spec is
read from the Windows registry on the daemon's host. The specified
registry value must be located in:

    HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs

### Create services using templates

You can use templates for some flags of `service create`, using the syntax
provided by Go's [text/template](https://golang.org/pkg/text/template/) package.

The supported flags are the following:

- `--hostname`
- `--mount`
- `--env`

Valid placeholders for the Go template are listed below:

<table>
  <tr>
    <th>Placeholder</th>
    <th>Description</th>
  </tr>
  <tr>
    <td><tt>.Service.ID</tt></td>
    <td>Service ID</td>
  </tr>
  <tr>
    <td><tt>.Service.Name</tt></td>
    <td>Service name</td>
  </tr>
  <tr>
    <td><tt>.Service.Labels</tt></td>
    <td>Service labels</td>
  </tr>
  <tr>
    <td><tt>.Node.ID</tt></td>
    <td>Node ID</td>
  </tr>
  <tr>
    <td><tt>.Node.Hostname</tt></td>
    <td>Node Hostname</td>
  </tr>
  <tr>
    <td><tt>.Task.ID</tt></td>
    <td>Task ID</td>
  </tr>
  <tr>
    <td><tt>.Task.Name</tt></td>
    <td>Task name</td>
  </tr>
  <tr>
    <td><tt>.Task.Slot</tt></td>
    <td>Task slot</td>
  </tr>
</table>

#### Template example

In this example, we are going to set the template of the created containers based on the
service's name and the ID and hostname of the node where it runs.

```console
$ docker service create \
    --name hosttempl \
    --hostname="{{.Node.Hostname}}-{{.Node.ID}}-{{.Service.Name}}" \
    busybox top

va8ew30grofhjoychbr6iot8c

$ docker service ps va8ew30grofhjoychbr6iot8c

ID            NAME         IMAGE                                                                                   NODE          DESIRED STATE  CURRENT STATE               ERROR  PORTS
wo41w8hg8qan  hosttempl.1  busybox:latest@sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912  2e7a8a9c4da2  Running        Running about a minute ago

$ docker inspect --format="{{.Config.Hostname}}" 2e7a8a9c4da2-wo41w8hg8qanxwjwsg4kxpprj-hosttempl

x3ti0erg11rjpg64m75kej2mz-hosttempl
```

### <a name="isolation"></a> Specify isolation mode on Windows (--isolation)

By default, tasks scheduled on Windows nodes are run using the default isolation mode
configured for this particular node. To force a specific isolation mode, you can use
the `--isolation` flag:

```console
$ docker service create --name myservice --isolation=process microsoft/nanoserver
```

Supported isolation modes on Windows are:

- `default`: use default settings specified on the node running the task
- `process`: use process isolation (Windows server only)
- `hyperv`: use Hyper-V isolation

### <a name="generic-resources"></a> Create services requesting Generic Resources (--generic-resource)

You can narrow the kind of nodes your task can land on by using the
`--generic-resource` flag (if the nodes advertise these resources):

```console
$ docker service create \
  --name cuda \
  --generic-resource "NVIDIA-GPU=2" \
  --generic-resource "SSD=1" \
  nvidia/cuda
```

### Running as a job

Jobs are a special kind of service designed to run an operation to completion
and then stop, as opposed to running long-running daemons. When a Task
belonging to a job exits successfully (return value 0), the Task is marked as
"Completed", and is not run again.

Jobs are started by using one of two modes, `replicated-job` or `global-job`:

```console
$ docker service create --name myjob \
  --mode replicated-job \
  bash "true"
```

This command will run one Task, which will, using the `bash` image, execute the
command `true`, which will return 0 and then exit.
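
To verify that the job ran to completion, you can list its tasks with
`docker service ps`; once the task has exited successfully, it is reported
with a `Complete` state (exact output depends on your CLI version):

```console
$ docker service ps myjob
```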

Though Jobs are ultimately a different kind of service, they have a couple of
caveats compared to other services:

- None of the update or rollback configuration options are valid. Jobs can be
  updated, but cannot be rolled out or rolled back, making these configuration
  options moot.
- Jobs are never restarted on reaching the `Complete` state. This means that
  for jobs, setting `--restart-condition` to `any` is the same as setting it to
  `on-failure`.

Jobs are available in both replicated and global modes.

#### Replicated Jobs

A replicated job is like a replicated service. Setting the `--replicas` flag
will specify the total number of iterations of a job to execute.

By default, all replicas of a replicated job will launch at once. To control
the total number of replicas that are executing simultaneously at any one time,
the `--max-concurrent` flag can be used:

```console
$ docker service create \
  --name mythrottledjob \
  --mode replicated-job \
  --replicas 10 \
  --max-concurrent 2 \
  bash "true"
```

The above command will execute 10 Tasks in total, but only 2 of them will be
run at any given time.

#### Global Jobs

Global jobs are like global services, in that a Task is executed once on each node
matching placement constraints. Global jobs are represented by the mode `global-job`.

Note that after a Global job is created, any new Nodes added to the cluster
will have a Task from that job started on them. The Global Job does not as a
whole have a "done" state, except insofar as every Node meeting the job's
constraints has a Completed task.

## Related commands

* [service inspect](service_inspect.md)
* [service logs](service_logs.md)
* [service ls](service_ls.md)
* [service ps](service_ps.md)
* [service rm](service_rm.md)
* [service rollback](service_rollback.md)
* [service scale](service_scale.md)
* [service update](service_update.md)

<style>table tr > td:first-child { white-space: nowrap;}</style>