github.com/Ilhicas/nomad@v1.0.4-0.20210304152020-e86851182bc3/website/content/docs/job-specification/group.mdx

     1  ---
     2  layout: docs
     3  page_title: group Stanza - Job Specification
     4  sidebar_title: group
     5  description: |-
     6    The "group" stanza defines a series of tasks that should be co-located on the
     7    same Nomad client. Any task within a group will be placed on the same client.
     8  ---
     9  
    10  # `group` Stanza
    11  
    12  <Placement groups={['job', 'group']} />
    13  
    14  The `group` stanza defines a series of tasks that should be co-located on the
    15  same Nomad client. Any [task][] within a group will be placed on the same
    16  client.
    17  
    18  ```hcl
    19  job "docs" {
    20    group "example" {
    21      # ...
    22    }
    23  }
    24  ```
    25  
    26  ## `group` Parameters
    27  
    28  - `constraint` <code>([Constraint][]: nil)</code> -
    29    This can be provided multiple times to define additional constraints.
    30  
    31  - `affinity` <code>([Affinity][]: nil)</code> - This can be provided
    32    multiple times to define preferred placement criteria.
    33  
    34  - `spread` <code>([Spread][spread]: nil)</code> - This can be provided
    35    multiple times to define criteria for spreading allocations across a
    36    node attribute or metadata. See the
    37    [Nomad spread reference](/docs/job-specification/spread) for more details.
    38  
    39  - `count` `(int)` - Specifies the number of instances that should be running
    40    under this group. This value must be non-negative. This defaults to the
    41    `min` value specified in the [`scaling`](/docs/job-specification/scaling)
    42    block, if present; otherwise, this defaults to `1`.
    43  
    44  - `ephemeral_disk` <code>([EphemeralDisk][]: nil)</code> - Specifies the
    45    ephemeral disk requirements of the group. Ephemeral disks can be marked as
    46    sticky and support live data migrations.
    47  
    48  - `meta` <code>([Meta][]: nil)</code> - Specifies a key-value map that annotates
    49    the task group with user-defined metadata.
    50  
    51  - `migrate` <code>([Migrate][]: nil)</code> - Specifies the group strategy for
    52    migrating off of draining nodes. Only service jobs with a count greater than
    53    1 support migrate stanzas.
    54  
    55  - `network` <code>([Network][]: &lt;optional&gt;)</code> - Specifies the network
    56    requirements and configuration, including static and dynamic port allocations,
    57    for the group.
    58  
    59  - `reschedule` <code>([Reschedule][]: nil)</code> - Specifies a rescheduling
    60    strategy. Nomad will then attempt to schedule the task on another node if any
    61    of the group allocation statuses become "failed".
    62  
    63  - `restart` <code>([Restart][]: nil)</code> - Specifies the restart policy for
    64    all tasks in this group. If omitted, a default policy exists for each job
    65    type, which can be found in the [restart stanza documentation][restart].
    66  
    67  - `service` <code>([Service][]: nil)</code> - Specifies integrations with
    68    [Consul](/docs/configuration/consul) for service discovery. Nomad
    69    automatically registers each service when an allocation is started and
    70    de-registers it when the allocation is destroyed.
    71  
    72  - `shutdown_delay` `(string: "0s")` - Specifies the duration to wait when
    73    stopping a group's tasks. The delay occurs between Consul deregistration
    74    and sending each task a shutdown signal. Ideally, services would fail
    75    health checks once they receive a shutdown signal. Alternatively,
    76    `shutdown_delay` may be set to give in-flight requests time to complete
    77    before shutting down. A group-level `shutdown_delay` applies regardless of
    78    whether the group has any services defined. In addition, tasks may have
    79    their own [`shutdown_delay`](/docs/job-specification/task#shutdown_delay),
    80    which waits between deregistering task services and stopping the task.
    81  
    82  - `stop_after_client_disconnect` `(string: "")` - Specifies a duration
    83    after which a Nomad client that cannot communicate with the servers
    84    will stop allocations based on this task group. By default, a client
    85    will not stop an allocation until explicitly told to by a server. A
    86    client that fails to heartbeat to a server within the
    87    [`heartbeat_grace`] window will be marked "lost", along with any
    88    allocations running on it, and Nomad will schedule replacement
    89    allocations. However, the replaced allocations will continue to run
    90    on the non-responsive client; an operator may want those allocations
    91    stopped as well, for example when they require exclusive access to an
    92    external resource. When this duration is specified, the Nomad client
    93    will stop the allocations after it elapses. The Nomad client process
    94    must be running for this to occur.
    95  
    96  - `task` <code>([Task][]: &lt;required&gt;)</code> - Specifies one or more tasks to run
    97    within this group. This can be specified multiple times to add additional
    98    tasks to the group.
    99  
   100  - `vault` <code>([Vault][]: nil)</code> - Specifies the set of Vault policies
   101    required by all tasks in this group. Overrides a `vault` block set at the
   102    `job` level.
   103  
   104  - `volume` <code>([Volume][]: nil)</code> - Specifies the volumes that are
   105    required by tasks within the group, as shown in the sketch after this list.
   106  
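Several of these parameters are commonly combined on a single group. The
following is a minimal, illustrative sketch only; the Vault policy name and
host volume source are assumptions that would need to exist in your cluster,
and the other values are illustrative:

```hcl
group "example" {
  count = 2

  # Request 500 MB of sticky ephemeral disk for the group's allocations.
  ephemeral_disk {
    size   = 500
    sticky = true
  }

  # Restart failing tasks twice within 30 minutes before marking the
  # allocation as failed.
  restart {
    attempts = 2
    interval = "30m"
    delay    = "15s"
    mode     = "fail"
  }

  # All tasks in this group receive a Vault token with this (assumed) policy.
  vault {
    policies = ["example-read"]
  }

  # Request an (assumed) host volume registered on eligible clients.
  volume "data" {
    type   = "host"
    source = "example-data"
  }

  task "server" {
    # Mount the group volume into the task's filesystem.
    volume_mount {
      volume      = "data"
      destination = "/var/lib/example"
    }

    # ...
  }
}
```
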
   107  ## `group` Examples
   108  
   109  The following examples only show the `group` stanzas. Remember that the
   110  `group` stanza is only valid in the placements listed above.
   111  
   112  ### Specifying Count
   113  
   114  This example specifies that 5 instances of the tasks within this group should be
   115  running:
   116  
   117  ```hcl
   118  group "example" {
   119    count = 5
   120  }
   121  ```
   122  
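As noted in the parameter list above, when a
[`scaling`](/docs/job-specification/scaling) block is present and `count` is
omitted, the group defaults to the scaling block's `min` value. A minimal
sketch (the values are illustrative):

```hcl
group "example" {
  # No explicit count: this group starts with 3 instances, the `min`
  # value of its scaling block.
  scaling {
    min = 3
    max = 10
  }

  task "cache" {
    # ...
  }
}
```
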
   123  ### Tasks with Constraint
   124  
   125  This example shows two abbreviated tasks with a constraint on the group. This
   126  restricts the tasks to clients with an `amd64` (64-bit x86) CPU architecture.
   127  
   128  ```hcl
   129  group "example" {
   130    constraint {
   131      attribute = "${attr.cpu.arch}"
   132      value     = "amd64"
   133    }
   134  
   135    task "cache" {
   136      # ...
   137    }
   138  
   139    task "server" {
   140      # ...
   141    }
   142  }
   143  ```
   144  
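Placement can also be influenced with the softer `affinity` and `spread`
parameters described above. A brief sketch, assuming clients expose a `rack`
metadata attribute:

```hcl
group "example" {
  # Prefer, but do not require, clients in rack "r1".
  affinity {
    attribute = "${meta.rack}"
    value     = "r1"
    weight    = 50
  }

  # Spread allocations evenly across datacenters.
  spread {
    attribute = "${node.datacenter}"
  }

  task "cache" {
    # ...
  }
}
```
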
   145  ### Metadata
   146  
   147  This example shows arbitrary user-defined metadata on the group:
   148  
   149  ```hcl
   150  group "example" {
   151    meta {
   152      my-key = "my-value"
   153    }
   154  }
   155  ```
   156  
   157  ### Network
   158  
   159  This example shows network constraints as specified in the [network][] stanza,
   160  which uses the `bridge` networking mode, dynamically allocates two ports, and
   161  statically allocates one port:
   162  
   163  ```hcl
   164  group "example" {
   165    network {
   166      mode = "bridge"
   167      port "http" {}
   168      port "https" {}
   169      port "lb" {
   170        static = 8889
   171      }
   172    }
   173  }
   174  ```
   175  
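In `bridge` mode a port can also be mapped to a fixed port inside the group's
network namespace with the `to` field; the `8080` value below is illustrative.
Tasks can discover the allocated port through the `NOMAD_PORT_http`
environment variable.

```hcl
group "example" {
  network {
    mode = "bridge"

    # The host port is chosen dynamically; traffic is forwarded to
    # port 8080 inside the group's network namespace.
    port "http" {
      to = 8080
    }
  }
}
```
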
   176  ### Service Discovery
   177  
   178  This example creates a service in Consul. To read more about service discovery
   179  in Nomad, please see the [Nomad service discovery documentation][service_discovery].
   180  
   181  ```hcl
   182  group "example" {
   183    network {
   184      port "api" {}
   185    }
   186  
   187    service {
   188      name = "example"
   189      port = "api"
   190      tags = ["default"]
   191  
   192      check {
   193        type     = "tcp"
   194        interval = "10s"
   195        timeout  = "2s"
   196      }
   197    }
   198  
   199    task "api" { ... }
   200  }
   201  ```
   202  
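A group-level `shutdown_delay`, described above, is often paired with a group
service like this one so that Consul can drain traffic before the tasks
receive their shutdown signals. A minimal sketch (the 10 second value is
illustrative):

```hcl
group "example" {
  # Wait 10 seconds between deregistering the group's services from
  # Consul and signalling the tasks to shut down.
  shutdown_delay = "10s"

  network {
    port "api" {}
  }

  service {
    name = "example"
    port = "api"
  }

  task "api" {
    # ...
  }
}
```
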
   203  ### Stop After Client Disconnect
   204  
   205  This example shows how `stop_after_client_disconnect` interacts with
   206  other stanzas. For the `first` group, after the default 10-second
   207  [`heartbeat_grace`] window expires and 90 more seconds pass, the
   208  server will reschedule the allocation. The client will wait 90 seconds
   209  before sending a stop signal (`SIGTERM`) to the `first-task`
   210  task. After 15 more seconds, because of the task's `kill_timeout`, the
   211  client will send `SIGKILL`. The `second` group does not have
   212  `stop_after_client_disconnect`, so the server will reschedule the
   213  allocation after the 10-second [`heartbeat_grace`] window expires. It
   214  will not be stopped on the client, regardless of how long the client
   215  is out of touch.
   216  
   217  Note that if the servers' clocks are not closely synchronized with
   218  each other, the server may reschedule the group before the client has
   219  stopped the allocation. Operators should ensure that clock drift
   220  between servers is as small as possible.
   221  
   222  Note also that a group using this feature will be stopped on the
   223  client if the Nomad server cluster fails, since the client will be
   224  unable to contact any server in that case. Groups opting in to this
   225  feature are therefore exposed to an additional runtime dependency and
   226  potential point of failure.
   227  
   228  ```hcl
   229  group "first" {
   230    stop_after_client_disconnect = "90s"
   231  
   232    task "first-task" {
   233      kill_timeout = "15s"
   234    }
   235  }
   236  
   237  group "second" {
   238  
   239    task "second-task" {
   240      kill_timeout = "5s"
   241    }
   242  }
   243  ```
   244  
   245  [task]: /docs/job-specification/task 'Nomad task Job Specification'
   246  [job]: /docs/job-specification/job 'Nomad job Job Specification'
   247  [constraint]: /docs/job-specification/constraint 'Nomad constraint Job Specification'
   248  [spread]: /docs/job-specification/spread 'Nomad spread Job Specification'
   249  [affinity]: /docs/job-specification/affinity 'Nomad affinity Job Specification'
   250  [ephemeraldisk]: /docs/job-specification/ephemeral_disk 'Nomad ephemeral_disk Job Specification'
   251  [`heartbeat_grace`]: /docs/configuration/server#heartbeat_grace
   252  [meta]: /docs/job-specification/meta 'Nomad meta Job Specification'
   253  [migrate]: /docs/job-specification/migrate 'Nomad migrate Job Specification'
   254  [network]: /docs/job-specification/network 'Nomad network Job Specification'
   255  [reschedule]: /docs/job-specification/reschedule 'Nomad reschedule Job Specification'
   256  [restart]: /docs/job-specification/restart 'Nomad restart Job Specification'
   257  [service]: /docs/job-specification/service 'Nomad service Job Specification'
   258  [service_discovery]: /docs/integrations/consul-integration#service-discovery 'Nomad Service Discovery'
   259  [vault]: /docs/job-specification/vault 'Nomad vault Job Specification'
   260  [volume]: /docs/job-specification/volume 'Nomad volume Job Specification'