
---
layout: docs
page_title: group Stanza - Job Specification
sidebar_title: group
description: |-
  The "group" stanza defines a series of tasks that should be co-located on the
  same Nomad client. Any task within a group will be placed on the same client.
---

# `group` Stanza

<Placement groups={['job', 'group']} />

The `group` stanza defines a series of tasks that should be co-located on the
same Nomad client. Any [task][] within a group will be placed on the same
client.

```hcl
job "docs" {
  group "example" {
    # ...
  }
}
```

## `group` Parameters

- `constraint` <code>([Constraint][]: nil)</code> -
  This can be provided multiple times to define additional constraints.

- `affinity` <code>([Affinity][]: nil)</code> - This can be provided
  multiple times to define preferred placement criteria.

- `spread` <code>([Spread][spread]: nil)</code> - This can be provided
  multiple times to define criteria for spreading allocations across a
  node attribute or metadata. See the
  [Nomad spread reference](/docs/job-specification/spread) for more details.

- `count` `(int: 1)` - Specifies the number of instances of this task group
  that should be running. This value must be non-negative.

- `ephemeral_disk` <code>([EphemeralDisk][]: nil)</code> - Specifies the
  ephemeral disk requirements of the group. Ephemeral disks can be marked as
  sticky and support live data migrations.

- `meta` <code>([Meta][]: nil)</code> - Specifies a key-value map that
  annotates the task group with user-defined metadata.

- `migrate` <code>([Migrate][]: nil)</code> - Specifies the group strategy for
  migrating off of draining nodes. Only service jobs with a count greater than
  1 support migrate stanzas.

- `reschedule` <code>([Reschedule][]: nil)</code> - Specifies a rescheduling
  strategy. Nomad will attempt to schedule the task group on another node if
  any of the group's allocation statuses become "failed".

- `restart` <code>([Restart][]: nil)</code> - Specifies the restart policy for
  all tasks in this group. If omitted, a default policy exists for each job
  type, which can be found in the [restart stanza documentation][restart].

- `shutdown_delay` `(string: "0s")` - Specifies the duration to wait when
  stopping a group's tasks. The delay occurs between Consul deregistration
  and sending each task a shutdown signal. Ideally, services would fail
  health checks once they receive a shutdown signal. Alternatively,
  `shutdown_delay` may be set to give in-flight requests time to complete
  before shutting down. A group-level `shutdown_delay` runs regardless of
  whether any group services are defined. In addition, tasks may have their
  own [`shutdown_delay`](/docs/job-specification/task#shutdown_delay),
  which waits between deregistering task services and stopping the task.

- `stop_after_client_disconnect` `(string: "")` - Specifies a duration
  after which a Nomad client that cannot communicate with the servers
  will stop allocations based on this task group. By default, a client
  will not stop an allocation until explicitly told to by a server. A
  client that fails to heartbeat to a server within the
  `heartbeat_grace` window will be marked "lost", along with any
  allocations running on it, and Nomad will schedule replacement
  allocations. However, these replaced allocations will continue to
  run on the non-responsive client; an operator may want these
  replaced allocations to be stopped as well in this case, for
  example, allocations requiring exclusive access to an external
  resource. When specified, the Nomad client will stop them after this
  duration. The Nomad client process must be running for this to occur.

- `task` <code>([Task][]: &lt;required&gt;)</code> - Specifies one or more
  tasks to run within this group. This can be specified multiple times to
  add additional tasks to the group.

- `vault` <code>([Vault][]: nil)</code> - Specifies the set of Vault policies
  required by all tasks in this group. Overrides a `vault` block set at the
  `job` level.

- `volume` <code>([Volume][]: nil)</code> - Specifies the volumes that are
  required by tasks within the group.
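
These parameters all sit at the top level of a `group` stanza. As a quick
orientation, the following sketch combines several of them; the values shown
are illustrative only, not recommendations:

```hcl
group "example" {
  count = 2

  # Request a sticky ephemeral disk so scratch data follows the group
  # across in-place updates on the same node.
  ephemeral_disk {
    sticky = true
    size   = 300 # MB
  }

  # Restart policy shared by all tasks in this group.
  restart {
    attempts = 2
    interval = "30m"
    delay    = "15s"
    mode     = "fail"
  }

  # Wait 5 seconds between Consul deregistration and task shutdown.
  shutdown_delay = "5s"

  task "server" {
    # ...
  }
}
```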

## `group` Examples

The following examples only show the `group` stanzas. Remember that the
`group` stanza is only valid in the placements listed above.

### Specifying Count

This example specifies that 5 instances of the tasks within this group should be
running:

```hcl
group "example" {
  count = 5
}
```

### Tasks with Constraint

This example shows two abbreviated tasks with a constraint on the group. This
restricts the tasks to nodes with a 64-bit (amd64) CPU architecture.

```hcl
group "example" {
  constraint {
    attribute = "${attr.cpu.arch}"
    value     = "amd64"
  }

  task "cache" {
    # ...
  }

  task "server" {
    # ...
  }
}
```

### Metadata

This example shows arbitrary user-defined metadata on the group:

```hcl
group "example" {
  meta {
    "my-key" = "my-value"
  }
}
```
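
### Group Shutdown Delay

As a further illustration of the `shutdown_delay` parameter described above,
this sketch delays task shutdown by 10 seconds after Consul deregistration.
The task and service names are illustrative only:

```hcl
group "example" {
  # Wait 10 seconds between deregistering services in Consul and
  # sending each task its shutdown signal.
  shutdown_delay = "10s"

  task "server" {
    service {
      name = "my-service" # illustrative service name
    }
  }
}
```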

### Stop After Client Disconnect

This example shows how `stop_after_client_disconnect` interacts with
other stanzas. For the `first` group, after the default 10 second
[`heartbeat_grace`] window expires and 90 more seconds pass, the
server will reschedule the allocation. The client will wait 90 seconds
before sending a stop signal (`SIGTERM`) to the `first-task`
task. After 15 more seconds, because of the task's `kill_timeout`, the
client will send `SIGKILL`. The `second` group does not have
`stop_after_client_disconnect`, so the server will reschedule the
allocation after the 10 second [`heartbeat_grace`] window expires. It
will not be stopped on the client, regardless of how long the client
is out of touch.

Note that if the servers' clocks are not closely synchronized with
each other, the server may reschedule the group before the client has
stopped the allocation. Operators should ensure that clock drift
between servers is as small as possible.

Note also that a group using this feature will be stopped on the
client if the Nomad server cluster fails, since the client will be
unable to contact any server in that case. Groups opting in to this
feature are therefore exposed to an additional runtime dependency and
potential point of failure.

```hcl
group "first" {
  stop_after_client_disconnect = "90s"

  task "first-task" {
    kill_timeout = "15s"
  }
}

group "second" {

  task "second-task" {
    kill_timeout = "5s"
  }
}
```
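
### Requesting a Volume

The `volume` parameter above pairs with a task-level `volume_mount`. The
following is a minimal host-volume sketch; the volume name and mount path are
illustrative and assume a matching `host_volume` entry in the client's
configuration:

```hcl
group "example" {
  # Request a host volume configured on the client.
  volume "data" {
    type      = "host"
    source    = "data-volume" # illustrative host volume name
    read_only = false
  }

  task "server" {
    # Mount the group's volume into this task.
    volume_mount {
      volume      = "data"
      destination = "/var/lib/data"
    }
  }
}
```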

[task]: /docs/job-specification/task 'Nomad task Job Specification'
[job]: /docs/job-specification/job 'Nomad job Job Specification'
[constraint]: /docs/job-specification/constraint 'Nomad constraint Job Specification'
[spread]: /docs/job-specification/spread 'Nomad spread Job Specification'
[affinity]: /docs/job-specification/affinity 'Nomad affinity Job Specification'
[ephemeraldisk]: /docs/job-specification/ephemeral_disk 'Nomad ephemeral_disk Job Specification'
[`heartbeat_grace`]: /docs/configuration/server/#heartbeat_grace
[meta]: /docs/job-specification/meta 'Nomad meta Job Specification'
[migrate]: /docs/job-specification/migrate 'Nomad migrate Job Specification'
[reschedule]: /docs/job-specification/reschedule 'Nomad reschedule Job Specification'
[restart]: /docs/job-specification/restart 'Nomad restart Job Specification'
[vault]: /docs/job-specification/vault 'Nomad vault Job Specification'
[volume]: /docs/job-specification/volume 'Nomad volume Job Specification'