---
layout: "docs"
page_title: "update Stanza - Job Specification"
sidebar_current: "docs-job-specification-update"
description: |-
  The "update" stanza specifies the group's update strategy. The update strategy
  is used to control things like rolling upgrades and canary deployments. If
  omitted, rolling updates and canaries are disabled.
---

# `update` Stanza

<table class="table table-bordered table-striped">
  <tr>
    <th width="120">Placement</th>
    <td>
      <code>job -> **update**</code>
      <br>
      <code>job -> group -> **update**</code>
    </td>
  </tr>
</table>

The `update` stanza specifies the group's update strategy. The update strategy
is used to control things like rolling upgrades and canary deployments. If
omitted, rolling updates and canaries are disabled. If specified at the job
level, the configuration applies to all groups within the job. If multiple
`update` stanzas are specified, they are merged, with the group stanza taking
precedence over the job stanza.

```hcl
job "docs" {
  update {
    max_parallel      = 3
    health_check      = "checks"
    min_healthy_time  = "10s"
    healthy_deadline  = "5m"
    progress_deadline = "10m"
    auto_revert       = true
    canary            = 1
    stagger           = "30s"
  }
}
```

~> For `system` jobs, only `max_parallel` and `stagger` are enforced. The job is
updated at a rate of `max_parallel`, waiting `stagger` duration before the next
set of updates. The `system` scheduler will be updated to support the new
`update` stanza in a future release.

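For example, a `system` job would typically limit its `update` stanza to the two
enforced parameters. This is a minimal sketch; the job name and values are
illustrative:

```hcl
job "node-agent" {
  # System jobs run one allocation on every eligible node.
  type = "system"

  update {
    # Only these two parameters are enforced for system jobs today.
    max_parallel = 2
    stagger      = "30s"
  }

  # ...
}
```
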
## `update` Parameters

- `max_parallel` `(int: 0)` - Specifies the number of allocations within a task
  group that can be updated at the same time.

- `health_check` `(string: "checks")` - Specifies the mechanism by which
  allocation health is determined. The potential values are:

  - "checks" - Specifies that the allocation should be considered healthy when
    all of its tasks are running and their associated [checks][] are healthy,
    and unhealthy if any of the tasks fail or any check fails to become healthy.
    This is a superset of "task_states" mode.

  - "task_states" - Specifies that the allocation should be considered healthy
    when all of its tasks are running and unhealthy if any of the tasks fail.

  - "manual" - Specifies that Nomad should not automatically determine health
    and that the operator will specify allocation health using the [HTTP
    API](/api/deployments.html#set-allocation-health-in-deployment). A sketch of
    this API call follows this parameter list.

- `min_healthy_time` `(string: "10s")` - Specifies the minimum time the
  allocation must be in the healthy state before it is marked as healthy and
  unblocks further allocations from being updated. This is specified using a
  label suffix like "30s" or "15m".

- `healthy_deadline` `(string: "5m")` - Specifies the deadline by which the
  allocation must be marked as healthy, after which the allocation is
  automatically transitioned to unhealthy. This is specified using a label
  suffix like "2m" or "1h".

- `progress_deadline` `(string: "10m")` - Specifies the deadline by which an
  allocation must be marked as healthy. The deadline begins when the first
  allocation for the deployment is created and is reset whenever an allocation
  that is part of the deployment transitions to a healthy state. If no
  allocation transitions to the healthy state before the progress deadline, the
  deployment is marked as failed. If the `progress_deadline` is set to `0`, the
  first allocation to be marked as unhealthy causes the deployment to fail. This
  is specified using a label suffix like "2m" or "1h".

- `auto_revert` `(bool: false)` - Specifies if the job should auto-revert to the
  last stable job on deployment failure. A job is marked as stable if all the
  allocations that are part of its deployment were marked healthy.

- `canary` `(int: 0)` - Specifies that changes to the job that would result in
  destructive updates should create the specified number of canaries without
  stopping any previous allocations. Once the operator determines the canaries
  are healthy, they can be promoted, which unblocks a rolling update of the
  remaining allocations at a rate of `max_parallel`.

- `stagger` `(string: "30s")` - Specifies the delay between migrating
  allocations off nodes marked for draining. This is specified using a label
  suffix like "30s" or "1h".

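When `health_check = "manual"` is used, allocation health is reported through
the deployments HTTP API linked above. A minimal sketch of that call, assuming
the deployment and allocation IDs have already been looked up (the IDs and
address below are placeholders); unhealthy allocations are reported the same way
via `UnhealthyAllocationIDs`:

```text
# Manually mark an allocation in the deployment as healthy.
$ curl \
    --request PUT \
    --data '{"HealthyAllocationIDs": ["<alloc-id>"]}' \
    http://localhost:4646/v1/deployment/allocation-health/<deployment-id>
```
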
## `update` Examples

The following examples only show the `update` stanzas. Remember that the
`update` stanza is only valid in the placements listed above.

### Parallel Upgrades Based on Checks

This example performs 3 upgrades at a time and requires the allocations to be
healthy for a minimum of 30 seconds before continuing the rolling upgrade. Each
allocation is given at most 2 minutes to determine its health before it is
automatically marked unhealthy and the deployment is failed.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
}
```

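The "checks" health check mode relies on the [checks][] registered by the
group's tasks. A minimal sketch of such a check, assuming an HTTP service that
exposes a `/health` endpoint on a port labeled "http" (the names and values are
illustrative):

```hcl
task "api" {
  # ...

  service {
    name = "api"
    port = "http"

    # The allocation is only considered healthy once this check passes.
    check {
      type     = "http"
      path     = "/health"
      interval = "10s"
      timeout  = "2s"
    }
  }
}
```
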
### Parallel Upgrades Based on Task State

This example is the same as the previous one, but it only requires the tasks to
be running and does not require registered service checks to be healthy.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
  health_check     = "task_states"
}
```

### Canary Upgrades

This example creates a canary allocation when the job is updated. The canary is
created without stopping any previous allocations from the job and allows
operators to determine if the new version of the job should be rolled out.

```hcl
update {
  canary       = 1
  max_parallel = 3
}
```

Once the operator has determined the new job should be deployed, the deployment
can be promoted and a rolling update will occur, performing 3 updates at a time
until the remainder of the group's allocations have been rolled to the new
version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

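Before promoting, the deployment and its canary can be inspected with the
deployment commands. A sketch of that workflow (the IDs are placeholders);
promotion can also be performed against the deployment directly:

```text
# List deployments for the job and inspect the most recent one.
$ nomad job deployments <job-id>
$ nomad deployment status <deployment-id>

# Promote the canaries through the deployment instead of the job.
$ nomad deployment promote <deployment-id>
```
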
### Blue/Green Upgrades

By setting the canary count equal to that of the task group, blue/green
deployments can be achieved. When a new version of the job is submitted, instead
of doing a rolling upgrade of the existing allocations, the new version of the
group is deployed alongside the existing set. While this duplicates the
resources required during the upgrade process, it allows very safe deployments
as the original version of the group is untouched.

```hcl
group "api-server" {
  count = 3

  update {
    canary       = 3
    max_parallel = 3
  }
  ...
}
```

Once the operator is satisfied that the new version of the group is stable, the
group can be promoted, which will result in all allocations for the old versions
of the group being shut down. This completes the upgrade from blue to green, or
old to new version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

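When a job contains several task groups, canaries can also be promoted one group
at a time. A sketch using the `-group` flag of `nomad deployment promote` (the
group name and ID are placeholders):

```text
# Promote only the canaries belonging to the "api-server" group.
$ nomad deployment promote -group api-server <deployment-id>
```
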
### Serial Upgrades

This example uses a serial upgrade strategy, meaning exactly one allocation will
be updated at a time. The allocation must be healthy for the default
`min_healthy_time` of 10 seconds.

```hcl
update {
  max_parallel = 1
}
```

### Upgrade Stanza Inheritance

This example shows how inheritance can simplify the job when there are multiple
task groups.

```hcl
job "example" {
  ...

  update {
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }

  group "one" {
    ...

    update {
      canary = 1
    }
  }

  group "two" {
    ...

    update {
      min_healthy_time = "3m"
    }
  }
}
```

By placing the shared parameters in the job's update stanza, each group's update
stanza may be kept to a minimum. The merged update stanzas for each group
become:

```hcl
group "one" {
  update {
    canary           = 1
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}

group "two" {
  update {
    min_healthy_time = "3m"
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}
```

[checks]: /docs/job-specification/service.html#check-parameters "Nomad check Job Specification"