
---
layout: "docs"
page_title: "update Stanza - Job Specification"
sidebar_current: "docs-job-specification-update"
description: |-
  The "update" stanza specifies the group's update strategy. The update strategy
  is used to control things like rolling upgrades and canary deployments. If
  omitted, rolling updates and canaries are disabled.
---

# `update` Stanza

<table class="table table-bordered table-striped">
  <tr>
    <th width="120">Placement</th>
    <td>
      <code>job -> **update**</code>
    </td>
    <td>
      <code>job -> group -> **update**</code>
    </td>
  </tr>
</table>

The `update` stanza specifies the group's update strategy. The update strategy
is used to control things like rolling upgrades and canary deployments. If
omitted, rolling updates and canaries are disabled. If specified at the job
level, the configuration applies to all groups within the job. If multiple
`update` stanzas are specified, they are merged, with the group stanza taking
precedence over the job stanza.

```hcl
job "docs" {
  update {
    max_parallel     = 3
    health_check     = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "10m"
    auto_revert      = true
    canary           = 1
    stagger          = "30s"
  }
}
```

~> For `system` jobs, only `max_parallel` and `stagger` are enforced. The job is
updated at a rate of `max_parallel`, waiting `stagger` duration before the next
set of updates. The `system` scheduler will be updated to support the new
`update` stanza in a future release.

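As an illustration, a hypothetical `system` job (the job name below is made up)
might throttle its rollout like this; only `max_parallel` and `stagger` are
enforced, and any other `update` parameters would currently be ignored:

```hcl
job "metrics-agent" {
  type = "system"

  update {
    # Update 2 nodes at a time, waiting 1 minute between each batch.
    max_parallel = 2
    stagger      = "1m"
  }

  # ...
}
```
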
## `update` Parameters

- `max_parallel` `(int: 0)` - Specifies the number of task groups that can be
  updated at the same time.

- `health_check` `(string: "checks")` - Specifies the mechanism by which
  allocation health is determined. The potential values are:

  - "checks" - Specifies that the allocation should be considered healthy when
    all of its tasks are running and their associated [checks][] are healthy,
    and unhealthy if any of the tasks fail or not all checks become healthy.
    This is a superset of "task_states" mode.

  - "task_states" - Specifies that the allocation should be considered healthy
    when all of its tasks are running and unhealthy if any tasks fail.

  - "manual" - Specifies that Nomad should not automatically determine health
    and that the operator will specify allocation health using the [HTTP
    API](/api/deployments.html#set-allocation-health-in-deployment).

- `min_healthy_time` `(string: "10s")` - Specifies the minimum time the
  allocation must be in the healthy state before it is marked as healthy and
  unblocks further allocations from being updated. This is specified using a
  label suffix like "30s" or "15m".

- `healthy_deadline` `(string: "5m")` - Specifies the deadline by which the
  allocation must be marked as healthy, after which the allocation is
  automatically transitioned to unhealthy. This is specified using a label
  suffix like "2m" or "1h".

- `auto_revert` `(bool: false)` - Specifies if the job should auto-revert to the
  last stable job on deployment failure. A job is marked as stable if all the
  allocations that are part of its deployment were marked healthy.

- `canary` `(int: 0)` - Specifies that changes to the job that would result in
  destructive updates should create the specified number of canaries without
  stopping any previous allocations. Once the operator determines the canaries
  are healthy, they can be promoted, which unblocks a rolling update of the
  remaining allocations at a rate of `max_parallel`.

- `stagger` `(string: "30s")` - Specifies the delay between migrating
  allocations off nodes marked for draining. This is specified using a label
  suffix like "30s" or "1h".

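When `health_check = "manual"` is used, the operator reports allocation health
through the deployments HTTP API linked above. As a sketch, with placeholder
deployment and allocation IDs, such a call might look like:

```text
$ curl \
    --request POST \
    --data '{"HealthyAllocationIDs": ["<alloc-id>"]}' \
    https://localhost:4646/v1/deployment/allocation-health/<deployment-id>
```

Allocations can likewise be reported as failed by listing their IDs under
`UnhealthyAllocationIDs` in the request payload.
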
## `update` Examples

The following examples only show the `update` stanzas. Remember that the
`update` stanza is only valid in the placements listed above.

### Parallel Upgrades Based on Checks

This example performs 3 upgrades at a time and requires the allocations to be
healthy for a minimum of 30 seconds before continuing the rolling upgrade. Each
allocation is given at most 2 minutes to determine its health before it is
automatically marked unhealthy and the deployment fails.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
}
```

### Parallel Upgrades Based on Task State

This example is the same as the previous one, but it only requires the tasks to
be healthy and does not require registered service checks to be healthy.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
  health_check     = "task_states"
}
```

### Canary Upgrades

This example creates a canary allocation when the job is updated. The canary is
created without stopping any previous allocations from the job and allows
operators to determine if the new version of the job should be rolled out.

```hcl
update {
  canary       = 1
  max_parallel = 3
}
```

Once the operator has determined the new job should be deployed, the deployment
can be promoted and a rolling update will occur, performing 3 updates at a time
until the remainder of the group's allocations have been rolled to the new
version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

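Before promoting, operators typically confirm that the canary allocation is
healthy. One way to do this, sketched here, is to list the job's deployments
and inspect the most recent one:

```text
$ nomad job deployments <job-id>
$ nomad deployment status <deployment-id>
```
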
### Blue/Green Upgrades

By setting the canary count equal to that of the task group, blue/green
deployments can be achieved. When a new version of the job is submitted, instead
of doing a rolling upgrade of the existing allocations, the new version of the
group is deployed alongside the existing set. While this duplicates the
resources required during the upgrade process, it allows very safe deployments
as the original version of the group is untouched.

```hcl
group "api-server" {
  count = 3

  update {
    canary       = 3
    max_parallel = 3
  }
  ...
}
```

Once the operator is satisfied that the new version of the group is stable, the
group can be promoted, which will result in all allocations for the old version
of the group being shut down. This completes the upgrade from blue to green, or
old to new version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

### Serial Upgrades

This example uses a serial upgrade strategy, meaning exactly one task group will
be updated at a time. The allocation must be healthy for the default
`min_healthy_time` of 10 seconds.

```hcl
update {
  max_parallel = 1
}
```

### Upgrade Stanza Inheritance

This example shows how inheritance can simplify the job when there are multiple
task groups.

```hcl
job "example" {
  ...

  update {
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }

  group "one" {
    ...

    update {
      canary = 1
    }
  }

  group "two" {
    ...

    update {
      min_healthy_time = "3m"
    }
  }
}
```

By placing the shared parameters in the job's update stanza, each group's update
stanza may be kept to a minimum. The merged update stanzas for each group
become:

```hcl
group "one" {
  update {
    canary           = 1
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}

group "two" {
  update {
    min_healthy_time = "3m"
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}
```

[checks]: /docs/job-specification/service.html#check-parameters "Nomad check Job Specification"