---
layout: "docs"
page_title: "update Stanza - Job Specification"
sidebar_current: "docs-job-specification-update"
description: |-
  The "update" stanza specifies the group's update strategy. The update strategy
  is used to control things like rolling upgrades and canary deployments. If
  omitted, rolling updates and canaries are disabled.
---

# `update` Stanza

<table class="table table-bordered table-striped">
  <tr>
    <th width="120">Placement</th>
    <td>
      <code>job -> **update**</code>
      <br>
      <code>job -> group -> **update**</code>
    </td>
  </tr>
</table>

The `update` stanza specifies the group's update strategy. The update strategy
is used to control things like [rolling upgrades][rolling] and [canary
deployments][canary]. If omitted, rolling updates and canaries are disabled. If
specified at the job level, the configuration applies to all groups within the
job. If multiple `update` stanzas are specified, they are merged, with the
group stanza taking the highest precedence, followed by the job stanza.

```hcl
job "docs" {
  update {
    max_parallel      = 3
    health_check      = "checks"
    min_healthy_time  = "10s"
    healthy_deadline  = "5m"
    progress_deadline = "10m"
    auto_revert       = true
    auto_promote      = true
    canary            = 1
    stagger           = "30s"
  }
}
```

~> For `system` jobs, only [`max_parallel`](#max_parallel) and
   [`stagger`](#stagger) are enforced. The job is updated at a rate of
   `max_parallel`, waiting `stagger` duration before the next set of updates.
   The `system` scheduler will be updated to support the new `update` stanza in
   a future release.

## `update` Parameters

- `max_parallel` `(int: 0)` - Specifies the number of allocations within a task
  group that can be updated at the same time. The task groups themselves are
  updated in parallel.
- `health_check` `(string: "checks")` - Specifies the mechanism by which
  allocation health is determined. The potential values are:

  - "checks" - Specifies that the allocation should be considered healthy when
    all of its tasks are running and their associated [checks][] are healthy,
    and unhealthy if any of the tasks fail or not all checks become healthy.
    This is a superset of "task_states" mode.

  - "task_states" - Specifies that the allocation should be considered healthy
    when all of its tasks are running, and unhealthy if any tasks fail.

  - "manual" - Specifies that Nomad should not automatically determine health
    and that the operator will specify allocation health using the [HTTP
    API](/api/deployments.html#set-allocation-health-in-deployment).

- `min_healthy_time` `(string: "10s")` - Specifies the minimum time the
  allocation must be in the healthy state before it is marked as healthy and
  unblocks further allocations from being updated. This is specified using a
  label suffix like "30s" or "15m".

- `healthy_deadline` `(string: "5m")` - Specifies the deadline by which the
  allocation must be marked as healthy, after which it is automatically
  transitioned to unhealthy. This is specified using a label suffix like "2m"
  or "1h".

- `progress_deadline` `(string: "10m")` - Specifies the deadline by which an
  allocation must be marked as healthy. The deadline begins when the first
  allocation for the deployment is created and is reset whenever an allocation
  that is part of the deployment transitions to a healthy state. If no
  allocation transitions to the healthy state before the progress deadline, the
  deployment is marked as failed. If the `progress_deadline` is set to `0`, the
  first allocation to be marked as unhealthy causes the deployment to fail.
  This is specified using a label suffix like "2m" or "1h".
- `auto_revert` `(bool: false)` - Specifies if the job should auto-revert to
  the last stable job on deployment failure. A job is marked as stable if all
  the allocations that are part of its deployment were marked healthy.

- `auto_promote` `(bool: false)` - Specifies if the job should auto-promote to
  the canary version when all canaries become healthy during a deployment.
  Defaults to false, which means canaries must be manually promoted with the
  `nomad deployment promote` command.

- `canary` `(int: 0)` - Specifies that changes to the job that would result in
  destructive updates should create the specified number of canaries without
  stopping any previous allocations. Once the operator determines the canaries
  are healthy, they can be promoted, which unblocks a rolling update of the
  remaining allocations at a rate of `max_parallel`.

- `stagger` `(string: "30s")` - Specifies the delay between each set of
  [`max_parallel`](#max_parallel) updates when updating system jobs. This
  setting no longer applies to service jobs, which use
  [deployments][strategies].

## `update` Examples

The following examples only show the `update` stanzas. Remember that the
`update` stanza is only valid in the placements listed above.

### Parallel Upgrades Based on Checks

This example performs 3 upgrades at a time and requires the allocations to be
healthy for a minimum of 30 seconds before continuing the rolling upgrade. Each
allocation is given at most 2 minutes to determine its health before it is
automatically marked unhealthy and the deployment fails.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
}
```

### Parallel Upgrades Based on Task State

This example is the same as the previous one, but it only requires the tasks to
be healthy and does not require registered service checks to be healthy.
```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
  health_check     = "task_states"
}
```

### Canary Upgrades

This example creates a canary allocation when the job is updated. The canary is
created without stopping any previous allocations from the job and allows
operators to determine if the new version of the job should be rolled out.

```hcl
update {
  canary       = 1
  max_parallel = 3
}
```

Once the operator has determined the new job should be deployed, the deployment
can be promoted and a rolling update will occur, performing 3 updates at a time
until all of the group's allocations have been rolled to the new version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

### Blue/Green Upgrades

By setting the canary count equal to that of the task group, blue/green
deployments can be achieved. When a new version of the job is submitted,
instead of doing a rolling upgrade of the existing allocations, the new version
of the group is deployed alongside the existing set. While this duplicates the
resources required during the upgrade process, it allows very safe deployments
as the original version of the group is untouched.

```hcl
group "api-server" {
  count = 3

  update {
    canary       = 3
    max_parallel = 3
  }
  ...
}
```

Once the operator is satisfied that the new version of the group is stable, the
group can be promoted, which will result in all allocations for the old
versions of the group being shut down. This completes the upgrade from blue to
green, or old to new version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

### Serial Upgrades

This example uses a serial upgrade strategy, meaning exactly one task group
will be updated at a time. The allocation must be healthy for the default
`min_healthy_time` of 10 seconds.

```hcl
update {
  max_parallel = 1
}
```

### Upgrade Stanza Inheritance

This example shows how inheritance can simplify the job when there are multiple
task groups.

```hcl
job "example" {
  ...

  update {
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }

  group "one" {
    ...

    update {
      canary = 1
    }
  }

  group "two" {
    ...

    update {
      min_healthy_time = "3m"
    }
  }
}
```

By placing the shared parameters in the job's update stanza, each group's
update stanza may be kept to a minimum. The merged update stanzas for each
group become:

```hcl
group "one" {
  update {
    canary           = 1
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}

group "two" {
  update {
    min_healthy_time = "3m"
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}
```

[canary]: /guides/operating-a-job/update-strategies/blue-green-and-canary-deployments.html "Nomad Canary Deployments"
[checks]: /docs/job-specification/service.html#check-parameters "Nomad check Job Specification"
[rolling]: /guides/operating-a-job/update-strategies/rolling-upgrades.html "Nomad Rolling Upgrades"
[strategies]: /guides/operating-a-job/update-strategies/index.html "Nomad Update Strategies"
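
### Automated Canary Promotion

The canary examples above require the operator to promote the deployment
manually with `nomad deployment promote`. As a sketch combining the
[`auto_promote`](#auto_promote) and [`auto_revert`](#auto_revert) parameters
documented above (all values here are illustrative), a canary strategy can be
made hands-off, so a healthy canary is promoted and a failed deployment rolls
back without operator action:

```hcl
update {
  canary       = 1
  max_parallel = 3

  # Promote the canary automatically once it is marked healthy.
  auto_promote = true

  # Revert to the last stable job version if the deployment fails.
  auto_revert  = true
}
```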