---
layout: docs
page_title: update Stanza - Job Specification
sidebar_title: update
description: |-
  The "update" stanza specifies the group's update strategy. The update strategy
  is used to control things like rolling upgrades and canary deployments. If
  omitted, rolling updates and canaries are disabled.
---

# `update` Stanza

<Placement
  groups={[
    ['job', 'update'],
    ['job', 'group', 'update'],
  ]}
/>

The `update` stanza specifies the group's update strategy. The update strategy
is used to control things like [rolling upgrades][rolling] and [canary
deployments][canary]. If omitted, rolling updates and canaries are disabled. If
specified at the job level, the configuration will apply to all groups within
the job. If multiple `update` stanzas are specified, they are merged, with the
group stanza taking precedence over the job stanza.

```hcl
job "docs" {
  update {
    max_parallel      = 3
    health_check      = "checks"
    min_healthy_time  = "10s"
    healthy_deadline  = "5m"
    progress_deadline = "10m"
    auto_revert       = true
    auto_promote      = true
    canary            = 1
    stagger           = "30s"
  }
}
```

~> For `system` jobs, only [`max_parallel`](#max_parallel) and
[`stagger`](#stagger) are enforced. The job is updated at a rate of
`max_parallel`, waiting `stagger` duration before the next set of updates.
The `system` scheduler will be updated to support the new `update` stanza in
a future release.

## `update` Parameters

- `max_parallel` `(int: 1)` - Specifies the number of allocations within a task
  group that can be updated at the same time. The task groups themselves are
  updated in parallel.

  - `max_parallel = 0` - Specifies that the group should use forced updates
    instead of deployments (see the short example at the end of this section).

- `health_check` `(string: "checks")` - Specifies the mechanism by which
  allocation health is determined. The potential values are:

  - "checks" - Specifies that the allocation should be considered healthy when
    all of its tasks are running and their associated [checks][] are healthy,
    and unhealthy if any of the tasks fail or not all checks become healthy.
    This is a superset of "task_states" mode.

  - "task_states" - Specifies that the allocation should be considered healthy
    when all its tasks are running and unhealthy if tasks fail.

  - "manual" - Specifies that Nomad should not automatically determine health
    and that the operator will specify allocation health using the [HTTP
    API](/api-docs/deployments#set-allocation-health-in-deployment).

- `min_healthy_time` `(string: "10s")` - Specifies the minimum time the
  allocation must be in the healthy state before it is marked as healthy and
  unblocks further allocations from being updated. This is specified using a
  label suffix like "30s" or "15m".

- `healthy_deadline` `(string: "5m")` - Specifies the deadline by which the
  allocation must be marked as healthy, after which the allocation is
  automatically transitioned to unhealthy. This is specified using a label
  suffix like "2m" or "1h". If [`progress_deadline`](#progress_deadline) is
  non-zero, it must be greater than `healthy_deadline`. Otherwise the
  `progress_deadline` may fail a deployment before an allocation reaches its
  `healthy_deadline`.

- `progress_deadline` `(string: "10m")` - Specifies the deadline by which an
  allocation must be marked as healthy. The deadline begins when the first
  allocation for the deployment is created and is reset whenever an allocation
  as part of the deployment transitions to a healthy state or when a
  deployment is manually promoted. If no allocation transitions to the healthy
  state before the progress deadline, the deployment is marked as failed. If
  the `progress_deadline` is set to `0`, the first allocation to be marked as
  unhealthy causes the deployment to fail. This is specified using a label
  suffix like "2m" or "1h".

- `auto_revert` `(bool: false)` - Specifies if the job should auto-revert to the
  last stable job on deployment failure. A job is marked as stable if all the
  allocations as part of its deployment were marked healthy.

- `auto_promote` `(bool: false)` - Specifies if the job should auto-promote to
  the canary version when all canaries become healthy during a deployment.
  Defaults to false, which means canaries must be manually promoted with the
  `nomad deployment promote` command. If a job has multiple task groups, all
  must be set to `auto_promote = true` in order for the deployment to be
  promoted automatically.

- `canary` `(int: 0)` - Specifies that changes to the job that would result in
  destructive updates should create the specified number of canaries without
  stopping any previous allocations. Once the operator determines the canaries
  are healthy, they can be promoted, which unblocks a rolling update of the
  remaining allocations at a rate of `max_parallel`.

- `stagger` `(string: "30s")` - Specifies the delay between each set of
  [`max_parallel`](#max_parallel) updates when updating system jobs. This
  setting no longer applies to service jobs, which use
  [deployments][strategies].
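As a minimal sketch of the `max_parallel = 0` case noted above, the following
group-level stanza opts out of deployments and uses forced updates instead; all
other parameters are left at their defaults:

```hcl
update {
  # Setting max_parallel to zero means this group does not use deployments;
  # updated allocations are replaced using forced updates instead.
  max_parallel = 0
}
```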
## `update` Examples

The following examples only show the `update` stanzas. Remember that the
`update` stanza is only valid in the placements listed above.

### Parallel Upgrades Based on Checks

This example performs 3 upgrades at a time and requires the allocations to be
healthy for a minimum of 30 seconds before continuing the rolling upgrade. Each
allocation is given at most 2 minutes to determine its health before it is
automatically marked unhealthy and the deployment is failed.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
}
```

### Parallel Upgrades Based on Task State

This example is the same as the last but only requires the tasks to be healthy
and does not require registered service checks to be healthy.

```hcl
update {
  max_parallel     = 3
  min_healthy_time = "30s"
  healthy_deadline = "2m"
  health_check     = "task_states"
}
```

### Canary Upgrades

This example creates a canary allocation when the job is updated. The canary is
created without stopping any previous allocations from the job and allows
operators to determine if the new version of the job should be rolled out.

```hcl
update {
  canary       = 1
  max_parallel = 3
}
```

Once the operator has determined the new job should be deployed, the deployment
can be promoted and a rolling update will occur, performing 3 updates at a time,
until the remainder of the group's allocations have been rolled to the new
version.
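Promotion can also be automated by enabling `auto_promote`. As a minimal sketch
(the parameter values are illustrative), the same canary configuration with
automatic promotion, and automatic revert on failure, might look like:

```hcl
update {
  canary       = 1
  max_parallel = 3
  # Promote the canary automatically once it is healthy, and revert to the
  # last stable job version if the deployment fails.
  auto_promote = true
  auto_revert  = true
}
```

With manual promotion, as in the original example above, the operator uses the
CLI: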
```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

### Blue/Green Upgrades

By setting the canary count equal to that of the task group, blue/green
deployments can be achieved. When a new version of the job is submitted, instead
of doing a rolling upgrade of the existing allocations, the new version of the
group is deployed alongside the existing set. While this duplicates the
resources required during the upgrade process, it allows very safe deployments
as the original version of the group is untouched.

```hcl
group "api-server" {
  count = 3

  update {
    canary       = 3
    max_parallel = 3
  }
  ...
}
```

Once the operator is satisfied that the new version of the group is stable, the
group can be promoted, which will result in all allocations for the old version
of the group being shut down. This completes the upgrade from blue to green, or
old to new version.

```text
# Promote the canaries for the job.
$ nomad job promote <job-id>
```

### Serial Upgrades

This example uses a serial upgrade strategy, meaning exactly one allocation
within the task group will be updated at a time. The allocation must be healthy
for the default `min_healthy_time` of 10 seconds.

```hcl
update {
  max_parallel = 1
}
```

### Update Stanza Inheritance

This example shows how inheritance can simplify the job when there are multiple
task groups.

```hcl
job "example" {
  ...

  update {
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }

  group "one" {
    ...

    update {
      canary = 1
    }
  }

  group "two" {
    ...

    update {
      min_healthy_time = "3m"
    }
  }
}
```

By placing the shared parameters in the job's update stanza, each group's update
stanza may be kept to a minimum. The merged update stanzas for each group
become:

```hcl
group "one" {
  update {
    canary           = 1
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}

group "two" {
  update {
    min_healthy_time = "3m"
    max_parallel     = 2
    health_check     = "task_states"
    healthy_deadline = "10m"
  }
}
```

[canary]: https://learn.hashicorp.com/tutorials/nomad/job-blue-green-and-canary-deployments 'Nomad Canary Deployments'
[checks]: /docs/job-specification/service#check-parameters 'Nomad check Job Specification'
[rolling]: https://learn.hashicorp.com/tutorials/nomad/job-rolling-update 'Nomad Rolling Upgrades'
[strategies]: https://learn.hashicorp.com/collections/nomad/job-updates 'Nomad Update Strategies'