---
layout: "intro"
page_title: "Jobs"
sidebar_current: "getting-started-jobs"
description: |-
  Learn how to submit, modify and stop jobs in Nomad.
---

# Jobs

Jobs are the primary configuration that users interact with when using
Nomad. A job is a declarative specification of tasks that Nomad should run.
Jobs have a globally unique name and contain one or many task groups, which
are themselves collections of one or many tasks.

The format of jobs is documented in the [job specification][jobspec]. Jobs
can be specified in either [HashiCorp Configuration Language][hcl] or JSON;
however, we recommend using JSON only when the configuration is generated by
a machine.

## Running a Job

To get started, we will use the [`init` command](/docs/commands/init.html),
which generates a skeleton job file:

```
$ nomad init
Example job file written to example.nomad
```

You can view the contents of this file by running `cat example.nomad`. In this
example job file, we have declared a single task 'redis' which uses the Docker
driver to run the task. The primary way you interact with Nomad is with the
[`run` command](/docs/commands/run.html). The `run` command takes a job file
and registers it with Nomad. This is used both to register new jobs and to
update existing jobs.

We can register our example job now:

```
$ nomad run example.nomad
==> Monitoring evaluation "26cfc69e"
    Evaluation triggered by job "example"
    Allocation "8ba85cef" created: node "171a583b", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "26cfc69e" finished with status "complete"
```

Anytime a job is updated, Nomad creates an evaluation to determine what
actions need to take place.
In this case, because this is a new job, Nomad has determined that an
allocation should be created and has scheduled it on our local agent.

To inspect the status of our job, we use the
[`status` command](/docs/commands/status.html):

```
$ nomad status example
ID            = example
Name          = example
Submit Date   = 07/25/17 23:14:43 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         1        0       0         0

Latest Deployment
ID          = 11c5cdc8
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       1        1       1        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created At
8ba85cef  171a583b  cache       0        run      running  07/25/17 23:14:43 UTC
```

Here we can see that the result of our evaluation was the creation of an
allocation that is now running on the local node.

An allocation represents an instance of a task group placed on a node.

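For reference, the job, group, and task hierarchy that produces this allocation
can be sketched in HCL. This is an abbreviated, hypothetical outline of a job
file like `example.nomad`, not the exact contents that `nomad init` generates:

```
# Hypothetical, abbreviated job file. A job contains one or many
# groups, and each group contains one or many tasks; each running
# instance of a group is an allocation.
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    count = 1

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }
    }
  }
}
```

Here the single `cache` group with `count = 1` yields the one allocation shown
above.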

To inspect an allocation, we use the
[`alloc-status` command](/docs/commands/alloc-status.html):

```
$ nomad alloc-status 8ba85cef
ID                  = 8ba85cef
Eval ID             = 61b0b423
Name                = example.cache[0]
Node ID             = 171a583b
Job ID              = example
Job Version         = 0
Client Status       = running
Client Description  = <none>
Desired Status      = run
Desired Description = <none>
Created At          = 07/25/17 23:14:43 UTC
Deployment ID       = fa882a5b
Deployment Health   = healthy

Task "redis" is "running"
Task Resources
CPU    Memory           Disk     IOPS  Addresses
2/500  6.3 MiB/256 MiB  300 MiB  0     db: 127.0.0.1:30329

Recent Events:
Time                   Type        Description
07/25/17 23:14:53 UTC  Started     Task started by client
07/25/17 23:14:43 UTC  Driver      Downloading image redis:3.2
07/25/17 23:14:43 UTC  Task Setup  Building Task Directory
07/25/17 23:14:43 UTC  Received    Task received by client
```

We can see that Nomad reports the state of the allocation as well as its
current resource usage. By supplying the `-stats` flag, more detailed resource
usage statistics will be reported.

To see the logs of a task, we can use the
[`logs` command](/docs/commands/logs.html):

```
$ nomad logs 8ba85cef redis
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.2.1 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'
...
```

## Modifying a Job

The definition of a job is not static, and is meant to be updated over time.
You may update a job to change the Docker container, to update the application
version, or to change the count of a task group to scale with load.

For now, edit the `example.nomad` file to update the count and set it to 3:

```
# The "count" parameter specifies the number of the task groups that should
# be running under this group. This value must be non-negative and defaults
# to 1.
count = 3
```

Once you have finished modifying the job specification, use the
[`plan` command](/docs/commands/plan.html) to invoke a dry-run of the scheduler
to see what would happen if you ran the updated job:

```
$ nomad plan example.nomad
+/- Job: "example"
+/- Task Group: "cache" (2 create, 1 in-place update)
  +/- Count: "1" => "3" (forces create)
      Task: "redis"

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 6
To submit the job with version verification run:

nomad run -check-index 6 example.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
```

We can see that the scheduler detected the change in count and informs us that
it will cause 2 new instances to be created. The in-place update will push the
updated job specification to the existing allocation and will not cause any
service interruption. We can then submit the job with the `run` command that
`plan` emitted.

By running with the `-check-index` flag, Nomad checks that the job has not
been modified since the plan was run. This is useful if multiple people are
interacting with the job at the same time, to ensure the job hasn't changed
before you apply your modifications.

```
$ nomad run -check-index 6 example.nomad
==> Monitoring evaluation "127a49d0"
    Evaluation triggered by job "example"
    Evaluation within deployment: "2e2c818f"
    Allocation "8ab24eef" created: node "171a583b", group "cache"
    Allocation "f6c29874" created: node "171a583b", group "cache"
    Allocation "8ba85cef" modified: node "171a583b", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "127a49d0" finished with status "complete"
```

Because we set the count of the task group to three, Nomad created two
additional allocations to reach the desired state. Running the same job
specification again is idempotent: no new allocations will be created.

Now, let's try an application update. In this case, we will simply change
the version of Redis we want to run. Edit the `example.nomad` file and change
the Docker image from "redis:3.2" to "redis:4.0":

```
# Configure Docker driver with the image
config {
  image = "redis:4.0"
}
```

We can run `plan` again to see what will happen if we submit this change:

```
$ nomad plan example.nomad
+/- Job: "example"
+/- Task Group: "cache" (1 create/destroy update, 2 ignore)
  +/- Task: "redis" (forces create/destroy update)
    +/- Config {
          +/- image:           "redis:3.2" => "redis:4.0"
              port_map[0][db]: "6379"
        }

Scheduler dry-run:
- All tasks successfully allocated.
- Rolling update, next evaluation will be in 10s.

Job Modify Index: 42
To submit the job with version verification run:

nomad run -check-index 42 example.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
```

Here we can see that `plan` reports it will ignore two allocations and perform
one create/destroy update, which stops the old allocation and starts the new
allocation, because we have changed the version of Redis to run.

The plan reports only a single change because the job file has an `update`
stanza that tells Nomad to perform rolling updates when the job changes, at a
rate of `max_parallel`, which is set to 1 in the example file.

Once ready, use `run` to push the updated specification:

```
$ nomad run example.nomad
==> Monitoring evaluation "02161762"
    Evaluation triggered by job "example"
    Evaluation within deployment: "429f8160"
    Allocation "de4e3f7a" created: node "6c027e58", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "02161762" finished with status "complete"
```

After running, the rolling upgrade can be followed by running `nomad status`
and watching the deployed count.

We can see that Nomad handled the update in three phases, updating only a
single allocation in each phase and waiting for it to be healthy for the
`min_healthy_time` of 10 seconds before moving on to the next. The update
strategy can be configured, but rolling updates make it easy to upgrade an
application at large scale.

## Stopping a Job

So far we've created, run, and modified a job. The final step in the job
lifecycle is stopping the job. This is done with the
[`stop` command](/docs/commands/stop.html):

```
$ nomad stop example
==> Monitoring evaluation "ddc4eb7d"
    Evaluation triggered by job "example"
    Evaluation within deployment: "ec46fb3b"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "ddc4eb7d" finished with status "complete"
```

When we stop a job, Nomad creates an evaluation that is used to stop all
the existing allocations.
If we now query the job status, we can see it is now marked as
`dead (stopped)`, indicating that the job has been stopped and Nomad is no
longer running it:

```
$ nomad status example
ID            = example
Name          = example
Submit Date   = 07/26/17 17:51:01 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = dead (stopped)
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         0        0       3         0

Latest Deployment
ID          = ec46fb3b
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       3        3       3        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status    Created At
8ace140d  2cfe061e  cache       2        stop     complete  07/26/17 17:51:01 UTC
8af5330a  2cfe061e  cache       2        stop     complete  07/26/17 17:51:01 UTC
df50c3ae  2cfe061e  cache       2        stop     complete  07/26/17 17:51:01 UTC
```

If we wanted to start the job again, we could simply `run` it again.

## Next Steps

Users of Nomad primarily interact with jobs, and we've now seen how to create
and scale a job, perform an application update, and tear a job down. Next we
will add another Nomad client to [create our first cluster](cluster.html).

[jobspec]: /docs/job-specification/index.html "Nomad Job Specification"
[hcl]: https://github.com/hashicorp/hcl "HashiCorp Configuration Language"