---
layout: intro
page_title: Jobs
sidebar_title: Jobs
description: 'Learn how to submit, modify and stop jobs in Nomad.'
---

# Jobs

Jobs are the primary configuration that users interact with when using
Nomad. A job is a declarative specification of tasks that Nomad should run.
Each job has a globally unique name and contains one or more task groups,
which are themselves collections of one or more tasks.

The format of jobs is documented in the [job specification][jobspec]. Jobs
can be written in either [HashiCorp Configuration Language][hcl] or JSON;
we recommend using JSON only when the configuration is generated by a machine.

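In HCL, the job, task group, and task hierarchy looks roughly like this (a trimmed sketch using the same names as the example generated below; most required stanzas are elided for brevity):

```hcl
# A job contains one or more groups; a group contains one or more tasks.
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
      }
    }
  }
}
```
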
## Running a Job

To get started, we will use the [`job init` command](/docs/commands/job/init), which
generates a skeleton job file:

```shell-session
$ nomad job init
Example job file written to example.nomad
```

You can view the contents of this file by running `cat example.nomad`. In this
example job file, we have declared a single task, 'redis', which uses
the Docker driver to run. The primary way you interact with Nomad
is the [`job run` command](/docs/commands/job/run). The `run` command takes
a job file and registers it with Nomad. It is used both to register new
jobs and to update existing jobs.

We can register our example job now:

```shell-session
$ nomad job run example.nomad
==> Monitoring evaluation "13ebb66d"
    Evaluation triggered by job "example"
    Allocation "883269bf" created: node "e42d6f19", group "cache"
    Evaluation within deployment: "b0a84e74"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "13ebb66d" finished with status "complete"
```

Anytime a job is updated, Nomad creates an evaluation to determine what
actions need to take place. In this case, because this is a new job, Nomad has
determined that an allocation should be created and has scheduled it on our
local agent.

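The evaluation ID printed by `run` can also be inspected directly with `nomad eval status` (output omitted here, as it varies by cluster):

```shell-session
$ nomad eval status 13ebb66d
```
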
To inspect the status of our job we use the [`status` command](/docs/commands/status):

```shell-session
$ nomad status example
ID            = example
Name          = example
Submit Date   = 10/31/17 22:58:40 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         1        0       0         0

Latest Deployment
ID          = b0a84e74
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       1        1       1        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created  Modified
8ba85cef  171a583b  cache       0        run      running  5m ago   5m ago
```

Here we can see that the result of our evaluation was the creation of an
allocation that is now running on the local node.

An allocation represents an instance of a task group placed on a node. To inspect
an allocation we use the [`alloc status` command](/docs/commands/alloc/status):

```shell-session
$ nomad alloc status 8ba85cef
ID                  = 8ba85cef
Eval ID             = 13ebb66d
Name                = example.cache[0]
Node ID             = e42d6f19
Job ID              = example
Job Version         = 0
Client Status       = running
Client Description  = <none>
Desired Status      = run
Desired Description = <none>
Created             = 5m ago
Modified            = 5m ago
Deployment ID       = fa882a5b
Deployment Health   = healthy

Task "redis" is "running"
Task Resources
CPU        Memory           Disk     Addresses
8/500 MHz  6.3 MiB/256 MiB  300 MiB  db: 127.0.0.1:22672

Task Events:
Started At     = 10/31/17 22:58:49 UTC
Finished At    = N/A
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                   Type        Description
10/31/17 22:58:49 UTC  Started     Task started by client
10/31/17 22:58:40 UTC  Driver      Downloading image redis:3.2
10/31/17 22:58:40 UTC  Task Setup  Building Task Directory
10/31/17 22:58:40 UTC  Received    Task received by client
```

We can see that Nomad reports the state of the allocation as well as its
current resource usage. Supplying the `-stats` flag reports more detailed
resource usage statistics.

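For example, to include the detailed statistics for our allocation (output omitted here):

```shell-session
$ nomad alloc status -stats 8ba85cef
```
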
To see the logs of a task, we can use the [`logs` command](/docs/commands/alloc/logs):

````shell-session
$ nomad alloc logs 8ba85cef redis
                 _._
            _.-``__ ''-._
       _.-``    `.  `_.  ''-._           Redis 3.2.1 (00000000/0) 64 bit
   .-`` .-```.  ```\/    _.,_ ''-._
  (    '      ,       .-`  | `,    )     Running in standalone mode
  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
  |    `-._   `._    /     _.-'    |     PID: 1
   `-._    `-._  `-./  _.-'    _.-'
  |`-._`-._    `-.__.-'    _.-'_.-'|
  |    `-._`-._        _.-'_.-'    |           http://redis.io
   `-._    `-._`-.__.-'_.-'    _.-'
  |`-._`-._    `-.__.-'    _.-'_.-'|
  |    `-._`-._        _.-'_.-'    |
   `-._    `-._`-.__.-'_.-'    _.-'
       `-._    `-.__.-'    _.-'
           `-._        _.-'
               `-.__.-'
...
````

## Modifying a Job

The definition of a job is not static; it is meant to be updated over time.
You may update a job to change the Docker container, to update the application version,
or to change the count of a task group to scale with load.

For now, edit the `example.nomad` file to update the count and set it to 3:

```hcl
# The "count" parameter specifies the number of the task groups that should
# be running under this group. This value must be non-negative and defaults
# to 1.
count = 3
```

Once you have finished modifying the job specification, use the [`job plan`
command](/docs/commands/job/plan) to invoke a dry-run of the scheduler to see
what would happen if you ran the updated job:

```shell-session
$ nomad job plan example.nomad
+/- Job: "example"
+/- Task Group: "cache" (2 create, 1 in-place update)
  +/- Count: "1" => "3" (forces create)
      Task: "redis"

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 7
To submit the job with version verification run:

nomad job run -check-index 7 example.nomad

When running the job with the check-index flag, the job will only be run if the
job modify index given matches the server-side version. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
```

We can see that the scheduler detected the change in count and informs us that
it will cause 2 new instances to be created. The in-place update pushes the
updated job specification to the existing allocation and will not cause any
service interruption. We can then run the job with the `run` command that
`plan` emitted.

By running with the `-check-index` flag, Nomad verifies that the job has not
been modified since the plan was run. This is useful when multiple people are
interacting with the same job, as it ensures the job hasn't changed before you
apply your modifications.

```shell-session
$ nomad job run -check-index 7 example.nomad
==> Monitoring evaluation "93d16471"
    Evaluation triggered by job "example"
    Evaluation within deployment: "0d06e1b6"
    Allocation "3249e320" created: node "e42d6f19", group "cache"
    Allocation "453b210f" created: node "e42d6f19", group "cache"
    Allocation "883269bf" modified: node "e42d6f19", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "93d16471" finished with status "complete"
```

Because we set the count of the task group to three, Nomad created two
additional allocations to reach the desired state. Running the same job
specification again is idempotent; no new allocations will be created.

Now, let's try an application update. In this case, we will simply change
the version of redis we want to run. Edit the `example.nomad` file and change
the Docker image from "redis:3.2" to "redis:4.0":

```hcl
# Configure Docker driver with the image
config {
    image = "redis:4.0"
}
```

We can run `plan` again to see what will happen if we submit this change:

```text
+/- Job: "example"
+/- Task Group: "cache" (1 create/destroy update, 2 ignore)
  +/- Task: "redis" (forces create/destroy update)
    +/- Config {
      +/- image:           "redis:3.2" => "redis:4.0"
          port_map[0][db]: "6379"
        }

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 1127
To submit the job with version verification run:

nomad job run -check-index 1127 example.nomad

When running the job with the check-index flag, the job will only be run if the
job modify index given matches the server-side version. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.
```

The plan output shows us that one allocation will be updated and that the other
two will be ignored. This is due to the `max_parallel` setting in the `update`
stanza, which is set to 1 to instruct Nomad to perform only a single change at
a time.

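The `update` stanza in the generated job file looks roughly like this (a sketch; the exact values in your `example.nomad` may differ):

```hcl
# Sketch of an update stanza; values here mirror the behavior described
# in this guide (one allocation at a time, healthy for 10s before the next).
update {
  max_parallel     = 1
  min_healthy_time = "10s"
  # healthy_deadline is shown as an assumed value; check your file.
  healthy_deadline = "3m"
}
```
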
Once ready, use `run` to push the updated specification:

```shell-session
$ nomad job run example.nomad
==> Monitoring evaluation "293b313a"
    Evaluation triggered by job "example"
    Evaluation within deployment: "f4047b3a"
    Allocation "27bd4a41" created: node "e42d6f19", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "293b313a" finished with status "complete"
```

After running, the rolling upgrade can be followed by running `nomad status` and
watching the deployed count.

We can see that Nomad handled the update in three phases, updating only a single
allocation in each phase and waiting for it to be healthy for the `min_healthy_time`
of 10 seconds before moving on to the next. The update strategy can be
configured, but rolling updates make it easy to upgrade an application at large
scale.

## Stopping a Job

So far we've created, run, and modified a job. The final step in a job lifecycle
is stopping the job. This is done with the [`job stop` command](/docs/commands/job/stop):

```shell-session
$ nomad job stop example
==> Monitoring evaluation "6d4cd6ca"
    Evaluation triggered by job "example"
    Evaluation within deployment: "f4047b3a"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "6d4cd6ca" finished with status "complete"
```

When we stop a job, Nomad creates an evaluation that is used to stop all
the existing allocations. If we now query the job status, we can see it is
marked as `dead (stopped)`, indicating that the job has been stopped and
Nomad is no longer running it:

```shell-session
$ nomad status example
ID            = example
Name          = example
Submit Date   = 11/01/17 17:30:40 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = dead (stopped)
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         0        0       6         0

Latest Deployment
ID          = f4047b3a
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       3        3       3        0

Allocations
ID        Node ID   Task Group  Version  Desired  Status    Created    Modified
8ace140d  2cfe061e  cache       2        stop     complete  5m ago     5m ago
8af5330a  2cfe061e  cache       2        stop     complete  6m ago     6m ago
df50c3ae  2cfe061e  cache       2        stop     complete  6m ago     6m ago
```

If we wanted to start the job again, we could simply `run` it again.

## Next Steps

Users of Nomad primarily interact with jobs, and we've now seen
how to create and scale a job, perform an application update,
and tear a job down. Next we will add another Nomad
client to [create our first cluster](/intro/getting-started/cluster).

[jobspec]: /docs/job-specification 'Nomad Job Specification'
[hcl]: https://github.com/hashicorp/hcl 'HashiCorp Configuration Language'