---
layout: docs
page_title: resources Stanza - Job Specification
description: |-
  The "resources" stanza describes the requirements a task needs to execute.
  Resource requirements include memory, cpu, and more.
---

# `resources` Stanza

<Placement groups={['job', 'group', 'task', 'resources']} />

The `resources` stanza describes the requirements a task needs to execute.
Resource requirements include memory, CPU, and more.

```hcl
job "docs" {
  group "example" {
    task "server" {
      resources {
        cpu    = 100
        memory = 256

        device "nvidia/gpu" {
          count = 2
        }
      }
    }
  }
}
```

## `resources` Parameters

- `cpu` `(int: 100)` - Specifies the CPU required to run this task in MHz.

- `cores` <code>(`int`: &lt;optional&gt;)</code> <sup>1.1 Beta</sup> - Specifies the number of CPU cores to
  reserve for the task. This may not be used with `cpu`.

- `memory` `(int: 300)` - Specifies the memory required in MB.

- `memory_max` <code>(`int`: &lt;optional&gt;)</code> <sup>1.1 Beta</sup> - Optionally specifies the maximum
  memory, in MB, the task may use if the client has excess memory capacity. See
  [Memory Oversubscription](#memory-oversubscription) for more details.

- `device` <code>([Device][]: &lt;optional&gt;)</code> - Specifies the device
  requirements. This may be repeated to request multiple device types.

## `resources` Examples

The following examples only show the `resources` stanzas. Remember that the
`resources` stanza is only valid in the placements listed above.

### Cores

This example specifies that the task requires 2 reserved cores. With this
stanza, Nomad will find a client with enough spare capacity to reserve 2 cores
exclusively for the task. Unlike the `cpu` field, the task will not share CPU
time with any other tasks managed by Nomad on the client.

```hcl
resources {
  cores = 2
}
```

If `cores` and `cpu` are both defined in the same resources stanza, validation
of the job will fail.

### Memory

This example specifies the task requires 2 GB of RAM to operate. 2 GB is the
equivalent of 2000 MB:

```hcl
resources {
  memory = 2000
}
```

### Devices

This example shows a device constraint as specified in the [device][] stanza
which requires two NVIDIA GPUs to be made available:

```hcl
resources {
  device "nvidia/gpu" {
    count = 2
  }
}
```

## Memory Oversubscription

Setting task memory limits requires balancing the risk of interrupting tasks
against the risk of wasting resources. If a task memory limit is set too low,
the task may exceed the limit and be interrupted; if the task memory is set too
high, the cluster is left underutilized.

To help maximize cluster memory utilization while allowing a safety margin for
unexpected load spikes, Nomad 1.1 lets job authors set two separate memory
limits:

* `memory`: the reserve limit, representing the task's typical memory usage.
  This number is used by the Nomad scheduler to reserve and place the task.

* `memory_max`: the maximum memory the task may use if the client has excess
  available memory. The task may be terminated if it exceeds this limit.
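
For example, a task with the following `resources` stanza (the values here are
only illustrative) is scheduled against a 256 MB reservation but may consume up
to 1 GB when the client has memory to spare:

```hcl
resources {
  # Reserve limit used by the scheduler to place the task.
  memory = 256

  # Upper bound the task may grow to when the client has excess capacity.
  memory_max = 1024
}
```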
If a client's memory becomes contended or low, the operating system will
pressure the running tasks to free up memory. If the contention persists, Nomad
may kill oversubscribed tasks and reschedule them to other clients. The exact
mechanism for memory pressure is specific to the task driver, operating system,
and application runtime.

The new max limit attribute is currently supported by the official `docker`,
`exec`, and `java` task drivers. Consult the documentation of
community-supported task drivers for their memory oversubscription support.

Memory oversubscription is opt-in. Nomad operators can enable [Memory
Oversubscription in the scheduler
configuration](/api-docs/operator/scheduler#update-scheduler-configuration).
Enterprise customers can use [Resource
Quotas](https://learn.hashicorp.com/tutorials/nomad/quotas) to limit the memory
oversubscription.

To avoid degrading the cluster experience, we recommend examining and
monitoring resource utilization and considering the following suggestions:

* Set `oom_score_adj` for Linux host services that aren't managed by Nomad,
  e.g. Docker, logging services, and the Nomad agent itself. For systemd
  services, you can use the [`OOMScoreAdj`
  field](https://github.com/hashicorp/nomad/blob/v1.0.0/dist/systemd/nomad.service#L25).

* Monitor hosts for memory utilization and set alerts on out-of-memory errors.

* Set the [client `reserved`](/docs/configuration/client#reserved) with enough
  memory for host services that aren't managed by Nomad as well as a buffer
  for the memory excess. For example, if the client reserved memory is 1 GB,
  the allocations on the host may exceed their soft memory limit by almost
  1 GB in aggregate before the memory becomes contended and allocations get
  killed. A sketch of such a configuration appears below.

[device]: /docs/job-specification/device 'Nomad device Job Specification'
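
For instance, a client agent configuration along these lines (the 1 GB value is
only illustrative) sets aside memory that Nomad will not schedule against:

```hcl
client {
  reserved {
    # Memory, in MB, held back from scheduling. It covers host services that
    # Nomad does not manage and acts as a buffer for oversubscribed tasks.
    memory = 1024
  }
}
```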