---
layout: "docs"
page_title: "Scheduling"
sidebar_current: "docs-internals-scheduling"
description: |-
  Learn about how scheduling works in Nomad.
---

# Scheduling

Scheduling is a core function of Nomad. It is the process of assigning tasks
from jobs to client machines. This process must respect the constraints as declared
in the job, and optimize for resource utilization. This page documents the details
of how scheduling works in Nomad to help both users and developers
build a mental model. The design is heavily inspired by Google's
work on both [Omega: flexible, scalable schedulers for large compute clusters](https://research.google.com/pubs/pub41684.html)
and [Large-scale cluster management at Google with Borg](https://research.google.com/pubs/pub43438.html).

~> **Advanced Topic!** This page covers technical details
of Nomad. You do not need to understand these details to
effectively use Nomad. The details are documented here for
those who wish to learn about them without having to go
spelunking through the source code.

# Scheduling in Nomad

[![Nomad Data Model](/assets/images/nomad-data-model.png)](/assets/images/nomad-data-model.png)

There are four primary "nouns" in Nomad: jobs, nodes, allocations, and evaluations.
Jobs are submitted by users and represent a _desired state_. A job is a declarative description
of tasks to run which are bounded by constraints and require resources. Tasks can be scheduled on
nodes in the cluster running the Nomad client. The mapping of tasks in a job to clients is done
using allocations. An allocation is used to declare that a set of tasks in a job should be run
on a particular node. Scheduling is the process of determining the appropriate allocations and
is done as part of an evaluation.
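The relationships among these four nouns can be sketched as simplified Go structs. This is an illustrative toy model, not Nomad's actual types (the real structs live in Nomad's `structs` package and carry far more fields); every field name here is an assumption chosen for clarity:

```go
package main

import "fmt"

// Job is the user-submitted desired state: a declarative set of
// tasks bounded by constraints. (Field names are illustrative.)
type Job struct {
	ID    string
	Type  string // e.g. "service", "batch", or "system"
	Tasks []string
}

// Node is a client machine that can run tasks.
type Node struct {
	ID     string
	Status string // e.g. "ready"
}

// Allocation declares that a set of tasks from a job should
// run on a particular node.
type Allocation struct {
	JobID  string
	NodeID string
	Tasks  []string
}

// Evaluation asks a scheduler to reconcile desired state (jobs)
// with emergent state (nodes and existing allocations).
type Evaluation struct {
	JobID       string
	TriggeredBy string // e.g. "job-register", "node-update"
	Status      string // "pending" until processed
}

func main() {
	job := Job{ID: "web", Type: "service", Tasks: []string{"frontend"}}
	eval := Evaluation{JobID: job.ID, TriggeredBy: "job-register", Status: "pending"}
	alloc := Allocation{JobID: job.ID, NodeID: "node-1", Tasks: job.Tasks}
	fmt.Println(eval.Status, "->", alloc.NodeID)
}
```

Scheduling, then, is the process of producing `Allocation` values like the one above while processing an `Evaluation`.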
An evaluation is created any time the external state, either desired or emergent, changes. The desired
state is based on jobs, meaning the desired state changes if a new job is submitted, an
existing job is updated, or a job is deregistered. The emergent state is based on the client
nodes, and so we must handle the failure of any clients in the system. These events trigger
the creation of a new evaluation, as Nomad must _evaluate_ the state of the world and reconcile
it with the desired state.

This diagram shows the flow of an evaluation through Nomad:

[![Nomad Evaluation Flow](/assets/images/nomad-evaluation-flow.png)](/assets/images/nomad-evaluation-flow.png)

The lifecycle of an evaluation begins with an event causing the evaluation to be
created. Evaluations are created in the `pending` state and are enqueued into the
evaluation broker. There is a single evaluation broker which runs on the leader server.
The evaluation broker is used to manage the queue of pending evaluations, provide priority ordering,
and ensure at-least-once delivery.

Nomad servers run scheduling workers, defaulting to one per CPU core, which are used to
process evaluations. The workers dequeue evaluations from the broker, and then invoke
the appropriate scheduler as specified by the job. Nomad ships with a `service` scheduler
that optimizes for long-lived services, a `batch` scheduler that is used for fast placement
of batch jobs, a `system` scheduler that is used to run jobs on every node,
and a `core` scheduler which is used for internal maintenance.
Nomad can be extended to support custom schedulers as well.

Schedulers are responsible for processing an evaluation and generating an allocation _plan_.
The plan is the set of allocations to evict, update, or create.
The specific logic used to
generate a plan may vary by scheduler, but generally the scheduler needs to first reconcile
the desired state with the real state to determine what must be done. New allocations need
to be placed and existing allocations may need to be updated, migrated, or stopped.

Placing allocations is split into two distinct phases, feasibility
checking and ranking. In the first phase the scheduler finds nodes that are
feasible by filtering out unhealthy nodes, those missing necessary drivers, and those
failing the specified constraints.

The second phase is ranking, where the scheduler scores feasible nodes to find the best fit.
Scoring is primarily based on bin packing, which is used to optimize the resource utilization
and density of applications, but is also augmented by affinity and anti-affinity rules.
Once the scheduler has ranked enough nodes, the highest-ranking node is selected and
added to the allocation plan.

When planning is complete, the scheduler submits the plan to the leader, which adds
the plan to the plan queue. The plan queue manages pending plans, provides priority
ordering, and allows Nomad to handle concurrency races. Multiple schedulers run
in parallel without locking or reservations, making Nomad optimistically concurrent.
As a result, schedulers might overlap work on the same node and cause resource
over-subscription. The plan queue allows the leader node to protect against this and
do partial or complete rejections of a plan.

As the leader processes plans, it creates allocations when there is no conflict
and otherwise informs the scheduler of a failure in the plan result. The plan result
provides feedback to the scheduler, allowing it to terminate or explore alternate plans
if the previous plan was partially or completely rejected.
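The two placement phases can be sketched in Go against a toy node and task model. This is a minimal illustration of the feasibility-then-ranking idea, not Nomad's actual scheduler code; the types, field names, and the crude scoring function are all assumptions made for the example:

```go
package main

import "fmt"

// node is a toy client model: health, available task drivers,
// and remaining CPU (MHz) and memory (MB). Illustrative only.
type node struct {
	id               string
	healthy          bool
	drivers          map[string]bool
	cpuFree, memFree int
}

// task is a toy resource ask against a required driver.
type task struct {
	driver   string
	cpu, mem int
}

// feasible is phase one: filter out unhealthy nodes, nodes missing
// the required driver, and nodes that fail the resource constraints.
func feasible(nodes []node, t task) []node {
	var out []node
	for _, n := range nodes {
		if n.healthy && n.drivers[t.driver] && n.cpuFree >= t.cpu && n.memFree >= t.mem {
			out = append(out, n)
		}
	}
	return out
}

// score is a crude bin-packing heuristic for phase two: prefer the
// node that would be left with the LEAST free capacity, packing
// tasks densely. Higher score means a better (tighter) fit.
func score(n node, t task) int {
	return -(n.cpuFree - t.cpu) - (n.memFree - t.mem)
}

// place runs both phases and returns the best node's ID, or "" if
// no node is feasible.
func place(nodes []node, t task) string {
	cands := feasible(nodes, t)
	if len(cands) == 0 {
		return ""
	}
	best := cands[0]
	for _, n := range cands[1:] {
		if score(n, t) > score(best, t) {
			best = n
		}
	}
	return best.id
}

func main() {
	nodes := []node{
		{id: "a", healthy: true, drivers: map[string]bool{"docker": true}, cpuFree: 4000, memFree: 8192},
		{id: "b", healthy: true, drivers: map[string]bool{"docker": true}, cpuFree: 500, memFree: 1024},
		{id: "c", healthy: false, drivers: map[string]bool{"docker": true}, cpuFree: 8000, memFree: 16384},
	}
	t := task{driver: "docker", cpu: 250, mem: 512}
	fmt.Println(place(nodes, t)) // bin packing prefers the fuller node "b"
}
```

Note how node `c` is filtered in phase one (unhealthy) and the nearly-empty node `a` loses the ranking in phase two: bin packing deliberately fills existing nodes before spreading onto idle ones.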
Once the scheduler has finished processing an evaluation, it updates the status of
the evaluation and acknowledges delivery with the evaluation broker. This completes
the lifecycle of an evaluation. Allocations that were created, modified, or deleted
as a result will be picked up by client nodes and will begin execution.
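As a closing illustration, the leader's serialized conflict check on the plan queue can be sketched in Go. This is a toy model of the idea only, under assumed names and a single CPU dimension; the real plan applier considers full resource vectors and much more state:

```go
package main

import "fmt"

// allocRequest is a proposed allocation from a scheduler's plan:
// place a given CPU ask on a node. (Toy, single-dimension model.)
type allocRequest struct {
	nodeID string
	cpu    int
}

// planResult reports which proposed allocations were accepted,
// giving the scheduler feedback so it can replan rejected work.
type planResult struct {
	accepted []allocRequest
	rejected []allocRequest
}

// applyPlan sketches the leader's conflict check: plans are applied
// one at a time against the authoritative free-capacity view, so two
// optimistically concurrent schedulers cannot over-subscribe a node.
// A plan may be partially rejected rather than failed outright.
func applyPlan(freeCPU map[string]int, plan []allocRequest) planResult {
	var res planResult
	for _, a := range plan {
		if freeCPU[a.nodeID] >= a.cpu {
			freeCPU[a.nodeID] -= a.cpu // commit the allocation
			res.accepted = append(res.accepted, a)
		} else {
			res.rejected = append(res.rejected, a) // feedback to the scheduler
		}
	}
	return res
}

func main() {
	free := map[string]int{"node-1": 1000}
	// Two schedulers raced and both targeted node-1.
	planA := []allocRequest{{nodeID: "node-1", cpu: 800}}
	planB := []allocRequest{{nodeID: "node-1", cpu: 800}}
	fmt.Println(len(applyPlan(free, planA).accepted)) // first plan fits
	fmt.Println(len(applyPlan(free, planB).rejected)) // second would over-subscribe
}
```

Because only the leader commits plans, this check needs no locks on the scheduler side: workers plan optimistically in parallel and simply replan whatever the leader rejects.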