---
layout: "guides"
page_title: "Affinity"
sidebar_current: "guides-operating-a-job-advanced-scheduling-affinity"
description: |-
   The following guide walks the user through using the affinity stanza in Nomad.
---

# Expressing Job Placement Preferences with Affinities

The [affinity][affinity-stanza] stanza allows operators to express placement preferences for their jobs on particular types of nodes. Note that there is a key difference between the [constraint][constraint] stanza and the affinity stanza. The constraint stanza strictly filters where jobs are run based on [attributes][attributes] and [client metadata][client-metadata]. If no nodes are found to match, the placement does not succeed. The affinity stanza acts like a "soft constraint." Nomad will attempt to match the desired affinity, but placement will succeed even if no nodes match the desired criteria. This is done in conjunction with scoring based on the Nomad scheduler's bin packing algorithm which you can read more about [here][scheduling].
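
For example, the two stanzas below target the same node attribute but behave differently at placement time (a minimal sketch; the attribute and value are illustrative):

```hcl
# Hard requirement: nodes outside dc2 are filtered out entirely.
constraint {
  attribute = "${node.datacenter}"
  value     = "dc2"
}

# Soft preference: nodes in dc2 score higher, but other nodes remain eligible.
affinity {
  attribute = "${node.datacenter}"
  value     = "dc2"
  weight    = 100
}
```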
    12  
    13  ## Reference Material
    14  
    15  - The [affinity][affinity-stanza] stanza documentation
    16  - [Scheduling][scheduling] with Nomad
    17  
    18  ## Estimated Time to Complete
    19  
    20  20 minutes
    21  
    22  ## Challenge
    23  
    24  Your application can run in datacenters `dc1` and `dc2`, but you have a strong preference to run it in `dc2`. Configure your job to tell the scheduler your preference while still allowing it to place your workload in `dc1` if the desired resources aren't available.
    25  
    26  ## Solution
    27  
    28  Specify an affinity with the proper [weight][weight] so that the Nomad scheduler can find the best nodes on which to place your job. The affinity weight will be included when scoring nodes for placement along with other factors like the bin packing algorithm.
    29  
    30  ## Prerequisites
    31  
    32  To perform the tasks described in this guide, you need to have a Nomad
    33  environment with Consul installed. You can use this
    34  [repo](https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud)
    35  to easily provision a sandbox environment. This guide will assume a cluster with
    36  one server node and three client nodes.
    37  
    38  -> **Please Note:** This guide is for demo purposes and is only using a single server
    39  node. In a production cluster, 3 or 5 server nodes are recommended.
    40  
    41  ## Steps
    42  
    43  ### Step 1: Place One of the Client Nodes in a Different Datacenter
    44  
We are going to express our job placement preference based on the datacenter our
nodes are located in. Choose one of your client nodes and edit `/etc/nomad.d/nomad.hcl` to change its location to `dc2`. A snippet of an example configuration file with the required change is shown below.
    47  
    48  ```shell
    49  data_dir = "/opt/nomad/data"
    50  bind_addr = "0.0.0.0"
    51  datacenter = "dc2"
    52  
    53  # Enable the client
    54  client {
    55    enabled = true
    56  ...
    57  ```

After making the change on your chosen client node, restart the Nomad service:

```shell
$ sudo systemctl restart nomad
```

If everything worked correctly, you should be able to run the `nomad` [node status][node-status] command and see that one of your nodes is now in datacenter `dc2`.

```shell
$ nomad node status
ID        DC   Name              Class   Drain  Eligibility  Status
3592943e  dc1  ip-172-31-27-159  <none>  false  eligible     ready
3dea0188  dc1  ip-172-31-16-175  <none>  false  eligible     ready
6b6e9518  dc2  ip-172-31-27-25   <none>  false  eligible     ready
```
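
If you are still logged in to the client you reconfigured, you can also verify it directly from that machine with the `-self` flag (an optional quick check; run it on that client node). The node detail should now report datacenter `dc2`.

```shell
# Run on the reconfigured client to show its own node details.
$ nomad node status -self
```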

### Step 2: Create a Job with the `affinity` Stanza

Create a file with the name `redis.nomad` and place the following content in it:

```hcl
job "redis" {
  datacenters = ["dc1", "dc2"]
  type = "service"

  affinity {
    attribute = "${node.datacenter}"
    value     = "dc2"
    weight    = 100
  }

  group "cache1" {
    count = 4

    task "redis" {
      driver = "docker"

      config {
        image = "redis:latest"
        port_map {
          db = 6379
        }
      }

      resources {
        network {
          port "db" {}
        }
      }

      service {
        name = "redis-cache"
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Note that we used the `affinity` stanza and specified `dc2` as the
value for the [attribute][attributes] `${node.datacenter}`. We used the value `100` for the [weight][weight], which will cause the Nomad scheduler to rank nodes in datacenter `dc2` with a higher score. Keep in mind that weights can range from -100 to 100, inclusive. Negative weights serve as anti-affinities, which cause Nomad to avoid placing allocations on nodes that match the criteria.
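
For example, flipping the sign of the weight turns the preference into an anti-affinity (a minimal sketch for illustration only; it is not part of this guide's job file):

```hcl
# Nodes in dc2 are scored lower, so Nomad tends to place allocations elsewhere,
# but dc2 nodes remain eligible if no better option is available.
affinity {
  attribute = "${node.datacenter}"
  value     = "dc2"
  weight    = -100
}
```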

### Step 3: Register the Job `redis.nomad`

Run the Nomad job with the following command:

```shell
$ nomad run redis.nomad
==> Monitoring evaluation "11388ef2"
    Evaluation triggered by job "redis"
    Allocation "0dfcf0ba" created: node "6b6e9518", group "cache1"
    Allocation "89a9aae9" created: node "3592943e", group "cache1"
    Allocation "9a00f742" created: node "6b6e9518", group "cache1"
    Allocation "fc0f21bc" created: node "3dea0188", group "cache1"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "11388ef2" finished with status "complete"
```

Note that two of the allocations in this example have been placed on node `6b6e9518`. This is the node we configured to be in datacenter `dc2`. The Nomad scheduler selected this node because of the affinity we specified. Not all of the allocations have been placed on this node, however, because the Nomad scheduler considers other factors in the scoring, such as bin packing. This helps avoid placing too many instances of the same job on a node and prevents reduced capacity during a node-level failure. We will take a detailed look at the scoring in the next few steps.
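
As an optional aside, you can preview how the scheduler would react to a job file before (re)submitting it, for example after experimenting with a different weight, by running a dry-run plan (the file name matches the one created above):

```shell
# Dry run: shows what would change without actually registering the job.
$ nomad job plan redis.nomad
```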

### Step 4: Check the Status of the `redis` Job

At this point, we are going to check the status of our job and verify where our
allocations have been placed. Run the following command:

```shell
$ nomad status redis
```

You should see 4 instances of your job running in the `Summary` section of the
output as shown below:

```shell
...
Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache1      0       0         4        0       0         0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created    Modified
0dfcf0ba  6b6e9518  cache1      0        run      running  1h44m ago  1h44m ago
89a9aae9  3592943e  cache1      0        run      running  1h44m ago  1h44m ago
9a00f742  6b6e9518  cache1      0        run      running  1h44m ago  1h44m ago
fc0f21bc  3dea0188  cache1      0        run      running  1h44m ago  1h44m ago
```

You can cross-check this output with the results of the `nomad node status` command to verify that the majority of your workload has been placed on the node in `dc2` (in our case, that node is `6b6e9518`).
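
You can also list the allocations running on that specific node by passing its ID to `nomad node status` (the ID below is from this example; yours will differ). The `Allocations` section of the output should show the two `redis` allocations placed in `dc2`:

```shell
# Show node details, including the allocations currently placed on it.
$ nomad node status 6b6e9518
```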

### Step 5: Obtain Detailed Scoring Information on Job Placement

The Nomad scheduler will not always place all of your workload on nodes you have specified in the `affinity` stanza even if the resources are available. This is because affinity scoring is combined with other metrics as well before making a scheduling decision. In this step, we will take a look at some of those other factors.

Using the output from the previous step, find an allocation that has been placed
on a node in `dc2` and use the `nomad` [alloc status][alloc status] command with
the [verbose][verbose] option to obtain detailed scoring information on it. In
this example, we will use the allocation ID `0dfcf0ba` (your allocation IDs will
be different).

```shell
$ nomad alloc status -verbose 0dfcf0ba
```

The resulting output will show the `Placement Metrics` section at the bottom.

```shell
...
Placement Metrics
Node                                  binpack  job-anti-affinity  node-reschedule-penalty  node-affinity  final score
6b6e9518-d2a4-82c8-af3b-6805c8cdc29c  0.33     0                  0                        1              0.665
3dea0188-ae06-ad98-64dd-a761ab2b1bf3  0.33     0                  0                        0              0.33
3592943e-67e4-461f-d888-d5842372a4d4  0.33     0                  0                        0              0.33
```
   194  
   195  Note that the results from the `binpack`, `job-anti-affinity`,
   196  `node-reschedule-penalty`, and `node-affinity` columns are combined to produce the
   197  numbers listed in the `final score` column for each node. The Nomad scheduler
   198  uses the final score for each node in deciding where to make placements.
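
In the sample output above, each final score works out to the average of the factors that contributed a non-zero score: for node `6b6e9518`, that is (0.33 + 1) / 2 = 0.665, while the other two nodes have only the bin packing score, so their final score remains 0.33.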

## Next Steps

Experiment with the weight provided in the `affinity` stanza (the value can be
from -100 through 100) and observe how the final score given to each node
changes (use the `nomad alloc status` command as shown in the previous step).

[affinity-stanza]: /docs/job-specification/affinity.html
[alloc status]: /docs/commands/alloc/status.html
[attributes]: /docs/runtime/interpolation.html#node-variables-
[constraint]: /docs/job-specification/constraint.html
[client-metadata]: /docs/configuration/client.html#meta
[node-status]: /docs/commands/node/status.html
[scheduling]: /docs/internals/scheduling/scheduling.html
[verbose]: /docs/commands/alloc/status.html#verbose
[weight]: /docs/job-specification/affinity.html#weight