---
layout: "intro"
page_title: "Clustering"
sidebar_current: "getting-started-cluster"
description: |-
  Join another Nomad client to create your first cluster.
---

# Clustering

We have started our first agent and run a job against it in development mode.
That demonstrated the ease of use and workflow of Nomad, but did not show how
it could be extended to a scalable, production-grade configuration. In this step,
we will create our first real cluster with multiple nodes.

## Starting the Server

The first step is to create the config file for the server. Either download the
[file from the repository][server.hcl], or paste this into a file called
`server.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/server1"

# Enable the server
server {
    enabled = true

    # Self-elect, should be 3 or 5 for production
    bootstrap_expect = 1
}
```

This is a fairly minimal server configuration file, but it
is enough to start an agent in server-only mode and have it
elected as leader. The major change to make for production is
to run more than one server and to set the corresponding
`bootstrap_expect` value.
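
For this guide we keep a single server, but in production each of the three
(or five) servers would run a stanza along these lines. This is only a sketch;
the `retry_join` addresses below are placeholders for your other servers:

```hcl
server {
    enabled = true

    # Wait for three servers before electing a leader
    bootstrap_expect = 3

    # Placeholder addresses of the other servers; in production a
    # discovery system such as Consul is often used instead
    retry_join = ["10.0.0.2", "10.0.0.3"]
}
```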

Once the file is created, start the agent in a new tab:

```text
$ nomad agent -config server.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: false
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true
               Version: 0.7.0

==> Nomad agent started! Log data will stream in below:

    [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    [DEBUG] raft: Votes needed: 1
    [DEBUG] raft: Vote granted. Tally: 1
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    [INFO] nomad: cluster leadership acquired
    [INFO] raft: Disabling EnableSingleNode (bootstrap)
    [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
```

We can see above that client mode is disabled and that this agent is
running only as a server. This means the server will manage state and
make scheduling decisions but will not run any tasks.
Now we need some agents to run tasks!
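
Before adding clients, you can optionally verify the server from another tab
with `nomad server members`. With a single bootstrapped server, the output will
look roughly like this (the name and build reflect your agent):

```text
$ nomad server members
Name          Address    Port  Status  Leader  Protocol  Build  Datacenter  Region
nomad.global  127.0.0.1  4648  alive   true    2         0.7.0  dc1         global
```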

## Starting the Clients

Similar to the server, we must first configure the clients. Either download
the configuration for `client1` and `client2` from the
[repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant), or
paste the following into `client1.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client1"

# Give the agent a unique name. Defaults to hostname
name = "client1"

# Enable the client
client {
    enabled = true

    # For demo assume we are talking to server1. For production,
    # this should be like "nomad.service.consul:4647" and a system
    # like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1
ports {
    http = 5656
}
```

Copy that file to `client2.hcl` and change the `data_dir` to `/tmp/client2`, the
`name` to `client2`, and the `http` port to 5657, as in the sketch below.
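
With only those three changes, `client2.hcl` looks like this:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client2"

# Give the agent a unique name. Defaults to hostname
name = "client2"

# Enable the client
client {
    enabled = true

    # For demo assume we are talking to server1. For production,
    # this should be like "nomad.service.consul:4647" and a system
    # like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid collisions with server1 and client1
ports {
    http = 5657
}
```

Once you have created both `client1.hcl` and `client2.hcl`, open a tab for each
and start the agents: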

```text
$ sudo nomad agent -config client1.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: false
               Version: 0.7.0

==> Nomad agent started! Log data will stream in below:

    [DEBUG] client: applied fingerprints [host memory storage arch cpu]
    [DEBUG] client: available drivers [docker exec]
    [DEBUG] client: node registration complete
    ...
```

In the output we can see the agent is running in client mode only.
This agent will be available to run tasks but will not participate
in managing the cluster or making scheduling decisions.

Using the [`node status` command](/docs/commands/node/status.html)
we should see both nodes in the `ready` state:

```text
$ nomad node status
ID        DC   Name     Class   Drain  Eligibility  Status
fca62612  dc1  client1  <none>  false  eligible     ready
c887deef  dc1  client2  <none>  false  eligible     ready
```

We now have a simple three-node cluster running. The only difference
between a demo and a full production cluster is that we are running a
single server instead of three or five.

## Submit a Job

Now that we have a simple cluster, we can use it to schedule a job.
We should still have the `example.nomad` job file from before, but
verify that the `count` is still set to 3.
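
In other words, the `cache` task group in `example.nomad` should still contain
a `count` line like the following, with the rest of the group unchanged:

```hcl
group "cache" {
    # Three instances give the scheduler something to spread
    # across the two client nodes
    count = 3

    # ... rest of the group from the previous step ...
}
```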

Then, use the [`job run` command](/docs/commands/job/run.html) to submit the job:

```text
$ nomad job run example.nomad
==> Monitoring evaluation "8e0a7cf9"
    Evaluation triggered by job "example"
    Evaluation within deployment: "0917b771"
    Allocation "501154ac" created: node "c887deef", group "cache"
    Allocation "7e2b3900" created: node "fca62612", group "cache"
    Allocation "9c66fcaf" created: node "c887deef", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "8e0a7cf9" finished with status "complete"
```

We can see in the output that the scheduler assigned two of the
tasks to one of the client nodes and the remaining task to the
second client.
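
To inspect any single placement, pass one of the allocation IDs above (yours
will differ) to the `alloc status` command:

```text
$ nomad alloc status 501154ac
```

This reports which node the allocation was placed on, its desired and client
status, and recent task events.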

We can again use the [`status` command](/docs/commands/status.html) to verify:

```text
$ nomad status example
ID            = example
Name          = example
Submit Date   = 07/26/17 16:34:58 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         3        0       0         0

Latest Deployment
ID          = fc49bd6c
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       3        3       0        0

Allocations
ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
501154ac  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
7e2b3900  8e0a7cf9  fca62612  cache       run      running  08/08/16 21:03:19 CDT
9c66fcaf  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
```

We can see that all our tasks have been allocated and are running.
Once we are satisfied that our job is happily running, we can tear
it down with `nomad job stop`.
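
When you are ready to tear it down, the stop command prints an evaluation
monitor much like the one shown for `job run` above:

```text
$ nomad job stop example
```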

## Next Steps

Nomad is now up and running. The cluster can be entirely managed from the
command line, but Nomad also comes with a web interface that is hosted alongside
the HTTP API. Next, we'll [visit the UI in the browser](ui.html).

[server.hcl]: https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/server.hcl