github.com/hhrutter/nomad@v0.6.0-rc2.0.20170723054333-80c4b03f0705/website/source/intro/getting-started/cluster.html.md

---
layout: "intro"
page_title: "Clustering"
sidebar_current: "getting-started-cluster"
description: |-
  Join another Nomad client to create your first cluster.
---

# Clustering

We have started our first agent and run a job against it in development mode.
That demonstrated the ease of use and the workflow of Nomad, but did not show
how it could be extended to a scalable, production-grade configuration. In this
step, we will create our first real cluster with multiple nodes.

## Starting the Server

The first step is to create the config file for the server. Either download
the file from the [repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant),
or paste this into a file called `server.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Set up data dir
data_dir = "/tmp/server1"

# Enable the server
server {
    enabled = true

    # Self-elect, should be 3 or 5 for production
    bootstrap_expect = 1
}
```

This is a fairly minimal server configuration file, but it
is enough to start an agent in server-only mode and have it
elected as a leader. The major change that should be made for
production is to run more than one server and to change the
corresponding `bootstrap_expect` value.
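
As a rough illustration (this is not part of the demo files), each of three
production servers might use a stanza like the sketch below; how those servers
then find and join each other depends on your environment, for example via the
`server-join` command or a join option in the configuration.

```hcl
# Hypothetical production-style server stanza: the same block would be used
# on each of the three servers.
server {
    enabled = true

    # Do not elect a leader until three servers have joined
    bootstrap_expect = 3
}
```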

Once the file is created, start the agent in a new tab:

```
$ sudo nomad agent -config server.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: false
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true

==> Nomad agent started! Log data will stream in below:

    [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    [DEBUG] raft: Votes needed: 1
    [DEBUG] raft: Vote granted. Tally: 1
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    [INFO] nomad: cluster leadership acquired
    [INFO] raft: Disabling EnableSingleNode (bootstrap)
    [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
```

We can see above that client mode is disabled, and that we are
only running as the server. This means that this server will manage
state and make scheduling decisions but will not run any tasks.
Now we need some agents to run tasks!

## Starting the Clients

As with the server, we must first configure the clients. Either download
the configuration for client1 and client2 from the
[repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant), or
paste the following into `client1.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Set up data dir
data_dir = "/tmp/client1"

# Enable the client
client {
    enabled = true

    # For the demo, assume we are talking to server1. In production,
    # this should be something like "nomad.service.consul:4647", with a
    # system like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1
ports {
    http = 5656
}
```

Copy that file to `client2.hcl` and change the `data_dir` to be
"/tmp/client2" and the `http` port to 5657.
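
The resulting `client2.hcl` should look roughly like this; it is identical to
`client1.hcl` apart from the data directory and the HTTP port:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Set up data dir
data_dir = "/tmp/client2"

# Enable the client
client {
    enabled = true

    # For the demo, assume we are talking to server1.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1 and client1
ports {
    http = 5657
}
```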

Once you've created both `client1.hcl` and `client2.hcl`, open a tab for each
and start the agents:

```
$ sudo nomad agent -config client1.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: false

==> Nomad agent started! Log data will stream in below:

    [DEBUG] client: applied fingerprints [host memory storage arch cpu]
    [DEBUG] client: available drivers [docker exec]
    [DEBUG] client: node registration complete
    ...
```

In the output we can see the agent is running in client mode only.
This agent will be available to run tasks but will not participate
in managing the cluster or making scheduling decisions.

Using the [`node-status` command](/docs/commands/node-status.html)
we should see both nodes in the `ready` state:

```
$ nomad node-status
ID        Datacenter  Name   Class   Drain  Status
fca62612  dc1         nomad  <none>  false  ready
c887deef  dc1         nomad  <none>  false  ready
```

We now have a simple three-node cluster running. The only difference
between this demo and a full production cluster is that we are running a
single server instead of three or five.
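
To double-check the server side as well, the `server-members` command
(assuming this version of Nomad uses the hyphenated command names, as with
`node-status` above) lists the servers known to the cluster; its output is
omitted here:

```
$ nomad server-members
```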

## Submit a Job

Now that we have a simple cluster, we can use it to schedule a job.
We should still have the `example.nomad` job file from before;
verify that the `count` is still set to 3.
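
That setting lives in the job's task group. Abbreviated, the relevant part of
`example.nomad` should look something like this:

```hcl
group "cache" {
    # Run three instances of this task group across the cluster
    count = 3

    # ... task and resource definitions unchanged from before ...
}
```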

Then, use the [`run` command](/docs/commands/run.html) to submit the job:

```
$ nomad run example.nomad
==> Monitoring evaluation "8e0a7cf9"
    Evaluation triggered by job "example"
    Allocation "501154ac" created: node "c887deef", group "cache"
    Allocation "7e2b3900" created: node "fca62612", group "cache"
    Allocation "9c66fcaf" created: node "c887deef", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "8e0a7cf9" finished with status "complete"
```

We can see in the output that the scheduler assigned two of the
tasks to one of the client nodes and the remaining task to the
second client.

We can again use the [`status` command](/docs/commands/status.html) to verify:

```
$ nomad status example
ID          = example
Name        = example
Type        = service
Priority    = 50
Datacenters = dc1
Status      = running
Periodic    = false

Allocations
ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
501154ac  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
7e2b3900  8e0a7cf9  fca62612  cache       run      running  08/08/16 21:03:19 CDT
9c66fcaf  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
```

We can see that all our tasks have been allocated and are running.
Once we are satisfied that our job is happily running, we can tear
it down with `nomad stop`.
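
For example, to stop the job we just ran (the evaluation monitoring output is
omitted here):

```
$ nomad stop example
```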

## Next Steps

We've now concluded the getting started guide; however, there are a number
of [next steps](next-steps.html) to continue learning about Nomad.