---
layout: intro
page_title: Clustering
sidebar_title: Clustering
description: Join another Nomad client to create your first cluster.
---

# Clustering

We have started our first agent and run a job against it in development mode.
This demonstrated the ease of use and the workflow of Nomad, but did not show how
this could be extended to a scalable, production-grade configuration. In this step,
we will create our first real cluster with multiple nodes.

## Starting the Server

The first step is to create the config file for the server. Either download the
[file from the repository][server.hcl], or paste this into a file called
`server.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/server1"

# Enable the server
server {
    enabled = true

    # Self-elect, should be 3 or 5 for production
    bootstrap_expect = 1
}
```

This is a fairly minimal server configuration file, but it
is enough to start an agent in server only mode and have it
elected as a leader. The major change that should be made for
production is to run more than one server, and to change the
corresponding `bootstrap_expect` value.
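
For production, the `server` stanza might instead look roughly like the sketch
below. This is illustrative only: it assumes three servers, and the
`retry_join` addresses are placeholders for wherever the other servers run.

```hcl
server {
    enabled = true

    # Expect three servers before electing a leader
    bootstrap_expect = 3

    # Placeholder addresses; in practice these often come from Consul or
    # cloud auto-join rather than being hard-coded.
    server_join {
        retry_join = ["10.0.0.10", "10.0.0.11"]
    }
}
```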

Once the file is created, start the agent in a new tab:

```shell-session
$ nomad agent -config server.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: false
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true
               Version: 0.7.0

==> Nomad agent started! Log data will stream in below:

    [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    [DEBUG] raft: Votes needed: 1
    [DEBUG] raft: Vote granted. Tally: 1
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    [INFO] nomad: cluster leadership acquired
    [INFO] raft: Disabling EnableSingleNode (bootstrap)
    [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
```

We can see above that client mode is disabled, and that we are
only running as the server. This means that this server will manage
state and make scheduling decisions but will not run any tasks.
Now we need some agents to run tasks!

## Starting the Clients

Similar to the server, we must first configure the clients. Either download
the configuration for `client1` and `client2` from the
[repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant), or
paste the following into `client1.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client1"

# Give the agent a unique name. Defaults to hostname
name = "client1"

# Enable the client
client {
    enabled = true

    # For demo assume we are talking to server1. For production,
    # this should be like "nomad.service.consul:4647" and a system
    # like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1
ports {
    http = 5656
}
```

Copy that file to `client2.hcl`. Change the `data_dir` to be `/tmp/client2`,
the `name` to `client2`, and the `http` port to 5657.
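
For reference, after those edits `client2.hcl` should look roughly like this:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client2"

# Give the agent a unique name. Defaults to hostname
name = "client2"

# Enable the client
client {
    enabled = true

    # For demo assume we are talking to server1. For production,
    # this should be like "nomad.service.consul:4647" and a system
    # like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1 and client1
ports {
    http = 5657
}
```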

Once you have created both `client1.hcl` and `client2.hcl`, open a tab for each
and start the agents:

```shell-session
$ sudo nomad agent -config client1.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: false
               Version: 0.7.0

==> Nomad agent started! Log data will stream in below:

    [DEBUG] client: applied fingerprints [host memory storage arch cpu]
    [DEBUG] client: available drivers [docker exec]
    [DEBUG] client: node registration complete
    ...
```

In the output we can see the agent is running in client mode only.
This agent will be available to run tasks but will not participate
in managing the cluster or making scheduling decisions.
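
Because each client uses a different HTTP port, you can point the CLI at a
specific agent with the `-address` flag. For example, this should report the
node registered by `client1` (the address assumes the port override above):

```shell-session
$ nomad node status -self -address=http://127.0.0.1:5656
```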

Using the [`node status` command](/docs/commands/node/status)
we should see both nodes in the `ready` state:

```shell-session
$ nomad node status
ID        DC   Name     Class   Drain  Eligibility  Status
fca62612  dc1  client1  <none>  false  eligible     ready
c887deef  dc1  client2  <none>  false  eligible     ready
```
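
The clients are registered; the server side can be checked in the same way with
the `nomad server members` command, whose output lists each known server, its
address, and whether it is currently the leader:

```shell-session
$ nomad server members
```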

We now have a simple three-node cluster running. The only difference
between a demo and a full production cluster is that we are running a
single server instead of three or five.

## Submit a Job

Now that we have a simple cluster, we can use it to schedule a job.
We should still have the `example.nomad` job file from before; verify
that the `count` is still set to 3.
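
The relevant piece is the `count` inside the `cache` task group, which should
still read something like the fragment below (the rest of the group is
unchanged from before):

```hcl
group "cache" {
    # Run three instances of this task group across the cluster
    count = 3

    # ... remaining task and resource configuration from example.nomad ...
}
```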

Then, use the [`job run` command](/docs/commands/job/run) to submit the job:

```shell-session
$ nomad job run example.nomad
==> Monitoring evaluation "8e0a7cf9"
    Evaluation triggered by job "example"
    Evaluation within deployment: "0917b771"
    Allocation "501154ac" created: node "c887deef", group "cache"
    Allocation "7e2b3900" created: node "fca62612", group "cache"
    Allocation "9c66fcaf" created: node "c887deef", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "8e0a7cf9" finished with status "complete"
```

We can see in the output that the scheduler assigned two of the
tasks to one of the client nodes and the remaining task to the
second client.
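
To drill into any one of these placements, the `alloc status` command accepts
an allocation ID (or unique ID prefix) from the output above, for example:

```shell-session
$ nomad alloc status 501154ac
```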

We can again use the [`status` command](/docs/commands/status) to verify:

```shell-session
$ nomad status example
ID            = example
Name          = example
Submit Date   = 07/26/17 16:34:58 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         3        0       0         0

Latest Deployment
ID          = fc49bd6c
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       3        3       0        0

Allocations
ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
501154ac  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
7e2b3900  8e0a7cf9  fca62612  cache       run      running  08/08/16 21:03:19 CDT
9c66fcaf  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
```

We can see that all our tasks have been allocated and are running.
Once we are satisfied that our job is happily running, we can tear
it down with `nomad job stop`.
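
For example, using the job name from the status output above:

```shell-session
$ nomad job stop example
```

If you would like to explore the running job in the web UI first, you can leave
it running and stop it later.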

## Next Steps

Nomad is now up and running. The cluster can be entirely managed from the command line,
but Nomad also comes with a web interface that is hosted alongside the HTTP API.
Next, we'll [visit the UI in the browser](/intro/getting-started/ui).

[server.hcl]: https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/server.hcl