---
layout: "intro"
page_title: "Running Nomad"
sidebar_current: "getting-started-running"
description: |-
  Learn about the Nomad agent, and the lifecycle of running and stopping.
---

# Running Nomad

Nomad relies on a long-running agent on every machine in the cluster.
The agent can run in either server or client mode. Each region must
have at least one server, though a cluster of three or five servers is
recommended. A single-server deployment is _**highly**_ discouraged, as data
loss is inevitable in a failure scenario.

All other agents run in client mode. A Nomad client is a very lightweight
process that registers the host machine, performs heartbeating, and runs the
tasks assigned to it by the servers. The agent must run on every node that
is part of the cluster so that the servers can assign work to those machines.

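Outside of development mode, an agent's mode is chosen through its
configuration file. As a minimal sketch (the file names, paths, and
`bootstrap_expect` value here are illustrative), a server and a client
might be configured as follows:

```hcl
# server.hcl (illustrative) -- run this agent as a server
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3  # wait for three servers before electing a leader
}
```

```hcl
# client.hcl (illustrative) -- run this agent as a client
data_dir = "/opt/nomad/data"

client {
  enabled = true
}
```

Each agent would then be started with `nomad agent -config <file>`.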
## Starting the Agent

For simplicity, we will run a single Nomad agent in development mode. This mode
is used to quickly start an agent that acts as both client and server in order
to test job configurations or prototype interactions. It should _**not**_ be
used in production, as it does not persist state.

```text
$ sudo nomad agent -dev

==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true

==> Nomad agent started! Log data will stream in below:

    [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    [INFO] client: using alloc directory /tmp/NomadClient599911093
    [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    [WARN] fingerprint.network: Ethtool not found, checking /sys/net speed file
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    [DEBUG] raft: Votes needed: 1
    [DEBUG] raft: Vote granted. Tally: 1
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    [INFO] raft: Disabling EnableSingleNode (bootstrap)
    [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
    [INFO] nomad: cluster leadership acquired
    [DEBUG] client: applied fingerprints [arch cpu host memory storage network]
    [DEBUG] client: available drivers [docker exec java]
    [DEBUG] client: node registration complete
    [DEBUG] client: updated allocations at index 1 (0 allocs)
    [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
    [DEBUG] client: state updated to ready
```

As you can see, the Nomad agent has started and has output some log
data. The log data shows that our agent is running in both client and
server mode, and has claimed leadership of the cluster. Additionally, the
local client has been registered and marked as ready.

-> **Note:** Typically any agent running in client mode must be run with
root-level privileges. Nomad makes use of operating system primitives for
resource isolation which require elevated permissions. The agent will function
as non-root, but certain task drivers will not be available.

## Cluster Nodes

If you run [`nomad node status`](/docs/commands/node/status.html) in another
terminal, you can see the registered nodes of the Nomad cluster:

```text
$ nomad node status
ID        DC   Name   Class   Drain  Eligibility  Status
171a583b  dc1  nomad  <none>  false  eligible     ready
```

The output shows our node ID (a randomly generated UUID), its datacenter,
node name, node class, drain mode, scheduling eligibility, and current status.
We can see that our node is in the ready state, and that task draining is
currently off.

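If you want more detail about the local node, `node status` also accepts
flags; for example, combining `-self` and `-verbose` prints the full node ID
along with the attributes fingerprinted at startup:

```text
$ nomad node status -self -verbose
```
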
The agent is also running in server mode, which means it is part of
the [gossip protocol](/docs/internals/gossip.html) used to connect all
the server instances together. We can view the members of the gossip
ring using the [`server members`](/docs/commands/server/members.html) command:

```text
$ nomad server members
Name          Address    Port  Status  Leader  Protocol  Build  Datacenter  Region
nomad.global  127.0.0.1  4648  alive   true    2         0.7.0  dc1         global
```

The output shows our own agent, the address it is running on, its
health state, some version information, and its datacenter and region.
Additional metadata can be viewed by providing the `-detailed` flag.

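For example, to inspect that extra per-server metadata:

```text
$ nomad server members -detailed
```
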
## <a name="stopping"></a>Stopping the Agent

You can use `Ctrl-C` (the interrupt signal) to halt the agent.
By default, all signals cause the agent to forcefully shut down.
The agent [can be configured](/docs/configuration/index.html#leave_on_terminate)
to gracefully leave on either the interrupt or terminate signal.

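Graceful leave on interrupt can be enabled in the agent's configuration file.
A minimal sketch (the file path is illustrative):

```hcl
# /etc/nomad.d/agent.hcl (illustrative path)
# Leave the cluster gracefully instead of forcefully shutting down
# when the agent receives an interrupt signal (Ctrl-C).
leave_on_interrupt = true
```
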
After interrupting the agent, you should see it leave the cluster
and shut down:

```text
^C==> Caught signal: interrupt
    [DEBUG] http: Shutting down http server
    [INFO] agent: requesting shutdown
    [INFO] client: shutting down
    [INFO] nomad: shutting down server
    [WARN] serf: Shutdown without a Leave
    [INFO] agent: shutdown complete
```

By gracefully leaving, Nomad clients update their status to prevent
further tasks from being scheduled and begin migrating any tasks that are
already assigned. Nomad servers notify their peers that they intend to leave.
When a server leaves, replication to that server stops. If a server fails,
replication continues to be attempted until the node recovers. Nomad will
automatically try to reconnect to _failed_ nodes, allowing it to recover from
certain network conditions, while _left_ nodes are no longer contacted.

If an agent is operating as a server,
[`leave_on_terminate`](/docs/configuration/index.html#leave_on_terminate) should
only be set if the server will never rejoin the cluster again. The default value
of `false` for `leave_on_terminate` and `leave_on_interrupt` works well for most
scenarios. If Nomad servers are part of an auto-scaling group where new servers
are brought up to replace failed ones, using graceful leave avoids a potential
availability outage affecting the
[consensus protocol](/docs/internals/consensus.html). As of Nomad 0.8, Nomad
includes Autopilot, which automatically removes failed or dead servers; this
allows the operator to skip setting `leave_on_terminate`.

If a server does forcefully exit and will not be returning to service, the
[`server force-leave` command](/docs/commands/server/force-leave.html) should
be used to transition the server from a _failed_ to a _left_ state.

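For example, using the server name reported by `nomad server members`:

```text
$ nomad server force-leave nomad.global
```
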
## Next Steps

If you shut down the development Nomad agent as instructed above, make sure it
is up and running again, and let's try to [run a job](jobs.html)!