<!--[metadata]>
+++
title = "Manage nodes in a swarm"
description = "Manage existing nodes in a swarm"
keywords = ["guide, swarm mode, node"]
[menu.main]
identifier="manage-nodes-guide"
parent="engine_swarm"
weight=14
+++
<![end-metadata]-->

# Manage nodes in a swarm

As part of the swarm management lifecycle, you may need to view or update a node as follows:

* [list nodes in the swarm](#list-nodes)
* [inspect an individual node](#inspect-an-individual-node)
* [update a node](#update-a-node)
* [leave the swarm](#leave-the-swarm)

## List nodes

To view a list of nodes in the swarm, run `docker node ls` from a manager node:

```bash
$ docker node ls

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
46aqrk4e473hjbt745z53cr3t    node-5    Ready   Active        Reachable
61pi3d91s0w3b90ijw3deeb2q    node-4    Ready   Active        Reachable
a5b2m3oghd48m8eu391pefq5u    node-3    Ready   Active
e7p8btxeu3ioshyuj6lxiv6g0    node-2    Ready   Active
ehkv3bcimagdese79dn78otj5 *  node-1    Ready   Active        Leader
```
    36  
    37  The `AVAILABILITY` column shows whether or not the scheduler can assign tasks to
    38  the node:
    39  
    40  * `Active` means that the scheduler can assign tasks to a node.
    41  * `Pause` means the scheduler doesn't assign new tasks to the node, but existing
    42  tasks remain running.
    43  * `Drain` means the scheduler doesn't assign new tasks to the node. The
    44  scheduler shuts down any existing tasks and schedules them on an available
    45  node.
    46  
The `MANAGER STATUS` column shows node participation in the Raft consensus:

* No value indicates a worker node that does not participate in swarm
management.
* `Leader` means the node is the primary manager node that makes all swarm
management and orchestration decisions for the swarm.
* `Reachable` means the node is a manager node participating in the Raft
consensus. If the leader node becomes unavailable, the node is eligible for
election as the new leader.
* `Unavailable` means the node is a manager that is not able to communicate with
other managers. If a manager node becomes unavailable, you should either join a
new manager node to the swarm or promote a worker node to be a manager.

For more information on swarm administration, refer to the [Swarm administration guide](admin_guide.md).
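
When the swarm grows large, you can narrow the listing with the `--filter` flag. As a sketch, assuming the `role` and `name` filters are supported by this Engine version:

```bash
# Show only the manager nodes in the swarm
$ docker node ls --filter role=manager

# Show only nodes whose hostname matches a name
$ docker node ls --filter name=node-1
```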

## Inspect an individual node

You can run `docker node inspect <NODE-ID>` on a manager node to view the
details for an individual node. The output defaults to JSON format, but you can
pass the `--pretty` flag to print the results in human-readable format. For example:

```bash
$ docker node inspect self --pretty

ID:                     ehkv3bcimagdese79dn78otj5
Hostname:               node-1
Joined at:              2016-06-16 22:52:44.9910662 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
Manager Status:
 Address:               172.17.0.2:2377
 Raft Status:           Reachable
 Leader:                Yes
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  2
 Memory:                1.954 GiB
Plugins:
  Network:              overlay, host, bridge, overlay, null
  Volume:               local
Engine Version:         1.12.0-dev
```
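
When you only need a single field, you can pass a Go template with the `--format` flag instead of `--pretty`. The template paths below follow the structure of the default JSON output and may vary between Engine versions:

```bash
# Print just the availability of the current node
$ docker node inspect --format '{{ .Spec.Availability }}' self

# Print the Raft address of a manager node
$ docker node inspect --format '{{ .ManagerStatus.Addr }}' node-1
```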

## Update a node

You can modify node attributes as follows:

* [change node availability](#change-node-availability)
* [add or remove label metadata](#add-or-remove-label-metadata)
* [change a node role](#promote-or-demote-a-node)

### Change node availability

Changing node availability lets you:

* drain a manager node so that it only performs swarm management tasks and is
unavailable for task assignment.
* drain a node so you can take it down for maintenance.
* pause a node so it is unavailable to receive new tasks.
* restore an unavailable or paused node to `Active` availability.

For example, to change a manager node to `Drain` availability:

```bash
$ docker node update --availability drain node-1

node-1
```

See [list nodes](#list-nodes) for descriptions of the different availability
options.
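
A typical maintenance cycle combines these availability states: drain the node, verify its tasks have been rescheduled, perform the maintenance, then make the node schedulable again. A sketch of that flow, assuming `docker node ps` is available to list a node's tasks:

```bash
# Stop new work and move existing tasks off the node
$ docker node update --availability drain node-1

# Verify that no tasks are still running on the node
$ docker node ps node-1

# ...perform maintenance, then restore the node to Active
$ docker node update --availability active node-1
```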

### Add or remove label metadata

Node labels provide a flexible method of node organization. You can also use
node labels in service constraints. Apply constraints when you create a service
to limit the nodes where the scheduler assigns tasks for the service.

Run `docker node update --label-add` on a manager node to add label metadata to
a node. The `--label-add` flag supports either a `<key>` or a `<key>=<value>`
pair.

Pass the `--label-add` flag once for each node label you want to add:

```bash
$ docker node update --label-add foo --label-add bar=baz node-1

node-1
```

The labels you set for nodes using `docker node update` apply only to the node
entity within the swarm. Do not confuse them with the daemon labels for
[dockerd](../userguide/labels-custom-metadata.md#daemon-labels).

Refer to the `docker service create` [CLI reference](../reference/commandline/service_create.md)
for more information about service constraints.

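As an illustration of how node labels and service constraints fit together, the following sketch adds a hypothetical `datacenter=east` label, creates a service restricted to nodes carrying it, and finally removes the label with `--label-rm` (the label key, service name, and image are made up for this example):

```bash
# Label a node, then constrain a service to labeled nodes
$ docker node update --label-add datacenter=east node-3

$ docker service create \
  --name webapp \
  --constraint 'node.labels.datacenter == east' \
  nginx

# Remove the label by key when it is no longer needed
$ docker node update --label-rm datacenter node-3
```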
### Promote or demote a node

You can promote a worker node to the manager role. This is useful when a
manager node becomes unavailable or if you want to take a manager offline for
maintenance. Similarly, you can demote a manager node to the worker role.

Regardless of your reason to promote or demote a node, you should always
maintain an odd number of manager nodes in the swarm. For more information,
refer to the [Swarm administration guide](admin_guide.md).

To promote a node or set of nodes, run `docker node promote` from a manager
node:

```bash
$ docker node promote node-3 node-2

Node node-3 promoted to a manager in the swarm.
Node node-2 promoted to a manager in the swarm.
```

To demote a node or set of nodes, run `docker node demote` from a manager node:

```bash
$ docker node demote node-3 node-2

Manager node-3 demoted in the swarm.
Manager node-2 demoted in the swarm.
```

`docker node promote` and `docker node demote` are convenience commands for
`docker node update --role manager` and `docker node update --role worker`
respectively.
## Leave the swarm

Run the `docker swarm leave` command on a node to remove it from the swarm.

For example, to leave the swarm on a worker node:

```bash
$ docker swarm leave

Node left the swarm.
```

When a node leaves the swarm, the Docker Engine stops running in swarm
mode. The orchestrator no longer schedules tasks to the node.

If the node is a manager node, you will receive a warning about maintaining the
quorum. To override the warning, pass the `--force` flag. If the last manager
node leaves the swarm, the swarm becomes unavailable, requiring you to take
disaster recovery measures.
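
For instance, on a manager node you are deliberately discarding (such as a throwaway test swarm), the warning can be overridden as shown below; this is safe only when losing the manager cannot break the quorum:

```bash
# Force a manager node to leave despite the quorum warning
$ docker swarm leave --force
```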

For information about maintaining a quorum and disaster recovery, refer to the
[Swarm administration guide](admin_guide.md).

After a node leaves the swarm, you can run the `docker node rm` command on a
manager node to remove the node from the node list.

For instance:

```bash
$ docker node rm node-2

node-2
```
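
`docker node rm` succeeds only once the node has actually left the swarm, so checking its status first avoids an error. A sketch:

```bash
# Confirm the node's STATUS column shows Down before removing it
$ docker node ls

# Remove the stale entry from the node list
$ docker node rm node-2
```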

## Learn More

* [Swarm administration guide](admin_guide.md)
* [Docker Engine command line reference](../reference/commandline/index.md)
* [Swarm mode tutorial](swarm-tutorial/index.md)