
---
title: Upgrade etcd from 3.4 to 3.5
---

In the general case, upgrading from etcd 3.4 to 3.5 can be a zero-downtime, rolling upgrade:
 - one by one, stop the etcd v3.4 processes and replace them with etcd v3.5 processes
 - after running all v3.5 processes, new features in v3.5 are available to the cluster

Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.



### Upgrade checklists

**NOTE:** When [migrating from v2 with no v3 data](https://github.com/etcd-io/etcd/issues/9480), etcd server v3.2+ panics when it restores from existing snapshots but finds no v3 `ETCD_DATA_DIR/member/snap/db` file. This happens when the server was migrated from v2 with no previous v3 data. It also prevents accidental v3 data loss (e.g. the `db` file might have been moved). etcd requires that a post-v3 migration only happen with v3 data. Do not upgrade to newer v3 versions until the v3.0 server contains v3 data.

Highlighted breaking changes in 3.5:

#### Deprecated `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric

v3.4 promoted the `etcd_debugging_mvcc_db_total_size_in_bytes` Prometheus metric to `etcd_mvcc_db_total_size_in_bytes` in order to encourage etcd storage monitoring, and v3.5 completely deprecates `etcd_debugging_mvcc_db_total_size_in_bytes`.

```diff
-etcd_debugging_mvcc_db_total_size_in_bytes
+etcd_mvcc_db_total_size_in_bytes
```

Note that `etcd_debugging_*` namespace metrics have been marked as experimental. As the monitoring guide improves, more metrics will be promoted.

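Any dashboards or alert rules that still reference the deprecated name must be updated before the upgrade. A minimal sketch (the file path and rule content are hypothetical, not from the etcd docs):

```shell
# Hypothetical alert-rule fragment that still references the deprecated metric.
cat > /tmp/etcd-alerts.yml <<'EOF'
expr: etcd_debugging_mvcc_db_total_size_in_bytes > 8e9
EOF

# Rewrite the deprecated metric name to its stable replacement.
sed -i 's/etcd_debugging_mvcc_db_total_size_in_bytes/etcd_mvcc_db_total_size_in_bytes/g' /tmp/etcd-alerts.yml

cat /tmp/etcd-alerts.yml
```

Running the same substitution across all rule files and dashboard definitions avoids silent gaps in monitoring once v3.5 stops exporting the old name.
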
#### Deprecated `etcd --logger=capnslog`

v3.4 defaults to `--logger=zap` in order to support multiple log outputs and structured logging.

**`etcd --logger=capnslog` has been deprecated in v3.5**, and `--logger=zap` is now the default.

```diff
-etcd --logger=capnslog
+etcd --logger=zap --log-outputs=stderr

+# to write logs to stderr and the file a.log at the same time
+etcd --logger=zap --log-outputs=stderr,a.log
```

v3.4 added `etcd --logger=zap` support for structured logging and multiple log outputs. The main motivation is to promote automated etcd monitoring, rather than combing through server logs once things start breaking. Future development will make etcd log as little as possible, and make etcd easier to monitor with metrics and alerts.

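One practical benefit of the zap default is that log lines are JSON and can be filtered mechanically. A small sketch (the sample lines and file path are illustrative, not real server output):

```shell
# Sample zap-formatted (JSON) log lines, as emitted with --logger=zap.
cat > /tmp/etcd.log <<'EOF'
{"level":"info","ts":1526586949.09,"msg":"enabled capabilities for version"}
{"level":"warn","ts":1526586949.10,"msg":"peer became inactive"}
EOF

# Structured logs can be filtered by field, e.g. warnings only:
grep '"level":"warn"' /tmp/etcd.log
```

With capnslog's free-form text output, the same filtering required fragile pattern matching; with JSON, any log pipeline can key on the `level`, `caller`, or `msg` fields directly.
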
#### Deprecated `etcd --log-output`

v3.4 renamed [`etcd --log-output` to `--log-outputs`](https://github.com/etcd-io/etcd/pull/9624) to support multiple log outputs.

**`etcd --log-output` has been deprecated in v3.5.**

```diff
-etcd --log-output=stderr
+etcd --log-outputs=stderr
```

#### Deprecated `etcd --log-package-levels`

**The `etcd --log-package-levels` flag for `capnslog` has been deprecated.**

Now, **`etcd --logger=zap`** is the default.

```diff
-etcd --log-package-levels 'etcdmain=CRITICAL,etcdserver=DEBUG'
+etcd --logger=zap --log-outputs=stderr
```

#### Deprecated `[CLIENT-URL]/config/local/log`

**The `/config/local/log` endpoint is deprecated in v3.5, as is the `etcd --log-package-levels` flag.**

```diff
-$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
-# debug logging enabled
```

#### Changed gRPC gateway HTTP endpoints (deprecated `/v3beta`)

Before:

```bash
curl -L http://localhost:2379/v3beta/kv/put \
  -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```

After:

```bash
curl -L http://localhost:2379/v3/kv/put \
  -X POST -d '{"key": "Zm9v", "value": "YmFy"}'
```

`/v3beta` has been removed in the 3.5 release.


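The gRPC gateway expects base64-encoded key and value fields, which is where the `Zm9v`/`YmFy` strings in the request bodies above come from. A quick sketch of building such a request body:

```shell
# "foo" and "bar" base64-encode to the values used in the request body above.
KEY=$(printf 'foo' | base64)
VALUE=$(printf 'bar' | base64)
printf '{"key": "%s", "value": "%s"}\n' "$KEY" "$VALUE"
# prints: {"key": "Zm9v", "value": "YmFy"}
```

Use `printf` rather than `echo` when encoding so no trailing newline sneaks into the encoded key.
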
### Server upgrade checklists

#### Upgrade requirements

To upgrade an existing etcd deployment to 3.5, the running cluster must be 3.4 or greater. If it's before 3.4, please [upgrade to 3.4](upgrade_3_4.md) before upgrading to 3.5.

Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. Check the health of the cluster by using the `etcdctl endpoint health` command before proceeding.

#### Preparation

Always test the services relying on etcd in a staging environment before deploying the upgrade to production.

Before beginning, [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version. Please note that the `snapshot` command only backs up the v3 data. For v2 data, see [backing up v2 datastore](../v2/admin_guide.md#backing-up-the-datastore).

#### Mixed versions

While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 3.5. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.

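The "lowest common version" rule can be observed from each member's `/version` endpoint during a rolling upgrade. A sketch using hypothetical captured responses rather than live endpoints:

```shell
# Hypothetical /version responses captured from three members mid-upgrade.
responses='{"etcdserver":"3.5.0","etcdcluster":"3.4.0"}
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
{"etcdserver":"3.5.0","etcdcluster":"3.4.0"}'

# The cluster operates at the lowest member version until all are upgraded.
printf '%s\n' "$responses" |
  sed -n 's/.*"etcdserver":"\([^"]*\)".*/\1/p' |
  sort -V | head -n 1
# prints: 3.4.0
```

Note how `etcdcluster` stays at `3.4.0` in every response until the last member is upgraded, matching the lowest `etcdserver` version present.
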
#### Limitations

Note: if the cluster only has v3 data and no v2 data, it is not subject to this limitation.

If the cluster is serving a v2 data set larger than 50MB, each newly upgraded member may take up to two minutes to catch up with the existing cluster. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait two minutes between upgrading each member.

For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we'll be happy to provide advice on the procedure.

#### Downgrade

If all members have been upgraded to v3.5, the cluster will be upgraded to v3.5, and downgrading from this completed state is **not possible**. If any single member is still v3.4, however, the cluster and its operations remain "v3.4", and it is possible from this mixed cluster state to return to using a v3.4 etcd binary on all members.

Please [download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to make downgrading the cluster possible even after it has been completely upgraded.

### Upgrade procedure

This example shows how to upgrade a 3-member v3.4 etcd cluster running on a local machine.

#### Step 1: check upgrade requirements

Is the cluster healthy and running v3.4.x?

```bash
etcdctl --endpoints=localhost:2379,localhost:22379,localhost:32379 endpoint health
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 2.118638ms
localhost:22379 is healthy: successfully committed proposal: took = 3.631388ms
localhost:32379 is healthy: successfully committed proposal: took = 2.157051ms
COMMENT

curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT

curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT

curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.4.0","etcdcluster":"3.4.0"}
COMMENT
```

#### Step 2: download snapshot backup from leader

[Download the snapshot backup](../op-guide/maintenance.md#snapshot-backup) to provide a downgrade path should any problems occur.

The etcd leader is guaranteed to have the latest application data, so fetch the snapshot from the leader:

```bash
curl -sL http://localhost:2379/metrics | grep etcd_server_is_leader
<<COMMENT
# HELP etcd_server_is_leader Whether or not this member is a leader. 1 if is, 0 otherwise.
# TYPE etcd_server_is_leader gauge
etcd_server_is_leader 1
COMMENT

curl -sL http://localhost:22379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT

curl -sL http://localhost:32379/metrics | grep etcd_server_is_leader
<<COMMENT
etcd_server_is_leader 0
COMMENT

etcdctl --endpoints=localhost:2379 snapshot save backup.db
<<COMMENT
{"level":"info","ts":1526585787.148433,"caller":"snapshot/v3_snapshot.go:109","msg":"created temporary db file","path":"backup.db.part"}
{"level":"info","ts":1526585787.1485257,"caller":"snapshot/v3_snapshot.go:120","msg":"fetching snapshot","endpoint":"localhost:2379"}
{"level":"info","ts":1526585787.1519694,"caller":"snapshot/v3_snapshot.go:133","msg":"fetched snapshot","endpoint":"localhost:2379","took":0.003502721}
{"level":"info","ts":1526585787.1520295,"caller":"snapshot/v3_snapshot.go:142","msg":"saved","path":"backup.db"}
Snapshot saved at backup.db
COMMENT
```

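Since this backup is the downgrade path, it is worth recording a checksum so the file can be verified later (for example after copying it to another host). A minimal sketch; the `printf` line stands in for the real snapshot file:

```shell
# Stand-in for the snapshot written by `etcdctl snapshot save` in this sketch.
printf 'snapshot-bytes' > /tmp/backup.db

# Record a checksum alongside the backup ...
sha256sum /tmp/backup.db > /tmp/backup.db.sha256

# ... and verify it before relying on the file for a restore/downgrade.
sha256sum -c /tmp/backup.db.sha256
```

A backup that fails verification is worse than no backup, because it is only discovered at restore time.
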
#### Step 3: stop one existing etcd server

When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:

```bash
{"level":"info","ts":1526587281.2001143,"caller":"etcdserver/server.go:2249","msg":"updating cluster version","from":"3.0","to":"3.4"}
{"level":"info","ts":1526587281.2010646,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}
{"level":"info","ts":1526587281.2012327,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}
{"level":"info","ts":1526587281.2013083,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.4"}



^C{"level":"info","ts":1526587299.0717514,"caller":"osutil/interrupt_unix.go:63","msg":"received signal; shutting down","signal":"interrupt"}
{"level":"info","ts":1526587299.0718873,"caller":"embed/etcd.go:285","msg":"closing etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
{"level":"info","ts":1526587299.0722554,"caller":"etcdserver/server.go:1341","msg":"leadership transfer starting","local-member-id":"7339c4e5e833c029","current-leader-member-id":"7339c4e5e833c029","transferee-member-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.0723994,"caller":"raft/raft.go:1107","msg":"7339c4e5e833c029 [term 3] starts to transfer leadership to 729934363faa4a24"}
{"level":"info","ts":1526587299.0724802,"caller":"raft/raft.go:1113","msg":"7339c4e5e833c029 sends MsgTimeoutNow to 729934363faa4a24 immediately as 729934363faa4a24 already has up-to-date log"}
{"level":"info","ts":1526587299.0737045,"caller":"raft/raft.go:797","msg":"7339c4e5e833c029 [term: 3] received a MsgVote message with higher term from 729934363faa4a24 [term: 4]"}
{"level":"info","ts":1526587299.0737681,"caller":"raft/raft.go:656","msg":"7339c4e5e833c029 became follower at term 4"}
{"level":"info","ts":1526587299.073831,"caller":"raft/raft.go:882","msg":"7339c4e5e833c029 [logterm: 3, index: 9, vote: 0] cast MsgVote for 729934363faa4a24 [logterm: 3, index: 9] at term 4"}
{"level":"info","ts":1526587299.0738947,"caller":"raft/node.go:312","msg":"raft.node: 7339c4e5e833c029 lost leader 7339c4e5e833c029 at term 4"}
{"level":"info","ts":1526587299.0748374,"caller":"raft/node.go:306","msg":"raft.node: 7339c4e5e833c029 elected leader 729934363faa4a24 at term 4"}
{"level":"info","ts":1526587299.1726425,"caller":"etcdserver/server.go:1362","msg":"leadership transfer finished","local-member-id":"7339c4e5e833c029","old-leader-member-id":"7339c4e5e833c029","new-leader-member-id":"729934363faa4a24","took":0.100389359}
{"level":"info","ts":1526587299.1728148,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1751974,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1752589,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.177348,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1774004,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.177515,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1777067,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34636->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1778402,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"warn","ts":1526587299.1780295,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015","error":"read tcp 127.0.0.1:34634->127.0.0.1:32380: use of closed network connection"}
{"level":"info","ts":1526587299.1780987,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781602,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"b548c2511513015"}
{"level":"info","ts":1526587299.1781986,"caller":"rafthttp/peer.go:333","msg":"stopping remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1802843,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1803446,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1824749,"caller":"rafthttp/stream.go:291","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.18255,"caller":"rafthttp/stream.go:301","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.18261,"caller":"rafthttp/pipeline.go:86","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1827736,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"read tcp 127.0.0.1:51482->127.0.0.1:22380: use of closed network connection"}
{"level":"info","ts":1526587299.182845,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1830168,"caller":"rafthttp/stream.go:436","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24","error":"context canceled"}
{"level":"warn","ts":1526587299.1831107,"caller":"rafthttp/peer_status.go:65","msg":"peer became inactive","peer-id":"729934363faa4a24","error":"failed to read 729934363faa4a24 on stream Message (context canceled)"}
{"level":"info","ts":1526587299.1831737,"caller":"rafthttp/stream.go:459","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7339c4e5e833c029","remote-peer-id":"729934363faa4a24"}
{"level":"info","ts":1526587299.1832306,"caller":"rafthttp/peer.go:340","msg":"stopped remote peer","remote-peer-id":"729934363faa4a24"}
{"level":"warn","ts":1526587299.1837125,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1840093,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"b548c2511513015","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1842315,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"warn","ts":1526587299.1844475,"caller":"rafthttp/http.go:424","msg":"failed to find remote peer in cluster","local-member-id":"7339c4e5e833c029","remote-peer-id-stream-handler":"7339c4e5e833c029","remote-peer-id-from":"729934363faa4a24","cluster-id":"7dee9ba76d59ed53"}
{"level":"info","ts":1526587299.2056687,"caller":"embed/etcd.go:473","msg":"stopping serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.205819,"caller":"embed/etcd.go:480","msg":"stopped serving peer traffic","address":"127.0.0.1:2380"}
{"level":"info","ts":1526587299.2058413,"caller":"embed/etcd.go:289","msg":"closed etcd server","name":"s1","data-dir":"/tmp/etcd/s1","advertise-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://localhost:2379"]}
```

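As the `^C` in the log above shows, the member is stopped by sending it an interrupt/terminate signal; etcd traps the signal, transfers leadership if it is the leader, and exits cleanly. A generic sketch of this signal-and-wait pattern, using a stand-in process instead of a real etcd member:

```shell
# Stand-in long-running process; in practice this is the etcd member process.
sleep 30 &
pid=$!

# Graceful stop: etcd traps SIGINT/SIGTERM, transfers leadership, then exits.
kill -TERM "$pid"
wait "$pid" || status=$?
echo "stopped with status ${status:-0}"   # typically 143 (128 + SIGTERM)
```

Avoid `kill -9` (SIGKILL): it bypasses etcd's shutdown path, so no leadership transfer happens and the cluster takes an avoidable election timeout.
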
#### Step 4: restart the etcd server with the same configuration

Restart the etcd server with the same configuration but with the new etcd binary.

```diff
-etcd-old --name s1 \
+etcd-new --name s1 \
  --data-dir /tmp/etcd/s1 \
  --listen-client-urls http://localhost:2379 \
  --advertise-client-urls http://localhost:2379 \
  --listen-peer-urls http://localhost:2380 \
  --initial-advertise-peer-urls http://localhost:2380 \
  --initial-cluster s1=http://localhost:2380,s2=http://localhost:22380,s3=http://localhost:32380 \
  --initial-cluster-token tkn \
  --initial-cluster-state new
```

The new v3.5 etcd will publish its information to the cluster. At this point, the cluster still operates with the v3.4 protocol, which is the lowest common version.

> `{"level":"info","ts":1526586617.1647713,"caller":"membership/cluster.go:485","msg":"set initial cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","cluster-version":"3.0"}`

> `{"level":"info","ts":1526586617.1648536,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.0"}`

> `{"level":"info","ts":1526586617.1649303,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"7339c4e5e833c029","from":"3.0","from":"3.4"}`

> `{"level":"info","ts":1526586617.1649797,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.4"}`

> `{"level":"info","ts":1526586617.2107732,"caller":"etcdserver/server.go:1770","msg":"published local member to cluster through raft","local-member-id":"7339c4e5e833c029","local-member-attributes":"{Name:s1 ClientURLs:[http://localhost:2379]}","request-path":"/0/members/7339c4e5e833c029/attributes","cluster-id":"7dee9ba76d59ed53","publish-timeout":7}`

Verify that each member, and then the entire cluster, becomes healthy with the new v3.5 etcd binary:

```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:32379 is healthy: successfully committed proposal: took = 2.337471ms
localhost:22379 is healthy: successfully committed proposal: took = 1.130717ms
localhost:2379 is healthy: successfully committed proposal: took = 2.124843ms
COMMENT
```

Un-upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v3.5:

```
:41.942121 W | etcdserver: member 7339c4e5e833c029 has a higher version 3.5.0
:45.945154 W | etcdserver: the local etcd version 3.4.0 is not up-to-date
```

#### Step 5: repeat *step 3* and *step 4* for the rest of the members

When all members are upgraded, the cluster will report a successful upgrade to 3.5:

Member 1:

> `{"level":"info","ts":1526586949.0920913,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`
> `{"level":"info","ts":1526586949.0921566,"caller":"etcdserver/server.go:2272","msg":"cluster version is updated","cluster-version":"3.5"}`

Member 2:

> `{"level":"info","ts":1526586949.092117,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"729934363faa4a24","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0923078,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`

Member 3:

> `{"level":"info","ts":1526586949.0921423,"caller":"membership/cluster.go:473","msg":"updated cluster version","cluster-id":"7dee9ba76d59ed53","local-member-id":"b548c2511513015","from":"3.4","from":"3.5"}`
> `{"level":"info","ts":1526586949.0922918,"caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.5"}`


```bash
etcdctl endpoint health --endpoints=localhost:2379,localhost:22379,localhost:32379
<<COMMENT
localhost:2379 is healthy: successfully committed proposal: took = 492.834µs
localhost:22379 is healthy: successfully committed proposal: took = 1.015025ms
localhost:32379 is healthy: successfully committed proposal: took = 1.853077ms
COMMENT

curl http://localhost:2379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT

curl http://localhost:22379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT

curl http://localhost:32379/version
<<COMMENT
{"etcdserver":"3.5.0","etcdcluster":"3.5.0"}
COMMENT
```

[etcd-contact]: https://groups.google.com/forum/#!forum/etcd-dev