# Upgrading your components

*Audience: network administrators, node administrators*

For information about special considerations for the latest release of Fabric, check out [Upgrading to the latest release of Fabric](./upgrade_to_newest_version.html).

This topic will only cover the process for upgrading components. For information about how to edit a channel to change the capability level of your channels, check out [Updating a channel capability](./updating_capabilities.html).

Note: when we use the term “upgrade” in Hyperledger Fabric, we’re referring to changing the version of a component (for example, going from one version of a binary to the next version). The term “update,” on the other hand, refers not to versions but to configuration changes, such as updating a channel configuration or a deployment script. Because, technically speaking, there is no data migration in Fabric, we will not use the terms "migration" or "migrate" here.

## Overview

At a high level, upgrading the binary level of your nodes is a two-step process:

1. Back up the ledger and MSPs.
2. Upgrade binaries to the latest version.

If you own both ordering nodes and peers, it is a best practice to upgrade the ordering nodes first. If a peer falls behind or is temporarily unable to process certain transactions, it can always catch up. If enough ordering nodes go down, by comparison, a network can effectively cease to function.

This topic presumes that these steps will be performed using Docker CLI commands. If you are utilizing a different deployment method (Rancher, Kubernetes, OpenShift, etc.), consult their documentation on how to use their CLI.

For native deployments, note that you will also need to update the YAML configuration file for the nodes (for example, the `orderer.yaml` file) with the one from the release artifacts.

To do this, back up the `orderer.yaml` or `core.yaml` file (for the peer) and replace it with the `orderer.yaml` or `core.yaml` file from the release artifacts. Then port any modified variables from the backed-up `orderer.yaml` or `core.yaml` to the new one. Using a utility like `diff` may be helpful. Note that updating the YAML file from the release rather than updating your old YAML file **is the recommended way to update your node YAML files**, as it reduces the likelihood of making errors.
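
For example, the comparison step might look like this sketch, where both paths are assumptions about where you keep the backed-up file and the new release artifacts:

```shell
# Hypothetical paths: OLD is your backed-up configuration file, NEW is
# the orderer.yaml shipped with the new release artifacts.
OLD=./backup/orderer.yaml
NEW=./fabric-release/sampleconfig/orderer.yaml
# List the differences so you can port your modified variables into the
# new file; diff exits non-zero when the files differ, so don't treat
# that as an error.
diff "$OLD" "$NEW" || true
```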

This tutorial assumes a Docker deployment where the YAML files will be baked into the images and environment variables will be used to overwrite the defaults in the configuration files.

## Environment variables for the binaries

When you deployed a peer or an ordering node, you had to set a number of environment variables relevant to its configuration. A best practice is to create a file for these environment variables, give it a name relevant to the node being deployed, and save it somewhere on your local file system. That way you can be sure that when upgrading the peer or ordering node you are using the same variables you set when creating it.

Here's a list of some of the **peer** environment variables (with sample values --- as you can see from the addresses, these environment variables are for a network deployed locally) that might be listed in the file. Note that you may or may not need to set all of these environment variables:

```
CORE_PEER_TLS_ENABLED=true
CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false
CORE_PEER_PROFILE_ENABLED=true
CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
CORE_PEER_ID=peer0.org1.example.com
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
CORE_PEER_LISTENADDRESS=0.0.0.0:7051
CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
CORE_PEER_LOCALMSPID=Org1MSP
```

Here are some **ordering node** variables (again, these are sample values) that might be listed in the environment variable file for a node. Again, you may or may not need to set all of these environment variables:

```
ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
ORDERER_GENERAL_GENESISMETHOD=file
ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
ORDERER_GENERAL_LOCALMSPID=OrdererMSP
ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
ORDERER_GENERAL_TLS_ENABLED=true
ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
```

However you choose to set your environment variables, note that they will have to be set for each node you want to upgrade.

## Ledger backup and restore

While we will demonstrate the process for backing up ledger data in this tutorial, it is not strictly required to back up the ledger data of a peer or an ordering node (assuming the node is part of a larger group of nodes in an ordering service). This is because, even in the worst case of catastrophic failure of a peer (such as a disk failure), the peer can be brought up with no ledger at all. You can then have the peer re-join the desired channels, and as a result, the peer will automatically create a ledger for each of the channels and will start receiving blocks via the regular block transfer mechanism from either the ordering service or the other peers in the channel. As the peer processes blocks, it will also build up its state database.

However, backing up ledger data enables the restoration of a peer without the time and computational costs associated with bootstrapping from the genesis block and reprocessing all transactions, a process that can take hours (depending on the size of the ledger). In addition, ledger data backups may help to expedite the addition of a new peer, which can be achieved by backing up the ledger data from one peer and starting the new peer with the backed-up ledger data.

This tutorial presumes that the file path to the ledger data has not been changed from the default value of `/var/hyperledger/production/` (for peers) or `/var/hyperledger/production/orderer` (for ordering nodes). If this location has been changed for your nodes, enter the path to the data on your ledgers in the commands below.

Note that there will be data for both the ledger and chaincodes at this file location. While it is a best practice to back up both, it is possible to skip the `stateLeveldb`, `historyLeveldb`, and `chains/index` folders at `/var/hyperledger/production/ledgersData`. While skipping these folders reduces the storage needed for the backup, peer recovery from the backed-up data may take more time, as these ledger artifacts will be re-constructed when the peer starts.
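
As a sketch, a reduced backup of this kind could be captured with GNU `tar`, excluding the rebuildable folders (the archive name is an assumption; the data path is the default mentioned above):

```shell
# Default peer data location -- adjust if you changed the peer's
# fileSystemPath in core.yaml.
PROD_DIR=/var/hyperledger/production
BACKUP_FILE=./peer-ledger-backup.tar.gz
# Exclude the artifacts the peer can re-construct at startup; this
# shrinks the backup at the cost of a longer recovery.
tar --exclude='ledgersData/stateLeveldb' \
    --exclude='ledgersData/historyLeveldb' \
    --exclude='ledgersData/chains/index' \
    -czf "$BACKUP_FILE" -C "$PROD_DIR" . || true
```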

If you are using CouchDB as the state database, there will be no `stateLeveldb` directory, as the state database data is stored within CouchDB instead. Similarly, if the peer starts up and finds that the CouchDB databases are missing or at a lower block height (because an older CouchDB backup was used), the state database will automatically be re-constructed to catch up to the current block height. Therefore, if you back up the peer ledger data and the CouchDB data separately, ensure that the CouchDB backup is always older than the peer backup.

## Upgrade ordering nodes

Orderer containers should be upgraded in a rolling fashion (one at a time). At a high level, the ordering node upgrade process goes as follows:

1. Stop the ordering node.
2. Back up the ordering node's ledger and MSP.
3. Remove the ordering node container.
4. Launch a new ordering node container using the relevant image tag.

Repeat this process for each node in your ordering service until the entire ordering service has been upgraded.

### Set command environment variables

Export the following environment variables before attempting to upgrade your ordering nodes.

* `ORDERER_CONTAINER`: the name of your ordering node container. Note that you will need to export this variable for each node when upgrading it.
* `LEDGERS_BACKUP`: the place in your local filesystem where you want to store the ledger being backed up. As you will see below, each node being backed up will have its own subfolder containing its ledger. You will need to create this folder.
* `IMAGE_TAG`: the Fabric version you are upgrading to. For example, `2.0`.

Note that you will have to set an **image tag** to ensure that the node you are starting is using the correct images. The process you use to set the tag will depend on your deployment method.
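
For example, the exports for one node might look like the following (sample values only --- substitute your own container name, backup folder, and target version):

```shell
# Sample values only -- substitute the name of your ordering node
# container, your preferred backup folder, and your target Fabric version.
export ORDERER_CONTAINER=orderer.example.com
export LEDGERS_BACKUP=ledgers-backup
export IMAGE_TAG=2.0
# Create the backup folder if it does not already exist.
mkdir -p ./$LEDGERS_BACKUP
```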

### Upgrade containers

Let’s begin the upgrade process by **bringing down the orderer**:

```
docker stop $ORDERER_CONTAINER
```

Once the orderer is down, you'll want to **back up its ledger and MSP**:

```
docker cp $ORDERER_CONTAINER:/var/hyperledger/production/orderer/ ./$LEDGERS_BACKUP/$ORDERER_CONTAINER
```

Then remove the ordering node container itself (since we will be giving our new container the same name as our old one):

```
docker rm -f $ORDERER_CONTAINER
```

Then you can launch the new ordering node container by issuing the following (note that Docker bind mounts require an absolute path, hence the `$(pwd)`):

```
docker run -d -v $(pwd)/$LEDGERS_BACKUP/$ORDERER_CONTAINER/:/var/hyperledger/production/orderer/ \
            -v /opt/msp/:/etc/hyperledger/fabric/msp/ \
            --env-file ./env<name of node>.list \
            --name $ORDERER_CONTAINER \
            hyperledger/fabric-orderer:$IMAGE_TAG orderer
```

Once all of the ordering nodes have come up, you can move on to upgrading your peers.

## Upgrade the peers

Peers should, like the ordering nodes, be upgraded in a rolling fashion (one at a time). As mentioned during the ordering node upgrade, ordering nodes and peers may be upgraded in parallel, but for the purposes of this tutorial we’ve separated the processes out. At a high level, we will perform the following steps:

1. Stop the peer.
2. Back up the peer’s ledger and MSP.
3. Remove chaincode containers and images.
4. Remove the peer container.
5. Launch a new peer container using the relevant image tag.

### Set command environment variables

Export the following environment variables before attempting to upgrade your peers.

* `PEER_CONTAINER`: the name of your peer container. Note that you will need to set this variable for each node.
* `LEDGERS_BACKUP`: the place in your local filesystem where you want to store the ledger being backed up. As you will see below, each node being backed up will have its own subfolder containing its ledger. You will need to create this folder.
* `IMAGE_TAG`: the Fabric version you are upgrading to. For example, `2.0`.

Note that you will have to set an **image tag** to ensure that the node you are starting is using the correct images. The process you use to set the tag will depend on your deployment method.

Repeat this process for each of your peers until every node has been upgraded.
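
As with the ordering nodes, these can be set with a few exports (sample values shown --- substitute your own):

```shell
# Sample values only -- substitute your own peer container name,
# backup folder, and target Fabric version.
export PEER_CONTAINER=peer0.org1.example.com
export LEDGERS_BACKUP=ledgers-backup
export IMAGE_TAG=2.0
# Create the backup folder if it does not already exist.
mkdir -p ./$LEDGERS_BACKUP
```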

### Upgrade containers

Let’s **bring down the first peer** with the following command:

```
docker stop $PEER_CONTAINER
```

We can then **back up the peer’s ledger and MSP**:

```
docker cp $PEER_CONTAINER:/var/hyperledger/production ./$LEDGERS_BACKUP/$PEER_CONTAINER
```

With the peer stopped and the ledger backed up, **remove the peer chaincode containers**:

```
CC_CONTAINERS=$(docker ps | grep dev-$PEER_CONTAINER | awk '{print $1}')
if [ -n "$CC_CONTAINERS" ] ; then docker rm -f $CC_CONTAINERS ; fi
```

And the peer chaincode images:

```
CC_IMAGES=$(docker images | grep dev-$PEER_CONTAINER | awk '{print $1}')
if [ -n "$CC_IMAGES" ] ; then docker rmi -f $CC_IMAGES ; fi
```

Then remove the peer container itself (since we will be giving our new container the same name as our old one):

```
docker rm -f $PEER_CONTAINER
```

Then you can launch the new peer container by issuing the following (note that Docker bind mounts require an absolute path, hence the `$(pwd)`):

```
docker run -d -v $(pwd)/$LEDGERS_BACKUP/$PEER_CONTAINER/:/var/hyperledger/production/ \
            -v /opt/msp/:/etc/hyperledger/fabric/msp/ \
            --env-file ./env<name of node>.list \
            --name $PEER_CONTAINER \
            hyperledger/fabric-peer:$IMAGE_TAG peer node start
```

You do not need to relaunch the chaincode container. When the peer gets a request for a chaincode (invoke or query), it first checks whether it has a copy of that chaincode running. If so, it uses it. Otherwise, as in this case, the peer launches the chaincode (rebuilding the image if required).

### Verify peer upgrade completion

It's a best practice to confirm that the upgrade has completed properly with a chaincode invoke. Note that it should be possible to verify that a single peer has been successfully updated by querying one of the ledgers hosted on the peer. If you want to verify that multiple peers have been upgraded, and are updating your chaincode as part of the upgrade process, you should wait until peers from enough organizations to satisfy the endorsement policy have been upgraded.

Before you attempt this, you may want to upgrade peers from enough organizations to satisfy your endorsement policy. However, this is only mandatory if you are updating your chaincode as part of the upgrade process. If you are not updating your chaincode as part of the upgrade process, it is possible to get endorsements from peers running different Fabric versions.
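
For example, a single-peer check might be a simple query (the channel name, chaincode name, and arguments below are placeholders --- use ones that actually exist on your network):

```shell
# Placeholder names -- substitute a channel and chaincode that are
# actually deployed on the upgraded peer.
CHANNEL_NAME=mychannel
CC_NAME=mycc
# A successful response indicates the peer is serving ledger queries at
# the new binary version.
peer chaincode query -C $CHANNEL_NAME -n $CC_NAME -c '{"Args":["query","a"]}' || true
```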

## Upgrade your CAs

To learn how to upgrade your Fabric CA server, click over to the [CA documentation](http://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#upgrading-the-server).

## Upgrade Node SDK clients

Upgrade Fabric and Fabric CA before upgrading Node SDK clients. Fabric and Fabric CA are tested for backwards compatibility with older SDK clients. While newer SDK clients often work with older Fabric and Fabric CA releases, they may expose features that are not yet available in the older Fabric and Fabric CA releases, and are not tested for full compatibility.

Use NPM to upgrade any `Node.js` client by executing these commands in the root directory of your application:

```
npm install fabric-client@latest

npm install fabric-ca-client@latest
```

These commands install the new version of both the Fabric client and Fabric-CA client and write the new versions to `package.json`.

## Upgrading CouchDB

If you are using CouchDB as the state database, you should upgrade the peer's CouchDB at the same time the peer is being upgraded.

To upgrade CouchDB:

1. Stop CouchDB.
2. Back up the CouchDB data directory.
3. Install the latest CouchDB binaries or update deployment scripts to use a new Docker image.
4. Restart CouchDB.
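
For a Docker deployment, those four steps might look like the following sketch. The container name, data path, and image tag are all assumptions --- in particular, check which CouchDB version is recommended for your Fabric release:

```shell
# Assumed container name and backup folder -- substitute your own.
COUCHDB_CONTAINER=couchdb0
COUCHDB_BACKUP=couchdb-backup
# 1. Stop CouchDB, then 2. back up its data directory.
docker stop $COUCHDB_CONTAINER
docker cp $COUCHDB_CONTAINER:/opt/couchdb/data ./$COUCHDB_BACKUP
# 3. Remove the old container and start a new one from a newer image,
# mounting the backed-up data, which 4. restarts CouchDB.
docker rm -f $COUCHDB_CONTAINER
docker run -d -v $(pwd)/$COUCHDB_BACKUP:/opt/couchdb/data \
    --name $COUCHDB_CONTAINER couchdb:2.3
```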

## Upgrade Node chaincode shim

To move to the new version of the Node chaincode shim, a developer would need to:

1. Change the version of `fabric-shim` in their chaincode `package.json` from the old version to the new one.
2. Repackage this new chaincode package and install it on all the endorsing peers in the channel.
3. Perform an upgrade to this new chaincode. To see how to do this, check out [Peer chaincode commands](./commands/peerchaincode.html).
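
For example, the `package.json` change in step 1 might look like this hypothetical fragment (the chaincode name and the exact target version are placeholders):

```
{
  "name": "mycc",
  "version": "1.0.0",
  "dependencies": {
    "fabric-shim": "^2.0.0"
  }
}
```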

## Upgrade Chaincodes with vendored shim

For information about upgrading the Go chaincode shim specific to the v2.0 release, check out [Chaincode shim changes](./upgrade_to_newest_version.html#chaincode-shim-changes).

A number of third party tools exist that will allow you to vendor a chaincode shim. If you used one of these tools, use the same one to update your vendored chaincode shim and re-package your chaincode.

If your chaincode vendors the shim, after updating the shim version, you must install the updated chaincode on all peers that already have it. Install it with the same name, but a newer version. Then you should execute a chaincode upgrade on each channel where this chaincode has been deployed to move to the new version.

<!--- Licensed under Creative Commons Attribution 4.0 International License
https://creativecommons.org/licenses/by/4.0/ -->