
---
order: 5
---

# Running in production

## Database

By default, Tendermint uses the `syndtr/goleveldb` package for its in-process
key-value database. Unfortunately, this implementation of LevelDB seems to suffer under heavy load (see
[#226](https://github.com/syndtr/goleveldb/issues/226)). It may be best to
install the real C implementation of LevelDB and compile Tendermint to use
it via `make build TENDERMINT_BUILD_OPTIONS=cleveldb`. See the [install instructions](../introduction/install.md) for details.
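After compiling with cleveldb support, Tendermint also has to be told to use that backend. A minimal config sketch, assuming the standard `config.toml` layout (verify the key names against the config file your version generates):

```toml
# config.toml - select the key-value database backend.
# "goleveldb" is the default; "cleveldb" requires a binary built with
# TENDERMINT_BUILD_OPTIONS=cleveldb.
db_backend = "cleveldb"

# Directory holding the databases, relative to the Tendermint root.
db_dir = "data"
```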

Tendermint keeps multiple distinct databases in the `$TMROOT/data` directory:

- `blockstore.db`: Keeps the entire blockchain - stores blocks,
  block commits, and block metadata, each indexed by height. Used to sync new
  peers.
- `evidence.db`: Stores all verified evidence of misbehaviour.
- `state.db`: Stores the current blockchain state (i.e. height, validators,
  consensus params). Only grows if consensus params or validators change. Also
  used to temporarily store intermediate results during block processing.
- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result events.

By default, Tendermint will only index txs by their hash, not by their DeliverTx
result events. See [indexing transactions](../app-dev/indexing-transactions.md) for
details.

There is no current strategy for pruning the databases. Consider reducing
block production by [controlling empty blocks](../tendermint-core/using-tendermint.md#no-empty-blocks)
or by increasing the `consensus.timeout_commit` param. Note both of these are
local settings and not enforced by the consensus.

We're working on [state
syncing](https://github.com/tendermint/tendermint/issues/828),
which will enable history to be thrown away
and recent application state to be directly synced. We'll need to develop solutions
for archival nodes that allow queries on historical transactions and states.
The Cosmos project has had much success just dumping the latest state of a
blockchain to disk and starting a new chain from that state.

## Logging

The default logging level (`main:info,state:info,*:error`) should suffice for
normal operation. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure the `log_level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
you're trying to debug Tendermint or are asked to provide logs with the debug
logging level, you can do so by running Tendermint with
`--log_level="*:debug"`.
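The same setting can be made permanent in `config.toml` instead of passing it on the command line. A sketch, assuming the standard config layout:

```toml
# config.toml - per-module log levels, comma-separated.
# Modules not listed fall through to the catch-all `*` level.
log_level = "main:info,state:info,*:error"

# For debugging sessions, raise everything to debug instead:
# log_level = "*:debug"
```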

## Write Ahead Logs (WAL)

Tendermint uses write-ahead logs for the consensus (`cs.wal`) and the mempool
(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated.

### Consensus WAL

The `consensus.wal` is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since Tendermint validators are expected to never sign conflicting votes, the
WAL ensures we can always recover deterministically to the latest state of the consensus without
using the network or re-signing any consensus messages.

If your `consensus.wal` is corrupted, see [below](#wal-corruption).

### Mempool WAL

The `mempool.wal` logs all incoming txs before running CheckTx, but is
otherwise not used in any programmatic way. It's just a kind of manual
safeguard. Note the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their txs by subscribing over websockets,
polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be
resent from the mempool WAL manually.

For the above reasons, the `mempool.wal` is disabled by default. To enable it, set
`mempool.wal_dir` to where you want the WAL to be located (e.g.
`data/mempool.wal`).
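In `config.toml` this looks like the following (a sketch; the path is interpreted relative to the Tendermint root directory unless it is absolute):

```toml
# config.toml - enable the mempool WAL by giving it a directory.
# An empty value (the default) leaves the WAL disabled.
[mempool]
wal_dir = "data/mempool.wal"
```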

## DOS Exposure and Mitigation

Validators are supposed to set up a [Sentry Node
Architecture](https://blog.cosmos.network/tendermint-explained-bringing-bft-based-pos-to-the-public-blockchain-domain-f22e274a0fdb)
to prevent denial-of-service attacks. You can read more about it
[here](../interviews/tendermint-bft.md).

### P2P

The core of the Tendermint peer-to-peer system is `MConnection`. Each
connection has a `MaxPacketMsgPayloadSize`, which is the maximum packet
size, and bounded send & receive queues. One can impose restrictions on the
send & receive rate per connection (`SendRate`, `RecvRate`).

### RPC

Endpoints returning multiple entries are limited by default to return 30
elements (100 max). See the [RPC Documentation](https://docs.tendermint.com/master/rpc/)
for more information.

Rate-limiting and authentication are other key aspects that help protect
against DOS attacks. While we may implement these features in the future,
for now, validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/middlewares/ratelimit/)
to achieve the same things.
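As an illustration, a minimal NGINX rate-limiting sketch in front of the RPC port (the zone name, the rates, and the `26657` RPC port are assumptions to adapt to your deployment):

```nginx
# Allow each client IP roughly 10 requests/second against the RPC,
# with a small burst buffer; excess requests are rejected with HTTP 503.
limit_req_zone $binary_remote_addr zone=tmrpc:10m rate=10r/s;

server {
    listen 80;

    location / {
        limit_req zone=tmrpc burst=20 nodelay;
        proxy_pass http://127.0.0.1:26657;
    }
}
```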

## Debugging Tendermint

If you ever have to debug Tendermint, the first thing you should probably do is
check out the logs. See [How to read logs](./how-to-read-logs.md), where we
explain what certain log statements mean.

If, after skimming through the logs, things are still not clear, the next thing
to try is querying the `/status` RPC endpoint. It provides the necessary info:
whether the node is syncing or not, what height it is on, etc.

```sh
curl http(s)://{ip}:{rpcPort}/status
```

`/dump_consensus_state` will give you a detailed overview of the consensus
state (proposer, latest validators, peer states). From it, you should be able
to figure out why, for example, the network has halted.

```sh
curl http(s)://{ip}:{rpcPort}/dump_consensus_state
```

There is a reduced version of this endpoint - `/consensus_state`, which returns
just the votes seen at the current height.

If, after consulting the logs and the above endpoints, you still have no idea
what's happening, consider using the `tendermint debug kill` sub-command. This
command will scrape all the available info and kill the process. See
[Debugging](../tools/debugging.md) for the exact format.

You can inspect the resulting archive yourself or create an issue on
[Github](https://github.com/tendermint/tendermint). Before opening an issue,
however, be sure to check if an [existing
issue](https://github.com/tendermint/tendermint/issues) already covers it.

## Monitoring Tendermint

Each Tendermint instance has a standard `/health` RPC endpoint, which responds
with 200 (OK) if everything is fine, and with 500 (or no response) if something is
wrong.

Other useful endpoints include the previously mentioned `/status`, as well as
`/net_info` and `/validators`.

Tendermint can also report and serve Prometheus metrics. See
[Metrics](./metrics.md).
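Prometheus metrics are off by default; a `config.toml` sketch to enable them (key names as in the stock config file, so verify against the one your version generates):

```toml
# config.toml - instrumentation settings.
[instrumentation]
# Expose metrics at /metrics on the listen address below.
prometheus = true
prometheus_listen_addr = ":26660"
```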

The `tendermint debug dump` sub-command can be used to periodically dump useful
information into an archive. See [Debugging](../tools/debugging.md) for more
information.

## What happens when my app dies?

You are supposed to run Tendermint under a [process
supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like
systemd or runit). It will ensure Tendermint is always running (despite
possible errors).

Getting back to the original question, if your application dies,
Tendermint will panic. After a process supervisor restarts your
application, Tendermint should be able to reconnect successfully. The
order in which the processes are restarted does not matter.
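For example, a minimal systemd unit sketch (the binary path, user, and home directory are assumptions to adapt to your setup):

```ini
# /etc/systemd/system/tendermint.service
[Unit]
Description=Tendermint node
After=network-online.target

[Service]
User=tendermint
ExecStart=/usr/local/bin/tendermint node --home /home/tendermint/.tendermint
# Always restart on failure, so the node recovers if it panics.
Restart=on-failure
RestartSec=3
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```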

## Signal handling

We catch SIGINT and SIGTERM and try to clean up nicely. For other
signals we use the default behaviour in Go: [Default behavior of signals
in Go
programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs).

## Corruption

**NOTE:** Make sure you have a backup of the Tendermint data directory.

### Possible causes

Remember that most corruption is caused by hardware issues:

- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power loss
- Defective RAM
- Defective or overheating CPU(s)

Other causes can be:

- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error (e.g., directly modifying Tendermint data-directory contents)

(Source: https://wiki.postgresql.org/wiki/Corruption)

### WAL Corruption

If the consensus WAL is corrupted at the latest height and you are trying to start
Tendermint, replay will fail with a panic.

Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:

1. Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2. Try to repair the WAL file manually:

   1. Create a backup of the corrupted WAL file:

      ```sh
      cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup
      ```

   2. Use `./scripts/wal2json` to create a human-readable version:

      ```sh
      ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
      ```

   3. Search for a "CORRUPTED MESSAGE" line.
   4. By looking at the message before the corrupted one, the message after it,
      and the logs, try to rebuild the message. If the subsequent
      messages are marked as corrupted too (this may happen if the length header
      got corrupted or some writes did not make it to the WAL, i.e. truncation),
      then remove all the lines starting from the corrupted one and restart
      Tendermint.

      ```sh
      $EDITOR /tmp/corrupted_wal
      ```

   5. After editing, convert this file back into binary form by running:

      ```sh
      ./scripts/json2wal/json2wal /tmp/corrupted_wal "$TMHOME/data/cs.wal/wal"
      ```

## Hardware

### Processor and Memory

While actual specs vary depending on the load and validator count,
minimal requirements are:

- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU

SSD disks are preferable for applications with high transaction
throughput.

Recommended:

- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU

While for now Tendermint stores all the history, and so may require
significant disk space over time, we are planning to implement state
syncing (see
[this issue](https://github.com/tendermint/tendermint/issues/828)), after
which storing all the past blocks will not be necessary.

### Operating Systems

Tendermint can be compiled for a wide range of operating systems thanks
to the Go language (the list of \$OS/\$ARCH pairs can be found
[here](https://golang.org/doc/install/source#environment)).

While we do not favor any operating system, more secure and stable Linux
server distributions (like CentOS) should be preferred over desktop
operating systems (like Mac OS).

### Miscellaneous

NOTE: if you are going to use Tendermint in a public domain, make sure
you read the [hardware recommendations](https://cosmos.network/validators) for a validator in the
Cosmos network.

## Configuration parameters

- `p2p.flush_throttle_timeout`
- `p2p.max_packet_msg_payload_size`
- `p2p.send_rate`
- `p2p.recv_rate`

If you are going to use Tendermint in a private domain and you have a
private high-speed network among your peers, it makes sense to lower the
flush throttle timeout and increase the other params.

```toml
[p2p]

send_rate=20000000 # 20MB/s
recv_rate=20000000 # 20MB/s
flush_throttle_timeout=10
max_packet_msg_payload_size=10240 # 10KB
```

- `mempool.recheck`

After every block, Tendermint rechecks every transaction left in the
mempool, because transactions committed in that block may have changed the
application state, invalidating some of the remaining transactions.
If that does not apply to your application, you can disable rechecking by
setting `mempool.recheck=false`.

- `mempool.broadcast`

Setting this to false will stop the mempool from relaying transactions
to other peers until they are included in a block. It means only the
peer you send the tx to will see it until it is included in a block.

- `consensus.skip_timeout_commit`

We want `skip_timeout_commit=false` when there is economics on the line
because proposers should wait for more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. the [Cosmos
Hub](https://cosmos.network/intro/hub)), while for enterprise
applications, setting it to true is not a problem.

- `consensus.peer_gossip_sleep_duration`

You can try to reduce the time your node sleeps before checking if
there's something to send to its peers.

- `consensus.timeout_commit`

You can also try lowering `timeout_commit` (the time we sleep before
proposing the next block).

- `p2p.addr_book_strict`

By default, Tendermint checks whether a peer's address is routable before
saving it to the address book. The address is considered routable if the IP
is [valid and within allowed
ranges](https://github.com/tendermint/tendermint/blob/27bd1deabe4ba6a2d9b463b8f3e3f1e31b993e61/p2p/netaddress.go#L209).

This may not be the case for private or local networks, where your IP range is usually
strictly limited and private. In that case, you need to set `addr_book_strict`
to `false` (turn it off).

- `rpc.max_open_connections`

By default, the number of simultaneous connections is limited because most OSes
give you a limited number of file descriptors.

If you want to accept a greater number of connections, you will need to increase
these limits.

[Sysctls to tune the system to be able to open more connections](https://github.com/satori-com/tcpkali/blob/master/doc/tcpkali.man.md#sysctls-to-tune-the-system-to-be-able-to-open-more-connections)

...for N connections, such as 50k:

```
kern.maxfiles=10000+2*N         # BSD
kern.maxfilesperproc=100+2*N    # BSD
kern.ipc.maxsockets=10000+2*N   # BSD
fs.file-max=10000+2*N           # Linux
net.ipv4.tcp_max_orphans=N      # Linux

# For load-generating clients.
net.ipv4.ip_local_port_range="10000  65535"  # Linux.
net.inet.ip.portrange.first=10000  # BSD/Mac.
net.inet.ip.portrange.last=65535   # (Enough for N < 55535)
net.ipv4.tcp_tw_reuse=1         # Linux
net.inet.tcp.maxtcptw=2*N       # BSD

# If using netfilter on Linux:
net.netfilter.nf_conntrack_max=N
echo $((N/8)) > /sys/module/nf_conntrack/parameters/hashsize
```
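The values above are formulas, not literal settings. A small shell sketch that expands the Linux ones for a concrete N (here 50k), producing lines you can feed to `sysctl -w` or paste into `/etc/sysctl.conf`:

```sh
#!/bin/sh
# Expand the Linux sysctl formulas for N=50000 connections.
N=50000

echo "fs.file-max=$((10000 + 2 * N))"
echo "net.ipv4.tcp_max_orphans=$N"
echo "net.netfilter.nf_conntrack_max=$N"
echo "nf_conntrack hashsize=$((N / 8))"
```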

A similar option exists for limiting the number of gRPC connections -
`rpc.grpc_max_open_connections`.