---
order: 4
---

# Running in production

## Database

By default, Tendermint uses the `syndtr/goleveldb` package for its in-process
key-value database. If you want maximal performance, it may be best to install
the real C implementation of LevelDB and compile Tendermint to use it via
`make build TENDERMINT_BUILD_OPTIONS=cleveldb`. See the [install
instructions](../introduction/install.md) for details.
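
If you do build with cleveldb support, the node also needs to be told to use that
backend in `config.toml`. A minimal sketch (the default is `"goleveldb"`):

```toml
# Backend for the key-value stores; requires a binary built with
# TENDERMINT_BUILD_OPTIONS=cleveldb.
db_backend = "cleveldb"
```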

Tendermint keeps multiple distinct databases in the `$TMROOT/data` directory:

- `blockstore.db`: Keeps the entire blockchain - stores blocks,
  block commits, and block metadata, each indexed by height. Used to sync new
  peers.
- `evidence.db`: Stores all verified evidence of misbehaviour.
- `state.db`: Stores the current blockchain state (i.e. height, validators,
  consensus params). Only grows if consensus params or validators change. Also
  used to temporarily store intermediate results during block processing.
- `tx_index.db`: Indexes txs (and their results) by tx hash and by DeliverTx result events.

By default, Tendermint will only index txs by their hash and height, not by their DeliverTx
result events. See [indexing transactions](../app-dev/indexing-transactions.md) for
details.
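
As a sketch, indexing is controlled by the `[tx_index]` section of `config.toml`:

```toml
[tx_index]
# "kv" (the default) indexes txs by hash and height;
# "null" disables the indexer entirely.
indexer = "kv"
```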

Applications can expose block pruning strategies to the node operator. Please read the documentation of your application
for more details.

Applications can use [state sync](state-sync.md) to help nodes bootstrap quickly.

## Logging

The default logging level (`log_level = "main:info,state:info,statesync:info,*:error"`) should suffice for
normal operation. Read [this
post](https://blog.cosmos.network/one-of-the-exciting-new-features-in-0-10-0-release-is-smart-log-level-flag-e2506b4ab756)
for details on how to configure the `log_level` config variable. Some of the
modules can be found [here](./how-to-read-logs.md#list-of-modules). If
you're trying to debug Tendermint or are asked to provide logs with the debug
logging level, you can do so by running Tendermint with
`--log_level="*:debug"`.

## Write Ahead Logs (WAL)

Tendermint uses write ahead logs for the consensus (`cs.wal`) and the mempool
(`mempool.wal`). Both WALs have a max size of 1GB and are automatically rotated.

### Consensus WAL

The `consensus.wal` is used to ensure we can recover from a crash at any point
in the consensus state machine.
It writes all consensus messages (timeouts, proposals, block parts, and votes)
to a single file, flushing to disk before processing messages from its own
validator. Since Tendermint validators are expected to never sign a conflicting vote, the
WAL ensures we can always recover deterministically to the latest state of consensus without
using the network or re-signing any consensus messages.

If your `consensus.wal` is corrupted, see [below](#wal-corruption).

### Mempool WAL

The `mempool.wal` logs all incoming txs before running CheckTx, but is
otherwise not used in any programmatic way. It's just a kind of manual
safeguard. Note the mempool provides no durability guarantees - a tx sent to one or many nodes
may never make it into the blockchain if those nodes crash before being able to
propose it. Clients must monitor their txs by subscribing over websockets,
polling for them, or using `/broadcast_tx_commit`. In the worst case, txs can be
resent from the mempool WAL manually.

For the above reasons, the `mempool.wal` is disabled by default. To enable it, set
`mempool.wal_dir` to where you want the WAL to be located (e.g.
`data/mempool.wal`).
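
For example, in `config.toml` (a sketch; a relative path is resolved under the
Tendermint home directory):

```toml
[mempool]
# Empty (the default) keeps the mempool WAL disabled.
wal_dir = "data/mempool.wal"
```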

## DOS Exposure and Mitigation

Validators are supposed to set up [Sentry Node
Architecture](./validators.md)
to prevent Denial-of-Service (DoS) attacks.

### P2P

The core of the Tendermint peer-to-peer system is `MConnection`. Each
connection has `MaxPacketMsgPayloadSize`, which is the maximum packet
size, and bounded send & receive queues. One can impose restrictions on
the send & receive rate per connection (`SendRate`, `RecvRate`).

The number of open P2P connections can become quite large, and hit the operating system's open
file limit (since TCP connections are considered files on UNIX-based systems). Nodes should be
given a sizable open file limit, e.g. 8192, via `ulimit -n 8192` or other deployment-specific
mechanisms.
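
As a sketch, on a typical Linux host the limit can be raised for the current shell
and persisted for the user that runs Tendermint (the `tmuser` name is only an
example):

```bash
# Raise the soft limit for the current shell session
ulimit -n 8192

# Persist the limit for the tendermint user via pam_limits
echo "tmuser soft nofile 8192" | sudo tee -a /etc/security/limits.conf
echo "tmuser hard nofile 8192" | sudo tee -a /etc/security/limits.conf
```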

### RPC

Endpoints returning multiple entries are limited by default to return 30
elements (100 max). See the [RPC Documentation](https://docs.tendermint.com/v0.34/rpc/)
for more information.
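
For example, paginated endpoints such as `/validators` accept `page` and
`per_page` query parameters (substitute your node's RPC address):

```bash
# Fetch the second page of validators, 100 entries per page (the maximum)
curl "http://localhost:26657/validators?page=2&per_page=100"
```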

Rate limiting and authentication are other key aspects that help protect
against DoS attacks. Validators are supposed to use external tools like
[NGINX](https://www.nginx.com/blog/rate-limiting-nginx/) or
[traefik](https://docs.traefik.io/middlewares/ratelimit/)
to achieve this.
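
A minimal NGINX sketch, assuming the RPC listens on `127.0.0.1:26657` and you want
to cap each client IP at roughly 10 requests per second:

```nginx
# Shared zone keyed by client IP, allowing 10 requests/second per IP
limit_req_zone $binary_remote_addr zone=tm_rpc:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Allow short bursts of up to 20 queued requests before rejecting
        limit_req zone=tm_rpc burst=20 nodelay;
        proxy_pass http://127.0.0.1:26657;
    }
}
```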

## Debugging Tendermint

If you ever have to debug Tendermint, the first thing you should probably do is
check out the logs. See [How to read logs](./how-to-read-logs.md), where we
explain what certain log statements mean.

If, after skimming through the logs, things are still not clear, the next thing
to try is querying the `/status` RPC endpoint. It provides the necessary info:
whether the node is syncing or not, what height it is on, etc.

```bash
curl http(s)://{ip}:{rpcPort}/status
```

`/dump_consensus_state` will give you a detailed overview of the consensus
state (proposer, latest validators, peer states). From it, you should be able
to figure out why, for example, the network had halted.

```bash
curl http(s)://{ip}:{rpcPort}/dump_consensus_state
```

There is a reduced version of this endpoint - `/consensus_state`, which returns
just the votes seen at the current height.
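
It can be queried the same way:

```bash
curl http(s)://{ip}:{rpcPort}/consensus_state
```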

If, after consulting the logs and the above endpoints, you still have no idea
what's happening, consider using the `tendermint debug kill` sub-command. This
command will scrape all the available info and kill the process. See
[Debugging](../tools/debugging.md) for the exact format.
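
As a rough sketch (see the Debugging guide for the exact arguments), the command
takes the Tendermint process ID and a destination for the resulting archive:

```bash
# <pid> and the output path are placeholders
tendermint debug kill <pid> /tmp/tendermint-debug.zip --home=$TMHOME
```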

You can inspect the resulting archive yourself or create an issue on
[GitHub](https://github.com/vipernet-xyz/tm). Before opening an issue,
however, be sure to check whether there's already an [existing
issue](https://github.com/vipernet-xyz/tm/issues).

## Monitoring Tendermint

Each Tendermint instance has a standard `/health` RPC endpoint, which responds
with 200 (OK) if everything is fine, and with 500 (or no response) if something
is wrong.
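
A simple liveness probe can therefore be as small as (a sketch, assuming the
default RPC address):

```bash
# --fail makes curl exit non-zero on HTTP errors such as 500
curl --fail --silent http://localhost:26657/health > /dev/null || echo "node is unhealthy"
```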

Other useful endpoints include the previously mentioned `/status`, as well as
`/net_info` and `/validators`.

Tendermint can also report and serve Prometheus metrics. See
[Metrics](./metrics.md).
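
Metrics are disabled by default; as a sketch, they are switched on in the
`[instrumentation]` section of `config.toml`:

```toml
[instrumentation]
# Expose Prometheus metrics
prometheus = true
# Address the Prometheus metrics server listens on
prometheus_listen_addr = ":26660"
```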

The `tendermint debug dump` sub-command can be used to periodically dump useful
information into an archive. See [Debugging](../tools/debugging.md) for more
information.

## What happens when my app dies

You are supposed to run Tendermint under a [process
supervisor](https://en.wikipedia.org/wiki/Process_supervision) (like
systemd or runit). It will ensure Tendermint is always running (despite
possible errors).
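
A minimal systemd unit sketch (the user, paths, and flags are only examples):

```ini
# /etc/systemd/system/tendermint.service
[Unit]
Description=Tendermint node
After=network-online.target

[Service]
User=tmuser
ExecStart=/usr/local/bin/tendermint node --home=/home/tmuser/.tendermint
Restart=on-failure
RestartSec=3
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target
```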

Getting back to the original question, if your application dies,
Tendermint will panic. After a process supervisor restarts your
application, Tendermint should be able to reconnect successfully. The
order of restart does not matter.

## Signal handling

We catch SIGINT and SIGTERM and try to clean up nicely. For other
signals we use the default behavior in Go: [Default behavior of signals
in Go
programs](https://golang.org/pkg/os/signal/#hdr-Default_behavior_of_signals_in_Go_programs).

## Corruption

**NOTE:** Make sure you have a backup of the Tendermint data directory.

### Possible causes

Remember that most corruption is caused by hardware issues:

- RAID controllers with faulty / worn out battery backup, and an unexpected power loss
- Hard disk drives with write-back cache enabled, and an unexpected power loss
- Cheap SSDs with insufficient power-loss protection, and an unexpected power loss
- Defective RAM
- Defective or overheating CPU(s)

Other causes can be:

- Database systems configured with fsync=off and an OS crash or power loss
- Filesystems configured to use write barriers plus a storage layer that ignores write barriers. LVM is a particular culprit.
- Tendermint bugs
- Operating system bugs
- Admin error (e.g., directly modifying Tendermint data-directory contents)

(Source: <https://wiki.postgresql.org/wiki/Corruption>)

### WAL Corruption

If the consensus WAL is corrupted at the latest height and you try to start
Tendermint, replay will fail with a panic.

Recovering from data corruption can be hard and time-consuming. Here are two approaches you can take:

1. Delete the WAL file and restart Tendermint. It will attempt to sync with other peers.
2. Try to repair the WAL file manually:

1) Create a backup of the corrupted WAL file:

    ```sh
    cp "$TMHOME/data/cs.wal/wal" /tmp/corrupted_wal_backup
    ```

2) Use `./scripts/wal2json` to create a human-readable version:

    ```sh
    ./scripts/wal2json/wal2json "$TMHOME/data/cs.wal/wal" > /tmp/corrupted_wal
    ```

3) Search for a "CORRUPTED MESSAGE" line.
4) By looking at the previous message and the message after the corrupted one,
   and at the logs, try to rebuild the message. If the subsequent
   messages are marked as corrupted too (this may happen if the length header
   got corrupted or some writes did not make it to the WAL, i.e. truncation),
   then remove all the lines starting from the corrupted one and restart
   Tendermint.

    ```sh
    $EDITOR /tmp/corrupted_wal
    ```

5) After editing, convert this file back into binary form by running:

    ```sh
    ./scripts/json2wal/json2wal /tmp/corrupted_wal $TMHOME/data/cs.wal/wal
    ```

## Hardware

### Processor and Memory

While actual specs vary depending on the load and the number of validators, the
minimal requirements are:

- 1GB RAM
- 25GB of disk space
- 1.4 GHz CPU

SSD disks are preferable for applications with high transaction throughput.

Recommended:

- 2GB RAM
- 100GB SSD
- x64 2.0 GHz 2v CPU

While Tendermint stores all of the chain history, which may require significant
disk space over time, [state sync](state-sync.md) (see also [this
issue](https://github.com/vipernet-xyz/tm/issues/828)) means storing all
the past blocks is not necessary for new nodes to catch up.

### Validator signing on 32 bit architectures (or ARM)

Both our `ed25519` and `secp256k1` implementations require constant time
`uint64` multiplication. Non-constant time crypto can leak (and has leaked)
private keys on both `ed25519` and `secp256k1`. Constant time `uint64`
multiplication is not available in hardware on 32 bit x86 platforms
([source](https://bearssl.org/ctmul.html)), so it depends on the compiler to
enforce that it is constant time. It's unclear at this point whether the Go
compiler does this correctly for all implementations.

**We do not support nor recommend running a validator on 32 bit architectures OR
the "VIA Nano 2000 Series", and the architectures in the ARM section rated
"S-".**

### Operating Systems

Tendermint can be compiled for a wide range of operating systems thanks to the Go
language (the list of \$OS/\$ARCH pairs can be found
[here](https://golang.org/doc/install/source#environment)).

While we do not favor any operating system, more secure and stable Linux server
distributions (like CentOS) should be preferred over desktop operating systems
(like macOS).

### Miscellaneous

NOTE: if you are going to use Tendermint in a public domain, make sure
you read [hardware recommendations](https://cosmos.network/validators) for a validator in the
Cosmos network.

## Configuration parameters

- `p2p.flush_throttle_timeout`
- `p2p.max_packet_msg_payload_size`
- `p2p.send_rate`
- `p2p.recv_rate`

If you are going to use Tendermint in a private domain and you have a
private high-speed network among your peers, it makes sense to lower the
flush throttle timeout and increase the other params.

```toml
[p2p]

send_rate = 20000000 # 20 MB/s
recv_rate = 20000000 # 20 MB/s
flush_throttle_timeout = "10ms"
max_packet_msg_payload_size = 10240 # 10 KB
```

- `mempool.recheck`

After every block, Tendermint rechecks every transaction left in the
mempool, because transactions committed in that block may have changed the
application state and invalidated some of the remaining transactions.
If that does not apply to your application, you can disable rechecking by
setting `mempool.recheck = false`.

- `mempool.broadcast`

Setting this to false will stop the mempool from relaying transactions
to other peers until they are included in a block. It means only the
peer you send the tx to will see it until it is included in a block.

- `consensus.skip_timeout_commit`

We want `skip_timeout_commit=false` when there is economics on the line
because proposers should wait for more votes. But if you don't
care about that and want the fastest consensus, you can skip it. It will
be kept false by default for public deployments (e.g. [Cosmos
Hub](https://cosmos.network/intro/hub)) while for enterprise
applications, setting it to true is not a problem.

- `consensus.peer_gossip_sleep_duration`

You can try to reduce the time your node sleeps before checking if
there's something to send its peers.

- `consensus.timeout_commit`

You can also try lowering `timeout_commit` (the time we sleep before
proposing the next block).

- `p2p.addr_book_strict`

By default, Tendermint checks whether a peer's address is routable before
saving it to the address book. The address is considered routable if the IP
is [valid and within allowed
ranges](https://github.com/vipernet-xyz/tm/blob/27bd1deabe4ba6a2d9b463b8f3e3f1e31b993e61/p2p/netaddress.go#L209).

This may not be the case for private or local networks, where your IP range is usually
strictly limited and private. If that's the case, you need to set `addr_book_strict`
to `false` (turn it off).
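
Putting some of the above together, here is a sketch of `config.toml` overrides
for a private, trusted network (the values are illustrative, not recommendations):

```toml
[mempool]
# Skip rechecking leftover txs after every block if your app does not need it
recheck = false

[consensus]
# Do not wait for the full timeout once all precommits are received
skip_timeout_commit = true
# Sleep less between blocks and between gossip checks
timeout_commit = "500ms"
peer_gossip_sleep_duration = "50ms"

[p2p]
# Accept non-routable (private/local) peer addresses
addr_book_strict = false
```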

- `rpc.max_open_connections`

By default, the number of simultaneous connections is limited because most
operating systems give you a limited number of file descriptors.

If you want to accept a greater number of connections, you will need to increase
these limits.

[Sysctls to tune the system to be able to open more connections](https://github.com/satori-com/tcpkali/blob/master/doc/tcpkali.man.md#sysctls-to-tune-the-system-to-be-able-to-open-more-connections)

The process file limits must also be increased, e.g. via `ulimit -n 8192`.

...for N connections, such as 50k:

```md
kern.maxfiles=10000+2*N         # BSD
kern.maxfilesperproc=100+2*N    # BSD
kern.ipc.maxsockets=10000+2*N   # BSD
fs.file-max=10000+2*N           # Linux
net.ipv4.tcp_max_orphans=N      # Linux

# For load-generating clients.
net.ipv4.ip_local_port_range="10000  65535"  # Linux.
net.inet.ip.portrange.first=10000  # BSD/Mac.
net.inet.ip.portrange.last=65535   # (Enough for N < 55535)
net.ipv4.tcp_tw_reuse=1         # Linux
net.inet.tcp.maxtcptw=2*N       # BSD

# If using netfilter on Linux:
net.netfilter.nf_conntrack_max=N
echo $((N/8)) > /sys/module/nf_conntrack/parameters/hashsize
```

A similar option exists for limiting the number of gRPC connections -
`rpc.grpc_max_open_connections`.