
---
title: Benchmarking etcd v2.2.0
---

## Physical Machines

GCE n1-highcpu-2 machine type

- 1x dedicated local SSD mounted as etcd data directory
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs

## etcd Cluster

3 etcd 2.2.0 members, each running on a dedicated machine.

Detailed versions:

```
etcd Version: 2.2.0
Git SHA: e4561dd
Go Version: go1.5
Go OS/Arch: linux/amd64
```

## Testing

Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection-reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
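As a rough sketch of what each benchmark round looks like, the commands below show a `hey` invocation against the etcd v2 keys API. The endpoint address and key name are placeholders, and the exact request counts are illustrative, not the values used for the published numbers:

```shell
# Illustrative only: endpoint (10.0.0.1:2379) and key (/v2/keys/foo) are placeholders.
# Write benchmark: 64-byte value, 64 concurrent clients, single key on one member.
hey -n 100000 -c 64 -m PUT \
    -T "application/x-www-form-urlencoded" \
    -d "value=$(head -c 64 /dev/zero | tr '\0' 'x')" \
    http://10.0.0.1:2379/v2/keys/foo

# Read benchmark: single-key GETs against the same member.
hey -n 100000 -c 64 http://10.0.0.1:2379/v2/keys/foo
```

For the "all servers" rows, a separate `hey` process targets each member concurrently.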

Each reported number is computed from the results of 100 benchmark rounds.

## Performance

### Single Key Read Performance

| key size in bytes | number of clients | target etcd server | average read QPS | read QPS stddev | average 90th percentile latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|------------------|-----------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 2303 | 200 | 0.49 | 0.06 |
| 64 | 64 | leader only | 15048 | 685 | 7.60 | 0.46 |
| 64 | 256 | leader only | 14508 | 434 | 29.76 | 1.05 |
| 256 | 1 | leader only | 2162 | 214 | 0.52 | 0.06 |
| 256 | 64 | leader only | 14789 | 792 | 7.69 | 0.48 |
| 256 | 256 | leader only | 14424 | 512 | 29.92 | 1.42 |
| 64 | 64 | all servers | 45752 | 2048 | 2.47 | 0.14 |
| 64 | 256 | all servers | 46592 | 1273 | 10.14 | 0.59 |
| 256 | 64 | all servers | 45332 | 1847 | 2.48 | 0.12 |
| 256 | 256 | all servers | 46485 | 1340 | 10.18 | 0.74 |

### Single Key Write Performance

| key size in bytes | number of clients | target etcd server | average write QPS | write QPS stddev | average 90th percentile latency (ms) | latency stddev |
|-------------------|-------------------|--------------------|-------------------|------------------|--------------------------------------|----------------|
| 64 | 1 | leader only | 55 | 4 | 24.51 | 13.26 |
| 64 | 64 | leader only | 2139 | 125 | 35.23 | 3.40 |
| 64 | 256 | leader only | 4581 | 581 | 70.53 | 10.22 |
| 256 | 1 | leader only | 56 | 4 | 22.37 | 4.33 |
| 256 | 64 | leader only | 2052 | 151 | 36.83 | 4.20 |
| 256 | 256 | leader only | 4442 | 560 | 71.59 | 10.03 |
| 64 | 64 | all servers | 1625 | 85 | 58.51 | 5.14 |
| 64 | 256 | all servers | 4461 | 298 | 89.47 | 36.48 |
| 256 | 64 | all servers | 1599 | 94 | 60.11 | 6.43 |
| 256 | 256 | all servers | 4315 | 193 | 88.98 | 7.01 |
    65  ## Performance Changes
    66  
    67  - Because etcd now records metrics for each API call, read QPS performance seems to see a minor decrease in most scenarios. This minimal performance impact was judged a reasonable investment for the breadth of monitoring and debugging information returned.
    68  
    69  - Write QPS to cluster leaders seems to be increased by a small margin. This is because the main loop and entry apply loops were decoupled in the etcd raft logic, eliminating several blocks between them.
    70  
    71  - Write QPS to all members seems to be increased by a significant margin, because followers now receive the latest commit index sooner, and commit proposals more quickly.