
     1  ---
     2  weight: 1
     3  ---
     4  
See how Finch works by creating and running simple benchmarks.
     6  Follow this page from top to bottom ↓
     7  
     8  ## One-time Setup
     9  
    10  Compile the `finch` binary:
    11  
    12  ```bash
    13  cd bin/finch/
go build
    15  ```
    16  
    17  _Stay in this directory._
    18  
Create a MySQL user, database, and table that Finch can use. Run the following as a MySQL root/super user.
    20  
    21  {{< hint type=tip title="Click to Copy" >}}
    22  To copy, mouse over a code block and click the button in the upper-right.
    23  {{< /hint >}}
    24  
    25  ```sql
    26  CREATE USER finch@'%' IDENTIFIED BY 'amazing';
    27  
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES
    29  ON finch.* TO finch@'%';
    30  ```
    31  
    32  ```sql
    33  CREATE DATABASE IF NOT EXISTS finch;
    34  
    35  USE finch;
    36  
    37  DROP TABLE IF EXISTS t1;
    38  
    39  CREATE TABLE t1 (
    40    id INT UNSIGNED NOT NULL PRIMARY KEY,
    41    n  INT NOT NULL,
    42    c  VARCHAR(100) NOT NULL,
    43    INDEX (n)
    44  );
    45  ```
    46  
    47  ## Benchmarks
    48  
    49  ### INSERT
    50  
    51  Copy-paste each code block into the <mark>file name</mark> given above each one.
    52  
    53  {{< columns >}}
    54  <mark>_insert-rows.sql_</mark>
    55  ```sql
    56  -- rows: 100,000
    57  INSERT INTO finch.t1 (id, n, c) VALUES (@id, @n, @c)
    58  ```
    59  <---> <!-- magic separator, between columns -->
    60  <mark>_setup.yaml_</mark>
    61  ```yaml
    62  stage:
    63    trx:
    64      - file: insert-rows.sql
    65        data:
    66          id:
    67            generator: "auto-inc"
    68          n:
    69            generator: "int"
    70          c:
    71            generator: "str-fill-az"
    72  ```
    73  
    74  {{< /columns >}}
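
Conceptually, each `@param` in the SQL file is replaced by a value from its configured data generator every time the statement runs. The sketch below is a rough illustration of that idea, not Finch's actual implementation; the generator behaviors (auto-increment ids, random ints, random a&ndash;z strings) are simplified assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// autoInc mimics the "auto-inc" generator: 1, 2, 3, ...
func autoInc() func() string {
	n := 0
	return func() string { n++; return fmt.Sprint(n) }
}

// randInt mimics the "int" generator: a random positive integer.
func randInt() string { return fmt.Sprint(rand.Intn(100000) + 1) }

// strFillAZ mimics the "str-fill-az" generator: n random chars in a..z, quoted.
func strFillAZ(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = byte('a' + rand.Intn(26))
	}
	return "'" + string(b) + "'"
}

func main() {
	stmt := "INSERT INTO finch.t1 (id, n, c) VALUES (@id, @n, @c)"
	id := autoInc()
	for i := 0; i < 3; i++ {
		fmt.Println(strings.NewReplacer(
			"@id", id(), "@n", randInt(), "@c", strFillAZ(10),
		).Replace(stmt))
	}
}
```

Running this prints three INSERT statements with ids 1, 2, 3 and random values for `n` and `c`.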
    75  
    76  Run the INSERT benchmark:
    77  
    78  ```bash
    79  ./finch setup.yaml
    80  ```
    81  
Finch should finish in about a minute, with output similar to the following (click to expand).
    83  
    84  {{< expand "Finch Output">}}
    85  ```none
    86  2023/06/24 15:58:02.966170 boot.go:59: map[CPU_CORES:8]
    87  #
    88  # setup.yaml
    89  #
    90  2023/06/24 15:58:02.969392 factory.go:51: finch:...@tcp(127.0.0.1:3306)/?parseTime=true
    91  2023/06/24 15:58:02.972926 stage.go:62: Connected to finch:...@tcp(127.0.0.1:3306)/?parseTime=true
    92  2023/06/24 15:58:02.974299 stage.go:136: [setup.yaml] Running (no runtime limit)
    93  2023/06/24 15:58:02.974327 stage.go:152: [setup.yaml] Execution group 1, client group 1, runnning 1 clients
    94  2023/06/24 15:58:05.980739 data.go:99: 5,001 / 100,000 = 5.0% in 3s: 1,664 rows/s (ETA 57s)
    95  2023/06/24 15:58:08.948397 data.go:99: 10,002 / 100,000 = 10.0% in 3s: 1,685 rows/s (ETA 53s)
    96  2023/06/24 15:58:12.006452 data.go:99: 15,003 / 100,000 = 15.0% in 3s: 1,635 rows/s (ETA 51s)
    97  2023/06/24 15:58:15.203869 data.go:99: 20,004 / 100,000 = 20.0% in 3s: 1,564 rows/s (ETA 51s)
    98  2023/06/24 15:58:18.389031 data.go:99: 25,005 / 100,000 = 25.0% in 3s: 1,570 rows/s (ETA 47s)
    99  2023/06/24 15:58:21.798925 data.go:99: 30,006 / 100,000 = 30.0% in 3s: 1,466 rows/s (ETA 47s)
   100  2023/06/24 15:58:25.538797 data.go:99: 35,007 / 100,000 = 35.0% in 4s: 1,337 rows/s (ETA 48s)
   101  2023/06/24 15:58:28.754100 data.go:99: 40,008 / 100,000 = 40.0% in 3s: 1,555 rows/s (ETA 38s)
   102  2023/06/24 15:58:32.253534 data.go:99: 45,009 / 100,000 = 45.0% in 3s: 1,429 rows/s (ETA 38s)
   103  2023/06/24 15:58:35.352258 data.go:99: 50,010 / 100,000 = 50.0% in 3s: 1,613 rows/s (ETA 30s)
   104  2023/06/24 15:58:38.187871 data.go:99: 55,010 / 100,000 = 55.0% in 3s: 1,763 rows/s (ETA 25s)
   105  2023/06/24 15:58:41.115210 data.go:99: 60,011 / 100,000 = 60.0% in 3s: 1,708 rows/s (ETA 23s)
   106  2023/06/24 15:58:44.081488 data.go:99: 65,012 / 100,000 = 65.0% in 3s: 1,685 rows/s (ETA 20s)
   107  2023/06/24 15:58:46.834091 data.go:99: 70,013 / 100,000 = 70.0% in 3s: 1,816 rows/s (ETA 16s)
   108  2023/06/24 15:58:49.852387 data.go:99: 75,014 / 100,000 = 75.0% in 3s: 1,656 rows/s (ETA 15s)
   109  2023/06/24 15:58:52.619915 data.go:99: 80,015 / 100,000 = 80.0% in 3s: 1,807 rows/s (ETA 11s)
   110  2023/06/24 15:58:55.257472 data.go:99: 85,016 / 100,000 = 85.0% in 3s: 1,896 rows/s (ETA 7s)
   111  2023/06/24 15:58:58.177528 data.go:99: 90,017 / 100,000 = 90.0% in 3s: 1,712 rows/s (ETA 5s)
   112  2023/06/24 15:59:01.059509 data.go:99: 95,018 / 100,000 = 95.0% in 3s: 1,735 rows/s (ETA 2s)
   113   interval| duration| runtime| clients|   QPS| min|  P999|     max| r_QPS| r_min| r_P999| r_max| w_QPS| w_min| w_P999|   w_max| TPS| c_min| c_P999| c_max| errors|compute
   114          1|     60.8|    60.8|       1| 1,645| 365| 3,019| 192,360|     0|     0|      0|     0| 1,645|   365|  3,019| 192,360|   0|     0|      0|     0|      0|local
   115  
   116  2023/06/24 15:59:03.737413 stage.go:206: [setup.yaml] Stage done
   117  ```
   118  {{< /expand >}}
   119  
The benchmark statistics are printed near the end of the output, like this (scroll right &rarr;):
   121  
   122  ```
   123   interval| duration| runtime| clients|   QPS| min|  P999|     max| r_QPS| r_min| r_P999| r_max| w_QPS| w_min| w_P999|   w_max| TPS| c_min| c_P999| c_max| errors|compute
   124          1|     60.8|    60.8|       1| 1,645| 365| 3,019| 192,360|     0|     0|      0|     0| 1,645|   365|  3,019| 192,360|   0|     0|      0|     0|      0|local
   125  ```
   126  
Your numbers will vary (possibly by a lot).
Since this is a quick tutorial, let's examine just these four columns, which are stats for all queries:
   129  
   130  ``` 
   131     QPS| min|  P999|     max|
   132   1,645| 365| 3,019| 192,360|
   133  ```
   134  
   135  Everyone knows QPS: 1,645 on this run.
   136  The next three columns are query response times, and Finch reports all query times in microseconds (&micro;s).
   137  
   138  * The minimum query response time was 365 &micro;s.
   139  The compute has a locally-attached SSD, so this minimum is believable.
   140  * The P999 (99.9th percentile) query response time was about 3.0 _milliseconds_ (converting from &micro;s).
   141  * The maximum query response time was about 192 milliseconds (converting from &micro;s), which is high for a locally-attached SSD.
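
All three columns are in microseconds, so the conversions above are just a divide by 1,000. As a quick check, using the numbers from this run:

```go
package main

import "fmt"

func main() {
	const usPerMs = 1000.0 // 1,000 microseconds per millisecond
	fmt.Printf("min:  %d µs\n", 365)
	fmt.Printf("P999: %.1f ms\n", 3019/usPerMs)   // ≈ 3.0 ms
	fmt.Printf("max:  %.0f ms\n", 192360/usPerMs) // ≈ 192 ms
}
```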
   142  
   143  ### Read-only
   144  
   145  Copy-paste each code block into the file name given above each one.
   146  
   147  {{< columns >}}
   148  _read-only.sql_
   149  ```sql
   150  SELECT n, c FROM finch.t1 WHERE id = @id
   151  ```
   152  <---> <!-- magic separator, between columns -->
   153  _read-only.yaml_
   154  ```yaml
   155  stage:
   156    runtime: 10s
   157    workload:
   158      - clients: 4
   159    trx:
   160      - file: read-only.sql
   161        data:
   162          id:
   163            generator: "int"
   164  ```
   165  {{< /columns >}}
   166  
   167  Run the read-only benchmark:
   168  
   169  ```bash
   170  ./finch read-only.yaml
   171  ```
   172  
   173  What kind of QPS and response time stats did you get on your machine?
   174  On this machine:
   175  
   176  ```
   177   interval| duration| runtime| clients|    QPS| min| P999|   max|  r_QPS| r_min| r_P999| r_max| w_QPS| w_min| w_P999| w_max| TPS| c_min| c_P999| c_max| errors|compute
   178          1|       10|      10|       4| 17,651|  47|  794| 2,406| 17,651|    47|    794| 2,406|     0|     0|      0|     0|   0|     0|      0|     0|      0|local
   179  ```
   180  
   181  17,651 QPS: not bad. A 47 microsecond read: very fast. A 2.4 millisecond read: pretty slow.
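
One more derived number, as a rough approximation (not something Finch prints): with 4 clients each issuing one query at a time, back-to-back, the average response time is roughly clients &divide; QPS, which lands between the reported min (47 &micro;s) and P999 (794 &micro;s):

```go
package main

import "fmt"

func main() {
	const qps, clients = 17651.0, 4.0
	// Each client runs one query at a time, so avg latency ≈ clients / QPS.
	avgUs := clients / qps * 1e6 // seconds → microseconds
	fmt.Printf("approx. avg response time: %.0f µs\n", avgUs) // ≈ 227 µs
}
```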
   182  
   183  ### Row Lock Contention
   184  
   185  The previous two benchmarks were too easy.
Let's write a transaction (trx) that locks and updates rows, then run that trx on several clients.
   187  To make it extra challenging, let's limit it to the first 1,000 rows.
   188  This should create row lock contention and slow down performance noticeably.
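
As a rough back-of-the-envelope estimate (an illustration, not something Finch computes): if the 4 clients each pick a uniformly random id among the first 1,000 rows, the chance that any two pick the same row in a single round is small, but at thousands of transactions per second, with each lock held for the whole trx, collisions add up fast:

```go
package main

import "fmt"

func main() {
	const rows, clients = 1000.0, 4
	// Probability that 4 independent uniform picks from 1,000 ids are all distinct.
	distinct := 1.0
	for i := 0; i < clients; i++ {
		distinct *= (rows - float64(i)) / rows
	}
	p := 1 - distinct // chance of at least one collision per round of picks
	fmt.Printf("collision probability per round: %.2f%%\n", p*100) // ≈ 0.60%
}
```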
   189  
   190  Copy-paste each code block into the file name given above each one.
   191  
   192  {{< columns >}}
   193  _rw-trx.sql_
   194  ```sql
   195  BEGIN
   196  
   197  SELECT c FROM finch.t1 WHERE id = @id FOR UPDATE
   198  
   199  UPDATE finch.t1 SET n = n + 1 WHERE id = @id
   200  
   201  COMMIT
   202  ```
   203  <---> <!-- magic separator, between columns -->
   204  _row-lock.yaml_
   205  ```yaml
   206  stage:
   207    runtime: 20s
   208    workload:
   209      - clients: 4
   210    trx:
   211      - file: rw-trx.sql
   212        data:
   213          id:
   214            generator: "int"
   215            scope: trx
   216            params:
   217              max: 1,000
   218  ```
   219  {{< /columns >}}
   220  
   221  Before running the benchmark, execute this query on MySQL to get the current number of row lock waits:
   222  
   223  ```sql
   224  SHOW GLOBAL STATUS LIKE 'Innodb_row_lock_waits';
   225  ```
   226  
   227  Then run the slow row-lock benchmark:
   228  
   229  ```bash
   230  ./finch row-lock.yaml
   231  ```
   232  
   233  Once the benchmark completes (in 20s), execute `SHOW GLOBAL STATUS LIKE 'Innodb_row_lock_waits';` on MySQL again.
The number of row lock waits should have increased, proving that the trx caused them.
   235  
   236  Query response times should be higher (slower), too:
   237  
   238  ```
   239   interval| duration| runtime| clients|   QPS| min|  P999|    max| r_QPS| r_min| r_P999|  r_max| w_QPS| w_min| w_P999|  w_max|   TPS| c_min| c_P999|  c_max| errors|compute
   240          1|       20|      20|       4| 9,461|  80| 1,659| 79,518| 2,365|   148|  1,096| 37,598| 2,365|   184|  1,202| 40,770| 2,365|   366|  2,398| 79,518|      0|local
   241  ```
   242  
   243  Scroll right (&rarr;) and notice `TPS 2,365`: Finch measures transactions per second (TPS) when the Finch trx (_rw-trx.sql_) has an explicit SQL transaction: `BEGIN` and `COMMIT`.
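
A quick sanity check on these numbers: _rw-trx.sql_ contains four statements (`BEGIN`, `SELECT ... FOR UPDATE`, `UPDATE`, `COMMIT`), so QPS should be roughly four times TPS:

```go
package main

import "fmt"

func main() {
	const tps = 2365      // reported TPS
	const stmtsPerTrx = 4 // BEGIN, SELECT ... FOR UPDATE, UPDATE, COMMIT
	fmt.Println(tps * stmtsPerTrx) // 9460, which matches the reported 9,461 QPS
}
```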
   244  
   245  `c_max 79,518` means the maximum `COMMIT` time was 79,518 &micro;s (80 milliseconds), which is pretty slow for a locally-attached SSD, but this is why Finch measures `COMMIT` latency: when a transaction commits, MySQL makes the data changes durable on disk, which is one of the slowest (but most important) operations of an ACID-compliant database.
   246  
   247  ---
   248  
   249  {{< hint type=tip title="Read Next" >}}
   250  Learn the [Concepts]({{< relref "intro/concepts" >}}) underlying these benchmarks and how Finch works.
   251  {{< /hint >}}
   252