---
weight: 2
---

Finch ships with example benchmarks in subdirectory `benchmarks/`.
As complete working examples that range from simple to complex, they're good study material for learning how to write Finch benchmarks and use Finch features.

A quick-run set of commands is included that presumes:
* Current working directory in repo: `bin/finch/`
* [Default MySQL user]({{< relref "operate/mysql#user" >}})
* Database `finch` exists and is empty (no tables)

{{< hint type=note title="Default Database" >}}
The Finch example benchmarks do not have a default database, and they do not drop schemas or tables.
Create a database to use; database `finch` is suggested.
Drop and recreate the database to reset and rerun a benchmark.
{{< /hint >}}

{{< toc >}}

## aurora

Original 2015 Amazon Aurora benchmark
{.tagline}

|Stage|Type|Description|
|-----|----|-----------|
|setup.yaml|DDL|Create schema and insert rows|
|write-only.yaml|Standard|Execute read-write transaction|

A short benchmark with a [long and complicated history](https://hackmysql.com/post/are-aurora-performance-claims-true/).
The short version: Amazon used this benchmark in 2015 to establish its "5X greater throughput than MySQL" claim.
Under the hood, it's the sysbench write-only benchmark with a specific workload: 4 EC2 (compute) instances each running 1,000 clients (all in the same availability zone), querying 250 tables each with 25,000 rows.

Since this benchmark was designed to run on multiple compute instances, it's a little difficult to run locally unless you scale down the instances and clients:

```bash
finch -D finch -p instances=1 -p clients=8 benchmarks/aurora/write-only.yaml
```

## intro

[Intro / Start Here]({{< relref "intro/start-here" >}}) benchmarks
{.tagline}

|Stage|Type|Description|
|-----|----|-----------|
|setup.yaml|DDL|Create schema and insert rows|
|read-only.yaml|Standard|Execute single SELECT|
|row-lock.yaml|Standard|Execute SELECT and UPDATE transaction on 1,000 rows|

Quick run:

```sh
./finch -D finch ../../benchmarks/intro/setup.yaml
# Takes a while to insert 100,000 rows

./finch -D finch ../../benchmarks/intro/read-only.yaml
# Runs for 10s

./finch -D finch ../../benchmarks/intro/row-lock.yaml
# Runs for 20s
```

Demonstrates very basic Finch benchmarks and other concepts (like [stats output]({{< ref "benchmark/statistics" >}})).
Queries use `finch.t1` explicitly, so the benchmark must use database `finch`.

## sysbench

sysbench benchmarks recreated in Finch
{.tagline}

|Stage|Type|Description|
|-----|----|-----------|
|setup.yaml|DDL|Create schema and insert rows|
|read-only.yaml|Standard|sysbench OLTP read-only benchmark|
|write-only.yaml|Standard|sysbench OLTP write-only benchmark|

Quick run:

```sh
./finch -D finch ../../benchmarks/sysbench/setup.yaml

./finch -D finch ../../benchmarks/sysbench/read-only.yaml
```

These recreate two of the legendary [sysbench](https://github.com/akopytov/sysbench) OLTP benchmarks: `oltp_read_only.lua` and `oltp_write_only.lua`.
They use one table named `sbtest1`.
Multiple tables are not supported, but the [aurora](#aurora) benchmark is the same benchmark on 250 tables.
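The write-only stage from the table above runs the same way. A sketch following the same quick-run conventions (working directory `bin/finch/`, database `finch` already populated by setup.yaml):

```sh
# Assumes setup.yaml has already been run against database `finch`
./finch -D finch ../../benchmarks/sysbench/write-only.yaml
```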
## xfer

Naïve money transfer (xfer) with three tables, millions of rows, and a complex read-write transaction
{.tagline}

|Stage|Type|Description|
|-----|----|-----------|
|setup.yaml|DDL|Create schema and insert rows|
|xfer.yaml|Standard|Execute read-write transaction|

The xfer benchmark executes a complex [read-write transaction](https://github.com/square/finch/blob/main/benchmarks/xfer/trx/xfer.sql) on a nontrivial amount of data in three tables:

* `customers`: 1 million rows
* `balances`: 3 million rows (3 per customer)
* `xfers`: approximately 2.2 million rows (1 GB of data)

By default, the xfer stage runs one client per CPU core; specify `-p clients=N` to use fewer CPU cores.
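A quick run in the style of the other examples might look like the following sketch; it assumes the same `bin/finch/` working directory and `finch` database, and `clients=4` is an arbitrary example value:

```sh
./finch -D finch ../../benchmarks/xfer/setup.yaml
# Inserts millions of rows, so expect setup to take a while

./finch -D finch -p clients=4 ../../benchmarks/xfer/xfer.yaml
# clients=4 caps the run at 4 clients instead of one per CPU core
```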