
---
layout: post
title: AISLOADER
permalink: /docs/aisloader
redirect_from:
 - /aisloader.md/
 - /docs/aisloader.md/
---

# AIS Loader

AIS Loader ([`aisloader`](/bench/tools/aisloader)) is a tool to measure storage performance. It is a load generator that we constantly use to benchmark and stress-test [AIStore](https://github.com/NVIDIA/aistore) or any S3-compatible backend.

In fact, aisloader can list, write, and read S3(**) buckets _directly_, which makes it a convenient, easy-to-use benchmark for comparing storage performance **with** aistore in front of S3 and **without**.

> (**) `aisloader` can be further easily extended to work directly with any Cloud storage provider including, but not limited to, aistore-supported GCP and Azure.

In addition, `aisloader` generates synthetic workloads that mimic training and inference workloads - a capability that allows running benchmarks in isolation (which is often preferable), avoiding compute-side bottlenecks (if any) and the associated complexity.

There's a large set of command-line switches that makes it possible to realize almost any conceivable workload, with basic permutations always including:

* number of workers
* read and write sizes
* read and write ratios

Detailed protocol-level tracing statistics are also available - see the [HTTP tracing](#http-tracing) section below for a brief introduction.

## Table of Contents

- [Setup](#setup)
- [Command-line Options](#command-line-options)
    - [Often used options explanation](#often-used-options-explanation)
- [Environment variables](#environment-variables)
- [Examples](#examples)
- [Collecting stats](#collecting-stats)
    - [Grafana](#grafana)
- [HTTP tracing](#http-tracing)
- [AISLoader Composer](#aisloader-composer)
- [References](#references)

## Setup

To get started, go to the root directory and run:

```console
$ make aisloader
```

For usage, run: `aisloader`, `aisloader usage`, or `aisloader --help`.

## Command-line Options

For the most recently updated command-line options and examples, please run `aisloader` or `aisloader usage`.

### Options via AIS Loader flags

| Command-line option | Type | Description | Default |
| --- | --- | --- | --- |
| -batchsize | `int` | Batch size to list and delete | `100` |
| -bprops | `json` | JSON string formatted as per the SetBucketProps API and containing bucket properties to apply | `""` |
| -bucket | `string` | Bucket name. The bucket will be created if it doesn't exist. If empty, aisloader generates a new random bucket name | `""` |
| -cached | `bool` | List in-cluster objects - only those objects from a remote bucket that are present ("cached") | `false` |
| -cksum-type | `string` | Checksum type to use for PUT object requests | `xxhash`|
| -cleanup | `bool` | When true, remove the bucket upon benchmark termination | `n/a` (required) |
| -dry-run | `bool` | Show the entire set of parameters that aisloader will use when actually running | `false` |
| -duration | `string`, `int` | Benchmark duration (0 - run forever or until Ctrl-C). Note that if both duration and totalputsize are zeros, aisloader will have nothing to do | `1m` |
| -epochs | `int` | Number of "epochs" to run whereby each epoch entails a full pass through the entire listed bucket | `1`|
| -etl | `string` | Built-in ETL, one of: `tar2tf`, `md5`, or `echo`. Each object that `aisloader` GETs undergoes the selected transformation. See also: `-etl-spec` option. | `""` |
| -etl-spec | `string` | Custom ETL specification (pathname). Must be compatible with Kubernetes Pod specification. Each object that `aisloader` GETs will undergo this user-defined transformation. See also: `-etl` option. | `""` |
| -getconfig | `bool` | When true, generate control-plane load by reading AIS proxy configuration (that is, exercise the control path instead of reading/writing data) | `false` |
| -getloaderid | `bool` | When true, print the stored/computed unique loaderID aka aisloader identifier and exit | `false` |
| -ip | `string` | AIS proxy/gateway IP address or hostname | `localhost` |
| -json | `bool` | When true, print the output in JSON | `false` |
| -loaderid | `string` | ID to identify a loader among multiple concurrent instances | `0` |
| -loaderidhashlen | `int` | Size (in bits) of the generated aisloader identifier. Cannot be used together with loadernum | `0` |
| -loadernum | `int` | Total number of aisloaders running concurrently and generating combined load. If defined, must be greater than the loaderid and cannot be used together with loaderidhashlen | `0` |
| -maxputs | `int` | Maximum number of objects to PUT | `0` |
| -maxsize | `int` | Maximum object size, may contain [multiplicative suffix](#bytes-multiplicative-suffix) | `1GiB` |
| -minsize | `int` | Minimum object size, may contain [multiplicative suffix](#bytes-multiplicative-suffix) | `1MiB` |
| -numworkers | `int` | Number of goroutine workers operating on AIS in parallel | `10` |
| -pctput | `int` | Percentage of PUTs in the aisloader-generated workload | `0` |
| -latest | `bool` | When true, check in-cluster metadata and possibly GET the latest object version from the associated remote bucket | `false` |
| -port | `int` | Port number for the proxy server | `8080` |
| -provider | `string` | `ais` - for AIS buckets, `cloud` - for Cloud buckets; other supported values include `aws` and `gcp`, for Amazon and Google clouds, respectively | `ais` |
| -putshards | `int` | Spread generated objects over this many subdirectories (max 100k) | `0` |
| -quiet | `bool` | When starting to run, do not print command-line arguments, default settings, and usage examples | `false` |
| -randomname | `bool` | When true, generate object names of 32 random characters. This option is ignored when loadernum is defined | `true` |
| -readertype | `string` | Type of reader. Available: `sg`, `file`, `rand`, `tar` | `sg` |
| -readlen | `string`, `int` | Read range length, can contain [multiplicative suffix](#bytes-multiplicative-suffix) | `""` |
| -readoff | `string`, `int` | Read range offset, can contain [multiplicative suffix](#bytes-multiplicative-suffix) | `""` |
| -s3endpoint | `string` | S3 endpoint to read/write an S3 bucket directly (with no aistore) | `""` |
| -s3profile | `string` | Other-than-default S3 config profile referencing alternative credentials | `""` |
| -seed | `int` | Random seed to achieve deterministic, reproducible results (0 - use current time in nanoseconds) | `0` |
| -skiplist | `bool` | Whether to skip listing objects in a bucket before running a PUT workload | `false` |
| -filelist | `string` | Local or locally accessible text file containing object names (for subsequent reading) | `""` |
| -stats-output | `string` | Filename to log statistics (empty string translates as standard output) | `""` |
| -statsdip | `string` | StatsD IP address or hostname | `localhost` |
| -statsdport | `int` | StatsD UDP port | `8125` |
| -statsdprobe | `bool` | Test-probe the StatsD server prior to benchmarks | `true` |
| -statsinterval | `int` | Interval in seconds to print performance counters; 0 - disabled | `10` |
| -subdir | `string` | Virtual destination directory for all aisloader-generated objects | `""` |
| -test-probe | `bool`| Test the StatsD server prior to running benchmarks | `false` |
| -timeout | `string` | Client HTTP timeout; `0` = infinity | `10m` |
| -tmpdir | `string` | Local directory to store temporary files | `/tmp/ais` |
| -tokenfile | `string` | Authentication token (FQN) | `""`|
| -totalputsize | `string`, `int` | Stop the PUT workload once cumulative PUT size reaches or exceeds this value; can contain [multiplicative suffix](#bytes-multiplicative-suffix), 0 = no limit | `0` |
| -trace-http | `bool` | Trace HTTP latencies (see [HTTP tracing](#http-tracing)) | `false` |
| -uniquegets | `bool` | When true, GET objects randomly and equally - that is, make sure *not* to GET some objects more frequently than others | `true` |
| -usage | `bool` | Show command-line options, usage, and examples | `false` |
| -verifyhash | `bool` | Checksum-validate GET: recompute the object checksum and validate it against the one received with the GET metadata | `true` |

### Often used options explanation

#### Duration

The load can run for a given period of time (option `-duration <duration>`) or until the specified amount of data is generated (option `-totalputsize=<total size in KBs>`).

If both options are provided, the test finishes on a whichever-comes-first basis.

Example: 100% write into the bucket "abc" for 2 hours:

```console
$ aisloader -bucket=abc -provider=ais -duration 2h -totalputsize=4000000 -pctput=100
```

The above will run for two hours or until it writes around 4GB of data into the bucket, whichever comes first.

#### Write vs Read

You can choose the percentage of writes (versus reads) by setting the option `-pctput=<put percentage>`.

Example with a mixed PUT=30% and GET=70% load:

```console
$ aisloader -bucket=ais://abc -duration 5m -pctput=30 -cleanup=true
```

Example: 100% PUT:

```console
$ aisloader -bucket=abc -duration 5m -pctput=100 -cleanup=true
```

The duration in both examples above is set to 5 minutes.

> To test 100% read (`-pctput=0`), make sure to fill the bucket beforehand.

#### Read range

The loader can read the entire object (default) **or** a range of object bytes.

To set the offset and length to read, use the options `-readoff=<read offset (in bytes)>` and `-readlen=<length to read (in bytes)>`.

For convenience, both options support size suffixes: `k` - for KiB, `m` - for MiB, and `g` - for GiB.

Example that reads a 32MiB segment at a 1KiB offset from each object stored in the bucket "abc":

```console
$ aisloader -bucket=ais://abc -duration 5m -cleanup=false -readoff=1024 -readlen=32m
```

The test (above) will run for 5 minutes and will not "cleanup" after itself (see next section).

#### Cleanup

**NOTE**: `-cleanup` is a mandatory option that defines whether to destroy the bucket upon completion of the benchmark.

The option must be specified in the command line.

Example:

```console
$ aisloader -bucket=ais://abc -pctput=100 -totalputsize=16348 -cleanup=false
$ aisloader -bucket=ais://abc -duration 1h -pctput=0 -cleanup=true
```

The first line in the example above fills the bucket "abc" with approximately 16MiB of random data. The second uses the existing data to test read performance for 1 hour, and then removes all data.

If you just need to clean up old data prior to running a test, run the loader with 0 (zero) total put size and zero duration:

```console
$ aisloader -bucket=<bucket to cleanup> -duration 0s -totalputsize=0 -cleanup=true
```

#### Object size

For the PUT workload, the loader generates randomly-filled objects. But what about object sizing?

By default, object sizes are randomly selected as well, in the range between 1MiB and 1GiB. To set preferred (or fixed) object size(s), use the options `-minsize=<minimum object size>` and `-maxsize=<maximum object size>`; both accept a [multiplicative suffix](#bytes-multiplicative-suffix).
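
For instance, to PUT fixed-size 1MiB objects (a usage sketch; the bucket name is arbitrary):

```console
$ aisloader -bucket=ais://abc -pctput=100 -duration 10m -minsize=1MB -maxsize=1MB -cleanup=true
```

Setting `-minsize` equal to `-maxsize` removes size randomization entirely, which is often desirable when comparing runs.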

#### Setting bucket properties

Before starting a test, it is possible to set `mirror` or `EC` properties on a bucket.

> For background on local mirroring and erasure coding (EC), please see [storage services](/docs/storage_svcs.md).

To achieve that, use the option `-bprops`. For example:

```console
$ aisloader -bucket=ais://abc -pctput=0 -cleanup=false -duration 10s -bprops='{"mirror": {"copies": 2, "enabled": false}, "ec": {"enabled": false, "data_slices": 2, "parity_slices": 2}}'
```

The above example shows the global default values. You can omit the defaults and specify only those values that you'd like to change. For instance, to enable erasure coding on the bucket "abc":

```console
$ aisloader -bucket=ais://abc -duration 1h -bprops='{"ec": {"enabled": true}}' -cleanup=false
```

With the default of 2 data and 2 parity slices, this requires the cluster to have at least 5 target nodes: 2 for data slices, 2 for parity slices, and one for the original object.

> Once erasure coding is enabled, its properties `data_slices` and `parity_slices` cannot be changed on the fly.

> Note that (n `data_slices`, m `parity_slices`) erasure coding requires at least (n + m + 1) target nodes in a cluster.

> Even though erasure coding and/or mirroring can be enabled/disabled and otherwise reconfigured at any point in time, specifically for the purposes of running benchmarks it is generally recommended to do it once, _prior_ to writing any data to the bucket in question.

The following sequence populates a bucket configured for both local mirroring and erasure coding, and then reads from it for 1h:

```console
# Fill the bucket
$ aisloader -bucket=ais://abc -cleanup=false -pctput=100 -duration 100m -bprops='{"mirror": {"enabled": true}, "ec": {"enabled": true}}'

# Read
$ aisloader -bucket=abc -cleanup=false -pctput=0 -duration 1h
```

### Bytes Multiplicative Suffix

Parameters in `aisloader` that represent a number of bytes can be specified with a multiplicative suffix.
For example: `8M` would specify 8 MiB.
The following multiplicative suffixes are supported: 't' or 'T' - TiB, 'g' or 'G' - GiB, 'm' or 'M' - MiB, 'k' or 'K' - KiB.
Note that the suffix is entirely optional, and therefore an input such as `300` will be interpreted as 300 Bytes.
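
To illustrate (bucket name and sizes below are hypothetical), the same suffixes apply uniformly across all size-valued options:

```console
$ aisloader -bucket=ais://abc -pctput=100 -duration 1m -minsize=4K -maxsize=8M -totalputsize=1G -cleanup=true
```

Here `4K` = 4 KiB, `8M` = 8 MiB, and `1G` = 1 GiB.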

## Environment variables

| Environment Variable | Type | Description |
| -- | -- | -- |
| `AIS_ENDPOINT` | `string` | Cluster's endpoint: http or https address of any aistore gateway in this cluster. Overrides the `ip` and `port` flags. |

To state the same slightly differently, the cluster endpoint can be defined in two ways:

* as a (plain) http://ip:port address, whereby '--ip' and '--port' are command-line options.
* via the `AIS_ENDPOINT` environment variable, universally supported across all AIS clients, e.g.:

```console
$ export AIS_ENDPOINT=https://10.07.56.68:51080
```

In addition, the environment can be used to specify client-side TLS (aka HTTPS) configuration:

| var name | description |
| -- | -- |
| `AIS_CRT`             | X509 certificate |
| `AIS_CRT_KEY`         | X509 certificate's private key |
| `AIS_CLIENT_CA`       | Certificate authority that authorized (signed) the certificate |
| `AIS_SKIP_VERIFY_CRT` | when true, skip X509 cert verification (usually enabled to circumvent limitations of self-signed certs) |
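
For example (the certificate paths below are placeholders):

```console
$ export AIS_CRT=/path/to/cert.pem
$ export AIS_CRT_KEY=/path/to/cert.key
$ export AIS_CLIENT_CA=/path/to/ca.pem
```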

* See also: [TLS: testing with self-signed certificates](/docs/getting_started.md#tls-testing-with-self-signed-certificates)

## Examples

For the most recently updated command-line options and examples, please run `aisloader` or `aisloader usage`.

**1**. Create a 10-second load of 50% PUT and 50% GET requests:

    ```console
    $ aisloader -bucket=my_ais_bucket -duration=10s -pctput=50 -provider=ais
    Found 0 existing objects
    Run configuration:
    {
        "proxy": "http://172.50.0.2:8080",
        "provider": "ais",
        "bucket": "my_ais_bucket",
        "duration": "10s",
        "put upper bound": 0,
        "put %": 50,
        "minimal object size in Bytes": 1024,
        "maximum object size in Bytes": 1048576,
        "worker count": 1,
        "stats interval": "10s",
        "backed by": "sg",
        "cleanup": true
    }

    Actual run duration: 10.313689487s

    Time      OP    Count                 	Total Bytes           	Latency(min, avg, max)              	Throughput            	Error
    01:52:52  Put   26                    	11.19GB               	296.39ms   5.70s      14.91s        	639.73MB              	0
    01:52:52  Get   16                    	3.86GB                	58.89ms    220.20ms   616.72ms      	220.56MB              	0
    01:52:52  CFG   0                     	0B                    	0.00ms     0.00ms     0.00ms        	0B                    	0
    01:52:52 Clean up ...
    01:52:54 Clean up done
    ```

**2**. Time-based 100% PUT into an ais bucket. Upon exit, the bucket is destroyed:

    ```console
    $ aisloader -bucket=nvais -duration 10s -cleanup=true -numworkers=3 -minsize=1K -maxsize=1K -pctput=100 -provider=ais
    ```

**3**. Timed (for 1h) 100% GET from a Cloud bucket, no cleanup:

    ```console
    $ aisloader -bucket=aws://nvaws -duration 1h -numworkers=30 -pctput=0 -cleanup=false
    ```

**4**. Mixed 30%/70% PUT and GET of variable-size objects to/from a Cloud bucket. PUT will generate random object names and is limited by the 10GB total size. Cleanup enabled - upon completion, all generated objects and the bucket itself will be deleted:

    ```console
    $ aisloader -bucket=s3://nvaws -duration 0s -cleanup=true -numworkers=3 -minsize=1024 -maxsize=1MB -pctput=30 -totalputsize=10G
    ```

**5**. PUT 1GB total into an ais bucket with cleanup disabled, object size = 1MB, duration unlimited:

    ```console
    $ aisloader -bucket=nvais -cleanup=false -totalputsize=1G -duration=0 -minsize=1MB -maxsize=1MB -numworkers=8 -pctput=100 -provider=ais
    ```

**6**. 100% GET from an ais bucket:

    ```console
    $ aisloader -bucket=nvais -duration 5s -numworkers=3 -pctput=0 -provider=ais -cleanup=false
    ```

**7**. PUT 2000 objects named as `aisloader/hex({0..2000}{loaderid})`:

    ```console
    $ aisloader -bucket=nvais -duration 10s -numworkers=3 -loaderid=11 -loadernum=20 -maxputs=2000 -objNamePrefix="aisloader" -cleanup=false
    ```

**8**. Use random object names and the loaderID to report statistics:

    ```console
    $ aisloader -loaderid=10
    ```

**9**. PUT objects with random name generation based on the specified loaderID and the total number of concurrent aisloaders:

    ```console
    $ aisloader -loaderid=10 -loadernum=20
    ```

**10**. Same as above, except that the loaderID is computed by the aisloader as `hash(loaderstring) & 0xff`:

    ```console
    $ aisloader -loaderid=loaderstring -loaderidhashlen=8
    ```

**11**. Print the loaderID and exit (all 3 examples below), with the resulting loaderID shown on the right:

    ```console
    $ aisloader -getloaderid (0x0)
    $ aisloader -loaderid=10 -getloaderid (0xa)
    $ aisloader -loaderid=loaderstring -loaderidhashlen=8 -getloaderid (0xdb)
    ```

**12**. Destroy an existing ais bucket. If the bucket is Cloud-based, delete all objects:

    ```console
    $ aisloader -bucket=nvais -duration 0s -totalputsize=0 -cleanup=true
    ```

**13**. Generate load on a cluster listening on a custom IP address and port:

    ```console
    $ aisloader -ip="example.com" -port=8080
    ```

**14**. Generate load on a cluster whose endpoint is specified via environment variable:

    ```console
    $ AIS_ENDPOINT="example.com:8080" aisloader
    ```

**15**. Use HTTPS when connecting to a cluster:

    ```console
    $ aisloader -ip="https://localhost" -port=8080
    ```

**16**. PUT TAR files with random files inside into a cluster:

    ```console
    $ aisloader -bucket=my_ais_bucket -duration=10s -pctput=100 -provider=ais -readertype=tar
    ```

**17**. Generate load on the `tar2tf` ETL. A new ETL is started and then stopped at the end. TAR files are PUT to the cluster. Only available when the cluster is deployed on Kubernetes:

    ```console
    $ aisloader -bucket=my_ais_bucket -duration=10s -pctput=100 -provider=ais -readertype=tar -etl=tar2tf -cleanup=false
    ```

**18**. Timed 100% GET _directly_ from an S3 bucket (notice the '-s3endpoint' command line):

    ```console
    $ aisloader -bucket=s3://xyz -cleanup=false -numworkers=8 -pctput=0 -duration=10m -s3endpoint=https://s3.amazonaws.com
    ```

**19**. PUT approx. 8000 files into an S3 bucket directly, skipping the printing of usage and defaults. Similar to the previous example, aisloader goes directly to the given S3 endpoint ('-s3endpoint'), and aistore is not being used:

    ```console
    $ aisloader -bucket=s3://xyz -cleanup=false -minsize=16B -maxsize=16B -numworkers=8 -pctput=100 -totalputsize=128k -s3endpoint=https://s3.amazonaws.com -quiet
    ```

**20**. Generate a list of object names (once), and then run aisloader without executing list-objects:

    ```console
    $ ais ls ais://nnn --props name -H > /tmp/a.txt
    $ aisloader -bucket=ais://nnn -duration 1h -numworkers=30 -pctput=0 -filelist /tmp/a.txt -cleanup=false
    ```

## Collecting stats

Collecting is easy - `aisloader` supports at-runtime monitoring with Graphite via StatsD.
When starting up, `aisloader` will try to connect
to the provided StatsD server (see: `statsdip` and `statsdport` options). Once the
connection is established, the statistics from aisloader are sent in the following
format:

```
<metric_type>.aisloader.<hostname>-<loaderid>.<metric>
```

* `metric_type` - can be: `gauge`, `timer`, `counter`
* `hostname` - the hostname of the machine on which the loader is run
* `loaderid` - see: `-loaderid` option
* `metric` - can be: `latency.*`, `get.*`, `put.*` (see: [aisloader metrics](/docs/metrics.md#ais-loader-metrics))
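
As a concrete illustration (taken verbatim from the sample trace in the [HTTP tracing](#http-tracing) section below), a PUT latency timer emitted by loader `0` running on host `u18044` arrives at the StatsD server as:

```
aisloader.u18044-0.put.latency:4.068928072806246|ms|@0.000052
```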

### Grafana

Grafana helps visualize the collected statistics. It is convenient to use and
provides numerous tools to measure and calculate different metrics.

We provide a simple [script](/deploy/dev/local/deploy_grafana.sh) that sets up
Graphite and Grafana servers running in separate Docker containers. To add
new dashboards and panels, please follow the [grafana tutorial](http://docs.grafana.org/guides/getting_started/).

When selecting a series in panel view, it should be in the format: `stats.aisloader.<loader>.*`.
Remember that metrics will not be visible (and you will not be able to select
them) until you start the loader.

## HTTP tracing

The following is a brief illustrated sequence to enable detailed tracing, capture statistics, and **toggle** tracing on/off at runtime.

**IMPORTANT NOTE:**
> The amount of generated (and extremely detailed) metrics can put a strain on your StatsD server. That's exactly the reason for the runtime switch to **toggle** HTTP tracing on/off. The example below shows how to do it (in particular, see `kill -HUP`).

### 1. Run aisloader for 90s (32 workers, 100% write, sizes between 1KB and 1MB) with detailed tracing enabled:

```console
$ aisloader -bucket=ais://abc -duration 90s -numworkers=32 -minsize=1K -maxsize=1M -pctput=100 --cleanup=false --trace-http=true
```

### 2. Have `netcat` listening on the default StatsD port `8125`:

```console
$ nc -l -u -k 8125

# The result will look as follows - notice the "*latency*" metrics (in milliseconds):

...
aisloader.u18044-0.put.latency.posthttp:0.0005412320409368235|ms|@0.000049
aisloader.u18044-0.put.latency.proxyheader:0.06676835268647904|ms|@0.000049
aisloader.u18044-0.put.latency.targetresponse:0.7371088368431411|ms|@0.000049aisproxy.DLEp8080.put.count:587262|caistarget.vuIt8081.kalive.ms:1|ms
aistarget.vuIt8081.disk.sda.avg.rsize:58982|g
aistarget.vuIt8081.disk.sda.avg.wsize:506227|g
aistarget.vuIt8081.put.count:587893|c
aistarget.vuIt8081.put.redir.ms:2|ms
aistarget.vuIt8081.disk.sda.read.mbps:5.32|g
aistarget.vuIt8081.disk.sda.util:3|gaisloader.u18044-0.get.count:0|caisloader.u18044-0.put.count:19339|caisloader.u18044-0.put.pending:7|g|@0.000052aisloader.u18044-0.put.latency:4.068928072806246|ms|@0.000052
aisloader.u18044-0.put.minlatency:0|ms|@0.000052
aisloader.u18044-0.put.maxlatency:60|ms|@0.000052aisloader.u18044-0.put.throughput:1980758|g|@0.000052aisloader.u18044-0.put.latency.posthttp:0.0005170898185014737|ms|@0.000052
aisloader.u18044-0.put.latency.proxyheader:0.06742851233259217|ms|@0.000052
aisloader.u18044-0.put.latency.proxyrequest:0.1034696726821449|ms|@0.000052
aisloader.u18044-0.put.latency.targetheader:0.0699622524432494|ms|@0.000052
aisloader.u18044-0.put.latency.targetrequest:0.09168002482031129|ms|@0.000052
aisloader.u18044-0.put.latency.targetresponse:0.806660116862299|ms|@0.000052
aisloader.u18044-0.put.latency.proxy:0.6616681317544858|ms|@0.000052
aisloader.u18044-0.put.latency.targetconn:1.0948859816950205|ms|@0.000052
aisloader.u18044-0.put.latency.proxyresponse:0.425616629608563|ms|@0.000052
aisloader.u18044-0.put.latency.proxyconn:1.2669734732923108|ms|@0.000052
aisloader.u18044-0.put.latency.target:1.0063602047675682|ms|@0.000052aisproxy.DLEp8080.put.count:605044|caistarget.vuIt8081.put.redir.ms:2|ms
...
```

### 3. Finally, toggle detailed tracing on and off by sending aisloader a `SIGHUP`:

```console
$ pgrep -a aisloader
3800 aisloader -bucket=ais://abc -duration 90s -numworkers=32 -minsize=1K -maxsize=1M -pctput=100 --cleanup=false --trace-http=true

# kill -1 3800
# or, same: kill -HUP 3800
```

### The result:

```console
Time      OP    Count                   Size (Total)            Latency (min, avg, max)                 Throughput (Avg)        Errors (Total)
10:11:27  PUT   20,136 (20,136 8 0)     19.7MiB (19.7MiB)       755.308µs  3.929ms    42.493ms         1.97MiB/s (1.97MiB/s)   -
...
Detailed latency info is disabled
...

# As stated, `SIGHUP` is a binary toggle - the next time it is used, it'll re-enable detailed tracing, with `aisloader` printing:

Detailed latency info is enabled
```

> Note that other than `--trace-http`, all command-line options in this section are used for purely illustrative purposes.

## AISLoader Composer

For benchmarking production-level clusters, a single aisloader instance may not be able to generate enough load to saturate the cluster. In this case, multiple aisloader instances can be coordinated via the [AISLoader Composer](/bench/tools/aisloader-composer/). See the [README](/bench/tools/aisloader-composer/README.md) for setup instructions.

## References

For documented `aisloader` metrics, please refer to:

* [aisloader metrics](/docs/metrics.md#ais-loader-metrics)

The same readme (above) also describes:

* [Statistics, Collected Metrics, Visualization](/docs/metrics.md)

For [StatsD](https://github.com/etsy/statsd)-compliant backends, see:

* [StatsD backends](https://github.com/statsd/statsd/blob/master/docs/backend.md#supported-backends)

Finally, for an alternative to StatsD - monitoring via the Prometheus integration - see:

* [Prometheus](/docs/prometheus.md)