# Benchmark tools

This package and subpackages are for running macro benchmarks on `runsc`. They
are meant to replace the previous //benchmarks benchmark-tools written in
Python.

Benchmarks are meant to look like regular golang benchmarks that use the
`testing.B` API.

## Setup

To run benchmarks you will need:

*   Docker installed (17.09.0 or greater).
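
You can verify the installed Docker version with, for example:

```
docker version --format '{{.Server.Version}}'
```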

## Running benchmarks

To run, use the Makefile:

-   Install runsc as a runtime: `make dev`
    -   The above command will place several configurations of runsc in your
        /etc/docker/daemon.json file. Choose one without the debug option set.
-   Run your benchmark: `make run-benchmark
    RUNTIME=[RUNTIME_FROM_DAEMON.JSON/runc]
    BENCHMARKS_TARGETS=//path/to/target`
-   Additionally, you can benchmark several platforms in one command:

```
make benchmark-platforms BENCHMARKS_PLATFORMS=ptrace,kvm \
BENCHMARKS_TARGETS=//path/to/target
```

The above command will install runtimes and run benchmarks on the ptrace and
kvm platforms, as well as run the benchmark on native runc.
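
For example, a single run against the installed runsc runtime might look like
this (the target path below is illustrative, not a guaranteed target name):

```
make run-benchmark RUNTIME=runsc BENCHMARKS_TARGETS=//test/benchmarks/network:httpd_test
```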

Benchmarks are run as root because some benchmarks require root privileges to
do things like drop caches.
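
For instance, the usual way to drop the page cache on Linux, which only works
as root, is:

```
sync && echo 3 > /proc/sys/vm/drop_caches
```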

## Writing benchmarks

Benchmarks consist of docker images defined by Dockerfiles and golang
`testing.B` benchmarks.

### Dockerfiles:

*   Are stored at //images.
*   New Dockerfiles go in an appropriately named directory at
    `//images/benchmarks/my-cool-dockerfile`.
*   Dockerfiles for benchmarks should:
    *   Use explicitly versioned packages.
    *   Avoid ENV and CMD statements; it is easy to add these in the API via
        `dockerutil.RunOpts`. See the sketch after this list.
*   Note: A common pattern for getting access to a tmpfs mount is to copy files
    there after container start. See: //test/benchmarks/build/bazel_test.go. You
    can also make your own with `RunOpts.Mounts`.
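
A minimal sketch of a benchmark Dockerfile following these guidelines (the
base image, package, and pinned version below are illustrative, not an
existing benchmark image):

```
FROM ubuntu:18.04

# Explicitly versioned package; no ENV or CMD statements, since those
# are supplied at run time via dockerutil.RunOpts.
RUN apt-get update && apt-get install -y apache2=2.4.29-1ubuntu4
```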

### testing.B packages

In general, benchmarks should look like this:

```golang
func BenchmarkMyCoolOne(b *testing.B) {
  machine, err := harness.GetMachine()
  // check err
  defer machine.CleanUp()

  ctx := context.Background()
  container := machine.GetContainer(ctx, b)
  defer container.CleanUp(ctx)

  b.ResetTimer()

  // Respect b.N.
  for i := 0; i < b.N; i++ {
    out, err := container.Run(ctx, dockerutil.RunOpts{
      Image: "benchmarks/my-cool-image",
      Env: []string{"MY_VAR=awesome"},
      // other options...see dockerutil
    }, "sh", "-c", "echo $MY_VAR")
    // check err...
    b.StopTimer()

    // Do parsing and reporting outside of the timer.
    number := parseMyMetric(out)
    b.ReportMetric(number, "my-cool-custom-metric")

    b.StartTimer()
  }
}

func TestMain(m *testing.M) {
  harness.Init()
  os.Exit(m.Run())
}
```

Some notes on the above:

*   Respect and linearly scale by `b.N` so that users can run a number of times
    (--benchtime=10x) or for a time duration (--benchtime=1m). For many
    benchmarks, this is just the runtime of the container under test. Sometimes
    this is a parameter to the container itself. For example, the httpd
    benchmark (and most client/server benchmarks) uses b.N as a parameter to
    the client container that specifies how many requests to make to the
    server. See the sketch after this list.
*   Use the `b.ReportMetric()` method to report custom metrics.
*   Never turn the timer off for the whole run, but stop and restart it
    (`b.StopTimer()`/`b.StartTimer()`) where useful for the benchmark. There
    isn't a way to turn off the default metrics in testing.B (B/op, allocs/op,
    ns/op).
*   Take a look at dockerutil at //pkg/test/dockerutil to see all methods
    available from containers. The API is based on the "official"
    [docker API for golang](https://pkg.go.dev/mod/github.com/docker/docker).
*   `harness.GetMachine()` marks how many machines this test needs. If you have
    a client and a server and want to mark them as separate machines, call
    `harness.GetMachine()` twice.
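
A hedged sketch of that client/server pattern (the image name, client flags,
and the `parseRequestsPerSecond` helper below are hypothetical):

```golang
func BenchmarkMyClientServer(b *testing.B) {
  // ...machine, server container, and serverIP setup elided...

  // Scale the total work linearly with b.N by passing it to the client.
  out, err := client.Run(ctx, dockerutil.RunOpts{
    Image: "benchmarks/my-cool-client", // hypothetical image
  }, "my-client", fmt.Sprintf("--requests=%d", b.N), "--host", serverIP)
  if err != nil {
    b.Fatalf("client failed: %v", err)
  }

  b.StopTimer()
  // parseRequestsPerSecond is a hypothetical parser of the client output.
  b.ReportMetric(parseRequestsPerSecond(out), "requests_per_second")
  b.StartTimer()
}
```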

## Profiling

For profiling, the runtime is required to have the `--profile` flag enabled.
This flag loosens seccomp filters so that the runtime can write profile data to
disk. This configuration is not recommended for production.

To profile, simply run the `benchmark-platforms` command from above and
profiles will be in /tmp/profile.

Or run with: `make run-benchmark RUNTIME=[RUNTIME_UNDER_TEST]
BENCHMARKS_TARGETS=//path/to/target`

Profiles will be in /tmp/profile. Note: runtimes must have the `--profile` flag
set in /etc/docker/daemon.json, and profiling will not work on runc.
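
Collected profiles can then be inspected with the standard pprof tooling, for
example (the exact file layout under /tmp/profile depends on the runtime and
benchmark, so the path below is illustrative):

```
go tool pprof /tmp/profile/runsc/BenchmarkMyCoolOne/cpu.pprof
```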