# Benchmark tools

This package and subpackages are for running macro benchmarks on `runsc`. They
are meant to replace the previous //benchmarks benchmark-tools written in
python.

Benchmarks are meant to look like regular golang benchmarks using the testing.B
library.

## Setup

To run benchmarks you will need:

*   Docker installed (17.09.0 or greater).

## Running benchmarks

To run, use the Makefile:

-   Install runsc as a runtime: `make dev`
    -   The above command will place several configurations of runsc in your
        /etc/docker/daemon.json file. Choose one without the debug option set
        (see the example entries at the end of this section).
-   Run your benchmark: `make run-benchmark
    RUNTIME=[RUNTIME_FROM_DAEMON.JSON/runc] BENCHMARKS_TARGETS=path/to/target`
-   Additionally, you can benchmark several platforms in one command:

```
make benchmark-platforms BENCHMARKS_TARGETS=path/to/target
```

The above command will install the runtimes and run the benchmark on the
systrap and kvm platforms, as well as on native runc.

Benchmarks are run as root, as some benchmarks require root privileges to do
things like drop caches.

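For reference, the runtime entries that `make dev` writes to
`/etc/docker/daemon.json` look roughly like the following; the runtime names,
binary path, and flags here are illustrative, not the exact output:

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--platform=systrap"]
    },
    "runsc-kvm": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--platform=kvm"]
    }
  }
}
```

Pass whichever runtime name you choose as `RUNTIME=` to `make run-benchmark`.
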
## Writing benchmarks

Benchmarks consist of docker images as Dockerfiles and golang testing.B
benchmarks.

### Dockerfiles

*   Are stored at //images.
*   New Dockerfiles go in an appropriately named directory at
    `//images/benchmarks/my-cool-dockerfile`.
*   Dockerfiles for benchmarks should:
    *   Use explicitly versioned packages.
    *   Avoid ENV and CMD statements; it is easy to add these in the API via
        `dockerutil.RunOpts`.
*   Note: A common pattern for getting access to a tmpfs mount is to copy files
    there after container start. See: //test/benchmarks/build/bazel_test.go. You
    can also make your own with `RunOpts.Mounts` (see the sketch after this
    list).

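As a sketch of the `RunOpts.Mounts` approach, the snippet below runs a workload
against a tmpfs mounted at /scratch. It assumes `RunOpts.Mounts` accepts entries
in the Docker Go API's `mount.Mount` format; the image and the command are
placeholders:

```golang
import (
	"context"

	"github.com/docker/docker/api/types/mount"
	"gvisor.dev/gvisor/pkg/test/dockerutil"
)

// runOnTmpfs runs a placeholder workload against a tmpfs-backed directory.
func runOnTmpfs(ctx context.Context, container *dockerutil.Container) (string, error) {
	return container.Run(ctx, dockerutil.RunOpts{
		Image: "benchmarks/my-cool-image", // placeholder image from //images/benchmarks.
		Mounts: []mount.Mount{
			{
				// Memory-backed scratch space at /scratch inside the container.
				Type:   mount.TypeTmpfs,
				Target: "/scratch",
			},
		},
	}, "sh", "-c", "cp -r /workload /scratch && /scratch/workload/run.sh") // placeholder command.
}
```
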
### testing.B packages

In general, benchmarks should look like this:

```golang
package mycool_test // Placeholder package name.

import (
	"context"
	"os"
	"testing"

	"gvisor.dev/gvisor/pkg/test/dockerutil"
	"gvisor.dev/gvisor/test/benchmarks/harness"
)

func BenchmarkMyCoolOne(b *testing.B) {
	// Reserve a machine for this benchmark and release it when done.
	machine, err := harness.GetMachine()
	if err != nil {
		b.Fatalf("failed to get machine: %v", err)
	}
	defer machine.CleanUp()

	ctx := context.Background()
	container := machine.GetContainer(ctx, b)
	defer container.CleanUp(ctx)

	b.ResetTimer()

	// Respect b.N.
	for i := 0; i < b.N; i++ {
		out, err := container.Run(ctx, dockerutil.RunOpts{
			Image: "benchmarks/my-cool-image",
			Env:   []string{"MY_VAR=awesome"},
			// other options...see dockerutil
		}, "sh", "-c", "echo $MY_VAR")
		if err != nil {
			b.Fatalf("failed to run container: %v", err)
		}
		b.StopTimer()

		// Do parsing and reporting outside of the timer.
		number := parseMyMetric(out)
		b.ReportMetric(number, "my-cool-custom-metric")

		b.StartTimer()
	}
}

func TestMain(m *testing.M) {
	harness.Init()
	os.Exit(m.Run())
}
```

Some notes on the above:

*   Respect and linearly scale by `b.N` so that users can run a number of times
    (--benchtime=10x) or for a time duration (--benchtime=1m). For many
    benchmarks, this is just the runtime of the container under test. Sometimes
    this is a parameter to the container itself. For example, the httpd
    benchmark (and most client/server benchmarks) uses b.N as a parameter to
    the client container that specifies how many requests to make to the
    server (see the sketch after this list).
*   Use the `b.ReportMetric()` method to report custom metrics.
*   Never turn off the timer entirely (always respect `b.N`), but stop and
    restart it if useful for the benchmark. There isn't a way to turn off the
    default metrics in testing.B (B/op, allocs/op, ns/op).
*   Take a look at dockerutil at //pkg/test/dockerutil to see all methods
    available from containers. The API is based on the "official"
    [docker API for golang](https://pkg.go.dev/mod/github.com/docker/docker).
*   `harness.GetMachine()` marks how many machines this test needs. If you have
    a client and a server and want to mark them as multiple machines, call
    `harness.GetMachine()` twice.

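As a sketch of the client/server pattern, the snippet below reserves two
machines with `harness.GetMachine()` and passes `b.N` to the client as a
request count. The image names, the `my-server`/`my-client` commands and their
flags, and the `parseRequestsPerSecond` helper are placeholders; the `Spawn`
and `IPAddress` calls are assumed to match the existing dockerutil/harness
helpers, and the standard `fmt` package is needed in addition to the imports
above:

```golang
func BenchmarkMyClientServer(b *testing.B) {
	// Two GetMachine() calls mark this benchmark as needing two machines.
	serverMachine, err := harness.GetMachine()
	if err != nil {
		b.Fatalf("failed to get server machine: %v", err)
	}
	defer serverMachine.CleanUp()
	clientMachine, err := harness.GetMachine()
	if err != nil {
		b.Fatalf("failed to get client machine: %v", err)
	}
	defer clientMachine.CleanUp()

	ctx := context.Background()
	server := serverMachine.GetContainer(ctx, b)
	defer server.CleanUp(ctx)
	client := clientMachine.GetContainer(ctx, b)
	defer client.CleanUp(ctx)

	// Start the server once, outside the timed region.
	if err := server.Spawn(ctx, dockerutil.RunOpts{
		Image: "benchmarks/my-cool-server", // placeholder image.
	}, "my-server"); err != nil { // "my-server" is a placeholder server command.
		b.Fatalf("failed to start server: %v", err)
	}
	ip, err := serverMachine.IPAddress()
	if err != nil {
		b.Fatalf("failed to get server IP: %v", err)
	}

	b.ResetTimer()
	// The client makes b.N requests, so total work scales linearly with b.N.
	out, err := client.Run(ctx, dockerutil.RunOpts{
		Image: "benchmarks/my-cool-client", // placeholder image.
	}, "my-client", fmt.Sprintf("--requests=%d", b.N), fmt.Sprintf("--host=%s", ip))
	if err != nil {
		b.Fatalf("failed to run client: %v", err)
	}
	b.StopTimer()

	// Report a custom metric parsed from the client's output.
	b.ReportMetric(parseRequestsPerSecond(out), "requests_per_second")
}
```
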
## Profiling

For profiling, the runtime is required to have the `--profile` flag enabled.
This flag loosens seccomp filters so that the runtime can write profile data to
disk. This configuration is not recommended for production.

To profile, simply run the `benchmark-platforms` command from above; profiles
will be written to /tmp/profile.

Alternatively, run a single benchmark with: `make run-benchmark
RUNTIME=[RUNTIME_UNDER_TEST] BENCHMARKS_TARGETS=path/to/target`

Profiles will be in /tmp/profile. Note: runtimes must have the `--profile` flag
set in /etc/docker/daemon.json (see the example below), and profiling will not
work on runc.
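
For reference, a runtime entry with profiling enabled might look like the
following in `/etc/docker/daemon.json` (the runtime name and binary path are
illustrative):

```json
{
  "runtimes": {
    "runsc-profile": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--profile"]
    }
  }
}
```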