HACKME
======

This package is for reusable tests and benchmarks.
These test and benchmark functions work over the Node and NodeBuilder interfaces,
so they should work to test compatibility and compare performance of various implementations of Node.

This is easier said than done.


Naming conventions
------------------

### name prefix

All reusable test functions start with the name prefix `TestSpec_`.

All reusable benchmarks start with the name prefix `BenchmarkSpec_`.

The "Test" and "Benchmark" prefixes are required by the Go standard
`testing` package.  They take `*testing.T` and `*testing.B` arguments
respectively.  They also take at least one interface argument, which is
how you supply your Node implementation to the test spec.
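
For concreteness, here's a sketch of the shape these reusable functions take.
The names and bodies are placeholders made up for this sketch, and the
`datamodel.NodePrototype` parameter is just one plausible way to hand over a
Node implementation; the point is only the overall shape.

```go
package tests

import (
	"testing"

	"github.com/ipld/go-ipld-prime/datamodel"
)

// Sketch only: placeholder names and empty bodies.  Because these functions
// take an extra argument beyond *testing.T / *testing.B, the standard test
// runner does not invoke them directly; an implementation package wraps them
// (see the example further below).
func TestSpec_SomeApplication_SomeCohort(t *testing.T, np datamodel.NodePrototype) {
	// ... build fixture data with np.NewBuilder() and assert on the resulting Node ...
}

func BenchmarkSpec_SomeApplication_SomeCohort(b *testing.B, np datamodel.NodePrototype) {
	// ... exercise the behavior under test b.N times against nodes built from np ...
}
```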

The word "Spec" reflects the fact that these are reusable, standardized tests.

We recommend you copy-paste these function names outright into the package of your Node implementation.
It's not necessary, but it's nice for consistency.
(In the future, there may be tooling to help make automated comparisons
of different Node implementations' relative performance; this would
necessarily rely on consistent names across packages.)
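
For example, a wrapper in your implementation's test file might look like the
sketch below.  The `example.com/myimpl` import and its `Prototype` value are
made-up stand-ins for however your package exposes its implementation, and the
spec function's second parameter is assumed to be the prototype-style
interface argument described above.

```go
package myimpl_test

import (
	"testing"

	"github.com/ipld/go-ipld-prime/node/tests"

	"example.com/myimpl" // hypothetical package providing your Node implementation
)

// Same name as the reusable spec function, so bulk benchmark output stays
// comparable across implementations.
func BenchmarkSpec_Walk_MapNStrMap3StrInt(b *testing.B) {
	tests.BenchmarkSpec_Walk_MapNStrMap3StrInt(b, myimpl.Prototype)
}
```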

If your Node implementation package has *more* tests and benchmarks that
*are not* from this reusable set, that's great -- but don't use the word "Spec"
as a segment of their names; keeping "Spec" reserved for the shared set makes
processing bulk output easier.

### full pattern

The full pattern is:

`BenchmarkSpec_{Application}_{FixtureCohort}/codec={codec}/n={size}`

- `{Application}` names the feature or big-picture behavior being tested.
  Examples include "Marshal", "Unmarshal", "Walk", etc.
- `{FixtureCohort}` means... well, see the names in the 'corpus' subpackage;
  it should be literally one of those strings.
- `n={size}` will be present for variable-scale benchmarks.
  You'll have to consider the Application and FixtureCohort to understand
  which part of the data is being varied in size, though.
- `codec={codec}` is an example of extra info that appears only for some applications.
  It might include "json" and "cbor" for "Marshal" and "Unmarshal",
  but will not appear at all for other applications like "Walk".

The parts after the first slash are handled internally.
For those, you call the `BenchmarkSpec_*` function (the part of the name before the first slash),
and that function will call `b.Run` to create sub-benchmarks for all the variations.
For example, when you call `BenchmarkSpec_Walk_MapNStrMap3StrInt`, that one call
will produce a suite of benchmarks for various sizes, each of which will be denoted
in the output by `BenchmarkSpec_Walk_MapNStrMap3StrInt/n=1`, then `.../n=2`, etc.
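
As an illustration of where those sub-benchmark names come from, here's a
rough sketch of the fan-out; the particular sizes and the loop body are
invented for the example, and only the `b.Run` naming pattern is the point.

```go
package tests

import (
	"fmt"
	"testing"

	"github.com/ipld/go-ipld-prime/datamodel"
)

// Sketch of the fan-out pattern, not the actual implementation: each size
// becomes a sub-benchmark, and the testing package joins the names with a
// slash, e.g. "BenchmarkSpec_Walk_MapNStrMap3StrInt/n=4".
func BenchmarkSpec_Walk_MapNStrMap3StrInt(b *testing.B, np datamodel.NodePrototype) {
	for _, n := range []int{1, 2, 4, 8, 16} {
		b.Run(fmt.Sprintf("n=%d", n), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				// ... build (or reuse) the size-n fixture from the corpus and walk it using np ...
			}
		})
	}
}
```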

### variable scale benchmarks

Some corpuses have fixed sizes.  Some are variable.

With fixed-size corpuses, you'll see an integer in the "FixtureCohort" name.
For variable-size corpuses, you'll see the letter "N" in place of an integer;
for example, the "N" in `MapNStrMap3StrInt` (seen above) marks the part that varies in size.

See the docs in the 'corpus' subpackage for more discussion of this.