# Operator Scorecard

The Operator Scorecard is a testing utility included in the `operator-sdk` binary that guides users towards operator best practices
by checking the correctness of their operators and CSVs. While the Scorecard is in an early
stage of development, it will gain more functionality and stabilize over time.

## How It Works

The scorecard works by creating all resources required by CRs and the operator. For the operator
deployment, it also adds another container to the operator's pod that is used to record calls to the API server,
which are analyzed by the scorecard for various tests. The scorecard will also analyze the CR object itself,
modifying spec fields and monitoring how the operator responds.

## Requirements

- An operator made using the `operator-sdk` or an operator that uses a config getter that supports reading from the `KUBECONFIG` environment variable (such as the `clientcmd` or `controller-runtime` config getters). This is required for the scorecard proxy to work correctly.
- Resource manifests for installing/configuring the operator and custom resources (see the [Writing E2E Tests][writing-tests] doc for more information on the global and namespaced manifests).
- (OLM tests only) A CSV file for your operator.

## Running the Tests

The scorecard currently uses a large number of flags to configure the scorecard tests. You can see
these flags in the `scorecard` subcommand help text, or in the [SDK CLI Reference][cli-reference] doc. Here, we will highlight a few important
flags:

- `--cr-manifest` - this is a required flag for the scorecard. This flag must point to the location of the manifest for the custom resource you are currently testing.
- `--csv-path` - this flag is required if the OLM tests are enabled (the tests are enabled by default). This flag must point to the location of the operator's CSV file.
- `--namespaced-manifest` - if set, this flag must point to a manifest file with all resources that run within a namespace. By default, the scorecard will combine `service_account.yaml`, `role.yaml`, `role_binding.yaml`, and `operator.yaml` from the `deploy` directory into a temporary manifest to use as the namespaced manifest.
- `--global-manifest` - if set, this flag must point to a manifest with all required resources that run globally (not namespaced). By default, the scorecard will combine all CRDs in the `deploy/crds` directory into a temporary manifest to use as the global manifest.
- `--namespace` - if set, the namespace in which to run the scorecard tests. If it is not set, the scorecard will use the default namespace of the current context set in the kubeconfig file.

To run the tests, run the `scorecard` subcommand from your project root with the flags you want to
use. For example:

```console
$ operator-sdk scorecard --cr-manifest deploy/crds/app_operator_cr.yaml --csv-path deploy/app_operator-0.0.2.yaml
```
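
If you maintain your own combined manifests or want to run against a specific namespace, the defaults described above can be overridden on the command line. A sketch of such an invocation (the manifest paths and namespace name here are only illustrative):

```console
$ operator-sdk scorecard \
    --cr-manifest deploy/crds/app_operator_cr.yaml \
    --csv-path deploy/app_operator-0.0.2.yaml \
    --namespaced-manifest deploy/namespaced-manifest.yaml \
    --global-manifest deploy/global-manifest.yaml \
    --namespace app-operator-test \
    --init-timeout 60
```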

## Config File

The scorecard supports the use of a config file instead of or in addition to flags for configuration. By default, the scorecard will look
for a file called `.osdk-scorecard` with either a `.yaml`, `.json`, or `.toml` file extension. You can also
specify a different config file with the `--config` flag. The configuration options in the config file match the flags.
For instance, for the flags `--cr-manifest "deploy/crds/cache_v1alpha1_memcached_cr.yaml" --init-timeout 60 --csv-path "deploy/memcachedoperator.0.0.2.csv.yaml"`, the corresponding YAML config file would contain:

```yaml
cr-manifest: "deploy/crds/cache_v1alpha1_memcached_cr.yaml"
init-timeout: 60
csv-path: "deploy/memcachedoperator.0.0.2.csv.yaml"
```
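
Since the scorecard also accepts `.json` and `.toml` config files, the same settings could equally be written as, for example, a `.osdk-scorecard.toml` file with the flag names as keys:

```toml
cr-manifest = "deploy/crds/cache_v1alpha1_memcached_cr.yaml"
init-timeout = 60
csv-path = "deploy/memcachedoperator.0.0.2.csv.yaml"
```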

The hierarchy of configuration sources from highest priority to lowest is: flags -> config file -> defaults.

The config file support is provided by the `viper` package. For more info on how viper
configuration works, see [`viper`'s README][viper].
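
With the required options (such as `cr-manifest`) set in a config file, a run can be as short as the following; the file path here is illustrative:

```console
$ operator-sdk scorecard --config scorecard-config.yaml
```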

## What Each Test Does

There are 8 tests the scorecard can run:

### Basic Operator

#### Spec Block Exists

This test checks the Custom Resource that is created in the cluster to make sure that it has a spec block. This test
has a maximum score of 1.

#### Status Block Exists

This test checks the Custom Resource that is created in the cluster to make sure that it has a status block. This
test has a maximum score of 1.
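
To make the two checks above concrete: a Custom Resource that passes both would, once the operator has reconciled it, carry non-empty `spec` and `status` blocks. The resource below is a hypothetical Memcached CR used only for illustration; the `status` block is normally populated by the operator rather than by your manifest:

```yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  # fields your operator consumes
  size: 3
status:
  # fields your operator reports back
  nodes:
  - example-memcached-0
  - example-memcached-1
  - example-memcached-2
```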

#### Writing Into CRs Has An Effect

This test reads the scorecard proxy's logs to verify that the operator is making `PUT` and/or `POST` requests to the
API server, indicating that it is modifying resources. This test has a maximum score of 1.

### OLM Integration

#### Provided APIs Have Validation

This test verifies that all the CRDs in the CRDs folder contain a validation section. If the CRD matches the kind and version of the
CR currently being tested, it will also verify that there is a validation for each spec and status field in that CR. This test has a
maximum score of 1.
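
As a point of reference, a validation section in a `v1beta1` CRD lives under `spec.validation.openAPIV3Schema`; a minimal sketch covering the spec and status fields of the hypothetical Memcached CR above might look like:

```yaml
spec:
  # group, names, scope, and version omitted for brevity
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            size:
              type: integer
        status:
          properties:
            nodes:
              type: array
              items:
                type: string
```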

#### Owned CRDs Have Resources Listed

This test makes sure that the CRDs listed in the [`owned` CRDs section][owned-crds] of the CSV have a `resources` subsection. This
test has a maximum score equal to the number of CRDs listed in the CSV.

Note: In the future, this test will verify that all resources modified by the operator are listed in the resources section.
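
In the CSV this corresponds to a `resources` list under each entry in `spec.customresourcedefinitions.owned`; the resource kinds in this sketch are illustrative:

```yaml
spec:
  customresourcedefinitions:
    owned:
    - name: memcacheds.cache.example.com
      version: v1alpha1
      kind: Memcached
      resources:
      # resources the operator creates or modifies for this CRD
      - kind: Deployment
        version: v1
      - kind: Service
        version: v1
```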

#### CRs Have At Least 1 Example

This test checks that the CSV has an [`alm-examples` section][alm-examples] in its metadata's annotations. This test has a maximum score of 1.
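
The `alm-examples` annotation holds a JSON array of example CRs as a string value; a minimal sketch using the hypothetical Memcached CR:

```yaml
metadata:
  annotations:
    alm-examples: |-
      [
        {
          "apiVersion": "cache.example.com/v1alpha1",
          "kind": "Memcached",
          "metadata": { "name": "example-memcached" },
          "spec": { "size": 3 }
        }
      ]
```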

#### Spec Fields With Descriptors

This test verifies that every field in the Custom Resource's spec section has a corresponding descriptor listed in
the CSV. This test has a maximum score equal to the number of fields in the spec section of your Custom Resource.

#### Status Fields With Descriptors

This test verifies that every field in the Custom Resource's status section has a corresponding descriptor listed in
the CSV. This test has a maximum score equal to the number of fields in the status section of your Custom Resource.
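
For both of these checks, the descriptors live alongside the owned CRD entry in the CSV as `specDescriptors` and `statusDescriptors` lists keyed by each field's `path`; a sketch matching the hypothetical Memcached CR above:

```yaml
spec:
  customresourcedefinitions:
    owned:
    - name: memcacheds.cache.example.com
      version: v1alpha1
      kind: Memcached
      specDescriptors:
      - path: size
        displayName: Size
        description: The desired number of memcached pods.
      statusDescriptors:
      - path: nodes
        displayName: Nodes
        description: The names of the memcached pods.
```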

[cli-reference]: ../sdk-cli-reference.md#scorecard
[writing-tests]: ./writing-e2e-tests.md
[owned-crds]: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/building-your-csv.md#owned-crds
[alm-examples]: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/design/building-your-csv.md#crd-templates
[viper]: https://github.com/spf13/viper/blob/master/README.md