
# Plugin System for the Operator Scorecard

Implementation Owner: AlexNPavel

Status: Draft

[Background](#background)

[Goals](#goals)

[Design overview](#design-overview)

## Background

The operator scorecard is intended to allow users to run a generic set of tests on their operators. The scorecard currently has only built-in tests, and it would be beneficial to provide a simple way to add or remove the tests that the scorecard runs. This proposal outlines a plugin system that would allow both us and users to dynamically add new tests without having to compile them into the scorecard/SDK binary.

## Goals

- Implement a configurable, plugin-based scorecard test system

## Design Overview

### Plugin System

In order to increase the flexibility of the user-defined tests and allow users to implement more complex E2E-style tests for the scorecard, the user-defined tests will be implemented via a plugin system. Users would put executable files (either scripts or binaries) in a directory in the project root, for example `<root>/scorecard/bin` (the path can be configured via a flag). The scorecard would run all executable files sequentially, and each plugin is expected to print its result as JSON to stdout. If a plugin has a fatal error or does not return a valid JSON result, the scorecard will record a default failure JSON result stating that the binary/script failed to run, along with whatever the executable printed to stdout.

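As a hedged sketch of the runner side, that flow could look like the following; `runPlugins` and `failureResult` are hypothetical names rather than part of this proposal's API, and the sketch assumes the `ScorecardTest` types defined below:

```go
import (
    "encoding/json"
    "os"
    "os/exec"
    "path/filepath"
)

// runPlugins executes every file in pluginDir sequentially and parses the
// JSON each plugin prints to stdout.
func runPlugins(pluginDir string) ([]ScorecardTest, error) {
    entries, err := os.ReadDir(pluginDir)
    if err != nil {
        return nil, err
    }
    var results []ScorecardTest
    for _, entry := range entries {
        if entry.IsDir() {
            continue
        }
        out, err := exec.Command(filepath.Join(pluginDir, entry.Name())).Output()
        var test ScorecardTest
        if err != nil || json.Unmarshal(out, &test) != nil {
            // Fall back to the default failure result described above.
            test = failureResult(entry.Name(), string(out))
        }
        results = append(results, test)
    }
    return results, nil
}

// failureResult builds a default failure result recording that the plugin
// failed to run and what it printed to stdout (a hypothetical shape).
func failureResult(name, output string) ScorecardTest {
    return ScorecardTest{
        Status: &ScorecardTestResults{
            Log: output,
            Results: []ScorecardResult{{
                Error:      1,
                TotalTests: 1,
                Tests: []*JSONTestResult{{
                    State:       ErrorState,
                    Name:        name,
                    Description: "plugin failed to run or did not print valid JSON",
                }},
            }},
        },
    }
}
```
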
The JSON output will reuse the Kubernetes API machinery for marshalling and unmarshalling. This gives us a standardized `TypeMeta` that will allow us to update the way we define tests and results in the future with proper versioning. Below is an example of what the JSON output of a test would look like:

Go structs:

```go
import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type ScorecardTest struct {
    metav1.TypeMeta `json:",inline"`
    // Spec describes the attributes for the test.
    Spec *ScorecardTestSpec `json:"spec"`

    // Status describes the current state of the test and final results.
    // +optional
    Status *ScorecardTestResults `json:"results,omitempty"`
}

type ScorecardTestSpec struct {
    // TestInfo is currently used for ScorecardTestSpec.
    TestInfo `json:",inline"`
}

type ScorecardTestResults struct {
    // Log contains the scorecard's current log.
    Log string `json:"log"`
    // Results is an array of ScorecardResult for each suite of the current scorecard run.
    Results []ScorecardResult `json:"results"`
}

// ScorecardResult contains the combined results of a suite of tests
type ScorecardResult struct {
    // Error is the number of tests that ended in the Error state
    Error int `json:"error"`
    // Pass is the number of tests that ended in the Pass state
    Pass int `json:"pass"`
    // PartialPass is the number of tests that ended in the PartialPass state
    PartialPass int `json:"partial_pass"`
    // Fail is the number of tests that ended in the Fail state
    Fail int `json:"fail"`
    // TotalTests is the total number of tests run in this suite
    TotalTests int `json:"total_tests"`
    // TotalScore is the total score of this suite as a percentage
    TotalScore int `json:"total_score_percent"`
    // Tests is an array containing a JSON-ified version of the TestResults for the suite
    Tests []*JSONTestResult `json:"tests"`
}

// JSONTestResult is a simplified version of TestResult that only includes the Name and Description of the Test field in TestResult
type JSONTestResult struct {
    // State is the final state of the test
    State State `json:"state"`
    // Name is the name of the test
    Name string `json:"name"`
    // Description describes what the test does
    Description string `json:"description"`
    // EarnedPoints is how many points the test received after running
    EarnedPoints int `json:"earnedPoints"`
    // MaximumPoints is the maximum number of points possible for the test
    MaximumPoints int `json:"maximumPoints"`
    // Suggestions is a list of suggestions for the user to improve their score (if applicable)
    Suggestions []string `json:"suggestions"`
    // Errors is a list of the errors that occurred during the test (this can include both fatal and non-fatal errors)
    Errors []error `json:"errors"`
}

// State is a type used to indicate the result state of a Test.
type State string

const (
    // UnsetState is the default state for a TestResult. It must be updated by UpdateState or by the Test.
    UnsetState State = "unset"
    // PassState occurs when a Test's EarnedPoints == MaximumPoints.
    PassState State = "pass"
    // PartialPassState occurs when a Test's EarnedPoints < MaximumPoints and EarnedPoints > 0.
    PartialPassState State = "partial_pass"
    // FailState occurs when a Test's EarnedPoints == 0.
    FailState State = "fail"
    // ErrorState occurs when a Test encounters a fatal error and the reported points should not be considered.
    ErrorState State = "error"
)
```
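
Because every plugin document carries a `TypeMeta`, the scorecard can gate decoding on the reported version. The following is a minimal sketch of that check; the `osdk.openshift.io/v1alpha1` group/version string and the `decodeTest` name are assumptions, not something this proposal defines:

```go
import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// decodeTest sketches version-aware decoding of plugin output: it reads only
// the TypeMeta first, then decodes the full document for known versions.
func decodeTest(raw []byte) (*ScorecardTest, error) {
    var meta metav1.TypeMeta
    if err := json.Unmarshal(raw, &meta); err != nil {
        return nil, fmt.Errorf("invalid plugin output: %v", err)
    }
    switch meta.APIVersion {
    case "osdk.openshift.io/v1alpha1": // hypothetical identifier for v1alpha1
        test := &ScorecardTest{}
        if err := json.Unmarshal(raw, test); err != nil {
            return nil, err
        }
        return test, nil
    default:
        return nil, fmt.Errorf("unsupported apiVersion %q", meta.APIVersion)
    }
}
```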

JSON output for `ScorecardResult` object (for the initial `v1alpha1` of the scorecard test objects):

```json
{
    "error": 0,
    "pass": 1,
    "partial_pass": 1,
    "fail": 0,
    "total_tests": 2,
    "total_score_percent": 71,
    "tests": [
        {
            "state": "partial_pass",
            "name": "Operator Actions Reflected In Status",
            "description": "The operator updates the Custom Resource's status when the application state is updated",
            "earnedPoints": 2,
            "maximumPoints": 3,
            "suggestions": [
                "Operator should update status when scaling cluster down"
            ],
            "errors": []
        },
        {
            "state": "pass",
            "name": "Verify health of cluster",
            "description": "The cluster created by the operator is working properly",
            "earnedPoints": 1,
            "maximumPoints": 1,
            "suggestions": [],
            "errors": []
        }
    ]
}
```
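
A plugin may be written in any language as long as it prints a document like the one above to stdout. As an illustrative sketch only (the `Kind` and `apiVersion` values are assumptions for `v1alpha1`), a minimal Go plugin using the structs above could look like this:

```go
// A hypothetical minimal plugin: it prints a single-suite ScorecardTest
// result to stdout. Kind and apiVersion are assumed values for v1alpha1.
package main

import (
    "encoding/json"
    "os"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    test := ScorecardTest{
        TypeMeta: metav1.TypeMeta{Kind: "ScorecardTest", APIVersion: "osdk.openshift.io/v1alpha1"},
        Status: &ScorecardTestResults{
            Results: []ScorecardResult{{
                Pass:       1,
                TotalTests: 1,
                TotalScore: 100,
                Tests: []*JSONTestResult{{
                    State:         PassState,
                    Name:          "Verify health of cluster",
                    Description:   "The cluster created by the operator is working properly",
                    EarnedPoints:  1,
                    MaximumPoints: 1,
                }},
            }},
        },
    }
    // Emit the result as JSON on stdout for the scorecard to consume.
    json.NewEncoder(os.Stdout).Encode(test)
}
```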

This JSON output would make it simple for others to create scorecard plugins while keeping it easy for the scorecard to parse and integrate plugin results with the other tests. Each plugin would be treated as a separate suite, and the full result of the scorecard would be a list of `ScorecardResult`s, as sketched below.
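
A minimal sketch of that aggregation, using the types above (the `aggregate` name is hypothetical):

```go
// aggregate collects each plugin's suites into the scorecard's final result
// list; every ScorecardTest contributes the suites held in its Status.
func aggregate(tests []ScorecardTest) []ScorecardResult {
    var suites []ScorecardResult
    for _, t := range tests {
        if t.Status != nil {
            suites = append(suites, t.Status.Results...)
        }
    }
    return suites
}
```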