# Testing Cluster API

This document presents testing guidelines and conventions for Cluster API.

IMPORTANT: improving and maintaining this document is a collaborative effort, so we encourage constructive
feedback and suggestions.

## Unit tests

Unit tests focus on individual pieces of logic - a single func - and don't require any additional services to execute. They
should be fast and are great for getting a first signal on the current implementation, but they carry the risk of
allowing integration bugs to slip through.

In Cluster API most of the unit tests are developed using [go test], [gomega] and the [fakeclient]; however, using
[fakeclient] is not suitable for all the use cases due to some limitations in how it is implemented. In some cases
contributors will be required to use [envtest]. See the [quick reference](#quick-reference) below for more details.

### Mocking external APIs
In some cases when writing tests it is required to mock external APIs, e.g. the etcd client API or the AWS SDK API.

This problem is usually well scoped in core Cluster API, and in most cases it is already solved by using fake
implementations of the target API to be injected during tests.

Instead, mocking is much more relevant for infrastructure providers; in order to address the issue
some providers can use simulators reproducing the behaviour of a real infrastructure provider (e.g. CAPV);
if this is not possible, a viable solution is to use mocks (e.g. CAPA).
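
The fake-implementation approach boils down to depending on a narrow interface instead of the concrete external client, and injecting a test double behind that interface. The following is a minimal sketch of the idea; all names here are hypothetical, not an actual Cluster API interface:

```golang
import (
	"context"
	"testing"

	. "github.com/onsi/gomega"
)

// etcdClient is a hypothetical, narrow interface over the parts of the
// external API that the production code actually uses.
type etcdClient interface {
	MemberList(ctx context.Context) ([]string, error)
}

// fakeEtcdClient is the fake implementation injected during tests.
type fakeEtcdClient struct {
	members []string
	err     error
}

func (f *fakeEtcdClient) MemberList(_ context.Context) ([]string, error) {
	return f.members, f.err
}

// listMembers stands in for production code that depends on the interface
// rather than on the concrete external client.
func listMembers(ctx context.Context, c etcdClient) ([]string, error) {
	return c.MemberList(ctx)
}

func TestListMembers(t *testing.T) {
	g := NewWithT(t)

	// Inject the fake implementation instead of a real etcd client.
	members, err := listMembers(context.Background(), &fakeEtcdClient{members: []string{"m1", "m2"}})
	g.Expect(err).ToNot(HaveOccurred())
	g.Expect(members).To(HaveLen(2))
}
```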

### Generic providers
When writing tests, core Cluster API contributors should ensure that the code works with any provider, and thus it is required
to not use any specific provider implementation. Instead, the so-called generic providers, e.g. "GenericInfrastructureCluster",
should be used because they implement the plain Cluster API contract. This prevents tests from relying on assumptions that
may not hold true in all cases.
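
For example, a test can build a generic infrastructure cluster object with `unstructured.Unstructured` instead of importing a concrete provider type. This is a minimal sketch; the helper name and the GroupVersionKind used here are illustrative, as the authoritative values are defined by the test CRDs:

```golang
import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// newGenericInfraCluster returns a generic infrastructure cluster object,
// avoiding any dependency on a concrete provider implementation.
func newGenericInfraCluster(namespace, name string) *unstructured.Unstructured {
	obj := &unstructured.Unstructured{}
	// Illustrative GVK; use the one defined by the test CRDs in your environment.
	obj.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "infrastructure.cluster.x-k8s.io",
		Version: "v1beta1",
		Kind:    "GenericInfrastructureCluster",
	})
	obj.SetNamespace(namespace)
	obj.SetName(name)
	return obj
}
```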

Please note that in the long term we would like to improve the implementation of generic providers, centralizing
the existing set of utilities scattered across the codebase; while the details of this work are being defined, do not
hesitate to reach out to reviewers and maintainers for guidance.

## Integration tests

Integration tests are focused on testing the behavior of an entire controller or the interactions between two or
more Cluster API controllers.

In Cluster API, integration tests are based on [envtest] and one or more controllers configured to run against
the test cluster.

With this approach it is possible to interact with Cluster API almost like in a real environment, by creating/updating
Kubernetes objects and waiting for the controllers to take action. See the [quick reference](#quick-reference) below for more details.

The considerations about [mocking external APIs](#mocking-external-apis) and the usage of [generic providers](#generic-providers) apply to integration tests as well.

## Fuzzing tests

Fuzzing tests automatically inject randomly generated inputs, often invalid or with unexpected values, into functions to discover vulnerabilities.

Two different types of fuzzing are currently being used on the Cluster API repository:

### Fuzz testing for API conversion

Cluster API uses Kubernetes' conversion-gen to automate the generation of functions to convert our API objects between versions. These conversion functions are tested using the [FuzzTestFunc util in our conversion utils package](https://github.com/kubernetes-sigs/cluster-api/blob/1ec0cd6174f1b860dc466db587241ea7edea0b9f/util/conversion/conversion.go#L194).
For more information about these conversions see the API conversion code walkthrough in our [video walkthrough series](./guide.md#videos-explaining-capi-architecture-and-code-walkthroughs).
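
A conversion test typically wires an older API type into this util as in the following sketch; the `Hub`/`Spoke` field names are taken from the `FuzzTestFuncInput` struct linked above, but check the package for the full signature before relying on it:

```golang
import (
	"testing"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	utilconversion "sigs.k8s.io/cluster-api/util/conversion"
)

func TestFuzzyConversion(t *testing.T) {
	// Hub is the current API version; Spoke is the older version defined in
	// the package under test (MachineSet here refers to that older type).
	// The util round-trips randomly generated objects between the two
	// versions and asserts that no data is lost.
	t.Run("for MachineSet", utilconversion.FuzzTestFunc(utilconversion.FuzzTestFuncInput{
		Hub:   &clusterv1.MachineSet{},
		Spoke: &MachineSet{},
	}))
}
```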

### OSS-Fuzz continuous fuzzing

Parts of the CAPI code base are continuously fuzzed through the [OSS-Fuzz project](https://github.com/google/oss-fuzz). Issues found in these fuzzing tests are reported to Cluster API maintainers and surfaced in issues on the repo for resolution.
To read more about the integration of Cluster API with OSS-Fuzz, see [the 2022 Cluster API Fuzzing Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-assessments/cluster-api/capi_2022_fuzzing.pdf).

## Test maintainability

Tests are an integral part of the project codebase.

Cluster API maintainers and all the contributors should be committed to helping ensure that tests are easily maintainable,
easily readable, well documented and consistent across the code base.

To continue improving our practice around this ambitious goal, we are starting to introduce a shared set of:

- Builders (`sigs.k8s.io/cluster-api/internal/test/builder`), which allow creating test objects in a simple and consistent way.
- Matchers (`sigs.k8s.io/controller-runtime/pkg/envtest/komega`), which improve how we write test assertions (see the sketch below).
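
As a minimal sketch of the matchers in action, assuming `komega.SetClient` has been called with the test environment's client during suite setup and that the `Cluster` object has already been created in the test cluster:

```golang
import (
	"testing"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/envtest/komega"
)

func TestClusterInfrastructureReady(t *testing.T) {
	g := NewWithT(t)

	// The object is assumed to exist in the test cluster already;
	// komega re-fetches it on every poll.
	cluster := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"},
	}

	// komega.Object returns a function that gets the latest version of the
	// object, so Eventually polls the live state of the test cluster.
	g.Eventually(komega.Object(cluster)).Should(
		HaveField("Status.InfrastructureReady", BeTrue()))
}
```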

Each contribution in growing this set of utilities or their adoption across the codebase is more than welcome!

Another consideration that can help in improving test maintainability is the idea of testing "by layers"; this idea applies
whenever we are testing "higher-level" functions that internally use one or more "lower-level" functions;
in order to avoid writing/maintaining redundant tests, whenever possible contributors should take care of testing
_only_ the logic that is implemented in the "higher-level" function, delegating the testing of the functions called internally
to a "lower-level" set of unit tests.
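
As a hypothetical sketch of the idea (all names are invented for illustration): if a "higher-level" function internally calls a "lower-level" function that already has dedicated unit tests, the higher-level test asserts only the logic added on top:

```golang
import (
	"testing"

	. "github.com/onsi/gomega"
)

// defaultLabels is the "lower-level" function, covered by its own unit tests.
func defaultLabels(clusterName string) map[string]string {
	return map[string]string{"cluster.x-k8s.io/cluster-name": clusterName}
}

type machineSetSpec struct {
	Labels   map[string]string
	Replicas int32
}

// computeDesiredMachineSet is the "higher-level" function; its tests should
// cover only the logic implemented here, not re-test defaultLabels.
func computeDesiredMachineSet(clusterName string, replicas int32) *machineSetSpec {
	return &machineSetSpec{
		Labels:   defaultLabels(clusterName),
		Replicas: replicas,
	}
}

func TestComputeDesiredMachineSet(t *testing.T) {
	g := NewWithT(t)

	ms := computeDesiredMachineSet("test", 3)
	// Assert only the higher-level logic; label defaulting is delegated to
	// the unit tests of defaultLabels.
	g.Expect(ms.Replicas).To(Equal(int32(3)))
}
```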

A similar concern could also be raised whenever there is overlap between unit tests and integration tests,
but in this case the distinctive value of the two layers of testing is determined by how tests are designed:

- unit tests are focused on code structure: func(input) = output, including edge case values, asserting error conditions, etc.
- integration tests are user story driven: as a user, I want to express some desired state using API objects, wait for the
  reconcilers to take action, and check the new system state.

## Running unit and integration tests

Run `make test` to execute all unit and integration tests.

Integration tests use the [envtest](https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/envtest/doc.go) test framework. The tests need to know the location of the executables called by the framework. The `make test` target installs these executables, and passes this location to the tests as an environment variable.

<aside class="note">

<h1>Tips</h1>

When testing individual packages, you can speed up the test execution by running the tests with a local kind cluster.
This avoids spinning up a testenv with each test execution. It also makes it easier to debug, because it's straightforward
to access a kind cluster with kubectl during test execution. For further instructions, run: `./hack/setup-envtest-with-kind.sh`.

When running individual tests, a testenv might be started if this is required by the `suite_test.go` file.
However, if the tests you are running don't require testenv (i.e. they are only using fake client), you can skip the testenv
creation by setting the environment variable `CAPI_DISABLE_TEST_ENV` (to any non-empty value).

To debug testenv unit tests it is possible to use:
* `CAPI_TEST_ENV_KUBECONFIG` to write out a kubeconfig for the testenv to a file location.
* `CAPI_TEST_ENV_SKIP_STOP` to skip stopping the testenv after test execution.

</aside>

### Test execution via IDE

Your IDE needs to know the location of the executables called by the framework, so that it can pass the location to the tests as an environment variable.

<aside class="note warning">

<h1>Warning</h1>

If you see this error when running a test in your IDE, the test uses the envtest framework, and probably does not know the location of the envtest executables.

```console
E0210 16:11:04.222471  132945 server.go:329] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=0
```

</aside>

#### VSCode

The `dev/vscode-example-configuration` directory in the repository contains an example configuration that integrates VSCode with the envtest framework.

To use the example configuration, copy the files to the `.vscode` directory in the repository, and restart VSCode.

The configuration works as follows: Whenever the project is opened in VSCode, a VSCode task runs that installs the executables, and writes the location to a file. A setting tells [vscode-go] to initialize the environment from this file.

## End-to-end tests

The end-to-end tests are meant to verify the proper functioning of a Cluster API management cluster
in an environment that resembles a real production environment.

The following guidelines should be followed when developing E2E tests:

- Use the [Cluster API test framework].
- Define test specs reflecting real user workflows, e.g. [Cluster API quick start].
- Unless you are testing provider specific features, ensure your test can run with
  different infrastructure providers (see [Writing Portable Tests](./e2e.md#writing-portable-e2e-tests)).

See [e2e development] for more information on developing e2e tests for CAPI and external providers.

## Running the end-to-end tests locally

Usually the e2e tests are executed by Prow, either pre-submit (on PRs) or periodically on certain branches
(e.g. the default branch). Those jobs are defined in the kubernetes/test-infra repository in [config/jobs/kubernetes-sigs/cluster-api](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes-sigs/cluster-api).
For development and debugging, those tests can also be executed locally.

### Prerequisites

`make docker-build-e2e` will build the images for all providers that will be needed for the e2e tests.

### Test execution via ci-e2e.sh

To run a test locally via the command line, you should look at the Prow Job configuration for the test you want to run and then execute the same commands locally.
For example, to run [pull-cluster-api-e2e-main](https://github.com/kubernetes/test-infra/blob/49ab08a6a2a17377d52a11212e6f1104c3e87bfc/config/jobs/kubernetes-sigs/cluster-api/cluster-api-presubmits-main.yaml#L113-L140)
just execute:

```bash
GINKGO_FOCUS="\[PR-Blocking\]" ./scripts/ci-e2e.sh
```

### Test execution via make test-e2e

`make test-e2e` will run e2e tests by using whatever provider images already exist on disk.
After running `make docker-build-e2e` at least once, `make test-e2e` can be used for a faster test run, if there are no
provider code changes. If the provider code is changed, run `make docker-build-e2e` to update the images.

### Test execution via IDE

It's also possible to run the tests via an IDE, which makes it easier to debug the test code by stepping through the code.

First, we have to make sure all prerequisites are fulfilled, i.e. all required images have been built (this also includes
kind images). This can be done by executing the `./scripts/ci-e2e.sh` script.

```bash
# Notes:
# * You can cancel the script as soon as it starts the actual test execution via `make test-e2e`.
# * If you want to run other tests (e.g. upgrade tests), make sure all required env variables are set (see the Prow Job config).
GINKGO_FOCUS="\[PR-Blocking\]" ./scripts/ci-e2e.sh
```

Now, the tests can be run in an IDE. The following describes how this can be done in IntelliJ IDEA and VS Code. It should work
roughly the same way in all other IDEs. We assume the `cluster-api` repository has been checked
out into `/home/user/code/src/sigs.k8s.io/cluster-api`.

#### IntelliJ

Create a new run configuration and fill in:
* Test framework: `gotest`
* Test kind: `Package`
* Package path: `sigs.k8s.io/cluster-api/test/e2e`
* Pattern: `^\QTestE2E\E$`
* Working directory: `/home/user/code/src/sigs.k8s.io/cluster-api/test/e2e`
* Environment: `ARTIFACTS=/home/user/code/src/sigs.k8s.io/cluster-api/_artifacts`
* Program arguments: `-e2e.config=/home/user/code/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml -ginkgo.focus="\[PR-Blocking\]"`

#### VS Code

Add the following `launch.json` file to the `.vscode` folder in your repo:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run e2e test",
            "type": "go",
            "request": "launch",
            "mode": "test",
            "program": "${workspaceRoot}/test/e2e/e2e_suite_test.go",
            "env": {
                "ARTIFACTS":"${workspaceRoot}/_artifacts"
            },
            "args": [
                "-e2e.config=${workspaceRoot}/test/e2e/config/docker.yaml",
                "-ginkgo.focus=\\[PR-Blocking\\]",
                "-ginkgo.v=true"
            ],
            "trace": "verbose",
            "buildFlags": "-tags 'e2e'",
            "showGlobalVariables": true
        }
    ]
}
```

Execute the run configuration with `Debug`.

<aside class="note">

<h1>Tips</h1>

The e2e tests create a new management cluster with kind on each run. To avoid this and speed up the test execution, the tests can
also be run against a management cluster created by [tilt](./tilt.md):
```bash
# Create a kind cluster
./hack/kind-install-for-capd.sh
# Set up the management cluster via tilt
tilt up
```
Now you can start the e2e test via IDE as described above, but with the additional `-e2e.use-existing-cluster=true` flag.

**Note**: This can also be used to debug controllers during e2e tests as described in [Developing Cluster API with Tilt](./tilt.md#wiring-up-debuggers).

The e2e tests also create a local clusterctl repository. After it has been created on a first test execution, this step can also be
skipped by setting `-e2e.clusterctl-config=<ARTIFACTS>/repository/clusterctl-config.yaml`. This also works with a clusterctl repository created
via [Create the local repository](../clusterctl/developers.md#create-the-local-repository).

**Feature gates**: E2E tests often use features which need to be enabled first. Make sure to enable the feature gates in the tilt settings file:
```yaml
kustomize_substitutions:
  CLUSTER_TOPOLOGY: "true"
  EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION: "true"
  EXP_RUNTIME_SDK: "true"
  EXP_MACHINE_SET_PREFLIGHT_CHECKS: "true"
```

</aside>

### Running specific tests

To run a subset of tests, either one or both of the `GINKGO_FOCUS` and `GINKGO_SKIP` env variables can be set.
Each of these can be used to match tests, for example:
- `[PR-Blocking]` => Sanity tests run before each PR merge
- `[K8s-Upgrade]` => Tests which verify k8s component version upgrades on workload clusters
- `[Conformance]` => Tests which run the k8s conformance suite on workload clusters
- `[ClusterClass]` => Tests which use a ClusterClass to create a workload cluster
- `When testing KCP.*` => Tests which start with `When testing KCP`

For example:
- `GINKGO_FOCUS="\\[PR-Blocking\\]" make test-e2e` can be used to run the sanity E2E tests
- `GINKGO_SKIP="\\[K8s-Upgrade\\]" make test-e2e` can be used to skip the upgrade E2E tests

### Further customization

The following env variables can be set to customize the test execution:

- `GINKGO_FOCUS` to set the ginkgo focus (default empty - all tests)
- `GINKGO_SKIP` to set the ginkgo skip (default empty - to allow running all tests)
- `GINKGO_NODES` to set the number of ginkgo parallel nodes (defaults to 1)
- `E2E_CONF_FILE` to set the e2e test config file (defaults to `${REPO_ROOT}/test/e2e/config/docker.yaml`)
- `ARTIFACTS` to set the folder where test artifacts will be stored (defaults to `${REPO_ROOT}/_artifacts`)
- `SKIP_RESOURCE_CLEANUP` to skip resource cleanup at the end of the test (useful for problem investigation) (defaults to false)
- `USE_EXISTING_CLUSTER` to use an existing management cluster instead of creating a new one for each test run (defaults to false)
- `GINKGO_NOCOLOR` to turn off the ginkgo colored output (defaults to false)

Furthermore, it's possible to overwrite all env variables specified in `variables` in `test/e2e/config/docker.yaml`.

## Troubleshooting end-to-end tests

### Analyzing logs

Logs of e2e tests can be analyzed with our development environment by pushing logs to Loki and then
analyzing them via Grafana.

1. Start the development environment as described in [Developing Cluster API with Tilt](./tilt.md).
    * Make sure to deploy Loki and Grafana via `deploy_observability`.
    * If you only want to see imported logs, don't deploy promtail (via `deploy_observability`).
    * If you want to drop all logs from Loki, just delete the Loki Pod in the `observability` namespace.
2. You can then import logs via the `Import Logs` button on the top right of the [Loki resource page](http://localhost:10350/r/loki/overview).
   Just click on the downwards arrow, enter either a ProwJob URL, a GCS path or a local folder and click on `Import Logs`.
   This will retrieve the logs and push them to Loki. Alternatively, the logs can be imported via:
   ```bash
   go run ./hack/tools/internal/log-push --log-path=<log-path>
   ```
   Examples for log paths:
    * ProwJob URL: `https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api/6189/pull-cluster-api-e2e-main/1496954690603061248`
    * GCS path: `gs://kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api/6189/pull-cluster-api-e2e-main/1496954690603061248`
    * Local folder: `./_artifacts`
3. Now the logs are available:
    * via [Grafana](http://localhost:3001/explore)
    * via [Loki logcli](https://grafana.com/docs/loki/latest/getting-started/logcli/)
      ```bash
      logcli query '{app="capi-controller-manager"}' --timezone=UTC --from="2022-02-22T10:00:00Z"
      ```

<aside class="note">

<h1>Caveats</h1>

* Make sure you query the correct time range via Grafana or `logcli`.
* The logs are currently uploaded using the current time as the timestamp, because otherwise it would
  take a few minutes until the logs show up in Loki. The original timestamp is preserved as `original_ts`.

</aside>

As an alternative to Loki, JSON logs can be visualized with a human readable timestamp using `jq`:

1. Browse the ProwJob artifacts and download the wanted logfile.
2. Use `jq` to query the logs:

   ```bash
   cat manager.log \
     | grep -v "TLS handshake error" \
     | jq -r '(.ts / 1000 | todateiso8601) + " " + (. | tostring)'
   ```

   The `(. | tostring)` part could also be customized to only output parts of the JSON logline.
   E.g.:

   * `(.err)` to only output the error message part.
   * `(.msg)` to only output the message part.
   * `(.controller + " " + .msg)` to output the controller name and message part.

### Known Issues

#### Building images on SELinux

Cluster API repositories use [Moby BuildKit](https://github.com/moby/buildkit) to speed up image builds.
[BuildKit does not currently work on SELinux](https://github.com/moby/buildkit/issues/2295).

Use `sudo setenforce 0` to make SELinux permissive when running e2e tests.

## Quick reference

### `envtest`

[envtest] is a testing environment that is provided by the [controller-runtime] project. This environment spins up a
local instance of etcd and the kube-apiserver. This allows tests to be executed in an environment very similar to a
real environment.

Additionally, in Cluster API there is a set of utilities under [internal/envtest] that help developers set up
an [envtest] instance ready for Cluster API testing, and more specifically:

- With the required CRDs already pre-configured.
- With all the Cluster API webhooks pre-configured, so there are enforced guarantees about the semantic accuracy
  of the test objects you are going to create.

This is an example of how to create an instance of [envtest] that can be shared across all the tests in a package;
by convention, this code should be in a file named `suite_test.go`:

```golang
var (
	env *envtest.Environment
	ctx = ctrl.SetupSignalHandler()
)

func TestMain(m *testing.M) {
	// Setup envtest
	...

	// Run tests
	os.Exit(envtest.Run(ctx, envtest.RunInput{
		M:                m,
		SetupEnv:         func(e *envtest.Environment) { env = e },
		SetupIndexes:     setupIndexes,
		SetupReconcilers: setupReconcilers,
	}))
}
```

Most notably, [envtest] provides not only a real API server to use during testing, but it offers the opportunity
to configure one or more controllers to run against the test cluster, as well as creating informer indexes.

```golang
func TestMain(m *testing.M) {
	// Setup envtest
	setupReconcilers := func(ctx context.Context, mgr ctrl.Manager) {
		if err := (&MyReconciler{
			Client: mgr.GetClient(),
			Log:    log.NullLogger{},
		}).SetupWithManager(mgr, controller.Options{MaxConcurrentReconciles: 1}); err != nil {
			panic(fmt.Sprintf("Failed to start the MyReconciler: %v", err))
		}
	}

	setupIndexes := func(ctx context.Context, mgr ctrl.Manager) {
		if err := index.AddDefaultIndexes(ctx, mgr); err != nil {
			panic(fmt.Sprintf("unable to setup index: %v", err))
		}
	}

	// Run tests
	...
}
```

By combining pre-configured validation and mutating webhooks and reconcilers/indexes, it is possible
to use [envtest] for developing Cluster API integration tests that can mimic how the system
behaves in a real cluster.

Please note that, because [envtest] uses a real kube-apiserver that is shared across many test cases, the developer
should take care to ensure each test runs in isolation from the others, by:

- Creating objects in separated namespaces.
- Avoiding object name conflicts.

Developers should also be aware of the fact that the informers cache used to access the [envtest]
depends on actual etcd watches/API calls for updates, and thus it could happen that after creating
or deleting objects the cache takes a few milliseconds to get updated. This can lead to test flakes,
and thus it is always recommended to use patterns like create and wait or delete and wait; the Cluster API
envtest utilities provide a set of helpers for this purpose.
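
For example, a create-and-wait sequence can read like the following minimal sketch, assuming the `CreateAndWait`/`CleanupAndWait` helpers provided by the [internal/envtest] utilities:

```golang
// Create the object and block until it is visible in the informers cache,
// avoiding flakes caused by the cache lagging behind the API server.
g.Expect(env.CreateAndWait(ctx, obj)).To(Succeed())

// Symmetrically, delete the object and wait until the deletion is observed.
defer func() {
	g.Expect(env.CleanupAndWait(ctx, obj)).To(Succeed())
}()
```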

However, developers should be aware that in some ways the test control plane will behave differently from "real"
clusters, and that might have an impact on how you write tests.

One common example is garbage collection; because there are no controllers monitoring built-in resources, objects
do not get deleted, even if an OwnerReference is set up; as a consequence, tests usually implement code for cleaning up the
created objects.

This is an example of a test implementing those recommendations:

```golang
func TestAFunc(t *testing.T) {
	g := NewWithT(t)
	// Generate a namespace with a random name starting with ns1; such a namespace
	// will host test objects in isolation from other tests.
	ns1, err := env.CreateNamespace(ctx, "ns1")
	g.Expect(err).ToNot(HaveOccurred())
	defer func() {
		// Cleanup the test namespace
		g.Expect(env.DeleteNamespace(ctx, ns1)).To(Succeed())
	}()

	obj := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test",
			Namespace: ns1.Name, // Place test objects in the test namespace
		},
	}

	// Actual test code...
}
```

In case of objects used in many test cases within the same test, it is possible to leverage Kubernetes `GenerateName`;
for objects that are shared across sub-tests, ensure they are scoped within the test namespace and deep copied to avoid
cross-test changes that may occur to the object.

```golang
func TestAFunc(t *testing.T) {
	g := NewWithT(t)
	// Generate a namespace with a random name starting with ns1; such a namespace
	// will host test objects in isolation from other tests.
	ns1, err := env.CreateNamespace(ctx, "ns1")
	g.Expect(err).ToNot(HaveOccurred())
	defer func() {
		// Cleanup the test namespace
		g.Expect(env.DeleteNamespace(ctx, ns1)).To(Succeed())
	}()

	obj := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "test-",  // Instead of assigning a name, use GenerateName
			Namespace:    ns1.Name, // Place test objects in the test namespace
		},
	}

	t.Run("test case 1", func(t *testing.T) {
		g := NewWithT(t)
		// Deep copy the object in each test case, so we prevent side effects in case the object changes.
		// Additionally, thanks to GenerateName, the object gets a new name for each test case.
		obj := obj.DeepCopy()

		// Actual test case code...
	})
	t.Run("test case 2", func(t *testing.T) {
		g := NewWithT(t)
		obj := obj.DeepCopy()

		// Actual test case code...
	})
	// More test cases.
}
```

### `fakeclient`

[fakeclient] is another utility that is provided by the [controller-runtime] project. While this utility is really
fast and simple to use because it does not require spinning up an instance of etcd and kube-apiserver, the [fakeclient]
comes with a set of limitations that could hamper the validity of a test, most notably:

- It does not properly handle a set of fields which are common in the Kubernetes API objects (and Cluster API objects as well)
  like e.g. `creationTimestamp`, `resourceVersion`, `generation`, `uid`.
- [fakeclient] operations do not trigger defaulting or validation webhooks, so there are no enforced guarantees about the semantic accuracy
  of the test objects.
- The [fakeclient] does not use a cache based on informers/API calls/etcd watches, so tests written this way
  can't help in surfacing race conditions related to how those components behave in a real cluster.
- There is no support for cache indexes or operations using cache indexes.

Accordingly, using [fakeclient] is not suitable for all the use cases, so in some cases contributors will be required
to use [envtest] instead. In case of doubts about which one to use when writing tests, don't hesitate to ask for
guidance from project maintainers.
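
When [fakeclient] is a good fit, a test typically builds the client with the objects it needs pre-loaded; a minimal sketch:

```golang
import (
	"context"
	"testing"

	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestGetCluster(t *testing.T) {
	g := NewWithT(t)

	// Register the Cluster API types with the scheme used by the fake client.
	scheme := runtime.NewScheme()
	g.Expect(clusterv1.AddToScheme(scheme)).To(Succeed())

	cluster := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "default"},
	}

	// Build a fake client pre-loaded with the objects the test needs.
	c := fake.NewClientBuilder().
		WithScheme(scheme).
		WithObjects(cluster).
		Build()

	got := &clusterv1.Cluster{}
	g.Expect(c.Get(context.Background(), client.ObjectKey{Namespace: "default", Name: "test"}, got)).To(Succeed())
	g.Expect(got.Name).To(Equal("test"))
}
```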

### `ginkgo`
[Ginkgo] is a Go testing framework built to help you efficiently write expressive and comprehensive tests using Behavior-Driven Development ("BDD") style.

While [Ginkgo] is widely used in the Kubernetes ecosystem, Cluster API maintainers found the lack of integration with the
most used golang IDEs somewhat limiting, mostly because:

- it makes interactive debugging of tests more difficult, since you can't just run the test using the debugger directly
- it makes it more difficult to only run a subset of tests, since you can't just run or debug individual tests using an IDE,
  but you now need to run the tests using `make` or the `ginkgo` command line and override the focus to select individual tests

In Cluster API you MUST use ginkgo only for E2E tests, where it is required to leverage the support for running specs
in parallel; in any case, developers MUST NOT use the table driven extension DSL (`DescribeTable`, `Entry` commands)
which is considered unintuitive.

### `gomega`
[Gomega] is a matcher/assertion library. It is usually paired with the Ginkgo BDD test framework, but it can be used with
other test frameworks too.

More specifically, in order to use Gomega with go test you should wrap the `testing.T` instance with `NewWithT`, e.g.:

```golang
func TestFarmHasCow(t *testing.T) {
	g := NewWithT(t)
	g.Expect(f.HasCow()).To(BeTrue(), "Farm should have cow")
}
```

In Cluster API all the tests MUST use [Gomega] assertions.

### `go test`

[go test] provides support for automated testing of Go packages.

In Cluster API, unit and integration tests MUST use [go test].
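
A typical [go test] unit test in this codebase is table driven and pairs with [Gomega] assertions; a minimal sketch:

```golang
import (
	"testing"

	. "github.com/onsi/gomega"
)

func TestSum(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{name: "positive numbers", a: 1, b: 2, want: 3},
		{name: "zeros", a: 0, b: 0, want: 0},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			g := NewWithT(t)
			g.Expect(tt.a + tt.b).To(Equal(tt.want))
		})
	}
}
```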

[Cluster API quick start]: ../user/quick-start.md
[Cluster API test framework]: https://pkg.go.dev/sigs.k8s.io/cluster-api/test/framework?tab=doc
[e2e development]: ./e2e.md
[Ginkgo]: https://onsi.github.io/ginkgo/
[Gomega]: https://onsi.github.io/gomega/
[go test]: https://golang.org/pkg/testing/
[controller-runtime]: https://github.com/kubernetes-sigs/controller-runtime
[envtest]: https://github.com/kubernetes-sigs/controller-runtime/tree/main/pkg/envtest
[fakeclient]: https://github.com/kubernetes-sigs/controller-runtime/tree/main/pkg/client/fake
[internal/envtest]: https://github.com/kubernetes-sigs/cluster-api/tree/main/internal/test/envtest

[vscode-go]: https://marketplace.visualstudio.com/items?itemName=golang.Go