# Testing Cluster API

This document presents testing guidelines and conventions for Cluster API.

IMPORTANT: improving and maintaining this document is a collaborative effort, so we encourage constructive
feedback and suggestions.

## Unit tests

Unit tests focus on individual pieces of logic - a single func - and don't require any additional services to execute. They should
be fast and great for getting the first signal on the current implementation, but unit tests have the risk of
allowing integration bugs to slip through.

In Cluster API most of the unit tests are developed using [go test], [gomega] and the [fakeclient]; however, the
[fakeclient] is not suitable for all use cases due to some limitations in how it is implemented. In some cases
contributors will be required to use [envtest]. See the [quick reference](#quick-reference) below for more details.

### Mocking external APIs
In some cases when writing tests it is required to mock external APIs, e.g. the etcd client API or the AWS SDK API.

This problem is usually well scoped in core Cluster API, and in most cases it is already solved by using fake
implementations of the target API to be injected during tests.

Mocking is much more relevant for infrastructure providers; in order to address the issue,
some providers use simulators reproducing the behaviour of a real infrastructure provider (e.g. CAPV);
if this is not possible, a viable solution is to use mocks (e.g. CAPA).

### Generic providers
When writing tests, core Cluster API contributors should ensure that the code works with any provider, and thus it is required
to not use any specific provider implementation. Instead, the so-called generic providers, e.g. "GenericInfrastructureCluster",
should be used because they implement the plain Cluster API contract. This prevents tests from relying on assumptions that
may not hold true in all cases.

Please note that in the long term we would like to improve the implementation of generic providers, centralizing
the existing set of utilities scattered across the codebase; while the details of this work are being defined, do not
hesitate to reach out to reviewers and maintainers for guidance.

## Integration tests

Integration tests are focused on testing the behavior of an entire controller or the interactions between two or
more Cluster API controllers.

In Cluster API, integration tests are based on [envtest] and one or more controllers configured to run against
the test cluster.

With this approach it is possible to interact with Cluster API almost like in a real environment, by creating/updating
Kubernetes objects and waiting for the controllers to take action. See the [quick reference](#quick-reference) below for more details.

The considerations about [mocking external APIs](#mocking-external-apis) and the usage of [generic providers](#generic-providers) apply to integration tests as well.

## Fuzzing tests

Fuzzing tests automatically inject randomly generated inputs, often invalid or with unexpected values, into functions to discover vulnerabilities.

Two different types of fuzzing are currently used in the Cluster API repository:

### Fuzz testing for API conversion

Cluster API uses Kubernetes' conversion-gen to automate the generation of functions to convert our API objects between versions. These conversion functions are tested using the [FuzzTestFunc util in our conversion utils package](https://github.com/kubernetes-sigs/cluster-api/blob/1ec0cd6174f1b860dc466db587241ea7edea0b9f/util/conversion/conversion.go#L194).
For more information about these conversions see the API conversion code walkthrough in our [video walkthrough series](./guide.md#videos-explaining-capi-architecture-and-code-walkthroughs).

### OSS-Fuzz continuous fuzzing

Parts of the CAPI code base are continuously fuzzed through the [OSS-Fuzz project](https://github.com/google/oss-fuzz). Issues found in these fuzzing tests are reported to Cluster API maintainers and surfaced in issues on the repo for resolution.
To read more about the integration of Cluster API with OSS-Fuzz, see [the 2022 Cluster API Fuzzing Report](https://github.com/kubernetes/sig-security/blob/main/sig-security-assessments/cluster-api/capi_2022_fuzzing.pdf).

## Test maintainability

Tests are an integral part of the project codebase.

Cluster API maintainers and all contributors should be committed to ensuring that tests are easily maintainable,
easily readable, well documented and consistent across the code base.

To continue improving our practices around this ambitious goal, we are starting to introduce a shared set of:

- Builders (`sigs.k8s.io/cluster-api/internal/test/builder`), which allow creating test objects in a simple and consistent way.
- Matchers (`sigs.k8s.io/controller-runtime/pkg/envtest/komega`), which improve how we write test assertions.

Each contribution that grows this set of utilities or their adoption across the codebase is more than welcome!

Another consideration that can help in improving test maintainability is the idea of testing "by layers"; this idea
applies whenever we are testing "higher-level" functions that internally use one or more "lower-level" functions;
in order to avoid writing/maintaining redundant tests, whenever possible contributors should take care of testing
_only_ the logic that is implemented in the "higher-level" function, delegating the testing of the functions called
internally to a "lower-level" set of unit tests.

A similar concern could also be raised whenever there is overlap between unit tests and integration tests,
but in this case the distinctive value of the two layers of testing is determined by how tests are designed:

- unit tests are focused on code structure: func(input) = output, including edge case values, asserting error conditions etc.
- integration tests are user story driven: as a user, I want to express some desired state using API objects, wait for the
  reconcilers to take action, check the new system state.

## Running unit and integration tests

Run `make test` to execute all unit and integration tests.

Integration tests use the [envtest](https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/envtest/doc.go) test framework. The tests need to know the location of the executables called by the framework. The `make test` target installs these executables, and passes this location to the tests as an environment variable.

<aside class="note">

<h1>Tips</h1>

When testing individual packages, you can speed up the test execution by running the tests with a local kind cluster.
This avoids spinning up a testenv with each test execution. It also makes it easier to debug, because it's straightforward
to access a kind cluster with kubectl during test execution. For further instructions, run: `./hack/setup-envtest-with-kind.sh`.

When running individual tests, a testenv might be started if this is required by the `suite_test.go` file.
However, if the tests you are running don't require testenv (i.e. they are only using the fake client), you can skip the testenv
creation by setting the environment variable `CAPI_DISABLE_TEST_ENV` (to any non-empty value).

To debug testenv unit tests it is possible to use:
* `CAPI_TEST_ENV_KUBECONFIG` to write out a kubeconfig for the testenv to a file location.
* `CAPI_TEST_ENV_SKIP_STOP` to skip stopping the testenv after test execution.

</aside>

### Test execution via IDE

Your IDE needs to know the location of the executables called by the framework, so that it can pass the location to the tests as an environment variable.

<aside class="note warning">

<h1>Warning</h1>

If you see this error when running a test in your IDE, the test uses the envtest framework, and probably does not know the location of the envtest executables.

```console
E0210 16:11:04.222471  132945 server.go:329] controller-runtime/test-env "msg"="unable to start the controlplane" "error"="fork/exec /usr/local/kubebuilder/bin/etcd: no such file or directory" "tries"=0
```

</aside>

#### VSCode

The `dev/vscode-example-configuration` directory in the repository contains an example configuration that integrates VSCode with the envtest framework.

To use the example configuration, copy the files to the `.vscode` directory in the repository, and restart VSCode.

The configuration works as follows: Whenever the project is opened in VSCode, a VSCode task runs that installs the executables, and writes the location to a file. A setting tells [vscode-go] to initialize the environment from this file.

## End-to-end tests

The end-to-end tests are meant to verify the proper functioning of a Cluster API management cluster
in an environment that resembles a real production environment.

The following guidelines should be followed when developing E2E tests:

- Use the [Cluster API test framework].
- Define test specs reflecting real user workflows, e.g. [Cluster API quick start].
- Unless you are testing provider-specific features, ensure your test can run with
  different infrastructure providers (see [Writing Portable Tests](./e2e.md#writing-portable-e2e-tests)).

See [e2e development] for more information on developing e2e tests for CAPI and external providers.

## Running the end-to-end tests locally

Usually the e2e tests are executed by Prow, either pre-submit (on PRs) or periodically on certain branches
(e.g. the default branch). Those jobs are defined in the kubernetes/test-infra repository in [config/jobs/kubernetes-sigs/cluster-api](https://github.com/kubernetes/test-infra/tree/master/config/jobs/kubernetes-sigs/cluster-api).
For development and debugging, those tests can also be executed locally.

### Prerequisites

`make docker-build-e2e` will build the images for all providers that will be needed for the e2e tests.

### Test execution via ci-e2e.sh

To run a test locally via the command line, you should look at the Prow Job configuration for the test you want to run and then execute the same commands locally.
For example, to run [pull-cluster-api-e2e-main](https://github.com/kubernetes/test-infra/blob/49ab08a6a2a17377d52a11212e6f1104c3e87bfc/config/jobs/kubernetes-sigs/cluster-api/cluster-api-presubmits-main.yaml#L113-L140)
just execute:

```bash
GINKGO_FOCUS="\[PR-Blocking\]" ./scripts/ci-e2e.sh
```

### Test execution via make test-e2e

`make test-e2e` will run e2e tests by using whatever provider images already exist on disk.
After running `make docker-build-e2e` at least once, `make test-e2e` can be used for a faster test run, if there are no
provider code changes. If the provider code is changed, run `make docker-build-e2e` to update the images.

### Test execution via IDE

It's also possible to run the tests via an IDE, which makes it easier to debug the test code by stepping through the code.

First, we have to make sure all prerequisites are fulfilled, i.e. all required images have been built (this also includes
kind images). This can be done by executing the `./scripts/ci-e2e.sh` script.

```bash
# Notes:
# * You can cancel the script as soon as it starts the actual test execution via `make test-e2e`.
# * If you want to run other tests (e.g. upgrade tests), make sure all required env variables are set (see the Prow Job config).
GINKGO_FOCUS="\[PR-Blocking\]" ./scripts/ci-e2e.sh
```

Now, the tests can be run in an IDE. The following describes how this can be done in IntelliJ IDEA and VS Code. It should work
roughly the same way in all other IDEs. We assume the `cluster-api` repository has been checked
out into `/home/user/code/src/sigs.k8s.io/cluster-api`.

#### IntelliJ

Create a new run configuration and fill in:
* Test framework: `gotest`
* Test kind: `Package`
* Package path: `sigs.k8s.io/cluster-api/test/e2e`
* Pattern: `^\QTestE2E\E$`
* Working directory: `/home/user/code/src/sigs.k8s.io/cluster-api/test/e2e`
* Environment: `ARTIFACTS=/home/user/code/src/sigs.k8s.io/cluster-api/_artifacts`
* Program arguments: `-e2e.config=/home/user/code/src/sigs.k8s.io/cluster-api/test/e2e/config/docker.yaml -ginkgo.focus="\[PR-Blocking\]"`

#### VS Code

Add a `launch.json` file to the `.vscode` folder in your repo:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run e2e test",
            "type": "go",
            "request": "launch",
            "mode": "test",
            "program": "${workspaceRoot}/test/e2e/e2e_suite_test.go",
            "env": {
                "ARTIFACTS":"${workspaceRoot}/_artifacts"
            },
            "args": [
                "-e2e.config=${workspaceRoot}/test/e2e/config/docker.yaml",
                "-ginkgo.focus=\\[PR-Blocking\\]",
                "-ginkgo.v=true"
            ],
            "trace": "verbose",
            "buildFlags": "-tags 'e2e'",
            "showGlobalVariables": true
        }
    ]
}
```

Execute the run configuration with `Debug`.

<aside class="note">

<h1>Tips</h1>

The e2e tests create a new management cluster with kind on each run. To avoid this and speed up the test execution, the tests can
also be run against a management cluster created by [tilt](./tilt.md):
```bash
# Create a kind cluster
./hack/kind-install-for-capd.sh
# Set up the management cluster via tilt
tilt up
```
Now you can start the e2e test via IDE as described above, but with the additional `-e2e.use-existing-cluster=true` flag.

**Note**: This can also be used to debug controllers during e2e tests as described in [Developing Cluster API with Tilt](./tilt.md#wiring-up-debuggers).

The e2e tests also create a local clusterctl repository. After it has been created on the first test execution, this step can also be
skipped by setting `-e2e.cluster-config=<ARTIFACTS>/repository/clusterctl-config.yaml`. This also works with a clusterctl repository created
via [Create the local repository](../clusterctl/developers.md#create-the-local-repository).

**Feature gates**: E2E tests often use features which need to be enabled first. Make sure to enable the feature gates in the tilt settings file:
```yaml
kustomize_substitutions:
  CLUSTER_TOPOLOGY: "true"
  EXP_MACHINE_POOL: "true"
  EXP_CLUSTER_RESOURCE_SET: "true"
  EXP_KUBEADM_BOOTSTRAP_FORMAT_IGNITION: "true"
  EXP_RUNTIME_SDK: "true"
  EXP_MACHINE_SET_PREFLIGHT_CHECKS: "true"
```

</aside>

### Running specific tests

To run a subset of tests, one or both of the `GINKGO_FOCUS` and `GINKGO_SKIP` env variables can be set.
Each of these can be used to match tests, for example:
- `[PR-Blocking]` => Sanity tests run before each PR merge
- `[K8s-Upgrade]` => Tests which verify k8s component version upgrades on workload clusters
- `[Conformance]` => Tests which run the k8s conformance suite on workload clusters
- `[ClusterClass]` => Tests which use a ClusterClass to create a workload cluster
- `When testing KCP.*` => Tests which start with `When testing KCP`

For example:
- `GINKGO_FOCUS="\\[PR-Blocking\\]" make test-e2e` can be used to run the sanity E2E tests
- `GINKGO_SKIP="\\[K8s-Upgrade\\]" make test-e2e` can be used to skip the upgrade E2E tests

### Further customization

The following env variables can be set to customize the test execution:

- `GINKGO_FOCUS` to set ginkgo focus (default empty - all tests)
- `GINKGO_SKIP` to set ginkgo skip (default empty - to allow running all tests)
- `GINKGO_NODES` to set the number of ginkgo parallel nodes (default to 1)
- `E2E_CONF_FILE` to set the e2e test config file (default to ${REPO_ROOT}/test/e2e/config/docker.yaml)
- `ARTIFACTS` to set the folder where test artifacts will be stored (default to ${REPO_ROOT}/_artifacts)
- `SKIP_RESOURCE_CLEANUP` to skip resource cleanup at the end of the test (useful for problem investigation) (default to false)
- `USE_EXISTING_CLUSTER` to use an existing management cluster instead of creating a new one for each test run (default to false)
- `GINKGO_NOCOLOR` to turn off the ginkgo colored output (default to false)

Furthermore, it's possible to overwrite all env variables specified in `variables` in `test/e2e/config/docker.yaml`.

## Troubleshooting end-to-end tests

### Analyzing logs

Logs of e2e tests can be analyzed with our development environment by pushing logs to Loki and then
analyzing them via Grafana.

1. Start the development environment as described in [Developing Cluster API with Tilt](./tilt.md).
    * Make sure to deploy Loki and Grafana via `deploy_observability`.
    * If you only want to see imported logs, don't deploy promtail (via `deploy_observability`).
    * If you want to drop all logs from Loki, just delete the Loki Pod in the `observability` namespace.
2. You can then import logs via the `Import Logs` button on the top right of the [Loki resource page](http://localhost:10350/r/loki/overview).
   Just click on the downwards arrow, enter either a ProwJob URL, a GCS path or a local folder and click on `Import Logs`.
   This will retrieve the logs and push them to Loki. Alternatively, the logs can be imported via:
   ```bash
   go run ./hack/tools/internal/log-push --log-path=<log-path>
   ```
   Examples for log paths:
    * ProwJob URL: `https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api/6189/pull-cluster-api-e2e-main/1496954690603061248`
    * GCS path: `gs://kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api/6189/pull-cluster-api-e2e-main/1496954690603061248`
    * Local folder: `./_artifacts`
3. Now the logs are available:
    * via [Grafana](http://localhost:3001/explore)
    * via [Loki logcli](https://grafana.com/docs/loki/latest/getting-started/logcli/)
      ```bash
      logcli query '{app="capi-controller-manager"}' --timezone=UTC --from="2022-02-22T10:00:00Z"
      ```

<aside class="note">

<h1>Caveats</h1>

* Make sure you query the correct time range via Grafana or `logcli`.
* The logs are currently uploaded by using now as the timestamp, because otherwise it would
  take a few minutes until the logs show up in Loki. The original timestamp is preserved as `original_ts`.

</aside>

As an alternative to Loki, JSON logs can be visualized with a human-readable timestamp using `jq`:

1. Browse the ProwJob artifacts and download the log file you want.
2. Use `jq` to query the logs:

   ```bash
   cat manager.log \
     | grep -v "TLS handshake error" \
     | jq -r '(.ts / 1000 | todateiso8601) + " " + (. | tostring)'
   ```

   The `(. | tostring)` part could also be customized to only output parts of the JSON log line.
   E.g.:

   * `(.err)` to only output the error message part.
   * `(.msg)` to only output the message part.
   * `(.controller + " " + .msg)` to output the controller name and message part.

### Known Issues

#### Building images on SELinux

Cluster API repositories use [Moby BuildKit](https://github.com/moby/buildkit) to speed up image builds.
[BuildKit does not currently work on SELinux](https://github.com/moby/buildkit/issues/2295).

Use `sudo setenforce 0` to make SELinux permissive when running e2e tests.

## Quick reference

### `envtest`

[envtest] is a testing environment that is provided by the [controller-runtime] project. This environment spins up a
local instance of etcd and the kube-apiserver. This allows tests to be executed in an environment very similar to a
real environment.

Additionally, in Cluster API there is a set of utilities under [internal/envtest] that helps developers in setting up
an [envtest] ready for Cluster API testing, and more specifically:

- With the required CRDs already pre-configured.
- With all the Cluster API webhooks pre-configured, so there are enforced guarantees about the semantic accuracy
  of the test objects you are going to create.

This is an example of how to create an instance of [envtest] that can be shared across all the tests in a package;
by convention, this code should be in a file named `suite_test.go`:

```golang
var (
	env *envtest.Environment
	ctx = ctrl.SetupSignalHandler()
)

func TestMain(m *testing.M) {
	// Setup envtest
	...

	// Run tests
	os.Exit(envtest.Run(ctx, envtest.RunInput{
		M:                m,
		SetupEnv:         func(e *envtest.Environment) { env = e },
		SetupIndexes:     setupIndexes,
		SetupReconcilers: setupReconcilers,
	}))
}
```

Most notably, [envtest] provides not only a real API server to use during testing, but it offers the opportunity
to configure one or more controllers to run against the test cluster, as well as creating informer indexes.

```golang
func TestMain(m *testing.M) {
	// Setup envtest
	setupReconcilers := func(ctx context.Context, mgr ctrl.Manager) {
		if err := (&MyReconciler{
			Client: mgr.GetClient(),
			Log:    log.NullLogger{},
		}).SetupWithManager(mgr, controller.Options{MaxConcurrentReconciles: 1}); err != nil {
			panic(fmt.Sprintf("Failed to start the MyReconciler: %v", err))
		}
	}

	setupIndexes := func(ctx context.Context, mgr ctrl.Manager) {
		if err := index.AddDefaultIndexes(ctx, mgr); err != nil {
			panic(fmt.Sprintf("unable to setup index: %v", err))
		}
	}

	// Run tests
	...
}
```

By combining pre-configured validation and mutating webhooks and reconcilers/indexes it is possible
to use [envtest] for developing Cluster API integration tests that can mimic how the system
behaves in a real cluster.

Please note that, because [envtest] uses a real kube-apiserver that is shared across many test cases, the developer
should take care in ensuring each test runs in isolation from the others, by:

- Creating objects in separated namespaces.
- Avoiding object name conflicts.

Developers should also be aware of the fact that the informers cache used to access the [envtest]
depends on actual etcd watches/API calls for updates, and thus it could happen that after creating
or deleting objects the cache takes a few milliseconds to get updated. This can lead to test flakes,
and thus it is always recommended to use patterns like create and wait or delete and wait; the Cluster API
envtest helpers provide a set of utils for this scope.

However, developers should be aware that in some ways, the test control plane will behave differently from “real”
clusters, and that might have an impact on how you write tests.

One common example is garbage collection; because there are no controllers monitoring built-in resources, objects
do not get deleted, even if an OwnerReference is set up; as a consequence, tests usually implement code for cleaning up
created objects.

This is an example of a test implementing those recommendations:

```golang
func TestAFunc(t *testing.T) {
	g := NewWithT(t)
	// Generate namespace with a random name starting with ns1; such namespace
	// will host test objects in isolation from other tests.
	ns1, err := env.CreateNamespace(ctx, "ns1")
	g.Expect(err).ToNot(HaveOccurred())
	defer func() {
		// Cleanup the test namespace
		g.Expect(env.DeleteNamespace(ctx, ns1)).To(Succeed())
	}()

	obj := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test",
			Namespace: ns1.Name, // Place test objects in the test namespace
		},
	}

	// Actual test code...
}
```

When an object is used in many test cases within the same test, it is possible to leverage Kubernetes `GenerateName`;
for objects that are shared across sub-tests, ensure they are scoped within the test namespace and deep copied to avoid
cross-test changes that may occur to the object.

```golang
func TestAFunc(t *testing.T) {
	g := NewWithT(t)
	// Generate namespace with a random name starting with ns1; such namespace
	// will host test objects in isolation from other tests.
	ns1, err := env.CreateNamespace(ctx, "ns1")
	g.Expect(err).ToNot(HaveOccurred())
	defer func() {
		// Cleanup the test namespace
		g.Expect(env.DeleteNamespace(ctx, ns1)).To(Succeed())
	}()

	obj := &clusterv1.Cluster{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "test-",  // Instead of assigning a name, use GenerateName
			Namespace:    ns1.Name, // Place test objects in the test namespace
		},
	}

	t.Run("test case 1", func(t *testing.T) {
		g := NewWithT(t)
		// Deep copy the object in each test case, so we prevent side effects in case the object changes.
		// Additionally, thanks to GenerateName, the object gets a new name for each test case.
		obj := obj.DeepCopy()

		// Actual test case code...
	})
	t.Run("test case 2", func(t *testing.T) {
		g := NewWithT(t)
		obj := obj.DeepCopy()

		// Actual test case code...
	})
	// More test cases.
}
```

### `fakeclient`

[fakeclient] is another utility that is provided by the [controller-runtime] project. While this utility is really
fast and simple to use because it does not require spinning up an instance of etcd and kube-apiserver, the [fakeclient]
comes with a set of limitations that could hamper the validity of a test, most notably:

- it does not properly handle a set of fields which are common in the Kubernetes API objects (and Cluster API objects as well)
  like e.g. `creationTimestamp`, `resourceVersion`, `generation`, `uid`
- [fakeclient] operations do not trigger defaulting or validation webhooks, so there are no enforced guarantees about the semantic accuracy
  of the test objects.
- the [fakeclient] does not use a cache based on informers/API calls/etcd watches, so tests written in this way
  can't help in surfacing race conditions related to how those components behave in a real cluster.
- there is no support for cache indexes/operations using cache indexes.

Accordingly, using [fakeclient] is not suitable for all use cases, so in some cases contributors will be required
to use [envtest] instead. In case of doubts about which one to use when writing tests, don't hesitate to ask for
guidance from project maintainers.

### `ginkgo`
[Ginkgo] is a Go testing framework built to help you efficiently write expressive and comprehensive tests using Behavior-Driven Development (“BDD”) style.

While [Ginkgo] is widely used in the Kubernetes ecosystem, Cluster API maintainers found the lack of integration with the
most used Go IDEs somewhat limiting, mostly because:

- it makes interactive debugging of tests more difficult, since you can't just run the test using the debugger directly
- it makes it more difficult to only run a subset of tests, since you can't just run or debug individual tests using an IDE,
  but you now need to run the tests using `make` or the `ginkgo` command line and override the focus to select individual tests

In Cluster API you MUST use Ginkgo only for E2E tests, where it is required to leverage the support for running specs
in parallel; in any case, developers MUST NOT use the table driven extension DSL (`DescribeTable`, `Entry` commands)
which is considered unintuitive.

### `gomega`
[Gomega] is a matcher/assertion library. It is usually paired with the Ginkgo BDD test framework, but it can be used with
other test frameworks too.

More specifically, in order to use Gomega with go test you should wrap the test with `NewWithT`:

```golang
func TestFarmHasCow(t *testing.T) {
	g := NewWithT(t)
	g.Expect(f.HasCow()).To(BeTrue(), "Farm should have cow")
}
```

In Cluster API all the tests MUST use [Gomega] assertions.

### `go test`

[go test] provides support for automated testing of Go packages.

In Cluster API, unit and integration tests MUST use [go test].

[Cluster API quick start]: ../user/quick-start.md
[Cluster API test framework]: https://pkg.go.dev/sigs.k8s.io/cluster-api/test/framework?tab=doc
[e2e development]: ./e2e.md
[Ginkgo]: https://onsi.github.io/ginkgo/
[Gomega]: https://onsi.github.io/gomega/
[go test]: https://golang.org/pkg/testing/
[controller-runtime]: https://github.com/kubernetes-sigs/controller-runtime
[envtest]: https://github.com/kubernetes-sigs/controller-runtime/tree/main/pkg/envtest
[fakeclient]: https://github.com/kubernetes-sigs/controller-runtime/tree/main/pkg/client/fake
[test/helpers]: https://github.com/kubernetes-sigs/cluster-api/tree/main/test/helpers

[vscode-go]: https://marketplace.visualstudio.com/items?itemName=golang.Go