
     1  # clusterctl Provider Contract
     2  
     3  The `clusterctl` command is designed to work with all the providers compliant with the following rules.
     4  
     5  ## Provider Repositories
     6  
     7  Each provider MUST define a **provider repository**, that is a well-known place where the release assets for
     8  a provider are published.
     9  
    10  The provider repository MUST contain the following files:
    11  
    12  * The metadata YAML
    13  * The components YAML
    14  
    15  Additionally, the provider repository SHOULD contain the following files:
    16  
    17  * Workload cluster templates
    18  
    19  Optionally, the provider repository can include the following files:
    20  
    21  * ClusterClass definitions
    22  
    23  <aside class="note">
    24  
    25  <h1> Pre-defined list of providers </h1>
    26  
    27  The `clusterctl` command ships with a pre-defined list of provider repositories that allows a simpler "out-of-the-box" user experience.
As a provider implementer, if you are interested in being added to this list, please see the next paragraph.
    29  
    30  </aside>
    31  
    32  <aside class="note">
    33  
    34  <h1>Customizing the list of providers</h1>
    35  
    36  It is possible to customize the list of providers for `clusterctl` by changing the [clusterctl configuration](configuration.md).
    37  
    38  </aside>
    39  
    40  #### Adding a provider to clusterctl
    41  
As a Cluster API project, we have always been more than happy to give visibility to all the open source CAPI providers
by allowing provider maintainers to add their own project to the pre-defined list of providers shipped with `clusterctl`.
    44  
    45  <aside class="note">
    46  
<h1>Important! It is visibility only</h1>

Provider maintainers are ultimately responsible for their own project.
    50  
    51  Adding a provider to the `clusterctl` provider list does not imply any form of quality assessment, market screening, 
    52  entitlement, recognition or support by the Cluster API maintainers.
    53  
    54  </aside>
    55  
    56  This is the process to add a new provider to the pre-defined list of providers shipped with `clusterctl`:
- As soon as possible, create an issue in the [Cluster API repository](https://sigs.k8s.io/cluster-api) declaring the intent to add a new provider;
  each provider must have a unique name & type in the pre-defined list of providers shipped with `clusterctl`; the provider's name
  must be declared in the issue above and abide by the following naming convention:
    60    - The name must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character.
    61    - The name length should not exceed 63 characters.
    62    - For providers not in the kubernetes-sigs org, in order to prevent conflicts the `clusterctl` name must be prefixed with
    63      the provider's GitHub org name followed by `-` (see note below).
    64  - Create a PR making the necessary changes to clusterctl and the Cluster API book, e.g. [#9798](https://github.com/kubernetes-sigs/cluster-api/pull/9798),
  [#9720](https://github.com/kubernetes-sigs/cluster-api/pull/9720/files).
    66  
The Cluster API maintainers will review issues/PRs for adding new providers. If the PR merges before the code freeze
deadline for the next Cluster API minor release, the changes will be included in that release; otherwise they will be
included in the following minor release. Maintainers will also consider whether it is possible/convenient to backport
the change to the current Cluster API minor release branch so it can be included in the next patch release.
    71  
    72  <aside class="note">
    73  
    74  <h1>What about closed source providers?</h1>
    75  
Closed source providers cannot be added to the pre-defined list of providers shipped with `clusterctl`; however,
those providers can still be used with `clusterctl` by changing the [clusterctl configuration](configuration.md).
    78  
    79  </aside>
    80  
    81  <aside class="note">
    82  
    83  <h1>Provider's GitHub org prefix</h1>
    84  
The need to add a prefix for providers not in the kubernetes-sigs org applies to all the providers being added to
`clusterctl`'s pre-defined list of providers starting from January 2024. This rule doesn't apply retroactively
    87  to the existing pre-defined providers, but we reserve the right to reconsider this in the future.
    88  
    89  Please note that the need to add a prefix for providers not in the kubernetes-sigs org does not apply to providers added by
    90  changing the [clusterctl configuration](configuration.md).
    91  
    92  </aside>
    93  
    94  #### Creating a provider repository on GitHub
    95  
    96  You can use a GitHub release to package your provider artifacts for other people to use.
    97  
    98  A GitHub release can be used as a provider repository if:
    99  
   100  * The release tag is a valid semantic version number
* The components YAML, the metadata YAML and, optionally, the workload cluster templates are included in the release assets.
   102  
   103  See the [GitHub docs](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository) for more information
   104  about how to create a release.
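
For example, a hypothetical infrastructure provider `foo` owned by the `myorg` GitHub org could publish releases
containing the required assets and then be referenced from the [clusterctl configuration](configuration.md); the
names and URL below are purely illustrative:

```yaml
# Hypothetical assets attached to the GitHub release v0.5.0 of
# github.com/myorg/cluster-api-provider-foo:
#   - infrastructure-components.yaml
#   - metadata.yaml
#   - cluster-template.yaml
#
# Corresponding provider entry in the clusterctl configuration file:
providers:
  - name: "myorg-foo"
    url: "https://github.com/myorg/cluster-api-provider-foo/releases/latest/infrastructure-components.yaml"
    type: "InfrastructureProvider"
```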
   105  
By default, `clusterctl` will use a Go proxy to detect the available versions to prevent additional
API calls to the GitHub API. It is possible to configure the Go proxy URL using the `GOPROXY` variable, as
for Go itself (defaults to `https://proxy.golang.org`).
To immediately fall back to the GitHub client and not use a Go proxy, the environment variable can be set to
`GOPROXY=off` or `GOPROXY=direct`.
If a provider does not follow Go's semantic versioning, `clusterctl` may fail when detecting the correct version.
In such cases, disabling the Go proxy functionality via `GOPROXY=off` should be considered.
   113  
   114  #### Creating a provider repository on GitLab
   115  
You can use GitLab generic packages for provider artifacts.
   117  
A provider URL should be in the form
   119  `https://{host}/api/v4/projects/{projectSlug}/packages/generic/{packageName}/{defaultVersion}/{componentsPath}`, where:
   120  
   121  * `{host}` should start with `gitlab.` (`gitlab.com`, `gitlab.example.org`, ...)
   122  * `{projectSlug}` is either a project id (`42`) or escaped full path (`myorg%2Fmyrepo`)
   123  * `{defaultVersion}` is a valid semantic version number
* The components YAML, the metadata YAML and, optionally, the workload cluster templates are included in the same package version
   125  
   126  See the [GitLab docs](https://docs.gitlab.com/ee/user/packages/generic_packages/) for more information
   127  about how to create a generic package.
   128  
   129  This can be used in conjunction with [GitLabracadabra](https://gitlab.com/gitlabracadabra/gitlabracadabra/)
to avoid direct internet access from `clusterctl`, and to use GitLab as the artifacts repository. For example,
   131  for the core provider:
   132  
   133  - Use the following [action file](https://gitlab.com/gitlabracadabra/gitlabracadabra/#action-files):
   134  
   135    ```yaml
   136    external-packages/cluster-api:
   137      packages_enabled: true
   138      package_mirrors:
   139      - github:
   140          full_name: kubernetes-sigs/cluster-api
   141          tags:
   142          - v1.2.3
   143          assets:
   144          - clusterctl-linux-amd64
   145          - core-components.yaml
   146          - bootstrap-components.yaml
   147          - control-plane-components.yaml
   148          - metadata.yaml
   149    ```
   150  
   151  - Use the following [`clusterctl` configuration](configuration.md):
   152  
   153    ```yaml
   154    providers:
    # override a pre-defined provider on a self-hosted GitLab
   156      - name: "cluster-api"
   157        url: "https://gitlab.example.com/api/v4/projects/external-packages%2Fcluster-api/packages/generic/cluster-api/v1.2.3/core-components.yaml"
   158        type: "CoreProvider"
   159    ```
   160  
   161  Limitation: Provider artifacts hosted on GitLab don't support getting all versions.
As a consequence, you need to set the version explicitly for upgrades.
   163  
   164  #### Creating a local provider repository
   165  
   166  clusterctl supports reading from a repository defined on the local file system.
   167  
   168  A local repository can be defined by creating a `<provider-label>` folder with a `<version>` sub-folder for each hosted release;
the sub-folder name MUST be a valid semantic version number, e.g.:
   170  
   171  ```bash
   172  ~/local-repository/infrastructure-aws/v0.5.2
   173  ```
   174  
Each version sub-folder MUST contain the corresponding components YAML, the metadata YAML and, optionally, the workload cluster templates.
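
A local repository can then be referenced from the [clusterctl configuration](configuration.md) by pointing the
provider `url` at the components YAML on the file system; a minimal sketch, assuming the layout above (the absolute
path is illustrative):

```yaml
providers:
  - name: "aws"
    url: "/home/user/local-repository/infrastructure-aws/v0.5.2/infrastructure-components.yaml"
    type: "InfrastructureProvider"
```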
   176  
   177  ### Metadata YAML
   178  
   179  The provider is required to generate a **metadata YAML** file and publish it to the provider's repository.
   180  
   181  The metadata YAML file documents the release series of each provider and maps each release series to an API Version of Cluster API (contract).
   182  
   183  For example, for Cluster API:
   184  
   185  ```yaml
   186  apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
   187  kind: Metadata
   188  releaseSeries:
   189  - major: 0
   190    minor: 3
   191    contract: v1alpha3
   192  - major: 0
   193    minor: 2
   194    contract: v1alpha2
   195  ```
   196  
   197  <aside class="note">
   198  
   199  <h1> Note on user experience</h1>
   200  
For clusterctl versions pre-v1alpha4, if provider implementers only update clusterctl's built-in metadata and don't provide a `metadata.yaml` in a new release, users are forced to update `clusterctl`
   202  to the latest released version in order to properly install the provider.
   203  
   204  As a related example, see the details in [issue 3418].
   205  
To address the issue explained above, the embedded metadata within clusterctl has been removed (as of v1alpha4) to prevent the reliance on using the latest version of clusterctl in order to pull newer provider releases.
   207  
   208  For more information see the details in [issue 3515].
   209  </aside>
   210  
   211  ### Components YAML
   212  
   213  The provider is required to generate a **components YAML** file and publish it to the provider's repository.
   214  This file is a single YAML with _all_ the components required for installing the provider itself (CRDs, Controller, RBAC etc.).
   215  
   216  The following rules apply:
   217  
   218  #### Naming conventions
   219  
   220  It is strongly recommended that:
   221  * Core providers release a file called `core-components.yaml`
   222  * Infrastructure providers release a file called `infrastructure-components.yaml`
* Bootstrap providers release a file called `bootstrap-components.yaml`
   224  * Control plane providers release a file called `control-plane-components.yaml`
   225  * IPAM providers release a file called `ipam-components.yaml`
   226  * Runtime extensions providers release a file called `runtime-extension-components.yaml`
   227  * Add-on providers release a file called `addon-components.yaml`
   228  
   229  #### Target namespace
   230  
   231  The instance components should contain one Namespace object, which will be used as the default target namespace
   232  when creating the provider components.
   233  
   234  All the objects in the components YAML MUST belong to the target namespace, with the exception of objects that
   235  are not namespaced, like ClusterRoles/ClusterRoleBinding and CRD objects.
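
A minimal sketch of the relevant objects in a components YAML, for a hypothetical provider `foo` (all names are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: capi-foo-system            # the default target namespace
---
apiVersion: v1
kind: ServiceAccount               # namespaced objects belong to the target namespace
metadata:
  name: capi-foo-manager
  namespace: capi-foo-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole                  # cluster-scoped objects carry no namespace
metadata:
  name: capi-foo-manager-role
rules: []
```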
   236  
   237  <aside class="note warning">
   238  
   239  <h1>Warning</h1>
   240  
   241  If the generated component YAML doesn't contain a Namespace object, the user will be required to provide one to `clusterctl init`
   242  using the `--target-namespace` flag.
   243  
   244  In case there is more than one Namespace object in the components YAML, `clusterctl` will generate an error and abort
   245  the provider installation.
   246  
   247  </aside>
   248  
   249  #### Controllers & Watching namespace
   250  
   251  Each provider is expected to deploy controllers/runtime extension server using a Deployment.
   252  
   253  While defining the Deployment Spec, the container that executes the controller/runtime extension server binary MUST be called `manager`.
   254  
For controllers only, the manager MUST support a `--namespace` flag for specifying the namespace where the controller
will look for objects to reconcile; however, clusterctl will always install providers watching for all namespaces
(`--namespace=""`); see [support for multiple instances](../developer/architecture/controllers/support-multiple-instances.md)
for more context.
   259  
   260  While defining Pods for Deployments, canonical names should be used for images.
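
A minimal Deployment sketch consistent with these conventions (the provider name, image and namespace are
illustrative; the empty `--namespace` value mirrors how clusterctl installs providers, i.e. watching all namespaces):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capi-foo-controller-manager
  namespace: capi-foo-system
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - name: manager              # the container running the controller MUST be called "manager"
        image: registry.example.com/myorg/cluster-api-foo-controller:v0.5.0   # canonical, fully qualified image name
        args:
        - "--namespace="           # empty value: reconcile objects in all namespaces
```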
   261  
   262  #### Variables
   263  
The components YAML can contain environment variables matching the format `${VAR}`; it is highly
recommended to prefix the variable name with the provider name, e.g. `${AWS_CREDENTIALS}`.
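
For example, a hypothetical `foo` provider could surface its credentials through a variable in a Secret shipped as
part of the components YAML (the variable and object names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: capi-foo-manager-credentials
  namespace: capi-foo-system
type: Opaque
data:
  credentials: ${FOO_B64ENCODED_CREDENTIALS}
```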
   266  
   267  <aside class="note warning">
   268  
   269  <h1>Warning</h1>
   270  
`clusterctl` currently supports variables with leading/trailing spaces such
as: `${ VAR }`, `${ VAR}`, `${VAR }`. However, these formats will be deprecated
in the near future, e.g. in v1alpha4.
   274  
   275  Formats such as `${VAR$FOO}` are not supported.
   276  </aside>
   277  
   278  `clusterctl` uses the library [drone/envsubst][drone-envsubst] to perform
   279  variable substitution.
   280  
   281  ```bash
   282  # If `VAR` is not set or empty, the default value is used. This is true for
   283  # all the following formats.
   284  ${VAR:=default}
   285  ${VAR=default}
   286  ${VAR:-default}
   287  ```
   288  Other functions such as substring replacement are also supported by the
   289  library. See [drone/envsubst][drone-envsubst] for more information.
   290  
   291  Additionally, each provider should create user facing documentation with the list of required variables and with all the additional
   292  notes that are required to assist the user in defining the value for each variable.
   293  
   294  #### Labels
The components in the components YAML should be labeled with
   296  `cluster.x-k8s.io/provider` and the name of the provider. This will enable an
   297  easier transition from `kubectl apply` to `clusterctl`.
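
For example, a hypothetical `foo` infrastructure provider would label every object in its components YAML as follows
(object names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: capi-foo-manager
  namespace: capi-foo-system
  labels:
    cluster.x-k8s.io/provider: infrastructure-foo
```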
   298  
   299  As a reference you can consider the labels applied to the following
   300  providers.
   301  
   302  | Provider Name | Label                                                 |
   303  |---------------|-------------------------------------------------------|
   304  | CAPI          | cluster.x-k8s.io/provider=cluster-api                 |
   305  | CABPK         | cluster.x-k8s.io/provider=bootstrap-kubeadm           |
   306  | CABPM         | cluster.x-k8s.io/provider=bootstrap-microk8s          |
   307  | CABPKK3S      | cluster.x-k8s.io/provider=bootstrap-kubekey-k3s       |
   308  | CABPOCNE      | cluster.x-k8s.io/provider=bootstrap-ocne              |
   309  | CABPK0S       | cluster.x-k8s.io/provider=bootstrap-k0smotron         |
   310  | CACPK         | cluster.x-k8s.io/provider=control-plane-kubeadm       |
   311  | CACPM         | cluster.x-k8s.io/provider=control-plane-microk8s      |
   312  | CACPN         | cluster.x-k8s.io/provider=control-plane-nested        |
   313  | CACPKK3S      | cluster.x-k8s.io/provider=control-plane-kubekey-k3s   |
   314  | CACPOCNE      | cluster.x-k8s.io/provider=control-plane-ocne          |
   315  | CACPK0S       | cluster.x-k8s.io/provider=control-plane-k0smotron     |
   316  | CAPA          | cluster.x-k8s.io/provider=infrastructure-aws          |
   317  | CAPB          | cluster.x-k8s.io/provider=infrastructure-byoh         |
   318  | CAPC          | cluster.x-k8s.io/provider=infrastructure-cloudstack   |
   319  | CAPD          | cluster.x-k8s.io/provider=infrastructure-docker       |
   320  | CAPIM         | cluster.x-k8s.io/provider=infrastructure-in-memory    |
   321  | CAPDO         | cluster.x-k8s.io/provider=infrastructure-digitalocean |
   322  | CAPG          | cluster.x-k8s.io/provider=infrastructure-gcp          |
   323  | CAPH          | cluster.x-k8s.io/provider=infrastructure-hetzner      |
   324  | CAPHV         | cluster.x-k8s.io/provider=infrastructure-hivelocity   |
   325  | CAPIBM        | cluster.x-k8s.io/provider=infrastructure-ibmcloud     |
   326  | CAPKK         | cluster.x-k8s.io/provider=infrastructure-kubekey      |
   327  | CAPK          | cluster.x-k8s.io/provider=infrastructure-kubevirt     |
   328  | CAPM3         | cluster.x-k8s.io/provider=infrastructure-metal3       |
   329  | CAPN          | cluster.x-k8s.io/provider=infrastructure-nested       |
   330  | CAPO          | cluster.x-k8s.io/provider=infrastructure-openstack    |
   331  | CAPOCI        | cluster.x-k8s.io/provider=infrastructure-oci          |
   332  | CAPP          | cluster.x-k8s.io/provider=infrastructure-packet       |
   333  | CAPV          | cluster.x-k8s.io/provider=infrastructure-vsphere      |
   334  | CAPVC         | cluster.x-k8s.io/provider=infrastructure-vcluster     |
   335  | CAPVCD        | cluster.x-k8s.io/provider=infrastructure-vcd          |
   336  | CAPX          | cluster.x-k8s.io/provider=infrastructure-nutanix      |
   337  | CAPZ          | cluster.x-k8s.io/provider=infrastructure-azure        |
   338  | CAPOSC        | cluster.x-k8s.io/provider=infrastructure-outscale     |
   339  | CAPK0S        | cluster.x-k8s.io/provider=infrastructure-k0smotron    |
   340  | CAIPAMIC      | cluster.x-k8s.io/provider=ipam-in-cluster             |
   341  
   342  ### Workload cluster templates
   343  
   344  An infrastructure provider could publish a **cluster templates** file to be used by `clusterctl generate cluster`.
This is a single YAML with _all_ the objects required to create a new workload cluster.
   346  
   347  With ClusterClass enabled it is possible to have cluster templates with managed topologies. Cluster templates with managed
topologies require only the Cluster object in the template and a corresponding ClusterClass definition.
   349  
   350  The following rules apply:
   351  
   352  #### Naming conventions
   353  
   354  Cluster templates MUST be stored in the same location as the component YAML and follow this naming convention:
   355  1. The default cluster template should be named `cluster-template.yaml`.
2. Additional cluster templates should be named `cluster-template-{flavor}.yaml`, e.g. `cluster-template-prod.yaml`.
   357  
   358  `{flavor}` is the name the user can pass to the `clusterctl generate cluster --flavor` flag to identify the specific template to use.
   359  
   360  Each provider SHOULD create user facing documentation with the list of available cluster templates.
   361  
   362  #### Target namespace
   363  
   364  The cluster template YAML MUST assume the target namespace already exists.
   365  
   366  All the objects in the cluster template YAML MUST be deployed in the same namespace.
   367  
   368  #### Variables
   369  
   370  The cluster templates YAML can also contain environment variables (as can the components YAML).
   371  
   372  Additionally, each provider should create user facing documentation with the list of required variables and with all the additional
   373  notes that are required to assist the user in defining the value for each variable.
   374  
   375  ##### Common variables
   376  
The `clusterctl generate cluster` command allows the user to set a small set of common variables via CLI flags or command arguments.
   378  
Template writers should use the common variables to ensure consistency across providers and a simpler user experience
   380  (if compared to the usage of OS environment variables or the `clusterctl` config file).
   381  
   382  | CLI flag                | Variable name     | Note                                        |
   383  | ---------------------- | ----------------- | ------------------------------------------- |
   384  |`--target-namespace`| `${NAMESPACE}` | The namespace where the workload cluster should be deployed |
   385  |`--kubernetes-version`| `${KUBERNETES_VERSION}` | The Kubernetes version to use for the workload cluster |
   386  |`--controlplane-machine-count`| `${CONTROL_PLANE_MACHINE_COUNT}` | The number of control plane machines to be added to the workload cluster |
   387  |`--worker-machine-count`| `${WORKER_MACHINE_COUNT}` | The number of worker machines to be added to the workload cluster |
   388  
Additionally, the value of the command argument to `clusterctl generate cluster <cluster-name>` (`<cluster-name>` in this case) will
be applied to every occurrence of the `${ CLUSTER_NAME }` variable.
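
As an illustrative sketch, a cluster template with a managed topology can consume all the common variables with a
single Cluster object (the ClusterClass name and machine deployment class below are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${NAMESPACE}
spec:
  topology:
    class: my-cluster-class
    version: ${KUBERNETES_VERSION}
    controlPlane:
      replicas: ${CONTROL_PLANE_MACHINE_COUNT}
    workers:
      machineDeployments:
      - class: default-worker
        name: md-0
        replicas: ${WORKER_MACHINE_COUNT}
```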
   391  
   392  ### ClusterClass definitions
   393  
An infrastructure provider could publish a **ClusterClass definition** file to be used by `clusterctl generate cluster` along
with the workload cluster templates.
This is a single YAML with _all_ the objects that make up the ClusterClass.
   397  
   398  The following rules apply:
   399  
   400  #### Naming conventions
   401  
   402  ClusterClass definitions MUST be stored in the same location as the component YAML and follow this naming convention:
1. The ClusterClass definition should be named `clusterclass-{ClusterClass-name}.yaml`, e.g. `clusterclass-prod.yaml`.
   404  
`{ClusterClass-name}` is the name of the ClusterClass that is referenced from the `Cluster.spec.topology.class` field
   406  in the Cluster template; Cluster template files using a ClusterClass are usually simpler because they are no longer
   407  required to have all the templates.
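
For example, a file named `clusterclass-prod.yaml` would contain a ClusterClass named `prod`; a minimal, purely
illustrative sketch (the referenced template kinds and names are placeholders):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
metadata:
  name: prod                       # referenced by Cluster.spec.topology.class in the cluster template
spec:
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: FooClusterTemplate     # illustrative infrastructure kind
      name: prod-cluster
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1beta1
      kind: KubeadmControlPlaneTemplate
      name: prod-control-plane
```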
   408  
   409  Each provider should create user facing documentation with the list of available ClusterClass definitions.
   410  
   411  #### Target namespace
   412  
   413  The ClusterClass definition YAML MUST assume the target namespace already exists.
   414  
   415  The references in the ClusterClass definition should NOT specify a namespace.
   416  
It is recommended that none of the objects in the ClusterClass YAML specify a namespace.
   418  
   419  Even if technically possible, it is strongly recommended that none of the objects in the ClusterClass definitions are shared across multiple definitions;
this helps prevent a change to one object from inadvertently impacting many ClusterClasses and, consequently, all the Clusters using those ClusterClasses.
   421  
   422  #### Variables
   423  
   424  Currently the ClusterClass definitions SHOULD NOT have any environment variables in them.
   425  
ClusterClass definition files should not use variable substitution, given that ClusterClass and managed topologies provide an alternative model for variable definition.
   427  
   428  #### Note
   429  
A ClusterClass definition is automatically included in the output of `clusterctl generate cluster` if the cluster template uses a managed topology
and a ClusterClass with the same name does not already exist in the management cluster.
   432  
   433  ## OwnerReferences chain
   434  
Each provider is responsible for ensuring that all of the provider's resources (e.g. `VSphereCluster`, `VSphereMachine`, `VSphereVM` etc.
for the `vsphere` provider) MUST have a `Metadata.OwnerReferences` entry that links directly or indirectly to a `Cluster` object.
   437  
   438  Please note that all the provider specific resources that are referenced by the Cluster API core objects will get the `OwnerReference`
   439  set by the Cluster API core controllers, e.g.:
   440  
   441  * The Cluster controller ensures that all the objects referenced in `Cluster.Spec.InfrastructureRef` get an `OwnerReference`
   442    that links directly to the corresponding `Cluster`.
   443  * The Machine controller ensures that all the objects referenced in `Machine.Spec.InfrastructureRef` get an `OwnerReference`
   444    that links to the corresponding `Machine`, and the `Machine` is linked to the `Cluster` through its own `OwnerReference` chain.
   445  
   446  That means that, practically speaking, provider implementers are responsible for ensuring that the `OwnerReference`s
   447  are set only for objects that are not directly referenced by Cluster API core objects, e.g.:
   448  
   449  * All the `VSphereVM` instances should get an `OwnerReference` that links to the corresponding `VSphereMachine`, and the `VSphereMachine`
   450    is linked to the `Cluster` through its own `OwnerReference` chain.
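
For example, for the `vsphere` provider, the provider's controllers would set an owner reference similar to the
following on each `VSphereVM` (object names and uid are illustrative), while the `OwnerReference` on the
`VSphereMachine` itself is set by the Machine controller:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereVM
metadata:
  name: my-cluster-md-0-abcde
  namespace: my-namespace
  ownerReferences:
  - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachine
    name: my-cluster-md-0-abcde
    uid: "<uid of the owning VSphereMachine>"   # set by the provider's controller at runtime
```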
   451  
   452  ## Additional notes
   453  
   454  ### Components YAML transformations
   455  
   456  Provider authors should be aware of the following transformations that `clusterctl` applies during component installation:
   457  
   458  * Variable substitution;
   459  * Enforcement of target namespace:
   460    * The name of the namespace object is set;
  * The namespace field of all the objects is set (with the exception of cluster-wide objects, e.g. ClusterRoles);
   462  * All components are labeled;
   463  
   464  ### Cluster template transformations
   465  
Provider authors should be aware of the following transformations that `clusterctl` applies when generating cluster templates:
   467  
   468  * Variable substitution;
   469  * Enforcement of target namespace:
  * The namespace field of all the objects is set;
   471  
   472  ### Links to external objects
   473  
   474  The `clusterctl` command requires that both the components YAML and the cluster templates contain _all_ the required
   475  objects.
   476  
If, for any reason, the provider authors/YAML designers decide not to comply with this recommendation and, e.g., choose to

* implement links to external objects from a component YAML (e.g. secrets, aggregated ClusterRoles NOT included in the component YAML)
* implement links to external objects from a cluster template (e.g. secrets, configMaps NOT included in the cluster template)

then the provider authors/YAML designers should be aware that it is their responsibility to ensure the proper
functioning of `clusterctl` when using non-compliant component YAML or cluster templates.
   484  
   485  ### Move
   486  
Provider authors should be aware that the `clusterctl move` command implements a discovery mechanism that considers:
   488  
* All the Kinds defined in one of the CRDs installed by clusterctl using `clusterctl init` (identified via the `clusterctl.cluster.x-k8s.io` label);
  for each CRD, discovery collects:
   491    * All the objects from the namespace being moved only if the CRD scope is `Namespaced`.
   492    * All the objects if the CRD scope is `Cluster`.
   493  * All the `ConfigMap` objects from the namespace being moved.
   494  * All the `Secret` objects from the namespace being moved and from the namespaces where infrastructure providers are installed.
   495  
   496  After completing discovery, `clusterctl move` moves to the target cluster only the objects discovered in the previous phase
   497  that are compliant with one of the following rules:
   498    * The object is directly or indirectly linked to a `Cluster` object (linked through the `OwnerReference` chain).
   499    * The object is a secret containing a user provided certificate (linked to a `Cluster` object via a naming convention).
   500    * The object is directly or indirectly linked to a `ClusterResourceSet` object (through the `OwnerReference` chain).
   501    * The object is directly or indirectly linked to another object with the `clusterctl.cluster.x-k8s.io/move-hierarchy`
   502      label, e.g. the infrastructure Provider ClusterIdentity objects (linked through the `OwnerReference` chain).
   503    * The object has the `clusterctl.cluster.x-k8s.io/move` label or the `clusterctl.cluster.x-k8s.io/move-hierarchy` label,
   504      e.g. the CPI config secret.
   505  
Note: the `clusterctl.cluster.x-k8s.io/move` and `clusterctl.cluster.x-k8s.io/move-hierarchy` labels can be applied
to single objects or at the CRD level (in which case the label applies to all the objects of that Kind).
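
For example, a provider could opt an additional Secret into the move operation by applying the label directly to the
object (names and content below are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: foo-provider-config
  namespace: my-namespace
  labels:
    clusterctl.cluster.x-k8s.io/move: ""     # opt this object into clusterctl move
type: Opaque
data:
  config: bXktY29uZmln                        # base64("my-config")
```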
   508  
   509  Please note that during move:
   510    * Namespaced objects, if not existing in the target cluster, are created.
   511    * Namespaced objects, if already existing in the target cluster, are updated.
   512    * Namespaced objects are removed from the source cluster.
   513    * Global objects, if not existing in the target cluster, are created.
   514    * Global objects, if already existing in the target cluster, are not updated.
   515    * Global objects are not removed from the source cluster.
   516    * Namespaced objects which are part of an owner chain that starts with a global object (e.g. a secret containing
   517      credentials for an infrastructure Provider ClusterIdentity) are treated as Global objects.
   518  
   519  <aside class="note warning">
   520  
   521  <h1>Warning</h1>
   522  
   523  When using the "move" label, if the CRD is a global resource, the object is copied to the target cluster but not removed from the source cluster. It is up to the user to remove the source object as necessary.
   524  
   525  </aside>
   526  
If moving some of the excluded objects is required, the provider authors should create documentation describing the
   528  exact move sequence to be executed by the user.
   529  
   530  Additionally, provider authors should be aware that `clusterctl move` assumes all the provider's Controllers respect the
   531  `Cluster.Spec.Paused` field introduced in the v1alpha3 Cluster API specification. If a provider needs to perform extra work in response to a
   532  cluster being paused, `clusterctl move` can be blocked from creating any resources on the destination
   533  management cluster by annotating any resource to be moved with `clusterctl.cluster.x-k8s.io/block-move`.
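
A sketch of the annotation on an illustrative provider resource; while the annotation is present, `clusterctl move`
will not create resources on the target management cluster:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: FooMachine                                  # illustrative provider kind
metadata:
  name: my-cluster-md-0-abcde
  namespace: my-namespace
  annotations:
    clusterctl.cluster.x-k8s.io/block-move: ""    # while present, blocks clusterctl move
```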
   534  
   535  <aside class="note warning">
   536  
   537  <h1> Warning: Status subresource is never restored </h1>
   538  
   539  Every object's `Status` subresource, including every nested field (e.g. `Status.Conditions`), is never 
   540  restored during a `move` operation. A `Status` subresource should never contain fields that cannot 
   541  be recreated or derived from information in spec, metadata, or external systems.
   542  
   543  Provider implementers should not store non-ephemeral data in the `Status`. 
   544  `Status` should be able to be fully rebuilt by controllers by observing the current state of resources.
   545  
   546  </aside>
   547  
   548  <!--LINKS-->
   549  [drone-envsubst]: https://github.com/drone/envsubst
   550  [issue 3418]: https://github.com/kubernetes-sigs/cluster-api/issues/3418
   551  [issue 3515]: https://github.com/kubernetes-sigs/cluster-api/issues/3515