     1  # Quick Start
     2  
     3  In this tutorial we'll cover the basics of how to use Cluster API to create one or more Kubernetes clusters.
     4  
     5  <aside class="note warning">
     6  
     7  <h1>Warning</h1>
     8  
     9  If using a [provider] that does not support v1beta1 or v1alpha4 yet, please follow the [release 0.3](https://release-0-3.cluster-api.sigs.k8s.io/user/quick-start.html) or [release 0.4](https://release-0-4.cluster-api.sigs.k8s.io/user/quick-start.html) quickstart instructions instead.
    10  
    11  </aside>
    12  
    13  ## Installation
    14  
There are two major quickstart paths: using `clusterctl` or the Cluster API Operator.

This article describes a path that uses the `clusterctl` CLI tool to handle the lifecycle of a Cluster API [management cluster](https://cluster-api.sigs.k8s.io/reference/glossary#management-cluster).
    18  
The clusterctl command line interface is specifically designed to provide a simple “day 1 experience” and a quick start with Cluster API. It automates fetching the YAML files defining [provider components](https://cluster-api.sigs.k8s.io/reference/glossary#provider-components) and installing them.

Additionally, it encodes a set of best practices for managing providers, which helps users avoid misconfigurations and manage day 2 operations such as upgrades.
    22  
    23  The Cluster API Operator is a Kubernetes Operator built on top of clusterctl and designed to empower cluster administrators to handle the lifecycle of Cluster API providers within a management cluster using a declarative approach. It aims to improve user experience in deploying and managing Cluster API, making it easier to handle day-to-day tasks and automate workflows with GitOps. Visit the [CAPI Operator quickstart] if you want to experiment with this tool.
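
As a rough sketch, the operator is typically installed with Helm along these lines (the repository URL and chart name below are assumptions; check the CAPI Operator documentation for current instructions):

```bash
# Add the CAPI Operator Helm repository and install the operator.
# Repo URL and chart name are assumptions; verify against the CAPI Operator docs.
helm repo add capi-operator https://kubernetes-sigs.github.io/cluster-api-operator
helm repo update
helm install capi-operator capi-operator/cluster-api-operator \
  --create-namespace -n capi-operator-system
```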
    24  
    25  ### Common Prerequisites
    26  
    27  - Install and setup [kubectl] in your local environment
    28  - Install [kind] and [Docker]
    29  - Install [Helm]
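
A quick way to confirm the prerequisites are in place is to check that each tool responds (a minimal sanity check):

```bash
kubectl version --client   # kubectl is installed
kind version               # kind is installed
docker version             # Docker daemon is reachable
helm version               # Helm is installed
```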
    30  
    31  ### Install and/or configure a Kubernetes cluster
    32  
    33  Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the installation process the
    34  Kubernetes cluster will be transformed into a [management cluster] by installing the Cluster API [provider components], so it
    35  is recommended to keep it separated from any application workload.
    36  
    37  It is a common practice to create a temporary, local bootstrap cluster which is then used to provision
    38  a target [management cluster] on the selected [infrastructure provider].
    39  
    40  **Choose one of the options below:**
    41  
    42  1. **Existing Management Cluster**
    43  
   For production use cases, a "real" Kubernetes cluster should be used with appropriate backup and disaster recovery policies and procedures in place. The Kubernetes cluster must be at least v1.20.0.
    45  
    46     ```bash
    47     export KUBECONFIG=<...>
    48     ```
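
   For example, assuming your cluster's kubeconfig was saved to a hypothetical path, point `kubectl` at it and verify connectivity:
   ```bash
   # Hypothetical path; replace with the kubeconfig of your own cluster.
   export KUBECONFIG="${HOME}/.kube/my-mgmt-cluster.kubeconfig"
   kubectl cluster-info
   ```
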
    49  **OR**
    50  
    51  2. **Kind**
    52  
    53     <aside class="note warning">
    54  
    55     <h1>Warning</h1>
    56  
    57     [kind] is not designed for production use.
    58  
    59     **Minimum [kind] supported version**: v0.22.0
    60  
    61     **Help with common issues can be found in the [Troubleshooting Guide](./troubleshooting.md).**
    62  
   Note for macOS users: you may need to [increase the memory available](https://docs.docker.com/docker-for-mac/#resources) for containers (we recommend 6 GB for CAPD).
    64  
    65     Note for Linux users: you may need to [increase `ulimit` and `inotify` when using Docker (CAPD)](./troubleshooting.md#cluster-api-with-docker----too-many-open-files).
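
   For example, on Linux the inotify limits can be raised temporarily like this (illustrative values; see the troubleshooting guide for specifics):
   ```bash
   # Illustrative values; adjust per the troubleshooting guide.
   sudo sysctl fs.inotify.max_user_watches=1048576
   sudo sysctl fs.inotify.max_user_instances=8192
   ```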
    66  
    67     </aside>
    68  
    69     [kind] can be used for creating a local Kubernetes cluster for development environments or for
    70     the creation of a temporary [bootstrap cluster] used to provision a target [management cluster] on the selected infrastructure provider.
    71  
    72     The installation procedure depends on the version of kind; if you are planning to use the Docker infrastructure provider,
    73     please follow the additional instructions in the dedicated tab:
    74  
    75     {{#tabs name:"install-kind" tabs:"Default,Docker,KubeVirt"}}
    76     {{#tab Default}}
    77  
    78     Create the kind cluster:
    79     ```bash
    80     kind create cluster
    81     ```
    82     Test to ensure the local kind cluster is ready:
    83     ```bash
    84     kubectl cluster-info
    85     ```
    86  
    87     {{#/tab }}
    88     {{#tab Docker}}
    89  
   Run the following command to create a kind config file that allows the Docker provider to access Docker on the host:
    91  
    92     ```bash
    93     cat > kind-cluster-with-extramounts.yaml <<EOF
    94     kind: Cluster
    95     apiVersion: kind.x-k8s.io/v1alpha4
    96     networking:
    97       ipFamily: dual
    98     nodes:
    99     - role: control-plane
   100       extraMounts:
   101         - hostPath: /var/run/docker.sock
   102           containerPath: /var/run/docker.sock
   103     EOF
   104     ```
   105  
   Then create the management cluster from the above config file, following the instructions for your kind version; for example:
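   ```bash
   kind create cluster --config kind-cluster-with-extramounts.yaml
   ```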
   108  
   109     {{#/tab }}
   110     {{#tab KubeVirt}}
   111  
   112     #### Create the Kind Cluster
   [KubeVirt][KubeVirt] is a cloud native virtualization solution. The virtual machines we're going to create and use for
   the workload cluster's nodes actually run within pods in the management cluster. In order to communicate with
   the workload cluster's API server, we'll need to expose it. Since kind is a limited environment, the easiest
   way to expose the workload cluster's API server (a pod within a node running in a VM that is itself running
   within a pod in the management cluster, which in turn runs inside a Docker container) is to use a LoadBalancer service.

   To allow using a LoadBalancer service, we can't use kind's default CNI (kindnet); we'll need to install
   another CNI, such as Calico. To do that, we first need to create the kind cluster with two modifications:
   1. Disable the default CNI
   2. Add the Docker credentials to the cluster, to avoid the Docker Hub pull rate limit when pulling the Calico images; read more
      about it in the [Docker documentation](https://docs.docker.com/docker-hub/download-rate-limit/), and in the
      [kind documentation](https://kind.sigs.k8s.io/docs/user/private-registries/#mount-a-config-file-to-each-node).
   125  
   Create a configuration file for kind. Note the Docker config file path and adjust it to your local setup:
   127     ```bash
   128     cat <<EOF > kind-config.yaml
   129     kind: Cluster
   130     apiVersion: kind.x-k8s.io/v1alpha4
   131     networking:
     # the default CNI will not be installed
   133       disableDefaultCNI: true
   134     nodes:
   135     - role: control-plane
   136       extraMounts:
   137        - containerPath: /var/lib/kubelet/config.json
   138          hostPath: <YOUR DOCKER CONFIG FILE PATH>
   139     EOF
   140     ```
   141     Now, create the kind cluster with the configuration file:
   142     ```bash
   143     kind create cluster --config=kind-config.yaml
   144     ```
   145     Test to ensure the local kind cluster is ready:
   146     ```bash
   147     kubectl cluster-info
   148     ```
   149  
   150     #### Install the Calico CNI
   Now we'll need to install a CNI. In this example, we're using Calico, but other CNIs should work as well. Please see the
   [Calico installation guide](https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico)
   for more details (use the "Manifest" tab). Below is an example of how to install Calico version v3.24.4.
   154  
   155     Use the Calico manifest to create the required resources; e.g.:
   156     ```bash
   157     kubectl create -f  https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
   158     ```
   159  
   160     {{#/tab }}
   161     {{#/tabs }}
   162  
   163  ### Install clusterctl
   164  The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.
   165  
   166  {{#tabs name:"install-clusterctl" tabs:"Linux,macOS,homebrew,Windows"}}
   167  {{#tab Linux}}
   168  
   169  #### Install clusterctl binary with curl on Linux
If you are unsure, you can determine your computer's architecture by running `uname -a`.
   171  
   172  Download for AMD64:
   173  ```bash
   174  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-amd64" version:"1.7.x"}} -o clusterctl
   175  ```
   176  
   177  Download for ARM64:
   178  ```bash
   179  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-arm64" version:"1.7.x"}} -o clusterctl
   180  ```
   181  
   182  Download for PPC64LE:
   183  ```bash
   184  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-ppc64le" version:"1.7.x"}} -o clusterctl
   185  ```
   186  
   187  Install clusterctl:
   188  ```bash
   189  sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
   190  ```
   191  Test to ensure the version you installed is up-to-date:
   192  ```bash
   193  clusterctl version
   194  ```
   195  
   196  {{#/tab }}
   197  {{#tab macOS}}
   198  
   199  #### Install clusterctl binary with curl on macOS
If you are unsure, you can determine your computer's architecture by running `uname -a`.
   201  
   202  Download for AMD64:
   203  ```bash
   204  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-amd64" version:"1.7.x"}} -o clusterctl
   205  ```
   206  
   207  Download for M1 CPU ("Apple Silicon") / ARM64:
   208  ```bash
   209  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-arm64" version:"1.7.x"}} -o clusterctl
   210  ```
   211  
   212  Make the clusterctl binary executable.
   213  ```bash
   214  chmod +x ./clusterctl
   215  ```
Move the binary into your PATH.
   217  ```bash
   218  sudo mv ./clusterctl /usr/local/bin/clusterctl
   219  ```
   220  Test to ensure the version you installed is up-to-date:
   221  ```bash
   222  clusterctl version
   223  ```
   224  {{#/tab }}
   225  {{#tab homebrew}}
   226  
   227  #### Install clusterctl with homebrew on macOS and Linux
   228  
   229  Install the latest release using homebrew:
   230  
   231  ```bash
   232  brew install clusterctl
   233  ```
   234  
   235  Test to ensure the version you installed is up-to-date:
   236  ```bash
   237  clusterctl version
   238  ```
   239  
   240  {{#/tab }}
{{#tab Windows}}
   242  
   243  #### Install clusterctl binary with curl on Windows using PowerShell
   244  Go to the working directory where you want clusterctl downloaded.
   245  
   246  Download the latest release; on Windows, type:
   247  ```powershell
   248  curl.exe -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-windows-amd64.exe" version:"1.7.x"}} -o clusterctl.exe
   249  ```
   250  Append or prepend the path of that directory to the `PATH` environment variable.
   251  
   252  Test to ensure the version you installed is up-to-date:
   253  ```powershell
   254  clusterctl.exe version
   255  ```
   256  
   257  {{#/tab }}
   258  {{#/tabs }}
   259  
   260  ### Initialize the management cluster
   261  
   262  Now that we've got clusterctl installed and all the prerequisites in place, let's transform the Kubernetes cluster
   263  into a management cluster by using `clusterctl init`.
   264  
   265  The command accepts as input a list of providers to install; when executed for the first time, `clusterctl init`
   266  automatically adds to the list the `cluster-api` core provider, and if unspecified, it also adds the `kubeadm` bootstrap
   267  and `kubeadm` control-plane providers.
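
For instance, the following two invocations are equivalent; the second spells out the defaults that `clusterctl init` adds implicitly (using the Docker provider purely as an example):

```bash
# Short form: the core, kubeadm bootstrap and kubeadm control-plane
# providers are added automatically.
clusterctl init --infrastructure docker

# Equivalent long form with the implicit defaults spelled out.
clusterctl init \
  --core cluster-api \
  --bootstrap kubeadm \
  --control-plane kubeadm \
  --infrastructure docker
```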
   268  
   269  #### Enabling Feature Gates
   270  
   271  Feature gates can be enabled by exporting environment variables before executing `clusterctl init`.
   272  For example, the `ClusterTopology` feature, which is required to enable support for managed topologies and ClusterClass,
   273  can be enabled via:
   274  ```bash
   275  export CLUSTER_TOPOLOGY=true
   276  ```
   277  Additional documentation about experimental features can be found in [Experimental Features].
   278  
   279  #### Initialization for common providers
   280  
   281  Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied
   282  before getting started with Cluster API. See below for the expected settings for common providers.
   283  
   284  {{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,Hetzner,Hivelocity,IBM Cloud,K0smotron,KubeKey,KubeVirt,Metal3,Nutanix,OCI,OpenStack,Outscale,Proxmox,VCD,vcluster,Virtink,vSphere"}}
   285  {{#tab AWS}}
   286  
   287  Download the latest binary of `clusterawsadm` from the [AWS provider releases]. The [clusterawsadm] command line utility assists with identity and access management (IAM) for [Cluster API Provider AWS][capa].
   288  
   289  {{#tabs name:"install-clusterawsadm" tabs:"Linux,macOS,homebrew,Windows"}}
   290  {{#tab Linux}}
   291  
   292  Download the latest release; on Linux, type:
   293  ```
   294  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api-provider-aws" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-linux-amd64" version:">=2.0.0"}} -o clusterawsadm
   295  ```
   296  
   297  Make it executable
   298  ```
   299  chmod +x clusterawsadm
   300  ```
   301  
   302  Move the binary to a directory present in your PATH
   303  ```
   304  sudo mv clusterawsadm /usr/local/bin
   305  ```
   306  
   307  Check version to confirm installation
   308  ```
   309  clusterawsadm version
   310  ```
   311  
   312  **Example Usage**
   313  ```bash
   314  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   315  export AWS_ACCESS_KEY_ID=<your-access-key>
   316  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   317  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   318  
   319  # The clusterawsadm utility takes the credentials that you set as environment
   320  # variables and uses them to create a CloudFormation stack in your AWS account
   321  # with the correct IAM resources.
   322  clusterawsadm bootstrap iam create-cloudformation-stack
   323  
   324  # Create the base64 encoded credentials using clusterawsadm.
   325  # This command uses your environment variables and encodes
   326  # them in a value to be stored in a Kubernetes Secret.
   327  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   328  
   329  # Finally, initialize the management cluster
   330  clusterctl init --infrastructure aws
   331  ```
   332  
   333  {{#/tab }}
   334  {{#tab macOS}}
   335  
Download the latest release; on macOS, type:
   337  ```
   338  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api-provider-aws" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-darwin-amd64" version:">=2.0.0"}} -o clusterawsadm
   339  ```
   340  
Or if your Mac has an M1 CPU ("Apple Silicon"):
   342  ```
   343  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api-provider-aws" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-darwin-arm64" version:">=2.0.0"}} -o clusterawsadm
   344  ```
   345  
   346  Make it executable
   347  ```
   348  chmod +x clusterawsadm
   349  ```
   350  
   351  Move the binary to a directory present in your PATH
   352  ```
   353  sudo mv clusterawsadm /usr/local/bin
   354  ```
   355  
   356  Check version to confirm installation
   357  ```
   358  clusterawsadm version
   359  ```
   360  
   361  **Example Usage**
   362  ```bash
   363  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   364  export AWS_ACCESS_KEY_ID=<your-access-key>
   365  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   366  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   367  
   368  # The clusterawsadm utility takes the credentials that you set as environment
   369  # variables and uses them to create a CloudFormation stack in your AWS account
   370  # with the correct IAM resources.
   371  clusterawsadm bootstrap iam create-cloudformation-stack
   372  
   373  # Create the base64 encoded credentials using clusterawsadm.
   374  # This command uses your environment variables and encodes
   375  # them in a value to be stored in a Kubernetes Secret.
   376  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   377  
   378  # Finally, initialize the management cluster
   379  clusterctl init --infrastructure aws
   380  ```
   381  {{#/tab }}
   382  {{#tab homebrew}}
   383  
   384  Install the latest release using homebrew:
   385  ```
   386  brew install clusterawsadm
   387  ```
   388  
   389  Check version to confirm installation
   390  ```
   391  clusterawsadm version
   392  ```
   393  
   394  **Example Usage**
   395  ```bash
   396  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   397  export AWS_ACCESS_KEY_ID=<your-access-key>
   398  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   399  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   400  
   401  # The clusterawsadm utility takes the credentials that you set as environment
   402  # variables and uses them to create a CloudFormation stack in your AWS account
   403  # with the correct IAM resources.
   404  clusterawsadm bootstrap iam create-cloudformation-stack
   405  
   406  # Create the base64 encoded credentials using clusterawsadm.
   407  # This command uses your environment variables and encodes
   408  # them in a value to be stored in a Kubernetes Secret.
   409  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   410  
   411  # Finally, initialize the management cluster
   412  clusterctl init --infrastructure aws
   413  ```
   414  
   415  {{#/tab }}
   416  {{#tab Windows}}
   417  
   418  Download the latest release; on Windows, type:
   419  ```
   420  curl.exe -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api-provider-aws" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-windows-amd64.exe" version:">=2.0.0"}} -o clusterawsadm.exe
   421  ```
   422  
Append or prepend the path of that directory to the `PATH` environment variable.

Check version to confirm installation
   425  ```
   426  clusterawsadm.exe version
   427  ```
   428  
**Example Usage in PowerShell**
```powershell
   431  $Env:AWS_REGION="us-east-1" # This is used to help encode your environment variables
   432  $Env:AWS_ACCESS_KEY_ID="<your-access-key>"
   433  $Env:AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
   434  $Env:AWS_SESSION_TOKEN="<session-token>" # If you are using Multi-Factor Auth.
   435  
   436  # The clusterawsadm utility takes the credentials that you set as environment
   437  # variables and uses them to create a CloudFormation stack in your AWS account
   438  # with the correct IAM resources.
   439  clusterawsadm bootstrap iam create-cloudformation-stack
   440  
   441  # Create the base64 encoded credentials using clusterawsadm.
   442  # This command uses your environment variables and encodes
   443  # them in a value to be stored in a Kubernetes Secret.
   444  $Env:AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   445  
   446  # Finally, initialize the management cluster
   447  clusterctl init --infrastructure aws
   448  ```
   449  {{#/tab }}
   450  {{#/tabs }}
   451  
   452  See the [AWS provider prerequisites] document for more details.
   453  
   454  {{#/tab }}
   455  {{#tab Azure}}
   456  
   457  For more information about authorization, AAD, or requirements for Azure, visit the [Azure provider prerequisites] document.
   458  
   459  ```bash
   460  export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
   461  
   462  # Create an Azure Service Principal and paste the output here
   463  export AZURE_TENANT_ID="<Tenant>"
   464  export AZURE_CLIENT_ID="<AppId>"
   465  export AZURE_CLIENT_SECRET="<Password>"
   466  
   467  # Base64 encode the variables
   468  export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
   469  export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
   470  export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
   471  export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"
   472  
   473  # Settings needed for AzureClusterIdentity used by the AzureCluster
   474  export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
   475  export CLUSTER_IDENTITY_NAME="cluster-identity"
   476  export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
   477  
   478  # Create a secret to include the password of the Service Principal identity created in Azure
   479  # This secret will be referenced by the AzureClusterIdentity used by the AzureCluster
   480  kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
   481  
   482  # Finally, initialize the management cluster
   483  clusterctl init --infrastructure azure
   484  ```
   485  
   486  {{#/tab }}
   487  {{#tab CloudStack}}
   488  
Create a file named `cloud-config` in the repo's root directory, substituting in your own environment's values:
```
   491  [Global]
   492  api-url = <cloudstackApiUrl>
   493  api-key = <cloudstackApiKey>
   494  secret-key = <cloudstackSecretKey>
   495  ```
   496  
Create the base64 encoded credentials by encoding your credentials file.
This command reads the file's contents and encodes
them in a value to be stored in a Kubernetes Secret.
   500  
   501  ```bash
   502  export CLOUDSTACK_B64ENCODED_SECRET=`cat cloud-config | base64 | tr -d '\n'`
   503  ```
   504  
   505  Finally, initialize the management cluster
   506  ```bash
   507  clusterctl init --infrastructure cloudstack
   508  ```
   509  
   510  {{#/tab }}
   511  {{#tab DigitalOcean}}
   512  
   513  ```bash
   514  export DIGITALOCEAN_ACCESS_TOKEN=<your-access-token>
   515  export DO_B64ENCODED_CREDENTIALS="$(echo -n "${DIGITALOCEAN_ACCESS_TOKEN}" | base64 | tr -d '\n')"
   516  
   517  # Initialize the management cluster
   518  clusterctl init --infrastructure digitalocean
   519  ```
   520  
   521  {{#/tab }}
   522  
   523  {{#tab Docker}}
   524  
   525  <aside class="note warning">
   526  
   527  <h1>Warning</h1>
   528  
   529  The Docker provider is not designed for production use and is intended for development environments only.
   530  
   531  </aside>
   532  
   533  The Docker provider requires the `ClusterTopology` and `MachinePool` features to deploy ClusterClass-based clusters.
   534  We are only supporting ClusterClass-based cluster-templates in this quickstart as ClusterClass makes it possible to
   535  adapt configuration based on Kubernetes version. This is required to install Kubernetes clusters < v1.24 and
   536  for the upgrade from v1.23 to v1.24 as we have to use different cgroupDrivers depending on Kubernetes version.
   537  
   538  ```bash
   539  # Enable the experimental Cluster topology feature.
   540  export CLUSTER_TOPOLOGY=true
   541  
   542  # Initialize the management cluster
   543  clusterctl init --infrastructure docker
   544  ```
   545  
   546  {{#/tab }}
   547  {{#tab Equinix Metal}}
   548  
In order to initialize the Equinix Metal Provider (formerly Packet) you have to export the environment
variable `PACKET_API_KEY`. This variable is used to authorize the infrastructure
provider manager against the Equinix Metal API. You can retrieve your token directly
from the Equinix Metal Console.
   553  
   554  ```bash
   555  export PACKET_API_KEY="34ts3g4s5g45gd45dhdh"
   556  
   557  clusterctl init --infrastructure packet
   558  ```
   559  
   560  {{#/tab }}
   561  {{#tab GCP}}
   562  
   563  ```bash
   564  # Create the base64 encoded credentials by catting your credentials json.
   565  # This command uses your environment variables and encodes
   566  # them in a value to be stored in a Kubernetes Secret.
   567  export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
   568  
   569  # Finally, initialize the management cluster
   570  clusterctl init --infrastructure gcp
   571  ```
   572  
   573  {{#/tab }}
   574  {{#tab Hetzner}}
   575  
   576  Please visit the [Hetzner project][Hetzner provider].
   577  
   578  {{#/tab }}
   579  {{#tab Hivelocity}}
   580  
   581  Please visit the [Hivelocity project][Hivelocity provider].
   582  
   583  {{#/tab }}
   584  {{#tab IBM Cloud}}
   585  
In order to initialize the IBM Cloud Provider you have to export the environment
variable `IBMCLOUD_API_KEY`. This variable is used to authorize the infrastructure
provider manager against the IBM Cloud API. To create one from the UI, refer to [this guide](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key).
   589  
   590  ```bash
export IBMCLOUD_API_KEY=<your_api_key>
   592  
   593  # Finally, initialize the management cluster
   594  clusterctl init --infrastructure ibmcloud
   595  ```
   596  
   597  {{#/tab }}
   598  {{#tab K0smotron}}
   599  
   600  ```bash
   601  # Initialize the management cluster
   602  clusterctl init --infrastructure k0sproject-k0smotron
   603  ```
   604  
   605  {{#/tab }}
   606  {{#tab KubeKey}}
   607  
   608  ```bash
   609  # Initialize the management cluster
   610  clusterctl init --infrastructure kubekey
   611  ```
   612  
   613  {{#/tab }}
   614  {{#tab KubeVirt}}
   615  
   616  Please visit the [KubeVirt project][KubeVirt provider] for more information.
   617  
As described above, we want to use a LoadBalancer service in order to expose the workload cluster's API server. In the
example below, we will use the [MetalLB](https://metallb.universe.tf/) solution to implement load balancing for our kind
cluster. Other solutions should work as well.
   621  
   622  #### Install MetalLB for load balancing
   623  Install MetalLB, as described [here](https://metallb.universe.tf/installation/#installation-by-manifest); for example:
   624  ```bash
   625  METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
   626  kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
   627  kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
   628  kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
   629  ```
   630  
Now, we'll create the `IPAddressPool` and the `L2Advertisement` custom resources. The script below creates the CRs with
addresses that match the kind cluster's network:
   633  ```bash
   634  GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
   635  NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
   636  cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
   637  apiVersion: metallb.io/v1beta1
   638  kind: IPAddressPool
   639  metadata:
   640    name: capi-ip-pool
   641    namespace: metallb-system
   642  spec:
   643    addresses:
   644    - 172.19.255.200-172.19.255.250
   645  ---
   646  apiVersion: metallb.io/v1beta1
   647  kind: L2Advertisement
   648  metadata:
   649    name: empty
   650    namespace: metallb-system
   651  EOF
   652  ```
   653  
   654  #### Install KubeVirt on the kind cluster
   655  ```bash
   656  # get KubeVirt version
   657  KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
   658  # deploy required CRDs
   659  kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
   660  # deploy the KubeVirt custom resource
   661  kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
   662  kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
   663  ```
   664  
   665  #### Initialize the management cluster with the KubeVirt Provider
   666  ```bash
   667  clusterctl init --infrastructure kubevirt
   668  ```
   669  
   670  {{#/tab }}
   671  {{#tab Metal3}}
   672  
   673  Please visit the [Metal3 project][Metal3 provider].
   674  
   675  {{#/tab }}
   676  {{#tab Nutanix}}
   677  
   678  Please follow the Cluster API Provider for [Nutanix Getting Started Guide](https://opendocs.nutanix.com/capx/latest/getting_started/)
   679  
   680  {{#/tab }}
   681  {{#tab OCI}}
   682  
   683  Please follow the Cluster API Provider for [Oracle Cloud Infrastructure (OCI) Getting Started Guide][oci-provider]
   684  
   685  {{#/tab }}
   686  {{#tab OpenStack}}
   687  
   688  ```bash
   689  # Initialize the management cluster
   690  clusterctl init --infrastructure openstack
   691  ```
   692  
   693  {{#/tab }}
   694  
   695  {{#tab Outscale}}
   696  
   697  ```bash
   698  export OSC_SECRET_KEY=<your-secret-key>
   699  export OSC_ACCESS_KEY=<your-access-key>
export OSC_REGION=<your-region>
   701  # Create namespace
   702  kubectl create namespace cluster-api-provider-outscale-system
   703  # Create secret
   704  kubectl create secret generic cluster-api-provider-outscale --from-literal=access_key=${OSC_ACCESS_KEY} --from-literal=secret_key=${OSC_SECRET_KEY} --from-literal=region=${OSC_REGION}  -n cluster-api-provider-outscale-system
   705  # Initialize the management cluster
   706  clusterctl init --infrastructure outscale
   707  ```
   708  
   709  {{#/tab }}
   710  
   711  {{#tab Proxmox}}
   712  
   713  First, we need to add the IPAM provider to your [clusterctl config file](../clusterctl/configuration.md) (`$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml`):
   714  
   715  ```yaml
   716  providers:
   717    - name: in-cluster
   718      url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
   719      type: IPAMProvider
   720  ```
   721  
   722  ```bash
   723  # The host for the Proxmox cluster
   724  export PROXMOX_URL="https://pve.example:8006"
   725  # The Proxmox token ID to access the remote Proxmox endpoint
   726  export PROXMOX_TOKEN='root@pam!capi'
   727  # The secret associated with the token ID
   728  # You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
   729  # bash history
   730  export PROXMOX_SECRET="1234-1234-1234-1234"
   731  
   732  
   733  # Finally, initialize the management cluster
   734  clusterctl init --infrastructure proxmox --ipam in-cluster
   735  ```
   736  
   737  For more information about the CAPI provider for Proxmox, see the [Proxmox
   738  project][Proxmox getting started guide].
   739  
   740  {{#/tab }}
   741  
   742  {{#tab VCD}}
   743  
   744  Please follow the Cluster API Provider for [Cloud Director Getting Started Guide](https://github.com/vmware/cluster-api-provider-cloud-director/blob/main/README.md)
   745  
   746  ```bash
   747  # Initialize the management cluster
   748  clusterctl init --infrastructure vcd
   749  ```
   750  
   751  {{#/tab }}
   752  {{#tab vcluster}}
   753  
   754  ```bash
   755  clusterctl init --infrastructure vcluster
   756  ```
   757  
   758  Please follow the Cluster API Provider for [vcluster Quick Start Guide](https://github.com/loft-sh/cluster-api-provider-vcluster/blob/main/docs/quick-start.md)
   759  
   760  {{#/tab }}
   761  {{#tab Virtink}}
   762  
   763  ```bash
   764  # Initialize the management cluster
   765  clusterctl init --infrastructure virtink
   766  ```
   767  
   768  {{#/tab }}
   769  {{#tab vSphere}}
   770  
   771  ```bash
   772  # The username used to access the remote vSphere endpoint
   773  export VSPHERE_USERNAME="vi-admin@vsphere.local"
   774  # The password used to access the remote vSphere endpoint
   775  # You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
   776  # bash history
   777  export VSPHERE_PASSWORD="admin!23"
   778  
   779  # Finally, initialize the management cluster
   780  clusterctl init --infrastructure vsphere
   781  ```
   782  
   783  For more information about prerequisites, credentials management, or permissions for vSphere, see the [vSphere
   784  project][vSphere getting started guide].
   785  
   786  {{#/tab }}
   787  {{#/tabs }}
   788  
   789  The output of `clusterctl init` is similar to this:
   790  
   791  ```bash
   792  Fetching providers
   793  Installing cert-manager Version="v1.11.0"
   794  Waiting for cert-manager to be available...
   795  Installing Provider="cluster-api" Version="v1.0.0" TargetNamespace="capi-system"
   796  Installing Provider="bootstrap-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-bootstrap-system"
   797  Installing Provider="control-plane-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-control-plane-system"
   798  Installing Provider="infrastructure-docker" Version="v1.0.0" TargetNamespace="capd-system"
   799  
   800  Your management cluster has been initialized successfully!
   801  
   802  You can now create your first workload cluster by running the following:
   803  
   804    clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
   805  ```
   806  
   807  <aside class="note">
   808  
   809  <h1>Alternatives to environment variables</h1>
   810  
   811  Throughout this quickstart guide we've given instructions on setting parameters using environment variables. For most
   812  environment variables in the rest of the guide, you can also set them in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml`
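
For example, a variable can be persisted in the clusterctl configuration file instead of being exported in every shell session (a minimal sketch):

```bash
# Append a variable to the clusterctl config file instead of exporting it.
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/cluster-api"
cat >> "${XDG_CONFIG_HOME:-$HOME/.config}/cluster-api/clusterctl.yaml" <<EOF
CLUSTER_TOPOLOGY: "true"
EOF
```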
   813  
   814  See [`clusterctl init`](../clusterctl/commands/init.md) for more details.
   815  
   816  </aside>
   817  
   818  ### Create your first workload cluster
   819  
   820  Once the management cluster is ready, you can create your first workload cluster.
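
Before generating the cluster, it can be useful to double-check that the provider controllers are running; clusterctl labels all provider components, so a sketch like the following (assuming the standard `cluster.x-k8s.io/provider` label) lists them:

```bash
# List the deployments installed by clusterctl; all should be Available.
kubectl get deployments --all-namespaces -l cluster.x-k8s.io/provider
```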
   821  
   822  #### Preparing the workload cluster configuration
   823  
   824  The `clusterctl generate cluster` command returns a YAML template for creating a [workload cluster].
   825  
   826  <aside class="note">
   827  
   828  <h1> Which provider will be used for my cluster? </h1>
   829  
   830  The `clusterctl generate cluster` command uses smart defaults in order to simplify the user experience; for example,
   831  if only the `aws` infrastructure provider is deployed, it detects and uses that when creating the cluster.
   832  
   833  </aside>
   834  
   835  <aside class="note">
   836  
   837  <h1> What topology will be used for my cluster? </h1>
   838  
   839  The `clusterctl generate cluster` command by default uses cluster templates which are provided by the infrastructure
   840  providers. See the provider's documentation for more information.
   841  
See the `clusterctl generate cluster` [command][clusterctl generate cluster] documentation for
details about how to use alternative sources for cluster templates.
   844  
   845  </aside>
   846  
   847  #### Required configuration for common providers
   848  
   849  Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied
   850  before configuring a cluster with Cluster API. Instructions are provided for common providers below.
   851  
Otherwise, you can look at the `clusterctl generate cluster` [command][clusterctl generate cluster] documentation for details about how to
discover the list of variables required by a cluster template.
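
For example, using the Docker provider (purely as an illustration):

```bash
clusterctl generate cluster capi-quickstart \
  --infrastructure docker \
  --list-variables
```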
   854  
   855  {{#tabs name:"tab-configuration-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,IBM Cloud,K0smotron,KubeKey,KubeVirt,Metal3,Nutanix,OpenStack,Outscale,Proxmox,VCD,vcluster,Virtink,vSphere"}}
   856  {{#tab AWS}}
   857  
   858  ```bash
   859  export AWS_REGION=us-east-1
   860  export AWS_SSH_KEY_NAME=default
   861  # Select instance types
   862  export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
   863  export AWS_NODE_MACHINE_TYPE=t3.large
   864  ```
   865  
   866  See the [AWS provider prerequisites] document for more details.
   867  
   868  {{#/tab }}
   869  {{#tab Azure}}
   870  
   871  <aside class="note warning">
   872  
   873  <h1>Warning</h1>
   874  
   875  Make sure you choose a VM size which is available in the desired location for your subscription. To see available SKUs, use `az vm list-skus -l <your_location> -r virtualMachines -o table`
   876  
   877  </aside>
   878  
   879  ```bash
   880  # Name of the Azure datacenter location. Change this value to your desired location.
   881  export AZURE_LOCATION="centralus"
   882  
   883  # Select VM types.
   884  export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
   885  export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
   886  
   887  # [Optional] Select resource group. The default value is ${CLUSTER_NAME}.
   888  export AZURE_RESOURCE_GROUP="<ResourceGroupName>"
   889  ```
   890  
   891  {{#/tab }}
   892  {{#tab CloudStack}}
   893  
   894  A Cluster API compatible image must be available in your CloudStack installation. For instructions on how to build a compatible image
   895  see [image-builder (CloudStack)](https://image-builder.sigs.k8s.io/capi/providers/cloudstack.html)
   896  
   897  Prebuilt images can be found [here](http://packages.shapeblue.com/cluster-api-provider-cloudstack/images/)
   898  
   899  To see all required CloudStack environment variables execute:
   900  ```bash
   901  clusterctl generate cluster --infrastructure cloudstack --list-variables capi-quickstart
   902  ```
   903  
In addition, the following CloudStack environment variables are required.
   905  ```bash
   906  # Set this to the name of the zone in which to deploy the cluster
   907  export CLOUDSTACK_ZONE_NAME=<zone name>
   908  # The name of the network on which the VMs will reside
   909  export CLOUDSTACK_NETWORK_NAME=<network name>
   910  # The endpoint of the workload cluster
   911  export CLUSTER_ENDPOINT_IP=<cluster endpoint address>
   912  export CLUSTER_ENDPOINT_PORT=<cluster endpoint port>
   913  # The service offering of the control plane nodes
   914  export CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING=<control plane service offering name>
   915  # The service offering of the worker nodes
   916  export CLOUDSTACK_WORKER_MACHINE_OFFERING=<worker node service offering name>
   917  # The capi compatible template to use
   918  export CLOUDSTACK_TEMPLATE_NAME=<template name>
   919  # The ssh key to use to log into the nodes
   920  export CLOUDSTACK_SSH_KEY_NAME=<ssh key name>
   921  
   922  ```
   923  
   924  A full configuration reference can be found in [configuration.md](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/blob/master/docs/book/src/clustercloudstack/configuration.md).
   925  
   926  {{#/tab }}
   927  {{#tab DigitalOcean}}
   928  
   929  A ClusterAPI compatible image must be available in your DigitalOcean account. For instructions on how to build a compatible image
   930  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
   931  
   932  ```bash
   933  export DO_REGION=nyc1
   934  export DO_SSH_KEY_FINGERPRINT=<your-ssh-key-fingerprint>
   935  export DO_CONTROL_PLANE_MACHINE_TYPE=s-2vcpu-2gb
   936  export DO_CONTROL_PLANE_MACHINE_IMAGE=<your-capi-image-id>
   937  export DO_NODE_MACHINE_TYPE=s-2vcpu-2gb
export DO_NODE_MACHINE_IMAGE=<your-capi-image-id>
   939  ```
   940  
   941  {{#/tab }}
   942  
   943  {{#tab Docker}}
   944  
   945  <aside class="note warning">
   946  
   947  <h1>Warning</h1>
   948  
   949  The Docker provider is not designed for production use and is intended for development environments only.
   950  
   951  </aside>
   952  
   953  The Docker provider does not require additional configurations for cluster templates.
   954  
   955  However, if you require special network settings you can set the following environment variables:
   956  
   957  ```bash
# The list of service CIDRs, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]

# The list of pod CIDRs, default ["192.168.0.0/16"]
   962  export POD_CIDR=["192.168.0.0/16"]
   963  
   964  # The service domain, default "cluster.local"
   965  export SERVICE_DOMAIN="k8s.test"
   966  ```
   967  
It is also possible, but **not recommended**, to disable the [Pod Security Standard](../security/pod-security-standards.md), which is enabled by default:
   969  ```bash
   970  export POD_SECURITY_STANDARD_ENABLED="false"
   971  ```
   972  
   973  {{#/tab }}
   974  {{#tab Equinix Metal}}
   975  
   976  There are several required variables you need to set to create a cluster. There
   977  are also a few optional tunables if you'd like to change the OS or CIDRs used.
   978  
   979  ```bash
   980  # Required (made up examples shown)
# The project where your cluster will be placed.
# You have to get one from the Equinix Metal Console if you don't have one already.
export PROJECT_ID="2b59569f-10d1-49a6-a000-c2fb95a959a1"
# The metro to use; this takes advantage of automated, interconnected bare metal across Equinix Metal's global metros.
   985  export METRO="da"
   986  # What plan to use for your control plane nodes
   987  export CONTROLPLANE_NODE_TYPE="m3.small.x86"
   988  # What plan to use for your worker nodes
   989  export WORKER_NODE_TYPE="m3.small.x86"
   990  # The ssh key you would like to have access to the nodes
   991  export SSH_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvMgVEubPLztrvVKgNPnRe9sZSjAqaYj9nmCkgr4PdK username@computer"
   992  export CLUSTER_NAME="my-cluster"
   993  
   994  # Optional (defaults shown)
   995  export NODE_OS="ubuntu_18_04"
   996  export POD_CIDR="192.168.0.0/16"
   997  export SERVICE_CIDR="172.26.0.0/16"
   998  # Only relevant if using the kube-vip flavor
   999  export KUBE_VIP_VERSION="v0.5.0"
  1000  ```
  1001  
  1002  {{#/tab }}
  1003  {{#tab GCP}}
  1004  
  1005  
  1006  ```bash
  1007  # Name of the GCP datacenter location. Change this value to your desired location
  1008  export GCP_REGION="<GCP_REGION>"
  1009  export GCP_PROJECT="<GCP_PROJECT>"
# Make sure to use the same Kubernetes version here as was used to build the GCE image
  1011  export KUBERNETES_VERSION=1.23.3
  1012  # This is the image you built. See https://github.com/kubernetes-sigs/image-builder
  1013  export IMAGE_ID=projects/$GCP_PROJECT/global/images/<built image>
  1014  export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
  1015  export GCP_NODE_MACHINE_TYPE=n1-standard-2
  1016  export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
  1017  export CLUSTER_NAME="<CLUSTER_NAME>"
  1018  ```
  1019  
  1020  See the [GCP provider] for more information.
  1021  
  1022  {{#/tab }}
  1023  {{#tab IBM Cloud}}
  1024  
  1025  ```bash
  1026  # Required environment variables for VPC
  1027  # VPC region
  1028  export IBMVPC_REGION=us-south
  1029  # VPC zone within the region
  1030  export IBMVPC_ZONE=us-south-1
  1031  # ID of the resource group in which the VPC will be created
  1032  export IBMVPC_RESOURCEGROUP=<your-resource-group-id>
  1033  # Name of the VPC
  1034  export IBMVPC_NAME=ibm-vpc-0
export IBMVPC_IMAGE_ID=<your-image-id>
  1036  # Profile for the virtual server instances
  1037  export IBMVPC_PROFILE=bx2-4x16
  1038  export IBMVPC_SSHKEY_ID=<your-sshkey-id>
  1039  
  1040  # Required environment variables for PowerVS
  1041  export IBMPOWERVS_SSHKEY_NAME=<your-ssh-key>
  1042  # Internal and external IP of the network
  1043  export IBMPOWERVS_VIP=<internal-ip>
  1044  export IBMPOWERVS_VIP_EXTERNAL=<external-ip>
  1045  export IBMPOWERVS_VIP_CIDR=29
  1046  export IBMPOWERVS_IMAGE_NAME=<your-capi-image-name>
  1047  # ID of the PowerVS service instance
  1048  export IBMPOWERVS_SERVICE_INSTANCE_ID=<service-instance-id>
  1049  export IBMPOWERVS_NETWORK_NAME=<your-capi-network-name>
  1050  ```
  1051  
  1052  Please visit the [IBM Cloud provider] for more information.
  1053  
  1054  {{#/tab }}
  1055  {{#tab K0smotron}}
  1056  
  1057  Please visit the [K0smotron provider] for more information.
  1058  
  1059  {{#/tab }}
  1060  {{#tab KubeKey}}
  1061  
  1062  ```bash
  1063  # Required environment variables
  1064  # The KKZONE is used to specify where to download the binaries. (e.g. "", "cn")
  1065  export KKZONE=""
# The SSH user name of the Linux user on all instances. (e.g. root, ubuntu)
export USER_NAME=<your-linux-user>
# The SSH password of the Linux user on all instances.
export PASSWORD=<your-linux-user-password>
# The SSH IP addresses of all instances. (e.g. "[{address: 192.168.100.3}, {address: 192.168.100.4}]")
  1071  export INSTANCES=<your-linux-ip-address>
  1072  # The cluster control plane VIP. (e.g. "192.168.100.100")
  1073  export CONTROL_PLANE_ENDPOINT_IP=<your-control-plane-virtual-ip>
  1074  ```
  1075  
  1076  Please visit the [KubeKey provider] for more information.
  1077  
  1078  {{#/tab }}
  1079  {{#tab KubeVirt}}
  1080  
  1081  ```bash
  1082  export CAPK_GUEST_K8S_VERSION="v1.23.10"
  1083  export CRI_PATH="/var/run/containerd/containerd.sock"
  1084  export NODE_VM_IMAGE_TEMPLATE="quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}"
  1085  ```
  1086  Please visit the [KubeVirt project][KubeVirt provider] for more information.
  1087  
  1088  {{#/tab }}
  1089  {{#tab Metal3}}
  1090  
**Note**: If you are running a CAPM3 release prior to v0.5.0, make sure to export the following
environment variables. They do not need to be exported if you use
CAPM3 release v0.5.0 or higher.
  1094  
  1095  ```bash
  1096  # The URL of the kernel to deploy.
  1097  export DEPLOY_KERNEL_URL="http://172.22.0.1:6180/images/ironic-python-agent.kernel"
  1098  # The URL of the ramdisk to deploy.
  1099  export DEPLOY_RAMDISK_URL="http://172.22.0.1:6180/images/ironic-python-agent.initramfs"
  1100  # The URL of the Ironic endpoint.
  1101  export IRONIC_URL="http://172.22.0.1:6385/v1/"
  1102  # The URL of the Ironic inspector endpoint.
  1103  export IRONIC_INSPECTOR_URL="http://172.22.0.1:5050/v1/"
  1104  # Do not use a dedicated CA certificate for Ironic API. Any value provided in this variable disables additional CA certificate validation.
  1105  # To provide a CA certificate, leave this variable unset. If unset, then IRONIC_CA_CERT_B64 must be set.
  1106  export IRONIC_NO_CA_CERT=true
  1107  # Disables basic authentication for Ironic API. Any value provided in this variable disables authentication.
  1108  # To enable authentication, leave this variable unset. If unset, then IRONIC_USERNAME and IRONIC_PASSWORD must be set.
  1109  export IRONIC_NO_BASIC_AUTH=true
  1110  # Disables basic authentication for Ironic inspector API. Any value provided in this variable disables authentication.
  1111  # To enable authentication, leave this variable unset. If unset, then IRONIC_INSPECTOR_USERNAME and IRONIC_INSPECTOR_PASSWORD must be set.
  1112  export IRONIC_INSPECTOR_NO_BASIC_AUTH=true
  1113  ```
  1114  
  1115  Please visit the [Metal3 getting started guide] for more details.
  1116  
  1117  {{#/tab }}
  1118  {{#tab Nutanix}}
  1119  
  1120  A ClusterAPI compatible image must be available in your Nutanix image library. For instructions on how to build a compatible image
  1121  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1122  
  1123  To see all required Nutanix environment variables execute:
  1124  ```bash
  1125  clusterctl generate cluster --infrastructure nutanix --list-variables capi-quickstart
  1126  ```
  1127  
  1128  {{#/tab }}
  1129  {{#tab OpenStack}}
  1130  
A ClusterAPI compatible image must be available in your OpenStack installation. For instructions on how to build a compatible image
see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1133  Depending on your OpenStack and underlying hypervisor the following options might be of interest:
  1134  * [image-builder (OpenStack)](https://image-builder.sigs.k8s.io/capi/providers/openstack.html)
  1135  * [image-builder (vSphere)](https://image-builder.sigs.k8s.io/capi/providers/vsphere.html)
  1136  
  1137  To see all required OpenStack environment variables execute:
  1138  ```bash
  1139  clusterctl generate cluster --infrastructure openstack --list-variables capi-quickstart
  1140  ```
  1141  
  1142  The following script can be used to export some of them:
  1143  ```bash
  1144  wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
  1145  source /tmp/env.rc <path/to/clouds.yaml> <cloud>
  1146  ```
  1147  
  1148  Apart from the script, the following OpenStack environment variables are required.
  1149  ```bash
  1150  # The list of nameservers for OpenStack Subnet being created.
# Set this value when you need to create a new network/subnet and access through DNS is required.
  1152  export OPENSTACK_DNS_NAMESERVERS=<dns nameserver>
  1153  # FailureDomain is the failure domain the machine will be created in.
  1154  export OPENSTACK_FAILURE_DOMAIN=<availability zone name>
  1155  # The flavor reference for the flavor for your server instance.
  1156  export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor>
  1157  # The flavor reference for the flavor for your server instance.
  1158  export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor>
# The name of the image to use for your server instance. If the RootVolume is specified, this will be ignored and the rootVolume will be used directly.
  1160  export OPENSTACK_IMAGE_NAME=<image name>
  1161  # The SSH key pair name
  1162  export OPENSTACK_SSH_KEY_NAME=<ssh key pair name>
  1163  # The external network
  1164  export OPENSTACK_EXTERNAL_NETWORK_ID=<external network ID>
  1165  ```
  1166  
  1167  A full configuration reference can be found in [configuration.md](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/docs/book/src/clusteropenstack/configuration.md).
  1168  
  1169  {{#/tab }}
  1170  {{#tab Outscale}}
  1171  
  1172  A ClusterAPI compatible image must be available in your Outscale account. For instructions on how to build a compatible image
  1173  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1174  
  1175  ```bash
  1176  # The outscale root disk iops
  1177  export OSC_IOPS="<IOPS>"
  1178  # The outscale root disk size
  1179  export OSC_VOLUME_SIZE="<VOLUME_SIZE>"
  1180  # The outscale root disk volumeType
  1181  export OSC_VOLUME_TYPE="<VOLUME_TYPE>"
  1182  # The outscale key pair
  1183  export OSC_KEYPAIR_NAME="<KEYPAIR_NAME>"
  1184  # The outscale subregion name
  1185  export OSC_SUBREGION_NAME="<SUBREGION_NAME>"
  1186  # The outscale vm type
  1187  export OSC_VM_TYPE="<VM_TYPE>"
  1188  # The outscale image name
  1189  export OSC_IMAGE_NAME="<IMAGE_NAME>"
  1190  ```
  1191  
  1192  {{#/tab }}
  1193  {{#tab Proxmox}}
  1194  
  1195  A ClusterAPI compatible image must be available in your Proxmox cluster. For instructions on how to build a compatible VM template
  1196  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1197  
  1198  ```bash
  1199  # The node that hosts the VM template to be used to provision VMs
  1200  export PROXMOX_SOURCENODE="pve"
  1201  # The template VM ID used for cloning VMs
  1202  export TEMPLATE_VMID=100
  1203  # The ssh authorized keys used to ssh to the machines.
  1204  export VM_SSH_KEYS="ssh-ed25519 ..., ssh-ed25519 ..."
  1205  # The IP address used for the control plane endpoint
  1206  export CONTROL_PLANE_ENDPOINT_IP=10.10.10.4
  1207  # The IP ranges for Cluster nodes
  1208  export NODE_IP_RANGES="[10.10.10.5-10.10.10.50, 10.10.10.55-10.10.10.70]"
  1209  # The gateway for the machines network-config.
  1210  export GATEWAY="10.10.10.1"
  1211  # Subnet Mask in CIDR notation for your node IP ranges
  1212  export IP_PREFIX=24
  1213  # The Proxmox network device for VMs
  1214  export BRIDGE="vmbr1"
  1215  # The dns nameservers for the machines network-config.
  1216  export DNS_SERVERS="[8.8.8.8,8.8.4.4]"
  1217  # The Proxmox nodes used for VM deployments
  1218  export ALLOWED_NODES="[pve1,pve2,pve3]"
  1219  ```
  1220  
  1221  For more information about prerequisites and advanced setups for Proxmox, see the [Proxmox getting started guide].
  1222  
  1223  {{#/tab }}
  1224  {{#tab VCD}}
  1225  
  1226  A ClusterAPI compatible image must be available in your VCD catalog. For instructions on how to build and upload a compatible image
  1227  see [CAPVCD](https://github.com/vmware/cluster-api-provider-cloud-director)
  1228  
  1229  To see all required VCD environment variables execute:
  1230  ```bash
  1231  clusterctl generate cluster --infrastructure vcd --list-variables capi-quickstart
  1232  ```
  1233  
  1234  
  1235  {{#/tab }}
  1236  {{#tab vcluster}}
  1237  
  1238  ```bash
  1239  export CLUSTER_NAME=kind
  1240  export CLUSTER_NAMESPACE=vcluster
  1241  export KUBERNETES_VERSION=1.23.4
  1242  export HELM_VALUES="service:\n  type: NodePort"
  1243  ```
  1244  
  1245  Please see the [vcluster installation instructions](https://github.com/loft-sh/cluster-api-provider-vcluster#installation-instructions) for more details.
  1246  
  1247  {{#/tab }}
  1248  {{#tab Virtink}}
  1249  
  1250  To see all required Virtink environment variables execute:
  1251  ```bash
  1252  clusterctl generate cluster --infrastructure virtink --list-variables capi-quickstart
  1253  ```
  1254  
  1255  See the [Virtink provider](https://github.com/smartxworks/cluster-api-provider-virtink) document for more details.
  1256  
  1257  {{#/tab }}
  1258  {{#tab vSphere}}
  1259  
It is required to use official CAPV machine images for your vSphere VM templates. See [uploading CAPV machine images][capv-upload-images] for instructions on how to do this.
  1261  
  1262  ```bash
  1263  # The vCenter server IP or FQDN
  1264  export VSPHERE_SERVER="10.0.0.1"
  1265  # The vSphere datacenter to deploy the management cluster on
  1266  export VSPHERE_DATACENTER="SDDC-Datacenter"
  1267  # The vSphere datastore to deploy the management cluster on
  1268  export VSPHERE_DATASTORE="vsanDatastore"
  1269  # The VM network to deploy the management cluster on
  1270  export VSPHERE_NETWORK="VM Network"
  1271  # The vSphere resource pool for your VMs
  1272  export VSPHERE_RESOURCE_POOL="*/Resources"
  1273  # The VM folder for your VMs. Set to "" to use the root vSphere folder
  1274  export VSPHERE_FOLDER="vm"
  1275  # The VM template to use for your VMs
  1276  export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
  1277  # The public ssh authorized key on all machines
  1278  export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
  1279  # The certificate thumbprint for the vCenter server
  1280  export VSPHERE_TLS_THUMBPRINT="97:48:03:8D:78:A9..."
  1281  # The storage policy to be used (optional). Set to "" if not required
  1282  export VSPHERE_STORAGE_POLICY="policy-one"
  1283  # The IP address used for the control plane endpoint
  1284  export CONTROL_PLANE_ENDPOINT_IP="1.2.3.4"
  1285  ```
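
If you don't have the vCenter certificate thumbprint at hand, one common way to retrieve it (a sketch assuming `openssl` is installed and the vCenter endpoint is reachable on port 443) is:

```bash
# Fetch the server certificate and print its SHA-1 fingerprint
echo | openssl s_client -connect "${VSPHERE_SERVER}:443" 2>/dev/null \
  | openssl x509 -sha1 -fingerprint -noout
```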
  1286  
  1287  For more information about prerequisites, credentials management, or permissions for vSphere, see the [vSphere getting started guide].
  1288  
  1289  {{#/tab }}
  1290  {{#/tabs }}
  1291  
  1292  #### Generating the cluster configuration
  1293  
For the purpose of this tutorial, we'll name our cluster `capi-quickstart`.
  1295  
  1296  {{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, KubeVirt, Other providers..."}}
  1297  {{#tab Docker}}
  1298  
  1299  <aside class="note warning">
  1300  
  1301  <h1>Warning</h1>
  1302  
  1303  The Docker provider is not designed for production use and is intended for development environments only.
  1304  
  1305  </aside>
  1306  
  1307  ```bash
  1308  clusterctl generate cluster capi-quickstart --flavor development \
  1309    --kubernetes-version v1.29.2 \
  1310    --control-plane-machine-count=3 \
  1311    --worker-machine-count=3 \
  1312    > capi-quickstart.yaml
  1313  ```
  1314  
  1315  {{#/tab }}
  1316  {{#tab vcluster}}
  1317  
  1318  ```bash
  1319  export CLUSTER_NAME=kind
  1320  export CLUSTER_NAMESPACE=vcluster
  1321  export KUBERNETES_VERSION=1.28.0
  1322  export HELM_VALUES="service:\n  type: NodePort"
  1323  
  1324  kubectl create namespace ${CLUSTER_NAMESPACE}
  1325  clusterctl generate cluster ${CLUSTER_NAME} \
  1326      --infrastructure vcluster \
  1327      --kubernetes-version ${KUBERNETES_VERSION} \
  1328      --target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
  1329  ```
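
You can then check that the Cluster object was created in the target namespace, for example with:

```bash
kubectl get cluster -n ${CLUSTER_NAME SPACE:-vcluster}
```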
  1330  
  1331  {{#/tab }}
  1332  {{#tab KubeVirt}}
  1333  
As we described above, in this tutorial we use a LoadBalancer service to expose the API server of the
workload cluster, so we want to use the load balancer (lb) template rather than the default one. We'll use
clusterctl's `--flavor` flag for that:
  1337  ```bash
  1338  clusterctl generate cluster capi-quickstart \
  1339    --infrastructure="kubevirt" \
  1340    --flavor lb \
  1341    --kubernetes-version ${CAPK_GUEST_K8S_VERSION} \
  1342    --control-plane-machine-count=1 \
  1343    --worker-machine-count=1 \
  1344    > capi-quickstart.yaml
  1345  ```
  1346  
  1347  {{#/tab }}
  1348  {{#tab Other providers...}}
  1349  
  1350  ```bash
  1351  clusterctl generate cluster capi-quickstart \
  1352    --kubernetes-version v1.29.2 \
  1353    --control-plane-machine-count=3 \
  1354    --worker-machine-count=3 \
  1355    > capi-quickstart.yaml
  1356  ```
  1357  
  1358  {{#/tab }}
  1359  {{#/tabs }}
  1360  
This creates a YAML file named `capi-quickstart.yaml` with a predefined list of Cluster API objects: Cluster, Machines,
Machine Deployments, etc.

The file can be modified with your editor of choice before it is applied.
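
If you want a quick sanity check before applying, kubectl can do a client-side dry run of the generated manifest; this parses the objects without contacting the cluster:

```bash
kubectl apply --dry-run=client -f capi-quickstart.yaml
```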
  1365  
  1366  See [clusterctl generate cluster] for more details.
  1367  
  1368  #### Apply the workload cluster
  1369  
  1370  When ready, run the following command to apply the cluster manifest.
  1371  
  1372  ```bash
  1373  kubectl apply -f capi-quickstart.yaml
  1374  ```
  1375  
  1376  The output is similar to this:
  1377  
  1378  ```bash
  1379  cluster.cluster.x-k8s.io/capi-quickstart created
  1380  dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
  1381  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
  1382  dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
  1383  machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
  1384  dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
  1385  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
  1386  ```
  1387  
  1388  #### Accessing the workload cluster
  1389  
The cluster will now start provisioning. You can check its status with:

```bash
kubectl get cluster
```

and see output similar to this:

```bash
NAME              PHASE         AGE   VERSION
capi-quickstart   Provisioned   8s    v1.29.2
```

You can also get an "at a glance" view of the cluster and its resources by running:

```bash
clusterctl describe cluster capi-quickstart
```
  1408  
  1409  To verify the first control plane is up:
  1410  
  1411  ```bash
  1412  kubectl get kubeadmcontrolplane
  1413  ```
  1414  
You should see output similar to this:
  1416  
  1417  ```bash
  1418  NAME                    CLUSTER           INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
  1419  capi-quickstart-g2trk   capi-quickstart   true                                 3                  3         3             4m7s   v1.29.2
  1420  ```
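
You can also follow the individual machines as they are provisioned:

```bash
kubectl get machines
```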
  1421  
  1422  <aside class="note warning">
  1423  
  1424  <h1> Warning </h1>
  1425  
  1426  The control plane won't be `Ready` until we install a CNI in the next step.
  1427  
  1428  </aside>
  1429  
  1430  After the first control plane node is up and running, we can retrieve the [workload cluster] Kubeconfig.
  1431  
{{#tabs name:"tab-get-kubeconfig" tabs:"Default,Docker"}}
{{#tab Default}}
  1436  
  1437  ```bash
  1438  clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
  1439  ```
  1440  
  1441  {{#/tab }}
  1442  
  1443  {{#tab Docker}}
For Docker Desktop on macOS, Linux, or Windows, use kind to retrieve the kubeconfig. Docker Engine for Linux works with the default clusterctl approach.
  1445  
  1446  ```bash
  1447  kind get kubeconfig --name capi-quickstart > capi-quickstart.kubeconfig
  1448  ```
  1449  
  1450  <aside class="note warning">
  1451  
Note: To use the default clusterctl method to retrieve the kubeconfig for a workload cluster created with the Docker provider when using Docker Desktop, see [Additional Notes for the Docker provider](../clusterctl/developers.md#additional-notes-for-the-docker-provider).
  1453  
  1454  </aside>
  1455  
  1456  {{#/tab }}
  1457  {{#/tabs }}
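
You can point kubectl at the workload cluster right away; note that the nodes will report `NotReady` until a CNI is deployed in a later step:

```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
```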
  1458  
  1459  ### Install a Cloud Provider
  1460  
  1461  The Kubernetes in-tree cloud provider implementations are being [removed](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers) in favor of external cloud providers (also referred to as "out-of-tree"). This requires deploying a new component called the cloud-controller-manager which is responsible for running all the cloud specific controllers that were previously run in the kube-controller-manager. To learn more, see [this blog post](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/).
  1462  
  1463  {{#tabs name:"tab-install-cloud-provider" tabs:"Azure,OpenStack"}}
  1464  {{#tab Azure}}
  1465  
  1466  Install the official cloud-provider-azure Helm chart on the workload cluster:
  1467  
  1468  ```bash
helm install --kubeconfig=./capi-quickstart.kubeconfig \
  --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo \
  cloud-provider-azure --generate-name \
  --set infra.clusterName=capi-quickstart \
  --set cloudControllerManager.clusterCIDR="192.168.0.0/16"
  1470  ```
  1471  
  1472  For more information, see the [CAPZ book](https://capz.sigs.k8s.io/topics/addons.html).
  1473  
  1474  {{#/tab }}
  1475  {{#tab OpenStack}}
  1476  
  1477  Before deploying the OpenStack external cloud provider, configure the `cloud.conf` file for integration with your OpenStack environment:
  1478  
  1479  ```bash
  1480  cat > cloud.conf <<EOF
  1481  [Global]
  1482  auth-url=<your_auth_url>
  1483  application-credential-id=<your_credential_id>
  1484  application-credential-secret=<your_credential_secret>
  1485  region=<your_region>
  1486  domain-name=<your_domain_name>
  1487  EOF
  1488  ```
  1489  
  1490  For more detailed information on configuring the `cloud.conf` file, see the [OpenStack Cloud Controller Manager documentation](https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/openstack-cloud-controller-manager/using-openstack-cloud-controller-manager.md#config-openstack-cloud-controller-manager).
  1491  
  1492  Next, create a Kubernetes secret using this configuration to securely store your cloud environment details.
For example, you can create this secret with:
  1494  
  1495  ```bash
  1496  kubectl -n kube-system create secret generic cloud-config --from-file=cloud.conf
  1497  ```
  1498  
  1499  Now, you are ready to deploy the external cloud provider!
  1500  
  1501  ```bash
  1502  kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
  1503  kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
  1504  kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/controller-manager/openstack-cloud-controller-manager-ds.yaml
  1505  ```
  1506  
  1507  Alternatively, refer to the [helm chart](https://github.com/kubernetes/cloud-provider-openstack/tree/master/charts/openstack-cloud-controller-manager).
  1508  
  1509  {{#/tab }}
  1510  {{#/tabs }}
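
Once the cloud-controller-manager is running, one way to verify that it has initialized the nodes is to check that the standard `node.cloudprovider.kubernetes.io/uninitialized` taint has been removed (a hedged check; initialized nodes should show no such taint):

```bash
# Print each node name together with its current taints
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```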
  1511  
  1512  ### Deploy a CNI solution
  1513  
  1514  Calico is used here as an example.
  1515  
  1516  {{#tabs name:"tab-deploy-cni" tabs:"Azure,vcluster,KubeVirt,Other providers..."}}
  1517  {{#tab Azure}}
  1518  
  1519  Install the official Calico Helm chart on the workload cluster:
  1520  
  1521  ```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts --kubeconfig=./capi-quickstart.kubeconfig && \
helm install calico projectcalico/tigera-operator \
  --kubeconfig=./capi-quickstart.kubeconfig \
  -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml \
  --namespace tigera-operator --create-namespace
  1524  ```
  1525  
After a short while, our nodes should be running and in the `Ready` state.
Let's check the status using `kubectl get nodes`:
  1528  
  1529  ```bash
  1530  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1531  ```
  1532  
  1533  {{#/tab }}
  1534  {{#tab vcluster}}
  1535  
Calico is not required for vcluster.
  1537  
  1538  {{#/tab }}
  1539  {{#tab KubeVirt}}
  1540  
  1541  Before deploying the Calico CNI, make sure the VMs are running:
  1542  ```bash
  1543  kubectl get vm
  1544  ```
  1545  
  1546  If our new VMs are running, we should see a response similar to this:
  1547  
  1548  ```text
  1549  NAME                                  AGE    STATUS    READY
  1550  capi-quickstart-control-plane-7s945   167m   Running   True
  1551  capi-quickstart-md-0-zht5j            164m   Running   True
  1552  ```
  1553  
We can also list the virtual machine instances:
  1555  ```bash
  1556  kubectl get vmi
  1557  ```
  1558  The output will be similar to:
  1559  ```text
  1560  NAME                                  AGE    PHASE     IP             NODENAME             READY
  1561  capi-quickstart-control-plane-7s945   167m   Running   10.244.82.16   kind-control-plane   True
  1562  capi-quickstart-md-0-zht5j            164m   Running   10.244.82.17   kind-control-plane   True
  1563  ```
  1564  
Since our workload cluster is running within the kind cluster, we need to prevent conflicts between the kind
(management) cluster's CNI and the workload cluster's CNI. The following modifications to the default Calico settings
are enough for the two CNIs to coexist in the same environment.
  1568  
  1569  * Change the CIDR to a non-conflicting range
  1570  * Change the value of the `CLUSTER_TYPE` environment variable to `k8s`
  1571  * Change the value of the `CALICO_IPV4POOL_IPIP` environment variable to `Never`
  1572  * Change the value of the `CALICO_IPV4POOL_VXLAN` environment variable to `Always`
  1573  * Add the `FELIX_VXLANPORT` environment variable with the value of a non-conflicting port, e.g. `"6789"`.
  1574  
The following script downloads the Calico manifest and modifies the required fields (it assumes GNU sed; BSD sed on macOS needs `sed -i '' -E` instead). The CIDR and the port values are examples.
  1576  ```bash
  1577  curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml -o calico-workload.yaml
  1578  
  1579  sed -i -E 's|^( +)# (- name: CALICO_IPV4POOL_CIDR)$|\1\2|g;'\
  1580  's|^( +)# (  value: )"192.168.0.0/16"|\1\2"10.243.0.0/16"|g;'\
  1581  '/- name: CLUSTER_TYPE/{ n; s/( +value: ").+/\1k8s"/g };'\
  1582  '/- name: CALICO_IPV4POOL_IPIP/{ n; s/value: "Always"/value: "Never"/ };'\
  1583  '/- name: CALICO_IPV4POOL_VXLAN/{ n; s/value: "Never"/value: "Always"/};'\
  1584  '/# Set Felix endpoint to host default action to ACCEPT./a\            - name: FELIX_VXLANPORT\n              value: "6789"' \
  1585  calico-workload.yaml
  1586  ```
  1587  Now, deploy the Calico CNI on the workload cluster:
  1588  ```bash
  1589  kubectl --kubeconfig=./capi-quickstart.kubeconfig create -f calico-workload.yaml
  1590  ```
  1591  
After a short while, our nodes should be running and in the `Ready` state. Let's check the status using `kubectl get nodes`:
  1593  
  1594  ```bash
  1595  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1596  ```
  1597  
  1598  <aside class="note">
  1599  
  1600  <h1>Troubleshooting</h1>
  1601  
If the nodes don't become ready after a long period, list the pods in the `kube-system` namespace:
```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get pod -n kube-system
```

If the Calico pods are in an image pull error state (`ErrImagePull`), it's probably because of the Docker Hub pull rate limit.
We can try to fix that by adding a secret with our Docker Hub credentials and using it;
see [here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)
for details.
  1611  
First, create the secret. Note the Docker config file path and adjust it to your local setup.
  1613  ```bash
  1614  kubectl --kubeconfig=./capi-quickstart.kubeconfig create secret generic docker-creds \
  1615      --from-file=.dockerconfigjson=<YOUR DOCKER CONFIG FILE PATH> \
  1616      --type=kubernetes.io/dockerconfigjson \
  1617      -n kube-system
  1618  ```
  1619  
Now, if the `calico-node` pods have a status of `ErrImagePull`, patch their DaemonSet to make them use the new secret to pull images:
  1621  ```bash
  1622  kubectl --kubeconfig=./capi-quickstart.kubeconfig patch daemonset \
  1623      -n kube-system calico-node \
  1624      -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
  1625  ```
  1626  
After a short while, the calico-node pods will reach `Running` status. Now, if the calico-kube-controllers pod is also
in `ErrImagePull` status, patch its deployment to fix the problem:
  1629  ```bash
  1630  kubectl --kubeconfig=./capi-quickstart.kubeconfig patch deployment \
  1631      -n kube-system calico-kube-controllers \
  1632      -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
  1633  ```
  1634  
List the pods again:
  1636  ```bash
  1637  kubectl --kubeconfig=./capi-quickstart.kubeconfig get pod -n kube-system
  1638  ```
  1639  
Eventually, all the pods in the `kube-system` namespace will be running, and the result should be similar to this:
  1641  ```text
  1642  NAME                                                          READY   STATUS    RESTARTS   AGE
  1643  calico-kube-controllers-c969cf844-dgld6                       1/1     Running   0          50s
  1644  calico-node-7zz7c                                             1/1     Running   0          54s
  1645  calico-node-jmjd6                                             1/1     Running   0          54s
  1646  coredns-64897985d-dspjm                                       1/1     Running   0          3m49s
  1647  coredns-64897985d-pgtgz                                       1/1     Running   0          3m49s
  1648  etcd-capi-quickstart-control-plane-kjjbb                      1/1     Running   0          3m57s
  1649  kube-apiserver-capi-quickstart-control-plane-kjjbb            1/1     Running   0          3m57s
  1650  kube-controller-manager-capi-quickstart-control-plane-kjjbb   1/1     Running   0          3m57s
  1651  kube-proxy-b9g5m                                              1/1     Running   0          3m12s
  1652  kube-proxy-p6xx8                                              1/1     Running   0          3m49s
  1653  kube-scheduler-capi-quickstart-control-plane-kjjbb            1/1     Running   0          3m57s
  1654  ```
  1655  
  1656  </aside>
  1657  
  1658  {{#/tab }}
  1659  {{#tab Other providers...}}
  1660  
  1661  ```bash
  1662  kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  1663    apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
  1664  ```
  1665  
After a short while, our nodes should be running and in the `Ready` state.
Let's check the status using `kubectl get nodes`:
  1668  
  1669  ```bash
  1670  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1671  ```
  1672  ```bash
  1673  NAME                                          STATUS   ROLES           AGE    VERSION
  1674  capi-quickstart-vs89t-gmbld                   Ready    control-plane   5m33s  v1.29.2
  1675  capi-quickstart-vs89t-kf9l5                   Ready    control-plane   6m20s  v1.29.2
  1676  capi-quickstart-vs89t-t8cfn                   Ready    control-plane   7m10s  v1.29.2
  1677  capi-quickstart-md-0-55x6t-5649968bd7-8tq9v   Ready    <none>          6m5s   v1.29.2
  1678  capi-quickstart-md-0-55x6t-5649968bd7-glnjd   Ready    <none>          6m9s   v1.29.2
  1679  capi-quickstart-md-0-55x6t-5649968bd7-sfzp6   Ready    <none>          6m9s   v1.29.2
  1680  ```
  1681  
  1682  {{#/tab }}
  1683  {{#/tabs }}
  1684  
  1685  ### Clean Up
  1686  
Delete the workload cluster:
  1688  ```bash
  1689  kubectl delete cluster capi-quickstart
  1690  ```
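
Deletion can take a few minutes while the underlying infrastructure is torn down; you can watch progress with:

```bash
kubectl get cluster capi-quickstart -w
```
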
  1691  <aside class="note warning">
  1692  
IMPORTANT: In order to ensure a proper cleanup of your infrastructure, you must always delete the cluster object. Deleting the entire cluster template with `kubectl delete -f capi-quickstart.yaml` might leave pending resources that have to be cleaned up manually.
  1694  </aside>
  1695  
Delete the management cluster:
  1697  ```bash
  1698  kind delete cluster
  1699  ```
  1700  
  1701  ## Next steps
  1702  
  1703  - Create a second workload cluster. Simply follow the steps outlined above, but remember to provide a different name for your second workload cluster.
- Deploy applications to your workload cluster. Use the [CNI deployment steps](#deploy-a-cni-solution) for pointers; a minimal smoke test is sketched after this list.
  1705  - See the [clusterctl] documentation for more detail about clusterctl supported actions.
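
For example, a minimal smoke test against a newly created workload cluster (a hedged sketch assuming you retrieved its kubeconfig as `capi-quickstart.kubeconfig`, as shown earlier, and that the public `nginx` image is reachable):

```bash
# Create a test deployment and check that its pod schedules and starts
kubectl --kubeconfig=./capi-quickstart.kubeconfig create deployment nginx --image=nginx
kubectl --kubeconfig=./capi-quickstart.kubeconfig get pods
```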
  1706  
  1707  <!-- links -->
  1708  [Experimental Features]: ../tasks/experimental-features/experimental-features.md
  1709  [AWS provider prerequisites]: https://cluster-api-aws.sigs.k8s.io/topics/using-clusterawsadm-to-fulfill-prerequisites.html
  1710  [AWS provider releases]: https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases
  1711  [Azure Provider Prerequisites]: https://capz.sigs.k8s.io/topics/getting-started.html#prerequisites
  1712  [bootstrap cluster]: ../reference/glossary.md#bootstrap-cluster
  1713  [capa]: https://cluster-api-aws.sigs.k8s.io
  1714  [capv-upload-images]: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md#uploading-the-machine-images
  1715  [clusterawsadm]: https://cluster-api-aws.sigs.k8s.io/clusterawsadm/clusterawsadm.html
  1716  [clusterctl generate cluster]: ../clusterctl/commands/generate-cluster.md
  1717  [clusterctl get kubeconfig]: ../clusterctl/commands/get-kubeconfig.md
  1718  [clusterctl]: ../clusterctl/overview.md
  1719  [Docker]: https://www.docker.com/
  1720  [GCP provider]: https://cluster-api-gcp.sigs.k8s.io/
  1721  [Helm]: https://helm.sh/docs/intro/install/
  1722  [Hetzner provider]: https://github.com/syself/cluster-api-provider-hetzner
  1723  [Hivelocity provider]: https://github.com/hivelocity/cluster-api-provider-hivelocity
  1724  [IBM Cloud provider]: https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud
  1725  [infrastructure provider]: ../reference/glossary.md#infrastructure-provider
  1726  [kind]: https://kind.sigs.k8s.io/
  1727  [KubeadmControlPlane]: ../developer/architecture/controllers/control-plane.md
  1728  [kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
  1729  [management cluster]: ../reference/glossary.md#management-cluster
  1730  [Metal3 getting started guide]: https://github.com/metal3-io/cluster-api-provider-metal3/blob/master/docs/getting-started.md
  1731  [Metal3 provider]: https://github.com/metal3-io/cluster-api-provider-metal3/
  1732  [K0smotron provider]: https://github.com/k0sproject/k0smotron
  1733  [KubeKey provider]: https://github.com/kubesphere/kubekey
  1734  [KubeVirt provider]: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/
  1735  [KubeVirt]: https://kubevirt.io/
  1736  [oci-provider]: https://oracle.github.io/cluster-api-provider-oci/#getting-started
  1737  [Equinix Metal getting started guide]: https://github.com/kubernetes-sigs/cluster-api-provider-packet#using
[provider]: ../reference/providers.md
  1739  [provider components]: ../reference/glossary.md#provider-components
  1740  [vSphere getting started guide]: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md
  1741  [workload cluster]: ../reference/glossary.md#workload-cluster
  1742  [CAPI Operator quickstart]: ./quick-start-operator.md
  1743  [Proxmox getting started guide]: https://github.com/ionos-cloud/cluster-api-provider-proxmox/blob/main/docs/Usage.md