
     1  # Quick Start
     2  
     3  In this tutorial we'll cover the basics of how to use Cluster API to create one or more Kubernetes clusters.
     4  
     5  <aside class="note warning">
     6  
     7  <h1>Warning</h1>
     8  
     9  If using a [provider] that does not support v1beta1 or v1alpha4 yet, please follow the [release 0.3](https://release-0-3.cluster-api.sigs.k8s.io/user/quick-start.html) or [release 0.4](https://release-0-4.cluster-api.sigs.k8s.io/user/quick-start.html) quickstart instructions instead.
    10  
    11  </aside>
    12  
    13  ## Installation
    14  
There are two major quickstart paths: using `clusterctl` or the Cluster API Operator.

This article describes a path that uses the `clusterctl` CLI tool to handle the lifecycle of a Cluster API [management cluster](https://cluster-api.sigs.k8s.io/reference/glossary#management-cluster).
    18  
    19  The clusterctl command line interface is specifically designed for providing a simple “day 1 experience” and a quick start with Cluster API. It automates fetching the YAML files defining [provider components](https://cluster-api.sigs.k8s.io/reference/glossary#provider-components) and installing them.
    20  
Additionally, it encodes a set of best practices for managing providers, which helps users avoid misconfigurations and manage day 2 operations such as upgrades.
    22  
    23  The Cluster API Operator is a Kubernetes Operator built on top of clusterctl and designed to empower cluster administrators to handle the lifecycle of Cluster API providers within a management cluster using a declarative approach. It aims to improve user experience in deploying and managing Cluster API, making it easier to handle day-to-day tasks and automate workflows with GitOps. Visit the [CAPI Operator quickstart] if you want to experiment with this tool.
    24  
    25  ### Common Prerequisites
    26  
    27  - Install and setup [kubectl] in your local environment
    28  - Install [kind] and [Docker]
    29  - Install [Helm]
    30  
    31  ### Install and/or configure a Kubernetes cluster
    32  
    33  Cluster API requires an existing Kubernetes cluster accessible via kubectl. During the installation process the
    34  Kubernetes cluster will be transformed into a [management cluster] by installing the Cluster API [provider components], so it
    35  is recommended to keep it separated from any application workload.
    36  
    37  It is a common practice to create a temporary, local bootstrap cluster which is then used to provision
    38  a target [management cluster] on the selected [infrastructure provider].
    39  
    40  **Choose one of the options below:**
    41  
    42  1. **Existing Management Cluster**
    43  
   For production use cases a "real" Kubernetes cluster should be used with appropriate backup and disaster recovery policies and procedures in place. The Kubernetes cluster must be at least v1.20.0.
    45  
    46     ```bash
    47     export KUBECONFIG=<...>
    48     ```
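
   Optionally verify that the server version meets the minimum requirement; a quick check:
   ```bash
   # The reported Server Version should be v1.20.0 or newer.
   kubectl version
   ```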
    49  **OR**
    50  
    51  2. **Kind**
    52  
    53     <aside class="note warning">
    54  
    55     <h1>Warning</h1>
    56  
    57     [kind] is not designed for production use.
    58  
    59     **Minimum [kind] supported version**: v0.20.0
    60  
    61     **Help with common issues can be found in the [Troubleshooting Guide](./troubleshooting.md).**
    62  
    63     Note for macOS users: you may need to [increase the memory available](https://docs.docker.com/docker-for-mac/#resources) for containers (recommend 6 GB for CAPD).
    64  
    65     Note for Linux users: you may need to [increase `ulimit` and `inotify` when using Docker (CAPD)](./troubleshooting.md#cluster-api-with-docker----too-many-open-files).
    66  
    67     </aside>
    68  
    69     [kind] can be used for creating a local Kubernetes cluster for development environments or for
    70     the creation of a temporary [bootstrap cluster] used to provision a target [management cluster] on the selected infrastructure provider.
    71  
    72     The installation procedure depends on the version of kind; if you are planning to use the Docker infrastructure provider,
    73     please follow the additional instructions in the dedicated tab:
    74  
    75     {{#tabs name:"install-kind" tabs:"Default,Docker,KubeVirt"}}
    76     {{#tab Default}}
    77  
    78     Create the kind cluster:
    79     ```bash
    80     kind create cluster
    81     ```
    82     Test to ensure the local kind cluster is ready:
    83     ```bash
    84     kubectl cluster-info
    85     ```
    86  
    87     {{#/tab }}
    88     {{#tab Docker}}
    89  
    90     Run the following command to create a kind config file for allowing the Docker provider to access Docker on the host:
    91  
    92     ```bash
    93     cat > kind-cluster-with-extramounts.yaml <<EOF
    94     kind: Cluster
    95     apiVersion: kind.x-k8s.io/v1alpha4
    96     networking:
    97       ipFamily: dual
    98     nodes:
    99     - role: control-plane
   100       extraMounts:
   101         - hostPath: /var/run/docker.sock
   102           containerPath: /var/run/docker.sock
   103     EOF
   104     ```
   105  
   Then follow the instructions for your kind version, using `kind create cluster --config kind-cluster-with-extramounts.yaml`
   to create the management cluster with the above file.
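
   In the common case this is a single command, followed by the same readiness check as in the Default tab:

   ```bash
   kind create cluster --config kind-cluster-with-extramounts.yaml
   kubectl cluster-info
   ```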
   108  
   109     {{#/tab }}
   110     {{#tab KubeVirt}}
   111  
   112     #### Create the Kind Cluster
   [KubeVirt][KubeVirt] is a cloud native virtualization solution. The virtual machines we're going to create and use as
   the workload cluster's nodes actually run within pods in the management cluster. In order to communicate with
   the workload cluster's API server, we'll need to expose it. We are using kind, which is a limited environment. The
   easiest way to expose the workload cluster's API server (a pod within a node running in a VM that is itself running
   within a pod in the management cluster, which in turn runs inside a Docker container) is to use a LoadBalancer service.
   118  
   To allow using a LoadBalancer service, we can't use kind's default CNI (kindnet); we'll need to install
   another CNI, like Calico. In order to do that, we first need to create the kind cluster with two modifications:
   1. Disable the default CNI
   2. Add the Docker credentials to the cluster, to avoid the Docker Hub pull rate limit on the Calico images; read more
      about it in the [Docker documentation](https://docs.docker.com/docker-hub/download-rate-limit/), and in the
      [kind documentation](https://kind.sigs.k8s.io/docs/user/private-registries/#mount-a-config-file-to-each-node).
   125  
   Create a configuration file for kind. Note the Docker config file path and adjust it to your local setup:
   127     ```bash
   128     cat <<EOF > kind-config.yaml
   129     kind: Cluster
   130     apiVersion: kind.x-k8s.io/v1alpha4
   131     networking:
   132     # the default CNI will not be installed
   133       disableDefaultCNI: true
   134     nodes:
   135     - role: control-plane
   136       extraMounts:
   137        - containerPath: /var/lib/kubelet/config.json
   138          hostPath: <YOUR DOCKER CONFIG FILE PATH>
   139     EOF
   140     ```
   141     Now, create the kind cluster with the configuration file:
   142     ```bash
   143     kind create cluster --config=kind-config.yaml
   144     ```
   145     Test to ensure the local kind cluster is ready:
   146     ```bash
   147     kubectl cluster-info
   148     ```
   149  
   150     #### Install the Calico CNI
   Now we'll need to install a CNI. In this example we're using Calico, but other CNIs should work as well. Please see
   the [Calico installation guide](https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico)
   for more details (use the "Manifest" tab). Below is an example of how to install Calico v3.24.4.
   154  
   155     Use the Calico manifest to create the required resources; e.g.:
   156     ```bash
   kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml
   158     ```
   159  
   160     {{#/tab }}
   161     {{#/tabs }}
   162  
   163  ### Install clusterctl
   164  The clusterctl CLI tool handles the lifecycle of a Cluster API management cluster.
   165  
   166  {{#tabs name:"install-clusterctl" tabs:"Linux,macOS,homebrew,Windows"}}
   167  {{#tab Linux}}
   168  
   169  #### Install clusterctl binary with curl on Linux
If you are unsure, you can determine your computer's architecture by running `uname -a`.
   171  
   172  Download for AMD64:
   173  ```bash
   174  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-amd64" version:"1.6.x"}} -o clusterctl
   175  ```
   176  
   177  Download for ARM64:
   178  ```bash
   179  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-arm64" version:"1.6.x"}} -o clusterctl
   180  ```
   181  
   182  Download for PPC64LE:
   183  ```bash
   184  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-linux-ppc64le" version:"1.6.x"}} -o clusterctl
   185  ```
   186  
   187  Install clusterctl:
   188  ```bash
   189  sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
   190  ```
   191  Test to ensure the version you installed is up-to-date:
   192  ```bash
   193  clusterctl version
   194  ```
   195  
   196  {{#/tab }}
   197  {{#tab macOS}}
   198  
   199  #### Install clusterctl binary with curl on macOS
If you are unsure, you can determine your computer's architecture by running `uname -a`.
   201  
   202  Download for AMD64:
   203  ```bash
   204  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-amd64" version:"1.6.x"}} -o clusterctl
   205  ```
   206  
   207  Download for M1 CPU ("Apple Silicon") / ARM64:
   208  ```bash
   209  curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-darwin-arm64" version:"1.6.x"}} -o clusterctl
   210  ```
   211  
Make the clusterctl binary executable:
   213  ```bash
   214  chmod +x ./clusterctl
   215  ```
Move the binary into your PATH:
   217  ```bash
   218  sudo mv ./clusterctl /usr/local/bin/clusterctl
   219  ```
   220  Test to ensure the version you installed is up-to-date:
   221  ```bash
   222  clusterctl version
   223  ```
   224  {{#/tab }}
   225  {{#tab homebrew}}
   226  
   227  #### Install clusterctl with homebrew on macOS and Linux
   228  
   229  Install the latest release using homebrew:
   230  
   231  ```bash
   232  brew install clusterctl
   233  ```
   234  
   235  Test to ensure the version you installed is up-to-date:
   236  ```bash
   237  clusterctl version
   238  ```
   239  
   240  {{#/tab }}
{{#tab Windows}}
   242  
   243  #### Install clusterctl binary with curl on Windows using PowerShell
   244  Go to the working directory where you want clusterctl downloaded.
   245  
   246  Download the latest release; on Windows, type:
   247  ```powershell
   248  curl.exe -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api" asset:"clusterctl-windows-amd64.exe" version:"1.6.x"}} -o clusterctl.exe
   249  ```
   250  Append or prepend the path of that directory to the `PATH` environment variable.
   251  
   252  Test to ensure the version you installed is up-to-date:
   253  ```powershell
   254  clusterctl.exe version
   255  ```
   256  
   257  {{#/tab }}
   258  {{#/tabs }}
   259  
   260  ### Initialize the management cluster
   261  
   262  Now that we've got clusterctl installed and all the prerequisites in place, let's transform the Kubernetes cluster
   263  into a management cluster by using `clusterctl init`.
   264  
   265  The command accepts as input a list of providers to install; when executed for the first time, `clusterctl init`
   266  automatically adds to the list the `cluster-api` core provider, and if unspecified, it also adds the `kubeadm` bootstrap
   267  and `kubeadm` control-plane providers.
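
For example, the two invocations below are equivalent; the second one just spells out the defaults. The Docker provider is used here purely as an illustration — pick the provider that matches your infrastructure:

```bash
# Implicit form: the core, kubeadm bootstrap, and kubeadm control-plane
# providers are added to the list automatically.
clusterctl init --infrastructure docker

# Fully spelled-out, equivalent form.
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure docker
```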
   268  
   269  #### Enabling Feature Gates
   270  
   271  Feature gates can be enabled by exporting environment variables before executing `clusterctl init`.
   272  For example, the `ClusterTopology` feature, which is required to enable support for managed topologies and ClusterClass,
   273  can be enabled via:
   274  ```bash
   275  export CLUSTER_TOPOLOGY=true
   276  ```
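Instead of exporting the variable each time, feature-gate variables can also be persisted in the clusterctl config file (see the note on alternatives to environment variables later in this guide); a minimal sketch, assuming the default config location:
```bash
# Append the feature-gate variable to the clusterctl config file (created if missing).
mkdir -p "${XDG_CONFIG_HOME:-$HOME/.config}/cluster-api"
echo 'CLUSTER_TOPOLOGY: "true"' >> "${XDG_CONFIG_HOME:-$HOME/.config}/cluster-api/clusterctl.yaml"
```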
   277  Additional documentation about experimental features can be found in [Experimental Features].
   278  
   279  #### Initialization for common providers
   280  
   281  Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied
   282  before getting started with Cluster API. See below for the expected settings for common providers.
   283  
   284  {{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,Hetzner,Hivelocity,IBM Cloud,K0smotron,KubeKey,KubeVirt,Metal3,Nutanix,OCI,OpenStack,Outscale,Proxmox,VCD,vcluster,Virtink,vSphere"}}
   285  {{#tab AWS}}
   286  
   287  Download the latest binary of `clusterawsadm` from the [AWS provider releases]. The [clusterawsadm] command line utility assists with identity and access management (IAM) for [Cluster API Provider AWS][capa].
   288  
   289  {{#tabs name:"install-clusterawsadm" tabs:"Linux,macOS,homebrew,Windows"}}
   290  {{#tab Linux}}
   291  
Download the latest release; on Linux, type:
```bash
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-linux-amd64" version:">=2.0.0"}} -o clusterawsadm
```

Make it executable:
```bash
chmod +x clusterawsadm
```

Move the binary to a directory present in your PATH:
```bash
sudo mv clusterawsadm /usr/local/bin
```

Check the version to confirm installation:
```bash
clusterawsadm version
```
   311  
   312  **Example Usage**
   313  ```bash
   314  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   315  export AWS_ACCESS_KEY_ID=<your-access-key>
   316  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   317  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   318  
   319  # The clusterawsadm utility takes the credentials that you set as environment
   320  # variables and uses them to create a CloudFormation stack in your AWS account
   321  # with the correct IAM resources.
   322  clusterawsadm bootstrap iam create-cloudformation-stack
   323  
   324  # Create the base64 encoded credentials using clusterawsadm.
   325  # This command uses your environment variables and encodes
   326  # them in a value to be stored in a Kubernetes Secret.
   327  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   328  
   329  # Finally, initialize the management cluster
   330  clusterctl init --infrastructure aws
   331  ```
   332  
   333  {{#/tab }}
   334  {{#tab macOS}}
   335  
Download the latest release; on macOS, type:
```bash
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-darwin-amd64" version:">=2.0.0"}} -o clusterawsadm
```

Or if your Mac has an M1 CPU ("Apple Silicon"):
```bash
curl -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-darwin-arm64" version:">=2.0.0"}} -o clusterawsadm
```

Make it executable:
```bash
chmod +x clusterawsadm
```

Move the binary to a directory present in your PATH:
```bash
sudo mv clusterawsadm /usr/local/bin
```

Check the version to confirm installation:
```bash
clusterawsadm version
```
   360  
   361  **Example Usage**
   362  ```bash
   363  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   364  export AWS_ACCESS_KEY_ID=<your-access-key>
   365  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   366  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   367  
   368  # The clusterawsadm utility takes the credentials that you set as environment
   369  # variables and uses them to create a CloudFormation stack in your AWS account
   370  # with the correct IAM resources.
   371  clusterawsadm bootstrap iam create-cloudformation-stack
   372  
   373  # Create the base64 encoded credentials using clusterawsadm.
   374  # This command uses your environment variables and encodes
   375  # them in a value to be stored in a Kubernetes Secret.
   376  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   377  
   378  # Finally, initialize the management cluster
   379  clusterctl init --infrastructure aws
   380  ```
   381  {{#/tab }}
   382  {{#tab homebrew}}
   383  
Install the latest release using homebrew:
```bash
brew install clusterawsadm
```

Check the version to confirm installation:
```bash
clusterawsadm version
```
   393  
   394  **Example Usage**
   395  ```bash
   396  export AWS_REGION=us-east-1 # This is used to help encode your environment variables
   397  export AWS_ACCESS_KEY_ID=<your-access-key>
   398  export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
   399  export AWS_SESSION_TOKEN=<session-token> # If you are using Multi-Factor Auth.
   400  
   401  # The clusterawsadm utility takes the credentials that you set as environment
   402  # variables and uses them to create a CloudFormation stack in your AWS account
   403  # with the correct IAM resources.
   404  clusterawsadm bootstrap iam create-cloudformation-stack
   405  
   406  # Create the base64 encoded credentials using clusterawsadm.
   407  # This command uses your environment variables and encodes
   408  # them in a value to be stored in a Kubernetes Secret.
   409  export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   410  
   411  # Finally, initialize the management cluster
   412  clusterctl init --infrastructure aws
   413  ```
   414  
   415  {{#/tab }}
   416  {{#tab Windows}}
   417  
Download the latest release; on Windows, type:
```powershell
curl.exe -L {{#releaselink repo:"https://github.com/kubernetes-sigs/cluster-api" gomodule:"sigs.k8s.io/cluster-api-provider-aws" asset:"clusterawsadm-windows-amd64" version:">=2.0.0"}} -o clusterawsadm.exe
```

Append or prepend the path of that directory to the `PATH` environment variable.

Check the version to confirm installation:
```powershell
clusterawsadm.exe version
```
   428  
**Example Usage in PowerShell**
```powershell
   431  $Env:AWS_REGION="us-east-1" # This is used to help encode your environment variables
   432  $Env:AWS_ACCESS_KEY_ID="<your-access-key>"
   433  $Env:AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
   434  $Env:AWS_SESSION_TOKEN="<session-token>" # If you are using Multi-Factor Auth.
   435  
   436  # The clusterawsadm utility takes the credentials that you set as environment
   437  # variables and uses them to create a CloudFormation stack in your AWS account
   438  # with the correct IAM resources.
   439  clusterawsadm bootstrap iam create-cloudformation-stack
   440  
   441  # Create the base64 encoded credentials using clusterawsadm.
   442  # This command uses your environment variables and encodes
   443  # them in a value to be stored in a Kubernetes Secret.
   444  $Env:AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)
   445  
   446  # Finally, initialize the management cluster
   447  clusterctl init --infrastructure aws
   448  ```
   449  {{#/tab }}
   450  {{#/tabs }}
   451  
   452  See the [AWS provider prerequisites] document for more details.
   453  
   454  {{#/tab }}
   455  {{#tab Azure}}
   456  
   457  For more information about authorization, AAD, or requirements for Azure, visit the [Azure provider prerequisites] document.
   458  
   459  ```bash
   460  export AZURE_SUBSCRIPTION_ID="<SubscriptionId>"
   461  
   462  # Create an Azure Service Principal and paste the output here
   463  export AZURE_TENANT_ID="<Tenant>"
   464  export AZURE_CLIENT_ID="<AppId>"
   465  export AZURE_CLIENT_SECRET="<Password>"
   466  
   467  # Base64 encode the variables
   468  export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
   469  export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
   470  export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
   471  export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"
   472  
   473  # Settings needed for AzureClusterIdentity used by the AzureCluster
   474  export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
   475  export CLUSTER_IDENTITY_NAME="cluster-identity"
   476  export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
   477  
   478  # Create a secret to include the password of the Service Principal identity created in Azure
   479  # This secret will be referenced by the AzureClusterIdentity used by the AzureCluster
   480  kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}" --namespace "${AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE}"
   481  
   482  # Finally, initialize the management cluster
   483  clusterctl init --infrastructure azure
   484  ```
   485  
   486  {{#/tab }}
   487  {{#tab CloudStack}}
   488  
Create a file named `cloud-config` in the repo's root directory, substituting in your own environment's values:
   490  ```bash
   491  [Global]
   492  api-url = <cloudstackApiUrl>
   493  api-key = <cloudstackApiKey>
   494  secret-key = <cloudstackSecretKey>
   495  ```
   496  
Create the base64-encoded credentials by catting your credentials file.
This command encodes the file's contents into a value to be stored in a Kubernetes Secret.
   500  
   501  ```bash
export CLOUDSTACK_B64ENCODED_SECRET=$(cat cloud-config | base64 | tr -d '\n')
   503  ```
   504  
Finally, initialize the management cluster:
   506  ```bash
   507  clusterctl init --infrastructure cloudstack
   508  ```
   509  
   510  {{#/tab }}
   511  {{#tab DigitalOcean}}
   512  
   513  ```bash
   514  export DIGITALOCEAN_ACCESS_TOKEN=<your-access-token>
   515  export DO_B64ENCODED_CREDENTIALS="$(echo -n "${DIGITALOCEAN_ACCESS_TOKEN}" | base64 | tr -d '\n')"
   516  
   517  # Initialize the management cluster
   518  clusterctl init --infrastructure digitalocean
   519  ```
   520  
   521  {{#/tab }}
   522  
   523  {{#tab Docker}}
   524  
   525  <aside class="note warning">
   526  
   527  <h1>Warning</h1>
   528  
   529  The Docker provider is not designed for production use and is intended for development environments only.
   530  
   531  </aside>
   532  
   533  The Docker provider requires the `ClusterTopology` and `MachinePool` features to deploy ClusterClass-based clusters.
   534  We are only supporting ClusterClass-based cluster-templates in this quickstart as ClusterClass makes it possible to
   535  adapt configuration based on Kubernetes version. This is required to install Kubernetes clusters < v1.24 and
   536  for the upgrade from v1.23 to v1.24 as we have to use different cgroupDrivers depending on Kubernetes version.
   537  
   538  ```bash
   539  # Enable the experimental Cluster topology feature.
   540  export CLUSTER_TOPOLOGY=true
   541  
   542  # Enable the experimental Machine Pool feature
   543  export EXP_MACHINE_POOL=true
   544  
   545  # Initialize the management cluster
   546  clusterctl init --infrastructure docker
   547  ```
   548  
   549  {{#/tab }}
   550  {{#tab Equinix Metal}}
   551  
In order to initialize the Equinix Metal provider (formerly Packet) you have to export the environment
variable `PACKET_API_KEY`. This variable is used to authorize the infrastructure
provider manager against the Equinix Metal API. You can retrieve your token directly
from the Equinix Metal Console.
   556  
   557  ```bash
   558  export PACKET_API_KEY="34ts3g4s5g45gd45dhdh"
   559  
   560  clusterctl init --infrastructure packet
   561  ```
   562  
   563  {{#/tab }}
   564  {{#tab GCP}}
   565  
   566  ```bash
# Create the base64-encoded credentials by catting your credentials JSON.
# This command encodes the file's contents into a value to be stored in a Kubernetes Secret.
export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
   571  
   572  # Finally, initialize the management cluster
   573  clusterctl init --infrastructure gcp
   574  ```
   575  
   576  {{#/tab }}
   577  {{#tab Hetzner}}
   578  
   579  Please visit the [Hetzner project][Hetzner provider].
   580  
   581  {{#/tab }}
   582  {{#tab Hivelocity}}
   583  
   584  Please visit the [Hivelocity project][Hivelocity provider].
   585  
   586  {{#/tab }}
   587  {{#tab IBM Cloud}}
   588  
In order to initialize the IBM Cloud provider you have to export the environment
variable `IBMCLOUD_API_KEY`. This variable is used to authorize the infrastructure
provider manager against the IBM Cloud API. To create one from the UI, refer to [this guide](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key).

```bash
export IBMCLOUD_API_KEY=<your_api_key>
   595  
   596  # Finally, initialize the management cluster
   597  clusterctl init --infrastructure ibmcloud
   598  ```
   599  
   600  {{#/tab }}
   601  {{#tab K0smotron}}
   602  
   603  ```bash
   604  # Initialize the management cluster
   605  clusterctl init --infrastructure k0sproject-k0smotron
   606  ```
   607  
   608  {{#/tab }}
   609  {{#tab KubeKey}}
   610  
   611  ```bash
   612  # Initialize the management cluster
   613  clusterctl init --infrastructure kubekey
   614  ```
   615  
   616  {{#/tab }}
   617  {{#tab KubeVirt}}
   618  
   619  Please visit the [KubeVirt project][KubeVirt provider] for more information.
   620  
As described above, we want to use a LoadBalancer service in order to expose the workload cluster's API server. In the
example below, we will use [MetalLB](https://metallb.universe.tf/) to implement load balancing for our kind
cluster. Other solutions should work as well.
   624  
   625  #### Install MetalLB for load balancing
   626  Install MetalLB, as described [here](https://metallb.universe.tf/installation/#installation-by-manifest); for example:
   627  ```bash
   628  METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
   629  kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
   630  kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
   631  kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m
   632  ```
   633  
Now, we'll create the `IPAddressPool` and the `L2Advertisement` custom resources. The script below creates the CRs with
the right addresses, matching the kind cluster's address range:
   636  ```bash
   637  GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
   638  NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
   639  cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
   640  apiVersion: metallb.io/v1beta1
   641  kind: IPAddressPool
   642  metadata:
   643    name: capi-ip-pool
   644    namespace: metallb-system
   645  spec:
   646    addresses:
   647    - 172.19.255.200-172.19.255.250
   648  ---
   649  apiVersion: metallb.io/v1beta1
   650  kind: L2Advertisement
   651  metadata:
   652    name: empty
   653    namespace: metallb-system
   654  EOF
   655  ```
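
Optionally verify that MetalLB accepted the resources; a quick sanity check, assuming the resource names above:

```bash
kubectl get ipaddresspool,l2advertisement -n metallb-system
```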
   656  
   657  #### Install KubeVirt on the kind cluster
   658  ```bash
   659  # get KubeVirt version
   660  KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
   661  # deploy required CRDs
   662  kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
   663  # deploy the KubeVirt custom resource
   664  kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
   665  kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
   666  ```
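
Once the `kubectl wait` above succeeds, the KubeVirt components should be running; optionally confirm with:

```bash
kubectl get pods -n kubevirt
```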
   667  
   668  #### Initialize the management cluster with the KubeVirt Provider
   669  ```bash
   670  clusterctl init --infrastructure kubevirt
   671  ```
   672  
   673  {{#/tab }}
   674  {{#tab Metal3}}
   675  
   676  Please visit the [Metal3 project][Metal3 provider].
   677  
   678  {{#/tab }}
   679  {{#tab Nutanix}}
   680  
   681  Please follow the Cluster API Provider for [Nutanix Getting Started Guide](https://opendocs.nutanix.com/capx/latest/getting_started/)
   682  
   683  {{#/tab }}
   684  {{#tab OCI}}
   685  
   686  Please follow the Cluster API Provider for [Oracle Cloud Infrastructure (OCI) Getting Started Guide][oci-provider]
   687  
   688  {{#/tab }}
   689  {{#tab OpenStack}}
   690  
   691  ```bash
   692  # Initialize the management cluster
   693  clusterctl init --infrastructure openstack
   694  ```
   695  
   696  {{#/tab }}
   697  
   698  {{#tab Outscale}}
   699  
   700  ```bash
   701  export OSC_SECRET_KEY=<your-secret-key>
   702  export OSC_ACCESS_KEY=<your-access-key>
export OSC_REGION=<your-region>
   704  # Create namespace
   705  kubectl create namespace cluster-api-provider-outscale-system
   706  # Create secret
   707  kubectl create secret generic cluster-api-provider-outscale --from-literal=access_key=${OSC_ACCESS_KEY} --from-literal=secret_key=${OSC_SECRET_KEY} --from-literal=region=${OSC_REGION}  -n cluster-api-provider-outscale-system
   708  # Initialize the management cluster
   709  clusterctl init --infrastructure outscale
   710  ```
   711  
   712  {{#/tab }}
   713  
   714  {{#tab Proxmox}}
   715  
   716  First, we need to add the IPAM provider to your [clusterctl config file](../clusterctl/configuration.md) (`$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml`):
   717  
   718  ```yaml
   719  providers:
   720    - name: in-cluster
   721      url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
   722      type: IPAMProvider
   723  ```
   724  
   725  ```bash
   726  # The host for the Proxmox cluster
   727  export PROXMOX_URL="https://pve.example:8006"
   728  # The Proxmox token ID to access the remote Proxmox endpoint
   729  export PROXMOX_TOKEN='root@pam!capi'
   730  # The secret associated with the token ID
   731  # You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
   732  # bash history
   733  export PROXMOX_SECRET="1234-1234-1234-1234"
   734  
   735  
   736  # Finally, initialize the management cluster
   737  clusterctl init --infrastructure proxmox --ipam in-cluster
   738  ```
   739  
   740  For more information about the CAPI provider for Proxmox, see the [Proxmox
   741  project][Proxmox getting started guide].
   742  
   743  {{#/tab }}
   744  
   745  {{#tab VCD}}
   746  
   747  Please follow the Cluster API Provider for [Cloud Director Getting Started Guide](https://github.com/vmware/cluster-api-provider-cloud-director/blob/main/README.md)
   748  
The provider requires the `ClusterResourceSet` experimental feature; enable it by setting `EXP_CLUSTER_RESOURCE_SET: "true"` in your clusterctl config file, or by exporting it as an environment variable, before initializing:

```bash
   751  # Initialize the management cluster
   752  clusterctl init --infrastructure vcd
   753  ```
   754  
   755  {{#/tab }}
   756  {{#tab vcluster}}
   757  
   758  ```bash
   759  clusterctl init --infrastructure vcluster
   760  ```
   761  
   762  Please follow the Cluster API Provider for [vcluster Quick Start Guide](https://github.com/loft-sh/cluster-api-provider-vcluster/blob/main/docs/quick-start.md)
   763  
   764  {{#/tab }}
   765  {{#tab Virtink}}
   766  
   767  ```bash
   768  # Initialize the management cluster
   769  clusterctl init --infrastructure virtink
   770  ```
   771  
   772  {{#/tab }}
   773  {{#tab vSphere}}
   774  
   775  ```bash
   776  # The username used to access the remote vSphere endpoint
   777  export VSPHERE_USERNAME="vi-admin@vsphere.local"
   778  # The password used to access the remote vSphere endpoint
   779  # You may want to set this in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml` so your password is not in
   780  # bash history
   781  export VSPHERE_PASSWORD="admin!23"
   782  
   783  # Finally, initialize the management cluster
   784  clusterctl init --infrastructure vsphere
   785  ```
   786  
   787  For more information about prerequisites, credentials management, or permissions for vSphere, see the [vSphere
   788  project][vSphere getting started guide].
   789  
   790  {{#/tab }}
   791  {{#/tabs }}
   792  
   793  The output of `clusterctl init` is similar to this:
   794  
   795  ```bash
   796  Fetching providers
   797  Installing cert-manager Version="v1.11.0"
   798  Waiting for cert-manager to be available...
   799  Installing Provider="cluster-api" Version="v1.0.0" TargetNamespace="capi-system"
   800  Installing Provider="bootstrap-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-bootstrap-system"
   801  Installing Provider="control-plane-kubeadm" Version="v1.0.0" TargetNamespace="capi-kubeadm-control-plane-system"
   802  Installing Provider="infrastructure-docker" Version="v1.0.0" TargetNamespace="capd-system"
   803  
   804  Your management cluster has been initialized successfully!
   805  
   806  You can now create your first workload cluster by running the following:
   807  
   808    clusterctl generate cluster [name] --kubernetes-version [version] | kubectl apply -f -
   809  ```
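
Optionally, you can verify that the provider controllers came up; with the example output above, a quick check (namespace names vary by provider) is:

```bash
# Provider pods land in the namespaces shown in the init output.
kubectl get pods --all-namespaces | grep -E "capi-|capd-"
```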
   810  
   811  <aside class="note">
   812  
   813  <h1>Alternatives to environment variables</h1>
   814  
   815  Throughout this quickstart guide we've given instructions on setting parameters using environment variables. For most
   816  environment variables in the rest of the guide, you can also set them in `$XDG_CONFIG_HOME/cluster-api/clusterctl.yaml`
   817  
   818  See [`clusterctl init`](../clusterctl/commands/init.md) for more details.
   819  
   820  </aside>
   821  
   822  ### Create your first workload cluster
   823  
   824  Once the management cluster is ready, you can create your first workload cluster.
   825  
   826  #### Preparing the workload cluster configuration
   827  
   828  The `clusterctl generate cluster` command returns a YAML template for creating a [workload cluster].
   829  
   830  <aside class="note">
   831  
   832  <h1> Which provider will be used for my cluster? </h1>
   833  
   834  The `clusterctl generate cluster` command uses smart defaults in order to simplify the user experience; for example,
   835  if only the `aws` infrastructure provider is deployed, it detects and uses that when creating the cluster.
   836  
   837  </aside>
   838  
   839  <aside class="note">
   840  
   841  <h1> What topology will be used for my cluster? </h1>
   842  
   843  The `clusterctl generate cluster` command by default uses cluster templates which are provided by the infrastructure
   844  providers. See the provider's documentation for more information.
   845  
See the `clusterctl generate cluster` [command][clusterctl generate cluster] documentation for
details about how to use alternative sources for cluster templates.
   848  
   849  </aside>
   850  
   851  #### Required configuration for common providers
   852  
   853  Depending on the infrastructure provider you are planning to use, some additional prerequisites should be satisfied
   854  before configuring a cluster with Cluster API. Instructions are provided for common providers below.
   855  
Otherwise, you can look at the `clusterctl generate cluster` [command][clusterctl generate cluster] documentation for details about how to
discover the list of variables required by a cluster template.
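
As an illustration, the `--list-variables` flag prints the variables a template needs before you generate it (the Docker provider is assumed here; substitute your own):

```bash
clusterctl generate cluster capi-quickstart --infrastructure docker --list-variables
```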
   858  
   859  {{#tabs name:"tab-configuration-infrastructure" tabs:"AWS,Azure,CloudStack,DigitalOcean,Docker,Equinix Metal,GCP,IBM Cloud,K0smotron,KubeKey,KubeVirt,Metal3,Nutanix,OpenStack,Outscale,Proxmox,VCD,vcluster,Virtink,vSphere"}}
   860  {{#tab AWS}}
   861  
   862  ```bash
   863  export AWS_REGION=us-east-1
   864  export AWS_SSH_KEY_NAME=default
   865  # Select instance types
   866  export AWS_CONTROL_PLANE_MACHINE_TYPE=t3.large
   867  export AWS_NODE_MACHINE_TYPE=t3.large
   868  ```
   869  
   870  See the [AWS provider prerequisites] document for more details.
   871  
   872  {{#/tab }}
   873  {{#tab Azure}}
   874  
   875  <aside class="note warning">
   876  
   877  <h1>Warning</h1>
   878  
   879  Make sure you choose a VM size which is available in the desired location for your subscription. To see available SKUs, use `az vm list-skus -l <your_location> -r virtualMachines -o table`
   880  
   881  </aside>
   882  
   883  ```bash
   884  # Name of the Azure datacenter location. Change this value to your desired location.
   885  export AZURE_LOCATION="centralus"
   886  
   887  # Select VM types.
   888  export AZURE_CONTROL_PLANE_MACHINE_TYPE="Standard_D2s_v3"
   889  export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
   890  
   891  # [Optional] Select resource group. The default value is ${CLUSTER_NAME}.
   892  export AZURE_RESOURCE_GROUP="<ResourceGroupName>"
   893  ```
   894  
   895  {{#/tab }}
   896  {{#tab CloudStack}}
   897  
   898  A Cluster API compatible image must be available in your CloudStack installation. For instructions on how to build a compatible image
   899  see [image-builder (CloudStack)](https://image-builder.sigs.k8s.io/capi/providers/cloudstack.html)
   900  
   901  Prebuilt images can be found [here](http://packages.shapeblue.com/cluster-api-provider-cloudstack/images/)
   902  
   903  To see all required CloudStack environment variables execute:
   904  ```bash
   905  clusterctl generate cluster --infrastructure cloudstack --list-variables capi-quickstart
   906  ```
   907  
In addition, the following CloudStack environment variables are required.
   909  ```bash
   910  # Set this to the name of the zone in which to deploy the cluster
   911  export CLOUDSTACK_ZONE_NAME=<zone name>
   912  # The name of the network on which the VMs will reside
   913  export CLOUDSTACK_NETWORK_NAME=<network name>
   914  # The endpoint of the workload cluster
   915  export CLUSTER_ENDPOINT_IP=<cluster endpoint address>
   916  export CLUSTER_ENDPOINT_PORT=<cluster endpoint port>
   917  # The service offering of the control plane nodes
   918  export CLOUDSTACK_CONTROL_PLANE_MACHINE_OFFERING=<control plane service offering name>
   919  # The service offering of the worker nodes
   920  export CLOUDSTACK_WORKER_MACHINE_OFFERING=<worker node service offering name>
   921  # The capi compatible template to use
   922  export CLOUDSTACK_TEMPLATE_NAME=<template name>
   923  # The ssh key to use to log into the nodes
export CLOUDSTACK_SSH_KEY_NAME=<ssh key name>
```
   927  
   928  A full configuration reference can be found in [configuration.md](https://github.com/kubernetes-sigs/cluster-api-provider-cloudstack/blob/master/docs/book/src/clustercloudstack/configuration.md).
   929  
   930  {{#/tab }}
   931  {{#tab DigitalOcean}}
   932  
   933  A ClusterAPI compatible image must be available in your DigitalOcean account. For instructions on how to build a compatible image
   934  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
   935  
   936  ```bash
   937  export DO_REGION=nyc1
   938  export DO_SSH_KEY_FINGERPRINT=<your-ssh-key-fingerprint>
   939  export DO_CONTROL_PLANE_MACHINE_TYPE=s-2vcpu-2gb
   940  export DO_CONTROL_PLANE_MACHINE_IMAGE=<your-capi-image-id>
   941  export DO_NODE_MACHINE_TYPE=s-2vcpu-2gb
export DO_NODE_MACHINE_IMAGE=<your-capi-image-id>
   943  ```
   944  
   945  {{#/tab }}
   946  
   947  {{#tab Docker}}
   948  
   949  <aside class="note warning">
   950  
   951  <h1>Warning</h1>
   952  
   953  The Docker provider is not designed for production use and is intended for development environments only.
   954  
   955  </aside>
   956  
   957  The Docker provider does not require additional configurations for cluster templates.
   958  
   959  However, if you require special network settings you can set the following environment variables:
   960  
   961  ```bash
# The list of service CIDRs, default ["10.128.0.0/12"]
export SERVICE_CIDR=["10.96.0.0/12"]

# The list of pod CIDRs, default ["192.168.0.0/16"]
export POD_CIDR=["192.168.0.0/16"]
   967  
   968  # The service domain, default "cluster.local"
   969  export SERVICE_DOMAIN="k8s.test"
   970  ```
   971  
It is also possible, but **not recommended**, to disable the [Pod Security Standard](../security/pod-security-standards.md), which is enabled by default:
   973  ```bash
   974  export POD_SECURITY_STANDARD_ENABLED="false"
   975  ```
   976  
   977  {{#/tab }}
   978  {{#tab Equinix Metal}}
   979  
   980  There are several required variables you need to set to create a cluster. There
   981  are also a few optional tunables if you'd like to change the OS or CIDRs used.
   982  
   983  ```bash
# Required (made-up examples shown)
# The project where your cluster will be placed.
# You have to get one from the Equinix Metal Console if you don't have one already.
export PROJECT_ID="2b59569f-10d1-49a6-a000-c2fb95a959a1"
# The metro to deploy to; this takes advantage of automated, interconnected bare metal across Equinix Metal's global metros.
export METRO="da"
# The plan to use for your control plane nodes
export CONTROLPLANE_NODE_TYPE="m3.small.x86"
# The plan to use for your worker nodes
export WORKER_NODE_TYPE="m3.small.x86"
# The ssh key you would like to have access to the nodes
export SSH_KEY="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvMgVEubPLztrvVKgNPnRe9sZSjAqaYj9nmCkgr4PdK username@computer"
export CLUSTER_NAME="my-cluster"
   997  
   998  # Optional (defaults shown)
   999  export NODE_OS="ubuntu_18_04"
  1000  export POD_CIDR="192.168.0.0/16"
  1001  export SERVICE_CIDR="172.26.0.0/16"
  1002  # Only relevant if using the kube-vip flavor
  1003  export KUBE_VIP_VERSION="v0.5.0"
  1004  ```
  1005  
  1006  {{#/tab }}
  1007  {{#tab GCP}}
  1008  
  1009  
  1010  ```bash
  1011  # Name of the GCP datacenter location. Change this value to your desired location
  1012  export GCP_REGION="<GCP_REGION>"
  1013  export GCP_PROJECT="<GCP_PROJECT>"
# Make sure to use the same Kubernetes version here as was used to build the GCE image
  1015  export KUBERNETES_VERSION=1.23.3
  1016  # This is the image you built. See https://github.com/kubernetes-sigs/image-builder
  1017  export IMAGE_ID=projects/$GCP_PROJECT/global/images/<built image>
  1018  export GCP_CONTROL_PLANE_MACHINE_TYPE=n1-standard-2
  1019  export GCP_NODE_MACHINE_TYPE=n1-standard-2
  1020  export GCP_NETWORK_NAME=<GCP_NETWORK_NAME or default>
  1021  export CLUSTER_NAME="<CLUSTER_NAME>"
  1022  ```
  1023  
  1024  See the [GCP provider] for more information.
  1025  
  1026  {{#/tab }}
  1027  {{#tab IBM Cloud}}
  1028  
  1029  ```bash
  1030  # Required environment variables for VPC
  1031  # VPC region
  1032  export IBMVPC_REGION=us-south
  1033  # VPC zone within the region
  1034  export IBMVPC_ZONE=us-south-1
  1035  # ID of the resource group in which the VPC will be created
  1036  export IBMVPC_RESOURCEGROUP=<your-resource-group-id>
  1037  # Name of the VPC
  1038  export IBMVPC_NAME=ibm-vpc-0
export IBMVPC_IMAGE_ID=<your-image-id>
  1040  # Profile for the virtual server instances
  1041  export IBMVPC_PROFILE=bx2-4x16
  1042  export IBMVPC_SSHKEY_ID=<your-sshkey-id>
  1043  
  1044  # Required environment variables for PowerVS
  1045  export IBMPOWERVS_SSHKEY_NAME=<your-ssh-key>
  1046  # Internal and external IP of the network
  1047  export IBMPOWERVS_VIP=<internal-ip>
  1048  export IBMPOWERVS_VIP_EXTERNAL=<external-ip>
  1049  export IBMPOWERVS_VIP_CIDR=29
  1050  export IBMPOWERVS_IMAGE_NAME=<your-capi-image-name>
  1051  # ID of the PowerVS service instance
  1052  export IBMPOWERVS_SERVICE_INSTANCE_ID=<service-instance-id>
  1053  export IBMPOWERVS_NETWORK_NAME=<your-capi-network-name>
  1054  ```
  1055  
  1056  Please visit the [IBM Cloud provider] for more information.
  1057  
  1058  {{#/tab }}
  1059  {{#tab K0smotron}}
  1060  
  1061  Please visit the [K0smotron provider] for more information.
  1062  
  1063  {{#/tab }}
  1064  {{#tab KubeKey}}
  1065  
  1066  ```bash
  1067  # Required environment variables
# The KKZONE is used to specify where to download the binaries. (e.g. "", "cn")
export KKZONE=""
# The SSH user name for all instances. (e.g. root, ubuntu)
export USER_NAME=<your-linux-user>
# The SSH password for all instances.
export PASSWORD=<your-linux-user-password>
# The SSH IP addresses of all instances. (e.g. "[{address: 192.168.100.3}, {address: 192.168.100.4}]")
export INSTANCES=<your-linux-ip-address>
# The cluster control plane VIP. (e.g. "192.168.100.100")
export CONTROL_PLANE_ENDPOINT_IP=<your-control-plane-virtual-ip>
  1078  ```
  1079  
  1080  Please visit the [KubeKey provider] for more information.
  1081  
  1082  {{#/tab }}
  1083  {{#tab KubeVirt}}
  1084  
  1085  ```bash
  1086  export CAPK_GUEST_K8S_VERSION="v1.23.10"
  1087  export CRI_PATH="/var/run/containerd/containerd.sock"
  1088  export NODE_VM_IMAGE_TEMPLATE="quay.io/capk/ubuntu-2004-container-disk:${CAPK_GUEST_K8S_VERSION}"
  1089  ```
  1090  Please visit the [KubeVirt project][KubeVirt provider] for more information.
  1091  
  1092  {{#/tab }}
  1093  {{#tab Metal3}}
  1094  
**Note**: If you are running a CAPM3 release prior to v0.5.0, make sure to export the following
environment variables. They do not need to be exported if you use CAPM3 release v0.5.0 or higher.
  1098  
  1099  ```bash
  1100  # The URL of the kernel to deploy.
  1101  export DEPLOY_KERNEL_URL="http://172.22.0.1:6180/images/ironic-python-agent.kernel"
  1102  # The URL of the ramdisk to deploy.
  1103  export DEPLOY_RAMDISK_URL="http://172.22.0.1:6180/images/ironic-python-agent.initramfs"
  1104  # The URL of the Ironic endpoint.
  1105  export IRONIC_URL="http://172.22.0.1:6385/v1/"
  1106  # The URL of the Ironic inspector endpoint.
  1107  export IRONIC_INSPECTOR_URL="http://172.22.0.1:5050/v1/"
  1108  # Do not use a dedicated CA certificate for Ironic API. Any value provided in this variable disables additional CA certificate validation.
  1109  # To provide a CA certificate, leave this variable unset. If unset, then IRONIC_CA_CERT_B64 must be set.
  1110  export IRONIC_NO_CA_CERT=true
  1111  # Disables basic authentication for Ironic API. Any value provided in this variable disables authentication.
  1112  # To enable authentication, leave this variable unset. If unset, then IRONIC_USERNAME and IRONIC_PASSWORD must be set.
  1113  export IRONIC_NO_BASIC_AUTH=true
  1114  # Disables basic authentication for Ironic inspector API. Any value provided in this variable disables authentication.
  1115  # To enable authentication, leave this variable unset. If unset, then IRONIC_INSPECTOR_USERNAME and IRONIC_INSPECTOR_PASSWORD must be set.
  1116  export IRONIC_INSPECTOR_NO_BASIC_AUTH=true
  1117  ```
  1118  
  1119  Please visit the [Metal3 getting started guide] for more details.
  1120  
  1121  {{#/tab }}
  1122  {{#tab Nutanix}}
  1123  
  1124  A ClusterAPI compatible image must be available in your Nutanix image library. For instructions on how to build a compatible image
  1125  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1126  
  1127  To see all required Nutanix environment variables execute:
  1128  ```bash
  1129  clusterctl generate cluster --infrastructure nutanix --list-variables capi-quickstart
  1130  ```
  1131  
  1132  {{#/tab }}
  1133  {{#tab OpenStack}}
  1134  
A ClusterAPI compatible image must be available in your OpenStack installation. For instructions on how to build a compatible image
  1136  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1137  Depending on your OpenStack and underlying hypervisor the following options might be of interest:
  1138  * [image-builder (OpenStack)](https://image-builder.sigs.k8s.io/capi/providers/openstack.html)
  1139  * [image-builder (vSphere)](https://image-builder.sigs.k8s.io/capi/providers/vsphere.html)
  1140  
  1141  To see all required OpenStack environment variables execute:
  1142  ```bash
  1143  clusterctl generate cluster --infrastructure openstack --list-variables capi-quickstart
  1144  ```
  1145  
  1146  The following script can be used to export some of them:
  1147  ```bash
  1148  wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
  1149  source /tmp/env.rc <path/to/clouds.yaml> <cloud>
  1150  ```
  1151  
  1152  Apart from the script, the following OpenStack environment variables are required.
  1153  ```bash
# The list of nameservers for the OpenStack Subnet being created.
# Set this value when you need to create a new network/subnet and access through DNS is required.
  1156  export OPENSTACK_DNS_NAMESERVERS=<dns nameserver>
  1157  # FailureDomain is the failure domain the machine will be created in.
  1158  export OPENSTACK_FAILURE_DOMAIN=<availability zone name>
  1159  # The flavor reference for the flavor for your server instance.
  1160  export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=<flavor>
  1161  # The flavor reference for the flavor for your server instance.
  1162  export OPENSTACK_NODE_MACHINE_FLAVOR=<flavor>
  1163  # The name of the image to use for your server instance. If the RootVolume is specified, this will be ignored and use rootVolume directly.
  1164  export OPENSTACK_IMAGE_NAME=<image name>
  1165  # The SSH key pair name
  1166  export OPENSTACK_SSH_KEY_NAME=<ssh key pair name>
  1167  # The external network
  1168  export OPENSTACK_EXTERNAL_NETWORK_ID=<external network ID>
  1169  ```
  1170  
  1171  A full configuration reference can be found in [configuration.md](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/docs/book/src/clusteropenstack/configuration.md).
  1172  
  1173  {{#/tab }}
  1174  {{#tab Outscale}}
  1175  
  1176  A ClusterAPI compatible image must be available in your Outscale account. For instructions on how to build a compatible image
  1177  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1178  
  1179  ```bash
  1180  # The outscale root disk iops
  1181  export OSC_IOPS="<IOPS>"
  1182  # The outscale root disk size
  1183  export OSC_VOLUME_SIZE="<VOLUME_SIZE>"
  1184  # The outscale root disk volumeType
  1185  export OSC_VOLUME_TYPE="<VOLUME_TYPE>"
  1186  # The outscale key pair
  1187  export OSC_KEYPAIR_NAME="<KEYPAIR_NAME>"
  1188  # The outscale subregion name
  1189  export OSC_SUBREGION_NAME="<SUBREGION_NAME>"
  1190  # The outscale vm type
  1191  export OSC_VM_TYPE="<VM_TYPE>"
  1192  # The outscale image name
  1193  export OSC_IMAGE_NAME="<IMAGE_NAME>"
  1194  ```
  1195  
  1196  {{#/tab }}
  1197  {{#tab Proxmox}}
  1198  
  1199  A ClusterAPI compatible image must be available in your Proxmox cluster. For instructions on how to build a compatible VM template
  1200  see [image-builder](https://image-builder.sigs.k8s.io/capi/capi.html).
  1201  
  1202  ```bash
  1203  # The node that hosts the VM template to be used to provision VMs
  1204  export PROXMOX_SOURCENODE="pve"
  1205  # The template VM ID used for cloning VMs
  1206  export TEMPLATE_VMID=100
  1207  # The ssh authorized keys used to ssh to the machines.
  1208  export VM_SSH_KEYS="ssh-ed25519 ..., ssh-ed25519 ..."
  1209  # The IP address used for the control plane endpoint
  1210  export CONTROL_PLANE_ENDPOINT_IP=10.10.10.4
  1211  # The IP ranges for Cluster nodes
  1212  export NODE_IP_RANGES="[10.10.10.5-10.10.10.50, 10.10.10.55-10.10.10.70]"
  1213  # The gateway for the machines network-config.
  1214  export GATEWAY="10.10.10.1"
  1215  # Subnet Mask in CIDR notation for your node IP ranges
  1216  export IP_PREFIX=24
  1217  # The Proxmox network device for VMs
  1218  export BRIDGE="vmbr1"
  1219  # The dns nameservers for the machines network-config.
  1220  export DNS_SERVERS="[8.8.8.8,8.8.4.4]"
  1221  # The Proxmox nodes used for VM deployments
  1222  export ALLOWED_NODES="[pve1,pve2,pve3]"
  1223  ```
  1224  
  1225  For more information about prerequisites and advanced setups for Proxmox, see the [Proxmox getting started guide].
  1226  
  1227  {{#/tab }}
  1228  {{#tab VCD}}
  1229  
  1230  A ClusterAPI compatible image must be available in your VCD catalog. For instructions on how to build and upload a compatible image
  1231  see [CAPVCD](https://github.com/vmware/cluster-api-provider-cloud-director)
  1232  
  1233  To see all required VCD environment variables execute:
  1234  ```bash
  1235  clusterctl generate cluster --infrastructure vcd --list-variables capi-quickstart
  1236  ```
  1237  
  1238  
  1239  {{#/tab }}
  1240  {{#tab vcluster}}
  1241  
  1242  ```bash
  1243  export CLUSTER_NAME=kind
  1244  export CLUSTER_NAMESPACE=vcluster
  1245  export KUBERNETES_VERSION=1.23.4
  1246  export HELM_VALUES="service:\n  type: NodePort"
  1247  ```
  1248  
  1249  Please see the [vcluster installation instructions](https://github.com/loft-sh/cluster-api-provider-vcluster#installation-instructions) for more details.
  1250  
  1251  {{#/tab }}
  1252  {{#tab Virtink}}
  1253  
  1254  To see all required Virtink environment variables execute:
  1255  ```bash
  1256  clusterctl generate cluster --infrastructure virtink --list-variables capi-quickstart
  1257  ```
  1258  
  1259  See the [Virtink provider](https://github.com/smartxworks/cluster-api-provider-virtink) document for more details.
  1260  
  1261  {{#/tab }}
  1262  {{#tab vSphere}}
  1263  
It is required to use an official CAPV machine image for your vSphere VM templates. See [uploading CAPV machine images][capv-upload-images] for instructions on how to do this.
  1265  
  1266  ```bash
  1267  # The vCenter server IP or FQDN
  1268  export VSPHERE_SERVER="10.0.0.1"
  1269  # The vSphere datacenter to deploy the management cluster on
  1270  export VSPHERE_DATACENTER="SDDC-Datacenter"
  1271  # The vSphere datastore to deploy the management cluster on
  1272  export VSPHERE_DATASTORE="vsanDatastore"
  1273  # The VM network to deploy the management cluster on
  1274  export VSPHERE_NETWORK="VM Network"
  1275  # The vSphere resource pool for your VMs
  1276  export VSPHERE_RESOURCE_POOL="*/Resources"
  1277  # The VM folder for your VMs. Set to "" to use the root vSphere folder
  1278  export VSPHERE_FOLDER="vm"
  1279  # The VM template to use for your VMs
  1280  export VSPHERE_TEMPLATE="ubuntu-1804-kube-v1.17.3"
  1281  # The public ssh authorized key on all machines
  1282  export VSPHERE_SSH_AUTHORIZED_KEY="ssh-rsa AAAAB3N..."
  1283  # The certificate thumbprint for the vCenter server
  1284  export VSPHERE_TLS_THUMBPRINT="97:48:03:8D:78:A9..."
  1285  # The storage policy to be used (optional). Set to "" if not required
  1286  export VSPHERE_STORAGE_POLICY="policy-one"
  1287  # The IP address used for the control plane endpoint
  1288  export CONTROL_PLANE_ENDPOINT_IP="1.2.3.4"
  1289  ```
  1290  
  1291  For more information about prerequisites, credentials management, or permissions for vSphere, see the [vSphere getting started guide].
  1292  
  1293  {{#/tab }}
  1294  {{#/tabs }}
  1295  
  1296  #### Generating the cluster configuration
  1297  
For the purpose of this tutorial, we'll name our cluster `capi-quickstart`.
  1299  
  1300  {{#tabs name:"tab-clusterctl-config-cluster" tabs:"Docker, vcluster, KubeVirt, others..."}}
  1301  {{#tab Docker}}
  1302  
  1303  <aside class="note warning">
  1304  
  1305  <h1>Warning</h1>
  1306  
  1307  The Docker provider is not designed for production use and is intended for development environments only.
  1308  
  1309  </aside>
  1310  
  1311  ```bash
  1312  clusterctl generate cluster capi-quickstart --flavor development \
  1313    --kubernetes-version v1.29.0 \
  1314    --control-plane-machine-count=3 \
  1315    --worker-machine-count=3 \
  1316    > capi-quickstart.yaml
  1317  ```
  1318  
  1319  {{#/tab }}
  1320  {{#tab vcluster}}
  1321  
  1322  ```bash
  1323  export CLUSTER_NAME=kind
  1324  export CLUSTER_NAMESPACE=vcluster
  1325  export KUBERNETES_VERSION=1.28.0
  1326  export HELM_VALUES="service:\n  type: NodePort"
  1327  
  1328  kubectl create namespace ${CLUSTER_NAMESPACE}
  1329  clusterctl generate cluster ${CLUSTER_NAME} \
  1330      --infrastructure vcluster \
  1331      --kubernetes-version ${KUBERNETES_VERSION} \
  1332      --target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
  1333  ```
  1334  
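You can watch the vcluster's pods come up in the target namespace on the management cluster while it provisions:

```bash
kubectl get pods -n ${CLUSTER_NAMESPACE} -w
```
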
  1335  {{#/tab }}
  1336  {{#tab KubeVirt}}
  1337  
As described above, in this tutorial we will use a LoadBalancer service to expose the API server of the
workload cluster, so we want to use the load balancer (lb) template rather than the default one. We'll use
clusterctl's `--flavor` flag for that:
  1341  ```bash
  1342  clusterctl generate cluster capi-quickstart \
  1343    --infrastructure="kubevirt" \
  1344    --flavor lb \
  1345    --kubernetes-version ${CAPK_GUEST_K8S_VERSION} \
  1346    --control-plane-machine-count=1 \
  1347    --worker-machine-count=1 \
  1348    > capi-quickstart.yaml
  1349  ```
  1350  
  1351  {{#/tab }}
  1352  {{#tab others...}}
  1353  
  1354  ```bash
  1355  clusterctl generate cluster capi-quickstart \
  1356    --kubernetes-version v1.29.0 \
  1357    --control-plane-machine-count=3 \
  1358    --worker-machine-count=3 \
  1359    > capi-quickstart.yaml
  1360  ```
  1361  
  1362  {{#/tab }}
  1363  {{#/tabs }}
  1364  
This creates a YAML file named `capi-quickstart.yaml` with a predefined list of Cluster API objects: Cluster, Machines,
Machine Deployments, etc.
  1367  
The file can be modified later using your editor of choice.
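
For example, a quick way to see which kinds of objects the generated manifest contains before applying it (the exact list depends on your provider and flavor):

```bash
grep -E '^kind:' capi-quickstart.yaml | sort | uniq -c
```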
  1369  
  1370  See [clusterctl generate cluster] for more details.
  1371  
  1372  #### Apply the workload cluster
  1373  
  1374  When ready, run the following command to apply the cluster manifest.
  1375  
  1376  ```bash
  1377  kubectl apply -f capi-quickstart.yaml
  1378  ```
  1379  
  1380  The output is similar to this:
  1381  
  1382  ```bash
  1383  cluster.cluster.x-k8s.io/capi-quickstart created
  1384  dockercluster.infrastructure.cluster.x-k8s.io/capi-quickstart created
  1385  kubeadmcontrolplane.controlplane.cluster.x-k8s.io/capi-quickstart-control-plane created
  1386  dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-control-plane created
  1387  machinedeployment.cluster.x-k8s.io/capi-quickstart-md-0 created
  1388  dockermachinetemplate.infrastructure.cluster.x-k8s.io/capi-quickstart-md-0 created
  1389  kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/capi-quickstart-md-0 created
  1390  ```
  1391  
  1392  #### Accessing the workload cluster
  1393  
The cluster will now start provisioning. You can check status with:

```bash
kubectl get cluster
```

You should see output similar to this:

```bash
NAME              PHASE         AGE   VERSION
capi-quickstart   Provisioned   8s    v1.29.0
```

You can also get an "at a glance" view of the cluster and its resources by running:

```bash
clusterctl describe cluster capi-quickstart
```
  1412  
  1413  To verify the first control plane is up:
  1414  
  1415  ```bash
  1416  kubectl get kubeadmcontrolplane
  1417  ```
  1418  
You should see output similar to this:
  1420  
  1421  ```bash
  1422  NAME                    CLUSTER           INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE    VERSION
  1423  capi-quickstart-g2trk   capi-quickstart   true                                 3                  3         3             4m7s   v1.29.0
  1424  ```
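
You can also list the individual machines backing the control plane and the workers:

```bash
kubectl get machines
```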
  1425  
  1426  <aside class="note warning">
  1427  
  1428  <h1> Warning </h1>
  1429  
  1430  The control plane won't be `Ready` until we install a CNI in the next step.
  1431  
  1432  </aside>
  1433  
  1434  After the first control plane node is up and running, we can retrieve the [workload cluster] Kubeconfig.
  1435  
  1436  {{#tabs name:"tab-get-kubeconfig" tabs:"Default,Docker"}}
  1437  
  1439  {{#tab Default}}
  1440  
  1441  ```bash
  1442  clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
  1443  ```
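
To confirm the kubeconfig works, you can query the workload cluster's API server (the nodes will report `NotReady` until a CNI is installed in a later step):

```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
```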
  1444  
  1445  {{#/tab }}
  1446  
  1447  {{#tab Docker}}
If you are using Docker Desktop on macOS, Linux, or Windows, use kind to retrieve the kubeconfig. Docker Engine on Linux works with the default clusterctl approach.
  1449  
  1450  ```bash
  1451  kind get kubeconfig --name capi-quickstart > capi-quickstart.kubeconfig
  1452  ```
  1453  
<aside class="note">
  1455  
  1456  Note: To use the default clusterctl method to retrieve kubeconfig for a workload cluster created with the Docker provider when using Docker Desktop see [Additional Notes for the Docker provider](../clusterctl/developers.md#additional-notes-for-the-docker-provider).
  1457  
  1458  </aside>
  1459  
  1460  {{#/tab }}
  1461  {{#/tabs }}
  1462  
  1463  ### Install a Cloud Provider
  1464  
The Kubernetes in-tree cloud provider implementations are being [removed](https://github.com/kubernetes/enhancements/tree/master/keps/sig-cloud-provider/2395-removing-in-tree-cloud-providers) in favor of external cloud providers (also referred to as "out-of-tree"). This requires deploying a new component called the cloud-controller-manager, which is responsible for running all the cloud-specific controllers that were previously run in the kube-controller-manager. To learn more, see [this blog post](https://kubernetes.io/blog/2019/04/17/the-future-of-cloud-providers-in-kubernetes/).
  1466  
  1467  {{#tabs name:"tab-install-cloud-provider" tabs:"Azure"}}
  1468  {{#tab Azure}}
  1469  
  1470  Install the official cloud-provider-azure Helm chart on the workload cluster:
  1471  
  1472  ```bash
helm install --kubeconfig=./capi-quickstart.kubeconfig \
  --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo \
  cloud-provider-azure --generate-name \
  --set infra.clusterName=capi-quickstart \
  --set cloudControllerManager.clusterCIDR="192.168.0.0/16"
  1474  ```
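
Once the chart is installed, you can check that the cloud provider components come up on the workload cluster (the exact pod names depend on the chart version):

```bash
kubectl --kubeconfig=./capi-quickstart.kubeconfig get pods -n kube-system
```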
  1475  
  1476  For more information, see the [CAPZ book](https://capz.sigs.k8s.io/topics/addons.html).
  1477  
  1478  {{#/tab }}
  1479  {{#/tabs }}
  1480  
  1481  ### Deploy a CNI solution
  1482  
  1483  Calico is used here as an example.
  1484  
  1485  {{#tabs name:"tab-deploy-cni" tabs:"Azure,vcluster,KubeVirt,others..."}}
  1486  {{#tab Azure}}
  1487  
  1488  Install the official Calico Helm chart on the workload cluster:
  1489  
  1490  ```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts --kubeconfig=./capi-quickstart.kubeconfig && \
helm install calico projectcalico/tigera-operator --kubeconfig=./capi-quickstart.kubeconfig \
  -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml \
  --namespace tigera-operator --create-namespace
  1493  ```
  1494  
After a short while, our nodes should be running and in the `Ready` state.
Let's check their status using `kubectl get nodes`:
  1497  
  1498  ```bash
  1499  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1500  ```
  1501  
  1502  {{#/tab }}
  1503  {{#tab vcluster}}
  1504  
Calico is not required for vcluster.
  1506  
  1507  {{#/tab }}
  1508  {{#tab KubeVirt}}
  1509  
  1510  Before deploying the Calico CNI, make sure the VMs are running:
  1511  ```bash
  1512  kubectl get vm
  1513  ```
  1514  
  1515  If our new VMs are running, we should see a response similar to this:
  1516  
  1517  ```text
  1518  NAME                                  AGE    STATUS    READY
  1519  capi-quickstart-control-plane-7s945   167m   Running   True
  1520  capi-quickstart-md-0-zht5j            164m   Running   True
  1521  ```
  1522  
We can also inspect the virtual machine instances:
  1524  ```bash
  1525  kubectl get vmi
  1526  ```
  1527  The output will be similar to:
  1528  ```text
  1529  NAME                                  AGE    PHASE     IP             NODENAME             READY
  1530  capi-quickstart-control-plane-7s945   167m   Running   10.244.82.16   kind-control-plane   True
  1531  capi-quickstart-md-0-zht5j            164m   Running   10.244.82.17   kind-control-plane   True
  1532  ```
  1533  
Since our workload cluster is running within the kind cluster, we need to prevent conflicts between the kind
(management) cluster's CNI and the workload cluster's CNI. The following modifications to the default Calico settings
are enough for these two CNIs to coexist in the same environment.
  1537  
  1538  * Change the CIDR to a non-conflicting range
  1539  * Change the value of the `CLUSTER_TYPE` environment variable to `k8s`
  1540  * Change the value of the `CALICO_IPV4POOL_IPIP` environment variable to `Never`
  1541  * Change the value of the `CALICO_IPV4POOL_VXLAN` environment variable to `Always`
  1542  * Add the `FELIX_VXLANPORT` environment variable with the value of a non-conflicting port, e.g. `"6789"`.
  1543  
The following script downloads the Calico manifest and modifies the required fields. The CIDR and port values are examples.
  1545  ```bash
  1546  curl https://raw.githubusercontent.com/projectcalico/calico/v3.24.4/manifests/calico.yaml -o calico-workload.yaml
  1547  
  1548  sed -i -E 's|^( +)# (- name: CALICO_IPV4POOL_CIDR)$|\1\2|g;'\
  1549  's|^( +)# (  value: )"192.168.0.0/16"|\1\2"10.243.0.0/16"|g;'\
  1550  '/- name: CLUSTER_TYPE/{ n; s/( +value: ").+/\1k8s"/g };'\
  1551  '/- name: CALICO_IPV4POOL_IPIP/{ n; s/value: "Always"/value: "Never"/ };'\
  1552  '/- name: CALICO_IPV4POOL_VXLAN/{ n; s/value: "Never"/value: "Always"/};'\
  1553  '/# Set Felix endpoint to host default action to ACCEPT./a\            - name: FELIX_VXLANPORT\n              value: "6789"' \
  1554  calico-workload.yaml
  1555  ```
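
You can spot-check that the substitutions took effect, for example:

```bash
# Print each VXLAN setting together with the line that follows it
grep -n -A1 'CALICO_IPV4POOL_VXLAN' calico-workload.yaml
```
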
  1556  Now, deploy the Calico CNI on the workload cluster:
  1557  ```bash
  1558  kubectl --kubeconfig=./capi-quickstart.kubeconfig create -f calico-workload.yaml
  1559  ```
  1560  
After a short while, our nodes should be running and in the `Ready` state. Let's check their status using `kubectl get nodes`:
  1562  
  1563  ```bash
  1564  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1565  ```
  1566  
  1567  <aside class="note">
  1568  
  1569  <h1>Troubleshooting</h1>
  1570  
If the nodes don't become ready after a long period, check the pods in the `kube-system` namespace:
  1572  ```bash
  1573  kubectl --kubeconfig=./capi-quickstart.kubeconfig get pod -n kube-system
  1574  ```
  1575  
If the Calico pods are in an image pull error state (`ErrImagePull`), it's probably because of the Docker Hub pull rate limit.
We can try to fix that by adding a secret with our Docker Hub credentials and using it to pull the images;
see [here](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#registry-secret-existing-credentials)
for details.
  1580  
First, create the secret. Note the Docker config file path and adjust it to your local setup.
  1582  ```bash
  1583  kubectl --kubeconfig=./capi-quickstart.kubeconfig create secret generic docker-creds \
  1584      --from-file=.dockerconfigjson=<YOUR DOCKER CONFIG FILE PATH> \
  1585      --type=kubernetes.io/dockerconfigjson \
  1586      -n kube-system
  1587  ```
  1588  
Now, if the `calico-node` pods have a status of `ErrImagePull`, patch their DaemonSet to make them use the new secret to pull images:
  1590  ```bash
  1591  kubectl --kubeconfig=./capi-quickstart.kubeconfig patch daemonset \
  1592      -n kube-system calico-node \
  1593      -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
  1594  ```
  1595  
After a short while, the `calico-node` pods will have `Running` status. If the `calico-kube-controllers` pod is also
in `ErrImagePull` status, patch its Deployment to fix the problem:
  1598  ```bash
  1599  kubectl --kubeconfig=./capi-quickstart.kubeconfig patch deployment \
  1600      -n kube-system calico-kube-controllers \
  1601      -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"docker-creds"}]}}}}'
  1602  ```
  1603  
Check the pods again:
  1605  ```bash
  1606  kubectl --kubeconfig=./capi-quickstart.kubeconfig get pod -n kube-system
  1607  ```
  1608  
Eventually, all the pods in the `kube-system` namespace will be running, and the result should be similar to this:
  1610  ```text
  1611  NAME                                                          READY   STATUS    RESTARTS   AGE
  1612  calico-kube-controllers-c969cf844-dgld6                       1/1     Running   0          50s
  1613  calico-node-7zz7c                                             1/1     Running   0          54s
  1614  calico-node-jmjd6                                             1/1     Running   0          54s
  1615  coredns-64897985d-dspjm                                       1/1     Running   0          3m49s
  1616  coredns-64897985d-pgtgz                                       1/1     Running   0          3m49s
  1617  etcd-capi-quickstart-control-plane-kjjbb                      1/1     Running   0          3m57s
  1618  kube-apiserver-capi-quickstart-control-plane-kjjbb            1/1     Running   0          3m57s
  1619  kube-controller-manager-capi-quickstart-control-plane-kjjbb   1/1     Running   0          3m57s
  1620  kube-proxy-b9g5m                                              1/1     Running   0          3m12s
  1621  kube-proxy-p6xx8                                              1/1     Running   0          3m49s
  1622  kube-scheduler-capi-quickstart-control-plane-kjjbb            1/1     Running   0          3m57s
  1623  ```
  1624  
  1625  </aside>
  1626  
  1627  {{#/tab }}
  1628  {{#tab others...}}
  1629  
  1630  ```bash
  1631  kubectl --kubeconfig=./capi-quickstart.kubeconfig \
  1632    apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
  1633  ```
  1634  
After a short while, our nodes should be running and in the `Ready` state.
Let's check their status using `kubectl get nodes`:
  1637  
  1638  ```bash
  1639  kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
  1640  ```
  1641  ```bash
  1642  NAME                                          STATUS   ROLES           AGE    VERSION
  1643  capi-quickstart-vs89t-gmbld                   Ready    control-plane   5m33s  v1.29.0
  1644  capi-quickstart-vs89t-kf9l5                   Ready    control-plane   6m20s  v1.29.0
  1645  capi-quickstart-vs89t-t8cfn                   Ready    control-plane   7m10s  v1.29.0
  1646  capi-quickstart-md-0-55x6t-5649968bd7-8tq9v   Ready    <none>          6m5s   v1.29.0
  1647  capi-quickstart-md-0-55x6t-5649968bd7-glnjd   Ready    <none>          6m9s   v1.29.0
  1648  capi-quickstart-md-0-55x6t-5649968bd7-sfzp6   Ready    <none>          6m9s   v1.29.0
  1649  ```
  1650  
  1651  {{#/tab }}
  1652  {{#/tabs }}
  1653  
  1654  ### Clean Up
  1655  
Delete the workload cluster:
  1657  ```bash
  1658  kubectl delete cluster capi-quickstart
  1659  ```
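
Cluster deletion can take a few minutes while the underlying infrastructure is torn down; you can watch progress with:

```bash
kubectl get cluster -w
```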
  1660  <aside class="note warning">
  1661  
IMPORTANT: To ensure proper cleanup of your infrastructure, you must always delete the cluster object. Deleting the entire cluster template with `kubectl delete -f capi-quickstart.yaml` might leave pending resources that have to be cleaned up manually.
  1663  </aside>
  1664  
Delete the management cluster:
  1666  ```bash
  1667  kind delete cluster
  1668  ```
  1669  
  1670  ## Next steps
  1671  
  1672  - Create a second workload cluster. Simply follow the steps outlined above, but remember to provide a different name for your second workload cluster.
  1673  - Deploy applications to your workload cluster. Use the [CNI deployment steps](#deploy-a-cni-solution) for pointers.
  1674  - See the [clusterctl] documentation for more detail about clusterctl supported actions.
  1675  
  1676  <!-- links -->
  1677  [Experimental Features]: ../tasks/experimental-features/experimental-features.md
  1678  [AWS provider prerequisites]: https://cluster-api-aws.sigs.k8s.io/topics/using-clusterawsadm-to-fulfill-prerequisites.html
  1679  [AWS provider releases]: https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases
  1680  [Azure Provider Prerequisites]: https://capz.sigs.k8s.io/topics/getting-started.html#prerequisites
  1681  [bootstrap cluster]: ../reference/glossary.md#bootstrap-cluster
  1682  [capa]: https://cluster-api-aws.sigs.k8s.io
  1683  [capv-upload-images]: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md#uploading-the-machine-images
  1684  [clusterawsadm]: https://cluster-api-aws.sigs.k8s.io/clusterawsadm/clusterawsadm.html
  1685  [clusterctl generate cluster]: ../clusterctl/commands/generate-cluster.md
  1686  [clusterctl get kubeconfig]: ../clusterctl/commands/get-kubeconfig.md
  1687  [clusterctl]: ../clusterctl/overview.md
  1688  [Docker]: https://www.docker.com/
  1689  [GCP provider]: https://github.com/kubernetes-sigs/cluster-api-provider-gcp
  1690  [Helm]: https://helm.sh/docs/intro/install/
  1691  [Hetzner provider]: https://github.com/syself/cluster-api-provider-hetzner
  1692  [Hivelocity provider]: https://github.com/hivelocity/cluster-api-provider-hivelocity
  1693  [IBM Cloud provider]: https://github.com/kubernetes-sigs/cluster-api-provider-ibmcloud
  1694  [infrastructure provider]: ../reference/glossary.md#infrastructure-provider
  1695  [kind]: https://kind.sigs.k8s.io/
  1696  [KubeadmControlPlane]: ../developer/architecture/controllers/control-plane.md
  1697  [kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
  1698  [management cluster]: ../reference/glossary.md#management-cluster
  1699  [Metal3 getting started guide]: https://github.com/metal3-io/cluster-api-provider-metal3/blob/master/docs/getting-started.md
  1700  [Metal3 provider]: https://github.com/metal3-io/cluster-api-provider-metal3/
  1701  [K0smotron provider]: https://github.com/k0sproject/k0smotron
  1702  [KubeKey provider]: https://github.com/kubesphere/kubekey
  1703  [KubeVirt provider]: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/
  1704  [KubeVirt]: https://kubevirt.io/
  1705  [oci-provider]: https://oracle.github.io/cluster-api-provider-oci/#getting-started
  1706  [Equinix Metal getting started guide]: https://github.com/kubernetes-sigs/cluster-api-provider-packet#using
[provider]: ../reference/providers.md
  1708  [provider components]: ../reference/glossary.md#provider-components
  1709  [vSphere getting started guide]: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md
  1710  [workload cluster]: ../reference/glossary.md#workload-cluster
  1711  [CAPI Operator quickstart]: ./quick-start-operator.md
  1712  [Proxmox getting started guide]: https://github.com/ionos-cloud/cluster-api-provider-proxmox/blob/main/docs/Usage.md