
# clusterctl for Developers

This document describes how to use `clusterctl` during the development workflow.

## Prerequisites

* A Cluster API development setup (go, git, kind v0.9 or newer, Docker v19.03 or newer, etc.)
* A local clone of the Cluster API GitHub repository
* A local clone of the GitHub repositories for the providers you want to install (see the example layout below)
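
For example, assuming the provider clones live next to the Cluster API clone (matching the `../cluster-api-provider-aws` path used later on this page), the layout could be created with:

```bash
# Illustrative only: clone Cluster API and one provider repository side by side.
git clone https://github.com/kubernetes-sigs/cluster-api.git
git clone https://github.com/kubernetes-sigs/cluster-api-provider-aws.git
```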

## Build clusterctl

From the root of the local copy of Cluster API, you can build the `clusterctl` binary by running:

```bash
make clusterctl
```

The output of the build is saved in the `bin/` folder; in order to use it you have to specify
the full path, create an alias, or copy it into a folder under your `$PATH`.
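
For example, a quick way to make the freshly built binary the one used for the rest of this page is to create an alias (run from the root of the Cluster API repository; the alias name is just a suggestion):

```bash
# Point the clusterctl command at the locally built binary for this shell session.
alias clusterctl="$(pwd)/bin/clusterctl"
clusterctl version
```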

## Use local artifacts

Clusterctl by default uses artifacts published in the [providers repositories];
during the development workflow you may want to use artifacts from your local workstation.

There are two options to do so:

* Use the [overrides layer], when you want to override a single published artifact with a local one.
* Create a local repository, when you want to avoid using published artifacts and use the local ones instead.

If you want to create a local artifact, follow these instructions:

### Build artifacts locally

In order to build artifacts for the CAPI core provider, the kubeadm bootstrap provider, the kubeadm control plane provider and the Docker infrastructure provider:

```bash
make docker-build REGISTRY=gcr.io/k8s-staging-cluster-api PULL_POLICY=IfNotPresent
```
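
The resulting controller images are tagged `dev` by default; you can quickly confirm they were built with:

```bash
# List the locally built provider images (registry taken from the make command above).
docker images | grep k8s-staging-cluster-api
```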

### Create a clusterctl-settings.json file

Next, create a `clusterctl-settings.json` file and place it in your local copy
of Cluster API. This file will be used by [create-local-repository.py](#create-the-local-repository). Here is an example:

```json
{
  "providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm", "infrastructure-aws", "infrastructure-docker"],
  "provider_repos": ["../cluster-api-provider-aws"]
}
```

**providers** (Array[]String, default=[]): A list of the providers to enable.
See [available providers](#available-providers) for more details.

**provider_repos** (Array[]String, default=[]): A list of paths to all the providers you want to use. Each provider must have
a `clusterctl-settings.json` file describing how to build the provider assets.

### Create the local repository

Run the create-local-repository hack from the root of the local copy of Cluster API:

```bash
cmd/clusterctl/hack/create-local-repository.py
```

The script reads from the source folders for the providers you want to install, builds the providers' assets,
and places them in a local repository folder located under `$XDG_CONFIG_HOME/cluster-api/dev-repository/`.
Additionally, the command output prints the `clusterctl init` command with all the necessary flags.
The output should be similar to:

```bash
clusterctl local overrides generated from local repositories for the cluster-api, bootstrap-kubeadm, control-plane-kubeadm, infrastructure-docker, infrastructure-aws providers.
in order to use them, please run:

clusterctl init \
   --core cluster-api:v0.3.8 \
   --bootstrap kubeadm:v0.3.8 \
   --control-plane kubeadm:v0.3.8 \
   --infrastructure aws:v0.5.0 \
   --infrastructure docker:v0.3.8 \
   --config $XDG_CONFIG_HOME/cluster-api/dev-repository/config.yaml
```

As you might notice, the command is using the `$XDG_CONFIG_HOME/cluster-api/dev-repository/config.yaml` config file,
containing all the required settings to make clusterctl use the local repository (it falls back to `$HOME` if
`$XDG_CONFIG_HOME` is not set on your machine).
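
You can verify that clusterctl picks up the local repository by listing the configured repositories with that config file (the path must match the one printed by the script):

```bash
clusterctl config repositories --config $XDG_CONFIG_HOME/cluster-api/dev-repository/config.yaml
```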

<aside class="note warning">

<h1>Warnings</h1>

You must pass `--config ...` to all the clusterctl commands you are running during your dev session.

The above config file changes the location of the [overrides layer] folder, thus ensuring
your dev session isn't hijacked by other local artifacts.

With the exception of the Docker and the in-memory providers, the local repository folder does not contain cluster templates,
so the `clusterctl generate cluster` command will fail if you don't copy a template into the local repository.
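
For example, a template could be copied into the provider's folder in the dev repository; the path and version below are only illustrative and must match the `clusterctl init` command printed by the script:

```bash
# Hypothetical example: make a cluster template available for the aws provider in the dev repository.
cp ../cluster-api-provider-aws/templates/cluster-template.yaml \
   "$XDG_CONFIG_HOME/cluster-api/dev-repository/infrastructure-aws/v0.5.0/cluster-template.yaml"
```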

</aside>

<aside class="note warning">

<h1>Nightly builds</h1>

If you want to run your tests using a Cluster API nightly build, you can run the hack passing the nightly build folder
(change the date at the end of the bucket name according to your needs):

```bash
cmd/clusterctl/hack/create-local-repository.py https://storage.googleapis.com/artifacts.k8s-staging-cluster-api.appspot.com/components/nightly_main_20240101
```

Note: this works only with core Cluster API nightly builds.

</aside>

#### Available providers

The following providers are currently defined in the script:

* `cluster-api`
* `bootstrap-kubeadm`
* `control-plane-kubeadm`
* `infrastructure-docker`

More providers can be added by editing the `clusterctl-settings.json` in your local copy of Cluster API;
please note that each `provider_repo` should have its own `clusterctl-settings.json` describing how to build the provider assets, e.g.

```json
{
  "name": "infrastructure-aws",
  "config": {
    "componentsFile": "infrastructure-components.yaml",
    "nextVersion": "v0.5.0"
  }
}
```

## Create a kind management cluster

[kind] can provide a Kubernetes cluster to be used as a management cluster.
See [Install and/or configure a Kubernetes cluster] for more information.
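
For example, when you plan to install the Docker infrastructure provider, the kind cluster needs access to the Docker socket on the host; a minimal sketch, mirroring the configuration from the quick start, is:

```bash
# Create a kind management cluster that mounts the host Docker socket,
# so the Docker infrastructure provider can manage workload cluster containers.
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
```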

*Before* running clusterctl init, you must ensure all the required images are available in the kind cluster.

This is always the case for images published in an image repository like Docker Hub or gcr.io, but it won't be
the case for images built locally; in this case, you can use `kind load` to move the locally built images into the cluster, e.g.

```bash
kind load docker-image gcr.io/k8s-staging-cluster-api/cluster-api-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/kubeadm-bootstrap-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/kubeadm-control-plane-controller-amd64:dev
kind load docker-image gcr.io/k8s-staging-cluster-api/capd-manager-amd64:dev
```

to make the controller images available for the kubelet in the management cluster.
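
If you want to double-check that the images reached the node, you can list them with `crictl` inside the kind node (the node name below assumes the default kind cluster name):

```bash
# Inspect the images known to containerd on the kind control-plane node.
docker exec kind-control-plane crictl images | grep k8s-staging-cluster-api
```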

When the kind cluster is ready and all the required images are in place, run
the clusterctl init command generated by the create-local-repository.py
script.

Optionally, you may want to check if the components are running properly. The
exact components depend on which providers you have initialized. Below
is an example output with the Docker provider installed.

```bash
kubectl get deploy -A | grep "cap\|cert"
```
```bash
capd-system                         capd-controller-manager                         1/1     1            1           25m
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           25m
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           25m
capi-system                         capi-controller-manager                         1/1     1            1           25m
cert-manager                        cert-manager                                    1/1     1            1           27m
cert-manager                        cert-manager-cainjector                         1/1     1            1           27m
cert-manager                        cert-manager-webhook                            1/1     1            1           27m
```

## Additional Notes for the Docker Provider

### Select the appropriate Kubernetes version

When selecting the `--kubernetes-version`, ensure that the `kindest/node`
image is available.

For example, if there is no image on [docker hub][kind-docker-hub] for version `vX.Y.Z`,
creating a CAPD workload cluster with `--kubernetes-version=vX.Y.Z` will fail.
See [issue 3795] for more details.
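
A quick way to check whether a matching node image exists is to ask the registry for its manifest (replace `vX.Y.Z` with the version you intend to use):

```bash
# Succeeds only if a kindest/node image for that version is published on Docker Hub.
docker manifest inspect kindest/node:vX.Y.Z
```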

### Get the kubeconfig for the workload cluster when using Docker Desktop

For Docker Desktop on macOS, Linux, or Windows, use kind to retrieve the kubeconfig.

```bash
kind get kubeconfig --name capi-quickstart > capi-quickstart.kubeconfig
```

Docker Engine for Linux works with the default clusterctl approach.

```bash
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
```

### Fix kubeconfig when using Docker Desktop and clusterctl

When retrieving the kubeconfig using `clusterctl` with Docker Desktop on macOS or Windows, or with Docker Desktop on Linux (Docker Engine works fine), you'll need to take a few extra steps to get a usable kubeconfig for a workload cluster created with the Docker provider.

```bash
clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
```

To fix the kubeconfig, run:

```bash
# Point the kubeconfig to the exposed port of the load balancer, rather than the inaccessible container IP.
sed -i -e "s/server:.*/server: https:\/\/$(docker port capi-quickstart-lb 6443/tcp | sed "s/0.0.0.0/127.0.0.1/")/g" ./capi-quickstart.kubeconfig
```
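
You can then verify that the workload cluster is reachable through the fixed kubeconfig:

```bash
kubectl --kubeconfig ./capi-quickstart.kubeconfig get nodes
```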

<!-- links -->
[kind]: https://kind.sigs.k8s.io/
[providers repositories]: configuration.md#provider-repositories
[overrides layer]: configuration.md#overrides-layer
[Install and/or configure a Kubernetes cluster]: ../user/quick-start.md#install-andor-configure-a-kubernetes-cluster
[kind-docker-hub]: https://hub.docker.com/r/kindest/node/tags
[issue 3795]: https://github.com/kubernetes-sigs/cluster-api/issues/3795