# Overview

This section provides examples of addons for self-managed clusters. For managed cluster addons, please go to the [managed cluster specifications](https://capz.sigs.k8s.io/topics/managedcluster.html#specification).

Self-managed cluster addon options covered here:

- CNI - including Calico for IPv4, IPv6, dual-stack, and Flannel
- [External Cloud provider](#external-cloud-provider) - including the Azure File and Azure Disk CSI storage drivers

# CNI

By default, the CNI plugin is not installed for self-managed clusters, so you have to [install your own](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution).

Some of the instructions below use [Helm](https://helm.sh) to install the addons. If you're not familiar with using Helm to manage Kubernetes applications as packages, there's lots of good [Helm documentation on the official website](https://helm.sh/docs/). You can install Helm by following the [official instructions](https://helm.sh/docs/intro/install/).
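
The CIDR lookups in this document run against the management cluster, while the Helm installs target the workload cluster. One way to keep the two apart (a sketch, assuming `clusterctl` is installed and `CLUSTER_NAME` is set) is to write the workload cluster's kubeconfig to a file and pass it explicitly:

```bash
# Fetch the workload cluster's kubeconfig from the management cluster.
clusterctl get kubeconfig "${CLUSTER_NAME}" > "${CLUSTER_NAME}.kubeconfig"

# Pass it explicitly whenever a command should target the workload cluster, e.g.:
#   helm --kubeconfig "./${CLUSTER_NAME}.kubeconfig" ...
#   kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" ...
```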

## Calico

To install [Calico](https://www.tigera.io/project-calico/) on a self-managed cluster using the official Calico Helm chart, run the commands corresponding to the cluster network configuration.

### For IPv4 Clusters

Grab the IPv4 CIDR from your cluster by running this kubectl statement against the management cluster:

```bash
export IPV4_CIDR_BLOCK=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[0]}')
```

Then install the Helm chart on the workload cluster:

```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts && \
helm install calico projectcalico/tigera-operator --version v3.26.1 -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico/values.yaml --set-string "installation.calicoNetwork.ipPools[0].cidr=${IPV4_CIDR_BLOCK}" --namespace tigera-operator --create-namespace
```

### For IPv6 Clusters

Grab the IPv6 CIDR from your cluster by running this kubectl statement against the management cluster:

```bash
export IPV6_CIDR_BLOCK=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[0]}')
```

Then install the Helm chart on the workload cluster:

```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts && \
helm install calico projectcalico/tigera-operator --version v3.26.1 -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico-ipv6/values.yaml --set-string "installation.calicoNetwork.ipPools[0].cidr=${IPV6_CIDR_BLOCK}" --namespace tigera-operator --create-namespace
```

### For Dual-Stack Clusters

Grab the IPv4 and IPv6 CIDRs from your cluster by running this kubectl statement against the management cluster:

```bash
export IPV4_CIDR_BLOCK=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[0]}')
export IPV6_CIDR_BLOCK=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[1]}')
```

Then install the Helm chart on the workload cluster:

```bash
helm repo add projectcalico https://docs.tigera.io/calico/charts && \
helm install calico projectcalico/tigera-operator --version v3.26.1 -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/calico-dual-stack/values.yaml --set-string "installation.calicoNetwork.ipPools[0].cidr=${IPV4_CIDR_BLOCK}","installation.calicoNetwork.ipPools[1].cidr=${IPV6_CIDR_BLOCK}" --namespace tigera-operator --create-namespace
```

<aside class="note">

<h1> Note </h1>

For Windows nodes, you also need to copy the kubeadm-config configmap to the calico-system namespace so the calico-node-windows DaemonSet can find it:

```bash
kubectl create ns calico-system
kubectl get configmap kubeadm-config --namespace=kube-system -o yaml \
| sed 's/namespace: kube-system/namespace: calico-system/' \
| kubectl create -f -
```

</aside>

For more information, see the [official Calico documentation](https://projectcalico.docs.tigera.io/getting-started/kubernetes/helm).
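
To confirm the rollout, check that the operator and Calico pods reach `Running` on the workload cluster (a sketch, using the workload cluster kubeconfig from above):

```bash
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" get pods -n tigera-operator
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" get pods -n calico-system
```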

## Flannel

This section describes how to use [Flannel](https://github.com/flannel-io/flannel) as your CNI solution.

### Modify the Cluster resources

Before deploying the cluster, change the `KubeadmControlPlane` value at `spec.kubeadmConfigSpec.clusterConfiguration.controllerManager.extraArgs.allocate-node-cidrs` to `"true"`:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      controllerManager:
        extraArgs:
          allocate-node-cidrs: "true"
```

#### Modify Flannel config

_NOTE_: This is based on the instructions at <https://github.com/flannel-io/flannel#deploying-flannel-manually>.

You need to make an adjustment to the default Flannel configuration so that the CIDR inside your CAPZ cluster matches the Flannel Network CIDR.

View your capi-cluster.yaml and make note of the Cluster Network CIDR Block. For example:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
```

Download the file at `https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml` and modify the `kube-flannel-cfg` ConfigMap.
Set the `data.net-conf.json.Network` value to match your Cluster Network CIDR Block.

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Edit kube-flannel.yml and change this section so that the `Network` value matches your Cluster CIDR:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
data:
  net-conf.json: |
    {
      "Network": "192.168.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```
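
If you prefer to script that edit, a `sed` one-liner works (a sketch, assuming the upstream manifest still ships with the default `10.244.0.0/16` network and that your Cluster CIDR is `192.168.0.0/16`):

```bash
# Replace the manifest's default flannel Network with the cluster's pod CIDR.
sed -i 's|"Network": "10.244.0.0/16"|"Network": "192.168.0.0/16"|' kube-flannel.yml
```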

Apply kube-flannel.yml to the workload cluster:

```bash
kubectl apply -f kube-flannel.yml
```
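
To verify the rollout (a sketch; current manifests create the `kube-flannel` namespace, while older ones used `kube-system`):

```bash
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" get pods -n kube-flannel
```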

## Using Azure CNI V1

While following the [quick start steps in the Cluster API book](https://cluster-api.sigs.k8s.io/user/quick-start.html#quick-start), Azure CNI v1 can be used in place of Calico as a [container networking interface solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) for your workload cluster.

Artifacts required for Azure CNI:

- [azure-cni.yaml](https://raw.githubusercontent.com/Azure/azure-container-networking/v1.5.3/hack/manifests/cni-installer-v1.yaml)

### Limitations

- Azure CNI v1 is only supported for Linux nodes. Refer to: [CAPZ#3650](https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/3650)

- We can only configure one subnet per control-plane node. Refer to: [CAPZ#3506](https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/3506)

- We can only configure one Network Interface per worker node. Refer to: [Azure-container-networking#1945](https://github.com/Azure/azure-container-networking/issues/1945)

### Update Cluster Configuration

The following resources need to be updated when using `capi-quickstart.yaml` (the default cluster manifest generated while following the Cluster API quick start).

- `kind: AzureCluster`
  - update `spec.networkSpec.subnets` with the name and role of the subnets you want to use in your workload cluster.

  - ```yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    metadata:
      name: ${CLUSTER_NAME}
      namespace: default
    spec:
      .
      .
      networkSpec:
        subnets:
        - name: control-plane-subnet # update this as per your nomenclature
          role: control-plane
        - name: node-subnet # update this as per your nomenclature
          role: node
      .
      .
    ```

- `kind: KubeadmControlPlane` of control plane nodes
  - add `max-pods: "30"` to `spec.kubeadmConfigSpec.initConfiguration.nodeRegistration.kubeletExtraArgs`.
  - add `max-pods: "30"` to `spec.kubeadmConfigSpec.joinConfiguration.nodeRegistration.kubeletExtraArgs`.

  - ```yaml
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: ${CLUSTER_NAME}-control-plane
      namespace: default
    spec:
      kubeadmConfigSpec:
        .
        .
        initConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              max-pods: "30"
              .
              .
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              max-pods: "30"
              .
              .
    ```

- `kind: AzureMachineTemplate` of control-plane
  - Add `networkInterfaces` to the control plane's `AzureMachineTemplate`

  - ```yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureMachineTemplate
    metadata:
      name: ${CLUSTER_NAME}-control-plane
      namespace: default
    spec:
      template:
        spec:
          .
          .
          networkInterfaces:
          - privateIPConfigs: 30
            subnetName: control-plane-subnet
          .
          .
    ```

- `kind: AzureMachineTemplate` of worker node
  - Add `networkInterfaces` to the worker node's `AzureMachineTemplate`

  - ```yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureMachineTemplate
    metadata:
      name: ${CLUSTER_NAME}-md-0
      namespace: default
    spec:
      template:
        spec:
          networkInterfaces:
          - privateIPConfigs: 30
            subnetName: node-subnet
          .
          .
    ```

- `kind: KubeadmConfigTemplate` of worker nodes
  - add `max-pods: "30"` to `spec.template.spec.joinConfiguration.nodeRegistration.kubeletExtraArgs`.

  - ```yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: ${CLUSTER_NAME}-md-0
      namespace: default
    spec:
      template:
        spec:
          .
          .
          joinConfiguration:
            nodeRegistration:
              kubeletExtraArgs:
                max-pods: "30"
                .
                .
    ```
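
Once the workload cluster is provisioned with the configuration above, apply the `azure-cni.yaml` manifest listed under "Artifacts required for Azure CNI" to the workload cluster as its CNI solution (a sketch, using the workload cluster kubeconfig from earlier):

```bash
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" apply -f https://raw.githubusercontent.com/Azure/azure-container-networking/v1.5.3/hack/manifests/cni-installer-v1.yaml

# Watch kube-system for the CNI installer pods to come up (names per the manifest).
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" get pods -n kube-system
```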

### Disable Azure network discovery (if using custom images without image-builder)

By default, Azure assigns secondary IP configurations to the host OS.

This behavior interferes with Azure CNI, which needs those IP configurations to be free so it can allocate them to pod network namespaces/veths.

Simply create a file at `/etc/cloud/cloud.cfg.d/15_azure-vnet.cfg` with:

```yaml
datasource:
  Azure:
    apply_network_config: false
```

For more information, here's a [link](https://github.com/kubernetes-sigs/image-builder/pull/1090) to the entire discussion for context.

# External Cloud Provider

The "external" or "out-of-tree" cloud provider for Azure is the recommended cloud provider for CAPZ clusters. The "in-tree" cloud provider has been deprecated since v1.20 and only bug fixes are allowed in its Kubernetes repository directory.

Below are instructions to install the [external cloud provider](https://github.com/kubernetes-sigs/cloud-provider-azure) components on a self-managed cluster using the official Helm chart. For more information, see the official [`cloud-provider-azure` helm chart documentation](https://github.com/kubernetes-sigs/cloud-provider-azure/tree/master/helm/cloud-provider-azure).

Grab the CIDR ranges from your cluster by running this kubectl statement against the management cluster:

```bash
export CCM_CIDR_BLOCK=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[0]}')
if DUAL_CIDR=$(kubectl get cluster "${CLUSTER_NAME}" -o=jsonpath='{.spec.clusterNetwork.pods.cidrBlocks[1]}' 2> /dev/null); then
  export CCM_CIDR_BLOCK="${CCM_CIDR_BLOCK}\,${DUAL_CIDR}"
fi
```

Then install the Helm chart on the workload cluster:

```bash
helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} --set "cloudControllerManager.clusterCIDR=${CCM_CIDR_BLOCK}"
```

- **Note**:
  When working with **Flatcar machines**, append `--set-string cloudControllerManager.caCertDir=/usr/share/ca-certificates` to the `cloud-provider-azure` Helm command. The Helm command to install cloud-provider-azure for a Flatcar-flavored workload cluster will be:

    ```bash
    helm install --repo https://raw.githubusercontent.com/kubernetes-sigs/cloud-provider-azure/master/helm/repo cloud-provider-azure --generate-name --set infra.clusterName=${CLUSTER_NAME} --set "cloudControllerManager.clusterCIDR=${CCM_CIDR_BLOCK}" --set-string "cloudControllerManager.caCertDir=/usr/share/ca-certificates"
    ```

The Helm chart will pick the right version of `cloud-controller-manager` and `cloud-node-manager` to work with the version of Kubernetes your cluster is running.

After running `helm install`, you should eventually see a set of pods like these in a `Running` state:

```bash
kube-system   cloud-controller-manager                                            1/1     Running   0          41s
kube-system   cloud-node-manager-5pklx                                            1/1     Running   0          26s
kube-system   cloud-node-manager-hbbqt                                            1/1     Running   0          30s
kube-system   cloud-node-manager-mfsdg                                            1/1     Running   0          39s
kube-system   cloud-node-manager-qrz74                                            1/1     Running   0          24s
```
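
As an additional check (a sketch), nodes should shed the `node.cloudprovider.kubernetes.io/uninitialized` taint once the cloud-controller-manager has initialized them:

```bash
# No node should still list the uninitialized taint once the CCM is healthy.
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```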

To learn more about configuring cloud-provider-azure, see [Configuring the Kubernetes Cloud Provider for Azure](./cloud-provider-config.md).

## Storage Drivers

### Azure File CSI Driver

To install the Azure File CSI driver, please refer to the [installation guide](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/install-azurefile-csi-driver.md).

Repository: <https://github.com/kubernetes-sigs/azurefile-csi-driver>
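
The linked guide offers several install methods; a Helm-based sketch (chart repo and names as documented upstream at the time of writing, so double-check the guide) looks like this:

```bash
# Add the upstream chart repo and install the driver into the workload cluster.
helm repo add azurefile-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts
helm install azurefile-csi-driver azurefile-csi-driver/azurefile-csi-driver \
  --namespace kube-system \
  --kubeconfig "./${CLUSTER_NAME}.kubeconfig"
```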

### Azure Disk CSI Driver

To install the Azure Disk CSI driver, please refer to the [installation guide](https://github.com/kubernetes-sigs/azuredisk-csi-driver/blob/master/docs/install-azuredisk-csi-driver.md).

Repository: <https://github.com/kubernetes-sigs/azuredisk-csi-driver>