
     1  # Managed Clusters (AKS)
     2  
     3  - **Feature status:** GA
     4  - **Feature gate:** MachinePool=true
     5  
     6  Cluster API Provider Azure (CAPZ) supports managing Azure
     7  Kubernetes Service (AKS) clusters. CAPZ implements this with three
     8  custom resources:
     9  
    10  - AzureManagedControlPlane
    11  - AzureManagedCluster
    12  - AzureManagedMachinePool
    13  
    14  The combination of AzureManagedControlPlane/AzureManagedCluster
    15  corresponds to provisioning an AKS cluster. AzureManagedMachinePool
    16  corresponds one-to-one with AKS node pools. This also means that
    17  creating an AzureManagedControlPlane requires at least one AzureManagedMachinePool
    18  with `spec.mode` `System`, since AKS expects at least one system pool at creation
time. For more documentation on system node pools, refer to the [AKS Docs](https://learn.microsoft.com/azure/aks/use-system-pools).
    20  
    21  ## Deploy with clusterctl
    22  
    23  A clusterctl flavor exists to deploy an AKS cluster with CAPZ. This
    24  flavor requires the following environment variables to be set before
    25  executing clusterctl.
    26  
    27  ```bash
    28  # Kubernetes values
    29  export CLUSTER_NAME="my-cluster"
    30  export WORKER_MACHINE_COUNT=2
    31  export KUBERNETES_VERSION="v1.27.3"
    32  
    33  # Azure values
    34  export AZURE_LOCATION="southcentralus"
    35  export AZURE_RESOURCE_GROUP="${CLUSTER_NAME}"
    36  ```
    37  
    38  Create a new service principal and save to a local file:
    39  
    40  ```bash
    41  az ad sp create-for-rbac --role Contributor --scopes="/subscriptions/${AZURE_SUBSCRIPTION_ID}" --sdk-auth > sp.json
    42  ```
    43  
Export the following variables in your current shell:
    45  
    46  ```bash
    47  export AZURE_SUBSCRIPTION_ID="$(cat sp.json | jq -r .subscriptionId | tr -d '\n')"
    48  export AZURE_CLIENT_SECRET="$(cat sp.json | jq -r .clientSecret | tr -d '\n')"
    49  export AZURE_CLIENT_ID="$(cat sp.json | jq -r .clientId | tr -d '\n')"
    50  export AZURE_TENANT_ID="$(cat sp.json | jq -r .tenantId | tr -d '\n')"
    51  export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
    52  export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
    53  export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
    54  export CLUSTER_IDENTITY_NAME="cluster-identity"
    55  ```
    56  
Managed clusters require the Cluster API "MachinePool" feature flag to be enabled. You can do that via an environment variable:
    58  
    59  ```bash
    60  export EXP_MACHINE_POOL=true
    61  ```
    62  
    63  Optionally, you can enable the CAPZ "AKSResourceHealth" feature flag as well:
    64  
    65  ```bash
    66  export EXP_AKS_RESOURCE_HEALTH=true
    67  ```
    68  
    69  Create a local kind cluster to run the management cluster components:
    70  
    71  ```bash
    72  kind create cluster
    73  ```
    74  
    75  Create an identity secret on the management cluster:
    76  
    77  ```bash
    78  kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
    79  ```
    80  
    81  Execute clusterctl to template the resources, then apply to your kind management cluster:
    82  
    83  ```bash
    84  clusterctl init --infrastructure azure
    85  clusterctl generate cluster ${CLUSTER_NAME} --kubernetes-version ${KUBERNETES_VERSION} --flavor aks > cluster.yaml
    86  
    87  # assumes an existing management cluster
    88  kubectl apply -f cluster.yaml
    89  
    90  # check status of created resources
    91  kubectl get cluster-api -o wide
    92  ```
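
Once the cluster resources report ready, you can fetch the AKS cluster's kubeconfig from the management cluster and inspect the nodes; a minimal sketch (the output file name is arbitrary):

```bash
# retrieve the workload cluster kubeconfig from the management cluster
clusterctl get kubeconfig ${CLUSTER_NAME} > ${CLUSTER_NAME}.kubeconfig

# verify the AKS nodes register and become Ready
kubectl --kubeconfig=${CLUSTER_NAME}.kubeconfig get nodes -o wide
```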
    93  
    94  ## Specification
    95  
    96  We'll walk through an example to view available options.
    97  
    98  ```yaml
    99  apiVersion: cluster.x-k8s.io/v1beta1
   100  kind: Cluster
   101  metadata:
   102    name: my-cluster
   103  spec:
   104    clusterNetwork:
   105      services:
   106        cidrBlocks:
   107        - 192.168.0.0/16
   108    controlPlaneRef:
   109      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   110      kind: AzureManagedControlPlane
   111      name: my-cluster-control-plane
   112    infrastructureRef:
   113      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   114      kind: AzureManagedCluster
   115      name: my-cluster
   116  ---
   117  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   118  kind: AzureManagedControlPlane
   119  metadata:
   120    name: my-cluster-control-plane
   121  spec:
   122    location: southcentralus
   123    resourceGroupName: foo-bar
   124    sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
   125    subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
   126    version: v1.21.2
   127    networkPolicy: azure # or calico
   128    networkPlugin: azure # or kubenet
   129    sku:
   130      tier: Free # or Standard
   131    addonProfiles:
   132    - name: azureKeyvaultSecretsProvider
   133      enabled: true
   134    - name: azurepolicy
   135      enabled: true
   136  ---
   137  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   138  kind: AzureManagedCluster
   139  metadata:
   140    name: my-cluster
   141  ---
   142  apiVersion: cluster.x-k8s.io/v1beta1
   143  kind: MachinePool
   144  metadata:
   145    name: agentpool0
   146  spec:
   147    clusterName: my-cluster
   148    replicas: 2
   149    template:
   150      spec:
   151        clusterName: my-cluster
   152        infrastructureRef:
   153          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   154          kind: AzureManagedMachinePool
   155          name: agentpool0
   156          namespace: default
   157        version: v1.21.2
   158  ---
   159  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   160  kind: AzureManagedMachinePool
   161  metadata:
   162    name: agentpool0
   163  spec:
   164    mode: System
   165    osDiskSizeGB: 30
   166    sku: Standard_D2s_v3
   167  ---
   168  apiVersion: cluster.x-k8s.io/v1beta1
   169  kind: MachinePool
   170  metadata:
   171    name: agentpool1
   172  spec:
   173    clusterName: my-cluster
   174    replicas: 2
   175    template:
   176      spec:
   177        clusterName: my-cluster
   178        infrastructureRef:
   179          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   180          kind: AzureManagedMachinePool
   181          name: agentpool1
   182          namespace: default
   183        version: v1.21.2
   184  ---
   185  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   186  kind: AzureManagedMachinePool
   187  metadata:
   188    name: agentpool1
   189  spec:
   190    mode: User
   191    osDiskSizeGB: 40
   192    sku: Standard_D2s_v4
   193  ```
   194  
Please note that we don't declare a configuration for the apiserver endpoint. This configuration data will be populated automatically based on the data returned from the AKS API during cluster creation, as `.spec.controlPlaneEndpoint.Host` and `.spec.controlPlaneEndpoint.Port` in both the `AzureManagedCluster` and `AzureManagedControlPlane` resources. Any user-provided data will be ignored and overwritten by data returned from the AKS API.
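
For example, once provisioning completes you can read back the populated endpoint from the management cluster (using the resource name from the example above):

```bash
kubectl get azuremanagedcontrolplane my-cluster-control-plane -o jsonpath='{.spec.controlPlaneEndpoint}'
```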
   196  
   197  The [CAPZ API reference documentation](../reference/v1beta1-api.html) describes all of the available options. See also the AKS API documentation for [Agent Pools](https://learn.microsoft.com/rest/api/aks/agent-pools/create-or-update?tabs=HTTP) and [Managed Clusters](https://learn.microsoft.com/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP).
   198  
   199  The main features for configuration are:
   200  
   201  - [networkPolicy](https://learn.microsoft.com/azure/aks/concepts-network#network-policies)
   202  - [networkPlugin](https://learn.microsoft.com/azure/aks/concepts-network#azure-virtual-networks)
   203  - [addonProfiles](https://learn.microsoft.com/cli/azure/aks/addon?view=azure-cli-latest#az-aks-addon-list-available) - for additional addons not listed below, look for the `*ADDON_NAME` values in [this code](https://github.com/Azure/azure-cli/blob/main/src/azure-cli/azure/cli/command_modules/acs/_consts.py).
   204  
   205  | addon name                | YAML value                |
   206  |---------------------------|---------------------------|
   207  | http_application_routing  | httpApplicationRouting    |
   208  | monitoring                | omsagent                  |
   209  | virtual-node              | aciConnector              |
   210  | kube-dashboard            | kubeDashboard             |
   211  | azure-policy              | azurepolicy               |
   212  | ingress-appgw             | ingressApplicationGateway |
   213  | confcom                   | ACCSGXDevicePlugin        |
   214  | open-service-mesh         | openServiceMesh           |
   215  | azure-keyvault-secrets-provider |  azureKeyvaultSecretsProvider |
   216  | gitops                    | Unsupported?              |
   217  | web_application_routing   | Unsupported?              |
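
As a sketch, enabling one of the addons from the table looks like the following in the `AzureManagedControlPlane` spec. The `config` keys are addon-specific; the Log Analytics workspace resource ID shown here is a placeholder:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  # ... other fields as in the example above ...
  addonProfiles:
  - name: omsagent   # the "monitoring" addon from the table above
    enabled: true
    config:
      logAnalyticsWorkspaceResourceID: /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/<your-resource-group>/providers/microsoft.operationalinsights/workspaces/<your-workspace>
```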
   218  
   219  ### Use an existing Virtual Network to provision an AKS cluster
   220  
   221  If you'd like to deploy your AKS cluster in an existing Virtual Network, but create the cluster itself in a different resource group, you can configure the AzureManagedControlPlane resource with a reference to the existing Virtual Network and subnet. For example:
   222  
   223  ```yaml
   224  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   225  kind: AzureManagedControlPlane
   226  metadata:
   227    name: my-cluster-control-plane
   228  spec:
   229    location: southcentralus
   230    resourceGroupName: foo-bar
   231    sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
   232    subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
   233    version: v1.21.2
   234    virtualNetwork:
   235      cidrBlock: 10.0.0.0/8
   236      name: test-vnet
   237      resourceGroup: test-rg
   238      subnet:
   239        cidrBlock: 10.0.2.0/24
   240        name: test-subnet
   241  ```
   242  
   243  ### Enable AKS features with custom headers (--aks-custom-headers)
   244  
   245  CAPZ no longer supports passing custom headers to AKS APIs with `infrastructure.cluster.x-k8s.io/custom-header-` annotations.
   246  Custom headers are deprecated in AKS in favor of new features first landing in preview API versions:
   247  
   248  https://github.com/Azure/azure-rest-api-specs/pull/18232
   249  
   250  ### Disable Local Accounts in AKS when using Azure Active Directory
   251  
When deploying an AKS cluster, local accounts are enabled by default.
Even when you enable RBAC or Azure AD integration,
`--admin` access still exists as a non-auditable backdoor option.
Disabling local accounts closes that backdoor access to the cluster.
The following example disables local accounts for an AAD-enabled cluster:
   257  
   258  ```yaml
   259  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
   261  metadata:
   262    ...
   263  spec:
   264    aadProfile:
   265      managed: true
   266      adminGroupObjectIDs:
   267      -  00000000-0000-0000-0000-000000000000 # group object id created in azure.
   268    disableLocalAccounts: true
   269    ...
   270  ```
   271  
Note: CAPZ and CAPI require access to the target cluster to maintain and manage it.
Disabling local accounts will cut off direct access to the target cluster.
CAPZ and CAPI can then access the target cluster only via the Service Principal,
so the user has to grant the Service Principal appropriate access to the target cluster.
This can be done by adding the Service Principal to the appropriate group defined in Azure and
adding the corresponding group ID to `spec.aadProfile.adminGroupObjectIDs`.
CAPI and CAPZ will then be able to authenticate via AAD while accessing the target cluster.
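
A sketch of granting that access with the Azure CLI, where the group name and the Service Principal object ID are placeholders:

```bash
# add the Service Principal used by CAPZ to the AAD group referenced
# in spec.aadProfile.adminGroupObjectIDs
az ad group member add --group <your-admin-group> --member-id <service-principal-object-id>
```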
   279  
   280  ### AKS Fleet Integration
   281  
CAPZ supports joining your managed AKS clusters to a single AKS fleet. Azure Kubernetes Fleet Manager (Fleet) enables at-scale management of multiple Azure Kubernetes Service (AKS) clusters. For more documentation on Azure Kubernetes Fleet Manager, refer to the [AKS Docs](https://learn.microsoft.com/azure/kubernetes-fleet/overview).
   283  
To join a CAPZ cluster to an AKS fleet, you must first create an AKS fleet manager. For more information on how to create an AKS fleet manager, refer to the [AKS Docs](https://learn.microsoft.com/en-us/azure/kubernetes-fleet/quickstart-create-fleet-and-members). This fleet manager will be your point of reference for managing any CAPZ clusters that you join to the fleet.
   285  
   286  Once you have created an AKS fleet manager, you can join your CAPZ cluster to the fleet by adding the `fleetsMember` field to your AzureManagedControlPlane resource spec:
   287  
   288  ```yaml
   289  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   290  kind: AzureManagedControlPlane
   291  metadata:
   292    name: ${CLUSTER_NAME}
   293    namespace: default
   294  spec:
   295    fleetsMember:
   296      group: fleet-update-group
   297      managerName: fleet-manager-name
   298      managerResourceGroup: fleet-manager-resource-group
   299  ```
   300  
   301  The `managerName` and `managerResourceGroup` fields are the name and resource group of your AKS fleet manager. The `group` field is the name of the update group for the cluster, not to be confused with the resource group.
   302  
When the `fleetsMember` field is included, CAPZ will create an AKS fleet member resource which will join the CAPZ cluster to the AKS fleet. The AKS fleet member resource will be created in the same resource group as the CAPZ cluster.
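
To verify the membership afterwards, something like the following can be used. It assumes the Azure CLI `fleet` extension is installed; the names correspond to the spec above:

```bash
az fleet member list \
  --resource-group fleet-manager-resource-group \
  --fleet-name fleet-manager-name \
  --output table
```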
   304  
   305  ### AKS Extensions
   306  
CAPZ supports enabling AKS extensions on your managed AKS clusters. Cluster extensions provide an Azure Resource Manager driven experience for installation and lifecycle management of services like Azure Machine Learning or Kubernetes applications on an AKS cluster. For more documentation on AKS extensions, refer to the [AKS Docs](https://learn.microsoft.com/azure/aks/cluster-extensions).
   308  
   309  You can either provision official AKS extensions or Kubernetes applications through Marketplace. Please refer to [AKS Docs](https://learn.microsoft.com/en-us/azure/aks/cluster-extensions#currently-available-extensions) for the list of currently available extensions.
   310  
   311  To add an AKS extension to your managed cluster, simply add the `extensions` field to your AzureManagedControlPlane resource spec:
   312  
   313  ```yaml
   314  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   315  kind: AzureManagedControlPlane
   316  metadata:
   317    name: ${CLUSTER_NAME}
   318    namespace: default
   319  spec:
   320    extensions:
   321    - name: my-extension
   322      extensionType: "TraefikLabs.TraefikProxy"
   323      plan:
   324        name: "traefik-proxy"
   325        product: "traefik-proxy"
   326        publisher: "containous"
   327  ```
   328  
To list all of the available extension types for your cluster, as well as their plan details, use the following Azure CLI command:
   330  
   331  ```bash
   332  az k8s-extension extension-types list-by-cluster --resource-group my-resource-group --cluster-name mycluster --cluster-type managedClusters
   333  ```
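
To check which extensions are already installed on the cluster, a similar command can be used (resource group and cluster name are placeholders):

```bash
az k8s-extension list --resource-group my-resource-group --cluster-name mycluster --cluster-type managedClusters
```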
   334  
   335  For more details, please refer to the [az k8s-extension cli reference](https://learn.microsoft.com/cli/azure/k8s-extension).
   336  
   337  
### Security Profile for AKS clusters
   339  
   340  Example for configuring AzureManagedControlPlane with a security profile:
   341  
   342  ```yaml
   343  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   344  kind: AzureManagedControlPlane
   345  metadata:
   346    name: my-cluster-control-plane
   347  spec:
   348    location: southcentralus
   349    resourceGroupName: foo-bar
   350    sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
   351    subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
   352    version: v1.26.6
   353    identity:
   354      type: UserAssigned
   355      userAssignedIdentityResourceID: /subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/<your-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<your-managed-identity>
   356    oidcIssuerProfile:
   357      enabled: true
   358    securityProfile:
   359      workloadIdentity:
   360        enabled: true
   361      imageCleaner:
   362        enabled: true
   363        intervalHours: 48
   364      azureKeyVaultKms:
   365        enabled: true
   366        keyID: https://key-vault.vault.azure.net/keys/secret-key/00000000000000000
   367      defender:
   368        logAnalyticsWorkspaceResourceID: /subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/<your-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<your-managed-identity>
   369        securityMonitoring:
   370          enabled: true
   371  ```
   372  
   373  ### Enabling Preview API Features for ManagedClusters
   374  
   375  #### :warning: WARNING: This is meant to be used sparingly to enable features for development and testing that are not otherwise represented in the CAPZ API. Misconfiguration that conflicts with CAPZ's normal mode of operation is possible.
   376  
   377  To enable preview features for managed clusters, you can use the `enablePreviewFeatures` field in the `AzureManagedControlPlane` resource spec. To use any of the new fields included in the preview API version, use the `asoManagedClusterPatches` field in the `AzureManagedControlPlane` resource spec and the `asoManagedClustersAgentPoolPatches` field in the `AzureManagedMachinePool` resource spec to patch in the new fields.
   378  
   379  Please refer to the [ASO Docs](https://azure.github.io/azure-service-operator/reference/containerservice/) for the ContainerService API reference for the latest preview fields and their usage.
   380  
   381  Example for enabling preview features for managed clusters:
   382  
   383  ```yaml
   384  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   385  kind: AzureManagedControlPlane
   386  metadata:
   387    name: ${CLUSTER_NAME}
   388    namespace: default
   389  spec:
   390    enablePreviewFeatures: true
   391    asoManagedClusterPatches:
   392    - '{"spec": {"enableNamespaceResources": true}}'
   393  ---
   394  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   395  kind: AzureManagedMachinePool
   396  metadata:
   397    ...
   398  spec:
   399    asoManagedClustersAgentPoolPatches:
   400    - '{"spec": {"enableCustomCATrust": true}}'
   401  ```
   402  
   403  #### OIDC Issuer on AKS
   404  
Setting `AzureManagedControlPlane.Spec.oidcIssuerProfile.enabled` to `true` will enable the OIDC issuer profile for the `AzureManagedControlPlane`. Once enabled, you will see a configmap named `<cluster-name>-aso-oidc-issuer-profile` in the same namespace as the `AzureManagedControlPlane` resource. This configmap will contain the OIDC issuer profile URL under the `oidc-issuer-profile-url` key.
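
For example, a sketch of reading the issuer URL from the management cluster (substitute your cluster name, and add `-n <namespace>` if needed):

```bash
kubectl get configmap <cluster-name>-aso-oidc-issuer-profile -o jsonpath='{.data.oidc-issuer-profile-url}'
```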
   406  
Once the OIDC issuer is enabled on the cluster, disabling it is not supported.
   408  
To learn more about OIDC and AKS, refer to the [AKS Docs on the OIDC issuer](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer).
   410  
   411  
   412  ## Features
   413  
   414  AKS clusters deployed from CAPZ currently only support a limited,
   415  "blessed" configuration. This was primarily to keep the initial
implementation simple. If you'd like to run a managed AKS cluster with CAPZ
   417  and need an additional feature, please open a pull request or issue with
   418  details. We're happy to help!
   419  
   420  ## Best Practices
   421  
   422  A set of best practices for managing AKS clusters is documented here: https://learn.microsoft.com/azure/aks/best-practices
   423  
   424  ## Troubleshooting
   425  
If a user tries to delete a MachinePool which refers to the last system node pool, the AzureManagedMachinePool webhook will reject the deletion, so the deletion timestamp never gets set on the AzureManagedMachinePool. However, the timestamp will be set on the MachinePool, which will be stuck in a deleting state. To recover from this state, manually create a new MachinePool that references the AzureManagedMachinePool, then edit the required references and finalizers to link the new MachinePool to the AzureManagedMachinePool. In the AzureManagedMachinePool, remove the owner reference to the old MachinePool and set it to the new MachinePool. Once the new MachinePool is pointing to the AzureManagedMachinePool, you can delete the old MachinePool by removing its finalizers.

Here is an example:
   429  
   430  ```yaml
   431  # MachinePool deleted
   432  apiVersion: cluster.x-k8s.io/v1beta1
   433  kind: MachinePool
   434  metadata:
   435    finalizers:             # remove finalizers once new object is pointing to the AzureManagedMachinePool
   436    - machinepool.cluster.x-k8s.io
   437    labels:
   438      cluster.x-k8s.io/cluster-name: capz-managed-aks
   439    name: agentpool0
   440    namespace: default
   441    ownerReferences:
   442    - apiVersion: cluster.x-k8s.io/v1beta1
   443      kind: Cluster
   444      name: capz-managed-aks
   445      uid: 152ecf45-0a02-4635-987c-1ebb89055fa2
   446    uid: ae4a235a-f0fa-4252-928a-0e3b4c61dbea
   447  spec:
   448    clusterName: capz-managed-aks
   449    minReadySeconds: 0
   450    providerIDList:
   451    - azure:///subscriptions/9107f2fb-e486-a434-a948-52e2929b6f18/resourceGroups/MC_rg_capz-managed-aks_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool0-10226072-vmss/virtualMachines/0
   452    replicas: 1
   453    template:
   454      metadata: {}
   455      spec:
   456        bootstrap:
   457          dataSecretName: ""
   458        clusterName: capz-managed-aks
   459        infrastructureRef:
   460          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   461          kind: AzureManagedMachinePool
   462          name: agentpool0
   463          namespace: default
   464        version: v1.21.2
   465  
   466  ---
   467  # New Machinepool
   468  apiVersion: cluster.x-k8s.io/v1beta1
   469  kind: MachinePool
   470  metadata:
   471    finalizers:
   472    - machinepool.cluster.x-k8s.io
   473    generation: 2
   474    labels:
   475      cluster.x-k8s.io/cluster-name: capz-managed-aks
   476    name: agentpool2    # change the name of the machinepool
   477    namespace: default
   478    ownerReferences:
   479    - apiVersion: cluster.x-k8s.io/v1beta1
   480      kind: Cluster
   481      name: capz-managed-aks
   482      uid: 152ecf45-0a02-4635-987c-1ebb89055fa2
   483    # uid: ae4a235a-f0fa-4252-928a-0e3b4c61dbea     # remove the uid set for machinepool
   484  spec:
   485    clusterName: capz-managed-aks
   486    minReadySeconds: 0
   487    providerIDList:
   488    - azure:///subscriptions/9107f2fb-e486-a434-a948-52e2929b6f18/resourceGroups/MC_rg_capz-managed-aks_eastus/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool0-10226072-vmss/virtualMachines/0
   489    replicas: 1
   490    template:
   491      metadata: {}
   492      spec:
   493        bootstrap:
   494          dataSecretName: ""
   495        clusterName: capz-managed-aks
   496        infrastructureRef:
   497          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   498          kind: AzureManagedMachinePool
   499          name: agentpool0
   500          namespace: default
   501        version: v1.21.2
   502  ```
   503  
   504  ## Joining self-managed VMSS nodes to an AKS control plane
   505  
   506  <aside class="note warning">
   507  
   508  <h1> Warning </h1>
   509  
   510  This is not an officially supported AKS scenario. It is meant to facilitate development and testing of alpha/beta Kubernetes features. Please use at your own risk.
   511  
   512  </aside>
   513  
   514  ### Creating the MachinePool
   515  
   516  You can add a self-managed VMSS node pool to any CAPZ-managed AKS cluster by applying the following resources to the management cluster:
   517  
   518  ```yaml
   519  apiVersion: cluster.x-k8s.io/v1beta1
   520  kind: MachinePool
   521  metadata:
   522    name: ${CLUSTER_NAME}-vmss
   523    namespace: default
   524  spec:
   525    clusterName: ${CLUSTER_NAME}
   526    replicas: ${WORKER_MACHINE_COUNT}
   527    template:
   528      spec:
   529        bootstrap:
   530          configRef:
   531            apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
   532            kind: KubeadmConfig
   533            name: ${CLUSTER_NAME}-vmss
   534        clusterName: ${CLUSTER_NAME}
   535        infrastructureRef:
   536          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   537          kind: AzureMachinePool
   538          name: ${CLUSTER_NAME}-vmss
   539        version: ${KUBERNETES_VERSION}
   540  ---
   541  apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
   542  kind: AzureMachinePool
   543  metadata:
   544    name: ${CLUSTER_NAME}-vmss
   545    namespace: default
   546  spec:
   547    location: ${AZURE_LOCATION}
   548    strategy:
   549      rollingUpdate:
   550        deletePolicy: Oldest
   551        maxSurge: 25%
   552        maxUnavailable: 1
   553      type: RollingUpdate
   554    template:
   555      osDisk:
   556        diskSizeGB: 30
   557        managedDisk:
   558          storageAccountType: Premium_LRS
   559        osType: Linux
   560      sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
   561      vmSize: ${AZURE_NODE_MACHINE_TYPE}
   562  ---
   563  apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
   564  kind: KubeadmConfig
   565  metadata:
   566    name: ${CLUSTER_NAME}-vmss
   567    namespace: default
   568  spec:
   569    files:
   570    - contentFrom:
   571        secret:
   572          key: worker-node-azure.json
   573          name: ${CLUSTER_NAME}-vmss-azure-json
   574      owner: root:root
   575      path: /etc/kubernetes/azure.json
   576      permissions: "0644"
   577    - contentFrom:
   578        secret:
   579          key: value
   580          name: ${CLUSTER_NAME}-kubeconfig
   581      owner: root:root
   582      path: /etc/kubernetes/admin.conf
   583      permissions: "0644"
   584    joinConfiguration:
   585      discovery:
   586        file:
   587          kubeConfigPath: /etc/kubernetes/admin.conf
   588      nodeRegistration:
   589        kubeletExtraArgs:
   590          cloud-provider: external
   591        name: '{{ ds.meta_data["local_hostname"] }}'
   592    preKubeadmCommands:
   593    - kubeadm init phase upload-config all
```
   595  
   596  ### Installing Addons
   597  
   598  In order for the nodes to become ready, you'll need to install Cloud Provider Azure and a CNI.
   599  
   600  AKS will install Cloud Provider Azure on the self-managed nodes as long as they have the appropriate labels. You can add the required label on the nodes by running the following command on the AKS cluster:
   601  
   602  ```bash
   603  kubectl label node <node name> kubernetes.azure.com/cluster=<nodeResourceGroupName>
   604  ```
   605  
   606  Repeat this for each node in the MachinePool.
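
A sketch of labeling every node in the pool in one pass, assuming the node names share the MachinePool name as a prefix and `<nodeResourceGroupName>` is the AKS node resource group (typically `MC_<resource-group>_<cluster>_<location>`):

```bash
for node in $(kubectl get nodes -o name | grep "${CLUSTER_NAME}-vmss"); do
  kubectl label "${node}" kubernetes.azure.com/cluster=<nodeResourceGroupName>
done
```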
   607  
   608  <aside class="note">
   609  
   610  <h1> Warning </h1>
   611  
Note: CAPI does not currently support propagating labels from the MachinePool to the nodes; in the future this could be part of the MachinePool definition.
   613  
   614  </aside>
   615  
You can install the CNI of your choice. For example, to install Azure CNI, run the following command on the AKS cluster:
   617  
   618  ```bash
   619  kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-azure/main/templates/addons/azure-cni-v1.yaml
   620  ```
   621  
   622  ### Notes
   623  
   624  Some notes about how this works under the hood:
   625  
- CAPZ will fetch the kubeconfig for the AKS cluster and store it in a secret named `${CLUSTER_NAME}-kubeconfig` in the management cluster. That secret is then used for discovery by the `KubeadmConfig` resource (see the example after this list).
   627  - You can customize the `MachinePool`, `AzureMachinePool`, and `KubeadmConfig` resources to your liking. The example above is just a starting point. Note that the key configurations to keep are in the `KubeadmConfig` resource, namely the `files`, `joinConfiguration`, and `preKubeadmCommands` sections.
   628  - The `KubeadmConfig` resource will be used to generate a `kubeadm join` command that will be executed on each node in the VMSS. It uses the cluster kubeconfig for discovery. The `kubeadm init phase upload-config all` is run as a preKubeadmCommand to ensure that the kubeadm and kubelet configurations are uploaded to a ConfigMap. This step would normally be done by the `kubeadm init` command, but since we're not running `kubeadm init` we need to do it manually.
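
As a quick check, the stored kubeconfig can be inspected on the management cluster; a sketch, where the `value` key matches the secret key referenced by the `KubeadmConfig` above:

```bash
kubectl get secret ${CLUSTER_NAME}-kubeconfig -o jsonpath='{.data.value}' | base64 -d | head
```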