# Managed Clusters (AKS)

- **Feature status:** GA
- **Feature gate:** MachinePool=true

Cluster API Provider Azure (CAPZ) supports managing Azure
Kubernetes Service (AKS) clusters. CAPZ implements this with three
custom resources:

- AzureManagedControlPlane
- AzureManagedCluster
- AzureManagedMachinePool

The combination of AzureManagedControlPlane/AzureManagedCluster
corresponds to provisioning an AKS cluster. AzureManagedMachinePool
corresponds one-to-one with AKS node pools. This also means that
creating an AzureManagedControlPlane requires at least one AzureManagedMachinePool
with `spec.mode` `System`, since AKS expects at least one system pool at creation
time. For more documentation on system node pools, refer to the [AKS Docs](https://learn.microsoft.com/azure/aks/use-system-pools).

Sections in this document:

- [Deploy with clusterctl](#deploy-with-clusterctl)
- [Specification walkthrough](#specification)
- [Use an existing Virtual Network to provision an AKS cluster](#use-an-existing-virtual-network-to-provision-an-aks-cluster)
- [Disable Local Accounts in AKS when using Azure Active Directory](#disable-local-accounts-in-aks-when-using-azure-active-directory)
- [AKS Fleet Integration](#aks-fleet-integration)
- [AKS Extensions](#aks-extensions)
- [Security Profile for AKS clusters](#security-profile-for-aks-clusters)
- [Enabling Preview API Features for ManagedClusters](#enabling-preview-api-features-for-managedclusters)
- [OIDC Issuer on AKS](#oidc-issuer-on-aks)
- [Enable AKS features with custom headers](#enable-aks-features-with-custom-headers---aks-custom-headers)

## Deploy with clusterctl

A clusterctl flavor exists to deploy an AKS cluster with CAPZ. This
flavor requires the following environment variables to be set before
executing clusterctl.
```bash
# Kubernetes values
export CLUSTER_NAME="my-cluster"
export WORKER_MACHINE_COUNT=2
export KUBERNETES_VERSION="v1.27.3"

# Azure values
export AZURE_LOCATION="southcentralus"
export AZURE_RESOURCE_GROUP="${CLUSTER_NAME}"
```

***NOTE***: `${CLUSTER_NAME}` should adhere to the RFC 1123 standard. This means that it must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character.

Create a new service principal and save it to a local file:

```bash
az ad sp create-for-rbac --role Contributor --scopes="/subscriptions/${AZURE_SUBSCRIPTION_ID}" --sdk-auth > sp.json
```

Export the following variables in your current shell:

```bash
export AZURE_SUBSCRIPTION_ID="$(cat sp.json | jq -r .subscriptionId | tr -d '\n')"
export AZURE_CLIENT_SECRET="$(cat sp.json | jq -r .clientSecret | tr -d '\n')"
export AZURE_CLIENT_ID="$(cat sp.json | jq -r .clientId | tr -d '\n')"
export AZURE_TENANT_ID="$(cat sp.json | jq -r .tenantId | tr -d '\n')"
export AZURE_NODE_MACHINE_TYPE="Standard_D2s_v3"
export AZURE_CLUSTER_IDENTITY_SECRET_NAME="cluster-identity-secret"
export AZURE_CLUSTER_IDENTITY_SECRET_NAMESPACE="default"
export CLUSTER_IDENTITY_NAME="cluster-identity"
```

Managed clusters require the Cluster API "MachinePool" feature flag to be enabled.
You can set it via an environment variable:

```bash
export EXP_MACHINE_POOL=true
```

Create a local kind cluster to run the management cluster components:

```bash
kind create cluster
```

Create an identity secret on the management cluster:

```bash
kubectl create secret generic "${AZURE_CLUSTER_IDENTITY_SECRET_NAME}" --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
```

Execute clusterctl to template the resources, then apply to your kind management cluster:

```bash
clusterctl init --infrastructure azure
clusterctl generate cluster ${CLUSTER_NAME} --kubernetes-version ${KUBERNETES_VERSION} --flavor aks > cluster.yaml

# assumes an existing management cluster
kubectl apply -f cluster.yaml

# check status of created resources
kubectl get cluster-api -o wide
```

## Specification

We'll walk through an example to view available options.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    services:
      cidrBlocks:
        - 192.168.0.0/16
  controlPlaneRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureManagedControlPlane
    name: my-cluster-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureManagedCluster
    name: my-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  location: southcentralus
  resourceGroupName: foo-bar
  sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
  subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
  version: v1.21.2
  networkPolicy: azure # or calico
  networkPlugin: azure # or kubenet
  sku:
    tier: Free # or Standard
  addonProfiles:
    - name: azureKeyvaultSecretsProvider
      enabled: true
    - name: azurepolicy
      enabled: true
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedCluster
metadata:
  name: my-cluster
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: agentpool0
spec:
  clusterName: my-cluster
  replicas: 2
  template:
    spec:
      clusterName: my-cluster
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AzureManagedMachinePool
        name: agentpool0
        namespace: default
      version: v1.21.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedMachinePool
metadata:
  name: agentpool0
spec:
  mode: System
  osDiskSizeGB: 30
  sku: Standard_D2s_v3
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: agentpool1
spec:
  clusterName: my-cluster
  replicas: 2
  template:
    spec:
      clusterName: my-cluster
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: AzureManagedMachinePool
        name: agentpool1
        namespace: default
      version: v1.21.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedMachinePool
metadata:
  name: agentpool1
spec:
  mode: User
  osDiskSizeGB: 40
  sku: Standard_D2s_v4
```

Please note that we don't declare a configuration for the apiserver endpoint. This configuration data will be populated automatically based on the data returned from the AKS API during cluster create as `.spec.controlPlaneEndpoint.Host` and `.spec.controlPlaneEndpoint.Port` in both the `AzureManagedCluster` and `AzureManagedControlPlane` resources. Any user-provided data will be ignored and overwritten by data returned from the AKS API.

The [CAPZ API reference documentation](../reference/v1beta1-api.html) describes all of the available options.
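The `sshPublicKey` value above is a base64-encoded SSH public key. A minimal sketch of producing `AZURE_SSH_PUBLIC_KEY_B64` — the sample key string below is a placeholder; substitute the contents of your own public key file (e.g. `~/.ssh/id_rsa.pub`):

```bash
# Placeholder key material for illustration; use your real public key instead.
SSH_PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ example@host"

# base64-encode and strip newlines (GNU base64 wraps its output at 76 columns).
export AZURE_SSH_PUBLIC_KEY_B64="$(printf '%s' "${SSH_PUBLIC_KEY}" | base64 | tr -d '\n')"
```

Leaving the variable unset is also valid: the template defaults it to an empty string, in which case AKS clusters are created without an SSH key.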
See also the AKS API documentation for [Agent Pools](https://learn.microsoft.com/rest/api/aks/agent-pools/create-or-update?tabs=HTTP) and [Managed Clusters](https://learn.microsoft.com/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP).

The main features for configuration are:

- [networkPolicy](https://learn.microsoft.com/azure/aks/concepts-network#network-policies)
- [networkPlugin](https://learn.microsoft.com/azure/aks/concepts-network#azure-virtual-networks)
- [addonProfiles](https://learn.microsoft.com/cli/azure/aks/addon?view=azure-cli-latest#az-aks-addon-list-available) - for additional addons not listed below, look for the `*ADDON_NAME` values in [this code](https://github.com/Azure/azure-cli/blob/main/src/azure-cli/azure/cli/command_modules/acs/_consts.py).

| addon name                      | YAML value                   |
|---------------------------------|------------------------------|
| http_application_routing        | httpApplicationRouting       |
| monitoring                      | omsagent                     |
| virtual-node                    | aciConnector                 |
| kube-dashboard                  | kubeDashboard                |
| azure-policy                    | azurepolicy                  |
| ingress-appgw                   | ingressApplicationGateway    |
| confcom                         | ACCSGXDevicePlugin           |
| open-service-mesh               | openServiceMesh              |
| azure-keyvault-secrets-provider | azureKeyvaultSecretsProvider |
| gitops                          | Unsupported?                 |
| web_application_routing         | Unsupported?                 |

### Use an existing Virtual Network to provision an AKS cluster

If you'd like to deploy your AKS cluster in an existing Virtual Network, but create the cluster itself in a different resource group, you can configure the AzureManagedControlPlane resource with a reference to the existing Virtual Network and subnet.
For example:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  location: southcentralus
  resourceGroupName: foo-bar
  sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
  subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
  version: v1.21.2
  virtualNetwork:
    cidrBlock: 10.0.0.0/8
    name: test-vnet
    resourceGroup: test-rg
    subnet:
      cidrBlock: 10.0.2.0/24
      name: test-subnet
```

### Disable Local Accounts in AKS when using Azure Active Directory

When deploying an AKS cluster, local accounts are enabled by default.
Even when you enable RBAC or Azure AD integration,
`--admin` access still exists as a non-auditable backdoor option.
Disabling local accounts closes that backdoor access to the cluster.
For example, to disable local accounts on an AAD-enabled cluster:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  ...
spec:
  aadProfile:
    managed: true
    adminGroupObjectIDs:
      - 00000000-0000-0000-0000-000000000000 # group object id created in azure.
  disableLocalAccounts: true
  ...
```

Note: CAPZ and CAPI require access to the target cluster in order to maintain and manage it.
Disabling local accounts cuts off direct access to the target cluster,
so CAPZ and CAPI can then reach it only via the Service Principal.
The user therefore has to grant the Service Principal appropriate access to the target cluster.
You can do that by adding the Service Principal to the appropriate group defined in Azure and
adding the corresponding group ID to `spec.aadProfile.adminGroupObjectIDs`.
CAPI and CAPZ will then be able to authenticate via AAD while accessing the target cluster.
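Putting the steps above together, a complete sketch of an AAD-enabled control plane with local accounts disabled (the names and group object ID are placeholders; the admin group must include the Service Principal that CAPZ uses):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  location: southcentralus
  resourceGroupName: foo-bar
  subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
  version: v1.21.2
  aadProfile:
    managed: true
    adminGroupObjectIDs:
      # AAD group that must include the CAPZ Service Principal
      - 00000000-0000-0000-0000-000000000000
  disableLocalAccounts: true
```

With this in place, all cluster access — including CAPZ's own — authenticates through Azure AD group membership rather than the static `--admin` credentials.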
### AKS Fleet Integration

CAPZ supports joining your managed AKS clusters to a single AKS fleet. Azure Kubernetes Fleet Manager (Fleet) enables at-scale management of multiple Azure Kubernetes Service (AKS) clusters. For more documentation on Azure Kubernetes Fleet Manager, refer to the [AKS Docs](https://learn.microsoft.com/azure/kubernetes-fleet/overview).

To join a CAPZ cluster to an AKS fleet, you must first create an AKS fleet manager. For more information on how to create an AKS fleet manager, refer to the [AKS Docs](https://learn.microsoft.com/en-us/azure/kubernetes-fleet/quickstart-create-fleet-and-members). This fleet manager will be your point of reference for managing any CAPZ clusters that you join to the fleet.

Once you have created an AKS fleet manager, you can join your CAPZ cluster to the fleet by adding the `fleetsMember` field to your AzureManagedControlPlane resource spec:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: ${CLUSTER_NAME}
  namespace: default
spec:
  fleetsMember:
    group: fleet-update-group
    managerName: fleet-manager-name
    managerResourceGroup: fleet-manager-resource-group
```

The `managerName` and `managerResourceGroup` fields are the name and resource group of your AKS fleet manager. The `group` field is the name of the update group for the cluster, not to be confused with the resource group.

When the `fleetsMember` field is included, CAPZ will create an AKS fleet member resource, which will join the CAPZ cluster to the AKS fleet. The AKS fleet member resource will be created in the same resource group as the CAPZ cluster.

### AKS Extensions

CAPZ supports enabling AKS extensions on your managed AKS clusters.
Cluster extensions provide an Azure Resource Manager driven experience for installation and lifecycle management of services like Azure Machine Learning or Kubernetes applications on an AKS cluster. For more documentation on AKS extensions, refer to the [AKS Docs](https://learn.microsoft.com/azure/aks/cluster-extensions).

You can provision either official AKS extensions or Kubernetes applications through Marketplace. Please refer to the [AKS Docs](https://learn.microsoft.com/en-us/azure/aks/cluster-extensions#currently-available-extensions) for the list of currently available extensions.

To add an AKS extension to your managed cluster, simply add the `extensions` field to your AzureManagedControlPlane resource spec:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: ${CLUSTER_NAME}
  namespace: default
spec:
  extensions:
    - name: my-extension
      extensionType: "TraefikLabs.TraefikProxy"
      plan:
        name: "traefik-proxy"
        product: "traefik-proxy"
        publisher: "containous"
```

To list all of the available extensions for your cluster, as well as their plan details, use the following az CLI command:

```bash
az k8s-extension extension-types list-by-cluster --resource-group my-resource-group --cluster-name mycluster --cluster-type managedClusters
```

For more details, please refer to the [az k8s-extension cli reference](https://learn.microsoft.com/cli/azure/k8s-extension).
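The `plan` block above applies to Marketplace applications; official AKS extensions generally don't need one. As a sketch, assuming you want the Flux (GitOps) extension, whose extension type is `microsoft.flux`:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: ${CLUSTER_NAME}
  namespace: default
spec:
  extensions:
    - name: flux
      extensionType: "microsoft.flux"
```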
### Security Profile for AKS clusters

Example for configuring AzureManagedControlPlane with a security profile:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  location: southcentralus
  resourceGroupName: foo-bar
  sshPublicKey: ${AZURE_SSH_PUBLIC_KEY_B64:=""}
  subscriptionID: 00000000-0000-0000-0000-000000000000 # fake uuid
  version: v1.29.4
  identity:
    type: UserAssigned
    userAssignedIdentityResourceID: /subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/<your-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<your-managed-identity>
  oidcIssuerProfile:
    enabled: true
  securityProfile:
    workloadIdentity:
      enabled: true
    imageCleaner:
      enabled: true
      intervalHours: 48
    azureKeyVaultKms:
      enabled: true
      keyID: https://key-vault.vault.azure.net/keys/secret-key/00000000000000000
    defender:
      logAnalyticsWorkspaceResourceID: /subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/<your-resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<your-managed-identity>
      securityMonitoring:
        enabled: true
```

### Enabling Preview API Features for ManagedClusters

#### :warning: WARNING: This is meant to be used sparingly to enable features for development and testing that are not otherwise represented in the CAPZ API. Misconfiguration that conflicts with CAPZ's normal mode of operation is possible.

To enable preview features for managed clusters, you can use the `enablePreviewFeatures` field in the `AzureManagedControlPlane` resource spec.
To use any of the new fields included in the preview API version, use the `asoManagedClusterPatches` field in the `AzureManagedControlPlane` resource spec and the `asoManagedClustersAgentPoolPatches` field in the `AzureManagedMachinePool` resource spec to patch in the new fields.

Please refer to the [ASO Docs](https://azure.github.io/azure-service-operator/reference/containerservice/) for the ContainerService API reference, including the latest preview fields and their usage.

Example for enabling preview features for managed clusters:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: ${CLUSTER_NAME}
  namespace: default
spec:
  enablePreviewFeatures: true
  asoManagedClusterPatches:
    - '{"spec": {"enableNamespaceResources": true}}'
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedMachinePool
metadata:
  ...
spec:
  asoManagedClustersAgentPoolPatches:
    - '{"spec": {"enableCustomCATrust": true}}'
```

### OIDC Issuer on AKS

Setting `AzureManagedControlPlane.Spec.oidcIssuerProfile.enabled` to `true` will enable the OIDC issuer profile for the `AzureManagedControlPlane`. Once enabled, you will see a configmap named `<cluster-name>-aso-oidc-issuer-profile` in the same namespace as the `AzureManagedControlPlane` resource. This configmap will contain the OIDC issuer profile URL under the `oidc-issuer-profile-url` key.

Once the OIDC issuer is enabled on the cluster, disabling it is not supported.

To learn more about OIDC and AKS, refer to the [AKS Docs on OIDC issuer](https://learn.microsoft.com/en-us/azure/aks/use-oidc-issuer).

### Enable AKS features with custom headers (--aks-custom-headers)

CAPZ no longer supports passing custom headers to AKS APIs with `infrastructure.cluster.x-k8s.io/custom-header-` annotations.
Custom headers are deprecated in AKS in favor of new features first landing in preview API versions:

https://github.com/Azure/azure-rest-api-specs/pull/18232