.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _chaining_azure:

******************
Azure CNI (Legacy)
******************

.. note::

   For most users, the best way to run Cilium on AKS is either
   AKS BYO CNI as described in :ref:`k8s_install_quick`
   or `Azure CNI Powered by Cilium <https://aka.ms/aks/cilium-dataplane>`__.
   This guide provides alternative instructions to run Cilium with Azure CNI
   in a chaining configuration. This is the legacy way of running Azure CNI
   with Cilium, as Azure IPAM is legacy; for more information, see
   :ref:`ipam_azure`.

.. include:: cni-chaining-limitations.rst

.. admonition:: Video
   :class: attention

   If you'd like a video explanation of Azure CNI Powered by Cilium, check out `eCHO episode 70: Azure CNI Powered by Cilium <https://www.youtube.com/watch?v=8it8Hm2F_GM>`__.

This guide explains how to set up Cilium in combination with Azure CNI in a
chaining configuration. In this hybrid mode, the Azure CNI plugin is
responsible for setting up the virtual network devices as well as address
allocation (IPAM). After the initial networking is set up, the Cilium CNI
plugin is called to attach eBPF programs to the network devices set up by
Azure CNI in order to enforce network policies, perform load balancing, and
provide encryption.


Create an AKS + Cilium CNI configuration
========================================

Create a ``chaining.yaml`` file based on the following template to specify the
desired CNI chaining configuration. This :term:`ConfigMap` will be installed as
the CNI configuration file on all nodes and defines the chaining configuration.
In the example below, the Azure CNI, portmap, and Cilium plugins are chained
together.

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cni-configuration
      namespace: kube-system
    data:
      cni-config: |-
        {
          "cniVersion": "0.3.0",
          "name": "azure",
          "plugins": [
            {
              "type": "azure-vnet",
              "mode": "transparent",
              "ipam": {
                "type": "azure-vnet-ipam"
              }
            },
            {
              "type": "portmap",
              "capabilities": {"portMappings": true},
              "snat": true
            },
            {
              "name": "cilium",
              "type": "cilium-cni"
            }
          ]
        }

Deploy the :term:`ConfigMap`:

.. code-block:: shell-session

    kubectl apply -f chaining.yaml


Deploy Cilium
=============

.. include:: k8s-install-download-release.rst

Deploy the Cilium release via Helm:

.. parsed-literal::

    helm install cilium |CHART_RELEASE| \\
      --namespace kube-system \\
      --set cni.chainingMode=generic-veth \\
      --set cni.customConf=true \\
      --set cni.exclusive=false \\
      --set nodeinit.enabled=true \\
      --set cni.configMap=cni-configuration \\
      --set routingMode=native \\
      --set enableIPv4Masquerade=false \\
      --set endpointRoutes.enabled=true

This installs both the main ``cilium`` daemonset and the ``cilium-node-init``
daemonset, which handles tasks such as mounting the eBPF filesystem and
updating the existing Azure CNI plugin to run in 'transparent' mode. An
optional check to confirm that both daemonsets rolled out successfully is
sketched at the end of this guide.

.. include:: k8s-install-restart-pods.rst

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
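
As an optional sanity check, you can confirm that both daemonsets created by
the Helm install above have rolled out successfully. This is a minimal sketch
that assumes the default daemonset names created by the Helm chart
(``cilium`` and ``cilium-node-init``) and the ``kube-system`` namespace used
in this guide; adjust the names if your installation differs.

.. code-block:: shell-session

    # Wait until the Cilium agent daemonset is ready on every node
    kubectl -n kube-system rollout status daemonset/cilium

    # Wait until the node-init daemonset, which mounts the eBPF filesystem
    # and switches the Azure CNI plugin to 'transparent' mode, is ready
    kubectl -n kube-system rollout status daemonset/cilium-node-init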