.. _gs_clustermesh_aks_prep:

**********************************
AKS-to-AKS Clustermesh Preparation
**********************************

This is a step-by-step guide on how to install and prepare
AKS (Azure Kubernetes Service) clusters in BYOCNI (Bring Your Own CNI) mode
to meet the requirements for the clustermesh feature.

This guide describes how to install two AKS clusters in BYOCNI mode and
connect them together via clustermesh. It is not applicable to cross-cloud
clustermesh, since the node IPs are not exposed outside of the Azure cloud.

.. note::

        BYOCNI requires the ``aks-preview`` CLI extension with version >=
        0.5.55, which itself requires an ``az`` CLI version >= 2.32.0.

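If the extension is not installed yet (or is outdated), you can add or
update it with the ``az`` CLI:

.. code-block:: bash

    # Install the aks-preview extension ...
    az extension add --name aks-preview

    # ... or, if it is already installed, update it
    az extension update --name aks-preview

    # Confirm that the az CLI version meets the >= 2.32.0 requirement
    az version
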
Install cluster one
###################

1.  Create a resource group for the cluster (or set the environment variables
    to an existing resource group).

    .. code-block:: bash

        export NAME="$(whoami)-$RANDOM"
        export AZURE_RESOURCE_GROUP="${NAME}-group"

        # westus2 can be changed to any available location (`az account list-locations`)
        az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2

2.  Create a VNet (virtual network).
    Creating a custom VNet is required to ensure that the Node, Pod, and
    Service CIDRs are unique and don't overlap with those of other clusters.

    .. note::
        The example below uses the range ``192.168.10.0/24``, but you can use
        any range except for ``169.254.0.0/16``, ``172.30.0.0/16``,
        ``172.31.0.0/16``, or ``192.0.2.0/24``, which are
        `reserved by Azure <https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#prerequisites>`__.

    .. code-block:: bash

        az network vnet create \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}-cluster-net" \
            --address-prefixes 192.168.10.0/24 \
            --subnet-name "${NAME}-node-subnet" \
            --subnet-prefix 192.168.10.0/24

        # Store the ID of the created subnet
        export NODE_SUBNET_ID=$(az network vnet subnet show \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --vnet-name "${NAME}-cluster-net" \
            --name "${NAME}-node-subnet" \
            --query id \
            -o tsv)

3.  You now have a virtual network and a subnet with the same CIDR. Create an
    AKS cluster without a CNI, requesting to use the custom VNet and subnet.

    During creation, request to use ``"10.10.0.0/16"`` as the pod CIDR and
    ``"10.11.0.0/16"`` as the services CIDR. These can be changed to any ranges
    except for Azure reserved ranges and ranges used by other clusters you
    intend to add to the clustermesh.

    .. code-block:: bash

        az aks create \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}" \
            --network-plugin none \
            --pod-cidr "10.10.0.0/16" \
            --service-cidr "10.11.0.0/16" \
            --dns-service-ip "10.11.0.10" \
            --vnet-subnet-id "${NODE_SUBNET_ID}"

        # Get kubectl credentials; the command merges the new credentials
        # with the existing ~/.kube/config
        az aks get-credentials \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}"

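    Since no CNI is installed yet, the cluster nodes are expected to report a
    ``NotReady`` status at this point; they should become ``Ready`` once
    Cilium is installed in the next step. You can check with:

    .. code-block:: bash

        # Expect NotReady while no CNI is installed
        kubectl get nodes
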
4.  Install Cilium. It is important to give the cluster a unique cluster ID
    and to tell Cilium to use the custom pod CIDR.

    .. parsed-literal::

        cilium install |CHART_VERSION| \
            --set azure.resourceGroup="${AZURE_RESOURCE_GROUP}" \
            --set cluster.id=1 \
            --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.10.0.0/16}'

5.  Check the status of Cilium.

    .. code-block:: bash

        cilium status

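    If you prefer the command to block until Cilium reports a ready state,
    the Cilium CLI also supports a ``--wait`` flag:

    .. code-block:: bash

        # Block until Cilium status converges instead of returning immediately
        cilium status --wait
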
6.  Before configuring cluster two, store the name of the current cluster.

    .. code-block:: bash

        export CLUSTER1=${NAME}


Install cluster two
###################

Installing the second cluster uses the same commands but with slightly
different arguments.

1.  Create a new resource group.

    .. code-block:: bash

        export NAME="$(whoami)-$RANDOM"
        export AZURE_RESOURCE_GROUP="${NAME}-group"

        # eastus2 can be changed to any available location (`az account list-locations`)
        az group create --name "${AZURE_RESOURCE_GROUP}" -l eastus2

2.  Create a VNet in this resource group. Make sure to use a non-overlapping
    prefix.

    .. note::
        The example below uses the range ``192.168.20.0/24``, but you can use
        any range except for ``169.254.0.0/16``, ``172.30.0.0/16``,
        ``172.31.0.0/16``, or ``192.0.2.0/24``, which are
        `reserved by Azure <https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#prerequisites>`__.

    .. code-block:: bash

        az network vnet create \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}-cluster-net" \
            --address-prefixes 192.168.20.0/24 \
            --subnet-name "${NAME}-node-subnet" \
            --subnet-prefix 192.168.20.0/24

        # Store the ID of the created subnet
        export NODE_SUBNET_ID=$(az network vnet subnet show \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --vnet-name "${NAME}-cluster-net" \
            --name "${NAME}-node-subnet" \
            --query id \
            -o tsv)

3.  Create an AKS cluster without a CNI, requesting to use your custom VNet
    and subnet.

    During creation, use ``"10.20.0.0/16"`` as the pod CIDR and
    ``"10.21.0.0/16"`` as the services CIDR. These can be changed to any ranges
    except for Azure reserved ranges and ranges used by other clusters you
    intend to add to the clustermesh.

    .. code-block:: bash

        az aks create \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}" \
            --network-plugin none \
            --pod-cidr "10.20.0.0/16" \
            --service-cidr "10.21.0.0/16" \
            --dns-service-ip "10.21.0.10" \
            --vnet-subnet-id "${NODE_SUBNET_ID}"

        # Get kubectl credentials and add them to ~/.kube/config
        az aks get-credentials \
            --resource-group "${AZURE_RESOURCE_GROUP}" \
            --name "${NAME}"

4.  Install Cilium. It is important to give the cluster a unique cluster ID
    and to tell Cilium to use your custom pod CIDR.

    .. parsed-literal::

        cilium install |CHART_VERSION| \
            --set azure.resourceGroup="${AZURE_RESOURCE_GROUP}" \
            --set cluster.id=2 \
            --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.20.0.0/16}'

5.  Check the status of Cilium.

    .. code-block:: bash

        cilium status

6.  Before configuring peering and clustermesh, store the current cluster
    name.

    .. code-block:: bash

        export CLUSTER2=${NAME}

Peering virtual networks
########################

Virtual networks can't connect to each other by default. You can enable
cross-VNet communication by creating a bi-directional "peering".

Create a peering from cluster one to cluster two using the
following commands.

.. code-block:: bash

    export VNET_ID=$(az network vnet show \
        --resource-group "${CLUSTER2}-group" \
        --name "${CLUSTER2}-cluster-net" \
        --query id -o tsv)

    az network vnet peering create \
        -g "${CLUSTER1}-group" \
        --name "peering-${CLUSTER1}-to-${CLUSTER2}" \
        --vnet-name "${CLUSTER1}-cluster-net" \
        --remote-vnet "${VNET_ID}" \
        --allow-vnet-access

This allows outbound traffic from cluster one to cluster two. To allow
bi-directional traffic, add a peering in the other direction as well.

.. code-block:: bash

    export VNET_ID=$(az network vnet show \
        --resource-group "${CLUSTER1}-group" \
        --name "${CLUSTER1}-cluster-net" \
        --query id -o tsv)

    az network vnet peering create \
        -g "${CLUSTER2}-group" \
        --name "peering-${CLUSTER2}-to-${CLUSTER1}" \
        --vnet-name "${CLUSTER2}-cluster-net" \
        --remote-vnet "${VNET_ID}" \
        --allow-vnet-access

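As an optional sanity check, you can verify that the nodes of both clusters
received their IPs from the expected VNet ranges. This assumes the kubectl
contexts created by ``az aks get-credentials`` kept their default names (the
cluster names):

.. code-block:: bash

    # Internal node IPs should come from 192.168.10.0/24 and
    # 192.168.20.0/24 respectively
    kubectl --context "${CLUSTER1}" get nodes -o wide
    kubectl --context "${CLUSTER2}" get nodes -o wide
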
Node-to-node traffic between clusters is now possible. All requirements for
clustermesh are met. Enabling clustermesh is explained in :ref:`gs_clustermesh`.