
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _gsg_ipam_crd_multi_pool:

*******************************************
CRD-Backed by Cilium Multi-Pool IPAM (Beta)
*******************************************

.. include:: ../../beta.rst

This is a quick tutorial walking through how to enable multi-pool IPAM backed by the
``CiliumPodIPPool`` CRD. It shows how the components are configured and how the resources
interact with each other, so that users can automate or extend the setup on their own.

For more details, see the section :ref:`ipam_crd_multi_pool`.

Enable Multi-pool IPAM mode
===========================

#. Set up Cilium for Kubernetes using helm with the options:

   * ``--set ipam.mode=multi-pool``
   * ``--set routingMode=native``
   * ``--set autoDirectNodeRoutes=true``
   * ``--set ipv4NativeRoutingCIDR=10.0.0.0/8``
   * ``--set endpointRoutes.enabled=true``
   * ``--set kubeProxyReplacement=true``
   * ``--set bpf.masquerade=true``

   For more details on why each of these options is needed, please refer to
   :ref:`ipam_crd_multi_pool_limitations`.

#. Create the ``default`` pool for IPv4 addresses with the options:

   * ``--set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.cidrs='{10.10.0.0/16}'``
   * ``--set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.maskSize=27``
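
   With these values, per-node allocation blocks of size ``/27`` are carved out of the
   ``10.10.0.0/16`` pool. As a quick sanity check of the resulting capacity (plain shell
   arithmetic, not a Cilium command):

   .. code-block:: shell-session

       $ echo "$(( 1 << (27 - 16) )) blocks of $(( 1 << (32 - 27) )) addresses"
       2048 blocks of 32 addresses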

#. Deploy Cilium and Cilium-Operator. Cilium will automatically wait until the
   ``podCIDR`` is allocated for its node by Cilium Operator.
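
   For reference, the options from the previous steps can be combined into a single ``helm``
   invocation along these lines (the release name, chart repository alias, and target namespace
   below are placeholders; adjust them to your environment):

   .. code-block:: shell-session

       $ helm install cilium cilium/cilium \
           --namespace kube-system \
           --set ipam.mode=multi-pool \
           --set routingMode=native \
           --set autoDirectNodeRoutes=true \
           --set ipv4NativeRoutingCIDR=10.0.0.0/8 \
           --set endpointRoutes.enabled=true \
           --set kubeProxyReplacement=true \
           --set bpf.masquerade=true \
           --set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.cidrs='{10.10.0.0/16}' \
           --set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.maskSize=27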

Validate installation
=====================

#. Validate that Cilium has started up correctly:

   .. code-block:: shell-session

       $ cilium status --wait
           /¯¯\
        /¯¯\__/¯¯\    Cilium:             OK
        \__/¯¯\__/    Operator:           OK
        /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
        \__/¯¯\__/    Hubble Relay:       OK
           \__/       ClusterMesh:        disabled

       [...]

#. Validate that the ``CiliumPodIPPool`` resource for the ``default`` pool was created with the
   CIDRs specified in the ``ipam.operator.autoCreateCiliumPodIPPools.default.*`` Helm values:

   .. code-block:: shell-session

       $ kubectl get ciliumpodippool default -o yaml
       apiVersion: cilium.io/v2alpha1
       kind: CiliumPodIPPool
       metadata:
         name: default
       spec:
         ipv4:
           cidrs:
           - 10.10.0.0/16
           maskSize: 27

#. Create an additional pod IP pool ``mars`` using the following ``CiliumPodIPPool`` resource:

   .. code-block:: shell-session

       $ cat <<EOF | kubectl apply -f -
       apiVersion: cilium.io/v2alpha1
       kind: CiliumPodIPPool
       metadata:
         name: mars
       spec:
         ipv4:
           cidrs:
           - 10.20.0.0/16
           maskSize: 27
       EOF

#. Validate that both pool resources exist:

   .. code-block:: shell-session

       $ kubectl get ciliumpodippools
       NAME      AGE
       default   106s
       mars      7s

#. Create two deployments with two pods each: one allocating from the ``default`` pool and one
   allocating from the ``mars`` pool by way of the ``ipam.cilium.io/ip-pool: mars`` annotation:

   .. code-block:: shell-session

       $ cat <<EOF | kubectl apply -f -
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: nginx-default
       spec:
         selector:
           matchLabels:
             app: nginx-default
         replicas: 2
         template:
           metadata:
             labels:
               app: nginx-default
           spec:
             containers:
             - name: nginx
               image: nginx:1.25.1
               ports:
               - containerPort: 80
       ---
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: nginx-mars
       spec:
         selector:
           matchLabels:
             app: nginx-mars
         replicas: 2
         template:
           metadata:
             labels:
               app: nginx-mars
             annotations:
               ipam.cilium.io/ip-pool: mars
           spec:
             containers:
             - name: nginx
               image: nginx:1.25.1
               ports:
               - containerPort: 80
       EOF

#. Validate that the pods were assigned IPv4 addresses from different CIDRs, as specified in the
   pool definitions:

   .. code-block:: shell-session

       $ kubectl get pods -o wide
       NAME                             READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
       nginx-default-79885c7f58-fdfgf   1/1     Running   0          5s     10.10.10.36   kind-worker2   <none>           <none>
       nginx-default-79885c7f58-qch6b   1/1     Running   0          5s     10.10.10.77   kind-worker    <none>           <none>
       nginx-mars-76766f95f5-d9vzt      1/1     Running   0          5s     10.20.0.20    kind-worker2   <none>           <none>
       nginx-mars-76766f95f5-mtn2r      1/1     Running   0          5s     10.20.0.37    kind-worker    <none>           <none>
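
   To double-check that a given pod IP falls inside a pool's CIDR, a small bash helper can be
   used (``ip_to_int`` and ``in_cidr`` below are illustrative helper functions written for this
   tutorial, not Cilium tooling):

   .. code-block:: shell-session

       $ ip_to_int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }
       $ in_cidr() {
             local ip=$(ip_to_int "$1") net=$(ip_to_int "${2%/*}") prefix=${2#*/}
             local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
             [ $(( ip & mask )) -eq $(( net & mask )) ]
         }
       $ in_cidr 10.20.0.37 10.20.0.0/16 && echo "in mars pool"
       in mars pool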

#. Test connectivity between pods:

   .. code-block:: shell-session

       $ kubectl exec pod/nginx-default-79885c7f58-fdfgf -- curl -s -o /dev/null -w "%{http_code}" http://10.20.0.37
       200

#. Alternatively, the ``ipam.cilium.io/ip-pool`` annotation can also be applied to a namespace:

   .. code-block:: shell-session

       $ kubectl create namespace cilium-test
       $ kubectl annotate namespace cilium-test ipam.cilium.io/ip-pool=mars

   All new pods created in the namespace ``cilium-test`` will be assigned IPv4 addresses from the
   ``mars`` pool. Run the Cilium connectivity tests (which use namespace ``cilium-test`` by default
   to create their workloads) to verify connectivity:

   .. code-block:: shell-session

       $ cilium connectivity test
       [...]
       ✅ All 42 tests (295 actions) successful, 13 tests skipped, 0 scenarios skipped.

   .. note::

      The connectivity test requires a cluster with at least two worker nodes to complete
      successfully.

#. Verify that the connectivity test pods were assigned IPv4 addresses from the ``10.20.0.0/16``
   CIDR defined in the ``mars`` pool:

   .. code-block:: shell-session

       $ kubectl --namespace cilium-test get pods -o wide
       NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE                 NOMINATED NODE   READINESS GATES
       client-6f6788d7cc-7fw9w               1/1     Running   0          8m56s   10.20.0.238   kind-worker          <none>           <none>
       client2-bc59f56d5-hsv2g               1/1     Running   0          8m56s   10.20.0.193   kind-worker          <none>           <none>
       echo-other-node-646976b7dd-5zlr4      2/2     Running   0          8m56s   10.20.1.145   kind-worker2         <none>           <none>
       echo-same-node-58f99d79f4-4k5v4       2/2     Running   0          8m56s   10.20.0.202   kind-worker          <none>           <none>
       ...
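
   To stop assigning new pods in ``cilium-test`` addresses from the ``mars`` pool, the namespace
   annotation can be removed again with the trailing-dash form of ``kubectl annotate``. Pods that
   already exist keep their addresses; only pods created afterwards are affected:

   .. code-block:: shell-session

       $ kubectl annotate namespace cilium-test ipam.cilium.io/ip-pool-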