github.com/cilium/cilium@v1.16.2/Documentation/network/concepts/ipam/cluster-pool.rst

.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _ipam_crd_cluster_pool:

#######################
Cluster Scope (Default)
#######################

The cluster-scope IPAM mode assigns per-node PodCIDRs to each node and
allocates IPs using a host-scope allocator on each node. It is thus similar to
the :ref:`k8s_hostscope` mode. The difference is that instead of Kubernetes
assigning the per-node PodCIDRs via the Kubernetes ``v1.Node`` resource, the
Cilium operator will manage the per-node PodCIDRs via the ``v2.CiliumNode``
resource. The advantage of this mode is that it does not depend on Kubernetes
being configured to hand out per-node PodCIDRs.
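
As a quick way to see the difference, you can compare where the per-node range
is stored. A minimal sketch, assuming a node named ``worker-1`` (an
illustrative placeholder):

.. code-block:: shell-session

    # Kubernetes host-scope: the range comes from the v1.Node resource
    kubectl get node worker-1 -o jsonpath='{.spec.podCIDR}'

    # Cluster scope: the range is managed by the operator in the v2.CiliumNode resource
    kubectl get ciliumnode worker-1 -o jsonpath='{.spec.ipam.podCIDRs}'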

************
Architecture
************

.. image:: cluster_pool.png
    :align: center

This is useful if Kubernetes cannot be configured to hand out PodCIDRs or if
more control is needed.

In this mode, the Cilium agent waits on startup until the ``podCIDRs`` ranges
for all enabled address families are made available via the following field of
the ``v2.CiliumNode`` resource:

====================== ==============================
Field                  Description
====================== ==============================
``spec.ipam.podCIDRs`` IPv4 and/or IPv6 PodCIDR range
====================== ==============================

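Once the operator has populated the field, it typically holds one CIDR per
enabled address family. A rough sketch of the relevant excerpt of a
``v2.CiliumNode``, with an illustrative node name and CIDR values:

.. code-block:: shell-session

    kubectl get ciliumnode worker-1 -o yaml
    # abbreviated, illustrative excerpt of the output:
    # apiVersion: cilium.io/v2
    # kind: CiliumNode
    # spec:
    #   ipam:
    #     podCIDRs:
    #     - 10.0.1.0/24
    #     - fd00::100/120
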
*************
Configuration
*************

For a practical tutorial on how to enable this mode in Cilium, see
:ref:`gsg_ipam_crd_cluster_pool`.
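
The tutorial above covers the full steps; as a rough sketch, the mode and the
pool are driven by Helm values along these lines (release name, namespace, pool
range, and mask size are illustrative):

.. code-block:: shell-session

    helm upgrade --install cilium cilium/cilium --namespace kube-system \
        --set ipam.mode=cluster-pool \
        --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.42.0.0/16}' \
        --set ipam.operator.clusterPoolIPv4MaskSize=24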

Expanding the cluster pool
==========================

Don't change any existing elements of the ``clusterPoolIPv4PodCIDRList`` list,
as changes cause unexpected behavior. If the pool is exhausted, add a new
element to the list instead. The minimum mask length is ``/30``, with a mask
length of at least ``/29`` recommended, because the allocator reserves 2 IPs
per CIDR block for the network and broadcast addresses. Changing
``clusterPoolIPv4MaskSize`` is also not possible.
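
For example, if the pool was originally ``10.42.0.0/16`` (illustrative values
throughout), a sketch of expanding it by appending a second element rather than
editing the first could look like this:

.. code-block:: shell-session

    helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
        --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.42.0.0/16,10.43.0.0/16}'

Note that the full list is passed again with the existing element unchanged;
only ``10.43.0.0/16`` is new.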

***************
Troubleshooting
***************

Look for allocation errors
==========================

Check the ``Error`` field in ``status.ipam.operator-status``:

.. code-block:: shell-session

    kubectl get ciliumnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.ipam.operator-status}{"\n"}{end}'

Check for conflicting node CIDRs
================================

``10.0.0.0/8`` is the default pod CIDR. If your node network is in the same
range, you will lose connectivity to other nodes: all egress traffic will be
assumed to target pods on the local node rather than other nodes.
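
To confirm the overlap, compare the node addresses with the PodCIDRs that were
handed out. A minimal sketch using standard ``kubectl`` output:

.. code-block:: shell-session

    # Node internal IPs
    kubectl get nodes -o wide

    # Per-node PodCIDRs assigned by the operator
    kubectl get ciliumnodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.ipam.podCIDRs}{"\n"}{end}'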

You can solve this in one of two ways:

  - Explicitly set ``clusterPoolIPv4PodCIDRList`` to a non-conflicting CIDR
    (see the sketch below)
  - Use a different CIDR for your nodes
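
A sketch of the first option at installation time, assuming ``172.20.0.0/16``
does not overlap with your node network (the value is illustrative):

.. code-block:: shell-session

    helm install cilium cilium/cilium --namespace kube-system \
        --set ipam.mode=cluster-pool \
        --set ipam.operator.clusterPoolIPv4PodCIDRList='{172.20.0.0/16}'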