
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _ipam_gke:

########################
Google Kubernetes Engine
########################

When running Cilium on Google Kubernetes Engine (GKE), the native networking
layer of Google Cloud is used for address management and IP forwarding.

************
Architecture
************

.. image:: gke_ipam_arch.png
    :align: center

When running on GKE, Cilium uses the Kubernetes host-scope IPAM mode. The
Cilium agent is configured to wait until the Kubernetes node resource is
populated with a ``spec.podCIDR`` or ``spec.podCIDRs`` field, as required by
the enabled address families (IPv4/IPv6). See :ref:`k8s_hostscope` for
additional details on this IPAM mode.

The corresponding datapath is described in section :ref:`gke_datapath`.

See the getting started guide :ref:`k8s_install_quick` to install Cilium on
Google Kubernetes Engine (GKE).

*************
Configuration
*************

The GKE IPAM mode can be enabled by setting the Helm option
``ipam.mode=kubernetes`` or by setting the ConfigMap option ``ipam:
kubernetes``.

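For example, the mode can be set via Helm on an existing installation. This is
a sketch: the release name ``cilium``, the repository alias ``cilium/cilium``,
and the ``kube-system`` namespace are assumptions and may differ in your
cluster:

.. code-block:: shell-session

    $ helm upgrade cilium cilium/cilium \
        --namespace kube-system \
        --reuse-values \
        --set ipam.mode=kubernetes

After changing the IPAM mode, the Cilium agent pods must be restarted for the
new setting to take effect.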
***************
Troubleshooting
***************

Validate the exposed PodCIDR field
==================================

Check if the Kubernetes nodes contain a value in the ``podCIDR`` field:

.. code-block:: shell-session

    $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
    gke-cluster4-default-pool-b195a3f3-k431	10.4.0.0/24
    gke-cluster4-default-pool-b195a3f3-zv3p	10.4.1.0/24

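On a dual-stack cluster, where the agent waits for ``spec.podCIDRs`` instead,
the analogous check is the following sketch (the field is only populated when
multiple pod CIDRs are assigned to the node):

.. code-block:: shell-session

    $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'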
Check the Cilium status
=======================

Run ``cilium status`` on the node in question and validate that the CIDR used
for IPAM matches the PodCIDR announced in the Kubernetes node:

.. code-block:: shell-session

    $ kubectl -n kube-system get pods -o wide | grep gke-cluster4-default-pool-b195a3f3-k431
    cilium-lv4xd                       1/1     Running   0          3h8m   10.164.0.112   gke-cluster4-default-pool-b195a3f3-k431   <none>           <none>

    $ kubectl -n kube-system exec -ti cilium-lv4xd -- cilium-dbg status
    KVStore:                Ok   Disabled
    Kubernetes:             Ok   1.14+ (v1.14.10-gke.27) [linux/amd64]
    Kubernetes APIs:        ["CustomResourceDefinition", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement:   Probe   []
    Cilium:                 Ok      OK
    NodeMonitor:            Disabled
    Cilium health daemon:   Ok
    IPAM:                   IPv4: 7/255 allocated from 10.4.0.0/24,
    Controller Status:      36/36 healthy
    Proxy Status:           OK, ip 10.4.0.190, 0 redirects active on ports 10000-20000
    Hubble:                 Disabled
    Cluster health:         2/2 reachable   (2020-04-23T13:46:36Z)
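
The ``7/255`` in the ``IPAM`` line reads as allocated/allocatable addresses for
the node's ``10.4.0.0/24`` podCIDR. As a rough sanity check of the denominator:
a ``/24`` spans ``2^(32-24) = 256`` addresses, and the output above shows one
fewer (255) as allocatable. That arithmetic can be sketched in the shell (the
"one address held back" is inferred from the status output above, not a
documented constant):

.. code-block:: shell-session

    $ PREFIX=24
    $ echo $(( (1 << (32 - PREFIX)) - 1 ))
    255

If the CIDR shown here does not match the node's ``spec.podCIDR``, the agent
may have started before the field was populated; restarting the Cilium pod on
that node makes it re-read the node resource.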