.. _gs_clustermesh_services:

**********************************
Load-balancing & Service Discovery
**********************************

This tutorial guides you through performing load-balancing and service
discovery across multiple Kubernetes clusters when using Cilium.

Prerequisites
#############

You need a functioning Cluster Mesh setup; please follow the guide
:ref:`gs_clustermesh` to set it up.

Load-balancing with Global Services
###################################

To establish load-balancing between clusters, define a Kubernetes service
with an identical name and namespace in each cluster and add the annotation
``service.cilium.io/global: "true"`` to declare it global. Cilium will then
automatically load-balance across the pods in all clusters.

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: rebel-base
    annotations:
      service.cilium.io/global: "true"
  spec:
    type: ClusterIP
    ports:
    - port: 80
    selector:
      name: rebel-base


Disabling Global Service Sharing
################################

By default, a Global Service load-balances across backends in all clusters.
This implicitly sets ``service.cilium.io/shared: "true"``. To prevent a
service's local backends from being shared with other clusters, set this
annotation to ``"false"``.

The example below consumes remote endpoints without sharing its local
endpoints:

.. code-block:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: rebel-base
     annotations:
       service.cilium.io/global: "true"
       service.cilium.io/shared: "false"
   spec:
     type: ClusterIP
     ports:
     - port: 80
     selector:
       name: rebel-base

Synchronizing Kubernetes EndpointSlice (Beta)
#############################################

.. include:: ../../beta.rst

By default, Kubernetes EndpointSlice synchronization is disabled for
non-headless Global Services. To have Cilium expose a Global Service's
remote-cluster endpoints to DNS or any third-party controller, enable
synchronization by adding the annotation
``service.cilium.io/global-sync-endpoint-slices: "true"``.
Cilium will then create Kubernetes EndpointSlices representing the backends
of remote clusters for services carrying that annotation.
For Global Headless Services, this option is enabled by default unless
explicitly opted out by adding the annotation
``service.cilium.io/global-sync-endpoint-slices: "false"``.
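
For instance, a Global Service opting in to EndpointSlice synchronization
could look as follows (the service name and selector are illustrative):

.. code-block:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: rebel-base
     annotations:
       service.cilium.io/global: "true"
       service.cilium.io/global-sync-endpoint-slices: "true"
   spec:
     type: ClusterIP
     ports:
     - port: 80
     selector:
       name: rebel-base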

Note that this feature neither complements nor is required by any other
Cilium feature; it is only needed if third-party controllers must discover
EndpointSlices from remote clusters. For instance, the Cilium Ingress
controller works in a Cluster Mesh without this feature enabled, although
other ingress controllers may require it.

This feature is currently disabled by default via a feature flag.
To install Cilium with EndpointSlice Cluster Mesh synchronization, run:

.. parsed-literal::

   helm install cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --set clustermesh.enableEndpointSliceSynchronization=true

To enable EndpointSlice Cluster Mesh synchronization on an existing Cilium installation, run:

.. parsed-literal::

   helm upgrade cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --reuse-values \\
     --set clustermesh.enableEndpointSliceSynchronization=true
   kubectl -n kube-system rollout restart deployment/cilium-operator

Known Limitations
-----------------

- This is a beta feature; you may experience bugs or shortcomings.
- Hostnames are synchronized as-is, without any conflict-resolution
  mechanism. This means that multiple StatefulSets with a single governing
  Service that synchronize EndpointSlices across multiple clusters should
  have different names. For instance, you can add the cluster name to the
  StatefulSet name (``cluster1-my-statefulset`` instead of
  ``my-statefulset``).

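As an illustrative sketch of the naming scheme above (all names here are
hypothetical), the StatefulSet in cluster 1 embeds the cluster name while
both clusters keep the same governing Service:

.. code-block:: yaml

   apiVersion: apps/v1
   kind: StatefulSet
   metadata:
     # Prefixed with the cluster name to avoid hostname conflicts;
     # cluster 2 would use cluster2-my-statefulset instead.
     name: cluster1-my-statefulset
   spec:
     serviceName: my-service  # same governing Service in both clusters
     replicas: 1
     selector:
       matchLabels:
         app: my-app
     template:
       metadata:
         labels:
           app: my-app
       spec:
         containers:
         - name: app
           image: nginx
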
Deploying a Simple Example Service
==================================

1. In cluster 1, deploy:

   .. parsed-literal::

       kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/clustermesh/global-service-example/cluster1.yaml

2. In cluster 2, deploy:

   .. parsed-literal::

       kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/clustermesh/global-service-example/cluster2.yaml

3. From either cluster, access the global service:

   .. code-block:: shell-session

      kubectl exec -ti deployment/x-wing -- curl rebel-base

   You will see replies from pods in both clusters.

4. In cluster 1, add ``service.cilium.io/shared="false"`` to the existing global service:

   .. code-block:: shell-session

      kubectl annotate service rebel-base service.cilium.io/shared="false" --overwrite

5. From cluster 1, access the global service one more time:

   .. code-block:: shell-session

      kubectl exec -ti deployment/x-wing -- curl rebel-base

   You will still see replies from pods in both clusters.

6. From cluster 2, access the global service again:

   .. code-block:: shell-session

      kubectl exec -ti deployment/x-wing -- curl rebel-base

   You will see replies only from pods in cluster 2, as the global service in cluster 1 is no longer shared.

7. In cluster 1, remove the ``service.cilium.io/shared`` annotation from the existing global service:

   .. code-block:: shell-session

      kubectl annotate service rebel-base service.cilium.io/shared-

8. From either cluster, access the global service:

   .. code-block:: shell-session

      kubectl exec -ti deployment/x-wing -- curl rebel-base

   You will see replies from pods in both clusters again.

Global and Shared Services Reference
####################################

The flow chart below summarizes the overall behavior considering a service present
in two clusters (i.e., Cluster1 and Cluster2), and different combinations of the
``service.cilium.io/global`` and ``service.cilium.io/shared`` annotation values.
The terminating nodes represent the endpoints used in each combination by the two
clusters for the service under examination.

.. image:: images/services_flowchart.svg

..
   The flow chart was generated on https://mermaid.live with code:

   flowchart LR
      Cluster1Global{Cluster1\nGlobal?}-->|yes|Cluster2Global{Cluster2\nGlobal?}
      Cluster2Global-->|yes|Cluster1Shared{Cluster1\nShared?}

      Cluster1Shared-->|yes|Cluster2Shared{Cluster2\nShared?}
      Cluster2Shared-->|yes|Cluster1BothCluster2Both[Cluster1: Local + Remote\nCluster2: Local + Remote]
      Cluster2Shared-->|no|Cluster1SelfClusterBoth[Cluster1: Local only\nCluster2: Local + Remote]

      Cluster1Shared-->|no|Cluster2Shared2{Cluster2\nShared?}
      Cluster2Shared2-->|yes|Cluster1BothCluster2Self[Cluster1: Local + Remote\nCluster2: Local only]
      Cluster2Shared2-->|no|Cluster1SelfCluster2Self[Cluster1: Local only\nCluster2: Local only]

      Cluster1Global-->|no|Cluster1SelfCluster2Self
      Cluster2Global-->|no|Cluster1SelfCluster2Self

Limitations
###########

* Global NodePort services load-balance across both local and remote backends only
  if Cilium is configured to replace kube-proxy (either ``kubeProxyReplacement=true``
  or ``nodePort.enabled=true``). Otherwise, only local backends are eligible for
  load-balancing when accessed through the NodePort.

* Global services accessed by a node, or by a pod running in the host network,
  load-balance across both local and remote backends only if Cilium is configured
  to replace kube-proxy (``kubeProxyReplacement=true``). This limitation can be
  overcome by enabling SocketLB in the host namespace: ``socketLB.enabled=true``,
  ``socketLB.hostNamespaceOnly=true``. Otherwise, only local backends are eligible
  for load-balancing.
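
As a sketch, assuming a Helm-based installation like the earlier examples,
the SocketLB workaround above could be applied to an existing installation
as follows (adjust to your own values):

.. parsed-literal::

   helm upgrade cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --reuse-values \\
     --set socketLB.enabled=true \\
     --set socketLB.hostNamespaceOnly=true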