
     1  .. only:: not (epub or latex or html)
     2  
     3      WARNING: You are looking at unreleased Cilium documentation.
     4      Please use the official rendered version released here:
     5      https://docs.cilium.io
     6  
     7  .. _local-redirect-policy:
     8  
     9  *********************
    10  Local Redirect Policy
    11  *********************
    12  
This document explains how to configure Cilium's Local Redirect Policy, which
enables pod traffic destined to an IP address and port/protocol tuple or a
Kubernetes service to be redirected locally to backend pod(s) within a node,
using eBPF. The namespace of the backend pod(s) needs to match that of the policy.
The CiliumLocalRedirectPolicy is configured as a ``CustomResourceDefinition``.
    18  
    19  .. admonition:: Video
    20    :class: attention
    21  
    22    Aside from this document, you can watch a video explanation of Cilium's Local Redirect Policy on `eCHO episode 39: Local Redirect Policy <https://www.youtube.com/watch?v=BT_gdlhjiQc&t=176s>`__.
    23  
Two types of Local Redirect Policies are supported. Use the `ServiceMatcher`
type to redirect traffic destined for a Kubernetes service; the service must be
of type ``clusterIP``. Use the `AddressMatcher` type to redirect traffic
matching an IP address and port/protocol tuple that doesn't belong to any
Kubernetes service.
    29  
The policies can be gated by the Kubernetes Role-based access control (RBAC)
framework. See the official `RBAC documentation
<https://kubernetes.io/docs/reference/access-authn-authz/rbac/>`_.

When policies are applied, matched pod traffic is redirected. If desired, RBAC
configurations can be used so that application developers cannot escape
the redirection.
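
For example, a minimal sketch of one way to gate the policies with RBAC: grant application
developers read-only access to CiliumLocalRedirectPolicy objects while reserving write access
for cluster administrators. The role, binding, and group names below are illustrative.

.. code-block:: yaml

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: lrp-read-only                  # illustrative name
    rules:
      - apiGroups: ["cilium.io"]
        resources: ["ciliumlocalredirectpolicies"]
        verbs: ["get", "list", "watch"]    # no create/update/delete
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: lrp-read-only-developers       # illustrative name
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: lrp-read-only
    subjects:
      - kind: Group
        name: app-developers               # illustrative developer group
        apiGroup: rbac.authorization.k8s.io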
    37  
    38  Prerequisites
    39  =============
    40  
    41  .. note::
    42  
   The Local Redirect Policy feature requires a v4.19.x or more recent Linux kernel.
    44  
    45  .. include:: ../../installation/k8s-install-download-release.rst
    46  
The Cilium Local Redirect Policy feature relies on :ref:`kubeproxy-free`;
follow that guide to create a new deployment. Enable the feature by setting
the ``localRedirectPolicy`` Helm value to ``true``.
    50  
    51  .. parsed-literal::
    52  
    53     helm upgrade cilium |CHART_RELEASE| \\
    54       --namespace kube-system \\
    55       --reuse-values \\
    56       --set localRedirectPolicy=true
    57  
    58  
Roll out the operator and agent pods to make the changes effective:
    60  
    61  .. code-block:: shell-session
    62  
    63      $ kubectl rollout restart deploy cilium-operator -n kube-system
    64      $ kubectl rollout restart ds cilium -n kube-system
    65  
    66  
Verify that the Cilium agent and operator pods are running.
    68  
    69  .. code-block:: shell-session
    70  
    71      $ kubectl -n kube-system get pods -l k8s-app=cilium
    72      NAME           READY   STATUS    RESTARTS   AGE
    73      cilium-5ngzd   1/1     Running   0          3m19s
    74  
    75      $ kubectl -n kube-system get pods -l name=cilium-operator
    76      NAME                               READY   STATUS    RESTARTS   AGE
    77      cilium-operator-544b4d5cdd-qxvpv   1/1     Running   0          3m19s
    78  
    79  Validate that the Cilium Local Redirect Policy CRD has been registered.
    80  
    81  .. code-block:: shell-session
    82  
    $ kubectl get crds
    NAME                                     CREATED AT
    [...]
    ciliumlocalredirectpolicies.cilium.io    2020-08-24T05:31:47Z
    87  
    88  Create backend and client pods
    89  ==============================
    90  
Deploy a backend pod to which traffic will be redirected based on the
configuration specified in a CiliumLocalRedirectPolicy. The pod's metadata
labels, container port, and protocol match the labels, port, and protocol
fields specified in the CiliumLocalRedirectPolicy custom resources that will
be created in the next step.
    96  
    97  .. literalinclude:: ../../../examples/kubernetes-local-redirect/backend-pod.yaml
    98  
    99  .. parsed-literal::
   100  
   101      $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/backend-pod.yaml
   102  
   103  Verify that the pod is running.
   104  
   105  .. code-block:: shell-session
   106  
   107      $ kubectl get pods | grep lrp-pod
   108      lrp-pod                      1/1     Running   0          46s
   109  
Deploy a client pod that generates traffic which will be redirected based on
the configuration specified in the CiliumLocalRedirectPolicy.
   112  
   113  .. parsed-literal::
   114  
   115     $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-dns/dns-sw-app.yaml
   116     $ kubectl wait pod/mediabot --for=condition=Ready
   117     $ kubectl get pods
   118     NAME                             READY   STATUS    RESTARTS   AGE
   119     pod/mediabot                     1/1     Running   0          14s
   120  
   121  Create Cilium Local Redirect Policy Custom Resources
   122  =====================================================
The CiliumLocalRedirectPolicy supports two types of configuration for matching
the traffic that needs to be redirected.
   125  
   126  .. _AddressMatcher:
   127  
   128  AddressMatcher
   129  ---------------
   130  
This type of configuration is specified using an IP address and a Layer 4 port/protocol.
When multiple ports are specified for the frontend in ``toPorts``, the ports need
to be named. The port names are used to map frontend ports to backend ports.
   134  
   135  Verify that the ports specified in ``toPorts`` under ``redirectBackend``
   136  exist in the backend pod spec.
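
For illustration, a sketch of a policy with multiple named frontend ports might look
like the following. The policy name, extra ports, and port names are hypothetical and
assume the field layout shown in the example manifest referenced below; the single-port
example used in the rest of this guide follows.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLocalRedirectPolicy
    metadata:
      name: "lrp-addr-named-ports"        # hypothetical policy name
    spec:
      redirectFrontend:
        addressMatcher:
          ip: "169.254.169.254"
          toPorts:
            - port: "8080"
              name: "http"                # names map frontend ports to backend ports
              protocol: TCP
            - port: "8081"
              name: "metrics"
              protocol: TCP
      redirectBackend:
        localEndpointSelector:
          matchLabels:
            app: proxy
        toPorts:
          - port: "80"
            name: "http"
            protocol: TCP
          - port: "9090"
            name: "metrics"
            protocol: TCP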
   137  
The example shows how to redirect traffic matching IP address ``169.254.169.254``
and Layer 4 port ``8080`` with protocol ``TCP`` to a backend pod deployed with
labels ``app=proxy`` and Layer 4 port ``80`` with protocol ``TCP``. The
``localEndpointSelector`` set to ``app=proxy`` in the policy selects
the backend pods to which traffic is redirected.
   143  
   144  Create a custom resource of type CiliumLocalRedirectPolicy with ``addressMatcher``
   145  configuration.
   146  
   147  .. literalinclude:: ../../../examples/kubernetes-local-redirect/lrp-addrmatcher.yaml
   148  
   149  .. parsed-literal::
   150  
   151      $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/lrp-addrmatcher.yaml
   152  
   153  Verify that the custom resource is created.
   154  
   155  .. code-block:: shell-session
   156  
   157      $ kubectl get ciliumlocalredirectpolicies | grep lrp-addr
   158      NAME           AGE
   159      lrp-addr       20h
   160  
Verify that Cilium's eBPF kube-proxy replacement created a ``LocalRedirect``
service entry whose backend IP address is that of the ``lrp-pod`` selected
by the policy. Make sure that ``cilium-dbg service list`` is run in the
Cilium pod running on the same node as ``lrp-pod``.
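
If you are unsure which Cilium agent pod runs on that node, one way to find it
(using the pod and label names from this guide) is:

.. code-block:: shell-session

    $ NODE=$(kubectl get pod lrp-pod -o jsonpath='{.spec.nodeName}')
    $ kubectl -n kube-system get pods -l k8s-app=cilium --field-selector spec.nodeName=$NODE
    NAME           READY   STATUS    RESTARTS   AGE
    cilium-5ngzd   1/1     Running   0          3m19s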
   165  
   166  .. code-block:: shell-session
   167  
   168      $ kubectl describe pod lrp-pod  | grep 'IP:'
   169      IP:           10.16.70.187
   170  
   171  .. code-block:: shell-session
   172  
   173      $ kubectl exec -it -n kube-system cilium-5ngzd -- cilium-dbg service list
   174      ID   Frontend               Service Type       Backend
   175      [...]
    4    169.254.169.254:8080   LocalRedirect      1 => 10.16.70.187:80
   177  
   178  Invoke a curl command from the client pod to the IP address and port
   179  configuration specified in the ``lrp-addr`` custom resource above.
   180  
   181  .. code-block:: shell-session
   182  
   183      $ kubectl exec mediabot -- curl -I -s http://169.254.169.254:8080/index.html
   184      HTTP/1.1 200 OK
   185      Server: nginx/1.19.2
   186      Date: Fri, 28 Aug 2020 01:33:34 GMT
   187      Content-Type: text/html
   188      Content-Length: 612
   189      Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
   190      Connection: keep-alive
   191      ETag: "5f32b03b-264"
   192      Accept-Ranges: bytes
   193  
   194  Verify that the traffic was redirected to the ``lrp-pod`` that was deployed.
   195  ``tcpdump`` should be run on the same node that ``lrp-pod`` is running on.
   196  
   197  .. code-block:: shell-session
   198  
   199      $ sudo tcpdump -i any -n port 80
   200      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
   201      listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
   202      01:36:24.608566 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [S], seq 2119454273, win 28200, options [mss 1410,sackOK,TS val 2541637677 ecr 0,nop,wscale 7], length 0
   203      01:36:24.608600 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [S.], seq 1315636594, ack 2119454274, win 27960, options [mss 1410,sackOK,TS val 2962246962 ecr 2541637677,nop,wscale 7], length 0
   204      01:36:24.608638 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [.], ack 1, win 221, options [nop,nop,TS val 2541637677 ecr 2962246962], length 0
   205      01:36:24.608867 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [P.], seq 1:96, ack 1, win 221, options [nop,nop,TS val 2541637677 ecr 2962246962], length 95: HTTP: HEAD /index.html HTTP/1.1
   206      01:36:24.608876 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [.], ack 96, win 219, options [nop,nop,TS val 2962246962 ecr 2541637677], length 0
   207      01:36:24.609007 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [P.], seq 1:239, ack 96, win 219, options [nop,nop,TS val 2962246962 ecr 2541637677], length 238: HTTP: HTTP/1.1 200 OK
   208      01:36:24.609052 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [.], ack 239, win 229, options [nop,nop,TS val 2541637677 ecr 2962246962], length 0
   209  
   210  .. _ServiceMatcher:
   211  
   212  ServiceMatcher
   213  ---------------
   214  
This type of configuration is specified using the name and namespace of the Kubernetes
service for which traffic needs to be redirected. The service must be of type ``clusterIP``.
When ``toPorts`` under ``redirectFrontend`` are not specified, traffic for
all the service ports will be redirected. However, if traffic destined for only
a subset of ports needs to be redirected, these ports need to be specified in the spec.
Additionally, when multiple service ports are specified in the spec, they must be
named. The port names are used to map frontend ports to backend ports.
Verify that the ports specified in ``toPorts`` under ``redirectBackend``
exist in the backend pod spec. The ``localEndpointSelector`` set to ``app=proxy``
in the policy selects the backend pods to which traffic is redirected.
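
For illustration, a sketch of a ``serviceMatcher`` frontend with multiple named service
ports might look like the following. The policy name and port names are hypothetical and
assume the field layout shown in the example manifest referenced below.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLocalRedirectPolicy
    metadata:
      name: "lrp-svc-named-ports"         # hypothetical policy name
    spec:
      redirectFrontend:
        serviceMatcher:
          serviceName: my-service         # clusterIP service to redirect
          namespace: default              # namespace of the service
          toPorts:
            - port: "80"
              name: "http"                # names map frontend ports to backend ports
              protocol: TCP
            - port: "8080"
              name: "metrics"
              protocol: TCP
      redirectBackend:
        localEndpointSelector:
          matchLabels:
            app: proxy
        toPorts:
          - port: "80"
            name: "http"
            protocol: TCP
          - port: "9090"
            name: "metrics"
            protocol: TCP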
   225  
   226  When a policy of this type is applied, the existing service entry
   227  created by Cilium's eBPF kube-proxy replacement will be replaced with a new
   228  service entry of type ``LocalRedirect``. This entry may only have node-local backend pods.
   229  
The example shows how to redirect traffic destined for ``my-service`` to a
backend pod deployed with labels ``app=proxy`` and Layer 4 port ``80``
with protocol ``TCP``. The ``localEndpointSelector`` set to ``app=proxy`` in the
policy selects the backend pods to which traffic is redirected.
   234  
   235  Deploy the Kubernetes service for which traffic needs to be redirected.
   236  
   237  .. literalinclude:: ../../../examples/kubernetes-local-redirect/k8s-svc.yaml
   238  
   239  .. parsed-literal::
   240  
   241      $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/k8s-svc.yaml
   242  
   243  Verify that the service is created.
   244  
   245  .. code-block:: shell-session
   246  
   247      $ kubectl get service | grep 'my-service'
   248      my-service   ClusterIP   172.20.0.51   <none>        80/TCP     2d7h
   249  
   250  Verify that Cilium's eBPF kube-proxy replacement created a ``ClusterIP``
   251  service entry.
   252  
   253  .. code-block:: shell-session
   254  
   255      $ kubectl exec -it -n kube-system ds/cilium -- cilium-dbg service list
   256      ID   Frontend               Service Type   Backend
   257      [...]
   258      4    172.20.0.51:80         ClusterIP
   259  
   260  Create a custom resource of type CiliumLocalRedirectPolicy with ``serviceMatcher``
   261  configuration.
   262  
   263  .. literalinclude:: ../../../examples/kubernetes-local-redirect/lrp-svcmatcher.yaml
   264  
   265  .. parsed-literal::
   266  
   267      $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/lrp-svcmatcher.yaml
   268  
   269  Verify that the custom resource is created.
   270  
   271  .. code-block:: shell-session
   272  
   273      $ kubectl get ciliumlocalredirectpolicies | grep svc
    NAME      AGE
    lrp-svc   20h
   276  
Verify that Cilium's eBPF kube-proxy replacement updated the
service entry to type ``LocalRedirect`` with the node-local backend
selected by the policy. Make sure to run ``cilium-dbg service list`` in the Cilium pod
running on the same node as ``lrp-pod``.
   281  
   282  .. code-block:: shell-session
   283  
   284      $ kubectl exec -it -n kube-system cilium-5ngzd -- cilium-dbg service list
   285      ID   Frontend               Service Type       Backend
   286      [...]
   287      4    172.20.0.51:80         LocalRedirect      1 => 10.16.70.187:80
   288  
   289  Invoke a curl command from the client pod to the Cluster IP address and port of
   290  ``my-service`` specified in the ``lrp-svc`` custom resource above.
   291  
   292  .. code-block:: shell-session
   293  
   294      $ kubectl exec mediabot -- curl -I -s http://172.20.0.51/index.html
   295      HTTP/1.1 200 OK
   296      Server: nginx/1.19.2
   297      Date: Fri, 28 Aug 2020 01:50:50 GMT
   298      Content-Type: text/html
   299      Content-Length: 612
   300      Last-Modified: Tue, 11 Aug 2020 14:50:35 GMT
   301      Connection: keep-alive
   302      ETag: "5f32b03b-264"
   303      Accept-Ranges: bytes
   304  
   305  Verify that the traffic was redirected to the ``lrp-pod`` that was deployed.
   306  ``tcpdump`` should be run on the same node that ``lrp-pod`` is running on.
   307  
   308  .. code-block:: shell-session
   309  
   310      $ kubectl describe pod lrp-pod  | grep 'IP:'
   311      IP:           10.16.70.187
   312  
   313  .. code-block:: shell-session
   314  
   315      $ sudo tcpdump -i any -n port 80
   316      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
   317      listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
    01:36:24.608566 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [S], seq 2119454273, win 28200, options [mss 1410,sackOK,TS val 2541637677 ecr 0,nop,wscale 7], length 0
   319      01:36:24.608600 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [S.], seq 1315636594, ack 2119454274, win 27960, options [mss 1410,sackOK,TS val 2962246962 ecr 2541637677,nop,wscale 7], length 0
   320      01:36:24.608638 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [.], ack 1, win 221, options [nop,nop,TS val 2541637677 ecr 2962246962], length 0
   321      01:36:24.608867 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [P.], seq 1:96, ack 1, win 221, options [nop,nop,TS val 2541637677 ecr 2962246962], length 95: HTTP: HEAD /index.html HTTP/1.1
   322      01:36:24.608876 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [.], ack 96, win 219, options [nop,nop,TS val 2962246962 ecr 2541637677], length 0
   323      01:36:24.609007 IP 10.16.70.187.80 > 10.16.215.55.60876: Flags [P.], seq 1:239, ack 96, win 219, options [nop,nop,TS val 2962246962 ecr 2541637677], length 238: HTTP: HTTP/1.1 200 OK
   324      01:36:24.609052 IP 10.16.215.55.60876 > 10.16.70.187.80: Flags [.], ack 239, win 229, options [nop,nop,TS val 2541637677 ecr 2962246962], length 0
   325  
   326  Limitations
   327  ===========
When you create a Local Redirect Policy, only new connections established
after the policy is enforced are redirected. Existing active connections to
remote pods that match the configuration specified in the policy might not get
redirected. To ensure all such connections are redirected locally, restart the
client pods after configuring the CiliumLocalRedirectPolicy.
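
For example, if the matching client pods are managed by a Deployment, you can restart
them with a rollout (the deployment name below is hypothetical):

.. code-block:: shell-session

    $ kubectl rollout restart deployment my-client-app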
   334  
Local Redirect Policy updates are currently not supported. If there are any
changes to be made, delete the existing policy and create a new one.
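
For example, to change the ``lrp-addr`` policy created earlier, delete it and apply an
updated manifest (the file name below is illustrative):

.. code-block:: shell-session

    $ kubectl delete ciliumlocalredirectpolicy lrp-addr
    $ kubectl apply -f lrp-addrmatcher-updated.yaml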
   337  
   338  Use Cases
   339  =========
   340  Local Redirect Policy allows Cilium to support the following use cases:
   341  
   342  Node-local DNS cache
   343  --------------------
By default, `DNS node-cache <https://github.com/kubernetes/dns>`_ listens on a static IP to intercept
traffic from application pods to the cluster's DNS service VIP. This interception is bypassed
when Cilium is handling service resolution at or before the veth interface of the
application pod. To enable the DNS node-cache in a Cilium cluster, the following example
steers traffic to a local DNS node-cache which runs as a normal pod.
   349  
   350  * Deploy DNS node-cache in pod namespace.
   351  
   352    .. tabs::
   353  
   354      .. group-tab:: Quick Deployment
   355  
   356          Deploy DNS node-cache.
   357  
   358          .. note::
   359  
           * The example yaml is populated with default values for ``__PILLAR__LOCAL__DNS__`` and
             ``__PILLAR__DNS__DOMAIN__``.
           * If you have a different deployment, please follow the official `NodeLocal DNSCache Configuration
             <https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/#configuration>`_
             to fill in the required template variables ``__PILLAR__LOCAL__DNS__``, ``__PILLAR__DNS__DOMAIN__``,
             and ``__PILLAR__DNS__SERVER__`` before applying the yaml.
   366  
   367          .. parsed-literal::
   368  
   369              $ wget \ |SCM_WEB|\/examples/kubernetes-local-redirect/node-local-dns.yaml
   370  
   371              $ kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}) && sed -i "s/__PILLAR__DNS__SERVER__/$kubedns/g;" node-local-dns.yaml
   372  
   373              $ kubectl apply -f node-local-dns.yaml
   374  
   375      .. group-tab:: Manual Configuration
   376  
   377           * Follow the official `NodeLocal DNSCache Configuration
   378             <https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/#configuration>`_
   379             to fill in the required template variables ``__PILLAR__LOCAL__DNS__``, ``__PILLAR__DNS__DOMAIN__``,
   380             and ``__PILLAR__DNS__SERVER__`` before applying the yaml.
   381  
         * Make sure to use a Node-local DNS image with a release version >= 1.15.16.
           This ensures that there is a knob to disable dummy network interface creation/deletion in
           Node-local DNS when it is deployed in a non-host network namespace.
   385  
   386           * Modify Node-local DNS cache's deployment yaml to pass these additional arguments to node-cache:
   387             ``-skipteardown=true``, ``-setupinterface=false``, and ``-setupiptables=false``.
   388  
         * Modify Node-local DNS cache's deployment yaml to run it in a non-host network namespace by setting
           ``hostNetwork: false`` for the daemonset.
   391  
   392           * In the Corefile, bind to ``0.0.0.0`` instead of the static IP.
   393  
         * In the Corefile, let CoreDNS serve its health check on its own IP instead of the static IP by
           removing the host IP string after the ``health`` plugin.
   396  
   397           * Modify Node-local DNS cache's deployment yaml to point readiness probe to its own IP by
   398             removing the ``host`` field under ``readinessProbe``.
   399  
   400  * Deploy Local Redirect Policy (LRP) to steer DNS traffic to the node local dns cache.
   401  
   402    .. parsed-literal::
   403  
   404        $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
   405  
   406    .. note::
   407  
      * The LRP above uses ``kube-dns`` for the cluster DNS service. If your cluster DNS service is different,
        you will need to modify this example LRP to specify it.
      * The namespace specified in the LRP above is set to the same namespace as the cluster's DNS service.
      * The LRP above uses the same port names ``dns`` and ``dns-tcp`` as the example quick deployment yaml; you will
        need to modify those to match your deployment if they are different.
   413  
After all ``node-local-dns`` pods are in ready status, DNS traffic will now go to the local node-cache first.
You can verify this by curling ``<node-local-dns pod IP>:9253/metrics`` and checking the DNS cache's metric
``coredns_dns_request_count_total``; the metric should increment as new DNS requests issued from
application pods are redirected to the ``node-local-dns`` pod.
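
For example, one way to fetch the metric from an application pod (``mediabot`` was created
earlier in this guide; the label selector and metrics port are the ones used above):

.. code-block:: shell-session

    $ DNS_POD_IP=$(kubectl -n kube-system get pods -l k8s-app=node-local-dns -o jsonpath='{.items[0].status.podIP}')
    $ kubectl exec mediabot -- curl -s http://$DNS_POD_IP:9253/metrics | grep coredns_dns_request_count_total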
   418  
   419  In the absence of a node-local DNS cache, DNS queries from application pods
   420  will get directed to cluster DNS pods backed by the ``kube-dns`` service.
   421  
   422  * Troubleshooting
   423  
   424      If DNS requests are failing to resolve, check the following:
   425  
   426          - Ensure that the node-local DNS cache pods are running and ready.
   427  
   428           .. code-block:: shell-session
   429  
   430              $ kubectl --namespace kube-system get pods --selector=k8s-app=node-local-dns
   431              NAME                   READY   STATUS    RESTARTS   AGE
   432              node-local-dns-72r7m   1/1     Running   0          2d2h
   433              node-local-dns-gc5bx   1/1     Running   0          2d2h
   434  
   435          - Check if the local redirect policy has been applied correctly on all the cilium agent pods.
   436  
   437           .. code-block:: shell-session
   438  
   439              $ kubectl exec -it cilium-mhnhz -n kube-system -- cilium-dbg lrp list
   440              LRP namespace   LRP name       FrontendType                Matching Service
   441              kube-system     nodelocaldns   clusterIP + all svc ports   kube-system/kube-dns
   442                              |              10.96.0.10:53/UDP -> 10.244.1.49:53(kube-system/node-local-dns-72r7m),
   443                              |              10.96.0.10:53/TCP -> 10.244.1.49:53(kube-system/node-local-dns-72r7m),
   444  
   445          - Check if the corresponding local redirect service entry has been created. If the service entry is missing,
   446            there might have been a race condition in applying the policy and the node-local DNS DaemonSet pod resources.
   447            As a workaround, you can restart the node-local DNS DaemonSet pods. File a `GitHub issue <https://github.com/cilium/cilium/issues/new/choose>`_
   448            with a :ref:`sysdump <sysdump>` if the issue persists.
   449  
   450           .. code-block:: shell-session
   451  
   452              $ kubectl exec -it cilium-mhnhz -n kube-system -- cilium-dbg service list | grep LocalRedirect
   453              11   10.96.0.10:53      LocalRedirect   1 => 10.244.1.49:53 (active)
   454  
   455  kiam redirect on EKS
   456  --------------------
The `kiam <https://github.com/uswitch/kiam>`_ agent runs on each node in an EKS
cluster, and intercepts requests going to the AWS metadata server to fetch
security credentials for pods.
   460  
- In order to only redirect traffic from pods to the kiam agent, and pass
  traffic from the kiam agent to the AWS metadata server without any redirection,
  we need the socket lookup functionality in the datapath. This functionality
  requires a v5.1.16, v5.2.0, or more recent Linux kernel. Make sure the kernel
  version installed on the EKS cluster nodes satisfies these requirements.
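
  For example, one way to check the kernel version of each node (output is illustrative):

  .. code-block:: shell-session

      $ kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion
      NAME                                          KERNEL
      ip-192-168-34-12.us-west-2.compute.internal   5.10.186-179.751.amzn2.x86_64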
   466  
   467  - Deploy `kiam <https://github.com/uswitch/kiam>`_ using helm charts.
   468  
   469    .. code-block:: shell-session
   470  
   471        $ helm repo add uswitch https://uswitch.github.io/kiam-helm-charts/charts/
   472        $ helm repo update
   473        $ helm install --set agent.host.iptables=false --set agent.whitelist-route-regexp=meta-data kiam uswitch/kiam
   474  
   475    - The above command may provide instructions to prepare kiam in the cluster.
   476      Follow the instructions before continuing.
   477  
  - kiam must run in ``hostNetwork`` mode and without the ``--iptables`` argument.
    The install instructions above ensure this by default.
   480  
   481  - Deploy the Local Redirect Policy to redirect pod traffic to the deployed kiam agent.
   482  
   483    .. parsed-literal::
   484  
   485        $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes-local-redirect/kiam-lrp.yaml
   486  
   487  .. note::
   488  
    - The ``addressMatcher`` IP address in the Local Redirect Policy is set to
      the IP address of the AWS metadata server and the ``toPorts`` port
      to the default HTTP server port. The ``toPorts`` field under the
      ``redirectBackend`` configuration in the policy is set to the port that
      the kiam agent listens on. The port is passed as the ``--port`` argument in
      the kiam-agent DaemonSet.
    - The Local Redirect Policy namespace is set to the namespace
      in which the kiam-agent DaemonSet is deployed.
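
A sketch of what such a policy might look like follows. The policy name, namespace, and
backend labels are illustrative; the backend port ``8181`` matches the kiam agent port shown
in the tcpdump example below.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLocalRedirectPolicy
    metadata:
      name: "kiam-lrp"                   # illustrative name
      namespace: default                 # namespace of the kiam-agent DaemonSet
    spec:
      redirectFrontend:
        addressMatcher:
          ip: "169.254.169.254"          # AWS metadata server
          toPorts:
            - port: "80"
              protocol: TCP
      redirectBackend:
        localEndpointSelector:
          matchLabels:
            app: kiam                    # illustrative label selecting the kiam agent pods
        toPorts:
          - port: "8181"                 # port the kiam agent listens on (--port)
            protocol: TCP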
   497  
- Once all the kiam agent pods are in ``Running`` state, the metadata requests
  from application pods will get redirected to the node-local kiam agent pods.
  You can verify this by running a curl command to the AWS metadata server from
  one of the application pods, and a tcpdump command on the same EKS cluster node as the
  pod. Following is an example output, where ``192.168.98.118`` is the IP
  address of an application pod, and ``192.168.60.99`` is the IP address of the
  kiam agent running on the same node as the application pod.
   505  
   506    .. code-block:: shell-session
   507  
   508        $ kubectl exec app-pod -- curl -s -w "\n" -X GET http://169.254.169.254/latest/meta-data/
   509        ami-id
   510        ami-launch-index
   511        ami-manifest-path
   512        block-device-mapping/
   513        events/
   514        hostname
   515        iam/
   516        identity-credentials/
   517        (...)
   518  
   519    .. code-block:: shell-session
   520  
   521        $ sudo tcpdump -i any -enn "(port 8181) and (host 192.168.60.99 and 192.168.98.118)"
   522        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
   523        listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
   524        05:16:05.229597  In de:e4:e9:94:b5:9f ethertype IPv4 (0x0800), length 76: 192.168.98.118.47934 > 192.168.60.99.8181: Flags [S], seq 669026791, win 62727, options [mss 8961,sackOK,TS val 2539579886 ecr 0,nop,wscale 7], length 0
   525        05:16:05.229657 Out 56:8f:62:18:6f:85 ethertype IPv4 (0x0800), length 76: 192.168.60.99.8181 > 192.168.98.118.47934: Flags [S.], seq 2355192249, ack 669026792, win 62643, options [mss 8961,sackOK,TS val 4263010641 ecr 2539579886,nop,wscale 7], length 0
   526  
   527  Advanced configurations
   528  =======================
When a local redirect policy is applied, the Cilium BPF datapath redirects traffic going to the policy frontend
(identified by the IP/port/protocol tuple) to a node-local backend pod selected by the policy.
However, for traffic originating from a node-local backend pod and destined to the policy frontend, users may want to
skip redirecting the traffic back to the node-local backend pod, and instead forward the traffic to the original frontend.
This behavior can be enabled by setting the ``skipRedirectFromBackend`` flag to ``true`` in the local redirect policy spec.
The configuration is only supported with socket-based load-balancing, and requires the ``SO_NETNS_COOKIE`` feature,
available in Linux kernel versions >= 5.8.
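
A minimal sketch of a policy with the flag enabled, reusing the ``addressMatcher`` example
from this guide and assuming the field layout shown in the earlier examples:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLocalRedirectPolicy
    metadata:
      name: "lrp-addr"
    spec:
      skipRedirectFromBackend: true      # traffic from the backend pod to the frontend is not redirected back
      redirectFrontend:
        addressMatcher:
          ip: "169.254.169.254"
          toPorts:
            - port: "8080"
              protocol: TCP
      redirectBackend:
        localEndpointSelector:
          matchLabels:
            app: proxy
        toPorts:
          - port: "80"
            protocol: TCP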
   536  
   537  .. note::
   538  
    In order to enable this configuration starting with Cilium version 1.16.0, previously applied local redirect
    policies and the backend pods selected by those policies need to be deleted and re-created.