github.com/cilium/cilium@v1.16.2/Documentation/installation/cni-chaining-aws-cni.rst

.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _chaining_aws_cni:

******************
AWS VPC CNI plugin
******************

This guide explains how to set up Cilium in combination with the AWS VPC CNI
plugin. In this hybrid mode, the AWS VPC CNI plugin is responsible for setting
up the virtual network devices as well as for IP address management (IPAM) via
ENIs. After the initial networking is set up for a given pod, the Cilium CNI
plugin is called to attach eBPF programs to the network devices set up by the
AWS VPC CNI plugin in order to enforce network policies, perform load-balancing
and provide encryption.

.. image:: aws-cilium-architecture.png

.. include:: cni-chaining-limitations.rst

.. admonition:: Video
   :class: attention

   If you require advanced features of Cilium, consider migrating fully to Cilium.
   To help you with the process, you can watch two Principal Engineers at Meltwater talk about `how they migrated
   Meltwater's production Kubernetes clusters from the AWS VPC CNI plugin to Cilium <https://www.youtube.com/watch?v=w6S6baRHHu8&list=PLDg_GiBbAx-kDXqDYimwytMLh2kAHyMPd&t=182s>`__.

.. important::

   Please ensure that you are running version `1.11.2 <https://github.com/aws/amazon-vpc-cni-k8s/releases/tag/v1.11.2>`_
   or newer of the AWS VPC CNI plugin to guarantee compatibility with Cilium.

   .. code-block:: shell-session

      $ kubectl -n kube-system get ds/aws-node -o json | jq -r '.spec.template.spec.containers[0].image'
      602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.11.2

   If you are running an older version, you can upgrade it with:

   .. code-block:: shell-session

      $ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.11/config/master/aws-k8s-cni.yaml

.. image:: aws-cni-architecture.png


Setting up a cluster on AWS
===========================

Follow the instructions in the :ref:`k8s_install_quick` guide to set up an EKS
cluster, or use any other method of your preference to set up a Kubernetes
cluster on AWS.

Ensure that the `aws-vpc-cni-k8s <https://github.com/aws/amazon-vpc-cni-k8s>`_
plugin is installed, which will already be the case if you have created an EKS
cluster. Also, ensure that the version of the plugin is up-to-date as per the above.

.. include:: k8s-install-download-release.rst

Deploy Cilium via Helm:

.. parsed-literal::

   helm install cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --set cni.chainingMode=aws-cni \\
     --set cni.exclusive=false \\
     --set enableIPv4Masquerade=false \\
     --set routingMode=native \\
     --set endpointRoutes.enabled=true

This will enable chaining with the AWS VPC CNI plugin. It will also disable
tunneling, as it's not required since ENI IP addresses can be directly routed
in the VPC. For the same reason, masquerading can be disabled as well.

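Under the hood, chaining relies on the standard CNI *network configuration list*
format: plugins run in the listed order, so ``cilium-cni`` is appended after the
AWS plugins and attaches its eBPF programs last. A purely illustrative sketch of
such a conflist (the exact file name, plugin list, and fields on your nodes
depend on the installed versions):

.. code-block:: json

   {
     "cniVersion": "0.4.0",
     "name": "aws-cni",
     "plugins": [
       { "type": "aws-cni" },
       { "type": "portmap", "capabilities": { "portMappings": true } },
       { "type": "cilium-cni" }
     ]
   }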
Restart existing pods
=====================

The new CNI chaining configuration *will not* apply to any pod that is already
running in the cluster. Existing pods will be reachable, and Cilium will
load-balance *to* them, but not *from* them. Policy enforcement will also not
be applied. For these reasons, you must restart these pods so that the chaining
configuration can be applied to them.

The following command can be used to check which pods need to be restarted:

.. code-block:: bash

   for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
        # Names of pods that already have a CiliumEndpoint (CEP).
        ceps=$(kubectl -n "${ns}" get cep \
            -o jsonpath='{.items[*].metadata.name}')
        # Names of non-host-network pods in the namespace.
        pods=$(kubectl -n "${ns}" get pod \
            -o custom-columns=NAME:.metadata.name,NETWORK:.spec.hostNetwork \
            | grep -E '\s(<none>|false)' | awk '{print $1}' | tr '\n' ' ')
        # Pods appearing in only one of the two lists, i.e. pods without a CEP.
        ncep=$(echo "${pods} ${ceps}" | tr ' ' '\n' | sort | uniq -u | paste -s -d ' ' -)
        for pod in ${ncep}; do
          echo "${ns}/${pod}"
        done
   done

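The ``sort | uniq -u`` step is what detects unmanaged pods: ``uniq -u`` keeps
only lines that occur exactly once, so pod names that also appear in the
CiliumEndpoint list are dropped, leaving the pods without a CEP. A
self-contained sketch with hypothetical pod names:

.. code-block:: shell-session

   $ pods="app-1 app-2 app-3"   # pods in the namespace (hypothetical)
   $ ceps="app-1 app-3"         # CiliumEndpoints that exist for them
   $ echo "${pods} ${ceps}" | tr ' ' '\n' | sort | uniq -u
   app-2

Only ``app-2`` lacks a CiliumEndpoint and therefore needs a restart. Note that
this computes a symmetric difference, so a stale CEP without a matching pod
would also be listed.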
.. include:: k8s-install-validate.rst

Advanced
========

Enabling security groups for pods (EKS)
---------------------------------------

Cilium can be used alongside the `security groups for pods <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_
feature of EKS in supported clusters when running in chaining mode. Follow the
instructions below to enable this feature:

.. important::

   The following guide requires `jq <https://stedolan.github.io/jq/>`_ and the
   `AWS CLI <https://aws.amazon.com/cli/>`_ to be installed and configured.

Make sure that the ``AmazonEKSVPCResourceController`` managed policy is attached
to the IAM role associated with the EKS cluster:

.. code-block:: shell-session

   export EKS_CLUSTER_NAME="my-eks-cluster" # Change accordingly
   export EKS_CLUSTER_ROLE_NAME=$(aws eks describe-cluster \
        --name "${EKS_CLUSTER_NAME}" \
        | jq -r '.cluster.roleArn' | awk -F/ '{print $NF}')
   aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
        --role-name "${EKS_CLUSTER_ROLE_NAME}"

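The ``awk -F/ '{print $NF}'`` step extracts the role name that
``attach-role-policy`` expects: it is simply the last ``/``-separated segment of
the role ARN returned by ``describe-cluster``. With a hypothetical ARN:

.. code-block:: shell-session

   $ arn="arn:aws:iam::123456789012:role/my-eks-cluster-role"
   $ echo "${arn}" | awk -F/ '{print $NF}'
   my-eks-cluster-role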
Then, as mentioned above, make sure that the version of the AWS VPC CNI
plugin running in the cluster is up-to-date:

.. code-block:: shell-session

   kubectl -n kube-system get ds/aws-node \
     -o jsonpath='{.spec.template.spec.containers[0].image}'
   602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.11.2

Next, patch the ``kube-system/aws-node`` DaemonSet in order to enable security
groups for pods:

.. code-block:: shell-session

   kubectl -n kube-system patch ds aws-node \
     -p '{"spec":{"template":{"spec":{"initContainers":[{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}],"containers":[{"env":[{"name":"ENABLE_POD_ENI","value":"true"}],"name":"aws-node"}]}}}}'
   kubectl -n kube-system rollout status ds aws-node

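For readability, the inline JSON patch above is equivalent to the following
strategic-merge patch expressed as YAML; entries are matched to the existing
containers by ``name``, so only the listed environment variables are
added or changed:

.. code-block:: yaml

   spec:
     template:
       spec:
         initContainers:
           - name: aws-vpc-cni-init
             env:
               - name: DISABLE_TCP_EARLY_DEMUX
                 value: "true"
         containers:
           - name: aws-node
             env:
               - name: ENABLE_POD_ENI
                 value: "true"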
After the rollout is complete, all nodes in the cluster should have the ``vpc.amazonaws.com/has-trunk-attached`` label set to ``true``:

.. code-block:: shell-session

   kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached
   NAME                                            STATUS   ROLES    AGE   VERSION              HAS-TRUNK-ATTACHED
   ip-192-168-111-169.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true
   ip-192-168-129-175.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true

From this point on, everything should be in place. For details on how to
associate security groups with pods, please refer to the `official documentation <https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html>`_.

.. include:: next-steps.rst