.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_mutual_authentication_example:

*****************************
Mutual Authentication Example
*****************************

This example shows you how to enforce mutual authentication between two Pods.

Deploy a client (pod-worker) and a server (echo) using the following manifests:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/mutual-auth-example.yaml
    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/cnp-without-mutual-auth.yaml
    service/echo created
    deployment.apps/echo created
    pod/pod-worker created
    ciliumnetworkpolicy.cilium.io/no-mutual-auth-echo created

Verify that the Pods have been successfully deployed:

.. code-block:: shell-session

    $ kubectl get svc echo
    NAME   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
    echo   ClusterIP   10.96.16.90   <none>        8080/TCP   42m
    $ kubectl get pod pod-worker
    NAME         READY   STATUS    RESTARTS   AGE
    pod-worker   1/1     Running   0          40m

Verify that the network policy has been deployed successfully and filters the traffic as expected.

Run the following commands:

.. code-block:: shell-session

    $ kubectl exec -it pod-worker -- curl -s -o /dev/null -w "%{http_code}" http://echo:8080/headers
    200
    $ kubectl exec -it pod-worker -- curl http://echo:8080/headers-1
    Access denied

The first request should succeed (the *pod-worker* Pod can connect to the *echo* Service on the allowed HTTP path, and the HTTP status code is ``200``).
The second should be denied (the *pod-worker* Pod cannot reach the *echo* Service on any HTTP path other than ``/headers``).
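
This filtering comes from the ``no-mutual-auth-echo`` policy applied from ``cnp-without-mutual-auth.yaml``. The exact contents are in the linked file; a representative sketch of a policy enforcing this behavior might look like the following (the ``app`` labels and the precise HTTP rule are assumptions, not the file's verbatim contents):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: no-mutual-auth-echo
    spec:
      endpointSelector:
        matchLabels:
          app: echo            # selects the echo server Pods
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: pod-worker    # only the client Pod may connect
        toPorts:
        - ports:
          - port: "8080"
            protocol: TCP
          rules:
            http:              # L7 rule: only GET /headers is allowed
            - method: "GET"
              path: "/headers"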

Before we enable mutual authentication between ``pod-worker`` and ``echo``, let's verify that the SPIRE server is healthy.

Assuming you have followed the installation instructions and have a SPIRE server serving Cilium, adding mutual authentication simply requires
adding ``authentication.mode: "required"`` to the ingress/egress block in your network policies.


Verify SPIRE Health
===================

.. note::

    This example assumes a default SPIRE installation.

Let's first verify that the automatically deployed SPIRE server and agents are working as expected.

The SPIRE server is deployed as a StatefulSet and the SPIRE agents are deployed as a DaemonSet (you should therefore see one SPIRE agent per node).

.. code-block:: shell-session

    $ kubectl get all -n cilium-spire
    NAME                    READY   STATUS    RESTARTS   AGE
    pod/spire-agent-27jd7   1/1     Running   0          144m
    pod/spire-agent-qkc8l   1/1     Running   0          144m
    pod/spire-server-0      2/2     Running   0          144m

    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/spire-server   ClusterIP   10.96.124.177   <none>        8081/TCP   144m

    NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/spire-agent   2         2         2       2            2           <none>          144m

    NAME                            READY   AGE
    statefulset.apps/spire-server   1/1     144m

Run a healthcheck on the SPIRE server:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server healthcheck
    Server is healthy.

Verify the list of attested agents:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server agent list
    Found 2 attested agents:

    SPIFFE ID         : spiffe://spiffe.cilium/spire/agent/k8s_psat/default/64745bf2-bd9d-4e42-bb2b-e095a6b65121
    Attestation type  : k8s_psat
    Expiration time   : 2023-07-04 18:39:50 +0000 UTC
    Serial number     : 110848236251310359782141595494072495768

    SPIFFE ID         : spiffe://spiffe.cilium/spire/agent/k8s_psat/default/d4a8a6da-d808-4993-b67a-bed250bbc53e
    Attestation type  : k8s_psat
    Expiration time   : 2023-07-04 18:39:55 +0000 UTC
    Serial number     : 7806033782886940845084156064765627978

Notice that the SPIRE Server uses Kubernetes Projected Service Account Tokens (PSATs) to verify
the Identity of a SPIRE Agent running on a Kubernetes Cluster.
Projected Service Account Tokens provide additional security guarantees over traditional Kubernetes
Service Account Tokens, and when supported by a Kubernetes cluster, PSAT is the recommended attestation strategy.
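
If you want to see the projected token that backs this attestation, you can inspect the SPIRE agent DaemonSet. A quick check, assuming the default installation mounts the token as a ``serviceAccountToken`` projected volume (names may differ in your chart version):

.. code-block:: shell-session

    $ kubectl -n cilium-spire get daemonset spire-agent -o yaml | grep -B 2 -A 4 serviceAccountToken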

Verify SPIFFE Identities
========================

Now that we know the SPIRE service is healthy, let's verify that the Cilium and SPIRE integration has been successful:

- The Cilium agent and operator should each have a registered delegate Identity with the SPIRE Server.
- The Cilium operator should have registered Identities with the SPIRE server on behalf of the workloads (Kubernetes Pods).

Verify that the Cilium agent and operator have Identities on the SPIRE server:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -parentID spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Found 2 entries
    Entry ID         : b6424c87-4323-4d64-98dd-cd5b51a1fcbb
    SPIFFE ID        : spiffe://spiffe.cilium/cilium-agent
    Parent ID        : spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : k8s:ns:kube-system
    Selector         : k8s:sa:cilium

    Entry ID         : 8aa91d65-16c4-48a0-bc1f-c9bf26e6a25f
    SPIFFE ID        : spiffe://spiffe.cilium/cilium-operator
    Parent ID        : spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : k8s:ns:kube-system
    Selector         : k8s:sa:cilium-operator

Next, verify that the *echo* Pod has an Identity registered with the SPIRE server.

To do this, you must first construct the Pod's SPIFFE ID. The SPIFFE ID for a workload follows
the ``spiffe://spiffe.cilium/identity/$IDENTITY_ID`` format, where ``$IDENTITY_ID`` is the workload's Cilium Identity.

Grab the Cilium Identity for the *echo* Pod:

.. code-block:: shell-session

    $ IDENTITY_ID=$(kubectl get cep -l app=echo -o=jsonpath='{.items[0].status.identity.id}')
    $ echo $IDENTITY_ID
    17947

Use the Cilium Identity for the *echo* Pod to construct its SPIFFE ID and check that it is registered on the SPIRE server:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -spiffeID spiffe://spiffe.cilium/identity/$IDENTITY_ID
    Found 1 entry
    Entry ID         : 9fc13971-fb19-4814-b9f0-737b30e336c6
    SPIFFE ID        : spiffe://spiffe.cilium/identity/17947
    Parent ID        : spiffe://spiffe.cilium/cilium-operator
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : cilium:mutual-auth

You can see that the *cilium-operator* is listed as the ``Parent ID``.
That is because the Cilium operator creates SPIRE entries for Cilium Identities as they are created.
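
To see this in action, you can create a Pod with a new label set and look up the entry for its freshly allocated Identity. A sketch, using a hypothetical scratch Pod named ``tmp-shell`` (allow a few seconds for the CiliumEndpoint and the SPIRE entry to appear):

.. code-block:: shell-session

    $ kubectl run tmp-shell --image=busybox --command -- sleep 3600
    $ NEW_ID=$(kubectl get cep tmp-shell -o=jsonpath='{.status.identity.id}')
    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -spiffeID spiffe://spiffe.cilium/identity/$NEW_ID
    $ kubectl delete pod tmp-shell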

To get all registered entries, execute the following command:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -selector cilium:mutual-auth

There are as many entries as there are Cilium Identities. Verify that these match by running the following command:

.. code-block:: shell-session

    $ kubectl get ciliumidentities

The Identity IDs listed under ``NAME`` should match the digits at the end of the SPIFFE IDs shown by the previous command.
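
One way to cross-check the two lists, assuming the default output formats shown above, is to extract the numeric IDs from both and compare them:

.. code-block:: shell-session

    $ kubectl get ciliumidentities -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | sort
    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -selector cilium:mutual-auth | awk -F/ '/SPIFFE ID/ {print $NF}' | sort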


Enforce Mutual Authentication
=============================

Rolling out mutual authentication with Cilium is as simple as adding the following block to the ingress or egress rules of an existing or new CiliumNetworkPolicy:

.. code-block:: yaml

    authentication:
        mode: "required"
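
For instance, the ingress rule from the earlier policy sketch would carry the ``authentication`` block alongside ``fromEndpoints`` (the labels are, again, assumptions):

.. code-block:: yaml

    ingress:
    - fromEndpoints:
      - matchLabels:
          app: pod-worker
      authentication:
        mode: "required"     # require a mutual TLS handshake before traffic flows
      toPorts:
      - ports:
        - port: "8080"
          protocol: TCP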

Update the existing policy so that only mutually authenticated workloads are allowed ingress access to *echo*:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/cnp-with-mutual-auth.yaml

Verify Mutual Authentication
============================

Retry your connectivity tests. They should give the same results as before:

.. code-block:: shell-session

    $ kubectl exec -it pod-worker -- curl -s -o /dev/null -w "%{http_code}" http://echo:8080/headers
    200
    $ kubectl exec -it pod-worker -- curl http://echo:8080/headers-1
    Access denied

Verify that mutual authentication has happened by examining the logs on the Cilium agent.

Start by enabling debug-level logging:

.. code-block:: shell-session

    $ cilium config set debug true

Examine the logs on the Cilium agent located on the same node as the *echo* Pod.
For brevity, you can search for some specific log messages:

.. code-block:: shell-session

    $ kubectl -n kube-system -c cilium-agent logs cilium-9pshw --timestamps=true | grep "Policy is requiring authentication\|Validating Server SNI\|Validated certificate\|Successfully authenticated"
    2023-07-04T17:58:28.795760597Z level=debug msg="Policy is requiring authentication" key="localIdentity=17947, remoteIdentity=39239, remoteNodeID=54264, authType=spire" subsys=auth
    2023-07-04T17:58:28.800509503Z level=debug msg="Validating Server SNI" SNI ID=39239 subsys=auth
    2023-07-04T17:58:28.800525190Z level=debug msg="Validated certificate" subsys=auth uri-san="[spiffe://spiffe.cilium/identity/39239]"
    2023-07-04T17:58:28.801441968Z level=debug msg="Successfully authenticated" key="localIdentity=17947, remoteIdentity=39239, remoteNodeID=54264, authType=spire" remote_node_ip=10.0.1.175 subsys=auth

When you apply a mutual authentication policy, the agent retrieves the identity of the source Pod,
connects to the node where the destination Pod is running, and performs a mutual TLS handshake (the
log above shows one side of that handshake).
As the handshake succeeded, the connection was authenticated and the traffic protected by the policy could proceed.

Packets between the two Pods can flow until the network policy is removed or the authentication entry expires.
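
To inspect those cached authentication entries, you can list the contents of the agent's auth map. A sketch, assuming the agent image ships the ``cilium-dbg`` debug CLI (as recent Cilium versions do):

.. code-block:: shell-session

    $ kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf auth list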