.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

*************
Securing gRPC
*************

This document serves as an introduction to using Cilium to enforce gRPC-aware
security policies. It is a detailed walk-through of getting a single-node
Cilium environment running on your machine. It is designed to take 15-30
minutes.

.. include:: gsg_requirements.rst

It is important for this demo that ``kube-dns`` is working correctly. To check
the status of ``kube-dns``, run the following command:

.. code-block:: shell-session

    $ kubectl get deployment kube-dns -n kube-system
    NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kube-dns   1         1         1            1           13h

At least one replica should be available.

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly, we can
deploy our demo gRPC application. Since our first demo of Cilium + HTTP-aware security
policies was Star Wars-themed, we decided to do the same for gRPC. While the
`HTTP-aware Cilium Star Wars demo <https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_
showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the
Rebel Alliance, this gRPC demo shows how the lack of gRPC-aware security policies allowed Leia,
Chewbacca, Lando, C-3PO, and R2-D2 to escape from Cloud City, which had been overtaken by
Empire forces.

`gRPC <https://grpc.io/>`_ is a high-performance RPC framework built on top of the `protobuf <https://developers.google.com/protocol-buffers/>`_
serialization/deserialization library popularized by Google. There are gRPC bindings
for many programming languages, and the efficiency of protobuf parsing as well as
the advantages of leveraging HTTP/2 as a transport make it a popular RPC framework for
those building new microservices from scratch.

For those unfamiliar with the details of the movie, Leia and the other rebels are
fleeing stormtroopers and trying to reach the spaceport platform where the Millennium Falcon
is parked, so they can fly out of Cloud City. However, the door to the platform is closed,
and the access code has been changed. Fortunately, R2-D2 is able to access the Cloud City
computer system via a public terminal and disable this security, opening the door and
letting the Rebels reach the Millennium Falcon just in time to escape.

.. image:: images/cilium_grpc_gsg_r2d2_terminal.png

In our example, Cloud City's internal computer system is built as a set of gRPC-based
microservices (who knew that gRPC was actually invented a long time ago, in a galaxy
far, far away?).

With gRPC, each service is defined using a language-independent protocol buffer definition.
Here is the definition for the system used to manage doors within Cloud City:

.. code-block:: proto

  package cloudcity;

  // The door manager service definition.
  service DoorManager {

    // Get human readable name of door.
    rpc GetName(DoorRequest) returns (DoorNameReply) {}

    // Find the location of this door.
    rpc GetLocation (DoorRequest) returns (DoorLocationReply) {}

    // Find out whether door is open or closed
    rpc GetStatus(DoorRequest) returns (DoorStatusReply) {}

    // Request maintenance on the door
    rpc RequestMaintenance(DoorMaintRequest) returns (DoorActionReply) {}

    // Set Access Code to Open / Lock the door
    rpc SetAccessCode(DoorAccessCodeRequest) returns (DoorActionReply) {}

  }

To keep the setup small, we will just launch two pods to represent this setup:

- **cc-door-mgr**: A single pod running the gRPC door manager service, with label ``app=cc-door-mgr``.
- **terminal-87**: One of the public network access terminals scattered across Cloud City, with label ``app=public-terminal``. R2-D2 plugs into terminal-87 as the rebels are desperately trying to escape. This terminal uses gRPC client code to communicate with the door manager service.

.. image:: images/cilium_grpc_gsg_topology.png

The file ``cc-door-app.yaml`` contains a Kubernetes Deployment for the door manager
service, a Kubernetes Pod representing ``terminal-87``, and a Kubernetes Service for
the door manager service. To deploy this example app, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-app.yaml
    deployment "cc-door-mgr" created
    service "cc-door-server" created
    pod "terminal-87" created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get pods,svc`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running``, at which
point the setup is ready.

.. code-block:: shell-session

    $ kubectl get pods,svc
    NAME                                 READY     STATUS    RESTARTS   AGE
    po/cc-door-mgr-3590146619-cv4jn      1/1       Running   0          1m
    po/terminal-87                       1/1       Running   0          1m

    NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
    svc/cc-door-server   10.0.0.72    <none>        50051/TCP   1m
    svc/kubernetes       10.0.0.1     <none>        443/TCP     6m

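As the output shows, the ``cc-door-server`` Service exposes the door manager's gRPC API on a
single TCP port, 50051. The full manifest is in ``cc-door-app.yaml``; a minimal sketch of such a
Service (field values assumed for illustration, not copied verbatim from the example file) would
look roughly like this:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: cc-door-server
    spec:
      selector:
        app: cc-door-mgr        # selects the door manager pod
      ports:
      - protocol: TCP
        port: 50051             # one TCP port carries all gRPC methods
        targetPort: 50051

Nothing about this Service is gRPC-specific; the interesting part comes later, when an
L7-aware policy is attached to traffic destined for this port.
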

Test Access Between gRPC Client and Server
==========================================

First, let's confirm that the public terminal can properly act as a client to the
door service. We can test this by running a Python gRPC client for the door service that
is included in the *terminal-87* container.

We'll invoke ``cc_door_client.py`` with the name of the gRPC method to call and any
parameters (in this case, the door ID):

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetName 1
    Door name is: Spaceport Door #1

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656

Exposing this information to public terminals seems quite useful, as it helps travelers new
to Cloud City identify and locate different doors. But recall that the door service also
exposes several other methods, including ``SetAccessCode``. If access to the door manager
service is protected only using traditional IP and port-based firewalling, the TCP port of
the service (50051 in this example) must be left wide open to allow legitimate calls like
``GetName`` and ``GetLocation``, which also leaves more sensitive calls like ``SetAccessCode``
exposed. It is this mismatch between the coarse granularity of traditional firewalls and
the fine-grained nature of gRPC calls that R2-D2 exploited to override the security
and help the rebels escape.
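
To make the mismatch concrete, here is a sketch of what a purely port-based (L3/L4) ingress rule
would look like in Cilium. This rule is illustrative only and is not part of the example files
(the name ``port-only-rule`` is hypothetical): it admits every gRPC method on port 50051,
including ``SetAccessCode``.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "port-only-rule"    # hypothetical name, for illustration only
    spec:
      endpointSelector:
        matchLabels:
          app: cc-door-mgr
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: public-terminal
        toPorts:
        - ports:
          - port: "50051"       # L4 only: every gRPC method on this port is allowed
            protocol: TCP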

To see this, run:

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999
    Successfully set AccessCode to 999


Securing Access to a gRPC Service with Cilium
=============================================

Once the legitimate owners of Cloud City recover the city from the Empire, how can they
use Cilium to plug this key security hole and block requests to ``SetAccessCode`` and ``GetStatus``
while still allowing ``GetName``, ``GetLocation``, and ``RequestMaintenance``?

.. image:: images/cilium_grpc_gsg_policy.png

Since gRPC builds on top of HTTP/2, this can be achieved easily by understanding how a
gRPC call is mapped to an HTTP URL, and then applying a Cilium HTTP-aware filter to
allow public terminals to invoke only a subset of the gRPC methods available
on the door service.

Each gRPC method is mapped to an HTTP POST call to a URL of the form
``/cloudcity.DoorManager/<method-name>``. For example, a call to ``GetName`` is sent as a
POST request to ``/cloudcity.DoorManager/GetName``.
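
In a Cilium HTTP-aware rule, each allowed method therefore appears as a ``POST`` on its mapped
path. A fragment like the following (shown standalone here for illustration; the complete rule
appears below) allows only the ``GetName`` call:

.. code-block:: yaml

    rules:
      http:
      - method: "POST"
        path: "/cloudcity.DoorManager/GetName"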

As a result, the following *CiliumNetworkPolicy* rule limits pods with label
``app=public-terminal`` to invoking only ``GetName``, ``GetLocation``, and ``RequestMaintenance``
on the door service, identified by label ``app=cc-door-mgr``:

.. literalinclude:: ../../examples/kubernetes-grpc/cc-door-ingress-security.yaml
   :language: yaml
   :emphasize-lines: 9,13,21

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests,
meaning that requests that do not match any rule (e.g., ``SetAccessCode``) are denied as invalid.

The above rule applies to inbound (i.e., "ingress") connections to ``cc-door-mgr`` pods (as
indicated by ``app: cc-door-mgr`` in the ``endpointSelector`` section). The rule applies to
connections from pods with label ``app: public-terminal``, as indicated by the
``fromEndpoints`` section. The rule explicitly matches gRPC connections destined to TCP
port 50051 and allows only the specifically permitted URL paths.
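
For reference, a minimal sketch of such a rule (structure inferred from the description above;
the actual example file may differ in details) looks roughly like this:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
    spec:
      endpointSelector:
        matchLabels:
          app: cc-door-mgr              # the rule protects the door manager pods
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: public-terminal        # traffic from public terminals only
        toPorts:
        - ports:
          - port: "50051"
            protocol: TCP
          rules:
            http:                       # L7 allow-list of gRPC methods
            - method: "POST"
              path: "/cloudcity.DoorManager/GetName"
            - method: "POST"
              path: "/cloudcity.DoorManager/GetLocation"
            - method: "POST"
              path: "/cloudcity.DoorManager/RequestMaintenance"

Any request that does not match one of these three paths, such as a POST to
``/cloudcity.DoorManager/SetAccessCode``, is rejected.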

Apply this gRPC-aware network security policy using ``kubectl`` in the main window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-ingress-security.yaml

After this security policy is in place, innocuous calls like ``GetLocation``
still work as intended:

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656


However, if we try to invoke ``SetAccessCode`` again, it is denied:

.. code-block:: shell-session

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999

    Traceback (most recent call last):
      File "/cloudcity/cc_door_client.py", line 71, in <module>
        run()
      File "/cloudcity/cc_door_client.py", line 53, in run
        door_id=int(arg2), access_code=int(arg3)))
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 492, in __call__
        return _end_unary_response_blocking(state, call, False, deadline)
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 440, in _end_unary_response_blocking
        raise _Rendezvous(state, None, None, deadline)
    grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.CANCELLED, Received http2 header with status: 403)>


This is now blocked, thanks to the Cilium network policy. Notice that, unlike
a traditional firewall, which would just drop packets in a way indistinguishable
from a network failure, Cilium operates at the API layer and can explicitly
reply with an HTTP 403 error, indicating that the request was intentionally
denied for security reasons.

Thank goodness that the Empire's IT staff hadn't had time to deploy Cilium on
Cloud City's internal network prior to the escape attempt, or things might have
turned out quite differently for Leia and the other Rebels!

Clean-Up
========

You have now installed Cilium, deployed a demo app, and tested
L7 gRPC-aware network security policies. To clean up, run:

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-app.yaml
   $ kubectl delete cnp rule1

After this, you can re-run the tutorial from the beginning.