
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

******************
How to secure gRPC
******************

This document serves as an introduction to using Cilium to enforce gRPC-aware
security policies.  It is a detailed walk-through of getting a single-node
Cilium environment running on your machine. It is designed to take 15-30
minutes.

.. include:: gsg_requirements.rst

It is important for this demo that ``kube-dns`` is working correctly. To check
the status of ``kube-dns``, run the following command:
::

    $ kubectl get deployment kube-dns -n kube-system
    NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
    kube-dns   1         1         1            1           13h

At least one pod should be reported as available.

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly, we can
deploy our demo gRPC application.  Since our first demo of Cilium + HTTP-aware security
policies was Star Wars-themed, we decided to do the same for gRPC. While the
`HTTP-aware Cilium Star Wars demo <https://www.cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you>`_
showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the
Rebel Alliance, this gRPC demo shows how the lack of gRPC-aware security policies allowed Leia,
Chewbacca, Lando, C-3PO, and R2-D2 to escape from Cloud City, which had been overtaken by
Empire forces.

`gRPC <https://grpc.io/>`_ is a high-performance RPC framework built on top of the `protobuf <https://developers.google.com/protocol-buffers/>`_
serialization/deserialization library popularized by Google.  There are gRPC bindings
for many programming languages, and the efficiency of protobuf parsing and the
advantages of HTTP/2 as a transport make it a popular RPC framework for
those building new microservices from scratch.

For those unfamiliar with the details of the movie, Leia and the other rebels are
fleeing stormtroopers and trying to reach the spaceport platform where the Millennium Falcon
is parked, so they can fly out of Cloud City. However, the door to the platform is closed,
and the access code has been changed. Fortunately, R2-D2 is able to access the Cloud City
computer system via a public terminal and disable this security, opening the door and
letting the Rebels reach the Millennium Falcon just in time to escape.

.. image:: images/cilium_grpc_gsg_r2d2_terminal.png

In our example, Cloud City's internal computer system is built as a set of gRPC-based
microservices (who knew that gRPC was actually invented a long time ago, in a galaxy
far, far away?).

With gRPC, each service is defined using a language-independent protocol buffer definition.
Here is the definition for the system used to manage doors within Cloud City:

.. code-block:: proto

  package cloudcity;

  // The door manager service definition.
  service DoorManager {

    // Get human readable name of door.
    rpc GetName(DoorRequest) returns (DoorNameReply) {}

    // Find the location of this door.
    rpc GetLocation (DoorRequest) returns (DoorLocationReply) {}

    // Find out whether door is open or closed
    rpc GetStatus(DoorRequest) returns (DoorStatusReply) {}

    // Request maintenance on the door
    rpc RequestMaintenance(DoorMaintRequest) returns (DoorActionReply) {}

    // Set Access Code to Open / Lock the door
    rpc SetAccessCode(DoorAccessCodeRequest) returns (DoorActionReply) {}

  }

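For readers new to gRPC, the following sketch shows roughly how a Python client would
invoke this service through stubs generated by ``protoc``. The module, class, and field
names (``cloudcity_pb2``, ``cloudcity_pb2_grpc``, ``DoorManagerStub``, ``door_id``) are
illustrative assumptions based on the definition above, not code taken from the demo app.

.. code-block:: python

   import grpc

   # Modules assumed to be generated from the cloudcity .proto file by protoc.
   import cloudcity_pb2
   import cloudcity_pb2_grpc

   # Open a plaintext channel to the door manager service (address is illustrative).
   channel = grpc.insecure_channel('cc-door-server:50051')
   stub = cloudcity_pb2_grpc.DoorManagerStub(channel)

   # Call two of the methods defined in the DoorManager service.
   name_reply = stub.GetName(cloudcity_pb2.DoorRequest(door_id=1))
   location_reply = stub.GetLocation(cloudcity_pb2.DoorRequest(door_id=1))
   print(name_reply)
   print(location_reply)
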
To keep the setup small, we will launch just two pods to represent this setup:

- **cc-door-mgr**: A single pod running the gRPC door manager service, with label ``app=cc-door-mgr``.
- **terminal-87**: One of the public network access terminals scattered across Cloud City. R2-D2 plugs into terminal-87 as the rebels are desperately trying to escape. This pod, with label ``app=public-terminal``, uses gRPC client code to communicate with the door manager service.

.. image:: images/cilium_grpc_gsg_topology.png

The file ``cc-door-app.yaml`` contains a Kubernetes Deployment for the door manager
service, a Kubernetes Pod representing ``terminal-87``, and a Kubernetes Service for
the door manager service. To deploy this example app, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-app.yaml
    deployment "cc-door-mgr" created
    service "cc-door-server" created
    pod "terminal-87" created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get pods,svc`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running``, at which
point the setup is ready.

::

    $ kubectl get pods,svc
    NAME                                 READY     STATUS    RESTARTS   AGE
    po/cc-door-mgr-3590146619-cv4jn      1/1       Running   0          1m
    po/terminal-87                       1/1       Running   0          1m

    NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
    svc/cc-door-server   10.0.0.72    <none>        50051/TCP   1m
    svc/kubernetes       10.0.0.1     <none>        443/TCP     6m

Test Access Between gRPC Client and Server
==========================================

First, let's confirm that the public terminal can properly act as a client to the
door service.  We can test this by running a Python gRPC client for the door service
that is included in the *terminal-87* container.

We'll invoke the ``cc_door_client.py`` client with the name of the gRPC method to call
and any parameters (in this case, the door ID):

::

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetName 1
    Door name is: Spaceport Door #1

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656

Exposing this information to public terminals seems quite useful, as it helps travelers new
to Cloud City identify and locate different doors. But recall that the door service also
exposes several other methods, including ``SetAccessCode``. If access to the door manager
service is protected only using traditional IP and port-based firewalling, the TCP port of
the service (50051 in this example) must be left wide open to allow legitimate calls like
``GetName`` and ``GetLocation``, which leaves more sensitive calls like ``SetAccessCode``
exposed as well. It is this mismatch between the coarse granularity of traditional firewalls
and the fine-grained nature of gRPC calls that R2-D2 exploited to override the security
and help the rebels escape.

To see this, run:
::

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999
    Successfully set AccessCode to 999


Securing Access to a gRPC Service with Cilium
=============================================

Once the legitimate owners of Cloud City recover the city from the Empire, how can they
use Cilium to plug this key security hole and block requests to ``SetAccessCode`` and ``GetStatus``
while still allowing ``GetName``, ``GetLocation``, and ``RequestMaintenance``?

.. image:: images/cilium_grpc_gsg_policy.png

Since gRPC is built on top of HTTP, this can be achieved easily by understanding how a
gRPC call is mapped to an HTTP URL, and then applying a Cilium HTTP-aware filter to
allow public terminals to invoke only a subset of the gRPC methods available
on the door service.

Each gRPC method is mapped to an HTTP POST call to a URL of the form
``/cloudcity.DoorManager/<method-name>``.
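
To make this mapping concrete, the sketch below calls the door service by its raw method
path using the low-level ``grpc`` Python API instead of a generated stub. The service
address and the message module and field names (``cloudcity_pb2``, ``DoorRequest``,
``DoorNameReply``, ``door_id``) are illustrative assumptions, not code from the demo app.

.. code-block:: python

   import grpc

   # Assumed protoc-generated message module for the cloudcity package.
   import cloudcity_pb2

   channel = grpc.insecure_channel('cc-door-server:50051')

   # The method name below is exactly the HTTP/2 ":path" that appears on the wire
   # and that Cilium's HTTP-aware rules match against.
   get_name = channel.unary_unary(
       '/cloudcity.DoorManager/GetName',
       request_serializer=cloudcity_pb2.DoorRequest.SerializeToString,
       response_deserializer=cloudcity_pb2.DoorNameReply.FromString,
   )

   print(get_name(cloudcity_pb2.DoorRequest(door_id=1)))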

As a result, the following *CiliumNetworkPolicy* rule limits pods with the label
``app=public-terminal`` to invoking only ``GetName``, ``GetLocation``, and ``RequestMaintenance``
on the door service, identified by the label ``app=cc-door-mgr``:

.. literalinclude:: ../../examples/kubernetes-grpc/cc-door-ingress-security.yaml
   :language: yaml
   :emphasize-lines: 9,13,21

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests,
meaning that requests that do not match any rule (e.g., ``SetAccessCode``) are denied as invalid.

The above rule applies to inbound (i.e., "ingress") connections to ``cc-door-mgr`` pods (as
indicated by ``app: cc-door-mgr`` in the ``endpointSelector`` section). The rule applies to
connections from pods with label ``app: public-terminal``, as indicated by the
``fromEndpoints`` section. The rule explicitly matches gRPC connections destined to TCP
port 50051 and allows only the specific permitted URLs.

Apply this gRPC-aware network security policy using ``kubectl`` in the main window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-grpc/cc-door-ingress-security.yaml

After this security policy is in place, access to innocuous calls like ``GetLocation``
still works as intended:

::

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py GetLocation 1
    Door location is lat = 10.222200393676758 long = 68.87879943847656


However, if we again try to invoke ``SetAccessCode``, it is denied:

::

    $ kubectl exec terminal-87 -- python3 /cloudcity/cc_door_client.py SetAccessCode 1 999

    Traceback (most recent call last):
      File "/cloudcity/cc_door_client.py", line 71, in <module>
        run()
      File "/cloudcity/cc_door_client.py", line 53, in run
        door_id=int(arg2), access_code=int(arg3)))
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 492, in __call__
        return _end_unary_response_blocking(state, call, False, deadline)
      File "/usr/local/lib/python3.4/dist-packages/grpc/_channel.py", line 440, in _end_unary_response_blocking
        raise _Rendezvous(state, None, None, deadline)
    grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.CANCELLED, Received http2 header with status: 403)>


This is now blocked, thanks to the Cilium network policy. Notice that, unlike a
traditional firewall, which would simply drop packets in a way indistinguishable
from a network failure, Cilium operates at the API layer and can explicitly reply
with a custom HTTP 403 access denied response, indicating that the request was
intentionally denied for security reasons.
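
If a client should handle such a denial gracefully instead of crashing with a traceback,
it can catch the error and inspect the gRPC status. This is a generic sketch of that
pattern (not code from the demo app), assuming ``stub`` and ``request`` were created as in
the earlier client examples:

.. code-block:: python

   import grpc

   def set_access_code(stub, request):
       """Attempt the call, reporting a policy denial instead of raising."""
       try:
           return stub.SetAccessCode(request)
       except grpc.RpcError as err:
           # Cilium's HTTP 403 surfaces to the client as a failed RPC; the status
           # code and details describe why the call was terminated.
           print('RPC rejected: {} ({})'.format(err.code(), err.details()))
           return None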

Thank goodness that the Empire's IT staff hadn't had time to deploy Cilium on
Cloud City's internal network prior to the escape attempt, or things might have
turned out quite differently for Leia and the other Rebels!

Clean-Up
========

You have now installed Cilium, deployed a demo app, and tested
L7 gRPC-aware network security policies. To clean up, run:

::

   $ minikube delete

After this, you can re-run the tutorial from Step 1.