.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

*****************************
Securing a Cassandra Database
*****************************

This document serves as an introduction to using Cilium to enforce Cassandra-aware
security policies.  It is a detailed walk-through of getting a single-node
Cilium environment running on your machine.  It is designed to take 15-30
minutes.

**NOTE:** Cassandra-aware policy support is still in beta phase.  It is not yet ready for
production use.  Additionally, the Cassandra-specific policy language is highly likely to
change in a future Cilium version.

.. include:: gsg_requirements.rst

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly we can
deploy our demo Cassandra application.  Since our first
`HTTP-aware Cilium Star Wars demo <https://cilium.io/blog/2017/5/4/demo-may-the-force-be-with-you/>`_
showed how the Galactic Empire used HTTP-aware security policies to protect the Death Star from the
Rebel Alliance, this Cassandra demo is Star Wars-themed as well.

`Apache Cassandra <http://cassandra.apache.org>`_ is a popular NoSQL database focused on
delivering high-performance transactions (especially on writes) without sacrificing availability or scale.
Cassandra operates as a cluster of servers, and Cassandra clients query these servers via
the `native Cassandra protocol <https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v4.spec>`_.
Cilium understands the Cassandra protocol, and thus is able to provide deep visibility and control over
which clients are able to access particular tables inside a Cassandra cluster, and which actions
(e.g., "select", "insert", "update", "delete") can be performed on tables.

With Cassandra, each table belongs to a "keyspace", allowing multiple groups to use a single cluster without conflicting.
Cassandra queries specify the full table name qualified by the keyspace using the syntax "<keyspace>.<table>".

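For example, in ``cqlsh`` a query against the ``daily_records`` table in the
``attendance`` keyspace (both used later in this guide) references the table by
its fully-qualified name (an illustrative snippet):

.. code-block:: shell-session

    cqlsh> SELECT * FROM attendance.daily_records;
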
In our simple example, the Empire uses a Cassandra cluster to store two different types of information
(the two table schemas are sketched below):

- **Employee Attendance Records** : Used to store daily attendance data (attendance.daily_records).
- **Deathstar Scrum Reports** : Daily scrum reports from the teams working on the Deathstar (deathstar.scrum_notes).

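Based on the rows shown later in this walk-through, the two tables have roughly the
following shapes (a sketch for orientation only; the authoritative definitions live in
the populate script used below, and the primary keys here are assumptions):

.. code-block:: shell-session

    cqlsh> CREATE TABLE attendance.daily_records (
       ...     loc_id uuid, creation timeuuid, empire_member_id uuid, present boolean,
       ...     PRIMARY KEY (loc_id, creation));
    cqlsh> CREATE TABLE deathstar.scrum_notes (
       ...     empire_member_id uuid, content text, creation timeuuid,
       ...     PRIMARY KEY (empire_member_id, creation));
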
To keep the setup small, we will just launch a small number of pods to represent this setup:

- **cass-server** : A single pod running the Cassandra service, representing a Cassandra cluster
  (label app=cass-server).
- **empire-hq** : A pod representing the Empire's Headquarters, which is the only pod that should
  be able to read all attendance data, or read/write the Deathstar scrum notes (label app=empire-hq).
- **empire-outpost** : A random outpost in the empire.  It should be able to insert employee attendance
  records, but not read records for other empire facilities.  It also should not have any access to the
  deathstar keyspace (label app=empire-outpost).

All pods other than *cass-server* are Cassandra clients, which need access to the *cass-server*
container on TCP port 9042 in order to send Cassandra protocol messages.

.. image:: images/cilium_cass_gsg_topology.png

The file ``cass-sw-app.yaml`` contains a Kubernetes Deployment for each of the pods described
above, as well as a Kubernetes Service *cassandra-svc* for the Cassandra cluster.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml
    deployment.apps/cass-server created
    service/cassandra-svc created
    deployment.apps/empire-hq created
    deployment.apps/empire-outpost created

Kubernetes will deploy the pods and service in the background.
Running ``kubectl get svc,pods`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running``, at which
point the setup is ready.

.. code-block:: shell-session

    $ kubectl get svc,pods
    NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
    service/cassandra-svc   ClusterIP   None         <none>        9042/TCP   1m
    service/kubernetes      ClusterIP   10.96.0.1    <none>        443/TCP    15h

    NAME                                  READY     STATUS    RESTARTS   AGE
    pod/cass-server-5674d5b946-x8v4j      1/1       Running   0          1m
    pod/empire-hq-c494c664d-xmvdl         1/1       Running   0          1m
    pod/empire-outpost-68bf76858d-flczn   1/1       Running   0          1m

Step 3: Test Basic Cassandra Access
===================================

First, we'll create the keyspaces and tables mentioned above, and populate them with some initial data:

.. parsed-literal::

   $ curl -s \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-populate-tables.sh | bash

Next, create two environment variables that refer to the *empire-hq* and *empire-outpost* pods:

.. code-block:: shell-session

   $ HQ_POD=$(kubectl get pods -l app=empire-hq -o jsonpath='{.items[0].metadata.name}')
   $ OUTPOST_POD=$(kubectl get pods -l app=empire-outpost -o jsonpath='{.items[0].metadata.name}')

Now we will run the ``cqlsh`` Cassandra client in the *empire-outpost* pod, telling it to access
the Cassandra cluster identified by the ``cassandra-svc`` DNS name:

.. code-block:: shell-session

    $ kubectl exec -it $OUTPOST_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

Next, using the cqlsh prompt, we'll show that the outpost can add records to the "daily_records" table
in the "attendance" keyspace:

.. code-block:: shell-session

    cqlsh> INSERT INTO attendance.daily_records (creation, loc_id, present, empire_member_id) VALUES (now(), 074AD3B9-A47D-4EBC-83D3-CAD75B1911CE, true, 6AD3139F-EBFC-4E0C-9F79-8F997BA01D90);

We have confirmed that outposts are able to report daily attendance records as intended.  We're off to a good start!

The Danger of a Compromised Cassandra Client
============================================

But what if a rebel spy gains access to any of the remote outposts that act as a Cassandra client?
Since every client has access to the Cassandra API on port 9042, it can do significant damage.
For starters, the outpost container can not only add entries to the attendance.daily_records table,
but it can also read all entries in that table.

To see this, we can run the following command:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;

   loc_id                               | creation                             | empire_member_id                     | present
  --------------------------------------+--------------------------------------+--------------------------------------+---------
   a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True
   5b9a7990-657e-442d-a3f7-94484f06696e | c8493120-bf57-11e8-98e6-f1a9f45fc4d8 | e74a0300-94f3-4b3d-aee4-fea85eca5af7 |    True
   53ed94d0-ddac-4b14-8c2f-ba6f83a8218c | c641a150-bf57-11e8-98e6-f1a9f45fc4d8 | 104ddbb6-f2f7-4cd0-8683-cc18cccc1326 |    True
   074ad3b9-a47d-4ebc-83d3-cad75b1911ce | 9674ed40-bf59-11e8-98e6-f1a9f45fc4d8 | 6ad3139f-ebfc-4e0c-9f79-8f997ba01d90 |    True
   fe72cc39-dffb-45dc-8e5f-86c674a58951 | c5e79a70-bf57-11e8-98e6-f1a9f45fc4d8 | 6782689c-0488-4ecb-b582-a2ccd282405e |    True
   461f4176-eb4c-4bcc-a08a-46787ca01af3 | c6fefde0-bf57-11e8-98e6-f1a9f45fc4d8 | 01009199-3d6b-4041-9c43-b1ca9aef021c |    True
   64dbf608-6947-4a23-98e9-63339c413136 | c8096900-bf57-11e8-98e6-f1a9f45fc4d8 | 6ffe024e-beff-4370-a1b5-dcf6330ec82b |    True
   13cefcac-5652-4c69-a3c2-1484671f2467 | c53f4c80-bf57-11e8-98e6-f1a9f45fc4d8 | 55218adc-2f3d-4f84-a693-87a2c238bb26 |    True
   eabf5185-376b-4d4a-a5b5-99f912d98279 | c593fc30-bf57-11e8-98e6-f1a9f45fc4d8 | 5e22159b-f3a9-4f8a-9944-97375df570e9 |    True
   3c0ae2d1-c836-4aa4-8fe2-5db6cc1f92fc | c7af1400-bf57-11e8-98e6-f1a9f45fc4d8 | 0ccb3df7-78d0-4434-8a7f-4bfa8d714275 |    True
   31a292e0-2e28-4a7d-8c84-8d4cf0c57483 | c4e0d8d0-bf57-11e8-98e6-f1a9f45fc4d8 | 8fe7625c-f482-4eb6-b33e-271440777403 |    True

  (11 rows)

Uh oh!  The rebels now have strategic information about Empire troop strengths at each location in the galaxy.

Even more worrying from a security perspective, the outpost container can also access information in any keyspace,
including the deathstar keyspace.  For example, run:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;

   empire_member_id                     | content                                                                                                        | creation
  --------------------------------------+----------------------------------------------------------------------------------------------------------------+--------------------------------------
   34e564c2-781b-477e-acd0-b357d67f94f2 | Designed protective shield for deathstar.  Could be based on nearby moon.  Feature punted to v2.  Not blocked. | c3c8b210-bf57-11e8-98e6-f1a9f45fc4d8
   dfa974ea-88cd-4e9b-85e3-542b9d00e2df |   I think the exhaust port could be vulnerable to a direct hit.  Hope no one finds out about it.  Not blocked. | c37f4d00-bf57-11e8-98e6-f1a9f45fc4d8
   ee12306a-7b44-46a4-ad68-42e86f0f111e |        Trying to figure out if we should paint it medium grey, light grey, or medium-light grey.  Not blocked. | c32daa90-bf57-11e8-98e6-f1a9f45fc4d8

  (3 rows)

We see that any outpost can actually access the deathstar scrum notes, which mention a pretty serious issue with the exhaust port.

Securing Access to Cassandra with Cilium
========================================

Obviously, it would be much more secure to limit each pod's access to the Cassandra server to the
least privilege it requires (i.e., only what is needed for the app to operate correctly and nothing more).

We can do that with the following Cilium security policy.  As with Cilium HTTP policies, we can write
policies that identify pods by labels, and then limit the traffic in/out of those pods.  In
this case, we'll create a policy that identifies the tables each client should be able to access,
allows only the corresponding actions on those tables, and denies the rest.

As an example, a policy could limit containers with label *app=empire-outpost* to only be able to
insert entries into the table "attendance.daily_records", but would block any attempt by a compromised outpost
to read all attendance information or access other keyspaces.

.. image:: images/cilium_cass_gsg_attack.png

Here is the *CiliumNetworkPolicy* rule that limits access of pods with label *app=empire-outpost* to
only insert records into "attendance.daily_records":

.. literalinclude:: ../../examples/kubernetes-cassandra/cass-sw-security-policy.yaml

A *CiliumNetworkPolicy* contains a list of rules that define allowed requests, meaning that requests
that do not match any rule are denied as invalid.

The rule explicitly matches Cassandra connections destined to TCP port 9042 on *cass-server* pods, and allows
query actions like select/insert/update/delete only on a specified set of tables.
It applies to inbound (i.e., "ingress") connections to *cass-server* pods (as indicated by ``app: cass-server``
in the ``endpointSelector`` section), and applies different restrictions depending on whether the
client pod has the label ``app: empire-outpost`` or ``app: empire-hq``, as indicated by the ``fromEndpoints`` sections.

The policy limits the *empire-outpost* pod to performing "select" queries on the "system" and "system_schema"
keyspaces (required by cqlsh on startup) and "insert" queries to the "attendance.daily_records" table.

The full policy adds another rule that allows all queries from the *empire-hq* pod.

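For orientation, the *empire-outpost* portion of that policy takes roughly the following
shape (an abbreviated sketch of the file included above; consult the full file for the
authoritative version):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "secure-empire-cassandra"
    spec:
      endpointSelector:
        matchLabels:
          app: cass-server
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: empire-outpost
        toPorts:
        - ports:
          - port: "9042"
            protocol: TCP
          rules:
            l7proto: cassandra
            l7:
            # "select" on the system keyspaces is needed by cqlsh at startup
            - query_action: "select"
              query_table: "system\\..*"
            - query_action: "select"
              query_table: "system_schema\\..*"
            # the only action an outpost should ever need
            - query_action: "insert"
              query_table: "attendance.daily_records"
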
Apply this Cassandra-aware network security policy using ``kubectl`` in a new window:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-security-policy.yaml

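You can optionally confirm that the policy was accepted by listing CiliumNetworkPolicies
(``cnp`` is the short name for the resource; the policy name matches the one deleted in
the cleanup step at the end of this guide):

.. code-block:: shell-session

    $ kubectl get cnp secure-empire-cassandra
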
If we then again try to perform the attacks from the *empire-outpost* pod, we'll see that they are denied:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;
  Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is because the policy only permits pods with the label app: empire-outpost to insert into attendance.daily_records; it does
not permit select on that table, or any action on other tables (with the exception of the system.* and system_schema.*
keyspaces).  It's worth noting that we don't simply drop the message (which
could easily be confused with a network error), but rather we respond with the Cassandra Unauthorized error message
(similar to how HTTP would return a 403 error code).

Likewise, if the outpost pod ever tries to access a table in another keyspace, like deathstar, this request will also be
denied:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;
  Unauthorized: Error from server: code=2100 [Unauthorized] message="Request Unauthorized"

This is blocked as well, thanks to the Cilium network policy.

Use another window to confirm that the *empire-hq* pod still has full access to the Cassandra cluster:

.. code-block:: shell-session

    $ kubectl exec -it $HQ_POD -- cqlsh cassandra-svc
    Connected to Test Cluster at cassandra-svc:9042.
    [cqlsh 5.0.1 | Cassandra 3.11.3 | CQL spec 3.4.4 | Native protocol v4]
    Use HELP for help.
    cqlsh>

The power of Cilium's identity-based security allows *empire-hq* to still have full access
to both tables:

.. code-block:: shell-session

  cqlsh> SELECT * FROM attendance.daily_records;

   loc_id                               | creation                             | empire_member_id                     | present
  --------------------------------------+--------------------------------------+--------------------------------------+---------
   a855e745-69d8-4159-b8b6-e2bafed8387a | c692ce90-bf57-11e8-98e6-f1a9f45fc4d8 | cee6d956-dbeb-4b09-ad21-1dd93290fa6c |    True

   <snip>

  (12 rows)

Similarly, *empire-hq* can still read the Deathstar scrum notes:

.. code-block:: shell-session

  cqlsh> SELECT * FROM deathstar.scrum_notes;

   <snip>

  (3 rows)

Cassandra-Aware Visibility (Bonus)
==================================

As a bonus, you can re-run the above queries with policy enforced and view how Cilium provides Cassandra-aware visibility, including
whether requests are forwarded or denied.  First, use ``kubectl exec`` to access the Cilium pod.

.. code-block:: shell-session

  $ CILIUM_POD=$(kubectl get pods -n kube-system -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
  $ kubectl exec -it -n kube-system $CILIUM_POD -- /bin/bash
  root@minikube:~#

Next, start the Cilium monitor, and limit the output to only "l7" type messages using the ``-t`` flag:

::

  root@minikube:~# cilium-dbg monitor -t l7
  Listening for events on 2 CPUs with 64x4096 of shared memory
  Press Ctrl-C to quit

In the other windows, re-run the above queries, and you will see that Cilium provides full visibility at the level of
each Cassandra request, indicating:

- The Kubernetes label-based identity of both the sending and receiving pod.
- The details of the Cassandra request, including the 'query_action' (e.g., 'select', 'insert')
  and 'query_table' (e.g., 'system.local', 'attendance.daily_records').
- The 'verdict' indicating whether the request was allowed by policy ('Forwarded' or 'Denied').

Example output is below.  All requests are from *empire-outpost* to *cass-server*.  The first two requests are
allowed: a 'select' on 'system.local' and an 'insert' into 'attendance.daily_records'.
The last two requests are denied: a 'select' on 'attendance.daily_records' and a 'select' on 'deathstar.scrum_notes':

::

  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_table:system.local query_action:select
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Forwarded query_action:insert query_table:attendance.daily_records
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_action:select query_table:attendance.daily_records
  <- Request cassandra from 0 ([k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:app=empire-outpost]) to 64503 ([k8s:app=cass-server k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 12443->16168, verdict Denied query_table:deathstar.scrum_notes query_action:select

Clean Up
========

You have now installed Cilium, deployed a demo app, and tested
L7 Cassandra-aware network security policies.  To clean up, run:

.. parsed-literal::

   $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes-cassandra/cass-sw-app.yaml
   $ kubectl delete cnp secure-empire-cassandra

After this, you can re-run the tutorial from Step 1.