.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

**************************************
Getting Started Securing Elasticsearch
**************************************

This document serves as an introduction to using Cilium to enforce Elasticsearch-aware
security policies. It is a detailed walk-through of getting a single-node
Cilium environment running on your machine. It is designed to take 15-30
minutes.

.. include:: gsg_requirements.rst

Deploy the Demo Application
===========================

Following the Cilium tradition, we will use a Star Wars-inspired example. The Empire has a large-scale Elasticsearch cluster which is used for storing a variety of data, including:

* ``index: troop_logs``: Stormtrooper performance logs collected from every outpost, which are used to identify and eliminate weak performers!
* ``index: spaceship_diagnostics``: Spaceship diagnostics data collected from every spaceship, which is used for R&D and improvement of the spaceships.

Every outpost has an Elasticsearch client service to upload the Stormtrooper logs, and every spaceship has a service to upload diagnostics. Similarly, the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data. Before we look into the security concerns, let's first create this application scenario in minikube.

Deploy the app using the command below, which will create:

* An ``elasticsearch`` service with the selector label ``component:elasticsearch`` and a pod running Elasticsearch.
* Three Elasticsearch clients, one each for ``empire-hq``, ``outpost`` and ``spaceship``.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-app.yaml
    serviceaccount "elasticsearch" created
    service "elasticsearch" created
    replicationcontroller "es" created
    role "elasticsearch" created
    rolebinding "elasticsearch" created
    pod "outpost" created
    pod "empire-hq" created
    pod "spaceship" created

::

    $ kubectl get svc,pods
    NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
    svc/elasticsearch   NodePort    10.111.238.254   <none>        9200:30130/TCP,9300:31721/TCP     2d
    svc/etcd-cilium     NodePort    10.98.67.60      <none>        32379:31079/TCP,32380:31080/TCP   9d
    svc/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP                           9d

    NAME              READY     STATUS    RESTARTS   AGE
    po/empire-hq      1/1       Running   0          2d
    po/es-g9qk2       1/1       Running   0          2d
    po/etcd-cilium-0  1/1       Running   0          9d
    po/outpost        1/1       Running   0          2d
    po/spaceship      1/1       Running   0          2d


Security Risks for Elasticsearch Access
=======================================

For Elasticsearch clusters, the **least privilege security** challenge is to give clients access only to particular indices, and to limit the operations each client is allowed to perform on each index. In this example, the ``outpost`` Elasticsearch clients only need access to upload troop logs, and the ``empire-hq`` client only needs search access to both indices. From a security perspective, the outposts are weak spots, susceptible to capture by the rebels. Once compromised, the clients can be used to search and manipulate critical data in Elasticsearch. We can simulate this attack, but first let's run the commands for the legitimate behavior of all the client services.
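Each demo client script boils down to a single Elasticsearch REST call against the in-cluster ``elasticsearch`` service. The sketch below shows roughly what ``upload_logs.py`` does; the actual script is not shown in this guide, so the function name and document fields here are illustrative only (the fields mirror the sample output further down):

```python
# Hypothetical sketch of the REST call behind the demo's upload_logs.py.
# The real script is not shown in this guide; names and fields here are
# illustrative only.
import json
import urllib.request

ES_URL = "http://elasticsearch:9200"  # the in-cluster service address

def build_upload_request(doc_id, log):
    # Index one troop log document: PUT /troop_logs/log/<id> with a JSON body.
    return urllib.request.Request(
        url="%s/troop_logs/log/%d" % (ES_URL, doc_id),
        data=json.dumps(log).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = build_upload_request(1, {"outpost": "Endor",
                               "title": "Endor Corps 1: Morning Drill"})
print(req.get_method(), req.full_url)
# PUT http://elasticsearch:9200/troop_logs/log/1
```

The method and path of this request ( ``PUT /troop_logs/log/<id>`` ) are exactly what the L7 policy introduced later keys on.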
``outpost`` client uploading troop logs:

::

    $ kubectl exec outpost -- python upload_logs.py
    Uploading Stormtroopers Performance Logs
    created : {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

``spaceship`` uploading diagnostics:

::

    $ kubectl exec spaceship -- python upload_diagnostics.py
    Uploading Spaceship Diagnostics
    created : {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

``empire-hq`` running search queries for logs and diagnostics:

::

    $ kubectl exec empire-hq -- python search.py
    Searching for Spaceship Diagnostics
    Got 1 Hits:
    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
    '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
    'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
    Searching for Stormtroopers Performance Logs
    Got 1 Hits:
    {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
    '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
    'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}


Now imagine an outpost captured by the rebels. In the commands below, the rebels first search all the indices and then manipulate the diagnostics data from a compromised outpost.
::

    $ kubectl exec outpost -- python search.py
    Searching for Spaceship Diagnostics
    Got 1 Hits:
    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
    '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
    'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
    Searching for Stormtroopers Performance Logs
    Got 1 Hits:
    {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
    '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
    'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}

The rebels then manipulate the spaceship diagnostics data so that the spaceship defects are not known to the ``empire-hq``! (Hint: the rebels have changed the ``stats`` for the tiefighter spaceship, a change that is hard to detect but has an adverse impact!)

::

    $ kubectl exec outpost -- python update.py
    Uploading Spaceship Diagnostics
    {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
    '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
    'stats': '[OK] [ENGINE OK @SPEED 5000 km/s]'}}


Securing Elasticsearch Using Cilium
===================================

.. image:: images/cilium_es_gsg_topology.png
    :scale: 40 %

Following the least privilege security principle, we want to allow the following legitimate actions and nothing more:

* ``outpost`` service only has upload access to ``index: troop_logs``
* ``spaceship`` service only has upload access to ``index: spaceship_diagnostics``
* ``empire-hq`` service only has search access to both indices

Fortunately, the Empire DevOps team is using Cilium for their Kubernetes cluster. Cilium provides L7 visibility and security policies to control Elasticsearch API access.
Cilium follows the **white-list, least privilege model** for security. That is to say, a *CiliumNetworkPolicy* contains a list of rules that define **allowed requests**, and any request that does not match the rules is denied.

In this example, the policy rules are defined for inbound traffic (i.e., "ingress") connections to the *elasticsearch* service. Note that the endpoints selected as backend pods for the service are defined by the *selector* labels. *Selector* labels use the same concept as Kubernetes to define a service. In this example, the label ``component: elasticsearch`` defines the pods that are part of the *elasticsearch* service in Kubernetes.

In the policy file below, you will see the following rules for controlling index access and the actions performed:

* For ``fromEndpoints`` with the label ``app:spaceship``, only ``HTTP`` ``PUT`` is allowed on paths matching the regex ``^/spaceship_diagnostics/stats/.*$``
* For ``fromEndpoints`` with the label ``app:outpost``, only ``HTTP`` ``PUT`` is allowed on paths matching the regex ``^/troop_logs/log/.*$``
* For ``fromEndpoints`` with the label ``app:empire``, only ``HTTP`` ``GET`` is allowed on paths matching the regexes ``^/spaceship_diagnostics/_search/??.*$`` and ``^/troop_logs/_search/??.*$``

.. literalinclude:: ../../examples/kubernetes-es/es-sw-policy.yaml

Apply this Elasticsearch-aware network security policy using ``kubectl``:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/kubernetes-es/es-sw-policy.yaml
    ciliumnetworkpolicy "secure-empire-elasticsearch" created

Let's test the security policies. First, search access is blocked for both the outpost and the spaceship, so from a compromised outpost the rebels will not be able to search and obtain knowledge about the troops and spaceship diagnostics. Second, the outpost clients don't have access to create or update ``index: spaceship_diagnostics``.
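Since each L7 rule is just an HTTP method plus a path regex, you can sanity-check the policy by replaying request paths against those regexes. The snippet below mimics the white-list matching in plain Python purely as an illustration; it is not how Cilium's L7 proxy actually enforces policy:

```python
# Illustrative replay of the policy's allow-list matching in plain Python.
# This is NOT Cilium's enforcement path (that happens in its L7 proxy);
# it only demonstrates the white-list semantics of the rules above.
import re

# (client label, allowed method, allowed path regex), per the policy rules
ALLOW = [
    ("app:outpost",   "PUT", r"^/troop_logs/log/.*$"),
    ("app:spaceship", "PUT", r"^/spaceship_diagnostics/stats/.*$"),
    ("app:empire",    "GET", r"^/spaceship_diagnostics/_search/??.*$"),
    ("app:empire",    "GET", r"^/troop_logs/_search/??.*$"),
]

def allowed(label, method, path):
    # White-list semantics: a request is allowed iff at least one rule matches;
    # everything else is denied (the 403s seen below).
    return any(
        l == label and m == method and re.match(rx, path)
        for l, m, rx in ALLOW
    )

# Legitimate upload from an outpost -> allowed
print(allowed("app:outpost", "PUT", "/troop_logs/log/1"))               # True
# Compromised outpost trying to search -> denied
print(allowed("app:outpost", "GET", "/spaceship_diagnostics/_search"))  # False
```

The denied case above is exactly what produces the ``403`` errors in the next set of commands.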
::

    $ kubectl exec outpost -- python search.py
    GET http://elasticsearch:9200/spaceship_diagnostics/_search [status:403 request:0.008s]
    ...
    ...
    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
    command terminated with exit code 1

::

    $ kubectl exec outpost -- python update.py
    PUT http://elasticsearch:9200/spaceship_diagnostics/stats/1 [status:403 request:0.006s]
    ...
    ...
    elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
    command terminated with exit code 1

We can re-run any of the commands below to show that the security policy still allows all legitimate requests (i.e., no 403 errors are returned).

::

    $ kubectl exec outpost -- python upload_logs.py
    ...
    $ kubectl exec spaceship -- python upload_diagnostics.py
    ...
    $ kubectl exec empire-hq -- python search.py
    ...


Clean Up
========

You have now installed Cilium, deployed a demo app, and deployed & tested Elasticsearch-aware network security policies. To clean up, run:

::

    $ minikube delete