.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _gs_http:

********************************
HTTP/REST API call authorization
********************************

.. include:: gsg_requirements.rst

Deploy the Demo Application
===========================

Now that we have Cilium deployed and ``kube-dns`` operating correctly, we can deploy our demo application.

In our Star Wars-inspired example, there are three microservice applications: *deathstar*, *tiefighter*, and *xwing*. The *deathstar* runs an HTTP webservice on port 80, which is exposed as a `Kubernetes Service <https://kubernetes.io/docs/concepts/services-networking/service/>`_ to load-balance requests to *deathstar* across two pod replicas. The *deathstar* service provides landing services to the empire's spaceships so that they can request a landing port. The *tiefighter* pod represents a landing-request client service on a typical empire ship and *xwing* represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to *deathstar* landing services.

**Application Topology for Cilium and Kubernetes**

.. image:: images/cilium_http_gsg.png
   :scale: 30 %

The file ``http-sw-app.yaml`` contains a `Kubernetes Deployment <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_ for each of the three services.
Each deployment is identified using the Kubernetes labels (``org=empire, class=deathstar``), (``org=empire, class=tiefighter``),
and (``org=alliance, class=xwing``).
It also includes a *deathstar* service, which load-balances traffic to all pods with label (``org=empire, class=deathstar``).

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
    service/deathstar created
    deployment.extensions/deathstar created
    pod/tiefighter created
    pod/xwing created

Kubernetes will deploy the pods and service in the background. Running
``kubectl get pods,svc`` will inform you about the progress of the operation.
Each pod will go through several states until it reaches ``Running``, at which
point the pod is ready.

::

    $ kubectl get pods,svc
    NAME                             READY   STATUS    RESTARTS   AGE
    pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
    pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
    pod/tiefighter                   1/1     Running   0          107s
    pod/xwing                        1/1     Running   0          107s

    NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
    service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
    service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3m53s

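If you are scripting this step, you can block until every pod reports ready instead of polling by hand; a minimal sketch using standard ``kubectl`` (the 300-second timeout is an arbitrary choice):

::

    $ kubectl wait --for=condition=Ready pod --all --timeout=300s
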
Each pod will be represented in Cilium as an :ref:`endpoint`. We can invoke the
``cilium`` tool inside the Cilium pod to list them:

::

    $ kubectl -n kube-system get pods -l k8s-app=cilium
    NAME           READY   STATUS    RESTARTS   AGE
    cilium-5ngzd   1/1     Running   0          3m19s

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4            STATUS
               ENFORCEMENT        ENFORCEMENT
    108        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default                 10.15.233.139   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1011       Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default                 10.15.96.117    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    2407       Disabled           Disabled          22839      k8s:class=deathstar                                      10.15.129.95    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    2607       Disabled           Disabled          4          reserved:health                                          10.15.28.196    ready
    3339       Disabled           Disabled          22839      k8s:class=deathstar                                      10.15.72.39     ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3738       Disabled           Disabled          47764      k8s:class=xwing                                          10.15.116.85    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance
    3837       Disabled           Disabled          9164       k8s:class=tiefighter                                     10.15.22.126    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire

Both ingress and egress policy enforcement are still disabled on all of these pods because no network
policy has been imported yet that selects any of them.

Check Current Access
====================

From the perspective of the *deathstar* service, only the ships with label ``org=empire`` are allowed to connect and request landing. Since we have no rules enforced, both *xwing* and *tiefighter* will be able to request landing. To test this, use the commands below.

.. parsed-literal::

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed
    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

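If you prefer to check the HTTP status code rather than the response body, ``curl`` can print just the code; a small variation on the same test (not part of the demo script) that should print ``200`` while access is still open:

::

    $ kubectl exec xwing -- curl -s -o /dev/null -w '%{http_code}' -XPOST deathstar.default.svc.cluster.local/v1/request-landing
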
Apply an L3/L4 Policy
=====================

When using Cilium, endpoint IP addresses are irrelevant when defining security
policies. Instead, you can use the labels assigned to the pods to define
security policies. The policies will be applied to the right pods based on the labels, irrespective of where or when they are running within the cluster.

We'll start with a basic policy restricting deathstar landing requests to only the ships that have the label ``org=empire``. Ships without the ``org=empire`` label will not even be able to connect to the *deathstar* service.
This simple policy filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often referred to as an L3/L4 network security policy.

Note: Cilium performs stateful *connection tracking*, meaning that if policy allows
the frontend to reach the backend, it will automatically allow all required reply
packets that are part of the backend replying to the frontend within the context of the
same TCP/UDP connection.

**L4 Policy with Cilium and Kubernetes**

.. image:: images/cilium_http_l3_l4_gsg.png
   :scale: 30 %

We can achieve that with the following CiliumNetworkPolicy:

.. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml

CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to identify the sources and destinations to which the policy applies.
The above policy whitelists traffic sent from any pods with the label ``org=empire`` to *deathstar* pods with the labels ``org=empire, class=deathstar`` on TCP port 80.

To apply this L3/L4 policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing requests again, only the *tiefighter* pod with the label ``org=empire`` will succeed. The *xwing* pod will be blocked!

.. parsed-literal::

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

This works as expected. Now the same request run from an *xwing* pod will fail:

.. parsed-literal::

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request, or wait for it to time out.

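If you would rather not interrupt it by hand, you can bound the request with ``curl``'s ``--max-time`` flag; a convenience variant (the 5-second budget is an arbitrary choice) that exits with a non-zero timeout error once the packets are dropped:

::

    $ kubectl exec xwing -- curl -s --max-time 5 -XPOST deathstar.default.svc.cluster.local/v1/request-landing
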
Inspecting the Policy
=====================

If we run ``cilium endpoint list`` again, we will see that the pods with the labels ``org=empire`` and ``class=deathstar`` now have ingress policy enforcement enabled, as per the policy above.

::

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4            STATUS
               ENFORCEMENT        ENFORCEMENT
    108        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default                 10.15.233.139   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1011       Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default                 10.15.96.117    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1518       Disabled           Disabled          4          reserved:health                                          10.15.28.196    ready
    2407       Enabled            Disabled          22839      k8s:class=deathstar                                      10.15.129.95    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3339       Enabled            Disabled          22839      k8s:class=deathstar                                      10.15.72.39     ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3738       Disabled           Disabled          47764      k8s:class=xwing                                          10.15.116.85    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance
    3837       Disabled           Disabled          9164       k8s:class=tiefighter                                     10.15.22.126    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire

You can also inspect the policy details via ``kubectl``:

::

    $ kubectl get cnp
    NAME    AGE
    rule1   2m

    $ kubectl describe cnp rule1
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  cilium.io/v2
    Description:  L3-L4 policy to restrict deathstar access to empire ships only
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2019-01-23T12:36:32Z
      Generation:          1
      Resource Version:    1115
      Self Link:           /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:                 837a2f1b-1f0b-11e9-9609-080027702f09
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
    Status:
      Nodes:
        Minikube:
          Enforcing:              true
          Last Updated:           2019-01-23T12:36:32.277839184Z
          Local Policy Revision:  5
          Ok:                     true
    Events:  <none>

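The ``describe`` view reformats field names; to see the resource exactly as stored, you can dump it with plain ``kubectl`` (output omitted here; it mirrors the policy file applied above):

::

    $ kubectl get cnp rule1 -o yaml
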
Apply and Test HTTP-aware L7 Policy
===================================

In the simple scenario above, it was sufficient to either give *tiefighter* /
*xwing* full access to *deathstar's* API or no access at all. But to
provide the strongest security (i.e., enforce least-privilege isolation)
between microservices, each service that calls *deathstar's* API should be
limited to making only the set of HTTP requests it requires for legitimate
operation.

For example, consider that the *deathstar* service exposes some maintenance APIs which should not be called by random empire ships. To see this, run:

::

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Panic: deathstar exploded

    goroutine 1 [running]:
    main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
            /code/src/github.com/empire/deathstar/
            temp/main.go:9 +0x64
    main.main()
            /code/src/github.com/empire/deathstar/
            temp/main.go:5 +0x85

While this is an illustrative example, such unauthorized access can have serious security repercussions.

**L7 Policy with Cilium and Kubernetes**

.. image:: images/cilium_http_l3_l4_l7_gsg.png
   :scale: 30 %

Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit which
URLs *tiefighter* is allowed to reach. Here is an example policy file that
extends our original policy by limiting *tiefighter* to making only a ``POST /v1/request-landing``
API call, while disallowing all other calls (including ``PUT /v1/exhaust-port``).

.. literalinclude:: ../../examples/minikube/sw_l3_l4_l7_policy.yaml

Update the existing rule to apply the L7-aware policy protecting *deathstar* using:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_l7_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 configured

We can now re-run the same test as above, but this time we will see a different outcome:

::

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

and

::

    $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Access denied

As you can see, with Cilium L7 security policies, we are able to permit
*tiefighter* to access only the required API resources on *deathstar*, thereby
implementing a "least privilege" security approach for communication between
microservices.

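The essential change relative to the earlier L3/L4 rule is the HTTP ``rules`` section attached to the ``toPorts`` match; the ingress fragment below mirrors what the rendered policy file above and the ``describe`` output that follows show:

::

    ingress:
    - fromEndpoints:
      - matchLabels:
          org: empire
      toPorts:
      - ports:
        - port: "80"
          protocol: TCP
        rules:
          http:
          - method: "POST"
            path: "/v1/request-landing"
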
You can observe the L7 policy via ``kubectl``:

::

    $ kubectl describe ciliumnetworkpolicies
    Name:         rule1
    Namespace:    default
    Labels:       <none>
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"cilium.io/v2","description":"L7 policy to restrict access to specific HTTP call","kind":"CiliumNetworkPolicy","metadata":{"...
    API Version:  cilium.io/v2
    Description:  L7 policy to restrict access to specific HTTP call
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2019-01-23T12:36:32Z
      Generation:          2
      Resource Version:    1484
      Self Link:           /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:                 837a2f1b-1f0b-11e9-9609-080027702f09
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
          Rules:
            Http:
              Method:  POST
              Path:    /v1/request-landing
    Status:
      Nodes:
        Minikube:
          Annotations:
            Kubectl . Kubernetes . Io / Last - Applied - Configuration:  {"apiVersion":"cilium.io/v2","description":"L7 policy to restrict access to specific HTTP call","kind":"CiliumNetworkPolicy","metadata":{"annotations":{},"name":"rule1","namespace":"default"},"spec":{"endpointSelector":{"matchLabels":{"class":"deathstar","org":"empire"}},"ingress":[{"fromEndpoints":[{"matchLabels":{"org":"empire"}}],"toPorts":[{"ports":[{"port":"80","protocol":"TCP"}],"rules":{"http":[{"method":"POST","path":"/v1/request-landing"}]}}]}]}}
          Enforcing:              true
          Last Updated:           2019-01-23T12:39:30.823729308Z
          Local Policy Revision:  7
          Ok:                     true
    Events:  <none>

and via the ``cilium`` CLI:

::

    $ kubectl -n kube-system exec cilium-5ngzd -- cilium policy get
    [
      {
        "endpointSelector": {
          "matchLabels": {
            "any:class": "deathstar",
            "any:org": "empire",
            "k8s:io.kubernetes.pod.namespace": "default"
          }
        },
        "ingress": [
          {
            "fromEndpoints": [
              {
                "matchLabels": {
                  "any:org": "empire",
                  "k8s:io.kubernetes.pod.namespace": "default"
                }
              }
            ],
            "toPorts": [
              {
                "ports": [
                  {
                    "port": "80",
                    "protocol": "TCP"
                  }
                ],
                "rules": {
                  "http": [
                    {
                      "path": "/v1/request-landing",
                      "method": "POST"
                    }
                  ]
                }
              }
            ]
          }
        ],
        "labels": [
          {
            "key": "io.cilium.k8s.policy.name",
            "value": "rule1",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.uid",
            "value": "837a2f1b-1f0b-11e9-9609-080027702f09",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.namespace",
            "value": "default",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.derived-from",
            "value": "CiliumNetworkPolicy",
            "source": "k8s"
          }
        ]
      }
    ]
    Revision: 7

We hope you enjoyed the tutorial. Feel free to play more with the setup, read
the rest of the documentation, and reach out to us on the `Cilium
Slack channel <https://cilium.herokuapp.com>`_ with any questions!

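If you would like to return the cluster to its previous state once you are done experimenting, deleting the demo objects created in this guide is enough (standard ``kubectl``, using the names from the steps above):

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
    $ kubectl delete cnp rule1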