.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _concepts_security:

********
Security
********

Cilium provides security on multiple levels. Each can be used individually or
combined together.

* :ref:`arch_id_security`: Connectivity policies between endpoints (Layer 3),
  e.g. any endpoint with the label ``role=frontend`` can connect to any
  endpoint with the label ``role=backend``.
* Restriction of accessible ports (Layer 4) for both incoming and outgoing
  connections, e.g. an endpoint with the label ``role=frontend`` can only make
  outgoing connections on port 443 (https) and an endpoint with the label
  ``role=backend`` can only accept connections on port 443 (https).
* Fine-grained access control on the application protocol level to secure HTTP
  and remote procedure call (RPC) protocols, e.g. the endpoint with the label
  ``role=frontend`` can only perform the REST API call
  ``GET /userdata/[0-9]+``; all other API interactions with ``role=backend``
  are restricted.

Currently on the roadmap, to be added soon:

* Authentication: Any endpoint which wants to initiate a connection to an
  endpoint with the label ``role=backend`` must have a particular security
  certificate to authenticate itself before being able to initiate any
  connections. See `GH issue 502
  <https://github.com/cilium/cilium/issues/502>`_ for additional details.
* Encryption: Communication between any endpoint with the label
  ``role=frontend`` and any endpoint with the label ``role=backend`` is
  automatically encrypted with a key that is automatically rotated. See
  `GH issue 504 <https://github.com/cilium/cilium/issues/504>`_ to track
  progress on this feature.

.. _arch_id_security:

Identity based Connectivity Access Control
==========================================

Container management systems such as Kubernetes deploy a networking model
which assigns an individual IP address to each pod (group of containers). This
ensures simplicity in architecture, avoids unnecessary network address
translation (NAT) and provides each individual container with a full range of
port numbers to use. The logical consequence of this model is that depending
on the size of the cluster and the total number of pods, the networking layer
has to manage a large number of IP addresses.

Traditionally, security enforcement architectures have been based on IP
address filters. Let's walk through a simple example: If all pods with the
label ``role=frontend`` should be allowed to initiate connections to all pods
with the label ``role=backend``, then each cluster node which runs at least
one pod with the label ``role=backend`` must have a corresponding filter
installed which allows all IP addresses of all ``role=frontend`` pods to
initiate a connection to the IP addresses of all local ``role=backend`` pods.
All other connection requests should be denied. For example: if the
destination address is *10.1.1.2*, then allow the connection only if the
source address is one of *[10.1.2.2, 10.1.2.3, 20.4.9.1]*.

Every time a pod with the label ``role=frontend`` or ``role=backend`` is
started or stopped, the rules on every cluster node which runs any such pods
must be updated by either adding or removing the corresponding IP address from
the list of allowed IP addresses. In large distributed applications, this
could imply updating thousands of cluster nodes multiple times per second,
depending on the churn rate of deployed pods.
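The difference in update cost can be sketched in a few lines of illustrative
Python. This is not Cilium code; the function names and data structures are
made up purely to contrast the two models:

```python
# Illustrative sketch (not Cilium code): contrast the per-node update cost of
# IP-based filtering with a label-to-identity lookup when a new pod starts.

# IP-based model: every node hosting a backend pod keeps an allow list of
# frontend IP addresses that must be updated on every pod churn event.
def ip_based_updates(backend_nodes, new_frontend_ip):
    updates = 0
    for node in backend_nodes:
        node["allowed_ips"].add(new_frontend_ip)  # one rule update per node
        updates += 1
    return updates  # grows with the number of backend-hosting nodes

# Identity-based model: the new pod resolves its label set to a numeric
# identity once (e.g. via a key-value store); backend nodes are untouched.
def identity_based_updates(identity_store, labels):
    key = frozenset(labels.items())
    if key not in identity_store:
        identity_store[key] = len(identity_store) + 1  # allocate new identity
    return identity_store[key]  # single lookup, zero per-node rule updates

nodes = [{"allowed_ips": set()} for _ in range(1000)]
print(ip_based_updates(nodes, "10.1.2.4"))  # 1000 rule updates for one pod

store = {}
print(identity_based_updates(store, {"role": "frontend"}))  # identity 1
print(identity_based_updates(store, {"role": "frontend"}))  # still identity 1
```

The point of the sketch: in the IP-based model the work is proportional to the
number of nodes hosting ``role=backend`` pods, while in the identity-based
model starting another pod with the same labels costs only a lookup.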
Worse, the start of new ``role=frontend`` pods must be delayed until all
servers running ``role=backend`` pods have been updated with the new security
rules, as otherwise connection attempts from the new pod could be mistakenly
dropped. This makes it difficult to scale efficiently.

In order to avoid these complications, which can limit scalability and
flexibility, Cilium entirely separates security from network addressing.
Instead, security is based on the identity of a pod, which is derived from its
labels. This identity can be shared between pods. This means that when the
first ``role=frontend`` pod is started, Cilium assigns an identity to that pod
which is then allowed to initiate connections to the identity of the
``role=backend`` pods. Starting additional ``role=frontend`` pods only
requires resolving this identity via a key-value store; no action has to be
performed on any of the cluster nodes hosting ``role=backend`` pods. The start
of a new pod must only be delayed until the identity of the pod has been
resolved, which is a much simpler operation than updating the security rules
on all other cluster nodes.

.. image:: ../images/identity.png
    :align: center


Policy Enforcement
==================

All security policies are described assuming stateful policy enforcement for
session based protocols. This means that the intent of the policy is to
describe the allowed direction of connection establishment. If the policy
allows ``A => B``, then reply packets from ``B`` to ``A`` are automatically
allowed as well. However, ``B`` is not automatically allowed to initiate
connections to ``A``. If that outcome is desired, then both directions must be
explicitly allowed.

Security policies may be enforced at *ingress* or *egress*.
For *ingress*, this means that each cluster node verifies all incoming packets
and determines whether the packet is allowed to be transmitted to the intended
endpoint. Correspondingly, for *egress* each cluster node verifies outgoing
packets and determines whether the packet is allowed to be transmitted to its
intended destination.

In order to enforce identity based security in a multi-host cluster, the
identity of the transmitting endpoint is embedded into every network packet
that is transmitted between cluster nodes. The receiving cluster node can then
extract the identity and verify whether that identity is allowed to
communicate with any of the local endpoints.

Default Security Policy
-----------------------

If no policy is loaded, the default behavior is to allow all communication
unless policy enforcement has been explicitly enabled. As soon as the first
policy rule is loaded, policy enforcement is enabled automatically and any
communication must then be whitelisted or the relevant packets will be
dropped.

Similarly, if an endpoint is not subject to an *L4* policy, communication from
and to all ports is permitted. Associating at least one *L4* policy with an
endpoint will block all connectivity to ports unless explicitly allowed.


Orchestration System Specifics
==============================

Kubernetes
----------

Cilium regards each deployed `Pod` as an endpoint with regard to networking
and security policy enforcement. Labels associated with pods can be used to
define the identity of the endpoint.

When two pods communicate via a service construct, the labels of the
originating pod are used to determine the identity.
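To make this concrete, the frontend/backend example from this page could be
expressed as a ``CiliumNetworkPolicy`` along the following lines. This is a
minimal sketch; the policy name is illustrative:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend    # illustrative name
spec:
  endpointSelector:
    matchLabels:
      role: backend            # rule applies to backend endpoints
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend         # only frontend identities may connect
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP          # and only on port 443
```

Because the rule selects labels rather than IP addresses, it does not need to
change as pods with these labels are started or stopped.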