.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _policy_troubleshooting:

***************
Troubleshooting
***************

.. _policy_tracing:

Policy Tracing
==============

If Cilium is allowing or denying connections in a way that does not match the
intent of your Cilium Network Policy, there is an easy way to verify whether
and which policy rules apply between two endpoints. The ``cilium policy
trace`` command simulates a policy decision between a source and a destination
endpoint.

We will use the example from the `Minikube Getting Started Guide <http://cilium.readthedocs.io/en/latest/gettingstarted/minikube/#getting-started-using-minikube>`_ to trace the policy. In this example, there is:

* a ``deathstar`` service identified by the labels ``org=empire, class=deathstar``. The service is backed by two pods.
* a ``tiefighter`` spaceship client pod with the labels ``org=empire, class=tiefighter``
* an ``xwing`` spaceship client pod with the labels ``org=alliance, class=xwing``

An L3/L4 policy is enforced on the ``deathstar`` service to allow access from
all spaceships with the label ``org=empire``. Under this policy, access from
``tiefighter`` is allowed while access from ``xwing`` is denied. Let's use
``cilium policy trace`` to simulate the policy decision. The command can be
run using pod names, labels or Cilium security identities.

.. note::

   If the ``--dport`` option is not specified, then L4 policy will not be
   consulted in this policy trace command.

   Currently, there is no support for tracing L7 policies via this tool.

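
For context, the rule being traced below can be expressed as a
``CiliumNetworkPolicy`` along these lines. This is a sketch reconstructed from
the Getting Started Guide and from the ``cilium policy get`` output shown
further down (the rule name ``rule1`` and the HTTP section come from that
output):

.. code-block:: yaml

    # Sketch of the L3/L4 (plus L7) rule under test; reconstructed, not
    # copied verbatim from the guide.
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: rule1
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          rules:
            http:
            - method: POST
              path: /v1/request-landing
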
.. code:: bash

    # Policy trace using pod name and service labels

    $ kubectl exec -ti cilium-88k78 -n kube-system -- cilium policy trace --src-k8s-pod default:xwing -d any:class=deathstar,k8s:org=empire,k8s:io.kubernetes.pod.namespace=default --dport 80
    level=info msg="Waiting for k8s api-server to be ready..." subsys=k8s
    level=info msg="Connected to k8s api-server" ipAddr="https://10.96.0.1:443" subsys=k8s
    ----------------------------------------------------------------
    Tracing From: [k8s:class=xwing, k8s:io.cilium.k8s.policy.serviceaccount=default, k8s:io.kubernetes.pod.namespace=default, k8s:org=alliance] => To: [any:class=deathstar, k8s:org=empire, k8s:io.kubernetes.pod.namespace=default] Ports: [80/ANY]

    Resolving ingress policy for [any:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default]
    * Rule {"matchLabels":{"any:class":"deathstar","any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}}: selected
        Allows from labels {"matchLabels":{"any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}}
          Labels [k8s:class=xwing k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=alliance] not found
    1/1 rules selected
    Found no allow rule
    Ingress verdict: denied

    Final verdict: DENIED

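
As a cross-check, the same trace can be run with ``tiefighter`` as the source
(this invocation is not part of the original example; it simply mirrors the
``xwing`` command above, reusing the same Cilium pod and destination labels):

.. code:: bash

    # Policy trace for the client that should be allowed
    $ kubectl exec -ti cilium-88k78 -n kube-system -- cilium policy trace --src-k8s-pod default:tiefighter -d any:class=deathstar,k8s:org=empire,k8s:io.kubernetes.pod.namespace=default --dport 80

Because ``tiefighter`` carries the ``org=empire`` label matched by the allow
rule, this trace is expected to end in ``Final verdict: ALLOWED``.
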
.. code:: bash

    # Get the Cilium security id

    $ kubectl exec -ti cilium-88k78 -n kube-system -- cilium endpoint list | egrep 'deathstar|xwing|tiefighter'
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])   IPv6                 IPv4            STATUS
               ENFORCEMENT        ENFORCEMENT
    568        Enabled            Disabled          22133      k8s:class=deathstar           f00d::a0f:0:0:238    10.15.65.193    ready
    900        Enabled            Disabled          22133      k8s:class=deathstar           f00d::a0f:0:0:384    10.15.114.17    ready
    33633      Disabled           Disabled          53208      k8s:class=xwing               f00d::a0f:0:0:8361   10.15.151.230   ready
    38654      Disabled           Disabled          22962      k8s:class=tiefighter          f00d::a0f:0:0:96fe   10.15.88.156    ready

    # Policy trace using Cilium security ids

    $ kubectl exec -ti cilium-88k78 -n kube-system -- cilium policy trace --src-identity 53208 --dst-identity 22133 --dport 80
    ----------------------------------------------------------------
    Tracing From: [k8s:class=xwing, k8s:io.cilium.k8s.policy.serviceaccount=default, k8s:io.kubernetes.pod.namespace=default, k8s:org=alliance] => To: [any:class=deathstar, k8s:org=empire, k8s:io.kubernetes.pod.namespace=default] Ports: [80/ANY]

    Resolving ingress policy for [any:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default]
    * Rule {"matchLabels":{"any:class":"deathstar","any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}}: selected
        Allows from labels {"matchLabels":{"any:org":"empire","k8s:io.kubernetes.pod.namespace":"default"}}
          Labels [k8s:class=xwing k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=alliance] not found
    1/1 rules selected
    Found no allow rule
    Ingress verdict: denied

    Final verdict: DENIED


Policy Rule to Endpoint Mapping
===============================

To determine which policy rules are currently in effect for an endpoint, the
data from ``cilium endpoint list`` and ``cilium endpoint get`` can be paired
with the data from ``cilium policy get``. ``cilium endpoint get`` will list the
labels of each rule that applies to an endpoint. The list of labels can be
passed to ``cilium policy get`` to show that exact source policy. Note that
rules that have no labels cannot be fetched alone (a ``cilium policy get``
without labels returns the complete policy on the node). Rules with the same
labels will be returned together.

In the example above, the endpoint id of one of the ``deathstar`` pods is 568.
We can print all policies applied to it with:

.. code:: bash

    # Get a shell on the Cilium pod

    $ kubectl exec -ti cilium-88k78 -n kube-system /bin/bash

    # Print out the ingress labels,
    # clean up the data, and
    # fetch each policy via each set of labels.
    # (Note that while the structure is "...l4.ingress...", it reflects all
    # L3, L4 and L7 policy.)

    $ cilium endpoint get 568 -o jsonpath='{range ..status.policy.realized.l4.ingress[*].derived-from-rules}{@}{"\n"}{end}' | tr -d '][' | xargs -I{} bash -c 'echo "Labels: {}"; cilium policy get {}'
    Labels: k8s:io.cilium.k8s.policy.name=rule1 k8s:io.cilium.k8s.policy.namespace=default
    [
      {
        "endpointSelector": {
          "matchLabels": {
            "any:class": "deathstar",
            "any:org": "empire",
            "k8s:io.kubernetes.pod.namespace": "default"
          }
        },
        "ingress": [
          {
            "fromEndpoints": [
              {
                "matchLabels": {
                  "any:org": "empire",
                  "k8s:io.kubernetes.pod.namespace": "default"
                }
              }
            ],
            "toPorts": [
              {
                "ports": [
                  {
                    "port": "80",
                    "protocol": "TCP"
                  }
                ],
                "rules": {
                  "http": [
                    {
                      "path": "/v1/request-landing",
                      "method": "POST"
                    }
                  ]
                }
              }
            ]
          }
        ],
        "labels": [
          {
            "key": "io.cilium.k8s.policy.name",
            "value": "rule1",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.namespace",
            "value": "default",
            "source": "k8s"
          }
        ]
      }
    ]
    Revision: 217


    # Repeat for egress
    $ cilium endpoint get 568 -o jsonpath='{range ..status.policy.realized.l4.egress[*].derived-from-rules}{@}{"\n"}{end}' | tr -d '][' | xargs -I{} bash -c 'echo "Labels: {}"; cilium policy get {}'

Troubleshooting ``toFQDNs`` rules
=================================

The effect of ``toFQDNs`` rules may change long after a policy is applied, as
DNS data changes. This can make it difficult to debug unexpectedly blocked
connections or transient failures. Cilium provides CLI tools to introspect
the state of FQDN policy enforcement at multiple layers of the daemon:

#. ``cilium policy get`` should show the FQDN policy that was imported:

   .. code-block:: json

      {
        "endpointSelector": {
          "matchLabels": {
            "any:class": "mediabot",
            "any:org": "empire",
            "k8s:io.kubernetes.pod.namespace": "default"
          }
        },
        "egress": [
          {
            "toFQDNs": [
              {
                "matchName": "api.twitter.com"
              }
            ]
          },
          {
            "toEndpoints": [
              {
                "matchLabels": {
                  "k8s:io.kubernetes.pod.namespace": "kube-system",
                  "k8s:k8s-app": "kube-dns"
                }
              }
            ],
            "toPorts": [
              {
                "ports": [
                  {
                    "port": "53",
                    "protocol": "ANY"
                  }
                ],
                "rules": {
                  "dns": [
                    {
                      "matchPattern": "*"
                    }
                  ]
                }
              }
            ]
          }
        ],
        "labels": [
          {
            "key": "io.cilium.k8s.policy.derived-from",
            "value": "CiliumNetworkPolicy",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.name",
            "value": "fqdn",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.namespace",
            "value": "default",
            "source": "k8s"
          },
          {
            "key": "io.cilium.k8s.policy.uid",
            "value": "fc9d6022-2ffa-4f72-b59e-b9067c3cfecf",
            "source": "k8s"
          }
        ]
      }

#. After making a DNS request, the FQDN to IP mapping should be available via
   ``cilium fqdn cache list``:

   .. code-block:: shell-session

      # cilium fqdn cache list
      Endpoint   FQDN                TTL      ExpirationTime             IPs
      2761       help.twitter.com.   604800   2019-07-16T17:57:38.179Z   104.244.42.67,104.244.42.195,104.244.42.3,104.244.42.131
      2761       api.twitter.com.    604800   2019-07-16T18:11:38.627Z   104.244.42.194,104.244.42.130,104.244.42.66,104.244.42.2

#. If the traffic is allowed, then these IPs should have corresponding local
   identities via ``cilium identity list | grep <IP>``:

   .. code-block:: shell-session

      # cilium identity list | grep -A 1 104.244.42.194
      16777220   cidr:104.244.42.194/32
                 reserved:world

#. Given the identity of the traffic that should be allowed, the regular
   :ref:`policy_tracing` steps can be used to validate that the policy is
   calculated correctly.
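
Steps 2 and 3 can be combined into a quick loop from a shell inside the Cilium
agent pod. This is a sketch, not part of the official tooling: it assumes the
``api.twitter.com`` entry from the example above and uses only the
``cilium fqdn cache list`` and ``cilium identity list`` commands already shown:

.. code-block:: bash

    # For each IP cached for api.twitter.com, check whether a local CIDR
    # identity exists. A missing identity suggests the toFQDNs policy will
    # not match traffic to that IP.
    for ip in $(cilium fqdn cache list | awk '/api.twitter.com/ {print $NF}' | tr ',' ' '); do
        if cilium identity list | grep -Fq "cidr:${ip}/32"; then
            echo "${ip}: identity present"
        else
            echo "${ip}: no identity - check DNS visibility and the policy"
        fi
    done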