github.com/cilium/cilium@v1.16.2/Documentation/security/policy-creation.rst

     1  .. only:: not (epub or latex or html)
     2  
     3      WARNING: You are looking at unreleased Cilium documentation.
     4      Please use the official rendered version released here:
     5      https://docs.cilium.io
     6  
     7  .. _policy_verdicts:
     8  
     9  *******************************
    10  Creating Policies from Verdicts
    11  *******************************
    12  
    13  Policy Audit Mode configures Cilium to allow all traffic while logging all
    14  connections that would otherwise be dropped by network policies. Policy Audit
    15  Mode may be configured for the entire daemon using ``--policy-audit-mode=true``
    16  or for individual Cilium Endpoints. When Policy Audit Mode is enabled, no
    17  network policy is enforced, so this setting is **not recommended for production
    18  deployment**. Policy Audit Mode supports auditing network policies implemented
    19  at network layers 3 and 4. This guide walks through the process of creating
    20  policies using Policy Audit Mode.
    21  
    22  .. include:: gsg_requirements.rst
    23  .. include:: gsg_sw_demo.rst
    24  
    25  Scale down the deathstar Deployment
    26  ===================================
    27  
    28  In this guide we're going to scale down the deathstar Deployment in order to
    29  simplify the next steps:
    30  
    31  .. code-block:: shell-session
    32  
    33     $ kubectl scale --replicas=1 deployment deathstar
    34     deployment.apps/deathstar scaled
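
        You can confirm the new replica count before continuing:

        .. code-block:: shell-session

           $ kubectl get deployment deathstar -o jsonpath='{.spec.replicas}'
           1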
    35  
    36  Enable Policy Audit Mode (Entire Daemon)
    37  ========================================
    38  
    39  To observe policy audit messages for all endpoints managed by this DaemonSet,
    40  modify the Cilium ConfigMap and restart all daemons:
    41  
    42     .. tabs::
    43  
    44        .. group-tab:: Configure via kubectl
    45  
    46           .. code-block:: shell-session
    47  
    48              $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"true"}}'
    49              configmap/cilium-config patched
    50              $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
    51              daemonset.apps/cilium restarted
    52              $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
    53              Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
    54              daemon set "cilium" successfully rolled out
    55  
    56        .. group-tab:: Helm Upgrade
    57  
    58           If you installed Cilium via ``helm install``, then you can use ``helm
    59           upgrade`` to enable Policy Audit Mode:
    60  
    61           .. parsed-literal::
    62  
    63              $ helm upgrade cilium |CHART_RELEASE| \\
    64                  --namespace $CILIUM_NAMESPACE \\
    65                  --reuse-values \\
    66                  --set policyAuditMode=true
    67  
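
        With either method, you can confirm that the setting landed in the Cilium
        ConfigMap (the Helm value ``policyAuditMode`` is rendered into the same
        ``policy-audit-mode`` key):

        .. code-block:: shell-session

           $ kubectl -n $CILIUM_NAMESPACE get configmap cilium-config -o jsonpath='{.data.policy-audit-mode}'
           true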
    68  
    69  Enable Policy Audit Mode (Specific Endpoint)
    70  ============================================
    71  
    72  Cilium can enable Policy Audit Mode for a specific endpoint. This may be helpful when enabling
    73  Policy Audit Mode for the entire daemon is too broad. Enabling it per endpoint ensures that other
    74  endpoints managed by the same daemon are not impacted.
    75  
    76  This approach is meant to be temporary. **Restarting the Cilium pod will reset the Policy Audit
    77  Mode to match the daemon's configuration.**
    78  
    79  Policy Audit Mode is enabled for a given endpoint by modifying the endpoint configuration via
    80  the ``cilium`` tool on the endpoint's Kubernetes node. The steps include:
    81  
    82  #. Determine the endpoint ID on which Policy Audit Mode will be enabled.
    83  #. Identify the Cilium pod running on the same Kubernetes node as the endpoint.
    84  #. Using the Cilium pod above, modify the endpoint configuration by setting ``PolicyAuditMode=Enabled``.
    85  
    86  The following shell commands perform these steps:
    87  
    88  .. code-block:: shell-session
    89  
    90     $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
    91     $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
    92     $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
    93     $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
    94     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
    95         cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Enabled
    96     Endpoint 232 configuration updated successfully
    97  
    98  We can check that Policy Audit Mode is enabled for this endpoint:
    99  
   100  .. code-block:: shell-session
   101  
   102     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   103         cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
   104     Enabled
   105  
   106  .. _observe_policy_verdicts:
   107  
   108  Observe policy verdicts
   109  =======================
   110  
   111  In this example, we are tasked with applying a security policy for the deathstar.
   112  First, from the Cilium pod we need to monitor the notifications for policy
   113  verdicts using the Hubble CLI. We'll be monitoring for inbound traffic towards
   114  the deathstar to identify it and determine whether to extend the network policy
   115  to allow that traffic.
   116  
   117  Apply a default-deny policy:
   118  
   119  .. literalinclude:: ../../examples/minikube/sw_deny_policy.yaml
   120  
   121  CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to identify
   122  the sources and destinations to which the policy applies. The above policy denies
   123  traffic sent to any pods with the label ``org=empire``. Because Policy Audit Mode
   124  was enabled above (either for the entire daemon, or for just the ``deathstar`` endpoint),
   125  the traffic will not actually be denied but will instead trigger policy verdict
   126  notifications.
   127  
   128  To apply this policy, run:
   129  
   130  .. parsed-literal::
   131  
   132      $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_deny_policy.yaml
   133      ciliumnetworkpolicy.cilium.io/empire-default-deny created
   134  
   135  With the above policy, we will enable a default-deny posture on ingress to pods
   136  with the label ``org=empire`` and enable the policy verdict notifications for
   137  those pods. The same principle applies on egress as well.
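
        For illustration, an egress default-deny for the same pods would use an
        empty ``egress`` rule instead of ``ingress`` (this sketch is not applied
        anywhere in this guide, and the policy name is hypothetical):

        .. code-block:: yaml

           apiVersion: "cilium.io/v2"
           kind: CiliumNetworkPolicy
           metadata:
             name: empire-default-deny-egress
           spec:
             description: "Default-deny egress policy for the empire"
             endpointSelector:
               matchLabels:
                 org: empire
             egress:
             - {}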
   138  
   139  Now let's send some traffic from the tiefighter to the deathstar:
   140  
   141  .. code-block:: shell-session
   142  
   143      $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
   144      Ship landed
   145  
   146  We can check the policy verdict from the Cilium Pod:
   147  
   148  .. code-block:: shell-session
   149  
   150     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   151         hubble observe flows -t policy-verdict --last 1
   152     Feb  7 12:53:39.168: default/tiefighter:54134 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none AUDITED (TCP Flags: SYN)
   153  
   154  In the above example, we can see that the Pod ``deathstar-6fb5694d48-5hmds`` has
   155  received traffic from the ``tiefighter`` Pod which doesn't match the policy
   156  (``policy-verdict:none AUDITED``).
   157  
   158  .. _create_network_policy:
   159  
   160  Create the Network Policy
   161  =========================
   162  
   163  We can get more information about the flow with:
   164  
   165  .. code-block:: shell-session
   166  
   167     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   168         hubble observe flows -t policy-verdict -o json --last 1
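
        The JSON output follows the Hubble flow format. As a sketch, ``jq`` can pull
        out just the fields needed for a policy; the flow below is an abridged,
        hypothetical sample of that output:

        .. code-block:: shell-session

           $ FLOW='{"flow":{"source":{"labels":["k8s:class=tiefighter","k8s:org=empire"]},"destination":{"labels":["k8s:class=deathstar","k8s:org=empire"]},"l4":{"TCP":{"destination_port":80}}}}'
           $ echo "$FLOW" | jq -c '{src: .flow.source.labels, port: .flow.l4.TCP.destination_port}'
           {"src":["k8s:class=tiefighter","k8s:org=empire"],"port":80}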
   169  
   170  Given the above information, we now know the labels of the source and
   171  destination Pods, the traffic direction, and the destination port. In this case,
   172  we can see clearly that the source (i.e. the tiefighter Pod) is an empire
   173  aircraft (as it has the ``org=empire`` label) so once we've determined that we
   174  expect this traffic to arrive at the deathstar, we can form a policy to match
   175  the traffic:
   176  
   177  .. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml
   178  
   179  To apply this L3/L4 policy, run:
   180  
   181  .. parsed-literal::
   182  
   183      $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml
   184      ciliumnetworkpolicy.cilium.io/rule1 created
   185  
   186  Now if we run the landing requests again,
   187  
   188  .. code-block:: shell-session
   189  
   190      $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
   191      Ship landed
   192  
   193  we can then observe that the traffic which was previously only audited (and
   194  would have been dropped by the policy) is now reported as allowed:
   195  
   196  .. code-block:: shell-session
   197  
   198     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   199         hubble observe flows -t policy-verdict --last 1
   200     ...
   201     Feb  7 13:06:45.130: default/tiefighter:59824 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)
   202  
   203  Now the policy verdict states that the traffic would be allowed:
   204  ``policy-verdict:L3-L4 ALLOWED``. Success!
   205  
   206  Disable Policy Audit Mode (Entire Daemon)
   207  =========================================
   208  
   209  These steps should be repeated for each connection in the cluster to ensure
   210  that the network policy allows all of the expected traffic. The final step
   211  after deploying the policy is to disable Policy Audit Mode again:
   212  
   213     .. tabs::
   214  
   215        .. group-tab:: Configure via kubectl
   216  
   217           .. code-block:: shell-session
   218  
   219              $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"false"}}'
   220              configmap/cilium-config patched
   221              $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
   222              daemonset.apps/cilium restarted
   223              $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
   224              Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
   225              daemon set "cilium" successfully rolled out
   226  
   227        .. group-tab:: Helm Upgrade
   228  
   229           .. parsed-literal::
   230  
   231              $ helm upgrade cilium |CHART_RELEASE| \\
   232                  --namespace $CILIUM_NAMESPACE \\
   233                  --reuse-values \\
   234                  --set policyAuditMode=false
   235  
   236  
   237  Disable Policy Audit Mode (Specific Endpoint)
   238  =============================================
   239  
   240  These steps are nearly identical to enabling Policy Audit Mode.
   241  
   242  .. code-block:: shell-session
   243  
   244     $ PODNAME=$(kubectl get pods -l app.kubernetes.io/name=deathstar -o jsonpath='{.items[*].metadata.name}')
   245     $ NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
   246     $ ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
   247     $ CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
   248     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   249         cilium-dbg endpoint config "$ENDPOINT" PolicyAuditMode=Disabled
   250     Endpoint 232 configuration updated successfully
   251  
   252  Alternatively, **restarting the Cilium pod** will reset the endpoint's Policy Audit Mode to the DaemonSet configuration.
   253  
   254  
   255  Verify Policy Audit Mode is Disabled
   256  ====================================
   257  
   258  .. code-block:: shell-session
   259  
   260     $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   261         cilium-dbg endpoint get "$ENDPOINT" -o jsonpath='{[*].spec.options.PolicyAuditMode}'
   262     Disabled
   263  
   264  Now if we run the landing requests again, only the *tiefighter* pods with the
   265  label ``org=empire`` should succeed:
   266  
   267  .. code-block:: shell-session
   268  
   269      $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
   270      Ship landed
   271  
   272  And we can observe that the traffic was allowed by the policy:
   273  
   274  .. code-block:: shell-session
   275  
   276      $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   277          hubble observe flows -t policy-verdict --from-pod tiefighter --last 1
   278      Feb  7 13:34:26.112: default/tiefighter:37314 (ID:31028) -> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:L3-L4 ALLOWED (TCP Flags: SYN)
   279  
   280  
   281  This works as expected. Now the same request from an *xwing* Pod should fail:
   282  
   283  .. code-block:: shell-session
   284  
   285      $ kubectl exec xwing -- curl --connect-timeout 3 -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
   286      command terminated with exit code 28
   287  
   288  This ``curl`` request should time out after three seconds (exit code 28 is
   289  ``curl``'s timeout error). We can observe the policy verdict with:
   290  
   291  .. code-block:: shell-session
   292  
   293      $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- \
   294          hubble observe flows -t policy-verdict --from-pod xwing --last 1
   295      Feb  7 13:43:46.791: default/xwing:54842 (ID:22654) <> default/deathstar-6fb5694d48-5hmds:80 (ID:16530) policy-verdict:none DENIED (TCP Flags: SYN)
   296  
   297  
   298  We hope you enjoyed the tutorial. Feel free to play more with the setup,
   299  follow the :ref:`gs_http` guide, and reach out to us on `Cilium Slack`_ with any
   300  questions!
   301  
   302  Clean-up
   303  ========
   304  
   305  .. parsed-literal::
   306  
   307     $ kubectl delete -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
   308     $ kubectl delete cnp empire-default-deny rule1