.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k8s_configuration:

*************
Configuration
*************

ConfigMap Options
-----------------

In the :term:`ConfigMap` there are several options that can be configured
according to your preferences:

* ``debug`` - Runs Cilium in full debug mode, which enables verbose logging
  and configures eBPF programs to emit more visibility events into the output
  of ``cilium-dbg monitor``.

* ``enable-ipv4`` - Enable IPv4 addressing support.

* ``enable-ipv6`` - Enable IPv6 addressing support.

* ``clean-cilium-bpf-state`` - Removes all eBPF state from the filesystem on
  startup. Endpoints will be restored with the same IP addresses, but ongoing
  connections may be briefly disrupted and load-balancing decisions will be
  lost, so active connections via the load balancer will break. All eBPF state
  will be reconstructed from its original sources (for example, from
  Kubernetes or the kvstore). This may be used to mitigate serious issues
  regarding eBPF maps. This option should be turned off again after restarting
  the daemon.

* ``clean-cilium-state`` - Removes **all** Cilium state, including unrecoverable
  information such as all endpoint state, as well as recoverable state such as
  eBPF state pinned to the filesystem, CNI configuration files, library code,
  links, routes, and other information. **This operation is irreversible**.
  Existing endpoints currently managed by Cilium may continue to operate as
  before, but Cilium will no longer manage them and they may stop working
  without warning. After using this operation, endpoints must be deleted and
  reconnected to allow the new instance of Cilium to manage them.

* ``monitor-aggregation`` - This option enables coalescing of tracing events in
  ``cilium-dbg monitor`` so that the output only includes periodic updates from
  active flows, or any packets that involve an L4 connection state change.
  Valid options are ``none``, ``low``, ``medium``, ``maximum`` (see the
  example after this list).

  - ``none`` - Generate a tracing event on every receive and send packet.
  - ``low`` - Generate a tracing event on every send packet.
  - ``medium`` - Generate a tracing event on every new connection, any time a
    packet contains TCP flags that have not been previously seen for the packet
    direction, and on average once per ``monitor-aggregation-interval``
    (assuming that a packet is seen during the interval). TCP flags and the
    report interval are tracked separately for each direction. If Cilium drops
    a packet, it will emit one event per dropped packet.
  - ``maximum`` - An alias for the most aggressive aggregation level. Currently
    this is equivalent to setting ``monitor-aggregation`` to ``medium``.

* ``monitor-aggregation-interval`` - Defines the interval to report tracing
  events. Only applicable for ``monitor-aggregation`` levels ``medium`` or
  higher. Assuming new packets are sent at least once per interval, this
  ensures that on average one event is sent during the interval.

* ``preallocate-bpf-maps`` - Pre-allocation of map entries allows per-packet
  latency to be reduced, at the expense of up-front memory allocation for the
  entries in the maps. Set to ``true`` to optimize for latency. If this value
  is modified, then during the next Cilium startup connectivity may be
  temporarily disrupted for endpoints with active connections.

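As referenced in the ``monitor-aggregation`` entry above, these options can be
changed at runtime by editing the ConfigMap. A minimal sketch, assuming the
default ``cilium-config`` ConfigMap in the ``kube-system`` namespace:

.. code-block:: shell-session

    $ kubectl -n kube-system patch configmap cilium-config \
        --type merge -p '{"data":{"monitor-aggregation":"medium"}}'
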
Any changes that you perform in the Cilium :term:`ConfigMap` and in the
``cilium-etcd-secrets`` ``Secret`` will require you to restart any existing
Cilium pods in order for them to pick up the latest configuration.

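For example, with the default DaemonSet name ``cilium``, the agent pods can be
restarted with a rolling restart:

.. code-block:: shell-session

    $ kubectl -n kube-system rollout restart daemonset/cilium
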
.. attention::

   When updating keys or values in the ConfigMap, the changes might take up to
   2 minutes to be propagated to all nodes running in the cluster. For more
   information see the official Kubernetes docs:
   `Mounted ConfigMaps are updated automatically <https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#mounted-configmaps-are-updated-automatically>`__

The following :term:`ConfigMap` is an example where the etcd cluster is running
on two nodes, ``node-1`` and ``node-2``, with TLS and client-to-server
authentication enabled.

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cilium-config
      namespace: kube-system
    data:
      # The kvstore configuration is used to enable use of a kvstore for state
      # storage.
      kvstore: etcd
      kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'

      # This etcd-config contains the etcd endpoints of your cluster. If you use
      # TLS, please make sure you follow the tutorial at https://cilium.link/etcd-config
      etcd-config: |-
        ---
        endpoints:
          - https://node-1:31079
          - https://node-2:31079
        #
        # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
        # and create a Kubernetes secret by following the tutorial at
        # https://cilium.link/etcd-config
        trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
        #
        # In case you want client to server authentication, uncomment the following
        # lines and create a Kubernetes secret by following the tutorial at
        # https://cilium.link/etcd-config
        key-file: '/var/lib/etcd-secrets/etcd-client.key'
        cert-file: '/var/lib/etcd-secrets/etcd-client.crt'

      # If you want to run Cilium in debug mode, change this value to true
      debug: "false"
      enable-ipv4: "true"
      # If you want to clean Cilium state, change this value to true
      clean-cilium-state: "false"

CNI
===

:term:`CNI` - Container Network Interface is the plugin layer used by Kubernetes to
delegate networking configuration. You can find additional information on the
:term:`CNI` project website.

CNI configuration is automatically taken care of when deploying Cilium via the
provided :term:`DaemonSet`. The ``cilium`` pod will generate an appropriate CNI
configuration file and write it to disk on startup.

.. note:: In order for CNI installation to work properly, the
          ``kubelet`` task must either be running on the host filesystem of the
          worker node, or the ``/etc/cni/net.d`` and ``/opt/cni/bin``
          directories must be mounted into the container where ``kubelet`` is
          running. This can be achieved with :term:`Volumes` mounts.

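As a sketch of the second case, the host directories could be exposed to a
containerized ``kubelet`` with ``hostPath`` volumes similar to the following
(the volume names are illustrative):

.. code-block:: yaml

    # Illustrative fragment of a containerized kubelet's container spec;
    # only the two host paths matter, the volume names are hypothetical.
    volumeMounts:
    - name: cni-conf-dir
      mountPath: /etc/cni/net.d
    - name: cni-bin-dir
      mountPath: /opt/cni/bin
    volumes:
    - name: cni-conf-dir
      hostPath:
        path: /etc/cni/net.d
    - name: cni-bin-dir
      hostPath:
        path: /opt/cni/bin
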
The CNI auto installation is performed as follows:

1. The ``/etc/cni/net.d`` and ``/opt/cni/bin`` directories are mounted from the
   host filesystem into the pod where Cilium is running.

2. The binary ``cilium-cni`` is installed to ``/opt/cni/bin``. Any existing
   binary with the name ``cilium-cni`` is overwritten.

3. The file ``/etc/cni/net.d/05-cilium.conflist`` is written.

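Once the agent reports ready, you can confirm the result on the worker node,
for example:

.. code-block:: shell-session

    $ ls /etc/cni/net.d/05-cilium.conflist /opt/cni/bin/cilium-cni
    /etc/cni/net.d/05-cilium.conflist  /opt/cni/bin/cilium-cni
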

Adjusting CNI configuration
---------------------------

The CNI configuration file is automatically written and maintained by the
Cilium pod. It is written after the agent has finished initialization and
is ready to handle pod sandbox creation. In addition, the agent will remove
any other CNI configuration files by default.

There are a number of Helm variables that adjust CNI configuration management.
For a full description, see the Helm documentation. A brief summary:

+--------------------+----------------------------------------+---------+
| Helm variable      | Description                            | Default |
+====================+========================================+=========+
| ``cni.customConf`` | Disable CNI configuration management   | false   |
+--------------------+----------------------------------------+---------+
| ``cni.exclusive``  | Remove other CNI configuration files   | true    |
+--------------------+----------------------------------------+---------+
| ``cni.install``    | Install CNI configuration and binaries | true    |
+--------------------+----------------------------------------+---------+

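For example, to let another CNI plugin's configuration files coexist with
Cilium's, ``cni.exclusive`` can be disabled. A sketch, assuming an existing
Helm release named ``cilium`` in the ``kube-system`` namespace:

.. parsed-literal::

   helm upgrade cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --reuse-values \\
     --set cni.exclusive=false
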

If you want to provide your own custom CNI configuration file, you can do
so by passing a path to a CNI template file, either on disk or provided
via a ConfigMap. The Helm options that configure this are:

+----------------------+----------------------------------------------------------------+
| Helm variable        | Description                                                    |
+======================+================================================================+
| ``cni.readCniConf``  | Path (inside the agent) to a source CNI configuration file     |
+----------------------+----------------------------------------------------------------+
| ``cni.configMap``    | Name of a ConfigMap containing a source CNI configuration file |
+----------------------+----------------------------------------------------------------+
| ``cni.configMapKey`` | Key in the ConfigMap containing the CNI configuration file     |
+----------------------+----------------------------------------------------------------+

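As a sketch, a custom CNI configuration could be supplied from a ConfigMap as
follows. The ConfigMap name ``cni-configuration`` and the local file name are
illustrative, and the key ``cni-config`` is assumed here to be the chart's
default ``cni.configMapKey``:

.. parsed-literal::

   kubectl -n kube-system create configmap cni-configuration \\
     --from-file=cni-config=my-chaining.conflist
   helm install cilium |CHART_RELEASE| \\
     --namespace kube-system \\
     --set cni.configMap=cni-configuration
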
These Helm variables are converted to a smaller set of Cilium ConfigMap keys:

+-------------------------------+--------------------------------------------------------+
| ConfigMap key                 | Description                                            |
+===============================+========================================================+
| ``write-cni-conf-when-ready`` | Path to write the CNI configuration file               |
+-------------------------------+--------------------------------------------------------+
| ``read-cni-conf``             | Path to read the source CNI configuration file         |
+-------------------------------+--------------------------------------------------------+
| ``cni-exclusive``             | Whether or not to remove other CNI configuration files |
+-------------------------------+--------------------------------------------------------+

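To see how the Helm values were rendered for a given installation, you can
inspect the agent ConfigMap, for example:

.. code-block:: shell-session

    $ kubectl -n kube-system get configmap cilium-config -o yaml | grep cni
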

CRD Validation
==============

Custom Resource Validation was introduced in Kubernetes version ``1.8.0``.
It is still considered an alpha feature in Kubernetes ``1.8.0`` and beta in
Kubernetes ``1.9.0``.

Since Cilium ``v1.0.0-rc3``, Cilium will create, or update if it already
exists, the Cilium Network Policy (CNP) Resource Definition with the embedded
validation schema. This allows CiliumNetworkPolicy resources to be validated
by the kube-apiserver when the policy is imported, providing direct feedback
to the user when importing the resource.

To enable this feature, the flag ``--feature-gates=CustomResourceValidation=true``
must be set when starting kube-apiserver. Cilium itself will automatically make
use of this feature and no additional flag is required.

.. note:: If an invalid CNP exists before updating to Cilium
          ``v1.0.0-rc3``, which contains the validator, the kube-apiserver
          validator will prevent Cilium from updating that invalid CNP with
          Cilium node status. By checking the Cilium logs for ``unable to
          update CNP, retrying...``, it is possible to determine which Cilium
          Network Policies are considered invalid after updating to Cilium
          ``v1.0.0-rc3``.

To verify that the CNP resource definition contains the validation schema, run
the following command:

.. code-block:: shell-session

    $ kubectl get crd ciliumnetworkpolicies.cilium.io -o json | grep -A 12 openAPIV3Schema
            "openAPIV3Schema": {
                "oneOf": [
                    {
                        "required": [
                            "spec"
                        ]
                    },
                    {
                        "required": [
                            "specs"
                        ]
                    }
                ],

If the user writes a policy that does not conform to the schema, Kubernetes
returns an error, for example:

.. code-block:: shell-session

    cat <<EOF > ./bad-cnp.yaml
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: my-new-cilium-object
    spec:
      description: "Policy to test multiple rules in a single file"
      endpointSelector:
        matchLabels:
          app: details
          track: stable
          version: v1
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: reviews
            track: stable
            version: v1
        toPorts:
        - ports:
          - port: '65536'
            protocol: TCP
          rules:
            http:
            - method: GET
              path: "/health"
    EOF

    kubectl create -f ./bad-cnp.yaml
    ...
    spec.ingress.toPorts.ports.port in body should match '^(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[0-9]{1,4})$'

In this case, the policy has a port outside the 0-65535 range.

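A minimal fix is to bring the port back into the valid range. A sketch of the
corrected ``toPorts`` section:

.. code-block:: yaml

    toPorts:
    - ports:
      - port: '8080'   # any value in the 0-65535 range passes the schema check
        protocol: TCP
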
.. _bpffs_systemd:

Mounting BPFFS with systemd
===========================

Due to how systemd `mounts
<https://unix.stackexchange.com/questions/283442/systemd-mount-fails-where-setting-doesnt-match-unit-name>`__
filesystems, the mount point path must be reflected in the unit filename.

.. code-block:: shell-session

        cat <<EOF | sudo tee /etc/systemd/system/sys-fs-bpf.mount
        [Unit]
        Description=Cilium BPF mounts
        Documentation=https://docs.cilium.io/
        DefaultDependencies=no
        Before=local-fs.target umount.target
        After=swap.target

        [Mount]
        What=bpffs
        Where=/sys/fs/bpf
        Type=bpf
        Options=rw,nosuid,nodev,noexec,relatime,mode=700

        [Install]
        WantedBy=multi-user.target
        EOF

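After writing the unit file, reload systemd, enable the unit so the mount
persists across reboots, and verify that the filesystem is mounted:

.. code-block:: shell-session

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable --now sys-fs-bpf.mount
    $ mount | grep /sys/fs/bpf
    bpffs on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
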
Container Runtimes
==================

.. _crio-instructions:

CRI-O
-----

If you want to use CRI-O, use the instructions below.

.. include:: ../../installation/k8s-install-download-release.rst

.. note::

   The Helm flag ``--set bpf.autoMount.enabled=false`` might not be
   required for your setup. For more info see :ref:`crio-known-issues`.

.. parsed-literal::

   helm install cilium |CHART_RELEASE| \\
     --namespace kube-system

Since CRI-O does not automatically detect that a new CNI plugin has been
installed, you will need to restart the CRI-O daemon for it to pick up the
Cilium CNI configuration.

First, make sure Cilium is running:

.. code-block:: shell-session

    $ kubectl get pods -n kube-system -o wide
    NAME               READY     STATUS    RESTARTS   AGE       IP          NODE
    cilium-mqtdz       1/1       Running   0          3m        10.0.2.15   minikube

After that, you can restart CRI-O:

.. code-block:: shell-session

    minikube ssh -- sudo systemctl restart crio

.. _crio-known-issues:

Common CRI-O issues
-------------------

Some CRI-O environments automatically mount the BPF filesystem in the pods,
which is something that Cilium avoids doing when
``--set bpf.autoMount.enabled=false`` is set. However, some
CRI-O environments do not mount the BPF filesystem automatically, which causes
Cilium to print the following message::

        level=warning msg="BPF system config check: NOT OK." error="CONFIG_BPF kernel parameter is required" subsys=linux-datapath
        level=warning msg="================================= WARNING ==========================================" subsys=bpf
        level=warning msg="BPF filesystem is not mounted. This will lead to network disruption when Cilium pods" subsys=bpf
        level=warning msg="are restarted. Ensure that the BPF filesystem is mounted in the host." subsys=bpf
        level=warning msg="https://docs.cilium.io/en/stable/operations/system_requirements/#mounted-ebpf-filesystem" subsys=bpf
        level=warning msg="====================================================================================" subsys=bpf
        level=info msg="Mounting BPF filesystem at /sys/fs/bpf" subsys=bpf

If you see this warning in the Cilium pod logs with your CRI-O environment,
please remove the flag ``--set bpf.autoMount.enabled=false`` from
your Helm setup and redeploy Cilium.