
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

.. _k8s_configuration:

*************
Configuration
*************

ConfigMap Options
-----------------

In the `ConfigMap` there are several options that can be configured according
to your preferences:

* ``debug`` - Runs Cilium in full debug mode, which enables verbose logging
  and configures BPF programs to emit more visibility events into the output
  of ``cilium monitor``.

* ``enable-ipv4`` - Enable IPv4 addressing support.

* ``enable-ipv6`` - Enable IPv6 addressing support.

* ``clean-cilium-bpf-state`` - Removes all BPF state from the filesystem on
  startup. Endpoints will be restored with the same IP addresses, but ongoing
  connections may be briefly disrupted and load-balancing decisions will be
  lost, so active connections through the load balancer will break. All BPF
  state will be reconstructed from its original sources (for example, from
  Kubernetes or the kvstore). This may be used to mitigate serious issues
  regarding BPF maps. This option should be turned off again after restarting
  the daemon.

* ``clean-cilium-state`` - Removes **all** Cilium state, including unrecoverable
  information such as all endpoint state, as well as recoverable state such as
  BPF state pinned to the filesystem, CNI configuration files, library code,
  links, routes, and other information. **This operation is irreversible**.
  Existing endpoints currently managed by Cilium may continue to operate as
  before, but Cilium will no longer manage them and they may stop working
  without warning. After using this operation, endpoints must be deleted and
  reconnected to allow the new instance of Cilium to manage them.

* ``monitor-aggregation`` - Enables coalescing of tracing events in
  ``cilium monitor`` so that only periodic updates from active flows, or
  packets that involve an L4 connection state change, are included. Valid
  options are ``none``, ``low``, ``medium``, ``maximum``.

* ``preallocate-bpf-maps`` - Pre-allocating map entries reduces per-packet
  latency, at the expense of up-front memory allocation for the entries in
  the maps. Set to ``true`` to optimize for latency. If this value is
  modified, connectivity may be temporarily disrupted for endpoints with
  active connections during the next Cilium startup.

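Most of these options can also be inspected at runtime from inside a running
agent pod. A minimal sketch, assuming the agent pods carry the usual
``k8s-app=cilium`` label:

.. code:: bash

    # Pick one Cilium pod and list the agent's effective configuration.
    CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium \
        -o jsonpath='{.items[0].metadata.name}')
    kubectl -n kube-system exec $CILIUM_POD -- cilium config
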
Any changes you make to the Cilium `ConfigMap` or to the
``cilium-etcd-secrets`` ``Secret`` will require you to restart any existing
Cilium pods in order for them to pick up the latest configuration.

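One way to do this is to delete the pods; the `DaemonSet` controller will
recreate them with the new configuration:

.. code:: bash

    # Edit the configuration, then recreate all Cilium pods.
    kubectl -n kube-system edit configmap cilium-config
    kubectl -n kube-system delete pod -l k8s-app=cilium
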
The following `ConfigMap` is an example where the etcd cluster is running on 2
nodes, ``node-1`` and ``node-2``, with TLS and client to server authentication
enabled. Note that the etcd settings are embedded as a multi-line string under
the ``etcd-config`` key, since `ConfigMap` values must be strings.

.. code:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cilium-config
      namespace: kube-system
    data:
      etcd-config: |-
        ---
        endpoints:
        - https://node-1:31079
        - https://node-2:31079
        #
        # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file'
        # line and create a kubernetes secret by following the tutorial in
        # https://cilium.link/etcd-config
        trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
        #
        # In case you want client to server authentication, uncomment the
        # following lines and create a kubernetes secret by following the
        # tutorial in https://cilium.link/etcd-config
        key-file: '/var/lib/etcd-secrets/etcd-client.key'
        cert-file: '/var/lib/etcd-secrets/etcd-client.crt'

      # If you want to run cilium in debug mode change this value to true
      debug: "false"
      enable-ipv4: "true"
      # If you want to clean cilium state, change this value to true
      clean-cilium-state: "false"

CNI
===

`CNI` (Container Network Interface) is the plugin layer used by Kubernetes to
delegate networking configuration. You can find additional information on the
`CNI` project website.

.. note:: Kubernetes ``>= 1.3.5`` requires the ``loopback`` `CNI` plugin to be
          installed on all worker nodes. The binary is typically provided by
          most Kubernetes distributions. See section :ref:`install_cni` for
          instructions on how to install `CNI` in case the ``loopback`` binary
          is not already installed on your worker nodes.

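To check whether the ``loopback`` plugin is already present on a worker node,
look for the binary in the CNI plugin directory (assuming the default
``/opt/cni/bin`` location):

.. code:: bash

    # The loopback plugin ships as a standalone binary.
    ls -l /opt/cni/bin/loopback
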
CNI configuration is taken care of automatically when deploying Cilium via the
provided `DaemonSet`. The script ``cni-install.sh`` is automatically run via
the ``postStart`` mechanism when the ``cilium`` pod is started.

.. note:: In order for the ``cni-install.sh`` script to work properly, the
          ``kubelet`` task must either be running on the host filesystem of the
          worker node, or the ``/etc/cni/net.d`` and ``/opt/cni/bin``
          directories must be mounted into the container where ``kubelet`` is
          running. This can be achieved with `Volumes` mounts.

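One way to verify that requirement, as a sketch (assuming the ``kubelet``
process is visible from the host), is to list both directories from within the
mount namespace of the ``kubelet`` process:

.. code:: bash

    # Works whether kubelet runs on the host or inside a container.
    KUBELET_PID=$(pgrep -o kubelet)
    sudo nsenter -t $KUBELET_PID -m -- ls /etc/cni/net.d /opt/cni/bin
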
The CNI auto installation is performed as follows (a verification sketch
follows the list):

1. The ``/etc/cni/net.d`` and ``/opt/cni/bin`` directories are mounted from the
   host filesystem into the pod where Cilium is running.

2. The file ``/etc/cni/net.d/05-cilium.conf`` is written in case it does not
   exist yet.

3. The binary ``cilium-cni`` is installed to ``/opt/cni/bin``. Any existing
   binary with the name ``cilium-cni`` is overwritten.

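To confirm the result on a worker node, check that both artifacts are in place:

.. code:: bash

    # Written and installed by cni-install.sh via the postStart hook.
    cat /etc/cni/net.d/05-cilium.conf
    ls -l /opt/cni/bin/cilium-cni
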
.. _install_cni:

Manually installing CNI
-----------------------

This step is typically already included in all Kubernetes distributions or
Kubernetes installers but can be performed manually:

.. code:: bash

    sudo mkdir -p /opt/cni
    wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
    sudo tar -xvf cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz -C /opt/cni
    rm cni-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz


Adjusting CNI configuration
---------------------------

The CNI configuration file is automatically written and maintained by the
scripts ``cni-install.sh`` and ``cni-uninstall.sh``, which run as ``postStart``
and ``preStop`` hooks of the Cilium pod.

If you want to provide your own custom CNI configuration file, set the
``CILIUM_CUSTOM_CNI_CONF`` environment variable so that the scripts do not
overwrite your configuration file, by adding the following to the ``env:``
section of the ``cilium`` `DaemonSet`:

.. code:: yaml

        - name: CILIUM_CUSTOM_CNI_CONF
          value: "true"

The CNI installation can be configured with environment variables. These
environment variables can be specified in the `DaemonSet` file like this:

.. code:: yaml

    env:
      - name: "CNI_CONF_NAME"
        value: "05-cilium.conf"

The following variables are supported:

+---------------------+--------------------------------------+------------------------+
| Option              | Description                          | Default                |
+---------------------+--------------------------------------+------------------------+
| HOST_PREFIX         | Path prefix of all host mounts       | /host                  |
+---------------------+--------------------------------------+------------------------+
| CNI_DIR             | Path to mounted CNI directory        | ${HOST_PREFIX}/opt/cni |
+---------------------+--------------------------------------+------------------------+
| CNI_CONF_NAME       | Name of configuration file           | 05-cilium.conf         |
+---------------------+--------------------------------------+------------------------+

If you want to further adjust the CNI configuration you may do so by creating
the CNI configuration ``/etc/cni/net.d/05-cilium.conf`` manually:

.. code:: bash

    sudo mkdir -p /etc/cni/net.d
    cat <<EOF | sudo tee /etc/cni/net.d/05-cilium.conf
    {
        "name": "cilium",
        "type": "cilium-cni"
    }
    EOF

Cilium will use an existing ``/etc/cni/net.d/05-cilium.conf`` file on a worker
node and only creates one if it does not exist yet.

CRD Validation
==============

Custom Resource Validation was introduced in Kubernetes version ``1.8.0``. It
is considered an alpha feature in Kubernetes ``1.8.0`` and a beta feature in
Kubernetes ``1.9.0``.

Since ``v1.0.0-rc3``, Cilium creates, or updates if it already exists, the
Cilium Network Policy (CNP) Custom Resource Definition with an embedded
validation schema. This allows CiliumNetworkPolicy objects to be validated by
the kube-apiserver when the policy is imported, providing direct feedback when
the resource is rejected.

To enable this feature, the flag ``--feature-gates=CustomResourceValidation=true``
must be set when starting kube-apiserver. Cilium itself will automatically make
use of this feature and no additional flag is required.

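On a kubeadm-provisioned control plane, for example, the flag lives in the
kube-apiserver static pod manifest; the path below is the kubeadm default and
may differ in your environment:

.. code:: bash

    # Check whether the feature gate is already set on the apiserver.
    grep -- '--feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml
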
.. note:: If there is an invalid CNP from before updating to Cilium
          ``v1.0.0-rc3``, which introduced the validator, the kube-apiserver
          validator will prevent Cilium from updating that invalid CNP with
          the Cilium node status. By checking the Cilium logs for ``unable to
          update CNP, retrying...``, it is possible to determine which Cilium
          Network Policies are considered invalid after updating to Cilium
          ``v1.0.0-rc3``.

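A quick way to scan for these messages across all Cilium agents:

.. code:: bash

    # Search the logs of every Cilium pod for rejected policy updates.
    kubectl -n kube-system logs -l k8s-app=cilium | grep "unable to update CNP"
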
To verify that the CNP resource definition contains the validation schema, run
the following command:

.. code:: bash

    kubectl get crd ciliumnetworkpolicies.cilium.io -o json | grep -A 12 openAPIV3Schema
            "openAPIV3Schema": {
                "oneOf": [
                    {
                        "required": [
                            "spec"
                        ]
                    },
                    {
                        "required": [
                            "specs"
                        ]
                    }
                ],

If the user writes a policy that does not conform to the schema, Kubernetes
will return an error, e.g.:

.. code:: bash

    cat <<EOF > ./bad-cnp.yaml
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    description: "Policy to test multiple rules in a single file"
    metadata:
      name: my-new-cilium-object
    spec:
      endpointSelector:
        matchLabels:
          app: details
          track: stable
          version: v1
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: reviews
            track: stable
            version: v1
        toPorts:
        - ports:
          - port: '65536'
            protocol: TCP
          rules:
            http:
            - method: GET
              path: "/health"
    EOF

    kubectl create -f ./bad-cnp.yaml
    ...
    spec.ingress.toPorts.ports.port in body should match '^(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[0-9]{1,4})$'

In this case, the policy was rejected because port ``65536`` is outside the
valid 0-65535 range.

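Changing the port to a value inside the range, e.g. ``80``, should allow the
same policy to pass validation:

.. code:: bash

    # Rewrite the offending port and re-create the policy.
    sed 's/65536/80/' ./bad-cnp.yaml > ./good-cnp.yaml
    kubectl create -f ./good-cnp.yaml
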
.. _bpffs_systemd:

Mounting BPFFS with systemd
===========================

Due to how systemd `mounts
<https://unix.stackexchange.com/questions/283442/systemd-mount-fails-where-setting-doesnt-match-unit-name>`__
filesystems, the mount point path must be reflected in the unit filename.

.. code:: bash

    cat <<EOF | sudo tee /etc/systemd/system/sys-fs-bpf.mount
    [Unit]
    Description=Cilium BPF mounts
    Documentation=http://docs.cilium.io/
    DefaultDependencies=no
    Before=local-fs.target umount.target
    After=swap.target

    [Mount]
    What=bpffs
    Where=/sys/fs/bpf
    Type=bpf

    [Install]
    WantedBy=multi-user.target
    EOF

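Once the unit file is in place, reload systemd, enable the unit so it persists
across reboots, and verify that the filesystem is mounted:

.. code:: bash

    sudo systemctl daemon-reload
    sudo systemctl enable --now sys-fs-bpf.mount
    mount | grep /sys/fs/bpf
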
Container Runtimes
==================

CRI-O
-----

If you want to use CRI-O, generate the YAML using:

.. include:: ../gettingstarted/k8s-install-download-release.rst

.. code:: bash

   helm template cilium \
     --namespace kube-system \
     --set global.containerRuntime.integration=crio \
     > cilium.yaml

Since CRI-O does not automatically detect that a new CNI plugin has been
installed, you will need to restart the CRI-O daemon for it to pick up the
Cilium CNI configuration.

First, make sure Cilium is running:

::

    kubectl get pods -n kube-system -o wide
    NAME               READY     STATUS    RESTARTS   AGE       IP          NODE
    cilium-mqtdz       1/1       Running   0          3m       10.0.2.15   minikube

After that, you can restart CRI-O:

::

    minikube ssh -- sudo systemctl restart crio

Finally, you need to restart the Cilium pod so it can re-mount
``/var/run/crio/crio.sock``, which was recreated by CRI-O:

::

    kubectl delete -n kube-system pod -l k8s-app=cilium

Disable container runtime
-------------------------

If you want to run the Cilium agent on a node that will not host any
application containers, that node may not have a container runtime installed
at all. You may still want to run the Cilium agent on the node to ensure that
local processes on that node can reach application containers on other nodes.
By default, Cilium aborts startup when no container runtime is found. To avoid
this, generate the deployment YAML with the container runtime integration
disabled:

.. code:: bash

   helm template cilium \
     --namespace kube-system \
     --set global.containerRuntime.integration=none \
     > cilium.yaml

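The generated YAML can then be deployed as usual:

.. code:: bash

    kubectl apply -f cilium.yaml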