
     1  
     2  .. only:: not (epub or latex or html)
     3  
     4      WARNING: You are looking at unreleased Cilium documentation.
     5      Please use the official rendered version released here:
     6      https://docs.cilium.io
     7  
     8  .. _dev_env:
     9  
    10  Development Setup
    11  =================
    12  
    13  This page provides an overview of different methods for efficient
    14  development on Cilium. Depending on your needs, you can choose the most
    15  suitable method.
    16  
    17  Quick Start
    18  -----------
    19  
    20  If you're in a hurry, here are the essential steps to get started:
    21  
    22  On Linux:
    23  
    24  1. ``make kind`` - Provisions a Kind cluster.
    25  2. ``make kind-install-cilium-fast`` - Installs Cilium on the Kind cluster.
    26  3. ``make kind-image-fast`` - Builds Cilium and deploys it.
    27  
    28  On any OS:
    29  
    30  1. ``make kind`` - Provisions a Kind cluster.
    31  2. ``make kind-image`` - Builds Docker images.
    32  3. ``make kind-install-cilium`` - Installs Cilium on the Kind cluster.
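
Put together, the OS-agnostic workflow looks like this (a sketch that simply runs
the targets listed above in order):

.. code-block:: shell-session

    $ make kind                 # provision a Kind cluster
    $ make kind-image           # build the Docker images
    $ make kind-install-cilium  # install Cilium on the Kind cluster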
    33  
    34  Detailed Instructions
    35  ---------------------
    36  
    37  Depending on your specific development environment and requirements, you
    38  can follow the detailed instructions below.
    39  
    40  Verifying Your Development Setup
    41  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    42  
    43  Assuming you have Go installed, you can quickly verify many elements of your
    44  development setup by running the following command:
    45  
    46  .. code-block:: shell-session
    47  
    48      $ make dev-doctor
    49  
Depending on your end goal, not all of the listed dependencies are required to
develop on Cilium. For example, Ginkgo is not needed if you only want to improve
the documentation, so do not assume that you must install every tool.
    53  
    54  Version Requirements
    55  ~~~~~~~~~~~~~~~~~~~~
    56  
If you use these tools, you need the following versions of them in order to
contribute to Cilium effectively:
    59  
    60  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    61  | Dependency                                                        | Version / Commit ID          | Download Command                                                |
    62  +===================================================================+==============================+=================================================================+
    63  |  git                                                              | latest                       | N/A (OS-specific)                                               |
    64  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    65  |  clang                                                            | >= 17.0 (latest recommended) | N/A (OS-specific)                                               |
    66  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    67  |  llvm                                                             | >= 17.0 (latest recommended) | N/A (OS-specific)                                               |
    68  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    69  | `go <https://golang.org/dl/>`_                                    | |GO_RELEASE|                 | N/A (OS-specific)                                               |
    70  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    71  + `ginkgo <https://github.com/onsi/ginkgo>`__                       | >= 1.4.0 and < 2.0.0         | ``go install github.com/onsi/ginkgo/ginkgo@v1.16.5``            |
    72  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    73  + `golangci-lint <https://github.com/golangci/golangci-lint>`_      | >= v1.27                     | N/A (OS-specific)                                               |
    74  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    75  + `Docker <https://docs.docker.com/engine/installation/>`_          | OS-Dependent                 | N/A (OS-specific)                                               |
    76  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    77  + `Docker-Compose <https://docs.docker.com/compose/install/>`_      | OS-Dependent                 | N/A (OS-specific)                                               |
    78  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    79  + python3-pip                                                       | latest                       | N/A (OS-specific)                                               |
    80  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    81  + `helm <https://helm.sh/docs/intro/install/>`_                     | >= v3.13.0                   | N/A (OS-specific)                                               |
    82  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    83  + `kind <https://kind.sigs.k8s.io/docs/user/quick-start/>`__        | >= v0.7.0                    | ``go install sigs.k8s.io/kind@v0.19.0``                         |
    84  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    85  + `kubectl <https://kubernetes.io/docs/tasks/tools/#kubectl>`_      | >= v1.26.0                   | N/A (OS-specific)                                               |
    86  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    87  + `cilium-cli <https://github.com/cilium/cilium-cli#installation>`_ | Cilium-Dependent             | N/A (OS-specific)                                               |
    88  +-------------------------------------------------------------------+------------------------------+-----------------------------------------------------------------+
    89  
For :ref:`integration_testing`, you will need to be able to run ``docker`` without
root privileges. You can usually achieve this by adding your current user to the
``docker`` group, for example as shown below.
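
A common way to do this (an example; the exact commands depend on your
distribution, and you may need to log out and back in for the change to apply):

.. code-block:: shell-session

    $ sudo usermod -aG docker $USER   # add the current user to the docker group
    $ newgrp docker                   # pick up the new group in the current shell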
    93  
    94  Finally, in order to run Cilium locally on VMs, you need:
    95  
    96  +------------------------------------------------------------+-----------------------+--------------------------------------------------------------------------------+
    97  | Dependency                                                 | Version / Commit ID   | Download Command                                                               |
    98  +============================================================+=======================+================================================================================+
    99  | `Vagrant <https://www.vagrantup.com/downloads>`_           | >= 2.0                | `Vagrant Install Instructions <https://www.vagrantup.com/docs/installation>`_  |
   100  +------------------------------------------------------------+-----------------------+--------------------------------------------------------------------------------+
   101  | `VirtualBox <https://www.virtualbox.org/wiki/Downloads>`_  | >= 5.2                | N/A (OS-specific)                                                              |
   102  +------------------------------------------------------------+-----------------------+--------------------------------------------------------------------------------+
   103  
   104  Kind-based Setup (preferred)
   105  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   106  
   107  You can find the setup for a `kind <https://kind.sigs.k8s.io/>`_ environment in
``contrib/scripts/kind.sh``. This setup doesn't require VMs or VirtualBox on
Linux, but it does require `Docker for Mac
<https://docs.docker.com/desktop/install/mac-install/>`_ on macOS.
   111  
   112  Makefile targets automate the task of spinning up an environment:
   113  
   114  * ``make kind``: Creates a kind cluster based on the configuration passed in.
  For more information, see :ref:`configurations_for_clusters`.
   116  * ``make kind-down``: Tears down and deletes the cluster.
   117  
Depending on your environment, you can build Cilium by using the following
Makefile targets:
   120  
   121  For Linux and Mac OS
   122  ^^^^^^^^^^^^^^^^^^^^
   123  
   124  Makefile targets automate building and installing Cilium images:
   125  
   126  * ``make kind-image``: Builds all Cilium images and loads them into the
   127    cluster.
   128  * ``make kind-image-agent``: Builds only the Cilium Agent image and loads it
   129    into the cluster.
   130  * ``make kind-image-operator``: Builds only the Cilium Operator (generic) image
   131    and loads it into the cluster.
* ``make kind-debug``: Builds all Cilium images with optimizations disabled and
  ``dlv`` embedded for live debugging, and loads the images into the
  cluster.
   135  * ``make kind-debug-agent``: Like ``kind-debug``, but for the agent image only.
   136    Use if only the agent image needs to be rebuilt for faster iteration.
   137  * ``make kind-install-cilium``: Installs Cilium into the cluster using the
   138    Cilium CLI.
   139  
The preceding list includes the most commonly used targets for **convenience**. For
more targets, see the ``Makefile`` (or simply run ``make help``).
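
For instance, assuming ``make help`` prints one line per target, you can filter
for the kind-related targets:

.. code-block:: shell-session

    $ make help | grep kind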
   142  
   143  For Linux only - with shorter development workflow time
   144  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   145  
On Linux, or on any environment where you can compile and run Cilium, it is
possible to use "fast" targets. These targets build Cilium in the local
environment and mount the resulting binary, as well as the BPF source code, into
an already-running Cilium container.
   150  
   151  * ``make kind-install-cilium-fast``: Installs Cilium into the cluster using the
   152    Cilium CLI with the volume mounts defined.
   153  
* ``make kind-image-fast``: Builds all Cilium binaries and loads them into all
  kind clusters available on the host.
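
A typical fast iteration loop might then look like this (a sketch using only the
targets described above):

.. code-block:: shell-session

    $ make kind                       # provision the Kind cluster once
    $ make kind-install-cilium-fast   # install Cilium with the volume mounts
    $ # ... edit Go or bpf code ...
    $ make kind-image-fast            # rebuild the binaries and load them into the cluster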
   156  
   157  Configuration for Cilium
   158  ^^^^^^^^^^^^^^^^^^^^^^^^
   159  
   160  The Makefile targets that install Cilium pass the following list of Helm
   161  values (YAML files) to the Cilium CLI.
   162  
   163  * ``contrib/testing/kind-common.yaml``: Shared between normal and fast installation modes.
   164  * ``contrib/testing/kind-values.yaml``: Used by normal installation mode.
   165  * ``contrib/testing/kind-fast.yaml``: Used by fast installation mode.
* ``contrib/testing/kind-custom.yaml``: User-defined custom values that are applied if
  the file is present (see the example below). The file is ignored by Git as specified
  in ``contrib/testing/.gitignore``.
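
For example, to override values only for your local clusters, you could create
``contrib/testing/kind-custom.yaml`` with any Helm values you need. The snippet
below enables debug logging, assuming the standard ``debug.enabled`` Helm value:

.. code-block:: shell-session

    $ cat > contrib/testing/kind-custom.yaml <<EOF
    debug:
      enabled: true
    EOF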
   168  
   169  .. _configurations_for_clusters:
   170  
   171  Configuration for clusters
   172  ^^^^^^^^^^^^^^^^^^^^^^^^^^
   173  
   174  ``make kind`` takes a few environment variables to modify the configuration of
   175  the clusters it creates. The following parameters are the most commonly used:
   176  
   177  * ``CONTROLPLANES``: How many control-plane nodes are created.
   178  * ``WORKERS``: How many worker nodes are created.
   179  * ``CLUSTER_NAME``: The name of the Kubernetes cluster.
   180  * ``IMAGE``: The image for kind, for example: ``kindest/node:v1.11.10``.
* ``KUBEPROXY_MODE``: Passed directly as ``kubeProxyMode`` to the kind cluster
  configuration.
   183  
   184  For more environment variables, see ``contrib/scripts/kind.sh``.
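
For example, to create a cluster with one control-plane node, two workers, and a
custom name (the cluster name here is only an illustration):

.. code-block:: shell-session

    $ CONTROLPLANES=1 WORKERS=2 CLUSTER_NAME=cilium-dev make kind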
   185  
   186  Vagrant Setup
   187  ~~~~~~~~~~~~~
   188  
The setup for the Vagrantfile in the root of the Cilium tree depends on a
number of environment variables and on network setup, both of which are managed
via ``contrib/vagrant/start.sh``.
   192  
   193  Option 1 - Using the Provided Vagrantfiles
   194  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   195  
   196  To bring up a Vagrant VM with Cilium plus dependencies installed, run:
   197  
   198  .. code-block:: shell-session
   199  
   200      $ contrib/vagrant/start.sh [vm_name]
   201  
   202  This will create and run a vagrant VM based on the base box ``cilium/ubuntu``.
   203  The ``vm_name`` argument is optional and allows you to add new nodes to an
   204  existing cluster. For example, to add a net-next VM to a one-node cluster:
   205  
   206  .. code-block:: shell-session
   207  
   208      $ K8S=1 NWORKERS=1 NETNEXT=1 ./contrib/vagrant/start.sh k8s2+
   209  
Cilium Vagrantfiles look for a file named ``.devvmrc`` in the root of your
Cilium repository. This file is ignored by Git, so it does not exist by
default. If the file exists and is executable, it is executed at the
beginning of the VM bootstrap. This allows you to customize the new VM
automatically, e.g., with your personal Git configuration. You may also
want to add any local entries you need to ``/etc/hosts``, and so on.
   217  
   218  For example, you could have something like this in your ``.devvmrc``:
   219  
   220  .. code-block:: bash
   221  
   222      #!/usr/bin/env bash
   223  
   224      git config --global user.name "Firstname Lastname"
   225      git config --global user.email developer@company.com
   226  
   227      sudo tee -a /etc/hosts <<EOF
   228      192.168.99.99 nas
   229      EOF
   230  
Remember to make the script executable (``chmod +x .devvmrc``). When the
script runs successfully, the VM bootstrap shows a message like this right
after the shared folders have been set up:
   234  
   235  ::
   236  
   237      runtime: ----------------------------------------------------------------
   238      runtime: Executing .devvmrc
   239  
   240  The box is currently available for the following providers:
   241  
   242  * virtualbox
   243  
   244  Configuration Options
   245  ^^^^^^^^^^^^^^^^^^^^^
   246  
   247  The following environment variables can be set to customize the VMs
   248  brought up by vagrant:
   249  
   250  * ``NWORKERS=n``: Number of child nodes you want to start with the master,
   251    default 0.
   252  * ``RELOAD=1``: Issue a ``vagrant reload`` instead of ``vagrant up``, useful
   253    to resume halted VMs.
   254  * ``NO_PROVISION=1``: Avoid provisioning Cilium inside the VM. Supports quick
   255    restart without recompiling all of Cilium.
* ``K8S=1``: Build and install Kubernetes on the nodes. ``k8s1`` is the master
  node, which contains both the master components (etcd, kube-controller-manager,
  kube-scheduler, kube-apiserver) and the node components (kubelet, kube-proxy,
  kubectl and Cilium). When used in combination with ``NWORKERS=1``, a second
  node is created, where ``k8s2`` is a Kubernetes worker node containing kubelet,
  kube-proxy, kubectl and Cilium.
   262  * ``NETNEXT=1``: Run with net-next kernel.
   263  * ``SERVER_BOX`` and ``SERVER_VERSION``: Run with a specified vagrant
   264    box. See: ``vagrant_box_defaults.rb`` for the supported
   265    versions.
   266  * ``IPV4=1``: Run Cilium with IPv4 enabled.
   267  * ``RUNTIME=x``: Sets up the container runtime to be used inside a kubernetes
   268    cluster. Valid options are: ``containerd`` and ``crio``. If not
   269    set, it defaults to ``containerd``.
* ``VM_SET_PROXY=https://127.0.0.1:80/``: Sets the VM's ``https_proxy``.
   271  * ``INSTALL=1``: Restarts the installation of Cilium, Kubernetes, etc. Only
   272    useful when the installation was interrupted.
   273  * ``MAKECLEAN=1``: Execute ``make clean`` before building cilium in the VM.
   274  * ``NO_BUILD=1``: Does not run the "build" provision step in the VM. Assumes
   275    the developer had previously executed ``make build`` before provisioning the
   276    VM.
* ``SHARE_PARENT``: Share the parent of your Cilium directory instead. This
  requires your Cilium directory to be named ``cilium``, but it also makes
  all other files and folders in the parent directory available to the VM.
  This is useful for sharing all your Cilium repositories with the VM, for example.
   281  * ``USER_MOUNTS``: Additional mounts for the VM in a comma-separated list of
   282    mount specifications. Each mount specification can be simply a directory name
   283    relative to the home directory, or include a '=' character separating the
   284    destination mount point from the host directory. For example:
   285  
   286    * ``USER_MOUNTS=foo``
   287  
   288      * Mounts host directory ``~/foo`` as ``/home/vagrant/foo``
   289  
   290    * ``USER_MOUNTS=foo,/tmp/bar=/tmp/bar``
   291  
   292      * Mounts host directory ``~/foo`` as ``/home/vagrant/foo`` in the VM, and host
   293        directory ``/tmp/bar`` as ``/tmp/bar`` in the VM.
   294  
   295  * ``VM_MEMORY``: Memory in megabytes to configure for the VMs (default 4096).
   296  * ``VM_CPUS``: Number of CPUs to configure for the VMs (default 2).
   297  
If you want to start the VM with Cilium using ``containerd``, with Kubernetes
installed, plus one worker node, run:
   300  
   301  .. code-block:: shell-session
   302  
   303      $ RUNTIME=containerd K8S=1 NWORKERS=1 contrib/vagrant/start.sh
   304  
   305  If you want to get VM status, run:
   306  
   307  .. code-block:: shell-session
   308  
   309      $ RUNTIME=containerd K8S=1 NWORKERS=1 vagrant status
   310  
If you want to connect to the Kubernetes cluster running inside the developer VM via ``kubectl`` from your host machine, set the ``KUBECONFIG`` environment variable to include the new kubeconfig file:
   312  
   313  .. code-block:: shell-session
   314  
   315      $ export KUBECONFIG=$KUBECONFIG:${PATH_TO_CILIUM_REPO}/vagrant.kubeconfig
   316  
where ``PATH_TO_CILIUM_REPO`` is the path to your local clone of the Cilium Git repository. Also add ``127.0.0.1 k8s1`` to your hosts file, for example as shown below.
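
For example, to add the host entry and verify the connection (assuming
``KUBECONFIG`` is set as above and the cluster is running):

.. code-block:: shell-session

    $ echo "127.0.0.1 k8s1" | sudo tee -a /etc/hosts
    $ kubectl get nodes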
   318  
If you have any issues with the provided vagrant box ``cilium/ubuntu``, or need
a different box format, you may build the box yourself using the
`packer scripts <https://github.com/cilium/packer-ci-build>`_.
   322  
   323  Launch CI VMs
   324  ^^^^^^^^^^^^^
   325  
The ``test`` directory also contains a ``Vagrantfile`` that can be used to
bring up the CI VM images. It caches a Vagrant box locally (in
``test/.vagrant/``) that pre-pulls all the Docker images needed for the CI
tests. Unfortunately, some of the options differ from those of the main
Vagrantfile, for example:
   331  
- ``K8S_NODES`` determines the total number of k8s nodes, including the master;
  ``NWORKERS`` is not supported.
- ``USER_MOUNTS`` is not available.
   335  
To start a local k8s 1.18 cluster with one CI VM, run:
   337  
   338  .. code-block:: shell-session
   339  
   340      $ cd test
   341      $ K8S_VERSION=1.18 K8S_NODES=1 ./vagrant-local-start.sh
   342  
   343  This will first destroy any CI VMs you may have running on the current
   344  ``K8S_VERSION``, and then create a local Vagrant box if not already
   345  created. This can take some time.
   346  
   347  VM preloading can be turned off by exporting ``PRELOAD_VM=false``. You
   348  can run ``make clean`` in ``test`` to delete the cached vagrant box.
   349  
   350  To start the CI runtime VM locally, run:
   351  
   352  .. code-block:: shell-session
   353  
   354      $ cd test
   355      $ ./vagrant-local-start-runtime.sh
   356  
   357  The runtime VM is connected to the same private VirtualBox network as
   358  the local CI k8s nodes.
   359  
The runtime VM uses the same cached box as the k8s nodes, but instead of
starting Kubernetes it runs Cilium as a systemd service.
   362  
   363  Option 2 - Manual Installation
   364  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   365  
Alternatively, you can import the vagrant box ``cilium/ubuntu``
directly and manually install Cilium:
   368  
   369  .. code-block:: shell-session
   370  
   371          $ vagrant init cilium/ubuntu
   372          $ vagrant up
   373          $ vagrant ssh [...]
   374          $ go get github.com/cilium/cilium
   375          $ cd go/src/github.com/cilium/cilium/
   376          $ make
   377          $ sudo make install
   378          $ sudo mkdir -p /etc/sysconfig/
   379          $ sudo cp contrib/systemd/cilium.service /etc/systemd/system/
   380          $ sudo cp contrib/systemd/cilium-docker.service /etc/systemd/system/
   381          $ sudo cp contrib/systemd/cilium-consul.service /etc/systemd/system/
   382          $ sudo cp contrib/systemd/cilium  /etc/sysconfig/cilium
   383          $ sudo usermod -a -G cilium vagrant
   384          $ sudo systemctl enable cilium-docker
   385          $ sudo systemctl restart cilium-docker
   386          $ sudo systemctl enable cilium-consul
   387          $ sudo systemctl restart cilium-consul
   388          $ sudo systemctl enable cilium
   389          $ sudo systemctl restart cilium
   390  
   391  Notes
   392  ^^^^^
   393  
   394  Your Cilium tree is mapped to the VM so that you do not need to keep manually
   395  copying files between your host and the VM. Folders are by default synced
   396  automatically using `VirtualBox Shared Folders <https://www.virtualbox.org/manual/ch04.html#sharedfolders>`_
   397  with NFS. Note that your host firewall must have a variety of ports open. The
   398  Vagrantfile will inform you of the configuration of these addresses and ports
   399  to enable NFS.
   400  
   401  .. note::
   402  
   Although Oracle provides a developer preview of VirtualBox for macOS/arm64 (M1/M2)
   hosts, it does not plan to offer official ARM64 support on Mac. As of VirtualBox
   7.0.6, the developer preview does *not* work with the Cilium Vagrant setup.
   406     
   407  .. note::
   408  
   The macOS file system is case-insensitive by default, which can confuse
   Git. At the time of writing, the Cilium repository has no file names that
   would be considered to refer to the same file on a case-insensitive file
   system. Regardless, it may be useful to create a disk image with a
   case-sensitive file system for holding your Git repositories.
   415  
   416  .. note::
   417  
   VirtualBox for macOS (as of version 5.1.22) always reports the prefix
   length of host-only networks as 64. Cilium needs this prefix to be 16, and
   the startup script checks for this. The check always fails when using
   VirtualBox on macOS, but it is safe to let the startup script reset the
   prefix length to 16.
   423  
   424  .. note::
   425  
   Make sure your host NFS configuration is set up to use TCP:
   427  
   428     .. code-block:: none
   429  
   430        # cat /etc/nfs.conf
   431        ...
   432        [nfsd]
   433        # grace-time=90
   434        tcp=y
   435        # vers2=n
   436        # vers3=y
   437        ...
   438  
   439  .. note::
   440  
   441     Linux 5.18 on newer Intel CPUs which support Intel CET (11th and
   442     12th gen) has a bug that prevents the VMs from starting. If you see
   443     a stacktrace with ``kernel BUG at arch/x86/kernel/traps.c`` and
   444     ``traps: Missing ENDBR`` messages in dmesg, that means you are
   445     affected. A workaround for now is to pass ``ibt=off`` to the kernel
   446     command line.
   447  
   448  .. note::
   449  
   VirtualBox for Ubuntu desktop might have network issues after
   suspending and resuming the host OS (typically by closing and
   re-opening the laptop lid). If ``cilium-dbg status`` keeps showing
   nodes as unreachable while endpoints remain reachable, you may be
   hitting this. Run the following commands on each VM to rebuild the
   routing and neighbor entries:
   456  
   457     .. code-block:: shell-session
   458  
   459        # assume we deployed the cluster with "NWORKERS=1" and "NETNEXT=1"
   460  
   461        # fetch ipv6 addresses
   462        $ ipv6_k8s1=$(vagrant ssh k8s1+ -c 'ip -6 --br a sh enp0s9 scope global' | awk '{print $3}')
   463        $ ipv6_k8s2=$(vagrant ssh k8s2+ -c 'ip -6 --br a sh enp0s9 scope global' | awk '{print $3}')
   464  
   465        # fetch mac addresses
   466        $ mac_k8s1=$(vagrant ssh k8s1+ -c 'ip --br l sh enp0s9' | awk '{print $3}')
   467        $ mac_k8s2=$(vagrant ssh k8s2+ -c 'ip --br l sh enp0s9' | awk '{print $3}')
   468  
   469        # add route
   470        $ vagrant ssh k8s1+ -c 'ip -6 r a fd00::/16 dev enp0s9'
   471        $ vagrant ssh k8s2+ -c 'ip -6 r a fd00::/16 dev enp0s9'
   472  
   473        # add neighbor
   474        $ vagrant ssh k8s1+ -c "ip n r $ipv6_k8s2 dev enp0s9 lladdr $mac_k8s2 nud reachable"
   475        $ vagrant ssh k8s2+ -c "ip n r $ipv6_k8s1 dev enp0s9 lladdr $mac_k8s1 nud reachable"
   476  
If for some reason the provisioning script fails, you should bring the VM down before trying again:
   478  
   479  .. code-block:: shell-session
   480  
   481      $ vagrant halt
   482  
   483  Local Development in Vagrant Box
   484  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   485  
See :ref:`dev_env` for information on how to set up the development environment.
   487  
   488  When the development VM is provisioned, it builds and installs Cilium.  After
   489  the initial build and install you can do further building and testing
   490  incrementally inside the VM. ``vagrant ssh`` takes you to the Cilium source
   491  tree directory (``/home/vagrant/go/src/github.com/cilium/cilium``) by default,
   492  and the following commands assume that you are working within that directory.
   493  
   494  Build Cilium
   495  ^^^^^^^^^^^^
   496  
   497  When you make changes, the tree is automatically kept in sync via NFS.
   498  You can issue a build as follows:
   499  
   500  .. code-block:: shell-session
   501  
   502      $ make
   503  
   504  Install to dev environment
   505  ^^^^^^^^^^^^^^^^^^^^^^^^^^
   506  
After a successful build and test, you can re-install Cilium by running:
   508  
   509  .. code-block:: shell-session
   510  
   511      $ sudo -E make install
   512  
   513  Restart Cilium service
   514  ^^^^^^^^^^^^^^^^^^^^^^
   515  
   516  To run the newly installed version of Cilium, restart the service:
   517  
   518  .. code-block:: shell-session
   519  
   520      $ sudo systemctl restart cilium
   521  
You can verify the service and cilium-agent status with the following
commands, respectively:
   524  
   525  .. code-block:: shell-session
   526  
   527      $ sudo systemctl status cilium
   528      $ cilium-dbg status
   529  
   530  Simple smoke-test with HTTP policies
   531  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   532  
After the Cilium daemon has been restarted, you may want to verify that it
boots up properly and that the Envoy integration still works. To do this,
run this bash test script:
   536  
   537  .. code-block:: shell-session
   538  
   539      $ test/envoy/envoy-smoke-test.sh
   540  
   541  This test launches three docker containers (one curl client, and two
   542  httpd servers) and tests various simple network policies with
   543  them. These containers should be automatically removed when the test
   544  finishes.
   545  
   546  .. _making_changes:
   547  
   548  Making Changes
   549  --------------
   550  
   551  #. Make sure the ``main`` branch of your fork is up-to-date:
   552  
   553     .. code-block:: shell-session
   554  
   555        git fetch upstream main:main
   556  
   557  #. Create a PR branch with a descriptive name, branching from ``main``:
   558  
   559     .. code-block:: shell-session
   560  
   561        git switch -c pr/changes-to-something main
   562  
   563  #. Make the changes you want.
   564  #. Separate the changes into logical commits.
   565  
   566     #. Describe the changes in the commit messages. Focus on answering the
   567        question why the change is required and document anything that might be
   568        unexpected.
   569     #. If any description is required to understand your code changes, then
   570        those instructions should be code comments instead of statements in the
   571        commit description.
   572  
   573     .. note::
   574  
      For submitting PRs, all commits need to be signed off (``git commit -s``). See the section :ref:`dev_coo`.
   576  
   577  #. Make sure your changes meet the following criteria:
   578  
   579     #. New code is covered by :ref:`integration_testing`.
   580     #. End to end integration / runtime tests have been extended or added. If
   581        not required, mention in the commit message what existing test covers the
   582        new code.
   583     #. Follow-up commits are squashed together nicely. Commits should separate
   584        logical chunks of code and not represent a chronological list of changes.
   585  
#. Run ``git diff --check`` to catch obvious whitespace violations.
#. Run ``make`` to build your changes. This will also run ``make lint`` and error out
   on any Go linting errors. The rules are configured in ``.golangci.yaml``.
#. Run ``make -C bpf checkpatch`` to validate the coding style and commit messages
   of your changes (see the combined example after this list).
#. See :ref:`integration_testing` on how to run integration tests.
#. See :ref:`testsuite` for information on how to run the end-to-end integration
   tests.
   594  #. If you are making documentation changes, you can generate documentation files
   595     and serve them locally on ``http://localhost:9081`` by running ``make render-docs``.
   596     This make target works both inside and outside the Vagrant VM, assuming that ``docker``
   597     is running in the environment.
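
Putting the local checks together, a typical pre-submission pass could look like
this (using only the targets described in the list above):

.. code-block:: shell-session

    $ git diff --check          # whitespace violations
    $ make                      # build, also runs "make lint"
    $ make -C bpf checkpatch    # bpf coding style and commit messages
    $ make render-docs          # only needed for documentation changes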
   598  
   599  Dev Container
   600  -------------
   601  
   602  Cilium provides `Dev Container <https://code.visualstudio.com/docs/devcontainers/containers>`_ configuration for Visual Studio Code Remote Containers
and `GitHub Codespaces <https://docs.github.com/en/codespaces/setting-up-your-project-for-codespaces/introduction-to-dev-containers>`_.
   604  This allows you to use a preconfigured development environment in the cloud or locally.
   605  The container is based on the official Cilium builder image and provides all the dependencies
   606  required to build Cilium.
   607  
   608  You can also install common packages, such as kind, kubectl, and cilium-cli, with ``contrib/scripts/devcontainer-setup.sh``:
   609  
   610  .. code-block:: shell-session
   611  
   612      $ ./contrib/scripts/devcontainer-setup.sh
   613  
Package versions can be modified to fit your requirements.
This only needs to be done once, when the ``devcontainer`` is first created.
   616  
   617  .. note::
   618  
    The current Dev Container runs as root. Non-root user support requires a non-root
    user in the Cilium builder image, which is related to :gh-issue:`23217`.
   621  
   622  Update a golang version
   623  -----------------------
   624  
   625  Minor version
   626  ~~~~~~~~~~~~~
   627  
   628  Each Cilium release is tied to a specific version of Golang via an explicit constraint
   629  in our Renovate configuration.
   630  
   631  We aim to build and release all maintained Cilium branches using a Golang version
   632  that is actively supported. This needs to be balanced against the desire to avoid
   633  regressions in Golang that may impact Cilium. Golang supports two minor versions
   634  at any given time – when updating the version used by a Cilium branch, you should
   635  choose the older of the two supported versions.
   636  
   637  To update the minor version of Golang used by a release, you will first need to
   638  update the Renovate configuration found in ``.github/renovate.json5``. For each
   639  minor release, there will be a section that looks like this:
   640  
   641  .. code-block:: json
   642  
   643      {
   644        "matchPackageNames": [
   645          "docker.io/library/golang",
   646          "go"
   647        ],
   648        "allowedVersions": "<1.21",
   649        "matchBaseBranches": [
   650          "v1.14"
   651        ]
   652      }
   653  
   654  To allow Renovate to create a pull request that updates the minor Golang version,
   655  bump the ``allowedVersions`` constraint to include the desired minor version. Once
   656  this change has been merged, Renovate will create a pull request that updates the
   657  Golang version. Minor version updates may require further changes to ensure that
   658  all Cilium features are working correctly – use the CI to identify any issues that
   659  require further changes, and bring them to the attention of the Cilium maintainers
   660  in the pull request.
   661  
   662  Once the CI is passing, the PR will be merged as part of the standard version
   663  upgrade process.
   664  
   665  Patch version
   666  ~~~~~~~~~~~~~
   667  
   668  New patch versions of Golang are picked up automatically by the CI; there should
   669  normally be no need to update the version manually.
   670  
   671  Add/update a golang dependency
   672  ------------------------------
   673  
   674  Let's assume we want to add ``github.com/containernetworking/cni`` version ``v0.5.2``:
   675  
   676  .. code-block:: shell-session
   677  
   678      $ go get github.com/containernetworking/cni@v0.5.2
   679      $ go mod tidy
   680      $ go mod vendor
   681      $ git add go.mod go.sum vendor/
   682  
The first run can take a while, as it downloads all dependencies to your local
cache, but subsequent runs will be faster.
   685  
Updating k8s is a special case, which requires updating all of the k8s libraries
in a single change:
   688  
   689  .. code-block:: shell-session
   690  
   691      $ # get the tag we are updating (for example ``v0.17.3`` corresponds to k8s ``v1.17.3``)
   692      $ # open go.mod and search and replace all ``v0.17.3`` with the version
   693      $ # that we are trying to upgrade with, for example: ``v0.17.4``.
   694      $ # Close the file and run:
   695      $ go mod tidy
   696      $ go mod vendor
   697      $ make generate-k8s-api
   698      $ git add go.mod go.sum vendor/
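
One way to perform the search-and-replace step is with ``sed`` (a sketch, assuming
GNU ``sed``, the example versions above, and that only the k8s libraries use that
version string), followed by the ``go mod`` and ``make`` steps shown above:

.. code-block:: shell-session

    $ sed -i 's/v0\.17\.3/v0.17.4/g' go.mod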
   699  
   700  Add/update a cilium/kindest-node image
   701  --------------------------------------
   702  
   703  Cilium might use its own fork of kindest-node so that it can use k8s versions
   704  that have not been released by Kind maintainers yet.
   705  
Another reason for using a fork is that the base image used by kindest-node
may not have been released yet. For example, as of this writing, Cilium requires
Debian Bookworm (yet to be released), because the glibc version available in
Cilium's base Docker image is the same as the one used in the Bookworm Docker
image, which matters for testing with Go's race detector.
   711  
   712  Currently, only maintainers can publish an image on ``quay.io/cilium/kindest-node``.
However, anyone can build a kindest-node image and try it out.
   714  
   715  To build a cilium/kindest-node image, first build the base Docker image:
   716  
   717     .. code-block:: shell-session
   718  
   719      git clone https://github.com/kubernetes-sigs/kind.git
   720      cd kind
   721      make -C images/base/ quick
   722  
Take note of the resulting image tag from that command; it should be the last
tag built for the ``gcr.io/k8s-staging-kind/base`` repository in ``docker images``.
   725  
Second, change into the directory containing the Kubernetes source code that will
be used for the kindest-node image. In this example, we will build a kindest-node
image with Kubernetes version ``v1.28.3`` using the recently built base image
``gcr.io/k8s-staging-kind/base:v20231108-a9fbf702``:
   730  
   731     .. code-block:: shell-session
   732  
   733      $ # Change to k8s' source code directory.
   734      $ git clone https://github.com/kubernetes/kubernetes.git
   735      $ cd kubernetes
   736      $ tag=v1.28.3
   737      $ git fetch origin --tags
   738      $ git checkout tags/${tag}
   739      $ kind build node-image \
   740        --image=quay.io/cilium/kindest-node:${tag} \
   741        --base-image=gcr.io/k8s-staging-kind/base:v20231108-a9fbf702
   742  
Finally, publish the image to a public repository. If you are a maintainer and
have permissions to publish on ``quay.io/cilium/kindest-node``, the Renovate bot
will automatically pick up the new version and create a pull request with this
update. If you are not a maintainer, you will have to update the image manually
in Cilium's repository.
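
If you do have push permissions, publishing is a standard ``docker push`` using
the tag from the build step above:

.. code-block:: shell-session

    $ docker push quay.io/cilium/kindest-node:${tag}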
   748  
   749  Add/update a new Kubernetes version
   750  -----------------------------------
   751  
   752  Let's assume we want to add a new Kubernetes version ``v1.19.0``:
   753  
   754  #. Follow the above instructions to update the Kubernetes libraries.
   755  
#. Follow the next instructions depending on whether it is a minor update or a
   patch update.
   758  
   759  Minor version
   760  ~~~~~~~~~~~~~
   761  
#. Check whether it is possible to remove the oldest supported Kubernetes version
   from :ref:`k8scompatibility`, :ref:`k8s_requirements`, :ref:`test_matrix`,
   :ref:`running_k8s_tests`, :ref:`gsg_istio` and add the new Kubernetes
   version to those lists.
   766  
   767  #. If the minimal supported version changed, leave a note in the upgrade guide
   768     stating the minimal supported Kubernetes version.
   769  
#. If the minimal supported version changed, search the code, most likely
   under ``pkg/k8s``, for code that can be removed because it exists
   specifically for compatibility with the previous minimal supported
   Kubernetes version.
   774  
#. If the minimal supported version changed, update the field
   ``MinimalVersionConstraint`` in ``pkg/k8s/version/version.go``.
   777  
#. Sync all "``slim``" types by following the instructions in
   ``pkg/k8s/slim/README.md``. The overall goal is to update changed or
   deprecated fields from the upstream code. New functions, fields, or structs
   added upstream that are not used in Cilium can be removed.
   782  
   783  #. Make sure the workflows used on all PRs are running with the new Kubernetes
   784     version by default. Make sure the files ``contributing/testing/{ci,e2e}.rst``
   785     are up to date with these changes.
   786  
#. Update documentation files:

   - ``Documentation/contributing/testing/e2e.rst``
   - ``Documentation/network/kubernetes/compatibility.rst``
   - ``Documentation/network/kubernetes/requirements.rst``
   791  
   792  #. Update the Kubernetes version with the newer version in ``test/Vagrantfile``,
   793     ``test/test_suite_test.go`` and ``test/vagrant-local-start.sh``.
   794  
#. Add the new coredns files specific to the Kubernetes version; for ``1.19``
   this is ``test/provision/manifest/1.19``. The coredns deployment files can
   be found upstream, as mentioned in the coredns files for the previous k8s
   version. Perform a diff with the previous versions to check which changes
   are required for our CI and which changes were added upstream.
   800  
#. If necessary, update the ``coredns`` files from
   ``contrib/vagrant/deployments`` with the newer file versions from upstream.
   803  
#. Update the constraint in the function ``getK8sSupportedConstraints``, which
   lives in ``test/helpers/utils.go``, with the new Kubernetes version that
   Cilium supports. It is possible that a new ``IsCiliumV1*`` variable in that
   file is required as well.
   808  
#. Add the new version in ``test/provision/k8s_install.sh``; if it is an RC,
   install it using binaries.
   811  
   812  #. Bump the kindest/node version in all of kind's config files (for example, ``.github/kind-config*``).
   813  
   814  #. Bump the Kubernetes version in ``contrib/vagrant/scripts/helpers.bash`` and
   815     the etcd version to the latest version.
   816  
   817  #. Run ``./contrib/scripts/check-k8s-code-gen.sh``
   818  
   819  #. Run ``go mod vendor && go mod tidy``
   820  
   821  #. Run ``./contrib/scripts/check-k8s-code-gen.sh`` (again)
   822  
   823  #. Run ``make -C Documentation update-helm-values``
   824  
#. Compile the code locally to make sure the library updates didn't remove
   any code that is still in use.
   827  
   828  #. Provision a new dev VM to check if the provisioning scripts work correctly
   829     with the new k8s version.
   830  
   831  #. Run ``git add vendor/ test/provision/manifest/ Documentation/ && git commit -sam "Update k8s tests and libraries to v1.28.0-rc.0"``
   832  
   833  #. Submit all your changes into a new PR.
   834  
   835  #. Ensure that the target CI workflows are running and passing after updating
   836     the target k8s versions in the GitHub action workflows.
   837  
   838  #. Once CI is green and PR has been merged, ping the CI team again so that they
   839     update the `Cilium CI matrix`_, ``.github/maintainers-little-helper.yaml``,
   840     and GitHub required PR checks accordingly.
   841  
   842  .. _Cilium CI matrix: https://docs.google.com/spreadsheets/d/1TThkqvVZxaqLR-Ela4ZrcJ0lrTJByCqrbdCjnI32_X0
   843  
   844  Patch version
   845  ~~~~~~~~~~~~~
   846  
   847  #. Bump the Kubernetes version in ``contrib/vagrant/scripts/helpers.bash``.
   848  
   849  #. Bump the Kubernetes version in ``test/provision/k8s_install.sh``.
   850  
   851  #. Submit all your changes into a new PR.
   852  
   853  Making changes to the Helm chart
   854  --------------------------------
   855  
The Helm chart is located in the ``install/kubernetes`` directory. The
``values.yaml.tmpl`` file contains the values for the Helm chart, which are rendered into the ``values.yaml`` file.
   858  
To prepare your changes, you need to run the make targets for the chart:
   860  
   861  .. code-block:: shell-session
   862  
   863     $ make -C install/kubernetes
   864  
This performs all the needed steps in one command. Your change to the Helm chart is now ready to be submitted!
   866  
   867  You can also run them one by one using the individual targets below.
   868  
When updating or adding a value, it can be synced to the ``values.yaml`` file by running the following command:
   870  
   871  .. code-block:: shell-session
   872  
   873     $ make -C install/kubernetes cilium/values.yaml
   874  
Before submitting the changes, the ``README.md`` file needs to be updated. This can be done using the ``docs`` target:
   876  
   877  .. code-block:: shell-session
   878  
   879     $ make -C install/kubernetes docs
   880  
Finally, you might want to check the chart using the ``lint`` target:
   882  
   883  .. code-block:: shell-session
   884  
   885     $ make -C install/kubernetes lint
   886  
   887  
   888  Optional: Docker and IPv6
   889  -------------------------
   890  
Note that these instructions are only relevant if you want your Docker
containers to have IPv6 addresses.

If so, you will need to follow these steps:
   895  
   896  1) Edit ``/etc/docker/daemon.json`` and set the ``ipv6`` key to ``true``.
   897  
   898     .. code-block:: json
   899  
   900        {
   901          "ipv6": true
   902        }
   903  
   904  
   905     If that doesn't work alone, try assigning a fixed range. Many people have
   906     reported trouble with IPv6 and Docker. `Source here.
   907     <https://github.com/moby/moby/issues/29443#issuecomment-495808871>`_
   908  
   909     .. code-block:: json
   910  
   911        {
   912          "ipv6": true,
   913          "fixed-cidr-v6": "2001:db8:1::/64"
   914        }
   915  
   916  
   917     And then:
   918  
   919     .. code-block:: shell-session
   920  
   921      ip -6 route add 2001:db8:1::/64 dev docker0
   922      sysctl net.ipv6.conf.default.forwarding=1
   923      sysctl net.ipv6.conf.all.forwarding=1
   924  
   925  
   926  2) Restart the docker daemon to pick up the new configuration.
   927  
3) Create a network managed by Cilium with the following command:
   929  
   930     .. code-block:: shell-session
   931  
   932        $ docker network create --ipv6 --driver cilium --ipam-driver cilium cilium-net
   933  
   934  
   935  Now new containers will have an IPv6 address assigned to them.
   936  
   937  Debugging
   938  ---------
   939  
   940  Datapath code
   941  ~~~~~~~~~~~~~
   942  
   943  The tool ``cilium-dbg monitor`` can also be used to retrieve debugging information
   944  from the eBPF based datapath. To enable all log messages:
   945  
   946  - Start the ``cilium-agent`` with ``--debug-verbose=datapath``, or
   947  - Run ``cilium-dbg config debug=true debugLB=true`` from an already running agent.
   948  
   949  These options enable logging functions in the datapath: ``cilium_dbg()``,
   950  ``cilium_dbg_lb()`` and ``printk()``.
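
For example, to enable all datapath debug messages on an already running agent and
watch them (using the commands described above):

.. code-block:: shell-session

    $ cilium-dbg config debug=true debugLB=true   # enable DEBUG and DEBUG_LB
    $ cilium-dbg monitor -v                       # debug messages appear in the monitor output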
   951  
   952  .. note::
   953  
   The ``printk()`` logging function is used by the developer to debug the datapath
   outside of ``cilium-dbg monitor``. In this case, ``bpftool prog tracelog`` can be
   used to retrieve debugging information from the eBPF based datapath. Both the
   ``cilium_dbg()`` and ``printk()`` functions are available from the ``bpf/lib/dbg.h``
   header file.
   958  
   959  The image below shows the options that could be used as startup options by
   960  ``cilium-agent`` (see upper blue box) or could be changed at runtime by running
   961  ``cilium-dbg config <option(s)>`` for an already running agent (see lower blue box).
Along with each option, there are one or more logging functions associated with
it: ``cilium_dbg()`` and ``printk()`` for ``DEBUG``, and ``cilium_dbg_lb()`` for
``DEBUG_LB``.
   965  
   966  .. image:: _static/cilium-debug-datapath-options.svg
   967    :align: center
   968    :alt: Cilium debug datapath options
   969  
   970  .. note::
   971  
   If you need to enable ``DEBUG_LB`` for an already running agent by running
   ``cilium-dbg config debugLB=true``, you must pass the option ``debug=true`` along.
   974  
   975  Debugging of an individual endpoint can be enabled by running
   976  ``cilium-dbg endpoint config ID debug=true``. Running ``cilium-dbg monitor -v`` will
   977  print the normal form of monitor output along with debug messages:
   978  
   979  .. code-block:: shell-session
   980  
   981     $ cilium-dbg endpoint config 731 debug=true
   982     Endpoint 731 configuration updated successfully
   983     $ cilium-dbg monitor -v
   984     Press Ctrl-C to quit
   985     level=info msg="Initializing dissection cache..." subsys=monitor
   986     <- endpoint 745 flow 0x6851276 identity 4->0 state new ifindex 0 orig-ip 0.0.0.0: 8e:3c:a3:67:cc:1e -> 16:f9:cd:dc:87:e5 ARP
   987     -> lxc_health: 16:f9:cd:dc:87:e5 -> 8e:3c:a3:67:cc:1e ARP
   988     CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Inheriting identity=1 from stack
   989     <- host flow 0xbbe3d555 identity 1->0 state new ifindex 0 orig-ip 0.0.0.0: 10.11.251.76:57896 -> 10.11.166.21:4240 tcp ACK
   990     CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Successfully mapped addr=10.11.251.76 to identity=1
   991     CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Attempting local delivery for container id 745 from seclabel 1
   992     CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: Conntrack lookup 1/2: src=10.11.251.76:57896 dst=10.11.166.21:4240
   993     CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: Conntrack lookup 2/2: nexthdr=6 flags=0
   994     CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: CT entry found lifetime=21925, revnat=0
   995     CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: CT verdict: Established, revnat=0
   996     -> endpoint 745 flow 0xbbe3d555 identity 1->4 state established ifindex lxc_health orig-ip 10.11.251.76: 10.11.251.76:57896 -> 10.11.166.21:4240 tcp ACK
   997  
Passing ``-v -v`` provides even deeper detail, for example:
   999  
  1000  .. code-block:: shell-session
  1001  
  1002      $ cilium-dbg endpoint config 3978 debug=true
  1003      Endpoint 3978 configuration updated successfully
  1004      $ cilium-dbg monitor -v -v --hex
  1005      Listening for events on 2 CPUs with 64x4096 of shared memory
  1006      Press Ctrl-C to quit
  1007      ------------------------------------------------------------------------------
  1008      CPU 00: MARK 0x1c56d86c FROM 3978 DEBUG: 70 bytes Incoming packet from container ifindex 85
  1009      00000000  33 33 00 00 00 02 ae 45  75 73 11 04 86 dd 60 00  |33.....Eus....`.|
  1010      00000010  00 00 00 10 3a ff fe 80  00 00 00 00 00 00 ac 45  |....:..........E|
  1011      00000020  75 ff fe 73 11 04 ff 02  00 00 00 00 00 00 00 00  |u..s............|
  1012      00000030  00 00 00 00 00 02 85 00  15 b4 00 00 00 00 01 01  |................|
  1013      00000040  ae 45 75 73 11 04 00 00  00 00 00 00              |.Eus........|
  1014      CPU 00: MARK 0x1c56d86c FROM 3978 DEBUG: Handling ICMPv6 type=133
  1015      ------------------------------------------------------------------------------
  1016      CPU 00: MARK 0x1c56d86c FROM 3978 Packet dropped 131 (Invalid destination mac) 70 bytes ifindex=0 284->0
  1017      00000000  33 33 00 00 00 02 ae 45  75 73 11 04 86 dd 60 00  |33.....Eus....`.|
  1018      00000010  00 00 00 10 3a ff fe 80  00 00 00 00 00 00 ac 45  |....:..........E|
  1019      00000020  75 ff fe 73 11 04 ff 02  00 00 00 00 00 00 00 00  |u..s............|
  1020      00000030  00 00 00 00 00 02 85 00  15 b4 00 00 00 00 01 01  |................|
  1021      00000040  00 00 00 00                                       |....|
  1022      ------------------------------------------------------------------------------
  1023      CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: 86 bytes Incoming packet from container ifindex 85
  1024      00000000  33 33 ff 00 8a d6 ae 45  75 73 11 04 86 dd 60 00  |33.....Eus....`.|
  1025      00000010  00 00 00 20 3a ff fe 80  00 00 00 00 00 00 ac 45  |... :..........E|
  1026      00000020  75 ff fe 73 11 04 ff 02  00 00 00 00 00 00 00 00  |u..s............|
  1027      00000030  00 01 ff 00 8a d6 87 00  20 40 00 00 00 00 fd 02  |........ @......|
  1028      00000040  00 00 00 00 00 00 c0 a8  21 0b 00 00 8a d6 01 01  |........!.......|
  1029      00000050  ae 45 75 73 11 04 00 00  00 00 00 00              |.Eus........|
  1030      CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: Handling ICMPv6 type=135
  1031      CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: ICMPv6 neighbour soliciation for address b21a8c0:d68a0000
  1032  
  1033  
  1034  One of the most common issues when developing datapath code is that the eBPF
  1035  code cannot be loaded into the kernel. This frequently manifests as the
  1036  endpoints appearing in the "not-ready" state and never switching out of it:
  1037  
  1038  .. code-block:: shell-session
  1039  
  1040      $ cilium-dbg endpoint list
  1041      ENDPOINT   POLICY        IDENTITY   LABELS (source:key[=value])   IPv6                     IPv4            STATUS
  1042                 ENFORCEMENT
  1043      48896      Disabled      266        container:id.server           fd02::c0a8:210b:0:bf00   10.11.13.37     not-ready
  1044      60670      Disabled      267        container:id.client           fd02::c0a8:210b:0:ecfe   10.11.167.158   not-ready
  1045  
  1046  Running ``cilium-dbg endpoint get`` for one of the endpoints will provide a
  1047  description of known state about it, which includes eBPF verification logs.
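
For example, using one of the endpoint IDs from the listing above:

.. code-block:: shell-session

    $ cilium-dbg endpoint get 48896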
  1048  
  1049  The files under ``/var/run/cilium/state`` provide context about how the eBPF
  1050  datapath is managed and set up. The .h files describe specific configurations
  1051  used for eBPF program compilation. The numbered directories describe
  1052  endpoint-specific state, including header configuration files and eBPF binaries.
  1053  
  1054  Current eBPF map state for particular programs is held under ``/sys/fs/bpf/``,
  1055  and the `bpf-map <https://github.com/cilium/bpf-map>`_ utility can be useful
  1056  for debugging what is going on inside them, for example:
  1057  
  1058  .. code-block:: shell-session
  1059  
  1060      # ls /sys/fs/bpf/tc/globals/
  1061      cilium_calls_15124  cilium_calls_48896        cilium_ct4_global       cilium_lb4_rr_seq       cilium_lb6_services  cilium_policy_25729  cilium_policy_60670       cilium_proxy6
  1062      cilium_calls_25729  cilium_calls_60670        cilium_ct6_global       cilium_lb4_services     cilium_lxc           cilium_policy_3978   cilium_policy_reserved_1  cilium_reserved_policy
  1063      cilium_calls_3978   cilium_calls_netdev_ns_1  cilium_events           cilium_lb6_reverse_nat  cilium_policy        cilium_policy_4314   cilium_policy_reserved_2  cilium_tunnel_map
  1064      cilium_calls_4314   cilium_calls_overlay_2    cilium_lb4_reverse_nat  cilium_lb6_rr_seq       cilium_policy_15124  cilium_policy_48896  cilium_proxy4
  1065      # bpf-map info /sys/fs/bpf/tc/globals/cilium_policy_15124
  1066      Type:           Hash
  1067      Key size:       8
  1068      Value size:     24
  1069      Max entries:    1024
  1070      Flags:          0x0
  1071      # bpf-map dump /sys/fs/bpf/tc/globals/cilium_policy_15124
  1072      Key:
  1073      00000000  6a 01 00 00 82 23 06 00                           |j....#..|
  1074      Value:
  1075      00000000  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
  1076      00000010  00 00 00 00 00 00 00 00                           |........|
  1077  
  1078