.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    http://docs.cilium.io

******************
Component Overview
******************

.. image:: ../images/cilium-arch.png
    :align: center

A deployment of Cilium consists of the following components running on each
Linux container node in the container cluster:

* **Cilium Agent (Daemon):** Userspace daemon that interacts with the container
  runtime and with orchestration systems such as Kubernetes via plugins to set
  up networking and security for containers running on the local server. It
  also provides an API for configuring network security policies, extracting
  network visibility data, etc.

* **Cilium CLI Client:** Simple CLI client for communicating with the local
  Cilium Agent, for example, to configure network security or visibility
  policies.

* **Linux Kernel BPF:** Integrated capability of the Linux kernel to accept
  compiled bytecode that is run at various hook / trace points within the
  kernel. Cilium compiles BPF programs and has the kernel run them at key
  points in the network stack to gain visibility and control over all network
  traffic in and out of all containers.

* **Container Platform Network Plugin:** Each container platform (e.g.,
  Docker, Kubernetes) has its own plugin model for how external networking
  platforms integrate. In the case of Docker, each Linux node runs a process
  (cilium-docker) that handles each Docker libnetwork call and passes data and
  requests on to the main Cilium Agent.

In addition to these components, Cilium also depends on the following
components running in the cluster:

* **Key-Value Store:** Cilium shares data between Cilium Agents on different
  nodes via a kvstore. The currently supported key-value stores are etcd and
  consul.

* **Cilium Operator:** Daemon that handles cluster management duties which can
  be performed once per cluster, rather than once per node.

Cilium Agent
============

The Cilium agent (cilium-agent) runs on each Linux container host. At a high
level, the agent accepts configuration that describes service-level network
security and visibility policies. It then listens to events in the container
runtime to learn when containers are started or stopped, and it creates custom
BPF programs which the Linux kernel uses to control all network access in and
out of those containers. In more detail, the agent:

* Exposes APIs to allow operations / security teams to configure security
  policies (see below) that control all communication between containers in
  the cluster. These APIs also expose monitoring capabilities to gain
  additional visibility into network forwarding and filtering behavior.

* Gathers metadata about each new container that is created. In particular, it
  queries identity metadata like container / pod labels, which are used to
  identify `endpoints` in Cilium security policies.

* Interacts with the container platform's network plugin to perform IP address
  management (IPAM), which controls which IPv4 and IPv6 addresses are assigned
  to each container. IPAM is managed by the agent as a single shared pool
  across all plugins, which means that the Docker and CNI network plugins can
  run side by side while allocating from one address pool.

* Combines its knowledge about container identity and addresses with the
  already configured security and visibility policies to generate highly
  efficient BPF programs that are tailored to the network forwarding and
  security behavior appropriate for each container.

* Compiles the BPF programs to bytecode using `clang/LLVM
  <https://clang.llvm.org/>`_ and passes them to the Linux kernel to run for
  all packets in and out of the container's virtual ethernet device(s). A
  simplified sketch of this control loop follows.

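The following Go sketch illustrates the shape of that loop. It is purely
illustrative: the event type and helper steps below are hypothetical
stand-ins, not actual cilium-agent APIs.

.. code-block:: go

    // Hypothetical sketch of the agent's per-container control loop; none of
    // these types or helpers are real cilium-agent APIs.
    package main

    import "fmt"

    // containerEvent stands in for a notification from the container runtime.
    type containerEvent struct {
        started bool              // start vs. stop
        id      string            // container identifier
        labels  map[string]string // identity metadata used for policy matching
    }

    // handle mirrors the steps described above for each lifecycle event.
    func handle(ev containerEvent) {
        if !ev.started {
            fmt.Printf("%s stopped: release addresses, remove BPF programs\n", ev.id)
            return
        }
        // 1. Resolve a security identity from the container's labels.
        // 2. Allocate IPv4/IPv6 addresses from the shared IPAM pool.
        // 3. Generate and compile (clang/LLVM) a BPF program for this endpoint.
        // 4. Attach the program to the container's veth device.
        fmt.Printf("%s started with labels %v: regenerate datapath\n", ev.id, ev.labels)
    }

    func main() {
        handle(containerEvent{started: true, id: "web-1",
            labels: map[string]string{"role": "frontend"}})
    }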

Cilium CLI Client
=================

The Cilium CLI Client (cilium) is a command-line tool that is installed along
with the Cilium Agent. It provides a command-line interface for interacting
with all aspects of the Cilium Agent API. This includes inspecting Cilium's
state for each network endpoint (i.e., container), configuring and viewing
security policies, and configuring network monitoring behavior.
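
The CLI talks to the agent's REST API, which the agent serves over a local
UNIX domain socket. As a rough illustration, a client could issue the health
check shown below; the socket path ``/var/run/cilium/cilium.sock`` and the
``/v1/healthz`` endpoint reflect common Cilium defaults, but treat both as
assumptions for this sketch.

.. code-block:: go

    // Minimal sketch of querying the agent API over its UNIX socket.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Dial the agent's local UNIX socket instead of TCP.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/cilium/cilium.sock")
                },
            },
        }
        // The host in the URL is ignored; the socket carries the request.
        resp, err := client.Get("http://localhost/v1/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }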

Linux Kernel BPF
================

Berkeley Packet Filter (BPF) is a Linux kernel bytecode interpreter originally
introduced to filter network packets, e.g., for tcpdump and socket filters. It
has since been extended with additional data structures such as hash tables
and arrays, as well as additional actions to support packet mangling,
forwarding, encapsulation, etc. An in-kernel verifier ensures that BPF
programs are safe to run, and a JIT compiler converts the bytecode to CPU
architecture specific instructions for native execution efficiency. BPF
programs can be run at various hook points in the kernel, such as for incoming
packets, outgoing packets, system calls, kprobes, etc.

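To make the original use case concrete, the sketch below assembles a classic
BPF socket filter in Go using the ``golang.org/x/net/bpf`` package. The filter
accepts only IPv4 frames (EtherType 0x0800), which is the kind of program
tcpdump generates from a filter expression. Cilium itself generates eBPF
programs in C rather than classic BPF, but the assemble / verify / attach flow
is the same idea.

.. code-block:: go

    // Assemble a classic BPF socket filter that accepts IPv4 frames only.
    package main

    import (
        "fmt"

        "golang.org/x/net/bpf"
    )

    func main() {
        prog := []bpf.Instruction{
            // Load the 2-byte EtherType field at offset 12 of the frame.
            bpf.LoadAbsolute{Off: 12, Size: 2},
            // If it equals 0x0800 (IPv4), skip the "drop" instruction.
            bpf.JumpIf{Cond: bpf.JumpEqual, Val: 0x0800, SkipTrue: 1},
            // Non-IPv4: return 0 bytes, i.e. drop the packet.
            bpf.RetConstant{Val: 0},
            // IPv4: accept up to 65535 bytes of the packet.
            bpf.RetConstant{Val: 65535},
        }
        raw, err := bpf.Assemble(prog)
        if err != nil {
            panic(err)
        }
        // raw could now be attached to a socket with SO_ATTACH_FILTER.
        fmt.Printf("assembled %d raw BPF instructions\n", len(raw))
    }
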
BPF continues to evolve and gain additional capabilities with each new Linux
release. Cilium leverages BPF to perform core datapath filtering, mangling,
monitoring and redirection, and requires BPF capabilities that are present in
any Linux kernel version 4.8.0 or newer. Because 4.8.x has already been
declared end-of-life and 4.9.x has been nominated as a stable release, we
recommend running at least kernel 4.9.17 (the latest stable Linux kernel as of
this writing is 4.10.x).

Cilium is capable of probing the Linux kernel for available features and will
automatically make use of more recent features as they are detected.

Linux distros that focus on being a container runtime (e.g., CoreOS, Fedora
Atomic) typically already ship kernels that are newer than 4.8, but even
recent versions of general-purpose operating systems such as Ubuntu 16.10
ship fairly recent kernels. Some Linux distributions still ship older kernels,
but many of them allow installing recent kernels from separate kernel package
repositories.

For more detail on kernel versions, see :ref:`admin_kernel_version`.

Key-Value Store
===============

The Key-Value (KV) Store is used for the following state:

* Policy Identities: list of labels <=> policy identity identifier

* Global Services: global service ID to VIP association (optional)

* Encapsulation VTEP mapping (optional)

To simplify things in a larger deployment, the key-value store can be the same
one used by the container orchestrator (e.g., Kubernetes using etcd).
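
As a rough illustration of the identity state, the sketch below lists
identity keys from etcd with the official Go client. The key prefix
``cilium/state/identities/v1/`` and the choice of etcd as the backing store
are assumptions for this example; adjust both for your deployment.

.. code-block:: go

    // List security-identity keys from etcd; the prefix is an assumption
    // about Cilium's kvstore layout.
    package main

    import (
        "context"
        "fmt"
        "time"

        "go.etcd.io/etcd/clientv3"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"127.0.0.1:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Each entry maps a numeric identity to the labels it represents.
        resp, err := cli.Get(ctx, "cilium/state/identities/v1/", clientv3.WithPrefix())
        if err != nil {
            panic(err)
        }
        for _, kv := range resp.Kvs {
            fmt.Printf("%s => %s\n", kv.Key, kv.Value)
        }
    }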

Cilium Operator
===============

The Cilium Operator is responsible for managing duties in the cluster which
should logically be handled once for the entire cluster, rather than once for
each node in the cluster. Its design helps with scale limitations in large
Kubernetes clusters (>1000 nodes). A sketch of this once-per-cluster pattern
follows the list below. The responsibilities of the Cilium Operator include:

* Synchronizing Kubernetes services with etcd for :ref:`Cluster Mesh`

* Synchronizing node resources with etcd

* Ensuring that DNS pods are managed by Cilium

* Garbage collection of Cilium endpoint resources, of unused security
  identities in the key-value store, and of the status of deleted nodes in
  CiliumNetworkPolicy resources

* Translation of ``toGroups`` policy

* Interaction with the AWS API for managing :ref:`ipam_eni`
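
One common way to guarantee that such duties run exactly once per cluster is
Kubernetes lease-based leader election: every operator replica competes for a
lease, and only the current holder performs the work. The sketch below uses
client-go's ``leaderelection`` package to illustrate the pattern; the lease
name, namespace, and in-cluster configuration are example values, and this is
not a description of how cilium-operator itself is implemented.

.. code-block:: go

    // Illustrative once-per-cluster pattern via Kubernetes leader election.
    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes we run inside the cluster
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        hostname, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "example-operator", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // This replica holds the lease: do cluster-wide work here.
                    log.Println("leading: running once-per-cluster duties")
                },
                OnStoppedLeading: func() {
                    log.Println("lost lease: stopping cluster-wide duties")
                },
            },
        })
    }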