github.com/fafucoder/cilium@v1.6.11/Documentation/policy/language.rst

     1  .. only:: not (epub or latex or html)
     2  
     3      WARNING: You are looking at unreleased Cilium documentation.
     4      Please use the official rendered version released here:
     5      http://docs.cilium.io
     6  
     7  .. _policy_examples:
     8  
     9  Layer 3 Examples
    10  ================
    11  
    12  The layer 3 policy establishes the base connectivity rules regarding which endpoints
    13  can talk to each other. Layer 3 policies can be specified using the following methods:
    14  
    15  * `Labels based`: This is used to describe the relationship if both endpoints
    16    are managed by Cilium and are thus assigned labels. The big advantage of this
    17    method is that IP addresses are not encoded into the policies and the policy is
    18    completely decoupled from the addressing.
    19  
    20  * `Services based`: This is an intermediate form between Labels and CIDR and
    21    makes use of the services concept in the orchestration system. A good example
    22    of this is the Kubernetes concept of Service endpoints which are
    23    automatically maintained to contain all backend IP addresses of a service.
    24    This avoids hardcoding IP addresses into the policy even if the
    25    destination endpoint is not controlled by Cilium.
    26  
    27  * `Entities based`: Entities are used to describe remote peers which can be
    28    categorized without knowing their IP addresses. This includes connectivity
    29    to the local host serving the endpoints or all connectivity to outside of
    30    the cluster.
    31  
    32  * `CIDR based`: This is used to describe the relationship to or from external
    33    services if the remote peer is not an endpoint. This requires hardcoding either
    34    IP addresses or subnets into the policies. This construct should be used as a
    35    last resort as it requires stable IP or subnet assignments.
    36  
    37  * `DNS based`: Selects remote, non-cluster, peers using DNS names converted to
    38    IPs via DNS lookups. It shares all limitations of the `CIDR based` rules
    39    above. DNS information is acquired by routing DNS traffic via a proxy, or
    40    polling for listed DNS targets. DNS TTLs are respected.
    41  
    42  .. _Labels based:
    43  
    44  Labels Based
    45  ------------
    46  
    47  Label-based L3 policy is used to establish policy between endpoints inside the
    48  cluster managed by Cilium. Label-based L3 policies are defined by using an
    49  `EndpointSelector` inside a rule to choose what kind of traffic can be
    50  received (on ingress), or sent (on egress). An empty `EndpointSelector` allows
    51  all traffic. The examples below demonstrate this in further detail.
    52  
    53  .. note:: **Kubernetes:** See section :ref:`k8s_namespaces` for details on how
    54  	  the `EndpointSelector` applies in a Kubernetes environment with
    55  	  regard to namespaces.
    56  
    57  Ingress
    58  ~~~~~~~
    59  
    60  An endpoint is allowed to receive traffic from another endpoint if at least one
    61  ingress rule exists which selects the destination endpoint with the
    62  `EndpointSelector` in the ``endpointSelector`` field. To restrict traffic upon
    63  ingress to the selected endpoint, the rule selects the source endpoint with the
    64  `EndpointSelector` in the ``fromEndpoints`` field.
    65  
    66  Simple Ingress Allow
    67  ~~~~~~~~~~~~~~~~~~~~
    68  
    69  The following example illustrates how to use a simple ingress rule to allow
    70  communication from endpoints with the label ``role=frontend`` to endpoints with
    71  the label ``role=backend``.
    72  
    73  .. only:: html
    74  
    75     .. tabs::
    76       .. group-tab:: k8s YAML
    77  
    78          .. literalinclude:: ../../examples/policies/l3/simple/l3.yaml
    79       .. group-tab:: JSON
    80  
    81          .. literalinclude:: ../../examples/policies/l3/simple/l3.json
    82  
    83  .. only:: epub or latex
    84  
    85          .. literalinclude:: ../../examples/policies/l3/simple/l3.json
    86  
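
For readers of this source file, where the example is pulled in by reference, a
minimal sketch of such a rule as a ``CiliumNetworkPolicy`` could look like the
following; the policy name is illustrative:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "allow-frontend-to-backend"   # illustrative name
    spec:
      # The rule applies to endpoints labeled role=backend ...
      endpointSelector:
        matchLabels:
          role: backend
      # ... and allows ingress from endpoints labeled role=frontend.
      ingress:
      - fromEndpoints:
        - matchLabels:
            role: frontend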
    87  
    88  Ingress Allow All
    89  ~~~~~~~~~~~~~~~~~
    90  
    91  An empty `EndpointSelector` will select all endpoints; thus a rule that allows
    92  all ingress traffic to an endpoint may be written as follows:
    93  
    94  .. only:: html
    95  
    96     .. tabs::
    97       .. group-tab:: k8s YAML
    98  
    99          .. literalinclude:: ../../examples/policies/l3/ingress-allow-all/ingress-allow-all.yaml
   100       .. group-tab:: JSON
   101  
   102          .. literalinclude:: ../../examples/policies/l3/ingress-allow-all/ingress-allow-all.json
   103  
   104  .. only:: epub or latex
   105  
   106          .. literalinclude:: ../../examples/policies/l3/ingress-allow-all/ingress-allow-all.json
   107  
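
A sketch of such a rule; the ``{}`` entry in ``fromEndpoints`` is the empty
selector that matches every endpoint, and the target label is illustrative:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "ingress-allow-all"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          role: backend           # illustrative target label
      ingress:
      # An empty EndpointSelector matches all endpoints.
      - fromEndpoints:
        - {}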
   108  Note that while the above examples allow all ingress traffic to an endpoint, this does not
   109  mean that all endpoints are allowed to send traffic to this endpoint per their policies.
   110  In other words, policy must be configured on both sides (sender and receiver).
   111  
   112  Egress
   113  ~~~~~~
   114  
   115  An endpoint is allowed to send traffic to another endpoint if at least one
    116  egress rule exists which selects the source endpoint with the
   117  `EndpointSelector` in the ``endpointSelector`` field. To restrict traffic upon
   118  egress to the selected endpoint, the rule selects the destination endpoint with
   119  the `EndpointSelector` in the ``toEndpoints`` field.
   120  
   121  Simple Egress Allow
   122  ~~~~~~~~~~~~~~~~~~~~
   123  
   124  The following example illustrates how to use a simple egress rule to allow
   125  communication to endpoints with the label ``role=backend`` from endpoints with
   126  the label ``role=frontend``.
   127  
   128  .. only:: html
   129  
   130     .. tabs::
   131       .. group-tab:: k8s YAML
   132  
   133          .. literalinclude:: ../../examples/policies/l3/simple/l3_egress.yaml
   134       .. group-tab:: JSON
   135  
   136          .. literalinclude:: ../../examples/policies/l3/simple/l3_egress.json
   137  
   138  .. only:: epub or latex
   139  
   140          .. literalinclude:: ../../examples/policies/l3/simple/l3_egress.json
   141  
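
Mirroring the ingress case, a sketch of such an egress rule (policy name
illustrative) could look like:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "allow-frontend-egress-to-backend"   # illustrative name
    spec:
      # The rule applies to endpoints labeled role=frontend ...
      endpointSelector:
        matchLabels:
          role: frontend
      # ... and allows them to send traffic to endpoints labeled role=backend.
      egress:
      - toEndpoints:
        - matchLabels:
            role: backend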
   142  
   143  Egress Allow All
   144  ~~~~~~~~~~~~~~~~~
   145  
    146  An empty `EndpointSelector` will select all endpoints; thus a rule that allows
    147  all egress traffic from an endpoint may be written as follows:
   148  
   149  .. only:: html
   150  
   151     .. tabs::
   152       .. group-tab:: k8s YAML
   153  
   154          .. literalinclude:: ../../examples/policies/l3/egress-allow-all/egress-allow-all.yaml
   155       .. group-tab:: JSON
   156  
   157          .. literalinclude:: ../../examples/policies/l3/egress-allow-all/egress-allow-all.json
   158  
   159  .. only:: epub or latex
   160  
   161          .. literalinclude:: ../../examples/policies/l3/egress-allow-all/egress-allow-all.json
   162  
   163  
   164  Note that while the above examples allow all egress traffic from an endpoint, the receivers
   165  of the egress traffic may have ingress rules that deny the traffic. In other words,
   166  policy must be configured on both sides (sender and receiver).
   167  
   168  Ingress/Egress Default Deny
   169  ~~~~~~~~~~~~~~~~~~~~~~~~~~~
   170  
   171  An endpoint can be put into the default deny mode at ingress or egress if a
    172  rule selects the endpoint and contains the respective ``ingress`` or
    173  ``egress`` section.
   174  
    175  .. note:: Any rule selecting the endpoint will have this effect; this example
   176            illustrates how to put an endpoint into default deny mode without
   177            whitelisting other peers at the same time.
   178  
   179  .. only:: html
   180  
   181     .. tabs::
   182       .. group-tab:: k8s YAML
   183  
   184          .. literalinclude:: ../../examples/policies/l3/egress-default-deny/egress-default-deny.yaml
   185       .. group-tab:: JSON
   186  
   187          .. literalinclude:: ../../examples/policies/l3/egress-default-deny/egress-default-deny.json
   188  
   189  .. only:: epub or latex
   190  
   191          .. literalinclude:: ../../examples/policies/l3/egress-default-deny/egress-default-deny.json
   192  
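
One way such a rule might be shaped (a sketch; the label is illustrative) is a
policy whose ``egress`` section whitelists no peers at all:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "egress-default-deny"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          app: restricted           # illustrative label
      # The presence of an egress section puts the selected endpoints into
      # default deny at egress; the empty rule whitelists nothing.
      egress:
      - {}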
   193  Additional Label Requirements
   194  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   195  
    196  It is often required to apply the principle of *separation of concerns* when defining
    197  policies. For this reason, an additional construct exists which allows establishing
   198  base requirements for any connectivity to happen.
   199  
   200  For this purpose, the ``fromRequires`` field can be used to establish label
   201  requirements which serve as a foundation for any ``fromEndpoints``
   202  relationship.  ``fromRequires`` is a list of additional constraints which must
   203  be met in order for the selected endpoints to be reachable. These additional
   204  constraints do not grant access privileges by themselves, so to allow traffic
   205  there must also be rules which match ``fromEndpoints``. The same applies for
   206  egress policies, with ``toRequires`` and ``toEndpoints``.
   207  
    208  The purpose of this rule is to allow establishing base requirements such as: any
   209  endpoint in ``env=prod`` can only be accessed if the source endpoint also carries
   210  the label ``env=prod``.
   211  
   212  This example shows how to require every endpoint with the label ``env=prod`` to
   213  be only accessible if the source endpoint also has the label ``env=prod``.
   214  
   215  .. only:: html
   216  
   217     .. tabs::
   218       .. group-tab:: k8s YAML
   219  
   220          .. literalinclude:: ../../examples/policies/l3/requires/requires.yaml
   221       .. group-tab:: JSON
   222  
   223          .. literalinclude:: ../../examples/policies/l3/requires/requires.json
   224  
   225  .. only:: epub or latex
   226  
   227          .. literalinclude:: ../../examples/policies/l3/requires/requires.json
   228  
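
A sketch of such a rule; note that ``fromRequires`` only adds a constraint and
grants no access by itself, so separate ``fromEndpoints`` rules are still
needed to allow any traffic:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "requires-env-prod"   # illustrative name
    spec:
      # Applies to all endpoints labeled env=prod ...
      endpointSelector:
        matchLabels:
          env: prod
      ingress:
      # ... and requires any source to also carry env=prod.
      - fromRequires:
        - matchLabels:
            env: prod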
   229  .. _Services based:
   230  
   231  Services based
   232  --------------
   233  
   234  Services running in your cluster can be whitelisted in Egress rules.
   235  Currently Kubernetes `Services without a Selector
   236  <https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors>`_
   237  are supported when defined by their name and namespace or label selector.
   238  Future versions of Cilium will support specifying non-Kubernetes services
   239  and Kubernetes services which are backed by pods.
   240  
   241  This example shows how to allow all endpoints with the label ``id=app2``
    242  to talk to all endpoints of the Kubernetes service ``myservice`` in the
    243  Kubernetes namespace ``default``.
   244  
   245  .. note::
   246  
   247  	These rules will only take effect on Kubernetes services without a
   248  	selector.
   249  
   250  .. only:: html
   251  
   252     .. tabs::
   253       .. group-tab:: k8s YAML
   254  
   255          .. literalinclude:: ../../examples/policies/l3/service/service.yaml
   256       .. group-tab:: JSON
   257  
   258          .. literalinclude:: ../../examples/policies/l3/service/service.json
   259  
   260  .. only:: epub or latex
   261  
   262          .. literalinclude:: ../../examples/policies/l3/service/service.json
   263  
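
A sketch of what such a by-name service rule looks like, assuming the
``toServices`` field with a ``k8sService`` reference as described above:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "allow-to-myservice"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          id: app2
      egress:
      # Allow traffic to the backends of the named (selector-less) service.
      - toServices:
        - k8sService:
            serviceName: myservice
            namespace: default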
   264  This example shows how to allow all endpoints with the label ``id=app2``
    265  to talk to all endpoints of all Kubernetes headless services which
    266  carry the label ``head:none``.
   267  
   268  .. only:: html
   269  
   270     .. tabs::
   271       .. group-tab:: k8s YAML
   272  
   273          .. literalinclude:: ../../examples/policies/l3/service/service-labels.yaml
   274       .. group-tab:: JSON
   275  
   276          .. literalinclude:: ../../examples/policies/l3/service/service-labels.json
   277  
   278  .. only:: epub or latex
   279  
   280          .. literalinclude:: ../../examples/policies/l3/service/service-labels.json
   281  
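
A sketch of the label-selector variant, assuming the ``k8sServiceSelector``
form of ``toServices``:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "allow-to-headless-services"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          id: app2
      egress:
      # Allow traffic to all services carrying the label head=none.
      - toServices:
        - k8sServiceSelector:
            selector:
              matchLabels:
                head: none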
   282  
   283  .. _Entities based:
   284  
   285  Entities Based
   286  --------------
   287  
   288  ``fromEntities`` is used to describe the entities that can access the selected
   289  endpoints. ``toEntities`` is used to describe the entities that can be accessed
   290  by the selected endpoints.
   291  
   292  The following entities are defined:
   293  
   294  host
   295      The host entity includes all cluster nodes. This also includes all
   296      containers running in host networking mode.
   297  cluster
   298      Cluster is the logical group of all network endpoints inside of the local
   299      cluster. This includes all Cilium-managed endpoints of the local cluster.
   300      It also includes the host entity to cover host networking containers as
   301      well as the init entity to include endpoints currently being bootstrapped.
   302  init
   303      The init entity contains all endpoints in bootstrap phase for which the
   304      security identity has not been resolved yet. See section
   305      :ref:`endpoint_lifecycle` for details.
   306  world
   307      The world entity corresponds to all endpoints outside of the cluster.
   308      Allowing to world is identical to allowing to CIDR 0/0. An alternative
    309      to allowing from and to world is to define fine-grained DNS or CIDR based
   310      policies.
   311  all
    312      The all entity represents the combination of all known clusters as well
    313      as the world entity and whitelists all communication.
   314  
   315  .. versionadded:: future
   316     Allowing users to `define custom identities <https://github.com/cilium/cilium/issues/3553>`_
   317     is on the roadmap but has not been implemented yet.
   318  
   319  Access to/from local host
   320  ~~~~~~~~~~~~~~~~~~~~~~~~~
   321  
   322  Allow all endpoints with the label ``env=dev`` to access the host that is
   323  serving the particular endpoint.
   324  
   325  .. note:: Kubernetes will automatically allow all communication from and to the
   326  	  local host of all local endpoints. You can run the agent with the
    327  	  option ``--allow-localhost=policy`` to disable this behavior, which
    328  	  gives you control over it via policy.
   329  
   330  .. only:: html
   331  
   332     .. tabs::
   333       .. group-tab:: k8s YAML
   334  
   335          .. literalinclude:: ../../examples/policies/l3/entities/host.yaml
   336       .. group-tab:: JSON
   337  
   338          .. literalinclude:: ../../examples/policies/l3/entities/host.json
   339  
   340  .. only:: epub or latex
   341  
   342          .. literalinclude:: ../../examples/policies/l3/entities/host.json
   343  
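
A sketch of such a rule using the ``host`` entity (policy name illustrative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "dev-to-host"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          env: dev
      egress:
      # Allow traffic to the local host serving the endpoint.
      - toEntities:
        - host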
   344  
   345  Access to/from outside cluster
   346  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   347  
   348  This example shows how to enable access from outside of the cluster to all
   349  endpoints that have the label ``role=public``.
   350  
   351  .. only:: html
   352  
   353     .. tabs::
   354       .. group-tab:: k8s YAML
   355  
   356          .. literalinclude:: ../../examples/policies/l3/entities/world.yaml
   357       .. group-tab:: JSON
   358  
   359          .. literalinclude:: ../../examples/policies/l3/entities/world.json
   360  
   361  .. only:: epub or latex
   362  
   363          .. literalinclude:: ../../examples/policies/l3/entities/world.json
   364  
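
A sketch of such a rule using the ``world`` entity (policy name illustrative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "world-to-public"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          role: public
      ingress:
      # Allow traffic from any peer outside the cluster.
      - fromEntities:
        - world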
   365  .. _policy_cidr:
   366  .. _CIDR based:
   367  
   368  IP/CIDR based
   369  -------------
   370  
   371  CIDR policies are used to define policies to and from endpoints which are not
   372  managed by Cilium and thus do not have labels associated with them. These are
   373  typically external services, VMs or metal machines running in particular
   374  subnets. CIDR policy can also be used to limit access to external services, for
   375  example to limit external access to a particular IP range. CIDR policies can
   376  be applied at ingress or egress.
   377  
   378  CIDR rules apply if Cilium cannot map the source or destination to an identity
    379  derived from endpoint labels, i.e. the `reserved_labels`. For example, CIDR rules
   380  will apply to traffic where one side of the connection is:
   381  
    382  * A network endpoint outside the cluster.
    383  * The host network namespace where the pod is running.
    384  * An IP within the cluster prefix whose networking is not provided by Cilium.
   385  
   386  .. note::
   387  
   388     When running Cilium on Linux 4.10 or earlier, there are :ref:`cidr_limitations`.
   389  
   390  Ingress
   391  ~~~~~~~
   392  
   393  fromCIDR
   394    List of source prefixes/CIDRs that are allowed to talk to all endpoints
   395    selected by the ``endpointSelector``.
   396  
   397  fromCIDRSet
   398    List of source prefixes/CIDRs that are allowed to talk to all endpoints
   399    selected by the ``endpointSelector``, along with an optional list of
   400    prefixes/CIDRs per source prefix/CIDR that are subnets of the source
   401    prefix/CIDR from which communication is not allowed.
   402  
   403  Egress
   404  ~~~~~~
   405  
   406  toCIDR
   407    List of destination prefixes/CIDRs that endpoints selected by
   408    ``endpointSelector`` are allowed to talk to. Note that endpoints which are
   409    selected by a ``fromEndpoints`` are automatically allowed to talk to their
   410    respective destination endpoints.
   411  
   412  toCIDRSet
   413    List of destination prefixes/CIDRs that are allowed to talk to all endpoints
   414    selected by the ``endpointSelector``, along with an optional list of
   415    prefixes/CIDRs per source prefix/CIDR that are subnets of the destination
   416    prefix/CIDR to which communication is not allowed.
   417  
   418  Allow to external CIDR block
   419  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   420  
   421  This example shows how to allow all endpoints with the label ``app=myService``
   422  to talk to the external IP ``20.1.1.1``, as well as the CIDR prefix ``10.0.0.0/8``,
    423  but not the CIDR prefix ``10.96.0.0/12``.
   424  
   425  .. only:: html
   426  
   427     .. tabs::
   428       .. group-tab:: k8s YAML
   429  
   430          .. literalinclude:: ../../examples/policies/l3/cidr/cidr.yaml
   431       .. group-tab:: JSON
   432  
   433          .. literalinclude:: ../../examples/policies/l3/cidr/cidr.json
   434  
   435  .. only:: epub or latex
   436  
   437          .. literalinclude:: ../../examples/policies/l3/cidr/cidr.json
   438  
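
A sketch of such a rule, combining ``toCIDR`` for the single IP with
``toCIDRSet`` for the prefix-with-exception (policy name illustrative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "cidr-rule"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          app: myService
      egress:
      # Single IPs are expressed as /32 prefixes.
      - toCIDR:
        - 20.1.1.1/32
      # toCIDRSet allows a prefix while excluding subnets of it.
      - toCIDRSet:
        - cidr: 10.0.0.0/8
          except:
          - 10.96.0.0/12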
   439  .. _DNS based:
   440  
   441  DNS based
   442  ---------
   443  
   444  DNS policies are used to define Layer 3 policies to endpoints that are not
    445  managed by Cilium, but have DNS-queryable domain names. The IP addresses
   446  provided in DNS responses are allowed by Cilium in a similar manner to IPs in
   447  `CIDR based`_ policies. They are an alternative when the remote IPs may change
    448  or are not known a priori, or when DNS is more convenient. To enforce policy on
   449  DNS requests themselves, see `Layer 7 Examples`_.
   450  
   451  IP information is captured from DNS responses per-Endpoint via a `DNS Proxy`_
   452  or `DNS Polling`_. An L3 `CIDR based`_ rule is generated for every ``toFQDNs``
   453  rule and applies to the same endpoints. The IP information is selected for
   454  insertion by ``matchName`` or ``matchPattern`` rules, and is collected from all
   455  DNS responses seen by Cilium on the node. Multiple selectors may be included in
   456  a single egress rule. See :ref:`DNS Obtaining Data` for information on
   457  collecting this IP data.
   458  
   459  ``toFQDNs`` egress rules cannot contain any other L3 rules, such as
    460  ``toEndpoints`` (under `Labels based`_) and ``toCIDR`` (under `CIDR based`_).
   461  They may contain L4/L7 rules, such as ``toPorts`` (see `Layer 4 Examples`_)
   462  with, optionally, ``HTTP`` and ``Kafka`` sections (see `Layer 7 Examples`_).
   463  
   464  .. note:: DNS based rules are intended for external connections and behave
   465            similarly to `CIDR based`_ rules. See `Services based`_ and
   466            `Labels based`_ for cluster-internal traffic.
   467  
   468  IPs to be allowed are selected via:
   469  
   470  ``toFQDNs.matchName``
   471    Inserts IPs of domains that match ``matchName`` exactly. Multiple distinct
   472    names may be included in separate ``matchName`` entries and IPs for domains
   473    that match any ``matchName`` will be inserted.
   474  
   475  ``toFQDNs.matchPattern``
   476    Inserts IPs of domains that match the pattern in ``matchPattern``, accounting
    477    for wildcards. Patterns are composed of literal characters that are
   478    allowed in domain names: a-z, 0-9, ``.`` and ``-``.
   479  
   480    ``*`` is allowed as a wildcard with a number of convenience behaviors:
   481  
   482    * ``*`` within a domain allows 0 or more valid DNS characters, except for the
   483      ``.`` separator. ``*.cilium.io`` will match ``sub.cilium.io`` but not
   484      ``cilium.io``. ``part*ial.com`` will match ``partial.com`` and
   485      ``part-extra-ial.com``.
   486    * ``*`` alone matches all names, and inserts all cached DNS IPs into this
   487      rule.
   488  
   489  .. note:: `DNS Polling`_ will not poll ``matchPattern`` entries even if they
   490            are literal DNS names.
   491  
   492  Example
   493  ~~~~~~~
   494  
   495  .. only:: html
   496  
   497     .. tabs::
   498       .. group-tab:: k8s YAML
   499  
   500          .. literalinclude:: ../../examples/policies/l3/fqdn/fqdn.yaml
   501       .. group-tab:: JSON
   502  
   503          .. literalinclude:: ../../examples/policies/l3/fqdn/fqdn.json
   504  
   505  .. only:: epub or latex
   506  
   507          .. literalinclude:: ../../examples/policies/l3/fqdn/fqdn.json
   508  
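
A sketch of such a rule combining ``matchName`` and ``matchPattern`` selectors;
the labels and domain names are illustrative:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "fqdn-rule"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          app: test-app   # illustrative label
      egress:
      - toFQDNs:
        # Exact name match: IPs seen in DNS responses for this name are allowed.
        - matchName: "api.cilium.io"
        # Wildcard match: any direct subdomain of cilium.io, but not cilium.io itself.
        - matchPattern: "*.cilium.io"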
   509  
   510  .. _DNS and Long-Lived Connections:
   511  
   512  Managing Long-Lived Connections & Minimum DNS Cache Times
   513  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   514  Often, an application may keep a connection open for longer than the configured
   515  DNS TTL. Without further DNS queries the remote IP used in the long-lived
   516  connection may expire out of the DNS cache. When this occurs, existing
   517  connections will become disallowed by policy and will be blocked. In cases
   518  where an application retries the connection, a new DNS query is issued and the
   519  IP is added to the policy.
   520  
   521  A minimum TTL is used to ensure a lower bound to DNS data expiration, and DNS
   522  data in the Cilium DNS cache will not expire sooner than this minimum. It
   523  can be configured with the ``--tofqdns-min-ttl`` CLI option. The value is in
   524  integer seconds and must be 1 or more. The default is 1 week, or 1 hour when
   525  `DNS Polling`_ is enabled.
   526  
   527  Some care needs to be taken when setting ``--tofqdns-min-ttl`` with DNS data
   528  that returns many distinct IPs over time. A long TTL will keep each IP cached
   529  long after the related connections may have terminated. Large numbers of IPs
   530  have corresponding Security Identities and too many may slow down Cilium policy
   531  regeneration. This can be especially pronounced when using `DNS Polling`_ to
   532  obtain DNS data. In such cases a shorter minimum TTL is recommended, as
    533  `DNS Polling`_ will recover up-to-date IPs regularly.
   534  
   535  .. note:: It is recommended that ``--tofqdns-min-ttl`` be set to the minimum
   536            time a connection must be maintained.
   537  
   538  Managing Short-Lived Connections & Maximum IPs per FQDN/endpoint
   539  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   540  
    541  The minimum TTL for DNS entries in the cache is deliberately long, 1 week by
    542  default. This is done to accommodate long-lived, persistent connections. On
   543  the other end of the spectrum are workloads which perform short-lived
   544  connections in repetition to FQDNs which are backed by a large number of IP
   545  addresses (e.g. AWS S3). Such workloads can grow the number of IPs mapping to an
    546  FQDN quickly. In order to limit the number of IP addresses that map to a particular
   547  FQDN, each FQDN per endpoint has a max capacity of IPs that are being maintained
   548  (default: 50). Once the capacity is exceeded, the oldest entries are
   549  automatically expired from the cache. This capacity can be changed using the
   550  ``--tofqdns-max-ip-per-hostname`` option.
   551  
   552  
   553  
   554  .. _l4_policy:
   555  
   556  Layer 4 Examples
   557  ================
   558  
   559  Limit ingress/egress ports
   560  --------------------------
   561  
   562  Layer 4 policy can be specified in addition to layer 3 policies or independently.
   563  It restricts the ability of an endpoint to emit and/or receive packets on a
   564  particular port using a particular protocol. If no layer 4 policy is specified
   565  for an endpoint, the endpoint is allowed to send and receive on all layer 4
   566  ports and protocols including ICMP. If any layer 4 policy is specified, then
   567  ICMP will be blocked unless it's related to a connection that is otherwise
   568  allowed by the policy. Layer 4 policies apply to ports after service port
   569  mapping has been applied.
   570  
   571  Layer 4 policy can be specified at both ingress and egress using the
   572  ``toPorts`` field. The ``toPorts`` field takes a ``PortProtocol`` structure
   573  which is defined as follows:
   574  
   575  .. code-block:: go
   576  
   577          // PortProtocol specifies an L4 port with an optional transport protocol
   578          type PortProtocol struct {
   579                  // Port is an L4 port number. For now the string will be strictly
   580                  // parsed as a single uint16. In the future, this field may support
    581                  // ranges in the form "1024-2048".
   582                  Port string `json:"port"`
   583  
   584                  // Protocol is the L4 protocol. If omitted or empty, any protocol
   585                  // matches. Accepted values: "TCP", "UDP", ""/"ANY"
   586                  //
   587                  // Matching on ICMP is not supported.
   588                  //
   589                  // +optional
   590                  Protocol string `json:"protocol,omitempty"`
   591          }
   592  
   593  Example (L4)
   594  ~~~~~~~~~~~~
   595  
   596  The following rule limits all endpoints with the label ``app=myService`` to
   597  only be able to emit packets using TCP on port 80, to any layer 3 destination:
   598  
   599  .. only:: html
   600  
   601     .. tabs::
   602       .. group-tab:: k8s YAML
   603  
   604          .. literalinclude:: ../../examples/policies/l4/l4.yaml
   605       .. group-tab:: JSON
   606  
   607          .. literalinclude:: ../../examples/policies/l4/l4.json
   608  
   609  .. only:: epub or latex
   610  
   611          .. literalinclude:: ../../examples/policies/l4/l4.json
   612  
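
A sketch of how the ``PortProtocol`` structure above surfaces in a policy's
``toPorts`` field (policy name illustrative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l4-egress-rule"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          app: myService
      egress:
      - toPorts:
        - ports:
          # Port is a string; Protocol may be TCP, UDP or ANY.
          - port: "80"
            protocol: TCP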
   613  Labels-dependent Layer 4 rule
   614  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   615  
   616  This example enables all endpoints with the label ``role=frontend`` to
   617  communicate with all endpoints with the label ``role=backend``, but they must
   618  communicate using TCP on port 80. Endpoints with other labels will not be
   619  able to communicate with the endpoints with the label ``role=backend``, and
   620  endpoints with the label ``role=frontend`` will not be able to communicate with
   621  ``role=backend`` on ports other than 80.
   622  
   623  .. only:: html
   624  
   625     .. tabs::
   626       .. group-tab:: k8s YAML
   627  
   628          .. literalinclude:: ../../examples/policies/l4/l3_l4_combined.yaml
   629       .. group-tab:: JSON
   630  
   631          .. literalinclude:: ../../examples/policies/l4/l3_l4_combined.json
   632  
   633  .. only:: epub or latex
   634  
   635          .. literalinclude:: ../../examples/policies/l4/l3_l4_combined.json
   636  
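
A sketch of combining a ``fromEndpoints`` selector with ``toPorts`` in a single
ingress rule, as described above (policy name illustrative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l3-l4-rule"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          role: backend
      ingress:
      # Both conditions in the same rule: only role=frontend, only TCP/80.
      - fromEndpoints:
        - matchLabels:
            role: frontend
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP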
   637  CIDR-dependent Layer 4 Rule
   638  ~~~~~~~~~~~~~~~~~~~~~~~~~~~
   639  
   640  This example enables all endpoints with the label ``role=crawler`` to
   641  communicate with all remote destinations inside the CIDR ``192.0.2.0/24``, but
   642  they must communicate using TCP on port 80. The policy does not allow Endpoints
   643  without the label ``role=crawler`` to communicate with destinations in the CIDR
   644  ``192.0.2.0/24``. Furthermore, endpoints with the label ``role=crawler`` will
   645  not be able to communicate with destinations in the CIDR ``192.0.2.0/24`` on
   646  ports other than port 80.
   647  
   648  .. only:: html
   649  
   650     .. tabs::
   651       .. group-tab:: k8s YAML
   652  
   653          .. literalinclude:: ../../examples/policies/l4/cidr_l4_combined.yaml
   654       .. group-tab:: JSON
   655  
   656          .. literalinclude:: ../../examples/policies/l4/cidr_l4_combined.json
   657  
   658  .. only:: epub or latex
   659  
   660          .. literalinclude:: ../../examples/policies/l4/cidr_l4_combined.json
   661  
   662  
   663  
   664  .. _l7_policy:
   665  
   666  Layer 7 Examples
   667  ================
   668  
   669  Layer 7 policy rules are embedded into `l4_policy` rules and can be specified
    670  for ingress and egress. The ``L7Rules`` structure is a base type containing an
    671  enumeration of protocol-specific fields.
   672  
   673  .. code-block:: go
   674  
   675          // L7Rules is a union of port level rule types. Mixing of different port
   676          // level rule types is disallowed, so exactly one of the following must be set.
   677          // If none are specified, then no additional port level rules are applied.
   678          type L7Rules struct {
   679                  // HTTP specific rules.
   680                  //
   681                  // +optional
   682                  HTTP []PortRuleHTTP `json:"http,omitempty"`
   683  
   684                  // Kafka-specific rules.
   685                  //
   686                  // +optional
   687                  Kafka []PortRuleKafka `json:"kafka,omitempty"`
   688  
   689                  // DNS-specific rules.
   690                  //
   691                  // +optional
   692                  DNS []PortRuleDNS `json:"dns,omitempty"`
   693          }
   694  
   695  The structure is implemented as a union, i.e. only one member field can be used
   696  per port. If multiple ``toPorts`` rules with identical ``PortProtocol`` select
   697  an overlapping list of endpoints, then the layer 7 rules are combined together
   698  if they are of the same type. If the type differs, the policy is rejected.
   699  
   700  Each member consists of a list of application protocol rules. A layer 7
   701  request is permitted if at least one of the rules matches. If no rules are
   702  specified, then all traffic is permitted.
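
As a sketch, an ``L7Rules`` member is embedded in the ``rules`` field of a
``toPorts`` entry like this; the labels, port, and HTTP path are illustrative:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l7-sketch"   # illustrative name
    spec:
      endpointSelector:
        matchLabels:
          app: service    # illustrative label
      ingress:
      - toPorts:
        - ports:
          - port: "80"
            protocol: TCP
          # rules carries exactly one L7 type; http in this sketch.
          rules:
            http:
            - method: "GET"
              path: "/healthz"   # illustrative path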
   703  
   704  If a layer 4 rule is specified in the policy, and a similar layer 4 rule
   705  with layer 7 rules is also specified, then the layer 7 portions of the
   706  latter rule will have no effect.
   707  
   708  .. note:: Unlike layer 3 and layer 4 policies, violation of layer 7 rules does
   709            not result in packet drops. Instead, if possible, an application
   710            protocol specific access denied message is crafted and returned, e.g.
   711            an *HTTP 403 access denied* is sent back for HTTP requests which
   712            violate the policy, or a *DNS REFUSED* response for DNS requests.
   713  
   714  .. note:: There is currently a max limit of 40 ports with layer 7 policies per
   715            endpoint. This might change in the future when support for ranges is
   716            added.
   717  
   718  HTTP
   719  ----
   720  
   721  The following fields can be matched on:
   722  
   723  Path
   724    Path is an extended POSIX regex matched against the path of a request.
  Currently it can contain characters disallowed in the conventional "path"
  part of a URL as defined by RFC 3986. Paths must begin with a ``/``. If
  omitted or empty, all paths are allowed.
   728  
   729  Method
   730    Method is an extended POSIX regex matched against the method of a request,
   731    e.g. ``GET``, ``POST``, ``PUT``, ``PATCH``, ``DELETE``, ...  If omitted or
   732    empty, all methods are allowed.
   733  
   734  Host
   735    Host is an extended POSIX regex matched against the host header of a request,
   736    e.g. ``foo.com``. If omitted or empty, the value of the host header is
   737    ignored.
   738  
   739  Headers
   740    Headers is a list of HTTP headers which must be present in the request. If
   741    omitted or empty, requests are allowed regardless of headers present.
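
Combined into a single hedged sketch (the header name and all values are
illustrative), an ``http`` rule using these fields might look like:

.. code-block:: yaml

        rules:
          http:
          # Match GET requests to /public on host foo.com which carry the
          # illustrative header X-My-Header: true.
          - method: "GET"
            path: "/public"
            host: "foo.com"
            headers:
            - 'X-My-Header: true'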
   742  
   743  Allow GET /public
   744  ~~~~~~~~~~~~~~~~~
   745  
The following example allows ``GET`` requests to the URL ``/public`` of
endpoints with the label ``env:prod``. Requests to any other URL, or using any
other method, will be rejected. Requests on ports other than port 80 will be
dropped.
   750  
   751  .. only:: html
   752  
   753     .. tabs::
   754       .. group-tab:: k8s YAML
   755  
   756          .. literalinclude:: ../../examples/policies/l7/http/simple/l7.yaml
   757       .. group-tab:: JSON
   758  
   759          .. literalinclude:: ../../examples/policies/l7/http/simple/l7.json
   760  
   761  .. only:: epub or latex
   762  
   763          .. literalinclude:: ../../examples/policies/l7/http/simple/l7.json
   764  
   765  All GET /path1 and PUT /path2 when header set
   766  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   767  
The following example restricts all endpoints which carry the label
``app=myService`` to receiving packets only on port 80 using TCP. While
communicating on this port, the only API endpoints allowed will be ``GET
/path1`` and ``PUT /path2`` with the HTTP header ``X-My_header`` set to
``true``:
   773  
   774  .. only:: html
   775  
   776     .. tabs::
   777       .. group-tab:: k8s YAML
   778  
   779          .. literalinclude:: ../../examples/policies/l7/http/http.yaml
   780       .. group-tab:: JSON
   781  
   782          .. literalinclude:: ../../examples/policies/l7/http/http.json
   783  
   784  .. only:: epub or latex
   785  
   786          .. literalinclude:: ../../examples/policies/l7/http/http.json
   787  
   788  
   789  Kafka (beta)
   790  ------------
   791  
   792  .. note:: Kafka support is currently in beta phase.
   793  
PortRuleKafka is a list of Kafka protocol constraints. All fields are
optional; if all fields are empty or missing, the rule will match all Kafka
messages. There are two ways to specify Kafka rules: assign a high-level
"produce" or "consume" role to a topic, or specify more low-level Kafka
protocol specific apiKeys. Writing rules based on Kafka roles is easier and
covers most common use cases; if more granularity is needed, users can
instead write rules using specific apiKeys.
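
As a sketch (topic name borrowed from the examples below), a role-based rule
looks like the following; the apiKey alternative replaces ``role`` with one or
more ``apiKey`` entries such as ``apiKey: "produce"``:

.. code-block:: yaml

        rules:
          kafka:
          # "produce" expands into all API keys required to produce to the
          # listed topic.
          - role: "produce"
            topic: "empire-announce"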
   801  
   802  The following fields can be matched on:
   803  
   804  Role
   805    Role is a case-insensitive string which describes a group of API keys
   806    necessary to perform certain higher-level Kafka operations such as "produce"
   807    or "consume". A Role automatically expands into all APIKeys required
   808    to perform the specified higher-level operation.
   809    The following roles are supported:
   810  
   811      - "produce": Allow producing to the topics specified in the rule.
   812      - "consume": Allow consuming from the topics specified in the rule.
   813  
  This field is incompatible with the APIKey field, i.e. APIKey and Role
  cannot both be specified in the same rule.
   816    If omitted or empty, and if APIKey is not specified, then all keys are
   817    allowed.
   818  
   819  APIKey
   820    APIKey is a case-insensitive string matched against the key of a request,
   821    for example "produce", "fetch", "createtopic", "deletetopic". For a more
   822    extensive list, see the `Kafka protocol reference <https://kafka.apache.org/protocol#protocol_api_keys>`_.
   823    This field is incompatible with the Role field.
   824  
   825  APIVersion
   826    APIVersion is the version matched against the api version of the Kafka
   827    message. If set, it must be a string representing a positive integer. If
   828    omitted or empty, all versions are allowed.
   829  
   830  ClientID
   831    ClientID is the client identifier as provided in the request.
   832  
   833    From Kafka protocol documentation: This is a user supplied identifier for the
   834    client application. The user can use any identifier they like and it will be
   835    used when logging errors, monitoring aggregates, etc. For example, one might
   836    want to monitor not just the requests per second overall, but the number
   837    coming from each client application (each of which could reside on multiple
   838    servers). This id acts as a logical grouping across all requests from a
   839    particular client.
   840  
   841    If omitted or empty, all client identifiers are allowed.
   842  
   843  Topic
   844    Topic is the topic name contained in the message. If a Kafka request contains
   845    multiple topics, then all topics in the message must be allowed by the policy
   846    or the message will be rejected.
   847  
  This constraint is ignored if the matched request message type does not
  contain any topic. A topic name may be at most 249 characters long, each of
  which must be ``a-z``, ``A-Z``, ``0-9``, ``-``, ``.`` or ``_``.
   851  
   852    If omitted or empty, all topics are allowed.
   853  
   854  Allow producing to topic empire-announce using Role
   855  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   856  
   857  .. only:: html
   858  
   859     .. tabs::
   860       .. group-tab:: k8s YAML
   861  
   862          .. literalinclude:: ../../examples/policies/l7/kafka/kafka-role.yaml
   863       .. group-tab:: JSON
   864  
   865          .. literalinclude:: ../../examples/policies/l7/kafka/kafka-role.json
   866  
   867  .. only:: epub or latex
   868  
   869          .. literalinclude:: ../../examples/policies/l7/kafka/kafka-role.json
   870  
   871  Allow producing to topic empire-announce using apiKeys
   872  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   873  
   874  .. only:: html
   875  
   876     .. tabs::
   877       .. group-tab:: k8s YAML
   878  
   879          .. literalinclude:: ../../examples/policies/l7/kafka/kafka.yaml
   880       .. group-tab:: JSON
   881  
   882          .. literalinclude:: ../../examples/policies/l7/kafka/kafka.json
   883  
   884  .. only:: epub or latex
   885  
   886          .. literalinclude:: ../../examples/policies/l7/kafka/kafka.json
   887  
   888  
   889  .. _dns_discovery:
   890  
   891  DNS Policy and IP Discovery
   892  ---------------------------
   893  
   894  Policy may be applied to DNS traffic, allowing or disallowing specific DNS
   895  query names or patterns of names (other DNS fields, such as query type, are not
   896  considered). This policy is effected via a DNS proxy, which is also used to
   897  collect IPs used to populate L3 `DNS based`_ ``toFQDNs`` rules.
   898  
   899  .. note::  While Layer 7 DNS policy can be applied without any other Layer 3
   900             rules, the presence of a Layer 7 rule (with its Layer 3 and 4
   901             components) will block other traffic.
   902  
   903  DNS policy may be applied via:
   904  
   905  ``matchName``
   906    Allows queries for domains that match ``matchName`` exactly. Multiple
   907    distinct names may be included in separate ``matchName`` entries and queries
   908    for domains that match any ``matchName`` will be allowed.
   909  
   910  ``matchPattern``
  Allows queries for domains that match the pattern in ``matchPattern``,
  accounting for wildcards. Patterns are composed of literal characters
  that are allowed in domain names: a-z, 0-9, ``.`` and ``-``.
   914  
   915    ``*`` is allowed as a wildcard with a number of convenience behaviors:
   916  
   917    * ``*`` within a domain allows 0 or more valid DNS characters, except for the
   918      ``.`` separator. ``*.cilium.io`` will match ``sub.cilium.io`` but not
   919      ``cilium.io``. ``part*ial.com`` will match ``partial.com`` and
   920      ``part-extra-ial.com``.
   921    * ``*`` alone matches all names, and inserts all IPs in DNS responses into
   922      the cilium-agent DNS cache.
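
The wildcard behaviors above, sketched as a ``dns`` rule fragment (patterns
taken from the examples in the text):

.. code-block:: yaml

        rules:
          dns:
          - matchPattern: "*.cilium.io"    # sub.cilium.io, but not cilium.io
          - matchPattern: "part*ial.com"   # partial.com, part-extra-ial.com
          - matchPattern: "*"              # all names; caches all response IPs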
   923  
   924  In this example, L7 DNS policy allows queries for ``cilium.io`` and any
   925  subdomains of ``cilium.io`` and ``api.cilium.io``. No other DNS queries will be
   926  allowed.
   927  
   928  The separate L3 ``toFQDNs`` egress rule allows connections to any IPs returned
   929  in DNS queries for ``cilium.io``, ``sub.cilium.io``, ``service1.api.cilium.io``
   930  and any matches of ``special*service.api.cilium.io``, such as
   931  ``special-region1-service.api.cilium.io`` but not
   932  ``region1-service.api.cilium.io``. DNS queries to ``anothersub.cilium.io`` are
   933  allowed but connections to the returned IPs are not, as there is no L3
   934  ``toFQDNs`` rule selecting them. L4 and L7 policy may also be applied (see
   935  `DNS based`_), restricting connections to TCP port 80 in this case.
   936  
   937  .. only:: html
   938  
   939     .. tabs::
   940       .. group-tab:: k8s YAML
   941  
   942          .. literalinclude:: ../../examples/policies/l7/dns/dns.yaml
   943       .. group-tab:: JSON
   944  
   945          .. literalinclude:: ../../examples/policies/l7/dns/dns.json
   946  
   947  .. only:: epub or latex
   948  
   949          .. literalinclude:: ../../examples/policies/l7/dns/dns.json
   950  
   951  
.. note:: When applying DNS policy in Kubernetes, queries for
          ``service.namespace.svc.cluster.local.`` must be explicitly
          allowed with ``matchPattern: *.*.svc.cluster.local.``.
   955  
   956            Similarly, queries that rely on the DNS search list to complete the
   957            FQDN must be allowed in their entirety. e.g. A query for
   958            ``servicename`` that succeeds with
   959            ``servicename.namespace.svc.cluster.local.`` must have the latter
   960            allowed with ``matchName`` or ``matchPattern``.
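
A sketch of an egress rule satisfying this note (the kube-dns selector labels
and port are assumptions about a typical cluster):

.. code-block:: yaml

        egress:
        - toEndpoints:
          # Assumed labels of the kube-dns pods in the kube-system namespace.
          - matchLabels:
              k8s:io.kubernetes.pod.namespace: kube-system
              k8s-app: kube-dns
          toPorts:
          - ports:
            - port: "53"
              protocol: UDP
            rules:
              dns:
              # Allow queries for cluster-local service names.
              - matchPattern: "*.*.svc.cluster.local."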
   961  
   962  .. _DNS Obtaining Data:
   963  
   964  Obtaining DNS Data for use by ``toFQDNs``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

IPs are obtained by intercepting DNS requests with a proxy or via DNS polling,
and matching names are inserted irrespective of how the data is obtained.
These IPs can be selected with ``toFQDNs`` rules. DNS responses are cached
within the cilium-agent, respecting TTLs.
   970  
   971  .. _DNS Proxy:
   972  
   973  DNS Proxy (preferred)
   974  """""""""""""""""""""
   975    A DNS Proxy intercepts egress DNS traffic and records IPs seen in the
   976    responses. This interception is, itself, a separate policy rule governing the
   977    DNS requests, and must be specified separately. For details on how to enforce
   978    policy on DNS requests and configuring the DNS proxy, see `Layer 7
   979    Examples`_.
   980  
   981    Only IPs in intercepted DNS responses to an application will be allowed in
   982    the cilium policy rules. For a given domain name, IPs from responses to all
   983    pods managed by a Cilium instance are allowed by policy (respecting TTLs).
   984    This ensures that allowed IPs are consistent with those returned to
   985    applications. The DNS Proxy is the only method to allow IPs from responses
   986    allowed by wildcard L7 DNS ``matchPattern`` rules for use in ``toFQDNs``
   987    rules.
   988  
   989    The following example obtains DNS data by interception without blocking any
   990    DNS requests. It allows L3 connections to ``cilium.io``, ``sub.cilium.io``
   991    and any subdomains of ``sub.cilium.io``.
   992  
   993  .. only:: html
   994  
   995     .. tabs::
   996       .. group-tab:: k8s YAML
   997  
   998          .. literalinclude:: ../../examples/policies/l7/dns/dns-visibility.yaml
   999       .. group-tab:: JSON
  1000  
  1001          .. literalinclude:: ../../examples/policies/l7/dns/dns-visibility.json
  1002  
  1003  .. only:: epub or latex
  1004  
  1005          .. literalinclude:: ../../examples/policies/l7/dns/dns-visibility.json
  1006  
  1007  .. _DNS Polling:
  1008  
  1009  DNS Polling
  1010  """""""""""
  1011    DNS Polling periodically issues a DNS lookup for each ``matchName`` from
  1012    cilium-agent. The result is used to regenerate endpoint policy.  Despite the
  1013    name, the ``matchName`` field does not have to be a fully-qualified domain
  1014    name. In cases where search domains are configured for cilium-agent, the DNS
  1015    lookups from Cilium will not be qualified and will utilize the search list.
  1016    Unqualified names must be matched as-is by ``matchPattern`` in order to
  1017    insert related IPs.
  1018  
  DNS lookups are repeated with an interval of 5 seconds, and are made for
  A (IPv4) and AAAA (IPv6) records. Should a lookup fail, the most recent IP
  1021    data is used instead. An IP change will trigger a regeneration of the Cilium
  1022    policy for each endpoint and increment the per cilium-agent policy repository
  1023    revision.
  1024  
  1025    Polling may be enabled by the ``--tofqdns-enable-poller`` cilium-agent
  1026    CLI option. It is disabled by default.
  1027  
  1028    The DNS polling implementation is very limited. It may not behave as expected.
  1029  
  1030    #. The DNS polling is done from the cilium-agent process. This may result in
  1031       different IPs being returned in the DNS response than those seen by an
  1032       application.
  1033  
  1034    #. When using DNS Polling with DNS responses that return a new IP on every
  1035       query, the IP being whitelisted may differ from the one used for
  1036       connections by applications. This is because the application will make
  1037       a DNS query independent from the poll.
  1038  
  1039    #. When DNS lookups return many distinct IPs over time, large values of
  1040       ``--tofqdns-min-ttl`` may result in unacceptably slow policy
  1041       regeneration. See `DNS and Long-Lived Connections`_ for details.
  1042  
  #. The lookups from Cilium follow the configuration of the environment it
     is in via ``/etc/resolv.conf``. When running as a Kubernetes pod, the
     contents of ``resolv.conf`` are controlled via the ``dnsPolicy`` field
     of the pod spec. When running directly on a host, it will use the
     host's file.
  1047       Irrespective of how the DNS lookups are configured, TTLs and caches on the
  1048       resolver will impact the IPs seen by the cilium-agent lookups.
  1049  
  1050  .. note:: Connections to the DNS resolver must be explicitly whitelisted to
  1051            allow DNS queries. This is independent of the source of DNS
  1052            information, whether from polling or the DNS proxy.
  1053  
  1054  
  1055  Kubernetes
  1056  ==========
  1057  
This section covers Kubernetes-specific network policy aspects.
  1059  
  1060  .. _k8s_namespaces:
  1061  
  1062  Namespaces
  1063  ----------
  1064  
  1065  `Namespaces <https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/>`_
  1066  are used to create virtual clusters within a Kubernetes cluster. All Kubernetes objects
  1067  including NetworkPolicy and CiliumNetworkPolicy belong to a particular
namespace. Depending on how a policy is defined and created, Kubernetes
namespaces are automatically taken into account:
  1070  
  1071  * Network policies created and imported as `CiliumNetworkPolicy` CRD and
  1072    `NetworkPolicy` apply within the namespace, i.e. the policy only applies
  1073    to pods within that namespace. It is however possible to grant access to and
  1074    from pods in other namespaces as described below.
  1075  
  1076  * Network policies imported directly via the :ref:`api_ref` apply to all
  1077    namespaces unless a namespace selector is specified as described below.
  1078  
.. note:: While specification of the namespace via the label
          ``k8s:io.kubernetes.pod.namespace`` in the ``fromEndpoints`` and
          ``toEndpoints`` fields is deliberately supported, specification of
          the namespace in the ``endpointSelector`` is prohibited as it would
          violate the namespace isolation principle of Kubernetes. The
          ``endpointSelector`` always applies to pods of the namespace which
          is associated with the CiliumNetworkPolicy resource itself.
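
For example, a ``fromEndpoints`` selector may reference another namespace via
this label (the namespace and pod label below are illustrative):

.. code-block:: yaml

        ingress:
        - fromEndpoints:
          # Allow ingress from pods labeled name=luke in namespace ns1.
          - matchLabels:
              k8s:io.kubernetes.pod.namespace: ns1
              name: luke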
  1086  
  1087  Example: Enforce namespace boundaries
  1088  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1089  
  1090  This example demonstrates how to enforce Kubernetes namespace-based boundaries
  1091  for the namespaces ``ns1`` and ``ns2`` by enabling default-deny on all pods of
  1092  either namespace and then allowing communication from all pods within the same
  1093  namespace.
  1094  
  1095  .. note:: The example locks down ingress of the pods in ``ns1`` and ``ns2``.
  1096  	  This means that the pods can still communicate egress to anywhere
  1097  	  unless the destination is in either ``ns1`` or ``ns2`` in which case
  1098  	  both source and destination have to be in the same namespace. In
  1099  	  order to enforce namespace boundaries at egress, the same example can
  1100  	  be used by specifying the rules at egress in addition to ingress.
  1101  
  1102  .. only:: html
  1103  
  1104     .. tabs::
  1105       .. group-tab:: k8s YAML
  1106  
  1107          .. literalinclude:: ../../examples/policies/kubernetes/namespace/isolate-namespaces.yaml
  1108       .. group-tab:: JSON
  1109  
  1110          .. literalinclude:: ../../examples/policies/kubernetes/namespace/isolate-namespaces.json
  1111  
  1112  .. only:: epub or latex
  1113  
  1114          .. literalinclude:: ../../examples/policies/kubernetes/namespace/isolate-namespaces.json
  1115  
  1116  Example: Expose pods across namespaces
  1117  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1118  
  1119  The following example exposes all pods with the label ``name=leia`` in the
  1120  namespace ``ns1`` to all pods with the label ``name=luke`` in the namespace
  1121  ``ns2``.
  1122  
  1123  Refer to the :git-tree:`example YAML files <examples/policies/kubernetes/namespace/demo-pods.yaml>`
  1124  for a fully functional example including pods deployed to different namespaces.
  1125  
  1126  .. only:: html
  1127  
  1128     .. tabs::
  1129       .. group-tab:: k8s YAML
  1130  
  1131          .. literalinclude:: ../../examples/policies/kubernetes/namespace/namespace-policy.yaml
  1132       .. group-tab:: JSON
  1133  
  1134          .. literalinclude:: ../../examples/policies/kubernetes/namespace/namespace-policy.json
  1135  
  1136  .. only:: epub or latex
  1137  
  1138          .. literalinclude:: ../../examples/policies/kubernetes/namespace/namespace-policy.json
  1139  
  1140  Example: Allow egress to kube-dns in kube-system namespace
  1141  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  1142  
  1143  The following example allows all pods in the namespace in which the policy is
  1144  created to communicate with kube-dns on port 53/UDP in the ``kube-system``
  1145  namespace.
  1146  
  1147  .. only:: html
  1148  
  1149     .. tabs::
  1150       .. group-tab:: k8s YAML
  1151  
  1152          .. literalinclude:: ../../examples/policies/kubernetes/namespace/kubedns-policy.yaml
  1153       .. group-tab:: JSON
  1154  
  1155          .. literalinclude:: ../../examples/policies/kubernetes/namespace/kubedns-policy.json
  1156  
  1157  .. only:: epub or latex
  1158  
  1159          .. literalinclude:: ../../examples/policies/kubernetes/namespace/kubedns-policy.json
  1160  
  1161  
  1162  ServiceAccounts
  1163  ----------------
  1164  
Kubernetes `Service Accounts
<https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/>`_ are used
  1167  to associate an identity to a pod or process managed by Kubernetes and grant
  1168  identities access to Kubernetes resources and secrets. Cilium supports the
  1169  specification of network security policies based on the service account
  1170  identity of a pod.
  1171  
  1172  The service account of a pod is either defined via the `service account
  1173  admission controller
  1174  <https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount>`_
  1175  or can be directly specified in the Pod, Deployment, ReplicationController
  1176  resource like this:
  1177  
.. code-block:: yaml
  1179  
  1180          apiVersion: v1
  1181          kind: Pod
  1182          metadata:
  1183            name: my-pod
  1184          spec:
  1185            serviceAccountName: leia
  1186            ...
  1187  
  1188  Example
  1189  ~~~~~~~
  1190  
The following example allows any pod running under the service account
"luke" to issue an ``HTTP GET /public`` request on TCP port 80 to all pods
running under the service account "leia".
  1194  
  1195  Refer to the :git-tree:`example YAML files <examples/policies/kubernetes/serviceaccount/demo-pods.yaml>`
  1196  for a fully functional example including deployment and service account
  1197  resources.
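
As a hedged sketch of what such a policy can look like, assuming the service
account of a pod is exposed to policy via the
``io.cilium.k8s.policy.serviceaccount`` label:

.. code-block:: yaml

        # Sketch: pods of service account "leia" accept GET /public on
        # 80/TCP from pods of service account "luke".
        endpointSelector:
          matchLabels:
            io.cilium.k8s.policy.serviceaccount: leia
        ingress:
        - fromEndpoints:
          - matchLabels:
              io.cilium.k8s.policy.serviceaccount: luke
          toPorts:
          - ports:
            - port: "80"
              protocol: TCP
            rules:
              http:
              - method: "GET"
                path: "/public"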
  1198  
  1199  
  1200  .. only:: html
  1201  
  1202     .. tabs::
  1203       .. group-tab:: k8s YAML
  1204  
  1205          .. literalinclude:: ../../examples/policies/kubernetes/serviceaccount/serviceaccount-policy.yaml
  1206       .. group-tab:: JSON
  1207  
  1208          .. literalinclude:: ../../examples/policies/kubernetes/serviceaccount/serviceaccount-policy.json
  1209  
  1210  .. only:: epub or latex
  1211  
  1212          .. literalinclude:: ../../examples/policies/kubernetes/serviceaccount/serviceaccount-policy.json
  1213  
  1214  Multi-Cluster
  1215  -------------
  1216  
When operating multiple clusters with Cluster Mesh, the cluster name is
exposed via the label ``io.cilium.k8s.policy.cluster`` and can be used to
restrict policies to a particular cluster.
  1220  
  1221  .. only:: html
  1222  
  1223     .. tabs::
  1224       .. group-tab:: k8s YAML
  1225  
  1226          .. literalinclude:: ../../examples/policies/kubernetes/clustermesh/cross-cluster-policy.yaml
  1227  
  1228  .. only:: epub or latex
  1229  
  1230          .. literalinclude:: ../../examples/policies/kubernetes/clustermesh/cross-cluster-policy.yaml