---
title: Using Gatekeeper as a validating admission controller with Contour
---

This tutorial demonstrates how to use [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) as a validating admission controller for Contour.

Gatekeeper is a project that enables users to define flexible policies for Kubernetes resources using [Open Policy Agent (OPA)](https://www.openpolicyagent.org/) that are enforced when those resources are created/updated via the Kubernetes API.

The benefits of using Gatekeeper with Contour are:
- Immediate feedback for the user when they try to create an `HTTPProxy` with an invalid spec. Instead of having to check the `HTTPProxy`'s status after creation for a possible error message, the creation is rejected and the user is immediately given the reason for the rejection.
- User-defined policies for `HTTPProxy` specs. For example, the Contour admin can define policies to enforce maximum limits on timeouts and retries, disallow certain FQDNs, etc.

## Prerequisites

- A Kubernetes cluster with a minimum version of 1.14 (to enable webhook timeouts for Gatekeeper).
- Cluster-admin permissions

## Deploy Contour

Run:

```bash
$ kubectl apply -f {{< param base_url >}}/quickstart/contour.yaml
```

This creates a `projectcontour` namespace and sets up Contour as a deployment and Envoy as a daemonset, with communication between them secured by mutual TLS.

Check the status of the Contour pods with this command:

```bash
$ kubectl -n projectcontour get pods -l app=contour
NAME                           READY   STATUS      RESTARTS   AGE
contour-8596d6dbd7-9nrg2       1/1     Running     0          32m
contour-8596d6dbd7-mmtc8       1/1     Running     0          32m
```

If installation was successful, all pods should reach `Running` status shortly.

## Deploy Gatekeeper

The following instructions are summarized from the [Gatekeeper documentation](https://github.com/open-policy-agent/gatekeeper#installation-instructions).
If you already have Gatekeeper running in your cluster, you can skip this section.

Run:

```bash
$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
```

This creates a `gatekeeper-system` namespace and sets up the Gatekeeper controller manager and audit deployments using the latest Gatekeeper release.

Check the status of the Gatekeeper pods with this command:

```bash
$ kubectl -n gatekeeper-system get pods
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-67dfc46db6-kjcmc                1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-64hhn   1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-c4dkd   1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-xv9jn   1/1     Running   0          40m
```

If installation was successful, all pods should reach `Running` status shortly.

## Configure Gatekeeper

### Background

Gatekeeper uses the [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) to define and enforce policies.
This framework has two key types: `ConstraintTemplate` and `Constraint`.
A `ConstraintTemplate` defines a reusable OPA policy, along with the parameters that can be passed to it when it is instantiated.
When a `ConstraintTemplate` is created, Gatekeeper automatically creates a custom resource definition (CRD) to represent it in the cluster.

A `Constraint` is an instantiation of a `ConstraintTemplate`, which tells Gatekeeper to apply it to specific Kubernetes resource types (e.g. `HTTPProxy`) and provides any relevant parameter values.
A `Constraint` is defined as an instance of the CRD representing the associated `ConstraintTemplate`.

We'll now look at some examples to make these concepts concrete.

### Configure resource caching

First, Gatekeeper needs to be configured to store all `HTTPProxy` resources in its internal cache, so that existing `HTTPProxy` resources can be referenced within constraint template policies.
This is essential for being able to define constraints that look across all `HTTPProxies` -- for example, to verify FQDN uniqueness.

Create a file called `config.yml` containing the following YAML:

```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: "projectcontour.io"
        version: "v1"
        kind: "HTTPProxy"
```

Apply it to the cluster:

```bash
$ kubectl apply -f config.yml
```

Note that if you already had Gatekeeper running in your cluster, you may already have the `Config` resource defined.
In that case, you'll need to edit the existing resource to add `HTTPProxy` to the `spec.sync.syncOnly` list.

### Configure HTTPProxy validations

The first constraint template and constraint that we'll define are what we'll refer to as a **validation**.
These are rules for `HTTPProxy` specs that Contour universally requires to be true.
In this example, we'll define a constraint template and constraint to enforce that all `HTTPProxies` must have a unique FQDN.

Create a file called `unique-fqdn-template.yml` containing the following YAML:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: httpproxyuniquefqdn
spec:
  crd:
    spec:
      names:
        kind: HTTPProxyUniqueFQDN
        listKind: HTTPProxyUniqueFQDNList
        plural: HTTPProxyUniqueFQDNs
        singular: HTTPProxyUniqueFQDN
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package httpproxy.uniquefqdn

        violation[{"msg": msg, "other": sprintf("%v/%v", [other.metadata.namespace, other.metadata.name])}] {
          got := input.review.object.spec.virtualhost.fqdn
          other := data.inventory.namespace[_]["projectcontour.io/v1"]["HTTPProxy"][_]
          other.spec.virtualhost.fqdn == got

          not same(other, input.review.object)
          msg := "HTTPProxy must have a unique spec.virtualhost.fqdn"
        }

        same(a, b) {
          a.metadata.namespace == b.metadata.namespace
          a.metadata.name == b.metadata.name
        }
```
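If rego is unfamiliar, the rule above may be easier to follow as imperative pseudocode. The following Python sketch mirrors its logic (the function and variable names are illustrative, not part of Gatekeeper): for an incoming object, scan every cached `HTTPProxy` and flag any *other* proxy with the same FQDN.

```python
def same(a, b):
    """Two resources are the same object if namespace and name both match."""
    return (a["metadata"]["namespace"] == b["metadata"]["namespace"]
            and a["metadata"]["name"] == b["metadata"]["name"])

def violations(incoming, inventory):
    """Mirror of the rego rule: flag any cached HTTPProxy, other than the
    incoming object itself, whose spec.virtualhost.fqdn matches."""
    got = incoming["spec"]["virtualhost"]["fqdn"]
    msgs = []
    for other in inventory:  # all HTTPProxies in Gatekeeper's cache
        if other["spec"]["virtualhost"]["fqdn"] == got and not same(other, incoming):
            msgs.append("HTTPProxy must have a unique spec.virtualhost.fqdn")
    return msgs
```

The `same` check matters: without it, updating an existing `HTTPProxy` would conflict with the cached copy of itself.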
Apply it to the cluster:

```bash
$ kubectl apply -f unique-fqdn-template.yml
```

Within a few seconds, you'll see that a corresponding CRD has been created in the cluster:

```bash
$ kubectl get crd httpproxyuniquefqdn.constraints.gatekeeper.sh
NAME                                            CREATED AT
httpproxyuniquefqdn.constraints.gatekeeper.sh   2020-08-13T16:08:57Z
```

Now, create a file called `unique-fqdn-constraint.yml` containing the following YAML:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: HTTPProxyUniqueFQDN
metadata:
  name: httpproxy-unique-fqdn
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
```

Note that the `Kind` of this resource corresponds to the new CRD.

Apply it to the cluster:

```bash
$ kubectl apply -f unique-fqdn-constraint.yml
```

Now, let's create some `HTTPProxies` to see the validation in action.

Create a file called `httpproxies.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo
  namespace: default
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo2
  namespace: default
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
```

Note that both `HTTPProxies` have the same FQDN.

Apply the YAML:

```bash
$ kubectl apply -f httpproxies.yml
```

You should see something like:

```
httpproxy.projectcontour.io/demo created
Error from server ([denied by httpproxy-unique-fqdn] HTTPProxy must have a unique spec.virtualhost.fqdn): error when creating "httpproxies.yml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by httpproxy-unique-fqdn] HTTPProxy must have a unique spec.virtualhost.fqdn
```

The first `HTTPProxy` was created successfully, because no existing proxy had the `demo.projectcontour.io` FQDN.
However, when the second `HTTPProxy` was submitted, Gatekeeper rejected it because it used the same FQDN as the first.

### Configure HTTPProxy policies

The next constraint template and constraint that we'll create are what we refer to as a **policy**.
These are rules for `HTTPProxy` specs that an individual Contour administrator may want to enforce for their cluster, but that are not explicitly required by Contour itself.
In this example, we'll define a constraint template and constraint to enforce that all `HTTPProxies` can be configured with at most five retries for any route.

Create a file called `retry-count-range-template.yml` containing the following YAML:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: httpproxyretrycountrange
spec:
  crd:
    spec:
      names:
        kind: HTTPProxyRetryCountRange
        listKind: HTTPProxyRetryCountRangeList
        plural: HTTPProxyRetryCountRanges
        singular: HTTPProxyRetryCountRange
      scope: Namespaced
      validation:
        openAPIV3Schema:
          properties:
            min:
              type: integer
            max:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package httpproxy.retrycountrange

        # build a set of all the retry count values
        retry_counts[val] {
          val := input.review.object.spec.routes[_].retryPolicy.count
        }

        # is there a retry count value that's greater than the allowed max?
        violation[{"msg": msg}] {
          retry_counts[_] > input.parameters.max
          msg := sprintf("retry count must be less than or equal to %v", [input.parameters.max])
        }

        # is there a retry count value that's less than the allowed min?
        violation[{"msg": msg}] {
          retry_counts[_] < input.parameters.min
          msg := sprintf("retry count must be greater than or equal to %v", [input.parameters.min])
        }
```
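This policy amounts to a range check over the retry count of every route in the proxy. A rough Python equivalent of the rego (names are illustrative, not Gatekeeper API; the `params` dict stands in for `input.parameters`):

```python
def violations(httpproxy, params):
    """Rough equivalent of the rego: collect each route's retryPolicy.count
    and flag values outside the [min, max] range given in the parameters."""
    counts = {r["retryPolicy"]["count"]
              for r in httpproxy["spec"]["routes"] if "retryPolicy" in r}
    msgs = []
    if any(c > params["max"] for c in counts):
        msgs.append("retry count must be less than or equal to %d" % params["max"])
    if any(c < params["min"] for c in counts):
        msgs.append("retry count must be greater than or equal to %d" % params["min"])
    return msgs
```

Note that, like the rego `retry_counts` set, routes without a `retryPolicy` are simply skipped rather than treated as violations.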
Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-template.yml
```

Again, within a few seconds, you'll see that a corresponding CRD has been created in the cluster:

```bash
$ kubectl get crd httpproxyretrycountrange.constraints.gatekeeper.sh
NAME                                                 CREATED AT
httpproxyretrycountrange.constraints.gatekeeper.sh   2020-08-13T16:12:10Z
```

Now, create a file called `retry-count-range-constraint.yml` containing the following YAML:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: HTTPProxyRetryCountRange
metadata:
  name: httpproxy-retry-count-range
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
    namespaces:
      - my-namespace
  parameters:
    max: 5
```

Note that for this `Constraint`, we've added a `spec.match.namespaces` field, which specifies that this policy applies only to `HTTPProxies` created in the `my-namespace` namespace.
If the `namespaces` matcher is not specified, the `Constraint` applies to all namespaces.
You can read more about `Constraint` matchers on the [Gatekeeper website](https://github.com/open-policy-agent/gatekeeper#constraints).
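Matchers can also exclude namespaces rather than enumerate them. For example, a `match` block like the following (an illustrative fragment, not used elsewhere in this guide) would apply the constraint everywhere except `kube-system`:

```yaml
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
    excludedNamespaces:
      - kube-system
```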
Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-constraint.yml
```

Now, let's create some `HTTPProxies` to see the policy in action.

Create a namespace called `my-namespace`:

```bash
$ kubectl create namespace my-namespace
namespace/my-namespace created
```

Create a file called `httpproxy-retries.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-retries
  namespace: my-namespace
spec:
  virtualhost:
    fqdn: retries.projectcontour.io
  routes:
    - conditions:
        - prefix: /foo
      services:
        - name: s1
          port: 80
      retryPolicy:
        count: 6
```

Apply the YAML:

```bash
$ kubectl apply -f httpproxy-retries.yml
```

You should see something like:

```
Error from server ([denied by httpproxy-retry-count-range] retry count must be less than or equal to 5): error when creating "httpproxy-retries.yml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by httpproxy-retry-count-range] retry count must be less than or equal to 5
```
Now, change the `count` field on the last line of `httpproxy-retries.yml` to have a value of `5`. Save the file, and apply it again:

```bash
$ kubectl apply -f httpproxy-retries.yml
```

This time, the `HTTPProxy` is created successfully*.

_* Note that the HTTPProxy is still marked invalid by Contour after creation because the service `s1` does not exist, but that's outside the scope of this guide._

Finally, create a file called `httpproxy-retries-default.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-retries
  namespace: default
spec:
  virtualhost:
    fqdn: default.retries.projectcontour.io
  routes:
    - conditions:
        - prefix: /foo
      services:
        - name: s1
          port: 80
      retryPolicy:
        count: 6
```

Remember that our `Constraint` was defined to apply only to the `my-namespace` namespace, so it should not block the creation of this proxy, even though it has a retry policy count outside the allowed range.

Apply the YAML:

```bash
$ kubectl apply -f httpproxy-retries-default.yml
```

The `HTTPProxy` is created successfully.

## Gatekeeper Audit

We've seen how Gatekeeper can enforce constraints when a user tries to create a new `HTTPProxy`. Now let's look at how constraints can be applied to pre-existing resources in the cluster.

Gatekeeper has audit functionality that periodically (every `60s` by default) checks all existing resources against the relevant set of constraints. Any violations are reported in the `Constraint` custom resource's `status.violations` field. This allows an administrator to periodically review and correct any pre-existing misconfigurations, without having to worry about breaking existing resources when rolling out a new or updated constraint.
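Conceptually, audit differs from admission only in when the check runs and what happens on failure. A minimal sketch (hypothetical helper names; `check` stands in for evaluating a constraint against one resource): instead of rejecting a request, violations for resources already in the cluster are collected into a report, which Gatekeeper records in the `Constraint`'s `status.violations`.

```python
def audit(existing_resources, check):
    """Sketch of an audit pass: run the constraint check over resources
    already in the cluster and collect (not reject) any violations."""
    report = []
    for res in existing_resources:
        for msg in check(res):
            report.append({
                "kind": res["kind"],
                "name": res["metadata"]["name"],
                "namespace": res["metadata"]["namespace"],
                "message": msg,
            })
    return report
```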

To try this out, let's revisit the previous example, and change our constraint to allow a maximum retry count of four.

Edit `retry-count-range-constraint.yml` and change the `max` field to have a value of `4`. Save the file.

Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-constraint.yml
```

We know that the `demo-retries` proxy has a route with a `retryPolicy.count` of `5`. This should now be invalid according to the updated constraint.

Wait up to `60s` for the next periodic audit to finish, then run:

```bash
$ kubectl describe httpproxyretrycountrange httpproxy-retry-count-range
```

You should see something like:

```
...
Status:
    ...
    Violations:
        Enforcement Action:  deny
        Kind:                HTTPProxy
        Message:             retry count must be less than or equal to 4
        Name:                demo-retries
        Namespace:           my-namespace
```

However, our `HTTPProxy` remains in the cluster and can continue to route requests, and the user can remediate the proxy to bring it in line with the policy on their own timeline.

## Next steps

Contour has a [growing library](https://github.com/projectcontour/contour/tree/main/examples/gatekeeper) of Gatekeeper constraint templates and constraints, for both **validations** and **policies**.

If you're using Gatekeeper, we recommend that you apply all of the **validations** we've defined, since these rules are already being checked internally by Contour and reported as status errors/invalid proxies.
Using the Gatekeeper constraints will only improve the user experience, since users will get earlier feedback if their proxies are invalid.
The **validations** can be found in `examples/gatekeeper/validations`.

You should take more of a pick-and-choose approach to our sample **policies**, since every organization will have different policy needs.
Feel free to use any/all/none of them, and augment them with your own policies if applicable.
The sample **policies** can be found in `examples/gatekeeper/policies`.

And of course, if you do develop any new constraints that you think may be useful for the broader Contour community, we welcome contributions!