---
title: Using Gatekeeper as a validating admission controller with Contour
layout: page
---

This tutorial demonstrates how to use [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) as a validating admission controller for Contour.

Gatekeeper is a project that enables users to define flexible policies for Kubernetes resources using [Open Policy Agent (OPA)](https://www.openpolicyagent.org/) that are enforced when those resources are created/updated via the Kubernetes API.

The benefits of using Gatekeeper with Contour are:
- Immediate feedback for the user when they try to create an `HTTPProxy` with an invalid spec. Instead of having to check the `HTTPProxy`'s status after creation for a possible error message, the create is rejected and the user is immediately provided with a reason for the rejection.
- User-defined policies for `HTTPProxy` specs. For example, the Contour admin can define policies to enforce maximum limits on timeouts and retries, disallow certain FQDNs, etc.

## Prerequisites

- A Kubernetes cluster with a minimum version of 1.14 (to enable webhook timeouts for Gatekeeper).
- Cluster-admin permissions.

## Deploy Contour

Run:

```bash
$ kubectl apply -f {{< param base_url >}}/quickstart/contour.yaml
```

This creates a `projectcontour` namespace and sets up Contour as a deployment and Envoy as a daemonset, with communication between them secured by mutual TLS.

Check the status of the Contour pods with this command:

```bash
$ kubectl -n projectcontour get pods -l app=contour
NAME                           READY   STATUS      RESTARTS   AGE
contour-8596d6dbd7-9nrg2       1/1     Running     0          32m
contour-8596d6dbd7-mmtc8       1/1     Running     0          32m
```

If installation was successful, all pods should reach `Running` status shortly.

## Deploy Gatekeeper

The following instructions are summarized from the [Gatekeeper documentation](https://github.com/open-policy-agent/gatekeeper#installation-instructions).
If you already have Gatekeeper running in your cluster, you can skip this section.

Run:

```bash
$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
```

This creates a `gatekeeper-system` namespace and sets up the Gatekeeper controller manager and audit deployments using the latest Gatekeeper release.

Check the status of the Gatekeeper pods with this command:

```bash
$ kubectl -n gatekeeper-system get pods
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-67dfc46db6-kjcmc                1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-64hhn   1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-c4dkd   1/1     Running   0          40m
gatekeeper-controller-manager-7cbc758844-xv9jn   1/1     Running   0          40m
```

If installation was successful, all pods should reach `Running` status shortly.

## Configure Gatekeeper

### Background

Gatekeeper uses the [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) to define and enforce policies.
This framework has two key types: `ConstraintTemplate` and `Constraint`.
A `ConstraintTemplate` defines a reusable OPA policy, along with the parameters that can be passed to it when it is instantiated.
When a `ConstraintTemplate` is created, Gatekeeper automatically creates a custom resource definition (CRD) to represent it in the cluster.

A `Constraint` is an instantiation of a `ConstraintTemplate`, which tells Gatekeeper to apply it to specific Kubernetes resource types (e.g. `HTTPProxy`) and provides any relevant parameter values.
A `Constraint` is defined as an instance of the CRD representing the associated `ConstraintTemplate`.

We'll now look at some examples to make these concepts concrete.

### Configure resource caching

First, Gatekeeper needs to be configured to store all `HTTPProxy` resources in its internal cache, so that existing `HTTPProxy` resources can be referenced within constraint template policies.
This is essential for being able to define constraints that look across all `HTTPProxies` -- for example, to verify FQDN uniqueness.

Create a file called `config.yml` containing the following YAML:

```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: "projectcontour.io"
        version: "v1"
        kind: "HTTPProxy"
```

Apply it to the cluster:

```bash
$ kubectl apply -f config.yml
```

Note that if you already had Gatekeeper running in your cluster, you may already have the `Config` resource defined.
In that case, you'll need to edit the existing resource to add `HTTPProxy` to the `spec.sync.syncOnly` list.
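
For example, if your existing `Config` already synced another resource type (the `Service` entry below is purely illustrative), the merged list might look like:

```yaml
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      # pre-existing entry (hypothetical example)
      - group: ""
        version: "v1"
        kind: "Service"
      # added so Gatekeeper caches Contour's HTTPProxy resources
      - group: "projectcontour.io"
        version: "v1"
        kind: "HTTPProxy"
```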

### Configure HTTPProxy validations

The first constraint template and constraint that we'll define are what we'll refer to as a **validation**.
These are rules for `HTTPProxy` specs that Contour universally requires to be true.
In this example, we'll define a constraint template and constraint to enforce that all `HTTPProxies` must have a unique FQDN.

Create a file called `unique-fqdn-template.yml` containing the following YAML:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: httpproxyuniquefqdn
spec:
  crd:
    spec:
      names:
        kind: HTTPProxyUniqueFQDN
        listKind: HTTPProxyUniqueFQDNList
        plural: HTTPProxyUniqueFQDNs
        singular: HTTPProxyUniqueFQDN
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package httpproxy.uniquefqdn

        violation[{"msg": msg, "other": sprintf("%v/%v", [other.metadata.namespace, other.metadata.name])}] {
          got := input.review.object.spec.virtualhost.fqdn
          other := data.inventory.namespace[_]["projectcontour.io/v1"]["HTTPProxy"][_]
          other.spec.virtualhost.fqdn == got

          not same(other, input.review.object)
          msg := "HTTPProxy must have a unique spec.virtualhost.fqdn"
        }

        same(a, b) {
          a.metadata.namespace == b.metadata.namespace
          a.metadata.name == b.metadata.name
        }
```

Apply it to the cluster:

```bash
$ kubectl apply -f unique-fqdn-template.yml
```

Within a few seconds, you'll see that a corresponding CRD has been created in the cluster:

```bash
$ kubectl get crd httpproxyuniquefqdn.constraints.gatekeeper.sh
NAME                                            CREATED AT
httpproxyuniquefqdn.constraints.gatekeeper.sh   2020-08-13T16:08:57Z
```

Now, create a file called `unique-fqdn-constraint.yml` containing the following YAML:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: HTTPProxyUniqueFQDN
metadata:
  name: httpproxy-unique-fqdn
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
```

Note that the `kind` of this resource corresponds to the new CRD.

Apply it to the cluster:

```bash
$ kubectl apply -f unique-fqdn-constraint.yml
```

Now, let's create some `HTTPProxies` to see the validation in action.

Create a file called `httpproxies.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo
  namespace: default
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo2
  namespace: default
spec:
  virtualhost:
    fqdn: demo.projectcontour.io
```

Note that both `HTTPProxies` have the same FQDN.

Apply the YAML:

```bash
$ kubectl apply -f httpproxies.yml
```

You should see something like:
```
httpproxy.projectcontour.io/demo created
Error from server ([denied by httpproxy-unique-fqdn] HTTPProxy must have a unique spec.virtualhost.fqdn): error when creating "httpproxies.yml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by httpproxy-unique-fqdn] HTTPProxy must have a unique spec.virtualhost.fqdn
```

The first `HTTPProxy` was created successfully, because there was not already an existing proxy with the `demo.projectcontour.io` FQDN.
However, when the second `HTTPProxy` was submitted, Gatekeeper rejected its creation because it used the same FQDN as the first one.

### Configure HTTPProxy policies

The next constraint template and constraint that we'll create are what we refer to as a **policy**.
These are rules for `HTTPProxy` specs that an individual Contour administrator may want to enforce for their cluster, but that are not explicitly required by Contour itself.
In this example, we'll define a constraint template and constraint to enforce that all `HTTPProxies` can be configured with at most five retries for any route.

Create a file called `retry-count-range-template.yml` containing the following YAML:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: httpproxyretrycountrange
spec:
  crd:
    spec:
      names:
        kind: HTTPProxyRetryCountRange
        listKind: HTTPProxyRetryCountRangeList
        plural: HTTPProxyRetryCountRanges
        singular: HTTPProxyRetryCountRange
      scope: Namespaced
      validation:
        openAPIV3Schema:
          properties:
            min:
              type: integer
            max:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package httpproxy.retrycountrange

        # build a set of all the retry count values
        retry_counts[val] {
          val := input.review.object.spec.routes[_].retryPolicy.count
        }

        # is there a retry count value that's greater than the allowed max?
        violation[{"msg": msg}] {
          retry_counts[_] > input.parameters.max
          msg := sprintf("retry count must be less than or equal to %v", [input.parameters.max])
        }

        # is there a retry count value that's less than the allowed min?
        violation[{"msg": msg}] {
          retry_counts[_] < input.parameters.min
          msg := sprintf("retry count must be greater than or equal to %v", [input.parameters.min])
        }
```

Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-template.yml
```

Again, within a few seconds, you'll see that a corresponding CRD has been created in the cluster:

```bash
$ kubectl get crd httpproxyretrycountrange.constraints.gatekeeper.sh
NAME                                                 CREATED AT
httpproxyretrycountrange.constraints.gatekeeper.sh   2020-08-13T16:12:10Z
```

Now, create a file called `retry-count-range-constraint.yml` containing the following YAML:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: HTTPProxyRetryCountRange
metadata:
  name: httpproxy-retry-count-range
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
    namespaces:
      - my-namespace
  parameters:
    max: 5
```

Note that for this `Constraint`, we've added a `spec.match.namespaces` field which defines that this policy should only be applied to `HTTPProxies` created in the `my-namespace` namespace.
If this `namespaces` matcher is not specified, then the `Constraint` applies to all namespaces.
You can read more about `Constraint` matchers on the [Gatekeeper website](https://github.com/open-policy-agent/gatekeeper#constraints).
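
As a sketch of one other matcher (not used in this guide), a `Constraint` can instead be scoped to everything *except* certain namespaces using `excludedNamespaces`; the namespace names below are illustrative:

```yaml
spec:
  match:
    kinds:
      - apiGroups: ["projectcontour.io"]
        kinds: ["HTTPProxy"]
    # apply everywhere except these namespaces (illustrative)
    excludedNamespaces:
      - kube-system
      - gatekeeper-system
```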

Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-constraint.yml
```

Now, let's create some `HTTPProxies` to see the policy in action.

Create a namespace called `my-namespace`:

```bash
$ kubectl create namespace my-namespace
namespace/my-namespace created
```

Create a file called `httpproxy-retries.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-retries
  namespace: my-namespace
spec:
  virtualhost:
    fqdn: retries.projectcontour.io
  routes:
    - conditions:
        - prefix: /foo
      services:
        - name: s1
          port: 80
      retryPolicy:
        count: 6
```

Apply the YAML:

```bash
$ kubectl apply -f httpproxy-retries.yml
```

You should see something like:
```
Error from server ([denied by httpproxy-retry-count-range] retry count must be less than or equal to 5): error when creating "httpproxy-retries.yml": admission webhook "validation.gatekeeper.sh" denied the request: [denied by httpproxy-retry-count-range] retry count must be less than or equal to 5
```

Now, change the `count` field on the last line of `httpproxy-retries.yml` to have a value of `5`. Save the file, and apply it again:

```bash
$ kubectl apply -f httpproxy-retries.yml
```

Now the `HTTPProxy` creates successfully*.

_* Note that the HTTPProxy is still marked invalid by Contour after creation because the service `s1` does not exist, but that's outside the scope of this guide._

Finally, create a file called `httpproxy-retries-default.yml` containing the following YAML:

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: demo-retries
  namespace: default
spec:
  virtualhost:
    fqdn: default.retries.projectcontour.io
  routes:
    - conditions:
        - prefix: /foo
      services:
        - name: s1
          port: 80
      retryPolicy:
        count: 6
```

Remember that our `Constraint` was defined to apply only to the `my-namespace` namespace, so it should not block the creation of this proxy, even though it has a retry policy count outside the allowed range.

Apply the YAML:

```bash
$ kubectl apply -f httpproxy-retries-default.yml
```

The `HTTPProxy` creates successfully.

## Gatekeeper Audit

We've seen how Gatekeeper enforces constraints when a user tries to create a new `HTTPProxy`. Now let's look at how constraints can be applied to pre-existing resources in the cluster.

Gatekeeper has audit functionality that periodically (every `60s` by default) checks all existing resources against the relevant set of constraints. Any violations are reported in the `Constraint` custom resource's `status.violations` field. This allows an administrator to periodically review and correct any pre-existing misconfigurations, without having to worry about breaking existing resources when rolling out a new or updated constraint.
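
The audit interval is configurable via the audit deployment's `--audit-interval` flag (in seconds). The fragment below is a sketch only; the container name and manifest layout may differ across Gatekeeper versions:

```yaml
# gatekeeper-audit Deployment (fragment, illustrative)
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --operation=audit
            - --audit-interval=120  # audit every two minutes instead of the default 60s
```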

To try this out, let's revisit the previous example, and change our constraint to allow a maximum retry count of four.

Edit `retry-count-range-constraint.yml` and change the `max` field to have a value of `4`. Save the file.

Apply it to the cluster:

```bash
$ kubectl apply -f retry-count-range-constraint.yml
```

We know that the `demo-retries` proxy has a route with a `retryPolicy.count` of `5`. This should now be invalid according to the updated constraint.

Wait up to `60s` for the next periodic audit to finish, then run:

```bash
$ kubectl describe httpproxyretrycountrange httpproxy-retry-count-range
```

You should see something like:

```
...
Status:
    ...
    Violations:
        Enforcement Action:  deny
        Kind:                HTTPProxy
        Message:             retry count must be less than or equal to 4
        Name:                demo-retries
        Namespace:           my-namespace
```

However, our `HTTPProxy` remains in the cluster and can continue to route requests, and the user can remediate the proxy to bring it in line with the policy on their own timeline.

## Next steps

Contour has a [growing library](https://github.com/projectcontour/contour/tree/main/examples/gatekeeper) of Gatekeeper constraint templates and constraints, for both **validations** and **policies**.

If you're using Gatekeeper, we recommend that you apply all of the **validations** we've defined, since these rules are already checked internally by Contour and reported as status errors on invalid proxies.
Using the Gatekeeper constraints only improves the user experience, because users get earlier feedback when their proxies are invalid.
The **validations** can be found in `examples/gatekeeper/validations`.

You should take more of a pick-and-choose approach to our sample **policies**, since every organization has different policy needs.
Feel free to use any, all, or none of them, and augment them with your own policies where applicable.
The sample **policies** can be found in `examples/gatekeeper/policies`.

And of course, if you develop any new constraints that you think may be useful to the broader Contour community, we welcome contributions!