github.com/projectcontour/contour@v1.28.2/site/content/docs/1.26/deploy-options.md

# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

Contour requires a secret containing the TLS certificates used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour `certgen` job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of `type: LoadBalancer`.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.

### Setting resource requests and limits

It is recommended that resource requests and limits be set on all Contour and Envoy containers.
The example YAML manifests used in the [Getting Started][8] guide do not include these, because the appropriate values can vary widely from user to user.
The table below summarizes the Contour and Envoy containers, and provides some reasonable resource requests to start with (note that these should be adjusted based on observed usage and expected load):

| Workload            | Container        | Request (mem) | Request (cpu) |
| ------------------- | ---------------- | ------------- | ------------- |
| deployment/contour  | contour          | 128Mi         | 250m          |
| daemonset/envoy     | envoy            | 256Mi         | 500m          |
| daemonset/envoy     | shutdown-manager | 50Mi          | 25m           |

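As a sketch of how the starting values above plug into the manifests, the Envoy containers' requests could be set like this (an illustrative excerpt of the envoy DaemonSet's pod template, not a complete manifest; limits are omitted here and should be chosen based on your own load testing):

```yaml
spec:
  containers:
    - name: envoy
      resources:
        requests:
          memory: 256Mi
          cpu: 500m
    - name: shutdown-manager
      resources:
        requests:
          memory: 50Mi
          cpu: 25m
```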

### Envoy as Daemonset

The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet.
The example DaemonSet places a single instance of Envoy on each node in the cluster and attaches to `hostPorts` on each node.
This model allows for simple scaling of Envoy instances as well as ensuring even distribution of instances across the cluster.

The [example daemonset manifest][2] or [Contour Gateway Provisioner][12] will create an installation based on these recommendations.

_Note: If the size of the cluster is scaled down, connections can be lost since Kubernetes DaemonSets do not follow proper `preStop` hooks._

### Envoy as Deployment

An alternative deployment model is to run Envoy as a Kubernetes Deployment with a `podAntiAffinity` rule configured to mirror the DaemonSet model's one-instance-per-node placement.
A benefit of this model over the DaemonSet version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before it terminates.

The [example deployment manifest][14] will create an installation based on these recommendations.
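
The anti-affinity scheduling described above can be sketched as follows (an illustrative excerpt of an Envoy Deployment's pod template; the exact labels and weights in the example manifest may differ):

```yaml
spec:
  affinity:
    podAntiAffinity:
      # Prefer not to schedule two Envoy pods onto the same node.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                app: envoy
```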

## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
envoy     10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=envoy
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.

Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol][4].
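
To avoid copying the value by hand, you can capture it into the `CONTOUR_IP` variable used throughout these docs. This is a sketch using `kubectl`'s JSONPath output; on providers like AWS that assign a DNS name rather than an IP, query `.hostname` instead of `.ip`:

```bash
$ CONTOUR_IP=$(kubectl get -n projectcontour service envoy \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $CONTOUR_IP
```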

#### Minikube

On Minikube, to get the IP address of the Contour service, run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as `CONTOUR_IP` in the rest of the documentation.

#### kind

When creating the cluster on kind, pass a custom configuration to allow kind to expose port 80/443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your `CONTOUR_IP` (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._
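
For example, once routes exist for that hostname (the kuard examples below don't use it by default, so this assumes you have configured an Ingress or HTTPProxy for `local.projectcontour.io`), you can exercise the kind cluster with a real domain name:

```bash
$ curl http://local.projectcontour.io/
```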

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

Navigate your browser to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                    FQDN                TLS SECRET                  FIRST ROUTE  STATUS  STATUS DESCRIPT
httpproxy.projectcontour.io/kuard      kuard.local         <SECRET NAME IF TLS USED>                valid   valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use `curl` with the IP or DNS address of the Contour Service to send a request to the demo application:

```bash
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to Envoy.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.
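
Picking that second number out by hand is error-prone; the snippet below shows the string manipulation on a sample `PORT(S)` value, with the equivalent `kubectl` JSONPath query (run against a live cluster) in comments. The service name and namespace follow the example manifests:

```shell
# With a live cluster, query the NodePort directly:
#   kubectl get -n projectcontour service envoy \
#     -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'

# The same value can be cut out of the PORT(S) column shown by
# `kubectl get service -o wide`, e.g. "80:30274/TCP":
ports="80:30274/TCP"
node_port=${ports#*:}     # drop everything up to the first ":"
node_port=${node_port%/*} # drop the "/TCP" suffix
echo "$node_port"         # prints 30274
```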

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the contour `serve` command, which instructs Envoy to listen directly on ports 80/443 on each host where it is running.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the [AWS NLB tutorial][10] as an example.
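
The two pod-level settings above look like this in the Envoy DaemonSet's pod template (an illustrative excerpt only):

```yaml
spec:
  template:
    spec:
      hostNetwork: true
      # Required so the pod can still resolve cluster-internal names
      # (e.g. Contour's xDS Service) while on the host network.
      dnsPolicy: ClusterFirstWithHostNet
```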

## Disabling Features

You can run Contour with certain features disabled by passing the `--disable-feature` flag to the Contour `serve` command.
The flag is used to disable the informer for a custom resource, effectively making the corresponding CRD optional in the cluster.
You can provide the flag multiple times.

For example, to disable the ExtensionService CRD, use the flag as follows: `--disable-feature=extensionservices`.
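
In the example Deployment manifest this translates to extra container arguments. The excerpt below is illustrative, and the second value is a hypothetical placeholder; consult the configuration reference for the accepted feature names:

```yaml
# Excerpt from the contour Deployment's container spec.
args:
  - serve
  - --incluster
  - --disable-feature=extensionservices
  - --disable-feature=<another-feature>  # hypothetical; repeat the flag per feature
```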

See the [configuration section entry][19] for all options.

## Upgrading Contour/Envoy

At times, you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can assist by watching Envoy for open connections while draining, and signaling back to Kubernetes when it is safe to delete Envoy pods during this process.

See the [redeploy envoy][11] docs for more information about how to avoid dropping active connections to Envoy.
Also see the [upgrade guides][15] for steps to roll out a new version of Contour.

## Running Multiple Instances of Contour

It's possible to run multiple instances of Contour within a single Kubernetes cluster.
This can be useful for separating external vs. internal ingress, for having separate ingress controllers for different ingress classes, and more.
Each Contour instance can also be configured via the `--watch-namespaces` flag to watch only its own set of namespaces, which allows the Kubernetes RBAC objects to be restricted further.
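
As a sketch, the per-instance restriction is another `serve` argument on the contour container; the namespace names here are placeholders:

```yaml
# Excerpt from one instance's contour Deployment (illustrative).
args:
  - serve
  - --incluster
  - --watch-namespaces=team-a,team-b  # placeholder namespaces
```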

The recommended way to deploy multiple Contour instances is to put each instance in its own namespace.
This avoids most naming conflicts that would otherwise occur, and provides better logical separation between the instances.
However, it is also possible to deploy multiple instances in a single namespace if needed; this approach requires more modifications to the example manifests to function properly.
Each approach is described in detail below, using the [examples/contour][17] directory's manifests for reference.

### In Separate Namespaces (recommended)

In general, this approach requires updating the `namespace` of all resources, as well as giving unique names to cluster-scoped resources to avoid conflicts.

- `00-common.yaml`:
  - update the name of the `Namespace`
  - update the namespace of both `ServiceAccounts`
- `01-contour-config.yaml`:
  - update the namespace of the `ConfigMap`
  - if you have any namespaced references within the ConfigMap contents (e.g. `fallback-certificate`, `envoy-client-certificate`), ensure those point to the correct namespace as well.
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the namespace of all resources
  - update the namespace of the `ServiceAccount` subject within the `RoleBinding`
- `02-role-contour.yaml`:
  - update the name of the `ClusterRole` to be unique
  - update the namespace of the `Role`
- `02-rbac.yaml`:
  - update the name of the `ClusterRoleBinding` to be unique
  - update the namespace of the `RoleBinding`
  - update the namespaces of the `ServiceAccount` subject within both resources
  - update the name of the `ClusterRole` within the `ClusterRoleBinding`'s `roleRef` to match the unique name used in `02-role-contour.yaml`
- `02-service-contour.yaml`:
  - update the namespace of the `Service`
- `02-service-envoy.yaml`:
  - update the namespace of the `Service`
- `03-contour.yaml`:
  - update the namespace of the `Deployment`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- `03-envoy.yaml`:
  - update the namespace of the `DaemonSet`
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

### In The Same Namespace

This approach requires giving unique names to all resources to avoid conflicts, and updating all resource references to use the correct names.

- `00-common.yaml`:
  - update the names of both `ServiceAccounts` to be unique
- `01-contour-config.yaml`:
  - update the name of the `ConfigMap` to be unique
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the names of all resources to be unique
  - update the name of the `Role` within the `RoleBinding`'s `roleRef` to match the unique name used for the `Role`
  - update the name of the `ServiceAccount` within the `RoleBinding`'s subjects to match the unique name used for the `ServiceAccount`
  - update the `serviceAccountName` of the `Job`
  - add an argument to the container, `--secrets-name-suffix=<unique suffix>`, so the generated TLS secrets have unique names
  - update the `spec.template.metadata.labels` on the `Job` to be unique
- `02-role-contour.yaml`:
  - update the names of the `ClusterRole` and `Role` to be unique
- `02-rbac.yaml`:
  - update the names of the `ClusterRoleBinding` and `RoleBinding` to be unique
  - update the `roleRef`s within both resources to reference the unique `Role` and `ClusterRole` names used in `02-role-contour.yaml`
  - update the subjects within both resources to reference the unique `ServiceAccount` name used in `00-common.yaml`
- `02-service-contour.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-contour.yaml`, below)
- `02-service-envoy.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-envoy.yaml`, below)
- `03-contour.yaml`:
  - update the name of the `Deployment` to be unique
  - update the `metadata.labels`, the `spec.selector.matchLabels`, the `spec.template.metadata.labels`, and the `spec.template.spec.affinity.podAntiAffinity` labels to match the labels used in `02-service-contour.yaml`
  - update the `serviceAccountName` to match the unique name used in `00-common.yaml`
  - update the `contourcert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `contourcert<unique-suffix>`)
  - update the `contour-config` volume to reference the unique `ConfigMap` name used in `01-contour-config.yaml`
  - add an argument to the container, `--leader-election-resource-name=<unique lease name>`, so this Contour instance uses a separate leader election `Lease`
  - add an argument to the container, `--envoy-service-name=<unique envoy service name>`, referencing the unique name used in `02-service-envoy.yaml`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class.
- `03-envoy.yaml`:
  - update the name of the `DaemonSet` to be unique
  - update the `metadata.labels`, the `spec.selector.matchLabels`, and the `spec.template.metadata.labels` to match the unique labels used in `02-service-envoy.yaml`
  - update the `--xds-address` argument to the initContainer to use the unique name of the contour `Service` from `02-service-contour.yaml`
  - update the `serviceAccountName` to match the unique name used in `00-common.yaml`
  - update the `envoycert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `envoycert<unique-suffix>`)
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

### Using the Gateway provisioner

The Contour Gateway provisioner also supports deploying multiple instances of Contour, either in the same namespace or in different namespaces.
See [Getting Started with the Gateway provisioner][16] for more information.
To deploy multiple Contour instances, you create multiple `Gateways`, either in the same namespace or in different namespaces.

Note that although the provisioning request itself is made via a Gateway API resource (`Gateway`), this method of installation still allows you to use *any* of the supported APIs for defining virtual hosts and routes: `Ingress`, `HTTPProxy`, or Gateway API's `HTTPRoute` and `TLSRoute`.

If you are using `Ingress` or `HTTPProxy`, you will likely want to assign each Contour instance a different ingress class, so they each handle different subsets of `Ingress`/`HTTPProxy` resources.
To do this, [create two separate GatewayClasses][18], each with a different `ContourDeployment` parametersRef.
The `ContourDeployment` specs should look like:

```yaml
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-1
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-1
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-2
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-2
```

Then create each `Gateway` with the appropriate `spec.gatewayClassName`.
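
For example (the names here are placeholders, and the listener setup is a minimal sketch using Gateway API `v1beta1`; adjust to your needs):

```yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  namespace: projectcontour
  name: contour-ingress-1
spec:
  # References the GatewayClass whose ContourDeployment sets ingress-class-1.
  gatewayClassName: contour-class-1
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
```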

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all Ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime. (A comma-separated list of class names is allowed.)
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that Ingress.
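
A minimal sketch of an annotated Ingress that Contour would claim (the host and Service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"
spec:
  rules:
    - host: kuard.example.com      # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kuard        # placeholder Service
                port:
                  number: 80
```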

## Uninstall Contour

To remove Contour or the Contour Gateway Provisioner from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

**Note**: Your namespace may differ from above.

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour.yaml
[3]: #host-networking
[4]: guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started
[9]: config/fundamentals.md
[10]: guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-gateway-provisioner.yaml
[13]: https://projectcontour.io/resources/deprecation-policy/
[14]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-deployment.yaml
[15]: /resources/upgrading/
[16]: https://projectcontour.io/getting-started/#option-3-contour-gateway-provisioner-alpha
[17]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour
[18]: guides/gateway-api/#next-steps
[19]: configuration.md