# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `Type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

Contour requires a secret containing TLS certificates that are used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour `certgen` job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of `type: LoadBalancer`.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.

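To verify that the TLS secrets exist (whether generated by the `certgen` job or provided manually), you can list them; the names below are the ones created by the example `certgen` job and may differ in your installation:

```bash
$ kubectl -n projectcontour get secrets contourcert envoycert
```
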
### Setting resource requests and limits

It is recommended that resource requests and limits be set on all Contour and Envoy containers.
The example YAML manifests used in the [Getting Started][8] guide do not include these, because the appropriate values can vary widely from user to user.
The table below summarizes the Contour and Envoy containers, and provides some reasonable resource requests to start with (note that these should be adjusted based on observed usage and expected load):

| Workload            | Container        | Request (mem) | Request (cpu) |
| ------------------- | ---------------- | ------------- | ------------- |
| deployment/contour  | contour          | 128Mi         | 250m          |
| daemonset/envoy     | envoy            | 256Mi         | 500m          |
| daemonset/envoy     | shutdown-manager | 50Mi          | 25m           |

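As a sketch, the table's values could be applied to the Envoy DaemonSet's containers like so (limits are omitted here and should be chosen to match your workload):

```yaml
# Excerpt from the envoy DaemonSet pod spec, using the illustrative values above.
containers:
- name: shutdown-manager
  resources:
    requests:
      cpu: 25m
      memory: 50Mi
- name: envoy
  resources:
    requests:
      cpu: 500m
      memory: 256Mi
```
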
### Envoy as DaemonSet

The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet.
The example DaemonSet places a single instance of Envoy on each node in the cluster and attaches to `hostPorts` on each node.
This model allows for simple scaling of Envoy instances and ensures even distribution of instances across the cluster.

The [example daemonset manifest][2] or [Contour Operator][12] will create an installation based on these recommendations.

_Note: If the size of the cluster is scaled down, connections can be lost, since Kubernetes DaemonSets do not follow proper `preStop` hooks._
_Note: Contour Operator is alpha and therefore follows the Contour [deprecation policy][13]._

### Envoy as Deployment

An alternative deployment model is to run Envoy as a Kubernetes Deployment with `podAntiAffinity` configured, which attempts to mirror the DaemonSet model.
A benefit of this model over the DaemonSet version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before it terminates.

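For illustration, such an anti-affinity rule might look like the following sketch (the label key and value are assumptions; the labels in the actual example manifest may differ):

```yaml
# Excerpt from an Envoy Deployment pod spec: prefer spreading Envoy pods
# across nodes so the layout resembles a DaemonSet.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: envoy
```
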
The [example deployment manifest][14] will create an installation based on these recommendations.

## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME    CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
envoy   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=envoy
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.

Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol.][4]

#### Minikube

On Minikube, to get the IP address of the Contour service, run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as `CONTOUR_IP` in the rest of the documentation.

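If you want to capture this value in a shell variable, you might do the following (a sketch; `--url` may print one URL per exposed Service port, so the first is taken):

```bash
$ CONTOUR_IP=$(minikube service -n projectcontour envoy --url | head -n 1)
```
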
#### kind

When creating the cluster on kind, pass a custom configuration to allow kind to expose ports 80 and 443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your `CONTOUR_IP` (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

Navigate your browser to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  FIRST ROUTE   STATUS   STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>                 valid    valid HTTPProxy
```
... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.
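
For example, assuming the port names used in the example manifests, you could look up the HTTP NodePort directly (a sketch; adjust the port name if your Service defines different ones):

```bash
$ kubectl -n projectcontour get service envoy -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
30274
```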

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the Contour `serve` command, which instructs Envoy to listen directly on ports 80/443 on each host that it runs on.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the [AWS NLB tutorial][10] as an example, or the sketch below.

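A sketch of the relevant pieces under this model (the two excerpts belong to different manifests, and flag placement is illustrative):

```yaml
# Excerpt from the Envoy DaemonSet pod spec: run in the host's network namespace.
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
---
# Excerpt from the Contour Deployment's container args: tell Contour which
# host ports Envoy listens on.
args:
- serve
- --incluster
- --envoy-service-http-port=80
- --envoy-service-https-port=443
```
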
## Disabling Features

You can run Contour with certain features disabled by passing the `--disable-feature` flag to the Contour `serve` command.
The flag is used to disable the informer for a custom resource, effectively making the corresponding CRD optional in the cluster.
You can provide the flag multiple times.

For example, to disable the ExtensionService CRD, use the flag as follows: `--disable-feature=extensionservices`.

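As a sketch, this is how the flag might appear in the contour container's arguments (repeat the flag once per feature to disable):

```yaml
# Excerpt from the contour Deployment container spec.
args:
- serve
- --incluster
- --disable-feature=extensionservices
```
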
See the [configuration section entry][19] for all options.

## Upgrading Contour/Envoy

At times, you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can assist by watching Envoy for open connections while draining, and by signaling to Kubernetes when it is safe to delete Envoy pods during this process.

See the [redeploy envoy][11] docs for more information about how to avoid dropping active connections to Envoy.
Also see the [upgrade guides][15] for steps to roll out a new version of Contour.

## Running Multiple Instances of Contour

It's possible to run multiple instances of Contour within a single Kubernetes cluster.
This can be useful for separating external vs. internal ingress, for having separate ingress controllers for different ingress classes, and more.
The recommended way to deploy multiple Contour instances is to put each instance in its own namespace.
This avoids most naming conflicts that would otherwise occur, and provides better logical separation between the instances.
However, it is also possible to deploy multiple instances in a single namespace if needed; this approach requires more modifications to the example manifests to function properly.
Each approach is described in detail below, using the [examples/contour][17] directory's manifests for reference.

### In Separate Namespaces (recommended)

In general, this approach requires updating the `namespace` of all resources, as well as giving unique names to cluster-scoped resources to avoid conflicts.

- `00-common.yaml`:
  - update the name of the `Namespace`
  - update the namespace of both `ServiceAccounts`
- `01-contour-config.yaml`:
  - update the namespace of the `ConfigMap`
  - if you have any namespaced references within the ConfigMap contents (e.g. `fallback-certificate`, `envoy-client-certificate`), ensure those point to the correct namespace as well.
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the namespace of all resources
  - update the namespace of the `ServiceAccount` subject within the `RoleBinding`
- `02-role-contour.yaml`:
  - update the name of the `ClusterRole` to be unique
  - update the namespace of the `Role`
- `02-rbac.yaml`:
  - update the name of the `ClusterRoleBinding` to be unique
  - update the namespace of the `RoleBinding`
  - update the namespaces of the `ServiceAccount` subject within both resources
  - update the name of the ClusterRole within the ClusterRoleBinding's roleRef to match the unique name used in `02-role-contour.yaml`
- `02-service-contour.yaml`:
  - update the namespace of the `Service`
- `02-service-envoy.yaml`:
  - update the namespace of the `Service`
- `03-contour.yaml`:
  - update the namespace of the `Deployment`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class (see the sketch after this list).
- `03-envoy.yaml`:
  - update the namespace of the `DaemonSet`
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

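As a sketch, the container-argument change in `03-contour.yaml` for a second instance might look like the following (the class name is an illustrative placeholder):

```yaml
# Excerpt from 03-contour.yaml for the second instance.
args:
- serve
- --incluster
- --ingress-class-name=contour-internal   # only claim this ingress class
```
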
### In The Same Namespace

This approach requires giving unique names to all resources to avoid conflicts, and updating all resource references to use the correct names.

- `00-common.yaml`:
  - update the names of both `ServiceAccounts` to be unique
- `01-contour-config.yaml`:
  - update the name of the `ConfigMap` to be unique
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the names of all resources to be unique
  - update the name of the `Role` within the `RoleBinding`'s roleRef to match the unique name used for the `Role`
  - update the name of the `ServiceAccount` within the `RoleBinding`'s subjects to match the unique name used for the `ServiceAccount`
  - update the serviceAccountName of the `Job`
  - add an argument to the container, `--secrets-name-suffix=<unique suffix>`, so the generated TLS secrets have unique names
  - update the spec.template.metadata.labels on the `Job` to be unique
- `02-role-contour.yaml`:
  - update the names of the `ClusterRole` and `Role` to be unique
- `02-rbac.yaml`:
  - update the names of the `ClusterRoleBinding` and `RoleBinding` to be unique
  - update the roleRefs within both resources to reference the unique `Role` and `ClusterRole` names used in `02-role-contour.yaml`
  - update the subjects within both resources to reference the unique `ServiceAccount` name used in `00-common.yaml`
- `02-service-contour.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-contour.yaml`, below)
- `02-service-envoy.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-envoy.yaml`, below)
- `03-contour.yaml`:
  - update the name of the `Deployment` to be unique
  - update the metadata.labels, the spec.selector.matchLabels, the spec.template.metadata.labels, and the spec.template.spec.affinity.podAntiAffinity labels to match the labels used in `02-service-contour.yaml`
  - update the serviceAccountName to match the unique name used in `00-common.yaml`
  - update the `contourcert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `contourcert<unique-suffix>`)
  - update the `contour-config` volume to reference the unique `ConfigMap` name used in `01-contour-config.yaml`
  - add an argument to the container, `--leader-election-resource-name=<unique lease name>`, so this Contour instance uses a separate leader election `Lease`
  - add an argument to the container, `--envoy-service-name=<unique envoy service name>`, referencing the unique name used in `02-service-envoy.yaml`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class (the added arguments are sketched after this list).
- `03-envoy.yaml`:
  - update the name of the `DaemonSet` to be unique
  - update the metadata.labels, the spec.selector.matchLabels, and the spec.template.metadata.labels to match the unique labels used in `02-service-envoy.yaml`
  - update the `--xds-address` argument to the initContainer to use the unique name of the contour Service from `02-service-contour.yaml`
  - update the serviceAccountName to match the unique name used in `00-common.yaml`
  - update the `envoycert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `envoycert<unique-suffix>`)
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)

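As a sketch, the second instance's contour container arguments might look like this (all names are illustrative placeholders):

```yaml
# Excerpt from 03-contour.yaml for the second instance in the shared namespace.
args:
- serve
- --incluster
- --leader-election-resource-name=leader-elect-2   # separate leader election Lease
- --envoy-service-name=envoy-2                     # unique Service name from 02-service-envoy.yaml
- --ingress-class-name=contour-internal            # only claim this ingress class
```
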
### Using the Gateway provisioner

The Contour Gateway provisioner also supports deploying multiple instances of Contour, either in the same namespace or different namespaces.
See [Getting Started with the Gateway provisioner][16] for more information on getting started with the Gateway provisioner.
To deploy multiple Contour instances, you create multiple `Gateways`, either in the same namespace or in different namespaces.

Note that although the provisioning request itself is made via a Gateway API resource (`Gateway`), this method of installation still allows you to use *any* of the supported APIs for defining virtual hosts and routes: `Ingress`, `HTTPProxy`, or Gateway API's `HTTPRoute` and `TLSRoute`.

If you are using `Ingress` or `HTTPProxy`, you will likely want to assign each Contour instance a different ingress class, so they each handle different subsets of `Ingress`/`HTTPProxy` resources.
To do this, [create two separate GatewayClasses][18], each with a different `ContourDeployment` parametersRef.
The `ContourDeployment` specs should look like:

```yaml
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-1
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-1
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-2
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-2
```
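
A sketch of one of the corresponding `GatewayClasses`, pointing at a `ContourDeployment` above (the `GatewayClass` name is an illustrative placeholder):

```yaml
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: contour-class-1
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    kind: ContourDeployment
    group: projectcontour.io
    name: ingress-class-1
    namespace: projectcontour
```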

Then create each `Gateway` with the appropriate `spec.gatewayClassName`.

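For example (a sketch; the name and listeners are illustrative):

```yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  namespace: projectcontour
  name: contour-1
spec:
  gatewayClassName: contour-class-1
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```
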
## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime. (A comma-separated list of class names is allowed.)
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.

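For example, an `Ingress` that Contour will claim might look like this (a sketch; the host and backend names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"   # claimed by Contour's default class
spec:
  rules:
  - host: kuard.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard
            port:
              number: 80
```
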
## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

**Note**: The namespace may differ from above if [Contour Operator][12] was used to deploy Contour.

## Uninstall Contour Operator

To remove Contour Operator from your cluster, delete the operator's namespace:

```bash
$ kubectl delete ns contour-operator
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url >}}/tree/{{< param branch >}}/examples/render/contour.yaml
[3]: #host-networking
[4]: guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url >}}/tree/{{< param branch >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started
[9]: config/fundamentals.md
[10]: guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: https://github.com/projectcontour/contour-operator
[13]: https://projectcontour.io/resources/deprecation-policy/
[14]: {{< param github_url >}}/tree/{{< param branch >}}/examples/render/contour-deployment.yaml
[15]: /resources/upgrading/
[16]: https://projectcontour.io/getting-started/#option-3-contour-gateway-provisioner-alpha
[17]: {{< param github_url >}}/tree/{{< param branch >}}/examples/contour
[18]: guides/gateway-api/#next-steps
[19]: configuration.md