# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

Contour requires a secret containing TLS certificates that are used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour `certgen` job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of `type: LoadBalancer`.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.
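
If you use the `certgen` job, you can confirm that the TLS secrets exist before moving on. A minimal check, assuming the default `projectcontour` namespace and the secret names used by the example manifests (`contourcert` and `envoycert`):

```bash
# List the TLS secrets created by the certgen job (names assume the example manifests).
$ kubectl get secrets -n projectcontour contourcert envoycert
```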

### Envoy as Daemonset

The recommended installation is for Contour to run as a Deployment and Envoy to run as a Daemonset.
The example Daemonset places a single instance of Envoy per node in the cluster and attaches to `hostPorts` on each node.
This model allows for simple scaling of Envoy instances as well as ensuring even distribution of instances across the cluster.

The [example daemonset manifest][2] or [Contour Operator][12] will create an installation based on these recommendations.

_Note: If the size of the cluster is scaled down, connections can be lost since Kubernetes Daemonsets do not follow proper `preStop` hooks._
_Note: Contour Operator is alpha and therefore follows the Contour [deprecation policy][13]._
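
The relevant part of the Daemonset is the `hostPort` mapping on the Envoy container. A trimmed-down sketch of what the example manifest configures (image, command, and most other fields omitted; the example manifest is authoritative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
      - name: envoy
        # Image and command omitted for brevity; see the example manifest.
        ports:
        - containerPort: 8080
          hostPort: 80     # Exposes HTTP directly on each node.
          name: http
          protocol: TCP
        - containerPort: 8443
          hostPort: 443    # Exposes HTTPS directly on each node.
          name: https
          protocol: TCP
```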

### Envoy as Deployment

An alternative Envoy deployment model is a Kubernetes Deployment with a configured `podAntiAffinity`, which attempts to mirror the Daemonset deployment model.
A benefit of this model compared to the Daemonset version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before terminating.
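
A sketch of the anti-affinity stanza such a Deployment might carry (the labels and topology key are illustrative; the example manifest is authoritative):

```yaml
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Prefer to schedule at most one Envoy pod per node.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: envoy
```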

The [example deployment manifest][14] will create an installation based on these recommendations.

## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
envoy     10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=envoy
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address or, in the case of AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
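
If you want to capture this value for use in the examples below, one way (a sketch; the jsonpath prints whichever of the IP or hostname fields your cloud provider populates) is:

```bash
# Store the load balancer address in CONTOUR_IP for later use.
$ export CONTOUR_IP=$(kubectl get service envoy -n projectcontour \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}')
$ echo ${CONTOUR_IP}
```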

Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol.][4]

#### Minikube

On Minikube, to get the IP address of the Contour service, run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as `CONTOUR_IP` in the rest of the documentation.
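
As above, you may find it convenient to export this value. A sketch (takes the first URL reported and strips the `http://` prefix so it matches how `CONTOUR_IP` is used below):

```bash
# Capture the minikube service URL and drop the scheme.
$ export CONTOUR_IP=$(minikube service -n projectcontour envoy --url | head -n1 | sed 's#^http://##')
$ echo ${CONTOUR_IP}
```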

#### kind

When creating the cluster with kind, pass a custom configuration to allow kind to expose ports 80/443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your `CONTOUR_IP` (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

Navigate your browser to the IP or DNS address of the Contour Service to interact with the demo application.
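
If you captured `CONTOUR_IP` earlier, you can also check from the command line (a quick sketch; the Ingress matches all hosts, so no `Host` header is needed):

```bash
# Expect an HTTP 200 from the kuard demo application.
$ curl -s -o /dev/null -w "%{http_code}\n" http://${CONTOUR_IP}/
```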

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```sh
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  FIRST ROUTE   STATUS   STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>                 valid    valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.
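
For reference, the HTTPProxy applied above is roughly equivalent to the following (a sketch using the HTTPProxy API; the example manifest is authoritative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: kuard
  labels:
    app: kuard
spec:
  virtualhost:
    fqdn: kuard.local
  routes:
  - services:
    - name: kuard
      port: 80
```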

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to Envoy.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.
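
One way to read the assigned node ports directly (a sketch; prints each port's name and node port):

```bash
# Print the node ports assigned to the Envoy service's ports.
$ kubectl get service envoy -n projectcontour \
    -o jsonpath='{range .spec.ports[*]}{.name}: {.nodePort}{"\n"}{end}'
```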

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the contour `serve` command, which instructs Envoy to listen directly on ports 80/443 on each host where it is running.
This is best paired with a DaemonSet (perhaps with node affinity) to ensure that a single instance of Envoy runs on each node.
See the [AWS NLB tutorial][10] as an example.
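
A minimal sketch of the two pieces described above (fragments only; names and surrounding fields assume the example manifests and should be adapted to yours):

```yaml
# Envoy pod template fragment: run on the host network.
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
---
# Contour container fragment: tell Envoy to listen on 80/443 directly.
containers:
- name: contour
  command: ["contour"]
  args:
  - serve
  - --incluster
  - --envoy-service-http-port=80
  - --envoy-service-https-port=443
```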

## Upgrading Contour/Envoy

At times you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can assist by watching Envoy for open connections while draining and signaling back to Kubernetes when it is safe to delete Envoy pods during this process.

See the [redeploy envoy][11] docs for more information about how to avoid dropping active connections to Envoy.
Also see the [upgrade guides][15] for steps to roll out a new version of Contour.

## Running Multiple Instances of Contour

It's possible to run multiple instances of Contour within a single Kubernetes cluster.
This can be useful for separating external vs. internal ingress, for having separate ingress controllers for different ingress classes, and more.
The recommended way to deploy multiple Contour instances is to put each instance in its own namespace.
This avoids most naming conflicts that would otherwise occur, and provides better logical separation between the instances.
However, it is also possible to deploy multiple instances in a single namespace if needed; this approach requires more modifications to the example manifests to function properly.
Each approach is described in detail below, using the [examples/contour][17] directory's manifests for reference.

### In Separate Namespaces (recommended)

In general, this approach requires updating the `namespace` of all resources, as well as giving unique names to cluster-scoped resources to avoid conflicts.

- `00-common.yaml`:
  - update the name of the `Namespace`
  - update the namespace of both `ServiceAccounts`
- `01-contour-config.yaml`:
  - update the namespace of the `ConfigMap`
  - if you have any namespaced references within the ConfigMap contents (e.g. `fallback-certificate`, `envoy-client-certificate`), ensure those point to the correct namespace as well.
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the namespace of all resources
  - update the namespace of the `ServiceAccount` subject within the `RoleBinding`
- `02-role-contour.yaml`:
  - update the name of the `ClusterRole` to be unique
  - update the namespace of the `Role`
- `02-rbac.yaml`:
  - update the name of the `ClusterRoleBinding` to be unique
  - update the namespace of the `RoleBinding`
  - update the namespaces of the `ServiceAccount` subject within both resources
  - update the name of the `ClusterRole` within the `ClusterRoleBinding`'s roleRef to match the unique name used in `02-role-contour.yaml`
- `02-service-contour.yaml`:
  - update the namespace of the `Service`
- `02-service-envoy.yaml`:
  - update the namespace of the `Service`
- `03-contour.yaml`:
  - update the namespace of the `Deployment`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class (see the sketch after this list).
- `03-envoy.yaml`:
  - update the namespace of the `DaemonSet`
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)
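
For example, the extra argument on the second instance's contour container might look like this (the class name `internal` is illustrative):

```yaml
# Fragment of the contour container's args in 03-contour.yaml.
args:
- serve
- --incluster
- --ingress-class-name=internal   # Only claim Ingress/HTTPProxy resources with this class.
```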

### In The Same Namespace

This approach requires giving unique names to all resources to avoid conflicts, and updating all resource references to use the correct names.

- `00-common.yaml`:
  - update the names of both `ServiceAccounts` to be unique
- `01-contour-config.yaml`:
  - update the name of the `ConfigMap` to be unique
- `01-crds.yaml` will be shared between the two instances; no changes are needed.
- `02-job-certgen.yaml`:
  - update the names of all resources to be unique
  - update the name of the `Role` within the `RoleBinding`'s roleRef to match the unique name used for the `Role`
  - update the name of the `ServiceAccount` within the `RoleBinding`'s subjects to match the unique name used for the `ServiceAccount`
  - update the serviceAccountName of the `Job`
  - add an argument to the container, `--secrets-name-suffix=<unique suffix>`, so the generated TLS secrets have unique names
  - update the spec.template.metadata.labels on the `Job` to be unique
- `02-role-contour.yaml`:
  - update the names of the `ClusterRole` and `Role` to be unique
- `02-rbac.yaml`:
  - update the names of the `ClusterRoleBinding` and `RoleBinding` to be unique
  - update the roleRefs within both resources to reference the unique `Role` and `ClusterRole` names used in `02-role-contour.yaml`
  - update the subjects within both resources to reference the unique `ServiceAccount` name used in `00-common.yaml`
- `02-service-contour.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-contour.yaml`, below)
- `02-service-envoy.yaml`:
  - update the name of the `Service` to be unique
  - update the selector to be unique (this must match the labels used in `03-envoy.yaml`, below)
- `03-contour.yaml`:
  - update the name of the `Deployment` to be unique
  - update the metadata.labels, the spec.selector.matchLabels, the spec.template.metadata.labels, and the spec.template.spec.affinity.podAntiAffinity labels to match the labels used in `02-service-contour.yaml`
  - update the serviceAccountName to match the unique name used in `00-common.yaml`
  - update the `contourcert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `contourcert<unique-suffix>`)
  - update the `contour-config` volume to reference the unique `ConfigMap` name used in `01-contour-config.yaml`
  - add an argument to the container, `--leader-election-resource-name=<unique lease name>`, so this Contour instance uses a separate leader election `Lease`
  - add an argument to the container, `--envoy-service-name=<unique envoy service name>`, referencing the unique name used in `02-service-envoy.yaml`
  - add an argument to the container, `--ingress-class-name=<unique ingress class>`, so this instance only processes Ingresses/HTTPProxies with the given ingress class (see the sketch after this list).
- `03-envoy.yaml`:
  - update the name of the `DaemonSet` to be unique
  - update the metadata.labels, the spec.selector.matchLabels, and the spec.template.metadata.labels to match the unique labels used in `02-service-envoy.yaml`
  - update the `--xds-address` argument to the initContainer to use the unique name of the contour Service from `02-service-contour.yaml`
  - update the serviceAccountName to match the unique name used in `00-common.yaml`
  - update the `envoycert` volume to reference the unique `Secret` name generated from `02-job-certgen.yaml` (e.g. `envoycert<unique-suffix>`)
  - remove the two `hostPort` definitions from the container (otherwise, these would conflict between the two instances)
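
Putting the three extra arguments together, the second instance's contour container might carry something like the following (all names are illustrative):

```yaml
# Fragment of the second contour Deployment's container args in 03-contour.yaml.
args:
- serve
- --incluster
- --leader-election-resource-name=leader-elect-contour-internal  # Separate Lease for this instance.
- --envoy-service-name=envoy-internal                            # Matches the Service name in 02-service-envoy.yaml.
- --ingress-class-name=internal                                  # Only claim resources with this ingress class.
```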

### Using the Gateway provisioner

The Contour Gateway provisioner also supports deploying multiple instances of Contour, either in the same namespace or different namespaces.
See [Getting Started with the Gateway provisioner][16] for more information on getting started with the Gateway provisioner.
To deploy multiple Contour instances, you create multiple `Gateways`, either in the same namespace or in different namespaces.

Note that although the provisioning request itself is made via a Gateway API resource (`Gateway`), this method of installation still allows you to use *any* of the supported APIs for defining virtual hosts and routes: `Ingress`, `HTTPProxy`, or Gateway API's `HTTPRoute` and `TLSRoute`.

If you are using `Ingress` or `HTTPProxy`, you will likely want to assign each Contour instance a different ingress class, so they each handle different subsets of `Ingress`/`HTTPProxy` resources.
To do this, [create two separate GatewayClasses][18], each with a different `ContourDeployment` parametersRef.
The `ContourDeployment` specs should look like:

```yaml
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-1
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-1
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: ingress-class-2
spec:
  runtimeSettings:
    ingress:
      classNames:
        - ingress-class-2
```

Then create each `Gateway` with the appropriate `spec.gatewayClassName`.
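
For example, a `GatewayClass` referencing the first `ContourDeployment` above, and a `Gateway` using it, might look like this (a sketch; resource names and listener details are illustrative):

```yaml
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: contour-ingress-class-1
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    kind: ContourDeployment
    group: projectcontour.io
    name: ingress-class-1
    namespace: projectcontour
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  namespace: projectcontour
  name: contour-1
spec:
  gatewayClassName: contour-ingress-class-1
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
```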

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime. (A comma-separated list of class names is allowed.)
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.
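
For example, an Ingress that Contour should claim (the backend name and port are illustrative) might be annotated like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard
            port:
              number: 80
```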

## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

**Note**: The namespace may differ from above if [Contour Operator][12] was used to
deploy Contour.

## Uninstall Contour Operator

To remove Contour Operator from your cluster, delete the operator's namespace:

```bash
$ kubectl delete ns contour-operator
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour.yaml
[3]: #host-networking
[4]: guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started
[9]: config/fundamentals.md
[10]: guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: https://github.com/projectcontour/contour-operator
[13]: https://projectcontour.io/resources/deprecation-policy/
[14]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-deployment.yaml
[15]: /resources/upgrading/
[16]: https://projectcontour.io/getting-started/#option-3-contour-gateway-provisioner-alpha
[17]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour
[18]: guides/gateway-api/#next-steps