
# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `Type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

Contour requires a secret containing the TLS certificates that are used to secure the gRPC communication between Contour and Envoy.
This secret can be auto-generated by the Contour `certgen` job or provided by an administrator.
Traffic must be forwarded to Envoy, typically via a Service of `type: LoadBalancer`.
All other requirements, such as RBAC permissions and configuration details, are provided or have good defaults for most installations.

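If an administrator provides the certificates instead of relying on the `certgen` job, they are supplied as ordinary `kubernetes.io/tls` secrets. A sketch of what one might look like (the secret names `contourcert` and `envoycert` are taken from the example manifests; the exact keys expected may vary by version):

```yaml
# Sketch only: an administrator-provided TLS secret for Contour.
# Envoy's certificate lives in a second secret, "envoycert", with the same shape.
apiVersion: v1
kind: Secret
metadata:
  name: contourcert
  namespace: projectcontour
type: kubernetes.io/tls
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```
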
### Envoy as DaemonSet

The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet.
The example DaemonSet places a single instance of Envoy per node in the cluster and attaches to `hostPorts` on each node.
This model allows for simple scaling of Envoy instances and ensures an even distribution of instances across the cluster.

The [example daemonset manifest][2] or [Contour Operator][12] will create an installation based on these recommendations.

_Note: If the size of the cluster is scaled down, connections can be lost since Kubernetes DaemonSets do not follow proper `preStop` hooks._
_Note: Contour Operator is alpha and therefore follows the Contour [deprecation policy][13]._

### Envoy as Deployment

An alternative model is to run Envoy as a Kubernetes Deployment with a `podAntiAffinity` rule that attempts to mirror the DaemonSet deployment model.
A benefit of this model over the DaemonSet version is that when a node is removed from the cluster, the proper shutdown events are available, so connections can be cleanly drained from Envoy before it terminates.

The [example deployment manifest][14] will create an installation based on these recommendations.

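The `podAntiAffinity` idea can be sketched as follows (label names are illustrative; the example deployment manifest contains the authoritative spec):

```yaml
# Sketch: prefer to schedule Envoy replicas on different nodes,
# approximating the one-Envoy-per-node layout of the DaemonSet model.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: envoy
        topologyKey: kubernetes.io/hostname
```
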
## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.

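If you want to capture this value in a shell variable for the commands below, one option is to parse the tabular output (the sample here is the AWS response shown above; in practice you would pipe `kubectl` straight into `awk`, or use `-o jsonpath` instead):

```shell
# Grab the third column (EXTERNAL-IP) from the second line of the output.
# "sample" stands in for the real `kubectl get ... -o wide` response.
sample='NAME      CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h'
CONTOUR_IP=$(printf '%s\n' "$sample" | awk 'NR==2 {print $3}')
echo "$CONTOUR_IP"
```
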
Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol.][4]

#### Minikube

On Minikube, to get the IP address of the Contour service run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as CONTOUR_IP in the rest of the documentation.

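Since CONTOUR_IP is used elsewhere in these docs without a scheme, you may want to strip the `http://` prefix from the URL Minikube prints (the URL below is the example value from above):

```shell
# Example URL as printed by `minikube service --url`
url="http://192.168.99.100:30588"
# Strip the scheme to get the host:port form used as CONTOUR_IP
CONTOUR_IP="${url#http://}"
echo "$CONTOUR_IP"   # prints 192.168.99.100:30588
```
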
#### kind

When creating a cluster with kind, pass a custom configuration to allow kind to expose ports 80/443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your CONTOUR_IP (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  FIRST ROUTE   STATUS   STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>                 valid    valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

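The edit amounts to changing a single field in the Envoy Service (shown here as a sketch; all other fields in the manifest stay unchanged):

```yaml
# Sketch of the relevant part of 02-service-envoy.yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
spec:
  type: NodePort   # changed from LoadBalancer
```
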
This will have every node in your cluster listen on the resultant port and forward traffic to Envoy.
That port can be discovered by taking the second number listed in the `PORT` column when listing the service, for example `30274` in `80:30274/TCP`.

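For scripting, the node port can be split out of the `PORT(S)` value with plain shell parameter expansion (the value below is the example from above; in practice you would read it from `kubectl get service`):

```shell
# Example PORT(S) value: "<service port>:<node port>/<protocol>"
ports="80:30274/TCP"
node_port="${ports#*:}"       # drop everything up to and including the first ":"
node_port="${node_port%%/*}"  # drop the "/TCP" protocol suffix
echo "$node_port"   # prints 30274
```
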
Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the contour `serve` command, which instructs Envoy to listen directly on ports 80/443 on each host that it runs on.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the [AWS NLB tutorial][10] as an example.

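The host-networking settings described above amount to two fields on the Envoy pod template, sketched here (container details omitted; see the `/examples` directory for complete manifests):

```yaml
# Sketch: Envoy DaemonSet pod template fields for host networking
spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```
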
## Upgrading Contour/Envoy

At times you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can watch Envoy for open connections while draining, and signal back to Kubernetes when it is safe to delete Envoy pods during this process.

See the [redeploy envoy][11] docs for more information about how to avoid dropping active connections to Envoy.
Also see the [upgrade guides][15] for steps to roll out a new version of Contour.

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime.
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.

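For example, an Ingress that Contour will claim might carry the annotation like this (resource and service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"   # claimed by Contour
spec:
  defaultBackend:
    service:
      name: kuard
      port:
        number: 80
```
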
## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```
**Note**: The namespace may differ from above if [Contour Operator][12] was used to
deploy Contour.

## Uninstall Contour Operator

To remove Contour Operator from your cluster, delete the operator's namespace:

```bash
$ kubectl delete ns contour-operator
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour.yaml
[3]: #host-networking
[4]: /guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url>}}/tree/{{< param branch >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started.md
[9]: config/fundamentals.md
[10]: /guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: https://github.com/projectcontour/contour-operator
[13]: https://projectcontour.io/resources/deprecation-policy/
[14]: {{< param github_url>}}/tree/{{< param branch >}}/examples/render/contour-deployment.yaml
[15]: /resources/upgrading/