
# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `Type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

### Recommended installation details

The recommended installation of Contour is Contour running in a Deployment and Envoy in a DaemonSet, with TLS securing the gRPC communication between them.
The [`contour` example][2] will install this for you.
A Service of `type: LoadBalancer` is also set up to forward traffic to the Envoy instances.

If you wish to use Host Networking, please see the [appropriate section][3] for the details.

## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.

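If you want just the `EXTERNAL-IP` column for scripting, it is the third whitespace-separated field of the data row; a minimal sketch using `awk` against the sample row above (a live cluster would pipe the `kubectl` output instead):

```shell
# Sample data row from the `kubectl get service` output above (AWS ELB case)
row='contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h   app=contour'

# EXTERNAL-IP is the third whitespace-separated column
external_ip="$(echo "$row" | awk '{print $3}')"
echo "$external_ip"
```
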
Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol][4].

#### Minikube

On Minikube, to get the IP address of the Contour service, run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is a URL, for example `http://192.168.99.100:30588`. This is used as CONTOUR_IP in the rest of the documentation.

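The `--url` output can be turned into that CONTOUR_IP value by stripping the scheme; a small shell sketch using the example URL above (substitute the URL your Minikube printed):

```shell
# Example URL as printed by `minikube service -n projectcontour envoy --url`
url="http://192.168.99.100:30588"

# Drop the "http://" scheme to get the host:port pair used as CONTOUR_IP
CONTOUR_IP="${url#http://}"
echo "$CONTOUR_IP"
```
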
#### kind

When creating the cluster on kind, pass a custom configuration to allow kind to expose ports 80 and 443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your CONTOUR_IP (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with IngressRoute

To test your Contour deployment with [IngressRoutes][6], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-ingressroute.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,ingressroute -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                    CREATED AT
ingressroute.contour.heptio.com/kuard   1h
```

... showing that there are three Pods, one Service, and one IngressRoute.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  FIRST ROUTE   STATUS   STATUS DESCRIPT
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>                 valid    valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.

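That second number can be extracted with plain shell parameter expansion; a sketch using the example `PORT(S)` value above:

```shell
# Example PORT(S) value from the `kubectl get service` output
ports="80:30274/TCP"

# The NodePort is the number between ":" and "/"
node_port="${ports#*:}"      # drop the leading "80:"
node_port="${node_port%%/*}" # drop the trailing "/TCP"
echo "$node_port"
```
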
Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the contour `serve` command, which instructs Envoy to listen directly on ports 80/443 on each host it runs on.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the [AWS NLB tutorial][10] as an example.

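As a sketch, the relevant fields in the Envoy DaemonSet pod spec look like this (fragment only; the two field names come from the text above, surrounding fields are elided):

```yaml
# Fragment of the Envoy DaemonSet pod template -- not a complete manifest
spec:
  template:
    spec:
      hostNetwork: true                    # bind Envoy's listeners directly on the host
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working with host networking
```
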
### Upgrading Contour/Envoy

At times you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can assist by watching Envoy for open connections while draining, and by signaling to Kubernetes when it is safe to delete Envoy pods during this process.

See the [redeploy Envoy][11] docs for more information.

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime.
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.

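For example, an Ingress that Contour should claim would carry the annotation like this (a sketch; the `kuard` backend names are illustrative, and the `networking.k8s.io/v1beta1` API version is an assumption matching clusters of this doc's era):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"   # claimed by Contour
spec:
  backend:
    serviceName: kuard   # illustrative backend Service
    servicePort: 80
```
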
## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url >}}/tree/{{page.version}}/examples/contour/README.md
[3]: #host-networking
[4]: {% link _guides/proxy-proto.md %}
[5]: https://github.com/kubernetes-up-and-running/kuard
[6]: /docs/{{page.version}}/ingressroute
[7]: {{< param github_url >}}/tree/{{page.version}}/examples/contour/02-service-envoy.yaml
[8]: {% link getting-started.md %}
[9]: httpproxy.md
[10]: {% link _guides/deploy-aws-nlb.md %}
[11]: redeploy-envoy.md