github.com/projectcontour/contour@v1.28.2/site/content/docs/v1.18.0/deploy-options.md

# Deployment Options

The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

### Recommended installation details

The recommended installation is for Contour to run as a Deployment and Envoy to run as a DaemonSet. A Secret containing
TLS certificates should be used to secure the gRPC communication between them. A Service of `type: LoadBalancer` should
also be created to forward traffic to the Envoy instances. The [example manifest][2] or [Contour Operator][12] will
create an installation based on these recommendations.

__Note:__ Contour Operator is alpha and therefore follows the Contour [deprecation policy][13].

If you wish to use Host Networking, please see the [appropriate section][3] for the details.

## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service envoy -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.

Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol.][4]

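If you'd rather capture the value than copy it by hand, you can query the Service status directly. A minimal sketch using `kubectl`'s JSONPath output; a load balancer reports either a `hostname` (AWS) or an `ip` (most other providers), and concatenating the two expressions prints whichever one is set:

```bash
$ export CONTOUR_IP=$(kubectl get -n projectcontour service envoy \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}')
$ echo $CONTOUR_IP
```

The `CONTOUR_IP` variable can then be reused in the curl examples later in this document.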
#### Minikube

On Minikube, to get the IP address of the Contour service run:

```bash
$ minikube service -n projectcontour envoy --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as `CONTOUR_IP` in the rest of the documentation.

#### kind

When creating the cluster on kind, pass a custom configuration to allow kind to expose port 80/443 to your local host:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
```

Then run the create cluster command passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your `CONTOUR_IP` (as used below) will just be `localhost:80`.

_Note: We've created a public DNS record (`local.projectcontour.io`) which is configured to resolve to `127.0.0.1`. This allows you to use a real domain name in your kind cluster._

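Because that record resolves to `127.0.0.1` and kind maps host ports 80/443 into the cluster, requests to the domain from your host reach Contour directly. A sketch, assuming a workload such as the kuard example below has already been deployed:

```bash
$ curl -i http://local.projectcontour.io/
```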
### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:
```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                    FQDN                TLS SECRET                  FIRST ROUTE  STATUS  STATUS DESCRIPT
httpproxy.projectcontour.io/kuard      kuard.local         <SECRET NAME IF TLS USED>                valid   valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to the Envoy instances.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

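Rather than reading the port out of the table, you can query it directly. A sketch that assumes the Envoy Service's HTTP port is named `http`, as it is in the example manifests:

```bash
$ kubectl get -n projectcontour service envoy \
    -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
```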
### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Envoy pod run with host networking.
Contour's examples utilize this model in the `/examples` directory.
To configure it, set `hostNetwork: true` and `dnsPolicy: ClusterFirstWithHostNet` on your Envoy pod definition.
Next, pass `--envoy-service-http-port=80 --envoy-service-https-port=443` to the contour `serve` command, which instructs Envoy to listen directly on port 80/443 on each host that it runs on.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Envoy runs on each Node.
See the [AWS NLB tutorial][10] as an example.

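The two pod-level settings above look like this in a DaemonSet spec. This is an illustrative fragment only, not a complete manifest; containers, volumes, and the rest come from the example manifests:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  template:
    spec:
      hostNetwork: true                   # bind Envoy's listeners directly on the node
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution working with hostNetwork
```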
### Upgrading Contour/Envoy

At times you may need to upgrade Contour, the version of Envoy, or both.
The included `shutdown-manager` can assist by watching Envoy for open connections while draining, and signals back to Kubernetes when it is safe to delete an Envoy pod during this process.

See the [redeploy envoy][11] docs for more information.

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime.
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.

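For example, an Ingress claimed by Contour might carry the annotation like this (the resource name and backend Service here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"  # claimed by Contour, ignored by other controllers
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard
            port:
              number: 80
```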
## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

**Note**: The namespace may differ from above if [Contour Operator][12] was used to
deploy Contour.

## Uninstall Contour Operator

To remove Contour Operator from your cluster, delete the operator's namespace:

```bash
$ kubectl delete ns contour-operator
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url>}}/tree/{{< param version >}}/examples/contour
[3]: #host-networking
[4]: /guides/proxy-proto.md
[5]: https://github.com/kubernetes-up-and-running/kuard
[7]: {{< param github_url>}}/tree/{{< param version >}}/examples/contour/02-service-envoy.yaml
[8]: /getting-started.md
[9]: config/fundamentals.md
[10]: /guides/deploy-aws-nlb.md
[11]: redeploy-envoy.md
[12]: https://github.com/projectcontour/contour-operator
[13]: https://projectcontour.io/resources/deprecation-policy/