
The [Getting Started][8] guide shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `type: LoadBalancer`.
If you don't have a cluster with that capability, see the [Running without a Kubernetes LoadBalancer][1] section.

## Installation

### Recommended installation details
The recommended installation of Contour is Contour running in a Deployment and Envoy in a DaemonSet, with TLS securing the gRPC communication between them.
The [`contour` example][2] will install this for you.
A Service of `type: LoadBalancer` is also set up to forward traffic to the Envoy instances.

If you wish to use Host Networking, please see the [appropriate section][3] for the details.
## Testing your installation

### Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

```bash
$ kubectl get -n projectcontour service contour -o wide
```

On AWS, for example, the response looks like:

```
NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour
```

Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
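If you want to capture that value in a script, the `EXTERNAL-IP` column can be pulled out of the wide output with ordinary shell tools. A minimal sketch, run here against the sample AWS line from above (the `CONTOUR_ADDRESS` variable name is just an illustration; in practice you would pipe the real `kubectl get ... -o wide` output through the same `awk`):

```shell
# Hypothetical sketch: extract the EXTERNAL-IP column (third field) from a
# `kubectl get service -o wide` data row, using the sample output shown above.
line='contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour'
CONTOUR_ADDRESS=$(echo "$line" | awk '{print $3}')
echo "$CONTOUR_ADDRESS"
```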

Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections.
See the [instructions for enabling the PROXY protocol][4].

#### Minikube

On Minikube, to get the IP address of the Contour service, run:

```bash
$ minikube service -n projectcontour contour --url
```

The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as `CONTOUR_IP` in the rest of the documentation.
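To reuse that value in the commands below, you can strip the `http://` scheme and export the remainder as `CONTOUR_IP`. A minimal sketch, assuming the URL shape shown above:

```shell
# Hypothetical sketch: turn the URL printed by `minikube service --url`
# into the host:port form used as CONTOUR_IP in the rest of this page.
URL="http://192.168.99.100:30588"   # in practice: URL=$(minikube service -n projectcontour contour --url)
CONTOUR_IP="${URL#http://}"         # strip the scheme, leaving host:port
echo "$CONTOUR_IP"
```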

#### kind

When creating the cluster on kind, pass a custom configuration to allow kind to expose port 8080 to your local host:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 8080
    hostPort: 8080
    listenAddress: "0.0.0.0"
```

Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:

```bash
$ kind create cluster --config examples/kind/kind-expose-port.yaml
```

Then, your `CONTOUR_IP` (as used below) will just be `localhost:8080`.

_Note: If you change Envoy's ports to bind to 80/443, then it's possible to add entries to your local `/etc/hosts` file and make requests like `http://kuard.local`, which matches how it might work on a production installation._

### Test with Ingress

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, [kuard][5].
To test your Contour deployment, deploy `kuard` with the following command:

```bash
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
```

Then monitor the progress of the deployment with:

```bash
$ kubectl get po,svc,ing -l app=kuard
```

You should see something like:

```
NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m
```

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).

In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.

### Test with IngressRoute

To test your Contour deployment with [IngressRoutes][6], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-ingressroute.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,ingressroute -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                    CREATED AT
ingressroute.contour.heptio.com/kuard   1h
```

... showing that there are three Pods, one Service, and one IngressRoute.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

### Test with HTTPProxy

To test your Contour deployment with [HTTPProxy][9], run the following command:

```sh
$ kubectl apply -f https://projectcontour.io/examples/kuard-httpproxy.yaml
```

Then monitor the progress of the deployment with:

```sh
$ kubectl get po,svc,httpproxy -l app=kuard
```

You should see something like:

```
NAME                        READY     STATUS    RESTARTS   AGE
pod/kuard-bcc7bf7df-9hj8d   1/1       Running   0          1h
pod/kuard-bcc7bf7df-bkbr5   1/1       Running   0          1h
pod/kuard-bcc7bf7df-vkbtl   1/1       Running   0          1h

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   10.102.239.168   <none>        80/TCP    1h

NAME                                FQDN          TLS SECRET                  FIRST ROUTE   STATUS   STATUS DESCRIPTION
httpproxy.projectcontour.io/kuard   kuard.local   <SECRET NAME IF TLS USED>                 valid    valid HTTPProxy
```

... showing that there are three Pods, one Service, and one HTTPProxy.

In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:

```sh
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
```

## Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.

### NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the [`02-service-envoy.yaml`][7] file and set `type` to `NodePort`.

This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the `PORT(S)` column when listing the service, for example `30274` in `80:30274/TCP`.
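That second number can also be extracted with plain shell string handling. A minimal sketch, using the sample `80:30274/TCP` value from above (in practice you would read the real value from `kubectl get service`):

```shell
# Hypothetical sketch: extract the node port from a PORT(S) entry like "80:30274/TCP".
PORTS="80:30274/TCP"
NODE_PORT="${PORTS#*:}"      # drop everything up to the first ":" -> "30274/TCP"
NODE_PORT="${NODE_PORT%/*}"  # drop the "/TCP" suffix -> "30274"
echo "$NODE_PORT"
```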

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

### Host Networking

You can run Contour without a Kubernetes Service at all.
This is done by having the Contour pod run with host networking.
Do this with `hostNetwork: true` on your pod definition.
Envoy will listen directly on port 8080 on each host where it is running.
This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Contour runs on each Node.
See the [AWS NLB tutorial][10] as an example.
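As a sketch, the relevant part of such a DaemonSet spec might look like the fragment below. The names, labels, and image are placeholders, not the actual example manifests; start from the shipped examples and add the `hostNetwork` settings:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy                # placeholder name
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      hostNetwork: true                     # bind Envoy directly to the host's network
      dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS resolution working with hostNetwork
      containers:
      - name: envoy
        image: envoyproxy/envoy             # placeholder; pin the tag used by the example manifests
        ports:
        - containerPort: 8080
          hostPort: 8080
```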

## Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime.
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.
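For example, an Ingress claimed by Contour might carry the annotation like this (a sketch: the backend name and port are placeholders, and the `apiVersion` may differ depending on your cluster version):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"   # claimed by Contour; other controllers ignore it
spec:
  backend:
    serviceName: kuard   # placeholder backend Service
    servicePort: 80
```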

## Uninstall Contour

To remove Contour from your cluster, delete the namespace:

```bash
$ kubectl delete ns projectcontour
```

[1]: #running-without-a-kubernetes-loadbalancer
[2]: {{< param github_url >}}/tree/{{page.version}}/examples/contour/README.md
[3]: #host-networking
[4]: {% link _guides/proxy-proto.md %}
[5]: https://github.com/kubernetes-up-and-running/kuard
[6]: {% link docs/v1.0.1/ingressroute.md %}
[7]: {{< param github_url >}}/tree/{{page.version}}/examples/contour/02-service-envoy.yaml
[8]: {% link getting-started.md %}
[9]: {% link docs/v1.0.0/httpproxy.md %}
[10]: {% link _guides/deploy-aws-nlb.md %}