---
title: Using Gateway API with Contour
layout: page
---

## Introduction

[Gateway API][1] is an open source project managed by the Kubernetes SIG-NETWORK community. The project's goal is to
evolve service networking APIs within the Kubernetes ecosystem. Gateway API consists of multiple resources that provide
user interfaces for exposing Kubernetes applications: Services, Ingress, and more.

This guide covers using version **v1** of the Gateway API, with Contour `v1.28.0` or higher.

### Background

Gateway API targets three personas:

- __Platform Provider__: The Platform Provider is responsible for the overall environment that the cluster runs in, i.e.
  the cloud provider. The Platform Provider will interact with `GatewayClass` resources.
- __Platform Operator__: The Platform Operator is responsible for overall cluster administration. They manage policies,
  network access, and application permissions, and will interact with `Gateway` resources.
- __Service Operator__: The Service Operator is responsible for defining application configuration and service
  composition. They will interact with `HTTPRoute` and `TLSRoute` resources and other typical Kubernetes resources.

Gateway API contains three primary resources:

- __GatewayClass__: Defines a set of Gateways with a common configuration and behavior.
- __Gateway__: Requests a point where traffic can be translated to a Service within the cluster.
- __HTTPRoute/TLSRoute__: Describes how traffic coming via the Gateway maps to the Services.

Resources are meant to align with personas. For example, a platform operator will create a `Gateway` so that a developer can
expose an HTTP application using an `HTTPRoute` resource.

### Prerequisites
The following prerequisites must be met before using Gateway API with Contour:

- A working [Kubernetes][2] cluster.
  Refer to the [compatibility matrix][3] for cluster version requirements.
- The [kubectl][4] command-line tool, installed and configured to access your cluster.

## Deploying Contour with Gateway API

Contour supports two modes of provisioning for use with Gateway API: **static** and **dynamic**.

In **static** provisioning, the platform operator defines a `Gateway` resource, and then manually deploys a Contour instance corresponding to that `Gateway` resource.
It is up to the platform operator to ensure that all configuration matches between the `Gateway` and the Contour/Envoy resources.
With static provisioning, Contour can be configured with either a [controller name][8] or a specific Gateway (see the [API documentation][7]).
If configured with a controller name, Contour will process the oldest `GatewayClass` for that controller name, that `GatewayClass`'s oldest `Gateway`, and that `Gateway`'s routes.
If configured with a specific Gateway, Contour will process that `Gateway` and its routes.

**Note:** configuring Contour with a controller name is deprecated and will be removed in a future release. Use a specific Gateway reference or dynamic provisioning instead.

In **dynamic** provisioning, the platform operator first deploys Contour's Gateway provisioner. The platform operator then defines a `Gateway` resource, and the provisioner automatically deploys a Contour instance that corresponds to the `Gateway`'s configuration and will process that `Gateway` and its routes.

Static provisioning may be more appropriate for users who prefer the traditional model of deploying Contour, have just a single Contour instance, or have highly customized YAML for deploying Contour.
Dynamic provisioning may be more appropriate for users who want a simple declarative API for provisioning Contour instances.
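As a sketch of the non-deprecated static option, the specific-Gateway form of the Contour configuration looks roughly like the following (field names per the `GatewayConfig` API; verify them against the API documentation for your Contour version):

```yaml
# Contour config file, e.g. the contour.yaml key of the Contour ConfigMap.
# Sketch only: points Contour at one specific Gateway instead of a
# controller name, so Contour processes exactly that Gateway and its routes.
gateway:
  gatewayRef:
    namespace: projectcontour
    name: contour
```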
### Option #1: Statically provisioned

Create the Gateway API CRDs:
```shell
$ kubectl apply -f {{< param github_raw_url>}}/{{< param latest_version >}}/examples/gateway/00-crds.yaml
```

Create a GatewayClass:
```shell
kubectl apply -f - <<EOF
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
spec:
  controllerName: projectcontour.io/gateway-controller
EOF
```

Create a Gateway in the `projectcontour` namespace:
```shell
kubectl apply -f - <<EOF
kind: Namespace
apiVersion: v1
metadata:
  name: projectcontour
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: contour
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
EOF
```

Deploy Contour:
```shell
$ kubectl apply -f {{< param base_url >}}/quickstart/contour.yaml
```
This command creates:

- Namespace `projectcontour` to run Contour
- Contour CRDs
- Contour RBAC resources
- Contour Deployment / Service
- Envoy DaemonSet / Service
- Contour ConfigMap

Update the Contour ConfigMap to enable Gateway API processing by specifying a Gateway controller name, and restart Contour to pick up the config change:

```shell
kubectl apply -f - <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: contour
  namespace: projectcontour
data:
  contour.yaml: |
    gateway:
      controllerName: projectcontour.io/gateway-controller
EOF

kubectl -n projectcontour rollout restart deployment/contour
```

See the next section ([Testing the Gateway API](#testing-the-gateway-api)) for how to deploy an application and route traffic to it using Gateway API!
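With Contour processing the Gateway, application teams can attach routes to it. As an illustrative sketch only (the route name, hostname, and backend Service here are hypothetical, not part of the steps above), an HTTPRoute binding to the `contour` Gateway might look like:

```yaml
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: my-app            # hypothetical route name
  namespace: default
spec:
  parentRefs:
    - namespace: projectcontour
      name: contour       # the Gateway created above
  hostnames:
    - my-app.example.com  # hypothetical hostname
  rules:
    - backendRefs:
        - name: my-app    # hypothetical Service in the route's namespace
          port: 80
```

The Gateway's `allowedRoutes.namespaces.from: All` setting above is what permits a route in the `default` namespace to attach to a Gateway in `projectcontour`.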
### Option #2: Dynamically provisioned

Deploy the Gateway provisioner:
```shell
$ kubectl apply -f {{< param base_url >}}/quickstart/contour-gateway-provisioner.yaml
```

This command creates:

- Namespace `projectcontour` to run the Gateway provisioner
- Contour CRDs
- Gateway API CRDs
- Gateway provisioner RBAC resources
- Gateway provisioner Deployment

Create a GatewayClass:

```shell
kubectl apply -f - <<EOF
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
spec:
  controllerName: projectcontour.io/gateway-controller
EOF
```

Create a Gateway:

```shell
kubectl apply -f - <<EOF
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: contour
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
EOF
```

The above creates:
- A `GatewayClass` named `contour` controlled by the Gateway provisioner (via the `projectcontour.io/gateway-controller` string)
- A `Gateway` resource named `contour` in the `projectcontour` namespace, using the `contour` GatewayClass
- Contour and Envoy resources in the `projectcontour` namespace to implement the `Gateway`, i.e. a Contour Deployment, an Envoy DaemonSet, an Envoy Service, etc.

See the next section ([Testing the Gateway API](#testing-the-gateway-api)) for how to deploy an application and route traffic to it using Gateway API!

## Testing the Gateway API

Deploy the test application:
```shell
$ kubectl apply -f {{< param github_raw_url>}}/{{< param latest_version >}}/examples/example-workload/gatewayapi/kuard/kuard.yaml
```
This command creates:

- A Deployment named `kuard` in the default namespace to run kuard as the test application.
- A Service named `kuard` in the default namespace to expose the kuard application on TCP port 80.
- An HTTPRoute named `kuard` in the default namespace, attached to the `contour` Gateway, to route requests for `local.projectcontour.io` to the kuard Service.

Verify the kuard resources are available:
```shell
$ kubectl get po,svc,httproute -l app=kuard
NAME                         READY   STATUS    RESTARTS   AGE
pod/kuard-798585497b-78x6x   1/1     Running   0          21s
pod/kuard-798585497b-7gktg   1/1     Running   0          21s
pod/kuard-798585497b-zw42m   1/1     Running   0          21s

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   172.30.168.168   <none>        80/TCP    21s

NAME                                        HOSTNAMES
httproute.gateway.networking.k8s.io/kuard   ["local.projectcontour.io"]
```

Test access to the kuard application:

_Note: for simplicity and compatibility across all platforms, we'll use `kubectl port-forward` to get traffic to Envoy, but in a production environment you would typically use the Envoy service's address._

Port-forward from your local machine to the Envoy service:
```shell
# If using static provisioning
$ kubectl -n projectcontour port-forward service/envoy 8888:80

# If using dynamic provisioning
$ kubectl -n projectcontour port-forward service/envoy-contour 8888:80
```

In another terminal, make a request to the application via the forwarded port (note: `local.projectcontour.io` is a public DNS record resolving to 127.0.0.1, to make use of the forwarded port):
```shell
$ curl -i http://local.projectcontour.io:8888
```
You should receive a 200 response code along with the HTML body of the main `kuard` page.

You can also open http://local.projectcontour.io:8888/ in a browser.

## Next Steps

### Customizing your dynamically provisioned Contour instances

In the dynamic provisioning example, we used a default set of options for provisioning the Contour gateway.
However, Gateway API also [supports attaching parameters to a GatewayClass][5], which can customize the Gateways that are provisioned for that GatewayClass.

Contour defines a CRD called `ContourDeployment`, which can be used as `GatewayClass` parameters.

A simple example of a parameterized Contour GatewayClass that provisions Envoy as a Deployment instead of the default DaemonSet looks like:

```yaml
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour-with-envoy-deployment
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    kind: ContourDeployment
    group: projectcontour.io
    name: contour-with-envoy-deployment-params
    namespace: projectcontour
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  namespace: projectcontour
  name: contour-with-envoy-deployment-params
spec:
  envoy:
    workloadType: Deployment
```

All Gateways provisioned using the `contour-with-envoy-deployment` GatewayClass would get an Envoy Deployment.

See [the API documentation][6] for all `ContourDeployment` options.

### Further reading

This guide only scratches the surface of the Gateway API's capabilities. See the [Gateway API website][1] for more information.


[1]: https://gateway-api.sigs.k8s.io/
[2]: https://kubernetes.io/
[3]: https://projectcontour.io/resources/compatibility-matrix/
[4]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[5]: https://gateway-api.sigs.k8s.io/api-types/gatewayclass/#gatewayclass-parameters
[6]: https://projectcontour.io/docs/main/config/api/#projectcontour.io/v1alpha1.ContourDeployment
[7]: https://projectcontour.io/docs/main/config/api/#projectcontour.io/v1alpha1.GatewayConfig
[8]: https://gateway-api.sigs.k8s.io/api-types/gatewayclass/#gatewayclass-controller-selection