github.com/projectcontour/contour@v1.28.2/site/content/docs/v1.17.0/config/request-routing.md

# Request Routing

An HTTPProxy object must have at least one route or include defined.
In this example, any requests to `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*` will be routed to the Service `s2`.
All other requests to the host `multi-path.bar.com` will be routed to the Service `s1`.

```yaml
# httpproxy-multiple-paths.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - prefix: / # matches everything else
      services:
        - name: s1
          port: 80
    - conditions:
      - prefix: /blog # matches `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*`
      services:
        - name: s2
          port: 80
```

In the following example, we match on headers and route to different services, with a default route if none of the headers match.

```yaml
# httpproxy-multiple-headers.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-headers
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - header:
          name: x-os
          contains: ios
      services:
        - name: s1
          port: 80
    - conditions:
      - header:
          name: x-os
          contains: android
      services:
        - name: s2
          port: 80
    - services:
        - name: s3
          port: 80
```

## Conditions

Each Route entry in an HTTPProxy **may** contain one or more conditions.
These conditions are combined with an AND operator on the route passed to Envoy.
Conditions can be either a `prefix` or a `header` condition.

#### Prefix conditions

Paths defined are matched using prefix conditions.
Up to one prefix condition may be present in any condition block.

Prefix conditions **must** start with a `/` if they are present.
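Since multiple conditions on a route are ANDed together, a prefix and a header condition can be combined on one route. The sketch below is illustrative only (the fqdn `conditions.bar.com`, the `x-canary` header, and the Service `s1` are invented for this example): it matches requests to `/api` that *also* carry the header `x-canary: "true"`.

```yaml
# httpproxy-combined-conditions.yaml (illustrative sketch)
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: combined-conditions
  namespace: default
spec:
  virtualhost:
    fqdn: conditions.bar.com
  routes:
    - conditions:
      # both conditions must hold (AND) for this route to match
      - prefix: /api
      - header:
          name: x-canary
          exact: "true"
      services:
        - name: s1
          port: 80
```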
#### Header conditions

For `header` conditions there is one required field, `name`, and six operator fields: `present`, `notpresent`, `contains`, `notcontains`, `exact`, and `notexact`.

- `present` is a boolean and checks that the header is present. The value will not be checked.

- `notpresent` similarly checks that the header is *not* present.

- `contains` is a string, and checks that the header contains the string. `notcontains` similarly checks that the header does *not* contain the string.

- `exact` is a string, and checks that the header exactly matches the whole string. `notexact` checks that the header does *not* exactly match the whole string.

## Multiple Upstreams

One of the key HTTPProxy features is the ability to support multiple services for a given path:

```yaml
# httpproxy-multiple-upstreams.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-upstreams
  namespace: default
spec:
  virtualhost:
    fqdn: multi.bar.com
  routes:
    - services:
        - name: s1
          port: 80
        - name: s2
          port: 80
```

In this example, requests for `multi.bar.com/` will be load balanced across two Kubernetes Services, `s1` and `s2`.
This is helpful when you need to split traffic for a given URL across two different versions of an application.

### Upstream Weighting

Building on multiple upstreams is the ability to define relative weights for upstream Services.
This is commonly used for canary testing of new versions of an application when you want to send a small fraction of traffic to a specific Service.
```yaml
# httpproxy-weight-shifting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: weight-shifting
  namespace: default
spec:
  virtualhost:
    fqdn: weights.bar.com
  routes:
    - services:
        - name: s1
          port: 80
          weight: 10
        - name: s2
          port: 80
          weight: 90
```

In this example, we are sending 10% of the traffic to Service `s1`, while Service `s2` receives the remaining 90% of traffic.

HTTPProxy weighting follows some specific rules:

- If no weights are specified for a given route, an even distribution across the Services is assumed.
- Weights are relative and do not need to add up to 100. If all weights for a route are specified, then the "total" weight is the sum of those specified. For example, if the weights for three upstreams are 20, 30, and 20, the total weight is 70. The upstream with weight 30 would then receive approximately 42.9% of traffic (30/70 ≈ 0.429).
- If some weights are specified but others are not, the upstreams without weights are given an implicit weight of zero and thus will not receive traffic.

### Traffic mirroring

Per route, a service can be nominated as a mirror.
The mirror service will receive a copy of the traffic sent to any non-mirror service.
The mirror traffic is considered _read only_; any response by the mirror will be discarded.

This can be useful for recording traffic for later replay or for smoke testing new deployments.
```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: traffic-mirror
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: www
          port: 80
        - name: www-mirror
          port: 80
          mirror: true
```

## Response Timeouts

Each Route can be configured to have a timeout policy and a retry policy as shown:

```yaml
# httpproxy-response-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: response-timeout
  namespace: default
spec:
  virtualhost:
    fqdn: timeout.bar.com
  routes:
    - timeoutPolicy:
        response: 1s
        idle: 10s
      retryPolicy:
        count: 3
        perTryTimeout: 150ms
      services:
        - name: s1
          port: 80
```

In this example, requests to `timeout.bar.com/` will have a response timeout policy of 1s.
This refers to the time that spans between the point at which the complete client request has been processed by the proxy and the point at which the response from the server has been completely processed.

- `timeoutPolicy.response` This field can be any positive time period or "infinity".
This timeout covers the time from the *end of the client request* to the *end of the upstream response*.
By default, Envoy has a 15 second value for this timeout.
More information can be found in [Envoy's documentation][4].
Note that a value of **0s** will be treated as if the field were not set, i.e. by using Envoy's default behavior.
- `timeoutPolicy.idle` This field can be any positive time period or "infinity".
By default, there is no per-route idle timeout.
Note that the default connection manager idle timeout of 5 minutes will apply if this is not set.
More information can be found in [Envoy's documentation][6].
Note that a value of **0s** will be treated as if the field were not set, i.e.
by using Envoy's default behavior.

TimeoutPolicy durations are expressed as per the format specified in the [ParseDuration documentation][5].
Example input values: "300ms", "5s", "1m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
The string "infinity" is also a valid input and specifies no timeout.

- `retryPolicy`: A retry will be attempted if the server returns an error code in the 5xx range, or if the server takes more than `retryPolicy.perTryTimeout` to process a request.

- `retryPolicy.count` specifies the maximum number of retries allowed. This parameter is optional and defaults to 1.

- `retryPolicy.perTryTimeout` specifies the timeout per retry. If this field is greater than the request timeout, it is ignored. This parameter is optional.
If left unspecified, `timeoutPolicy.response` will be used.

## Load Balancing Strategy

Each route can have a load balancing strategy applied to determine which of its Endpoints is selected for the request.
The following strategies are available:

- `RoundRobin`: Each healthy upstream Endpoint is selected in round-robin order (the default strategy if none is specified).
- `WeightedLeastRequest`: The least-request load balancer uses different algorithms depending on whether hosts have the same or different weights, in an attempt to route traffic based on the number of active requests or the load at the time of selection.
- `Random`: The random strategy selects a random healthy Endpoint.
- `RequestHash`: The request hashing strategy allows for load balancing based on request attributes. An upstream Endpoint is selected based on the hash of an element of a request. For example, requests that contain a consistent value in an HTTP request header will be routed to the same upstream Endpoint. Currently only hashing of HTTP request headers is supported.
- `Cookie`: The cookie load balancing strategy is similar to the request hash strategy and is a convenience feature to implement session affinity, as described below.

More information on the load balancing strategy can be found in [Envoy's documentation][7].

The following example defines the strategy for the route `/` as `WeightedLeastRequest`.

```yaml
# httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-strategy
  namespace: default
spec:
  virtualhost:
    fqdn: strategy.bar.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: s1-strategy
          port: 80
        - name: s2-strategy
          port: 80
      loadBalancerPolicy:
        strategy: WeightedLeastRequest
```

The example below demonstrates how header hash load balancing policies can be configured:

```yaml
# httpproxy-lb-request-hash.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-request-hash
  namespace: default
spec:
  virtualhost:
    fqdn: request-hash.bar.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: httpbin
          port: 8080
      loadBalancerPolicy:
        strategy: RequestHash
        requestHashPolicies:
          - headerHashOptions:
              headerName: X-Some-Header
            terminal: true
          - headerHashOptions:
              headerName: User-Agent
```

In this example, if a client request contains the `X-Some-Header` header, the value of the header will be hashed and used to route to an upstream Endpoint. This could be used to implement a workflow similar to cookie-based session affinity by passing a consistent value for this header. Because the first policy is marked `terminal`, when `X-Some-Header` is present Envoy will not continue on to process the `User-Agent` header to calculate a hash. If `X-Some-Header` is not present, Envoy will use the `User-Agent` header value to make a routing decision.
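The terminal short-circuiting described above can be sketched in a few lines of code. The Python below is a conceptual illustration only, not Envoy's actual algorithm (Envoy feeds the computed hash into its ring-hash/Maglev balancers); the endpoint names and `pick_endpoint` helper are invented for this sketch.

```python
import hashlib

def pick_endpoint(headers, hash_policies, endpoints):
    """Conceptual sketch: accumulate header hashes, stopping at the
    first matching policy marked terminal, then map the hash onto an
    endpoint. Not Envoy's real implementation."""
    combined = 0
    for policy in hash_policies:
        name = policy["headerName"].lower()
        value = next((v for k, v in headers.items() if k.lower() == name), None)
        if value is not None:
            combined ^= int(hashlib.sha256(value.encode()).hexdigest(), 16)
            if policy.get("terminal"):
                break  # terminal policy matched: ignore remaining policies
    return endpoints[combined % len(endpoints)]

endpoints = ["pod-a", "pod-b", "pod-c"]  # hypothetical upstream Endpoints
policies = [
    {"headerName": "X-Some-Header", "terminal": True},
    {"headerName": "User-Agent"},
]

# Because the first policy is terminal, two clients sending the same
# X-Some-Header value land on the same endpoint even though their
# User-Agent headers differ.
via_curl = pick_endpoint(
    {"X-Some-Header": "session-123", "User-Agent": "curl"}, policies, endpoints)
via_firefox = pick_endpoint(
    {"X-Some-Header": "session-123", "User-Agent": "firefox"}, policies, endpoints)
assert via_curl == via_firefox
```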
## Session Affinity

Session affinity, also known as _sticky sessions_, is a load balancing strategy whereby a sequence of requests from a single client are consistently routed to the same application backend.
Contour supports session affinity on a per-route basis with `loadBalancerPolicy` `strategy: Cookie`.

```yaml
# httpproxy-sticky-sessions.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: default
spec:
  virtualhost:
    fqdn: httpbin.davecheney.com
  routes:
    - services:
        - name: httpbin
          port: 8080
      loadBalancerPolicy:
        strategy: Cookie
```

Session affinity is based on the premise that the backend servers are robust, do not change ordering, and do not grow or shrink according to load.
None of these properties are guaranteed by a Kubernetes cluster, and any deviation will be visible to applications that rely heavily on session affinity.

Any perturbation in the set of pods backing a service risks redistributing backends around the hash ring.

[4]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-timeout
[5]: https://godoc.org/time#ParseDuration
[6]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-idle-timeout
[7]: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/overview