github.com/projectcontour/contour@v1.28.2/site/content/docs/v1.19.1/config/request-routing.md

# Request Routing

An HTTPProxy object must have at least one route or include defined.
In this example, any request to `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*` will be routed to the Service `s2`.
All other requests to the host `multi-path.bar.com` will be routed to the Service `s1`.

```yaml
# httpproxy-multiple-paths.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - prefix: / # matches everything else
      services:
        - name: s1
          port: 80
    - conditions:
      - prefix: /blog # matches `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*`
      services:
        - name: s2
          port: 80
```

In the following example, we match on headers and route to different Services, with a default route for requests that match none of the headers.

```yaml
# httpproxy-multiple-headers.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-headers
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - header:
          name: x-os
          contains: ios
      services:
        - name: s1
          port: 80
    - conditions:
      - header:
          name: x-os
          contains: android
      services:
        - name: s2
          port: 80
    - services:
        - name: s3
          port: 80
```

## Conditions

Each Route entry in an HTTPProxy **may** contain one or more conditions.
These conditions are combined with an AND operator on the route passed to Envoy.
Conditions can be either a `prefix` or a `header` condition.

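As a sketch of how conditions combine (the Service name, FQDN, and header value below are hypothetical, not from the examples above), the following route matches only requests to `/api` that also carry an `x-version` header containing `v2`; both conditions must hold for the route to match:

```yaml
# Hypothetical example: a prefix AND a header condition on one route.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: combined-conditions  # hypothetical name
  namespace: default
spec:
  virtualhost:
    fqdn: conditions.bar.com # hypothetical FQDN
  routes:
    - conditions:            # both conditions are ANDed together
      - prefix: /api
      - header:
          name: x-version
          contains: v2
      services:
        - name: s1           # hypothetical Service
          port: 80
```
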
#### Prefix conditions

Paths are matched using prefix conditions.
Up to one prefix condition may be present in any condition block.

Prefix conditions, if present, **must** start with a `/`.

#### Header conditions

For `header` conditions there is one required field, `name`, and six operator fields: `present`, `notpresent`, `contains`, `notcontains`, `exact`, and `notexact`.

- `present` is a boolean and checks that the header is present. The value will not be checked.

- `notpresent` similarly checks that the header is *not* present.

- `contains` is a string, and checks that the header contains the string. `notcontains` similarly checks that the header does *not* contain the string.

- `exact` is a string, and checks that the header exactly matches the whole string. `notexact` checks that the header does *not* exactly match the whole string.

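To illustrate a couple of these operators, here is a sketch (the Service names, FQDN, and header values are hypothetical): the first route matches only requests whose `x-env` header is exactly `staging`, and the second matches requests that do not carry an `x-debug` header at all:

```yaml
# Hypothetical example of the `exact` and `notpresent` header operators.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: header-operators   # hypothetical name
  namespace: default
spec:
  virtualhost:
    fqdn: headers.bar.com  # hypothetical FQDN
  routes:
    - conditions:
      - header:
          name: x-env
          exact: staging   # the whole header value must match the string
      services:
        - name: staging-svc # hypothetical Service
          port: 80
    - conditions:
      - header:
          name: x-debug
          notpresent: true # matches only when the header is absent
      services:
        - name: prod-svc   # hypothetical Service
          port: 80
```
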
## Multiple Upstreams

One of the key HTTPProxy features is the ability to support multiple services for a given path:

```yaml
# httpproxy-multiple-upstreams.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-upstreams
  namespace: default
spec:
  virtualhost:
    fqdn: multi.bar.com
  routes:
    - services:
        - name: s1
          port: 80
        - name: s2
          port: 80
```

In this example, requests for `multi.bar.com/` will be load balanced across two Kubernetes Services, `s1` and `s2`.
This is helpful when you need to split traffic for a given URL across two different versions of an application.

### Upstream Weighting

Building on multiple upstreams is the ability to define relative weights for upstream Services.
This is commonly used for canary testing of new versions of an application when you want to send a small fraction of traffic to a specific Service.

```yaml
# httpproxy-weight-shifting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: weight-shifting
  namespace: default
spec:
  virtualhost:
    fqdn: weights.bar.com
  routes:
    - services:
        - name: s1
          port: 80
          weight: 10
        - name: s2
          port: 80
          weight: 90
```

In this example, we are sending 10% of the traffic to Service `s1`, while Service `s2` receives the remaining 90% of traffic.

HTTPProxy weighting follows some specific rules:

- If no weights are specified for a given route, traffic is distributed evenly across the Services.
- Weights are relative and do not need to add up to 100. If all weights for a route are specified, the "total" weight is the sum of those specified. For example, if the weights are 20, 30, and 20 for three upstreams, the total weight is 70, and the upstream with weight 30 receives approximately 42.9% of the traffic (30/70 ≈ 0.429).
- If some weights are specified but others are not, the upstreams without weights are assumed to have an implicit weight of zero and will not receive traffic.

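The last rule can be sketched as follows (the Service names and FQDN here are hypothetical): `s1` has an explicit weight and `s2` does not, so `s2` receives no traffic:

```yaml
# Hypothetical example: s2 has no weight, so it gets an implicit weight of 0.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: partial-weights   # hypothetical name
  namespace: default
spec:
  virtualhost:
    fqdn: partial.bar.com # hypothetical FQDN
  routes:
    - services:
        - name: s1
          port: 80
          weight: 100     # receives all traffic
        - name: s2
          port: 80        # no weight: implicit 0, receives no traffic
```
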
### Traffic mirroring

Per route, a service can be nominated as a mirror.
The mirror service will receive a copy of the traffic sent to any non-mirror service.
The mirror traffic is considered _read only_; any response from the mirror is discarded.

This can be useful for recording traffic for later replay or for smoke testing new deployments.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: traffic-mirror
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: www
          port: 80
        - name: www-mirror
          port: 80
          mirror: true
```

## Response Timeouts

Each Route can be configured to have a timeout policy and a retry policy as shown:

```yaml
# httpproxy-response-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: response-timeout
  namespace: default
spec:
  virtualhost:
    fqdn: timeout.bar.com
  routes:
  - timeoutPolicy:
      response: 1s
      idle: 10s
    retryPolicy:
      count: 3
      perTryTimeout: 150ms
    services:
    - name: s1
      port: 80
```

In this example, requests to `timeout.bar.com/` will have a response timeout policy of 1s.
This refers to the time that spans between the point at which the complete client request has been processed by the proxy and the point at which the response from the server has been completely processed.

- `timeoutPolicy.response` Timeout for receiving a response from the server after processing a request from the client.
If not supplied, Envoy's default value of 15s applies.
More information can be found in [Envoy's documentation][4].
- `timeoutPolicy.idle` Timeout for how long the proxy should wait while there is no activity during a single request/response (for HTTP/1.1) or stream (for HTTP/2).
The timeout will not trigger while an HTTP/1.1 connection is idle between two consecutive requests.
If not specified, there is no per-route idle timeout, though a connection manager-wide stream idle timeout default of 5m still applies.
More information can be found in [Envoy's documentation][6].

TimeoutPolicy durations are expressed in the Go [Duration format][5].
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
The string "infinity" is also a valid input and specifies no timeout.
A value of "0s" will be treated as if the field were not set, i.e. by using Envoy's default behavior.
Example input values: "300ms", "5s", "1m".

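For instance, a hypothetical `timeoutPolicy` fragment (not from the example above) that disables the response timeout entirely while keeping an idle timeout:

```yaml
# Hypothetical fragment: "infinity" disables the response timeout.
timeoutPolicy:
  response: infinity # never time out waiting for the server's response
  idle: 10m          # still give up after 10 minutes of inactivity
```
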
- `retryPolicy`: A retry will be attempted if the server returns an error code in the 5xx range, or if the server takes more than `retryPolicy.perTryTimeout` to process a request.

- `retryPolicy.count` specifies the maximum number of retries allowed. This parameter is optional and defaults to 1.

- `retryPolicy.perTryTimeout` specifies the timeout per retry. If this field is greater than the request timeout, it is ignored. This parameter is optional.
  If left unspecified, `timeoutPolicy.response` will be used.

## Load Balancing Strategy

Each route can have a load balancing strategy applied to determine which of its Endpoints is selected for the request.
The following options are available:

- `RoundRobin`: Each healthy upstream Endpoint is selected in round-robin order (the default strategy if none is specified).
- `WeightedLeastRequest`: The least-request load balancer uses different algorithms, depending on whether hosts have the same or different weights, in an attempt to route traffic based on the number of active requests or the load at the time of selection.
- `Random`: The random strategy selects a random healthy Endpoint.
- `RequestHash`: The request hashing strategy allows for load balancing based on request attributes. An upstream Endpoint is selected based on the hash of an element of a request. For example, requests that contain a consistent value in an HTTP request header will be routed to the same upstream Endpoint. Currently only hashing of HTTP request headers is supported.
- `Cookie`: The cookie load balancing strategy is similar to the request hash strategy and is a convenience feature to implement session affinity, as described below.

More information on the load balancing strategy can be found in [Envoy's documentation][7].

The following example defines the strategy for the route `/` as `WeightedLeastRequest`.

```yaml
# httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-strategy
  namespace: default
spec:
  virtualhost:
    fqdn: strategy.bar.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: s1-strategy
          port: 80
        - name: s2-strategy
          port: 80
      loadBalancerPolicy:
        strategy: WeightedLeastRequest
```

The example below demonstrates how header hash load balancing policies can be configured:

```yaml
# httpproxy-lb-request-hash.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-request-hash
  namespace: default
spec:
  virtualhost:
    fqdn: request-hash.bar.com
  routes:
  - conditions:
    - prefix: /
    services:
    - name: httpbin
      port: 8080
    loadBalancerPolicy:
      strategy: RequestHash
      requestHashPolicies:
      - headerHashOptions:
          headerName: X-Some-Header
        terminal: true
      - headerHashOptions:
          headerName: User-Agent
```

In this example, if a client request contains the `X-Some-Header` header, the value of the header will be hashed and used to route to an upstream Endpoint. This could be used to implement a workflow similar to cookie-based session affinity by passing a consistent value for this header. Because it is set as a `terminal` hash option, if `X-Some-Header` is present, Envoy will not continue on to process the `User-Agent` header to calculate a hash. If `X-Some-Header` is not present, Envoy will use the `User-Agent` header value to make a routing decision.

## Session Affinity

Session affinity, also known as _sticky sessions_, is a load balancing strategy whereby a sequence of requests from a single client are consistently routed to the same application backend.
Contour supports session affinity on a per-route basis with `loadBalancerPolicy` `strategy: Cookie`.

```yaml
# httpproxy-sticky-sessions.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: default
spec:
  virtualhost:
    fqdn: httpbin.davecheney.com
  routes:
  - services:
    - name: httpbin
      port: 8080
    loadBalancerPolicy:
      strategy: Cookie
```

Session affinity is based on the premise that the backend servers are robust, do not change ordering, and do not grow and shrink according to load.
None of these properties are guaranteed by a Kubernetes cluster, and their absence will be visible to applications that rely heavily on session affinity.

Any perturbation in the set of pods backing a service risks redistributing backends around the hash ring.

[4]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-timeout
[5]: https://godoc.org/time#ParseDuration
[6]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-idle-timeout
[7]: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/overview