
# Request Routing

An HTTPProxy object must have at least one route or include defined.
In this example, any requests to `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*` will be routed to the Service `s2`.
All other requests to the host `multi-path.bar.com` will be routed to the Service `s1`.

```yaml
# httpproxy-multiple-paths.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - prefix: / # matches everything else
      services:
        - name: s1
          port: 80
    - conditions:
      - prefix: /blog # matches `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*`
      services:
        - name: s2
          port: 80
```

In the following example, requests are matched on headers and sent to different services, with a default route for requests that match neither header.

```yaml
# httpproxy-multiple-headers.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - header:
          name: x-os
          contains: ios
      services:
        - name: s1
          port: 80
    - conditions:
      - header:
          name: x-os
          contains: android
      services:
        - name: s2
          port: 80
    - services:
        - name: s3
          port: 80
```

## Conditions

Each Route entry in an HTTPProxy **may** contain one or more conditions.
These conditions are combined with AND on the route passed to Envoy.
Conditions can be either a `prefix` or a `header` condition.

#### Prefix conditions

Paths are matched using prefix conditions.
At most one prefix condition may be present in any condition block.

Prefix conditions **must** start with a `/` if they are present.

#### Header conditions

For `header` conditions there is one required field, `name`, and five operator fields: `present`, `contains`, `notcontains`, `exact`, and `notexact`.

- `present` is a boolean that checks that the header is present; the header's value is not checked.

- `contains` is a string, and checks that the header contains the string. `notcontains` similarly checks that the header does *not* contain the string.

- `exact` is a string, and checks that the header exactly matches the whole string. `notexact` checks that the header does *not* exactly match the whole string.
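
As a sketch of how these operators combine (the `mobile` and `fallback` Service names here are assumptions, not part of the examples above), two header conditions in one route are ANDed together:

```yaml
# Hypothetical example: `mobile` and `fallback` are assumed Service names.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: header-conditions
  namespace: default
spec:
  virtualhost:
    fqdn: headers.bar.com
  routes:
    - conditions:
      - header:
          name: x-client
          exact: mobile-app # header value must match this string exactly
      - header:
          name: x-beta
          present: true     # header must exist; its value is ignored
      services:
        - name: mobile
          port: 80
    - services:             # default route when the conditions above do not match
        - name: fallback
          port: 80
```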

## Multiple Upstreams

One of the key HTTPProxy features is the ability to support multiple services for a given path:

```yaml
# httpproxy-multiple-upstreams.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-upstreams
  namespace: default
spec:
  virtualhost:
    fqdn: multi.bar.com
  routes:
    - services:
        - name: s1
          port: 80
        - name: s2
          port: 80
```

In this example, requests for `multi.bar.com/` will be load balanced across two Kubernetes Services, `s1` and `s2`.
This is helpful when you need to split traffic for a given URL across two different versions of an application.

### Upstream Weighting

Building on multiple upstreams is the ability to define relative weights for upstream Services.
This is commonly used for canary testing of new versions of an application when you want to send a small fraction of traffic to a specific Service.

```yaml
# httpproxy-weight-shifting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: weight-shifting
  namespace: default
spec:
  virtualhost:
    fqdn: weights.bar.com
  routes:
    - services:
        - name: s1
          port: 80
          weight: 10
        - name: s2
          port: 80
          weight: 90
```

In this example, we are sending 10% of the traffic to Service `s1`, while Service `s2` receives the remaining 90% of traffic.

HTTPProxy weighting follows some specific rules:

- If no weights are specified for a given route, traffic is distributed evenly across its Services.
- Weights are relative and do not need to add up to 100. If all weights for a route are specified, the "total" weight is the sum of those specified. For example, if the weights are 20, 30, and 20 for three upstreams, the total weight is 70, and the upstream with weight 30 receives approximately 42.9% of traffic (30/70 ≈ 0.429).
- If some weights are specified but others are not, the upstreams without weights are given an implicit weight of zero and will not receive traffic.
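
To illustrate the last rule, here is a hypothetical sketch (the Service names are assumptions) in which `s3` carries no `weight` and therefore receives no traffic:

```yaml
# Hypothetical example: s1, s2, and s3 are assumed Service names.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: partial-weights
  namespace: default
spec:
  virtualhost:
    fqdn: partial-weights.bar.com
  routes:
    - services:
        - name: s1
          port: 80
          weight: 20 # receives 20/50 = 40% of traffic
        - name: s2
          port: 80
          weight: 30 # receives 30/50 = 60% of traffic
        - name: s3
          port: 80   # no weight: implicit weight of zero, no traffic
```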

### Traffic mirroring

Per route, one service can be nominated as a mirror.
The mirror service will receive a copy of the traffic sent to the non-mirror services.
The mirror traffic is considered _read only_; any response from the mirror is discarded.

This can be useful for recording traffic for later replay or for smoke testing new deployments.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: traffic-mirror
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: www
          port: 80
        - name: www-mirror
          port: 80
          mirror: true
```

## Response Timeouts

Each Route can be configured to have a timeout policy and a retry policy as shown:

```yaml
# httpproxy-response-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: response-timeout
  namespace: default
spec:
  virtualhost:
    fqdn: timeout.bar.com
  routes:
  - timeoutPolicy:
      response: 1s
      idle: 10s
    retryPolicy:
      count: 3
      perTryTimeout: 150ms
    services:
    - name: s1
      port: 80
```

In this example, requests to `timeout.bar.com/` will have a response timeout policy of 1s.
This refers to the time between the point at which the proxy has finished processing the complete client request and the point at which the response from the server has been completely processed.

- `timeoutPolicy.response` This field can be any positive time period or "infinity".
This timeout covers the time from the *end of the client request* to the *end of the upstream response*.
By default, Envoy has a 15 second value for this timeout.
More information can be found in [Envoy's documentation][4].
Note that a value of **0s** will be treated as if the field were not set, i.e. Envoy's default behavior is used.
- `timeoutPolicy.idle` This field can be any positive time period or "infinity".
By default, there is no per-route idle timeout.
Note that the default connection manager idle timeout of 5 minutes will apply if this is not set.
More information can be found in [Envoy's documentation][6].
Note that a value of **0s** will be treated as if the field were not set, i.e. Envoy's default behavior is used.

TimeoutPolicy durations are expressed in the format specified in the [ParseDuration documentation][5].
Example input values: "300ms", "5s", "1m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
The string "infinity" is also a valid input and specifies no timeout.
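
As a brief sketch combining these values (the `slow-batch` Service name is an assumption), a route fragment that disables the response timeout while still bounding idle time might look like:

```yaml
# Hypothetical route fragment: `slow-batch` is an assumed Service name.
routes:
- timeoutPolicy:
    response: infinity # never time out waiting for the upstream response
    idle: 10m          # but end the stream after 10 minutes of inactivity
  services:
  - name: slow-batch
    port: 80
```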

- `retryPolicy`: A retry will be attempted if the server returns an error code in the 5xx range, or if the server takes more than `retryPolicy.perTryTimeout` to process a request.
  - `retryPolicy.count` specifies the maximum number of retries allowed. This parameter is optional and defaults to 1.
  - `retryPolicy.perTryTimeout` specifies the timeout per retry. If this field is greater than the request timeout, it is ignored. This parameter is optional.
  If left unspecified, `timeoutPolicy.response` will be used.

## Load Balancing Strategy

Each route can have a load balancing strategy applied to determine which of its Endpoints is selected for the request.
The following options are available:

- `RoundRobin`: Each healthy upstream Endpoint is selected in round-robin order (this is the default strategy if none is specified).
- `WeightedLeastRequest`: The least-request load balancer uses different algorithms depending on whether hosts have the same or different weights, in an attempt to route traffic based on the number of active requests or the load at the time of selection.
- `Random`: The random strategy selects a random healthy Endpoint.
- `RequestHash`: The request hashing strategy allows for load balancing based on request attributes. An upstream Endpoint is selected based on the hash of an element of a request. For example, requests that contain a consistent value in an HTTP request header will be routed to the same upstream Endpoint. Currently only hashing of HTTP request headers is supported.
- `Cookie`: The cookie load balancing strategy is similar to the request hash strategy and is a convenience feature to implement session affinity, as described below.

More information on the load balancing strategies can be found in [Envoy's documentation][7].

The following example defines the strategy for the route `/` as `WeightedLeastRequest`.

```yaml
# httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-strategy
  namespace: default
spec:
  virtualhost:
    fqdn: strategy.bar.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: s1-strategy
          port: 80
        - name: s2-strategy
          port: 80
      loadBalancerPolicy:
        strategy: WeightedLeastRequest
```

The example below demonstrates how a header hash load balancing policy can be configured:

```yaml
# httpproxy-lb-request-hash.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-request-hash
  namespace: default
spec:
  virtualhost:
    fqdn: request-hash.bar.com
  routes:
  - conditions:
    - prefix: /
    services:
    - name: httpbin
      port: 8080
    loadBalancerPolicy:
      strategy: RequestHash
      requestHashPolicies:
      - headerHashOptions:
          headerName: X-Some-Header
        terminal: true
      - headerHashOptions:
          headerName: User-Agent
```

In this example, if a client request contains the `X-Some-Header` header, the value of the header will be hashed and used to route to an upstream Endpoint. This could be used to implement a workflow similar to cookie-based session affinity by passing a consistent value for this header. Because it is set as a `terminal` hash option, if `X-Some-Header` is present Envoy will not continue on to process the `User-Agent` header to calculate a hash. If `X-Some-Header` is not present, Envoy will use the `User-Agent` header value to make a routing decision.

## Session Affinity

Session affinity, also known as _sticky sessions_, is a load balancing strategy whereby a sequence of requests from a single client are consistently routed to the same application backend.
Contour supports session affinity on a per-route basis with `loadBalancerPolicy` `strategy: Cookie`.

```yaml
# httpproxy-sticky-sessions.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: default
spec:
  virtualhost:
    fqdn: httpbin.davecheney.com
  routes:
  - services:
    - name: httpbin
      port: 8080
    loadBalancerPolicy:
      strategy: Cookie
```

Session affinity is based on the premise that the backend servers are robust, do not change ordering, and do not grow and shrink according to load.
None of these properties are guaranteed by a Kubernetes cluster, and their absence will be visible to applications that rely heavily on session affinity.

Any perturbation in the set of pods backing a service risks redistributing backends around the hash ring.

[4]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-timeout
[5]: https://godoc.org/time#ParseDuration
[6]: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route_components.proto#envoy-v3-api-field-config-route-v3-routeaction-idle-timeout
[7]: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/overview