github.com/projectcontour/contour@v1.28.2/site/content/docs/v1.10.1/config/request-routing.md

# Request Routing

An HTTPProxy object must have at least one route or include defined.
In this example, any requests to `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*` will be routed to the Service `s2`.
All other requests to the host `multi-path.bar.com` will be routed to the Service `s1`.

```yaml
# httpproxy-multiple-paths.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - prefix: / # matches everything else
      services:
        - name: s1
          port: 80
    - conditions:
      - prefix: /blog # matches `multi-path.bar.com/blog` or `multi-path.bar.com/blog/*`
      services:
        - name: s2
          port: 80
```

In the following example, requests are matched on headers and routed to different Services, with a default route used if no header condition matches.

```yaml
# httpproxy-multiple-headers.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-paths
  namespace: default
spec:
  virtualhost:
    fqdn: multi-path.bar.com
  routes:
    - conditions:
      - header:
          name: x-os
          contains: ios
      services:
        - name: s1
          port: 80
    - conditions:
      - header:
          name: x-os
          contains: android
      services:
        - name: s2
          port: 80
    - services:
        - name: s3
          port: 80
```

## Conditions

Each Route entry in an HTTPProxy **may** contain one or more conditions.
These conditions are combined with an AND operator on the route passed to Envoy.
Conditions can be either a `prefix` or a `header` condition.
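
To make the AND behavior concrete, here is a hypothetical sketch (the Service name and header value are placeholders) where a request must match both the `prefix` and the `header` condition to be routed to `api-v2`:

```yaml
routes:
  - conditions:
    - prefix: /api            # condition 1: path prefix
    - header:
        name: x-api-version   # condition 2: header value
        exact: "2"
    services:
      - name: api-v2          # hypothetical Service
        port: 80
```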

#### Prefix conditions

Paths defined are matched using prefix conditions.
Up to one prefix condition may be present in any condition block.

Prefix conditions **must** start with a `/` if they are present.

#### Header conditions

For `header` conditions there is one required field, `name`, and five operator fields: `present`, `contains`, `notcontains`, `exact`, and `notexact`.

- `present` is a boolean and checks that the header is present. The value will not be checked.

- `contains` is a string, and checks that the header contains the string. `notcontains` similarly checks that the header does *not* contain the string.

- `exact` is a string, and checks that the header exactly matches the whole string. `notexact` checks that the header does *not* exactly match the whole string.

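As a sketch of how these operators can combine on one route (the header values and Service name are illustrative placeholders, not part of the examples above):

```yaml
routes:
  - conditions:
    - header:
        name: x-request-id
        present: true      # header must exist; its value is ignored
    - header:
        name: user-agent
        notcontains: bot   # value must not contain "bot"
    - header:
        name: x-env
        exact: staging     # value must be exactly "staging"
    services:
      - name: s1
        port: 80
```
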
## Multiple Upstreams

One of the key HTTPProxy features is the ability to support multiple services for a given path:

```yaml
# httpproxy-multiple-upstreams.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: multiple-upstreams
  namespace: default
spec:
  virtualhost:
    fqdn: multi.bar.com
  routes:
    - services:
        - name: s1
          port: 80
        - name: s2
          port: 80
```

In this example, requests for `multi.bar.com/` will be load balanced across two Kubernetes Services, `s1` and `s2`.
This is helpful when you need to split traffic for a given URL across two different versions of an application.

### Upstream Weighting

Building on multiple upstreams is the ability to define relative weights for upstream Services.
This is commonly used for canary testing of new versions of an application when you want to send a small fraction of traffic to a specific Service.

```yaml
# httpproxy-weight-shifting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: weight-shifting
  namespace: default
spec:
  virtualhost:
    fqdn: weights.bar.com
  routes:
    - services:
        - name: s1
          port: 80
          weight: 10
        - name: s2
          port: 80
          weight: 90
```

In this example, we are sending 10% of the traffic to Service `s1`, while Service `s2` receives the remaining 90% of traffic.

HTTPProxy weighting follows some specific rules:

- If no weights are specified for a given route, traffic is distributed evenly across the Services.
- Weights are relative and do not need to add up to 100. If all weights for a route are specified, the "total" weight is the sum of those specified. For example, if the weights are 20, 30, and 20 for three upstreams, the total weight is 70, and the upstream with weight 30 receives approximately 42.9% of the traffic (30/70 ≈ 0.429).
- If some weights are specified but others are not, the upstreams without weights are assumed to have an implicit weight of zero, and thus will not receive traffic.

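A hedged sketch of the partial-weights rule (Service names are placeholders): `s3` carries no weight, so it gets an implicit weight of zero and receives no traffic, while `s1` and `s2` split the traffic 1:3:

```yaml
routes:
  - services:
      - name: s1
        port: 80
        weight: 25   # 25/(25+75) = 25% of traffic
      - name: s2
        port: 80
        weight: 75   # 75/(25+75) = 75% of traffic
      - name: s3
        port: 80     # no weight: implicit zero, receives no traffic
```
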
### Traffic mirroring

Per route, a service can be nominated as a mirror.
The mirror service will receive a copy of the traffic sent to any non-mirror service.
The mirror traffic is considered _read only_; any response from the mirror will be discarded.

Mirroring can be useful for recording traffic for later replay or for smoke testing new deployments.

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: traffic-mirror
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: www
          port: 80
        - name: www-mirror
          port: 80
          mirror: true
```

## Response Timeouts

Each Route can be configured to have a timeout policy and a retry policy as shown:

```yaml
# httpproxy-response-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: response-timeout
  namespace: default
spec:
  virtualhost:
    fqdn: timeout.bar.com
  routes:
  - timeoutPolicy:
      response: 1s
      idle: 10s
    retryPolicy:
      count: 3
      perTryTimeout: 150ms
    services:
    - name: s1
      port: 80
```

In this example, requests to `timeout.bar.com/` will have a response timeout policy of 1s.
This refers to the time between the point at which the complete client request has been processed by the proxy and the point at which the response from the server has been completely processed.

- `timeoutPolicy.response`: This field can be any positive time period or "infinity".
This timeout covers the time from the *end of the client request* to the *end of the upstream response*.
By default, Envoy has a 15 second value for this timeout.
More information can be found in [Envoy's documentation][4].
Note that a value of **0s** will be treated as if the field were not set, i.e. by using Envoy's default behavior.
- `timeoutPolicy.idle`: This field can be any positive time period or "infinity".
By default, there is no per-route idle timeout.
Note that the default connection manager idle timeout of 5 minutes will apply if this is not set.
More information can be found in [Envoy's documentation][6].
Note that a value of **0s** will be treated as if the field were not set, i.e. by using Envoy's default behavior.

TimeoutPolicy durations are expressed as per the format specified in the [ParseDuration documentation][5].
Example input values: "300ms", "5s", "1m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
The string "infinity" is also a valid input and specifies no timeout.

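For instance (the values here are purely illustrative), a policy combining these formats might look like:

```yaml
timeoutPolicy:
  response: 1m30s    # allow up to 90 seconds for the upstream response
  idle: infinity     # never time out an idle connection on this route
```
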
- `retryPolicy`: A retry will be attempted if the server returns an error code in the 5xx range, or if the server takes more than `retryPolicy.perTryTimeout` to process a request.
  - `retryPolicy.count` specifies the maximum number of retries allowed. This parameter is optional and defaults to 1.
  - `retryPolicy.perTryTimeout` specifies the timeout per retry. If this field is greater than the request timeout, it is ignored. This parameter is optional.
  If left unspecified, `timeoutPolicy.response` will be used.

## Load Balancing Strategy

Each route can have a load balancing strategy applied to determine which of its Endpoints is selected for the request.
The following options are available:

- `RoundRobin`: Each healthy upstream Endpoint is selected in round-robin order (the default strategy if none is selected).
- `WeightedLeastRequest`: The least-request strategy uses an O(1) algorithm which selects two random healthy Endpoints and picks the Endpoint which has fewer active requests. Note: This algorithm is simple and sufficient for load testing. It should not be used where true weighted least request behavior is desired.
- `Random`: The random strategy selects a random healthy Endpoint.

More information on the load balancing strategy can be found in [Envoy's documentation][7].

The following example defines the strategy for the route `/` as `WeightedLeastRequest`.

```yaml
# httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: lb-strategy
  namespace: default
spec:
  virtualhost:
    fqdn: strategy.bar.com
  routes:
    - conditions:
      - prefix: /
      services:
        - name: s1-strategy
          port: 80
        - name: s2-strategy
          port: 80
      loadBalancerPolicy:
        strategy: WeightedLeastRequest
```

## Session Affinity

Session affinity, also known as _sticky sessions_, is a load balancing strategy whereby a sequence of requests from a single client are consistently routed to the same application backend.
Contour supports session affinity on a per-route basis with `loadBalancerPolicy` `strategy: Cookie`.

```yaml
# httpproxy-sticky-sessions.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpbin
  namespace: default
spec:
  virtualhost:
    fqdn: httpbin.davecheney.com
  routes:
  - services:
    - name: httpbin
      port: 8080
    loadBalancerPolicy:
      strategy: Cookie
```

Session affinity is based on the premise that the backend servers are robust, do not change ordering, and do not grow or shrink according to load.
None of these properties are guaranteed by a Kubernetes cluster; their absence will be visible to applications that rely heavily on session affinity.

Any perturbation in the set of pods backing a service risks redistributing backends around the hash ring.

[4]: https://www.envoyproxy.io/docs/envoy/v1.14.2/api-v2/api/v2/route/route_components.proto#envoy-api-field-route-routeaction-timeout
[5]: https://godoc.org/time#ParseDuration
[6]: https://www.envoyproxy.io/docs/envoy/v1.14.2/api-v2/api/v2/route/route_components.proto#envoy-api-field-route-routeaction-idle-timeout
[7]: https://www.envoyproxy.io/docs/envoy/v1.14.2/intro/arch_overview/upstream/load_balancing/overview