
## Graceful Termination
`kube-apiserver` in OpenShift is fronted by an external and an internal load balancer. This document serves as a
guideline on how to properly configure the health check probes of the load balancers so that when a `kube-apiserver`
instance restarts we can ensure:
- The load balancers detect it and take it out of service in time. No new request should be forwarded to the
  `kube-apiserver` instance once it has stopped listening.
- Existing connections are not cut off hard; they are allowed to complete gracefully.

## Load Balancer Health Check Probe
`kube-apiserver` provides graceful termination support via the `/readyz` health check endpoint. When `/readyz` reports
`HTTP Status 200 OK`, it indicates that the apiserver is ready to serve requests.
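
You can probe the endpoint directly to see this. A minimal sketch, assuming a default install where `kube-apiserver`
listens on `6443` and anonymous access to the health endpoints is allowed; `-k` skips verification of the cluster's
serving certificate, and `<apiserver-host>` is a placeholder for your API host:
```
$ curl -k https://<apiserver-host>:6443/readyz
ok
```
A ready instance answers with `HTTP 200` and the body `ok`; once a shutdown is in progress the endpoint returns a
failure status instead.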

Now let's walk through the events (in chronological order) that unfold when a `kube-apiserver` instance restarts:
* E1: `T+0s`: `kube-apiserver` receives a TERM signal.
* E2: `T+0s`: `/readyz` starts reporting `failure` to signal to the load balancers that a shutdown is in progress (see the probe example after this list).
  * The apiserver will continue to accept new requests.
  * The apiserver waits for a certain amount of time (configurable via `shutdown-delay-duration`) before it stops accepting new requests.
* E3: `T+30s`: `kube-apiserver` (the http server) stops listening:
  * `/healthz` turns red.
  * The default TCP health check probe on port `6443` will fail.
  * Any new request forwarded to it will fail, most likely with a `connection refused` error or a `GOAWAY` for http/2.
  * Existing requests in flight are not cut off but are given up to `60s` to complete gracefully.
* E4: `T+30s+60s`: Any existing requests that are still in flight are terminated with an error `reason: Timeout message: request did not complete within 60s`.
* E5: `T+30s+60s`: The apiserver process exits.
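
`E2` can be observed from inside the cluster: `/readyz` aggregates a set of named checks, one of which (`shutdown`)
starts failing as soon as the TERM signal is received. A sketch of what this looks like on a healthy instance (the
exact list of checks varies by version, elided here):
```
$ kubectl get --raw='/readyz?verbose'
[+]ping ok
[+]log ok
...
[+]shutdown ok
readyz check passed
```
During the shutdown window the `shutdown` check flips to failed and the endpoint returns a non-`200` status, which is
exactly what the load balancer probes key off of.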

Please note that after `E3` takes place, there is a scenario where all existing requests in flight can gracefully complete
before the `60s` timeout. In such a case no request is forcefully terminated (`E4` does not transpire) and `E5`
can come about well before `T+30s+60s`.

An important note to consider is that today in OpenShift the time difference between `E3` and `E2` is `70s`. This is known as
the `shutdown-delay-duration` and it is configurable by the devs only; it is not a knob we allow the end user to tweak.
```
$ kubectl -n openshift-kube-apiserver get cm config -o json | jq -r '.data."config.yaml"' |
  jq '.apiServerArguments."shutdown-delay-duration"'
[
  "70s"
]
```
In the future we will reduce `shutdown-delay-duration` to `30s`, so in this document we will continue with `E3 - E2 = 30s`.

Given the above, we can infer the following:
* The load balancers should use the `/readyz` endpoint for the `kube-apiserver` health check probe. They must NOT use `/healthz` or
the default TCP port probe.
* The time taken by a load balancer (let's say `t` seconds) to deem a `kube-apiserver` instance unhealthy and take it
out of service should not bleed into `E3`. So `E2 + t < E3` must hold so that no new request is forwarded to the
instance at `E3` or later.
* In the worst case, a load balancer should take at most `30s` (measured from when `E2` triggers) to take the `kube-apiserver`
instance out of service.

Below is the health check configuration currently used on `aws`.

```
protocol: HTTPS
path: /readyz
port: 6443
unhealthy threshold: 2
timeout: 10s
interval: 10s
```
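
On a running cluster you can confirm what was actually provisioned by inspecting the target groups that front the
apiserver. A sketch using the aws CLI (the field names come from the `elbv2` API; filter the output down to your
cluster's target groups):
```
$ aws elbv2 describe-target-groups \
    --query 'TargetGroups[].[TargetGroupName,HealthCheckProtocol,HealthCheckPath,UnhealthyThresholdCount,HealthCheckTimeoutSeconds,HealthCheckIntervalSeconds]' \
    --output table
```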

Based on aws documentation, the following is true of the ec2 load balancer health check probes:
* Each health check request is independent and lasts the entire interval.
* The time it takes for the instance to respond does not affect the interval for the next health check.

Now let's verify that with the above configuration in effect, a load balancer takes at most `30s` (in the worst case) to
deem a particular `kube-apiserver` instance unhealthy and take it out of service. With that in mind we will plot the
timeline of the health check probes accordingly. There are three probes `P1`, `P2` and `P3` involved in this worst
case scenario (the event labels below are local to this timeline):
* E1: `T+0s`: `P1` kicks off and it immediately gets a `200` response from `/readyz`.
* E2: `T+0s`: `/readyz` starts reporting red, immediately after `E1`.
* E3: `T+10s`: `P2` kicks off.
* E4: `T+20s`: `P2` times out (we assume the worst case here).
* E5: `T+20s`: `P3` kicks off (each health check is independent and is kicked off at every interval).
* E6: `T+30s`: `P3` times out (we assume the worst case here).
* E7: `T+30s`: the `unhealthy threshold` is satisfied and the load balancer takes the unhealthy `kube-apiserver` instance out
  of service.

Based on the worst case scenario above we have verified that with the above configuration the aws load balancer will take at
most `30s` to detect an unhealthy `kube-apiserver` instance and take it out of service.
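
The same bound falls out of a back-of-envelope formula: in the worst case `/readyz` turns red just after a successful
probe, so the load balancer waits up to one interval for the first failing probe to start; subsequent probes start
every interval, and the final (threshold-th) failing probe takes up to the timeout to fail. That gives
`unhealthy threshold * interval + timeout` as the worst case detection time, assuming probes fire on a fixed interval
schedule as aws documents above:
```
$ echo "$((2 * 10 + 10))s"   # unhealthy threshold * interval + timeout
30s
```
A candidate configuration on another platform can be checked against the same budget: `unhealthy threshold * interval
+ timeout` should not exceed `30s`.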

If you are working with a different platform, please take into consideration any relevant health check probe specifics
and ensure that the worst case time to detect an unhealthy `kube-apiserver` instance is at most `30s`, as explained in
this document.