# Heapster Long Term Vision

## Current status

Heapster is an important component of Kubernetes that is responsible for metrics and event
handling. It reads metrics from cluster nodes and writes them to external, permanent
storage. This is the main use case of Heapster.

To support system components of Kubernetes, Heapster calculates aggregated metrics (like the
sum of containers' CPU usage in a pod) and long-term statistics (average, 95th percentile
with 1-hour resolution), keeps them in memory, and exposes them via the Heapster API. This API
is mainly used by the Horizontal Pod Autoscaler, which asks for the most recent
performance-related metrics to adjust the number of pods to the incoming traffic. The API is
also used by KubeDash and will be used by the new UI (which will replace KubeDash) as well.

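Purely as an illustration of the kind of aggregation described above (not Heapster's actual
code; the `ContainerSample` type and both function names are made up), a minimal Go sketch of
summing container CPU usage into a pod-level metric and computing a 95th percentile over a
window of samples might look like this:

```go
// Hypothetical sketch (not Heapster's implementation) of pod-level aggregation
// and percentile computation over an in-memory window of samples.
package main

import (
	"fmt"
	"sort"
)

// ContainerSample is an assumed shape for one scraped container metric.
type ContainerSample struct {
	PodName       string
	CPUUsageMilli int64 // CPU usage in millicores
}

// SumPodCPU aggregates container samples into per-pod CPU usage.
func SumPodCPU(samples []ContainerSample) map[string]int64 {
	podCPU := make(map[string]int64)
	for _, s := range samples {
		podCPU[s.PodName] += s.CPUUsageMilli
	}
	return podCPU
}

// Percentile95 returns the 95th percentile of a window of values.
func Percentile95(window []int64) int64 {
	if len(window) == 0 {
		return 0
	}
	sorted := append([]int64(nil), window...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := (len(sorted)*95 + 99) / 100 // ceil(0.95 * n)
	if idx > 0 {
		idx--
	}
	return sorted[idx]
}

func main() {
	samples := []ContainerSample{
		{PodName: "web-1", CPUUsageMilli: 120},
		{PodName: "web-1", CPUUsageMilli: 80},
		{PodName: "db-1", CPUUsageMilli: 300},
	}
	fmt.Println(SumPodCPU(samples))                       // map[db-1:300 web-1:200]
	fmt.Println(Percentile95([]int64{100, 200, 150, 90})) // 200
}
```
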
Additionally, the Heapster API allows listing all active nodes, namespaces, pods, containers,
etc. present in the system.

There is also a HeapsterGKE API dedicated to GKE, through which it is possible to get a full
dump of all metrics (spanning the last minute or two).

Metrics are gathered from cluster nodes, but Heapster developers wanted it to be useful also
in non-Kubernetes clusters. They wrote Heapster in such a way that metrics can be read not
only from Kubernetes nodes (via the Kubelet API) but also from custom deployments via cAdvisor
(with support for CoreOS Fleet and flat file node lists).

Metrics collected by Heapster can be written into multiple kinds of storage - Influxdb,
OpenTSDB, Google Cloud Monitoring, Hawkular, Kafka, Riemann, ElasticSearch (some of them are
not yet submitted).

In addition to gathering metrics, Heapster is responsible for handling Kubernetes events - it
reads them from the Kubernetes API server and writes them, without extra processing, to a
selection of persistent storages: Google Cloud Logging, Influxdb, Kafka, OpenTSDB, Hawkular,
ElasticSearch, etc.

There is/was a plan to add resource prediction components (Initial Resources, Vertical
Pod Autoscaling) to the Heapster binary.

## Separation of Use Cases

From the current state description (see above) the following use cases can be extracted:

* [UC1] Read metrics from nodes and write them to an external storage.
* [UC2] Expose metrics from the last 2-3 minutes (for HPA and GKE).
* [UC3] Read events from the API server and write them to a permanent storage.
* [UC4] Do some long-term (hours, days) metrics analysis to get stats (average, 95th percentile)
and expected resource usage.
* [UC5] Provide CPU and memory metrics over a longer time window for the new Kubernetes
Dashboard UI (15 min for 1.2, up to 1h later for plots).

* UC1 and UC2 go together - to expose the most recent metrics, the API should be connected
to the metrics stream.
* UC3 can be completely separated from UC1, UC2 and UC4 - it reads different data from a
different place and writes it in a slightly different format to different sinks.
* UC4 is connected to UC1 and UC2, but it is based more on data from the permanent storage
than on the super-fresh metrics stored in memory.
* UC5 can go either with UC1/UC2 or with UC4. As there is no immediate need for UC4, we will
provide basic UC5 together with UC1/UC2, but in the future it will join UC4.

This separation leads to the idea of splitting Heapster into 3 binaries:

* Core Heapster - covering UC1, UC2 and, temporarily, UC5
* Eventer - covering UC3
* Oldtimer - covering UC4 and UC5

## Reduction of Responsibility

With 3 possible node sources (Kubernetes API Server, flat file, CoreOS Fleet), 2 metrics
sources (cAdvisor and Kubelet) and a constantly growing number of sinks, we have to separate
what the core Heapster/K8s team is responsible for from what is provided as a plugin/addition
and doesn't come in the main release package.

We decided to focus only on:

* Kubernetes API Server node source
* Kubelet metrics source
* Influxdb, GCM, GKE (there is a special endpoint for GKE that exposes all available metrics),
and Hawkular sinks for Heapster
* Influxdb and GCL sinks for Eventer

The rest of the sources/sinks will be available as plugins. Plugins will come in 2 flavors:

* Compiled-in - will require the user to rebuild the package and create their own image with
the desired set of plugins.
* Side-car - Heapster will talk to the plugin's HTTP server to get/pass metrics through a
well-defined JSON interface (see the sketch after this list). The plugin runs in a separate
container.

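The JSON interface itself has not been specified yet; purely as an illustration, here is a
hypothetical sketch of a side-car sink plugin receiving a batch of metrics from Heapster over
HTTP. The `/metrics` endpoint, the port, and the payload types (`MetricBatch`, `MetricPoint`)
are all made-up assumptions, not a defined Heapster interface:

```go
// Hypothetical side-car sink plugin; payload shape and endpoint are assumptions.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// MetricPoint is an assumed wire format for a single metric sample.
type MetricPoint struct {
	Name      string            `json:"name"`
	Value     float64           `json:"value"`
	Labels    map[string]string `json:"labels"`
	Timestamp time.Time         `json:"timestamp"`
}

// MetricBatch is the assumed body Heapster would POST to the plugin.
type MetricBatch struct {
	Points []MetricPoint `json:"points"`
}

func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		var batch MetricBatch
		if err := json.NewDecoder(r.Body).Decode(&batch); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// A real plugin would forward the batch to its backing store here.
		log.Printf("received %d metric points", len(batch.Points))
		w.WriteHeader(http.StatusNoContent)
	})
	log.Fatal(http.ListenAndServe(":8099", nil))
}
```
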
The K8s team will explicitly state that it does NOT give any warranty on the plugins. Plugin
e2e tests can be included in some CI suite, but we will not block our development (too much) if
something breaks Kafka or Riemann. We will also not pay attention to whether a particular sink
scales up.

For now we will keep all of the currently available sinks compiled-in by default, to keep the
new Heapster more or less compatible with the old one, but eventually (if the number of sinks
grows) we will migrate some of them to plugins.

## Custom Metrics Status

Heapster is not a generic solution for gathering an arbitrary number of arbitrarily formatted
custom metrics. The support for custom metrics is focused on auto-scaling and critical
functionality monitoring (and potentially scheduling). Heapster is oriented towards system
metrics, not application/business-level metrics.

Kubernetes users and application developers will be able to push any number of their custom
metrics through our pipeline to the storage, but this should be considered a bonus/best-effort
functionality. Custom metrics will not influence our performance targets (no extra fine-tuning
effort to support >5 custom metrics per pod). There will be a flag in Kubelet that will limit
the number of custom metrics.

## Performance Target

The Heapster product family (Core, Eventer and Oldtimer) should follow the same performance
goals as core Kubernetes. As Eventer is fairly simple and Oldtimer is not yet fully defined,
this section will focus only on Core Heapster (for metrics).

For 1.2 we should scale to 1000 nodes, each running at least 30 pods (100 for 1.3), each
reporting 20 metrics every 1 min (30 sec preferably). That brings us to 600k metrics
per minute, or 10k metrics per second.

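As a quick back-of-the-envelope check of that target (a sketch only; the constants are the
node, pod and metric counts stated above):

```go
// Back-of-the-envelope check of the 1.2 throughput target stated above.
package main

import "fmt"

func main() {
	const (
		nodes         = 1000
		podsPerNode   = 30 // 100 for the 1.3 target
		metricsPerPod = 20
		intervalSec   = 60 // metrics reported every 1 min
	)

	perMinute := nodes * podsPerNode * metricsPerPod // 600,000 metrics per minute
	perSecond := perMinute / intervalSec             // 10,000 metrics per second

	fmt.Printf("%d metrics/min, %d metrics/sec\n", perMinute, perSecond)
}
```
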
The stretch goal (for 1.2/1.3) is 60k metrics per second (possibly with not everything being
written to Influxdb). On smaller deployments, like 500 nodes with 15-30 pods each, it should be
easy to have a 30 sec metrics resolution or finer.

Memory target - fit into 2 GB with 1000 nodes x 30 pods and 6 GB with 1000 nodes x 100 pods
(~60 KB per pod).

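For reference, 6 GB spread across 1000 x 100 = 100,000 pods is 60 KB per pod, and 2 GB across
30,000 pods is roughly 66 KB per pod, which is where the ~60 KB per pod figure comes from.
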
Latency, measured from the time when we initiate scraping metrics to the moment the metric
change is visible in the API, should be less than 1 * metrics resolution; it mainly depends
on how fast it is possible to get all the metrics through the wire and parse them.

The e2e latency from the moment the metric changes in the container to the moment the change
is visible in the Heapster API is: metric_resolution + heapster_latency.
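With a 1-minute metric resolution, for example, and Heapster latency at its upper bound of one
resolution period, a change in a container metric becomes visible in the API after at most
about 2 minutes.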