<!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership.  The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied.  See the License for the
    specific language governing permissions and limitations
    under the License.
-->

# Performance alerts for Beam Python performance and load tests

## Alerts
Performance regressions or improvements detected with [Change Point Analysis](https://en.wikipedia.org/wiki/Change_detection) using the [edivisive](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
analyzer are automatically filed as Beam GitHub issues with the label `perf-alert`.

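The sketch below illustrates the core idea behind change point analysis on a series of metric values. It is a toy mean-shift detector for intuition only, not the e-divisive implementation the analyzer uses.

```python
# Toy illustration of change point detection: find the split that maximizes
# the shift in the mean of a metric series. NOT the analyzer's e-divisive code.
import numpy as np

def find_change_point(values: np.ndarray) -> int:
    """Returns the index where the mean of the series shifts the most."""
    best_index, best_score = -1, 0.0
    for i in range(2, len(values) - 2):
        left, right = values[:i], values[i:]
        # Mean shift weighted by segment sizes, similar in spirit to
        # energy-distance based methods such as e-divisive.
        score = abs(left.mean() - right.mean()) * np.sqrt(len(left) * len(right))
        if score > best_score:
            best_index, best_score = i, score
    return best_index

# A metric that regresses after run 30 (e.g. latency in milliseconds).
runs = np.concatenate([np.random.normal(100, 5, 30), np.random.normal(140, 5, 20)])
print(find_change_point(runs))  # expected to be close to 30
```
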
The GitHub issue description contains information on the affected test and metric, listing the metric values and timestamps
for N consecutive runs before and after the observed change point. The observed change point is marked as `Anomaly` in the issue description.

Example: [sample perf alert GitHub issue](https://github.com/AnandInguva/beam/issues/83).

When a performance alert is raised for a test, a GitHub issue is created and its metadata, such as the issue URL and
issue number, is exported to BigQuery along with the change point value and timestamp. This data is used when the next change point is observed on the same test,
either to update the already created GitHub issue or to ignore the alert (by not creating a new issue) so that duplicate issues are avoided.

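For illustration, the exported metadata might be written to BigQuery roughly as follows. The table name and field names in this sketch are assumptions, not the analyzer's actual schema.

```python
# Hypothetical sketch of exporting alert metadata to BigQuery.
# Table and field names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="apache-beam-testing")
row = {
    "test_id": "test_1",                       # unique id from the config file
    "issue_number": 83,                        # GitHub issue number
    "issue_url": "https://github.com/AnandInguva/beam/issues/83",
    "change_point_value": 1532.7,              # metric value at the change point
    "change_point_timestamp": "2023-05-01T00:00:00Z",
}
errors = client.insert_rows_json(
    "apache-beam-testing.example_dataset.perf_alert_history", [row])
assert not errors, errors
```
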
## Config file structure
The config file defines how change point analysis is run on a given test. To add a test to the config file,
please follow the structure below.

**NOTE**: For now, change point analysis only supports reading metric data from BigQuery.

```yaml
# test_1 must be a unique id.
test_1:
  test_name: Pytorch image classification on 50k images of size 224 x 224 with resnet 152
  test_target: apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks
  source: big_query
  metrics_dataset: beam_run_inference
  metrics_table: torch_inference_imagenet_results_resnet152
  project: apache-beam-testing
  metric_name: mean_load_model_latency_milli_secs
  labels:
    - run-inference
  min_runs_between_change_points: 3 # optional parameter
  num_runs_in_change_point_window: 30 # optional parameter
```

**NOTE**: `test_target` is optional. It is used to identify the test that caused the regression.

**Note**: If the source is **BigQuery**, the `metrics_dataset`, `metrics_table`, `project` and `metric_name` should match the values defined for the performance/load tests.
The above example uses this [test configuration](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
to fill in the values required to fetch the data from the source.

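As a sketch, a config entry like the one above could be loaded and validated as shown below. The file name, the set of required keys, and the defaults are assumptions for illustration; the analyzer has its own loader.

```python
# Illustrative loader for the test config file (not the analyzer's own code).
import yaml

# BigQuery-related fields treated as required in this sketch.
REQUIRED_KEYS = {"test_name", "source", "metrics_dataset", "metrics_table",
                 "project", "metric_name"}

def load_test_configs(path: str) -> dict:
    with open(path) as f:
        configs = yaml.safe_load(f)
    for test_id, config in configs.items():
        missing = REQUIRED_KEYS - config.keys()
        if missing:
            raise ValueError(f"{test_id} is missing required keys: {missing}")
        # Fall back to example defaults when the optional parameters are absent.
        config.setdefault("min_runs_between_change_points", 3)
        config.setdefault("num_runs_in_change_point_window", 30)
    return configs

configs = load_test_configs("tests_config.yaml")
print(configs["test_1"]["metric_name"])
```
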
### Different ways to avoid false positive change points

**min_runs_between_change_points**: As the metric data evolves across runs, the change point analysis can place the
change point in a slightly different position. Such change points refer to the same regression and are just noise.
When a new change point is found, the analyzer searches up to `min_runs_between_change_points` runs in both directions from the
current change point. If an existing change point is found within that distance, the current change point is
suppressed.

**num_runs_in_change_point_window**: This defines how many runs, counted back from the most recent run, fall inside the change point window.
Sometimes the change point found might be far back in time and therefore irrelevant. If a change point should be
reported only when it was observed within the last 7 runs,
setting `num_runs_in_change_point_window=7` achieves this.

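A rough sketch of how these two parameters can act as filters, mirroring the behaviour described above (this is not the analyzer's actual code):

```python
# Illustrative filters for suppressing false positive change points.
def is_in_change_point_window(change_point_index: int,
                              total_runs: int,
                              num_runs_in_change_point_window: int) -> bool:
    """True if the change point falls within the last N runs."""
    return total_runs - change_point_index <= num_runs_in_change_point_window

def is_duplicate_change_point(change_point_index: int,
                              existing_change_point_indices: list,
                              min_runs_between_change_points: int) -> bool:
    """True if an existing change point lies within the minimum distance."""
    return any(
        abs(change_point_index - existing) <= min_runs_between_change_points
        for existing in existing_change_point_indices)

# Example: a change point at run 95 out of 100 runs, with a previous change
# point already recorded at run 93.
print(is_in_change_point_window(95, 100, 7))   # True: within the last 7 runs
print(is_duplicate_change_point(95, [93], 3))  # True: suppressed as noise
```
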
## Register a test for performance alerts

If a new test needs to be registered for the performance alerting tool, please add the required test parameters to the
config file.

## Triage performance alert issues

All the performance/load test metrics defined at [beam/.test-infra/jenkins](https://github.com/apache/beam/tree/master/.test-infra/jenkins) are imported into [Grafana dashboards](http://104.154.241.245/d/1/getting-started?orgId=1) for visualization. Please
find the dashboard of the alerted test and look for a spike in the metric values.

For example, for the configuration below,
* test_target: `apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks`
* metric_name: `mean_load_model_latency_milli_secs`

the Grafana dashboard can be found at http://104.154.241.245/d/ZpS8Uf44z/python-ml-runinference-benchmarks?orgId=1&viewPanel=7

If the dashboard for a test is not found, you can use the
notebook `analyze_metric_data.ipynb` to generate a plot for the given test and metric_name.

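As another fallback, the metric data can be pulled straight from BigQuery and plotted. In the sketch below, the column names (`timestamp`, `metric_value`) and the `metric_name` filter are assumptions about the metrics table layout; adjust them to the actual schema.

```python
# Quick plot of a metric series from BigQuery (column names are assumptions).
from google.cloud import bigquery
import matplotlib.pyplot as plt

client = bigquery.Client(project="apache-beam-testing")
query = """
    SELECT timestamp, metric_value
    FROM `apache-beam-testing.beam_run_inference.torch_inference_imagenet_results_resnet152`
    WHERE metric_name = 'mean_load_model_latency_milli_secs'
    ORDER BY timestamp
"""
df = client.query(query).to_dataframe()

plt.plot(df["timestamp"], df["metric_value"])
plt.xlabel("run timestamp")
plt.ylabel("mean_load_model_latency_milli_secs")
plt.show()
```
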
If you confirm there is a change in the pattern of the values for a test, find the timestamp of when that change happened
and use that timestamp to find the possible culprit commit.

If the performance alert is a `false positive`, close the issue as `Close as not planned`.