Metric Analysis

Before going through this article, we recommend learning how to create metric templates.

The goal of metric analysis is to score and compare all the APM and infrastructure (INFRA) metrics of the baseline (old release) against the canary (new release), giving the user a clear indication of whether the canary passes or fails based on the overall average metric score. You can then decide whether or not to promote the application to production based on this score.

The comparison is carried out in a structured manner: each metric of the old release is compared with the corresponding metric of the new release, and the analysis is scored by averaging the individual metric scores.
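The flow described above can be sketched as follows. This is a minimal illustration, not the product's actual algorithm: the function and metric names are hypothetical, and the toy comparison (mean deviation beyond a threshold) stands in for the statistical tests described later in this article.

```python
# Hypothetical sketch: score each metric 0 (fail) or 100 (pass) by
# comparing baseline samples with canary samples, then average the
# per-metric scores into one overall canary score.
def score_metric(baseline, canary, threshold=0.2):
    """Toy comparison: fail if the canary mean deviates from the
    baseline mean by more than `threshold` (20% here); the real
    analysis applies several statistical tests instead."""
    base_mean = sum(baseline) / len(baseline)
    canary_mean = sum(canary) / len(canary)
    if base_mean == 0:
        return 100 if canary_mean == 0 else 0
    return 100 if abs(canary_mean - base_mean) / base_mean <= threshold else 0

def overall_score(metrics):
    """`metrics` maps a metric name to (baseline samples, canary samples)."""
    scores = {name: score_metric(b, c) for name, (b, c) in metrics.items()}
    return sum(scores.values()) / len(scores), scores

avg, per_metric = overall_score({
    "cpu": ([50, 52, 51], [53, 54, 52]),               # ~4% higher: pass
    "latency_ms": ([100, 110, 105], [180, 190, 185]),  # ~76% higher: fail
})
print(avg)  # average of 100 and 0 -> 50.0
```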

Critical Metrics

User-defined metrics that represent the KPIs of service behavior. The Continuous Verification test fails if any metric tagged as critical fails in the canary test. If the user does not tag any metrics as critical, the system treats all metrics equally and ranks them algorithmically.
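The rule above can be expressed as a short sketch. The result-record shape and function name here are illustrative assumptions, not the product's API:

```python
# Sketch, assuming per-metric results shaped like
# {"name": ..., "score": 0 or 100, "critical": bool}.
def verification_passes(results, pass_threshold=75):
    # Any failed metric tagged as critical fails the whole test.
    if any(r["critical"] and r["score"] == 0 for r in results):
        return False
    # Otherwise fall back to the overall average score
    # (the 75 threshold is an illustrative assumption).
    avg = sum(r["score"] for r in results) / len(results)
    return avg >= pass_threshold

results = [
    {"name": "error_rate", "score": 0, "critical": True},
    {"name": "cpu", "score": 100, "critical": False},
]
print(verification_passes(results))  # False: a critical metric failed
```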

Watchlist Metrics

User-defined metrics that represent intuitive performance measures. These metrics are used for filtering when presenting results with a large number of analyzed metrics.

Interpret the score

An intermediate file containing all the scoring details is attached below:

In the above file, each metric is scored based on the three tests that we perform. The metric's score is given in the Score column, and the MetricWeight column gives the weight assigned to each metric; the final score, which is the weighted score, is calculated from these two columns.
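The weighted final score can be sketched as a weighted average over the Score and MetricWeight columns. The row structure and weights below are illustrative, not taken from the actual file:

```python
# Sketch of the weighted final score, assuming rows carry the Score and
# MetricWeight columns described above.
def weighted_final_score(rows):
    total_weight = sum(r["MetricWeight"] for r in rows)
    return sum(r["Score"] * r["MetricWeight"] for r in rows) / total_weight

rows = [
    {"Metric": "latency", "Score": 0, "MetricWeight": 2.0},
    {"Metric": "cpu", "Score": 100, "MetricWeight": 1.0},
]
print(weighted_final_score(rows))  # (0*2 + 100*1) / 3 = 33.33...
```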

The individual scores of the three tests are given in the score.per (percentage score), score.wilcox (Wilcoxon test score), and quantile score columns. The score of each metric is either 0 (fail) or 100 (pass).
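To make the score.wilcox column concrete, here is a toy Wilcoxon rank-sum (Mann-Whitney) test that maps a baseline/canary comparison to 0 or 100. This is a textbook normal-approximation sketch for illustration; the product's actual implementation and significance level may differ:

```python
from statistics import NormalDist

def wilcoxon_rank_sum_score(baseline, canary, alpha=0.05):
    """Toy Wilcoxon rank-sum test: returns 100 (pass) when the two
    samples are not significantly different, else 0 (fail).
    Normal approximation, no tie correction; for illustration only."""
    n1, n2 = len(baseline), len(canary)
    combined = sorted((v, i) for i, v in enumerate(baseline + canary))
    # Assign 1-based ranks, averaging ranks over ties.
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    # Mann-Whitney U statistic for the baseline sample.
    r1 = sum(ranks[:n1])
    u = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return 100 if p >= alpha else 0

# Clearly separated samples fail; interleaved samples pass.
print(wilcoxon_rank_sum_score([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))  # 0
print(wilcoxon_rank_sum_score([1, 3, 5, 7, 9], [2, 4, 6, 8, 10]))  # 100
```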

Other relevant information, such as the correlation coefficient between load and the corresponding metric, is given in the corr.coeff column, and the percentage difference between the two releases is given in the percent.diff column.
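These two columns can be computed with standard formulas: Pearson correlation for corr.coeff and the relative change in means for percent.diff. The function names and sample values here are illustrative assumptions:

```python
# Sketch of the two auxiliary columns: Pearson correlation between the
# load series and a metric series (corr.coeff), and the percentage
# difference between baseline and canary means (percent.diff).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def percent_diff(baseline, canary):
    base_mean = sum(baseline) / len(baseline)
    canary_mean = sum(canary) / len(canary)
    return (canary_mean - base_mean) / base_mean * 100

load = [10, 20, 30, 40]
latency = [100, 120, 140, 160]
print(pearson(load, latency))                # ~1.0: latency tracks load linearly
print(percent_diff([100, 120], [121, 143]))  # ~20.0: canary mean is 20% higher
```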

One more criterion is the presence of Critical metrics in the data: if at least one metric tagged as Critical fails, the overall score is given as 0.

The final file, which gives the individual score of every metric along with the weighted final score, is located under the APM folder in the folder structure.
