Interpreting the score
Overall Score
The score computed for a canary comparison is based on the differences in the clusters found and their classification. The overall score can also include a score from metric analysis, with individual metric differences and weights assigned to the metrics based on service characteristics. The score threshold determines whether the canary comparison passes or fails.
Log Clusters
Log clusters display an overview of the clusters analyzed. As the labels indicate, the clusters are classified as “Critical”, “Error”, “Warning”, or “Generic Events” based on the template specification and pattern analysis. Each cluster is given a weight based on its classification, which is reflected in the risk score computed for the analysis. The lower the score, the higher the perceived risk of the service under test.
The analysis summary section displays the key attributes used in the risk analysis run.
The identifier, start time, end time, and size of logs are defined for both the baseline and the new release. The template used in the analysis, the time taken for the analysis, and the status of the analysis are the other key details.
Analysis summary key attributes
Identifiers - ‘Baseline’ and ‘New Release’ are the identifiers used to extract the events from the data source.
‘Template’ - The template used in this risk analysis run. It is a clickable link that opens the template in edit mode.
Log file attributes - The size of the logs and the number of events in each release.
‘Canary analysis duration’ - The time taken for the risk analysis run.
‘Reclassification duration’ - The time taken to rerun the analysis after the user reclassifies events.
‘Interval Minutes’ - If the larger risk analysis was broken into multiple intervals, the duration of each interval in minutes.
‘Algorithm’ - The clustering algorithm used in this risk analysis.
Threshold (default) - Clustering is based on a learned threshold.
LCS - A Longest Common Subsequence-based algorithm.
‘Regular Expression’ - The RegEx used to filter the events of this run.
‘View Logs’ - A link to the raw logs used in this analysis.
The raw logs used in this analysis can be accessed with the “View Logs” link. The log viewer URL must be pre-configured in the data source section for this link to work. For example, if the data source is EFK/ELK, the Kibana URL should be configured in the DataSource setup section.
Log scoring formula, sensitivity, and guidelines for selection
The base score weights are based on the sensitivity level - high, medium, or low. The higher the sensitivity, the higher the weights assigned to errors and warnings in the analysis. The base score is calculated as outlined below:
numerator = (number of critical errors in v2_unique) × x1 + (number of errors in v2_unique) × x2 + (number of warnings in v2_unique) × x3 + (number of clusters in v1_unique) × x4
denominator = total number of clusters in v1_unique and v1v2
Score = 100 − (numerator / denominator)
The values of x1, x2, x3, and x4 are adjusted based on the selected sensitivity; they are higher for High sensitivity than for Medium or Low sensitivity.
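A minimal sketch of the base-score formula in Python, assuming illustrative placeholder values for the weights x1 through x4 (the actual values per sensitivity level are internal to the product):

```python
# Sketch of the base-score formula. The weights x1..x4 are
# illustrative placeholders, not the product's actual values.
def base_score(critical_v2, errors_v2, warnings_v2, clusters_v1_unique,
               total_clusters, x1=3.0, x2=2.0, x3=1.0, x4=0.5):
    """total_clusters = number of clusters in v1_unique and v1v2 combined."""
    numerator = (critical_v2 * x1 + errors_v2 * x2 +
                 warnings_v2 * x3 + clusters_v1_unique * x4)
    return 100 - (numerator / total_clusters)

# Example: 0 critical errors, 2 errors, and 3 warnings unique to v2,
# 1 cluster unique to v1, 20 clusters in v1_unique and v1v2 combined.
print(base_score(0, 2, 3, 1, 20))  # 100 - (7.5 / 20) = 99.625
```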
Scoring reduction based on repetition of errors and warnings
In this part of the scoring, the base score is reduced based on the repetition of errors and warnings in v2-unique clusters.
Let's assume B is the base score and C is a cluster in V2 that was repeated N times. Based on the topic of the cluster (ERROR, WARN, or CRITICAL ERROR) and the sensitivity level, a reduction factor is assigned.
The base score B is updated using the following formula:
B = B − reduction_factor × ((0.9835)^0 + (0.9835)^1 + (0.9835)^2 + … + (0.9835)^(N−1))
Because this is a geometric series, the total reduction can be written in closed form as reduction_factor × (1 − 0.9835^N) / (1 − 0.9835), which approaches about 60.6 × reduction_factor as N grows.
By this formula, the base score B is updated for every cluster with topic ERROR, WARN, or CRITICAL ERROR to produce the final score. If the resulting score is negative, it is set to zero.
Reduction factors at different sensitivity levels:

                  HIGH    MEDIUM    LOW
CRITICAL ERRORS   0.32    0.20      0.12
ERRORS            0.08    0.05      0.03
WARNINGS          0.05    0.03      0.01
Note: In the current implementation, if there is any CRITICAL ERROR in V2 or V1V2, the final score is zero.
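A sketch of the repetition-based reduction, using the factors from the table above and summing the geometric series in closed form; the function name and structure are illustrative:

```python
# Reduction factors from the table above, keyed by topic and sensitivity.
REDUCTION_FACTORS = {
    "CRITICAL ERROR": {"HIGH": 0.32, "MEDIUM": 0.20, "LOW": 0.12},
    "ERROR":          {"HIGH": 0.08, "MEDIUM": 0.05, "LOW": 0.03},
    "WARN":           {"HIGH": 0.05, "MEDIUM": 0.03, "LOW": 0.01},
}

def apply_repetition_reduction(base_score, topic, sensitivity, repetitions):
    """Reduce the base score for a v2-unique cluster repeated N times."""
    r = 0.9835
    factor = REDUCTION_FACTORS[topic][sensitivity]
    # Closed form of factor * (r^0 + r^1 + ... + r^(N-1)).
    reduction = factor * (1 - r ** repetitions) / (1 - r)
    return max(base_score - reduction, 0)  # a negative score is set to zero

# Example: an ERROR cluster repeated 50 times at MEDIUM sensitivity.
print(apply_repetition_reduction(99.6, "ERROR", "MEDIUM", 50))  # ~97.9
```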
Cardinality considerations for Error events
When the number of occurrences of an event in the new release (v2) is substantially higher than in the baseline (v1), the error is considered critical and will cause the analysis to fail.
Ratio = (v2 occurrences) / (v1 occurrences)
If the topic of the cluster is ERROR or CRITICAL ERROR (the higher classification applies):
CRITICAL ERROR if ratio >= 3
ERROR if ratio >= 2
IGNORE if ratio < 2
If the topic of the cluster is WARN:
WARN if ratio >= 2
IGNORE if ratio < 2
These events are differentiated from regular unexpected events with a special upward-arrow icon.
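A sketch of these cardinality rules, assuming the cluster topic and occurrence counts are already known; the function name is illustrative:

```python
def classify_by_ratio(topic, v1_occurrences, v2_occurrences):
    """Reclassify a cluster when it occurs much more often in v2 than in v1."""
    ratio = v2_occurrences / v1_occurrences
    if topic in ("ERROR", "CRITICAL ERROR"):
        if ratio >= 3:
            return "CRITICAL ERROR"
        if ratio >= 2:
            return "ERROR"
        return "IGNORE"
    if topic == "WARN":
        return "WARN" if ratio >= 2 else "IGNORE"
    return topic

print(classify_by_ratio("ERROR", v1_occurrences=10, v2_occurrences=35))
# -> CRITICAL ERROR (ratio 3.5 >= 3)
```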
Impact on score
Impact on score calculation for a cluster:
W = weight assigned to the cluster topic at a given sensitivity level
L1 = number of unique clusters in V1
L12 = number of unique clusters in V1V2
C = total number of dark red, red, and yellow clusters in both V2 and V1V2
N = (L1 × 10) / (L1 + L12)
Impact on score for a cluster = (W / (L1 + L12)) + (N / C) + score reduction based on the number of repetitions
The score reduction is calculated only for non-green clusters in V2 and V1V2.
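A sketch of the per-cluster impact calculation, following the definitions above; repetition_reduction stands for the value computed in the repetition section:

```python
def cluster_impact(w, l1, l12, c, repetition_reduction):
    """Impact on score for a single non-green cluster.

    w   -- weight for the cluster topic at the given sensitivity
    l1  -- number of unique clusters in V1
    l12 -- number of unique clusters in V1V2
    c   -- total dark red, red, and yellow clusters in V2 and V1V2
    """
    n = (l1 * 10) / (l1 + l12)
    return (w / (l1 + l12)) + (n / c) + repetition_reduction

# Example with illustrative values:
print(cluster_impact(w=2.0, l1=4, l12=16, c=5, repetition_reduction=1.7))
# -> 0.1 + 0.4 + 1.7 = 2.2
```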