Explainable Contextual Anomaly Detection: a focus on QCAD
Abstract
The integration of explainability techniques into anomaly detection systems is crucial for improving transparency and trust in decision-making processes across various application domains. Indeed, in a world increasingly reliant on data and algorithms, it is imperative that users understand how and why a decision was made by an automated system. This is particularly true in sensitive sectors such as cybersecurity, finance, and healthcare, where misinterpreting results could have serious consequences. Traditional anomaly detection methods often lack explanations, making it difficult for users to understand the decisions these systems make. To address this issue, model explainability is becoming a priority. In this paper, we conduct a detailed review of methods designed to enhance the explainability of anomaly detection models based on machine learning and artificial intelligence. We focus in particular on the criteria used to compare these methods: the position of the explainability with respect to the model, the genericity of the explanations, the type of model, etc. We then turn to a recent explainable contextual anomaly detection method based on quantile regression forests: QCAD, developed by Li et al. in 2023. This method provides explanations while taking the context into account, which is crucial for more effective anomaly detection in the many fields where context strongly influences normal behavior. Li et al. showed that QCAD often yields excellent results in terms of PRC-AUC. We tested this method on a real-world dataset called "Bodyfat" to evaluate the impact of context size on its performance. Moreover, we exploited the explainability layer of QCAD to investigate the reasons behind its mixed results on the Bodyfat dataset.
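To make the core idea concrete, the following is a minimal sketch, in Python with scikit-learn and NumPy, of quantile-regression-forest-based contextual anomaly scoring in the spirit of QCAD. It is not the authors' implementation: the pooling of leaf targets is a simplified variant of Meinshausen's quantile regression forest, and the synthetic data, context/behaviour split, and 90% interval threshold are illustrative assumptions.

```python
# Illustrative sketch (not the authors' QCAD code): context features are
# used to predict the conditional distribution of a behavioural attribute,
# and an observation is flagged when its behaviour falls outside the
# predicted conditional quantile interval.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_qrf(X_ctx, y_beh, n_trees=100, seed=0):
    """Fit a random forest and remember which leaf each training sample reaches."""
    rf = RandomForestRegressor(n_estimators=n_trees, min_samples_leaf=5,
                               random_state=seed)
    rf.fit(X_ctx, y_beh)
    # leaf_ids[i, t] = leaf reached by training sample i in tree t
    leaf_ids = rf.apply(X_ctx)
    return rf, leaf_ids, np.asarray(y_beh)

def conditional_quantiles(rf, leaf_ids, y_train, X_query, qs=(0.05, 0.5, 0.95)):
    """Empirical quantiles of training targets sharing a leaf with each query."""
    query_leaves = rf.apply(X_query)          # shape (n_query, n_trees)
    out = np.empty((len(X_query), len(qs)))
    for i, leaves in enumerate(query_leaves):
        # Pool, tree by tree, the training targets that land in the same leaf
        # as the query (a simplification of Meinshausen's leaf weighting).
        pooled = np.concatenate([y_train[leaf_ids[:, t] == leaf]
                                 for t, leaf in enumerate(leaves)])
        out[i] = np.quantile(pooled, qs)
    return out

# Hypothetical usage: synthetic stand-in for a dataset such as Bodyfat,
# with contextual features (e.g. age, height) and one behavioural attribute.
rng = np.random.default_rng(42)
X_ctx = rng.normal(size=(500, 4))
y_beh = 2.0 * X_ctx[:, 0] + rng.normal(scale=0.5, size=500)

rf, leaf_ids, y_train = fit_qrf(X_ctx, y_beh)
q = conditional_quantiles(rf, leaf_ids, y_train, X_ctx[:10])
lo, hi = q[:, 0], q[:, 2]
# Flag observations lying outside the central 90% conditional interval.
is_anomalous = (y_beh[:10] < lo) | (y_beh[:10] > hi)
print(is_anomalous)
```

Because the score is relative to the quantiles predicted from the context, the same behavioural value can be normal in one context and anomalous in another; the gap between the observed value and the conditional interval also offers a natural starting point for explanations.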