Explainable AI signal diagnosis for AI-based statistical process monitoring
Conference: 65th ISI World Statistics Congress 2025
Format: CPS Abstract - WSC 2025
Keywords: artificial intelligence, statistical process monitoring
Session: CPS 78 - AI and Machine Learning in Statistics
Monday 6 October 5:10 p.m. - 6:10 p.m. (Europe/Amsterdam)
Abstract
AI techniques have excelled in many fields, including statistical process monitoring (SPM). Although existing AI-based SPM methods show promising detection ability in high-dimensional scenarios, post-signal diagnosis for black-box models is often ignored. The lack of explainability makes it difficult to devise an out-of-control action plan and take appropriate corrective actions after a signal. Explainable AI (XAI) techniques, such as feature attribution and counterfactual explanations, can help decision-makers investigate these signals. Feature attribution explains which features contribute most to the model's output, and counterfactual explanations indicate which minimal changes to the features (perturbations) make the difference between an in-control and an out-of-control observation. This paper proposes a framework integrating XAI signal diagnosis with AI-based monitoring methods. The applicability of the proposed framework is validated through experiments that generate explanations on simulated and real data.
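
To make the two diagnosis ideas concrete, the minimal sketch below illustrates them on a toy monitoring task. The black-box monitor (scikit-learn's IsolationForest), the perturbation-to-the-mean attribution scheme, and the greedy counterfactual search are illustrative assumptions for this sketch, not the specific methods of the paper.

```python
# Illustrative sketch of XAI signal diagnosis for a black-box SPM model:
# (1) feature attribution and (2) a counterfactual explanation for a signal.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# In-control (Phase I) data: a 5-dimensional process, standard normal.
X_ic = rng.normal(size=(500, 5))
monitor = IsolationForest(random_state=0).fit(X_ic)

# An out-of-control observation: mean shift in feature 2.
x_ooc = rng.normal(size=5)
x_ooc[2] += 4.0

# --- Feature attribution: how much does replacing each feature with its
# in-control mean restore the anomaly score? Larger gain => larger contribution.
base_score = monitor.score_samples(x_ooc.reshape(1, -1))[0]
attribution = np.zeros(5)
for j in range(5):
    x_tmp = x_ooc.copy()
    x_tmp[j] = X_ic[:, j].mean()
    attribution[j] = monitor.score_samples(x_tmp.reshape(1, -1))[0] - base_score
print("feature attributions:", np.round(attribution, 3))

# --- Counterfactual explanation: greedily move the most influential feature
# toward its in-control mean until the observation is no longer flagged.
threshold = np.quantile(monitor.score_samples(X_ic), 0.01)  # signal if below
x_cf = x_ooc.copy()
remaining = attribution.copy()
for _ in range(5):
    if monitor.score_samples(x_cf.reshape(1, -1))[0] >= threshold:
        break
    j = int(np.argmax(remaining))      # most influential feature first
    x_cf[j] = X_ic[:, j].mean()
    remaining[j] = -np.inf             # do not pick the same feature again
print("counterfactual change:", np.round(x_cf - x_ooc, 3))
```

In this toy case the attribution is largest for the shifted feature, and the counterfactual changes only that feature to return the observation to the in-control region, which is the kind of post-signal information the proposed framework aims to provide.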