The growing adoption of artificial intelligence in critical domains such as healthcare, finance, and cybersecurity demands models that are not only accurate but also transparent and trustworthy. This research aims to develop Explainable AI (XAI) methods that provide clear, interpretable insights into model decision-making without sacrificing performance.
By combining model-agnostic explanation techniques with inherently interpretable architectures, the proposed approach seeks to enhance user trust, support regulatory compliance, and facilitate human–AI collaboration.
Evaluation
The study evaluates XAI methods across multiple application domains, assessing their effectiveness in improving transparency, supporting bias detection, and enabling actionable decision support. Key techniques explored include SHAP, LIME, Grad-CAM, and attention-based architectures, with particular focus on intrusion detection systems (IDS) and cyber-physical system (CPS) monitoring applications.
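To illustrate the model-agnostic attribution idea behind SHAP, the sketch below computes exact Shapley values by enumerating feature coalitions, replacing absent features with baseline values. The scoring function is a hypothetical stand-in for an IDS classifier, not a method from the study; in practice the SHAP library approximates these values efficiently for real models.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for prediction f(x) relative to a baseline.
    Features outside a coalition S are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    feats = list(range(n))
    for i in feats:
        others = [j for j in feats if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in feats]
                without_i = [x[j] if j in S else baseline[j] for j in feats]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear risk score standing in for an IDS model
model = lambda v: 2.0 * v[0] + 1.0 * v[1] - 0.5 * v[2]
x = [1.0, 3.0, 2.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(model, x, baseline))  # → [2.0, 3.0, -1.0]
```

For a linear model the attributions reduce to coefficient × (feature − baseline), and by the efficiency property they sum to f(x) − f(baseline); exact enumeration is exponential in the number of features, which is why sampling-based approximations are used at scale.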