XNILMBoost: explainability-informed load disaggregation training enhancement using attribution priors

Djordje Batic*, Vladimir Stankovic, Lina Stankovic

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

In the ongoing energy transition, characterised by increased reliance on distributed renewable sources and smart grid technologies, the need for advanced and trustworthy artificial intelligence (AI) in energy management systems is crucial. Non-intrusive load monitoring (NILM), a method for inferring individual appliance energy consumption from aggregate smart meter data, has gained prominence for enhancing energy efficiency. However, the advanced deep neural network models used in NILM, while effective, raise transparency and trust concerns due to their complexity. This paper introduces a novel explainability-informed NILM training framework, specifically designed for low-frequency NILM. Our approach aligns with principles for trustworthy AI, focusing on human agency and oversight, technical robustness, and transparency, by incorporating explainability directly into the training phase of a NILM model. We propose a novel iterative, explainability-informed NILM training algorithm that uses attribution priors to guide model optimisation, and implement and evaluate the framework across multiple state-of-the-art NILM architectures based on convolutional, recurrent, and dilated causal layers. We introduce a novel Robustness-Trust metric to measure joint improvement in predictive and explainability performance, utilising the explainability metrics of faithfulness, robustness, and effective complexity while analysing model predictive performance against NILM-specific regression and classification metrics. Results broadly show that robust models achieve better explainability, while explainability-enhanced models can lead to improved model robustness. Together, our results demonstrate significant improvements in the robustness and transparency of NILM systems across various appliances, model architectures, measurement scales, types of buildings, and energy usage patterns. This work paves the way for more transparent and trustworthy deployments in AI-driven energy systems.
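The abstract describes training guided by attribution priors: a penalty on the model's input attributions is added to the usual prediction loss, so that optimisation improves explanations alongside accuracy. The following is a minimal, hypothetical sketch of that general idea on a toy linear disaggregator; it is not the paper's XNILMBoost algorithm, and the smoothness prior, model, and data here are illustrative assumptions only.

```python
import numpy as np

def attribution_prior_loss(w, x, y, lam=0.1):
    """Toy disaggregation loss: MSE on appliance power plus an
    attribution prior. For this linear model, gradient*input
    attributions reduce to w * x; the (assumed) prior penalises
    abrupt changes in attribution between neighbouring timesteps."""
    mse = np.mean((x @ w - y) ** 2)
    attr = w * x                              # per-timestep attributions
    prior = np.mean(np.diff(attr, axis=1) ** 2)
    return mse + lam * prior

def num_grad(f, w, eps=1e-5):
    """Central-difference gradient, kept simple for the sketch."""
    g = np.zeros_like(w)
    for i in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        g[i] = (f(wp) - f(wm)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))                  # synthetic aggregate-power windows
y = x @ np.linspace(0.0, 1.0, 8)              # synthetic appliance target
w = np.zeros(8)
loss = lambda v: attribution_prior_loss(v, x, y)
start = loss(w)
for _ in range(300):                          # attribution-regularised descent
    w = w - 0.05 * num_grad(loss, w)
end = loss(w)
```

Minimising the combined objective drives both the prediction error and the attribution penalty down, which is the mechanism the framework exploits; in the paper this is applied iteratively to deep NILM architectures rather than a linear toy.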
Original language: English
Article number: 109766
Number of pages: 18
Journal: Engineering Applications of Artificial Intelligence
Volume: 141
Early online date: 13 Dec 2024
DOIs
Publication status: E-pub ahead of print - 13 Dec 2024

Keywords

  • explainable deep learning
  • load disaggregation
  • non-intrusive load monitoring
  • trustworthy artificial intelligence

