Explainable NILM Networks

Research output: Contribution to conference › Proceeding › peer-review


Abstract

There has recently been an explosion in the literature on nonintrusive load monitoring (NILM) approaches based on neural networks and other advanced machine learning methods. However, although these methods provide competitive accuracy, the inner workings of the models are less clear. Understanding the outputs of the networks helps in improving the designs, highlights the features and aspects of the data relevant to the decision, provides a better picture of the accuracy of the models (since a single accuracy number is often insufficient), and inherently provides a level of trust in the value of the consumption feedback given to the NILM end-user. Explainable Artificial Intelligence (XAI) aims to address this issue by explaining these "black boxes". XAI methods, developed for image- and text-based models, can in many cases interpret the outputs of complex models well, making them transparent. However, explaining inference on time-series data remains a challenge. In this paper, we show how some XAI-based approaches can be used to explain the inner workings of NILM deep learning-based autoencoders, and examine why the network performs well or poorly in certain cases.
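The abstract does not specify which XAI techniques the paper applies, but a common perturbation-based approach for time-series models is occlusion sensitivity: mask successive patches of the input window and measure how much the model's output changes. The sketch below is a minimal, hypothetical illustration of that idea — `toy_nilm_model` is a stand-in for a trained NILM autoencoder, not the authors' network — showing how salient time steps in an aggregate power window can be located.

```python
import numpy as np

def toy_nilm_model(window):
    """Hypothetical stand-in for a trained NILM disaggregator: returns the
    estimated energy of a high-power appliance within the window (here,
    simply the power above a 2000 W threshold)."""
    return np.where(window > 2000.0, window - 2000.0, 0.0).sum()

def occlusion_saliency(model, window, patch=10, baseline=0.0):
    """Occlusion sensitivity: replace each patch of the input with a
    baseline value and record the change in the model's output. Large
    changes mark the time steps the model relies on for its prediction."""
    base_out = model(window)
    saliency = np.zeros_like(window)
    for start in range(0, len(window), patch):
        occluded = window.copy()
        occluded[start:start + patch] = baseline
        saliency[start:start + patch] = abs(base_out - model(occluded))
    return saliency

# Synthetic aggregate signal: 300 W base load with a 2.5 kW appliance
# activation between time steps 40 and 60.
agg = np.full(100, 300.0)
agg[40:60] = 2500.0
sal = occlusion_saliency(toy_nilm_model, agg, patch=10)
print(sal.argmax())  # most salient region falls inside the activation
```

With a real network, `toy_nilm_model` would be replaced by a forward pass of the trained autoencoder; the occlusion loop itself is model-agnostic, which is what makes this family of XAI methods attractive for inspecting NILM models.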
Original language: English
Number of pages: 6
Publication status: E-pub ahead of print - 18 Nov 2020
Event: 5th International Workshop on Non Intrusive Load Monitoring - Virtual, Yokohama, Japan
Duration: 18 Nov 2020 → 18 Nov 2020
http://nilmworkshop.org/2020/

Workshop

Workshop: 5th International Workshop on Non Intrusive Load Monitoring
Abbreviated title: NILM 2020
Country: Japan
City: Yokohama
Period: 18/11/20 → 18/11/20
Internet address: http://nilmworkshop.org/2020/

Keywords

  • neural networks
  • explainable artificial intelligence
  • XAI
  • interpretable machine learning

