Transparent AI: explainability of deep learning based load disaggregation

Research output: Contribution to conference › Proceeding › peer-review


Abstract

The paper focuses on explaining the outputs of deep-learning-based non-intrusive load monitoring (NILM). Explainability of NILM networks is needed by a range of stakeholders: (i) technology developers, to understand why a model is under- or over-predicting energy usage, missing appliances, or producing false positives; (ii) businesses offering energy advice based on NILM as part of a broader home energy management recommender system; and (iii) end-users, who need to understand the outcomes of the NILM inference.
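One common family of post-hoc explanation techniques for NILM-style models is perturbation-based attribution: occlude parts of the aggregate-load input window and observe how the appliance-power prediction changes. The sketch below is purely illustrative and is not the method from the paper; the `toy_nilm_model` is a hypothetical stand-in for a trained network, and the patch size and zero baseline are arbitrary assumptions.

```python
import numpy as np

def toy_nilm_model(window):
    # Hypothetical stand-in for a trained seq2point-style NILM network:
    # maps an aggregate-load window to one appliance-power estimate.
    # Here it simply weights timesteps near the window centre most.
    centre = window.size // 2
    weights = np.exp(-0.5 * ((np.arange(window.size) - centre) / 5.0) ** 2)
    return float(np.dot(window, weights) / weights.sum())

def occlusion_saliency(model, window, patch=10, baseline=0.0):
    """Attribute the prediction to input timesteps by occluding patches.

    Each patch of `patch` timesteps is replaced by `baseline`; the
    absolute change in the model output is that patch's saliency.
    """
    base_pred = model(window)
    saliency = np.zeros_like(window)
    for start in range(0, window.size, patch):
        occluded = window.copy()
        occluded[start:start + patch] = baseline
        saliency[start:start + patch] = abs(base_pred - model(occluded))
    return saliency

# Synthetic aggregate-load window (watts), for demonstration only.
rng = np.random.default_rng(0)
window = rng.uniform(0, 2000, size=100)
sal = occlusion_saliency(toy_nilm_model, window)
```

For this toy model the saliency map is highest near the window centre, matching where the model actually draws its signal from; on a real NILM network the same procedure would highlight which portions of the aggregate signal drove a given appliance detection, which is the kind of evidence the stakeholder groups above would inspect.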
Original language: English
Pages: 268-271
Number of pages: 4
Publication status: Published - 17 Nov 2021
Event: The 1st ACM SIGEnergy Workshop of Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems - Coimbra, Portugal
Duration: 17 Nov 2021 - 18 Nov 2021
https://fatesys.github.io/2021/

Workshop

Workshop: The 1st ACM SIGEnergy Workshop of Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems
Abbreviated title: FATEsys 2021
Country/Territory: Portugal
City: Coimbra
Period: 17/11/21 - 18/11/21
Internet address: https://fatesys.github.io/2021/

Keywords

  • datasets
  • neural networks
  • reliability
  • validation
