The paper focuses on explaining the outputs of deep-learning-based non-intrusive load monitoring (NILM). Explainability of NILM networks is needed by a range of stakeholders: (i) technology developers, to understand why a model is under- or over-predicting energy usage, missing appliances, or producing false positives; (ii) businesses offering energy advice based on NILM as part of a broader home energy management recommender system; and (iii) end-users, who need to understand the outcomes of the NILM inference.
|Number of pages||4|
|Publication status||Published - 17 Nov 2021|
|Event||The 1st ACM SIGEnergy Workshop of Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems - Coimbra, Portugal|
|Duration||17 Nov 2021 → 18 Nov 2021|
|Workshop||The 1st ACM SIGEnergy Workshop of Fair, Accountable, Transparent, and Ethical AI for Smart Environments and Energy Systems|
|Abbreviated title||FATEsys 2021|
|Period||17/11/21 → 18/11/21|
- neural networks