Using explainability tools to inform NILM algorithm performance: a decision tree approach

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution book


Abstract

Over the years, Non-Intrusive Load Monitoring (NILM) research has focused on improving performance and, more recently, on generalizing across distinct datasets. However, the trustworthiness of the NILM model itself has hardly been addressed. It therefore becomes important to provide a reasoning or explanation behind the predicted outcome of NILM models, especially since machine learning models for NILM are often treated as black boxes. With such an explanation, the models can not only be improved but can also build trust for wider adoption across various applications. This paper demonstrates how explainability tools can be used to explain the outcomes of a decision tree multi-class classification approach for NILM, and how model explainability leads to improved feature selection and, ultimately, improved performance.
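The abstract's loop of explainability informing feature selection can be illustrated with a minimal sketch. This does not reproduce the paper's pipeline or data; the synthetic "appliance signature" features and thresholds below are assumptions. A decision tree is inherently explainable through its feature importances, and dropping features the tree does not use is one simple form of the feedback the abstract describes.

```python
# Hedged sketch (not the paper's implementation): use a decision tree's
# feature importances as an explainability signal to prune uninformative
# features in a NILM-style multi-class appliance classifier.
# All feature names and data here are synthetic assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: active power separates three appliance classes,
# reactive power is correlated with it, two columns are pure noise.
n = 600
active_p = rng.normal(loc=[100, 500, 2000], scale=30, size=(n, 3)).reshape(-1)
labels = np.tile([0, 1, 2], n)  # three appliance classes
reactive_p = active_p * 0.3 + rng.normal(0, 10, size=active_p.shape)
noise1 = rng.normal(size=active_p.shape)
noise2 = rng.normal(size=active_p.shape)
X = np.column_stack([active_p, reactive_p, noise1, noise2])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# The importances explain which features the tree actually split on;
# retraining on only those features is the explainability -> feature
# selection -> performance loop sketched in the abstract.
importances = clf.feature_importances_
keep = importances > 0.01  # 0.01 is an arbitrary illustrative threshold
clf_small = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    X_tr[:, keep], y_tr)
print(importances.round(3), clf_small.score(X_te[:, keep], y_te))
```

The same idea extends to richer tools (e.g. SHAP values on tree models), but built-in importances already make the model's reasoning inspectable.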
Original language: English
Title of host publication: Proceedings of the 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery (ACM)
Pages: 368-372
Number of pages: 5
ISBN (Print): 9781450398909
DOIs
Publication status: Published - 11 Nov 2022
Event: BuildSys'22: 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation: 6th International Workshop on Non-Intrusive Load Monitoring - Boston, United States
Duration: 9 Dec 2022 - 11 Dec 2022
http://nilmworkshop.org/2022/

Publication series

Name: BuildSys '22
Publisher: Association for Computing Machinery (ACM)

Conference

Conference: BuildSys'22: 9th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation
Country/Territory: United States
City: Boston
Period: 9/12/22 - 11/12/22
Internet address: http://nilmworkshop.org/2022/

Keywords

  • NILM
  • decision tree
  • classification
  • explainability

