Abstract
Knowledge distillation (KD) is a machine learning technique widely used in recent years for domain adaptation and complexity reduction. It relies on a Student-Teacher mechanism to transfer the knowledge of a large, complex Teacher network into a smaller Student model. Given the inherent complexity of large Deep Neural Network (DNN) models and the need for deployment on edge devices with limited resources, complexity-reduction techniques have become an active topic in the Non-Intrusive Load Monitoring (NILM) community. Recent NILM literature has devoted increasing effort to domain adaptation and architecture reduction via KD. However, the mechanism behind the transfer of knowledge from the Teacher to the Student is not clearly understood. In this work, we address this issue by placing the KD NILM approach in a framework of explainable AI (XAI). We identify the main inconsistency in the transfer of explainable knowledge and exploit this information to propose a method for improving KD through explainability-guided learning. We evaluate our approach on a variety of appliances and domain-adaptation scenarios and demonstrate that resolving inconsistencies in the transfer of explainable knowledge can lead to improved predictive performance.
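The Student-Teacher transfer described in the abstract is commonly realised as a temperature-softened distillation loss. The sketch below is a generic illustration of that mechanism, not the authors' implementation; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    z = logits / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=2.0):
    """Hinton-style distillation loss: T^2-scaled KL divergence between
    the softened Teacher and Student output distributions."""
    p = softmax(teacher_logits, T)  # soft targets from the Teacher
    q = softmax(student_logits, T)  # Student predictions
    return (T ** 2) * float(np.sum(p * (np.log(p) - np.log(q))))

# A Student whose logits nearly match the Teacher's incurs a small loss:
teacher = np.array([3.0, 1.0, 0.2])
student = np.array([2.5, 1.2, 0.3])
loss = kd_loss(teacher, student)
```

During training this term is typically mixed with the ordinary task loss on ground-truth labels; the paper's contribution is to additionally align the *explanations* (XAI attributions) of Teacher and Student, which the generic loss above does not capture.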
| Original language | English |
|---|---|
| Title of host publication | ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
| Place of Publication | Piscataway, NJ. |
| Publisher | IEEE |
| ISBN (Print) | 9781728163277 |
| DOIs | |
| Publication status | Published - 4 Jun 2023 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 7 Affordable and Clean Energy
- SDG 9 Industry, Innovation, and Infrastructure
- SDG 16 Peace, Justice and Strong Institutions
Keywords
- non-intrusive load monitoring
- energy disaggregation
- knowledge distillation
- neural networks
- XAI
Fingerprint
Research topics of 'Improving knowledge distillation for non-intrusive load monitoring through explainability guided learning'.
Projects
- 1 Finished
- building GrEener and more sustainable soCieties by filling the Knowledge gap in social science and engineering responsible artificial intelligence co-creatiOn (GECKO), MSCA-ITN-2020
  Stankovic, V. (Principal Investigator) & Stankovic, L. (Co-investigator)
  European Commission - Horizon Europe + H2020
  1/01/21 → 30/06/25
  Project: Research
Research output
- 14 Citations
- 1 Article
- Knowledge distillation for scalable non-intrusive load monitoring
  Tanoni, G., Stankovic, L., Stankovic, V., Squartini, S. & Principi, E., 1 Mar 2024, In: IEEE Transactions on Industrial Informatics, 20(3), p. 4710-4721.
  Research output: Contribution to journal › Article › peer-review
  Open Access · 17 Citations (Scopus) · 88 Downloads (Pure)
Activities
- 1 Invited talk
- Trustworthy NILM in context of Demand Response and Sustainability
  Stankovic, L. (Keynote speaker)
  11 May 2023
  Activity: Talk or Presentation › Invited talk