Abstract
With the rise of edge computing and the Internet of Things (IoT), there is an increasing demand for models with low memory footprints. These models must be adaptable to embedded system applications, while being able to leverage the large quantities of data recorded in these systems to produce superior performance.
Automatic Neural Architecture Search (NAS) has been an active and successful area of research for a number of years. However, a significant proportion of effort has been aimed at finding architectures which are able to effectively extract and transform the information in image data. This has led to search-space design which is heavily influenced by the heuristics of image classifiers.
We review and incorporate the characteristics of successful time-series methods, while seeking to address traits of conventional NAS search-space design which may be detrimental to performance on time-series.
This paper provides an in-depth look at the effects of each of our design choices with an analysis of time-series network design spaces on two benchmark tasks: Human Activity Recognition (HAR) using the UniMib-SHAR dataset and Electroencephalography (EEG) data from the BCI Competition IV 2a dataset.
Guided by these design principles and the results of our experimental procedure, we produce a search space tailored specifically to time-series tasks. This achieves excellent performance while producing architectures with significantly fewer parameters than other deep learning approaches.
We provide results on a collection of datasets from the UEA Multivariate Time Series Classification Archive and achieve comparable performance to both deep learning and state-of-the-art machine learning time-series classification methods, using a simple random search.
Original language | English |
---|---|
Title of host publication | Advanced Analytics and Learning on Temporal Data |
Subtitle of host publication | 8th ECML PKDD Workshop |
Editors | Georgiana Ifrim, Romain Tavenard, Anthony Bagnall, Patrick Schaefer, Simon Malinowski, Thomas Guyet, Vincent Lemaire |
Place of Publication | Cham, Switzerland |
Publisher | Springer |
Pages | 190–204 |
Number of pages | 15 |
ISBN (Electronic) | 9783031498961 |
ISBN (Print) | 9783031498954 |
DOIs | |
Publication status | Published - 20 Dec 2023 |
Publication series
Name | Lecture Notes in Computer Science |
---|---|
Publisher | Springer |
Volume | 14343 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Keywords
- Internet of Things
- edge computing
- Automatic Neural Architecture Search (NAS)
- memory footprint