TY - CONF
T1 - An approach to time series forecasting with derivative spike encoding and spiking neural networks
AU - Manna, Davide Liberato
AU - Di Caterina, Gaetano
AU - Vicente Sola, Alex
AU - Kirkland, Paul
PY - 2025/1/7
Y1 - 2025/1/7
AB - Timely and energy-efficient time series forecasting can play a key role on edge devices, where power requirements can be stringent. Spiking Neural Networks (SNNs) are regarded as a new avenue for solving time series problems with lower SWaP (Size, Weight, and Power) needs. We propose an SNN pipeline to process and forecast time series, developing a novel data spike-encoding mechanism and two loss functions that optimise the prediction of upcoming spikes. Our approach encodes a signal into sequences of spikes that approximate its derivative, preparing the data to be processed by the SNN, while our proposed loss functions account for the reconstruction of the output spikes into a meaningful value, promoting convergence to high-quality solutions. Results show that our solution can effectively learn from the encoded data and that the SNN trained with our loss functions can outperform the same model trained with SLAYER’s default loss.
KW - time series
KW - forecasting
KW - spiking neural networks
KW - neuromorphic
KW - differencing
KW - derivative
UR - https://hicss.hawaii.edu/
UR - https://hdl.handle.net/10125/109720
M3 - Conference contribution book
T3 - Proceedings of the Annual Hawaii International Conference on System Sciences
SP - 7258
EP - 7267
BT - Proceedings of the 58th Annual Hawaii International Conference on System Sciences, HICSS 2025
CY - Honolulu, HI
T2 - 58th Hawaii International Conference on System Sciences
Y2 - 7 January 2025 through 10 January 2025
ER -