Abstract
In recent years, the hardware implementation of neural networks, leveraging physical coupling and analog neurons, has substantially increased in relevance. Such nonlinear and complex physical networks provide significant advantages in speed and energy efficiency, but are potentially more susceptible to internal noise than digital emulations of such networks. In this work, we consider how additive and multiplicative Gaussian white noise at the neuronal level affects the accuracy of the network when it is applied to specific tasks and includes a softmax function in the readout layer. We adapt several noise-reduction techniques to the essential setting of classification tasks, which represent a large fraction of neural-network computing. We find that these adjusted concepts are highly effective in mitigating the detrimental impact of noise.
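The two noise types named in the abstract can be illustrated with a minimal sketch: a linear layer whose neuron outputs are perturbed by additive and multiplicative Gaussian white noise, followed by a softmax readout. All names, shapes, and noise strengths (`sigma_add`, `sigma_mul`) here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def noisy_layer(x, W, sigma_add=0.1, sigma_mul=0.1):
    """Analog-style layer: each neuron output is corrupted by
    multiplicative (gain) and additive (offset) Gaussian white noise."""
    a = np.tanh(x @ W)
    mul = 1.0 + sigma_mul * rng.standard_normal(a.shape)  # multiplicative noise
    add = sigma_add * rng.standard_normal(a.shape)        # additive noise
    return a * mul + add

# Toy forward pass: one noisy hidden layer, then a softmax readout.
x = rng.standard_normal((1, 4))
W_hidden = rng.standard_normal((4, 8))
W_out = rng.standard_normal((8, 3))
h = noisy_layer(x, W_hidden)
probs = softmax(h @ W_out)
print(probs.shape)  # class probabilities, summing to 1
```

Because the noise enters before the readout, repeated evaluations of the same input yield fluctuating class probabilities, which is the degradation the paper's mitigation techniques target.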
| Original language | English |
| --- | --- |
| Place of Publication | Ithaca, NY |
| Number of pages | 8 |
| DOIs | |
| Publication status | Published - 28 Nov 2024 |
Funding
This work was supported by the Agence Nationale de la Recherche (ANR-21-CE24-0018-02); the European Research Council (Consolidator Grant INSPIRE, 101044777); and the European Union Horizon research and innovation programme under the Marie Sklodowska-Curie Doctoral Training Networks (860830, POST DIGITAL).
Keywords
- neural networks
- analog neurons