Floating-point Convolutional Neural Networks (CNNs) are computationally expensive, and deeper networks can be impractical to deploy on FPGAs, consuming large amounts of resources and power and requiring lengthy development times. Previous work has shown that CNNs can be heavily quantised using fixed-point arithmetic to combat this without significant loss in classification accuracy. We aim to quantise an existing CNN architecture for radio modulation classification to 2-bit weights and activations, while retaining accuracy close to that of the original paper, for deployment on a Zynq System on Chip (SoC). To shorten the development time for hardware-synthesisable CNNs, we make use of MATLAB System Objects and HDL Coder. The PYNQ framework is presented as a practical means of accessing the functionality of the CNN. Our preliminary results show high classification accuracy even with 2-bit weights and activations.
|29th International Conference on Field-Programmable Logic and Applications
|9/09/19 → 11/09/19
- deep learning (DL)
- machine learning
- neural networks
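The 2-bit quantisation of weights and activations described in the abstract can be illustrated with a minimal sketch. This is a generic uniform 4-level quantiser in Python/NumPy; the function name, clipping range, and level spacing are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def quantize_2bit(x, scale=1.0):
    """Illustrative 2-bit (4-level) uniform quantiser.

    Clips x/scale to [-1, 1], then snaps each value to one of the
    four uniformly spaced levels {-1, -1/3, 1/3, 1}. The level set
    is an assumption; the paper's quantiser may differ.
    """
    x = np.clip(np.asarray(x, dtype=float) / scale, -1.0, 1.0)
    # Map [-1, 1] onto bin indices {0, 1, 2, 3}, then back to levels.
    q = np.round((x + 1.0) * 1.5) / 1.5 - 1.0
    return q * scale
```

For example, `quantize_2bit([0.9, -0.8, 0.2])` snaps each input to its nearest level, so every weight or activation can be stored in 2 bits, replacing floating-point multipliers with far cheaper fixed-point logic on the FPGA.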