Floating point Convolutional Neural Networks (CNNs) are computationally expensive, and deeper networks can be impractical to deploy on FPGAs, consuming large amounts of resources and power and incurring lengthy development times. Previous work has shown that CNNs can be quantised heavily using fixed point arithmetic to combat this without significant loss in classification accuracy. We aim to quantise an existing CNN architecture for radio modulation classification to 2-bit weights and activations, while retaining a level of accuracy close to the original paper, for deployment on a Zynq System on Chip (SoC). To improve the development time for hardware synthesisable CNNs, we make use of MATLAB System Objects and HDL Coder. The PYNQ framework is presented as a practical means for accessing the functionality of the CNN. Our preliminary results show a high classification accuracy even with 2-bit weights and activations.
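To illustrate the kind of quantisation the abstract refers to, here is a minimal NumPy sketch of symmetric uniform 2-bit quantisation. The level set {-2, -1, 0, 1} and the fixed scale factor are assumptions for illustration, not the paper's exact fixed-point scheme:

```python
import numpy as np

def quantise_2bit(x, scale):
    # Map each real value to one of the four 2-bit two's-complement
    # levels {-2, -1, 0, 1}, then rescale back to the real domain.
    # 'scale' is an assumed per-tensor scale factor, chosen offline.
    q = np.clip(np.round(x / scale), -2, 1)
    return q * scale

# Example: quantise a small set of floating point weights.
w = np.array([-0.9, -0.3, 0.05, 0.6])
wq = quantise_2bit(w, scale=0.5)
print(wq)  # [-1.  -0.5  0.   0.5]
```

In a 2-bit network, both the weights and the activations would pass through a function like this, so multiplications reduce to operations on 2-bit integers, which map efficiently onto FPGA logic.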
|Number of pages||2|
|Publication status||Published - 9 Sep 2019|
|Event||29th International Conference on Field-Programmable Logic and Applications - Barcelona Supercomputing Center and Universitat Politècnica de Catalunya, Barcelona, Spain|
Duration: 9 Sep 2019 → 11 Sep 2019
|Conference||29th International Conference on Field-Programmable Logic and Applications|
|Abbreviated title||FPL 2019|
|Period||9/09/19 → 11/09/19|
- deep learning (DL)
- machine learning
- neural networks
Maclellan, A., McLaughlin, L., Crockett, L. & Stewart, R. W., 10 Sep 2019. 1 p.
Research output: Contribution to conference › Poster
Maclellan, A., McLaughlin, L., Crockett, L., & Stewart, R. W. (2019). FPGA accelerated deep learning radio modulation classification using MATLAB system objects & PYNQ. Paper presented at 29th International Conference on Field-Programmable Logic and Applications, Barcelona, Spain.