Abstract
Over the past few years, various Convolutional Neural Network (CNN) based models have exhibited human-like performance on a range of image processing problems, and video understanding, action classification and gesture recognition have become a new stage for CNNs. The typical approach to video analysis uses a 2D CNN to extract feature maps from single frames and a 3D CNN or LSTM to merge spatiotemporal information; some approaches add an optical flow branch followed by post-hoc fusion. Performance is normally proportional to model complexity, so as accuracy has kept improving, the problem has evolved from accuracy alone to model size, computing speed and model availability. In this paper, we present a lightweight network architecture framework for learning spatiotemporal features from video. Our architecture merges long-term content into any network feature map, keeping the model as small and as fast as possible while maintaining accuracy. The accuracy achieved is 91.4%, along with an appreciable speed of 69.3 fps.
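For reference, the following is a minimal sketch of the typical per-frame 2D-CNN plus LSTM pipeline described in the abstract, assuming PyTorch; the layer sizes, feature dimensions and class count are illustrative placeholders and are not taken from the paper.

```python
import torch
import torch.nn as nn

class Frame2DCNN(nn.Module):
    """Per-frame 2D feature extractor (illustrative layer sizes)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                     # x: (B, 3, H, W)
        h = self.conv(x).flatten(1)           # (B, 64)
        return self.fc(h)                     # (B, feat_dim)

class CNNLSTMClassifier(nn.Module):
    """2D CNN on each frame, LSTM to merge spatiotemporal information."""
    def __init__(self, num_classes=10, feat_dim=128, hidden=256):
        super().__init__()
        self.backbone = Frame2DCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip):                  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))   # (B*T, feat_dim)
        feats = feats.view(b, t, -1)                # (B, T, feat_dim)
        _, (h_n, _) = self.lstm(feats)              # h_n: (1, B, hidden)
        return self.head(h_n[-1])                   # (B, num_classes)

# Usage: a batch of 2 clips, 16 frames each, 112x112 RGB
model = CNNLSTMClassifier(num_classes=10)
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)   # torch.Size([2, 10])
```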
Original language | English |
---|---|
Title of host publication | 2019 The 5th International Conference on Control, Automation and Robotics (ICCAR 2019) |
Place of Publication | Piscataway, N.J. |
Publisher | IEEE |
Pages | 550-554 |
Number of pages | 5 |
ISBN (Print) | 9781728133256, 9781728133263 |
DOIs | |
Publication status | Published - 29 Aug 2019 |
Event | 2019 The 5th International Conference on Control, Automation and Robotics, Park Plaza Beijing Science Park, 25 Zhichun Road, Haidian, Beijing, China. Duration: 19 Apr 2019 → 22 Apr 2019. http://www.iccar.org/ |
Conference
Conference | 2019 The 5th International Conference on Control, Automation and Robotics |
---|---|
Abbreviated title | ICCAR 2019 |
Country/Territory | China |
City | Beijing |
Period | 19/04/19 → 22/04/19 |
Internet address | http://www.iccar.org/ |
Keywords
- CNN
- LSTM
- 3D-Net
- DenseNet
- 2D layer
- video analysis
- convolutional network
- convolutional neural network (CNN)
- feature maps