A new cognitive temporal-spatial visual attention model for video saliency detection

Erfu Yang, Zhengzheng Tu, Bin Luo, Amir Hussain

Research output: Contribution to conference › Speech

Abstract

Human vision has the natural cognitive ability to focus on salient objects or areas when watching static or dynamic scenes. Whilst research in image saliency has long been popular, the challenging area of video saliency has been gaining increasing interest as autonomous and cognitive vision techniques continue to advance. In this talk, a new cognitive temporal-spatial visual attention model is presented for video saliency detection. It extends the popular graph-based visual saliency (GBVS) model, which adopts a ‘bottom-up’ visual attention mechanism. The new model detects a salient motion map that can be combined with the static feature maps of the GBVS model. Our proposed model is inspired, firstly, by the observation that independent components of optical flow are used for motion understanding in the human brain; in the light of this, we employ robust independent component analysis (robust ICA) to separate salient foreground optical flow from the relatively static background. A second key feature of the proposed model is that the motion saliency map is calculated from the foreground optical flow vector field together with mean shift segmentation. Finally, the salient motion map is normalized and fused with the static maps through a linear combination. Preliminary experiments demonstrate that the spatio-temporal saliency map produced by the new cognitive visual attention model highlights salient foreground moving objects effectively, even in complex outdoor scenes with dynamic backgrounds or bad weather. The proposed model could be further exploited for autonomous robotic applications.
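The final stages of the pipeline described above — normalizing the motion saliency map and fusing it linearly with the static feature maps — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the robust ICA separation and mean shift segmentation steps are omitted, the motion map here is derived simply from optical flow magnitudes, and the function names and `motion_weight` parameter are hypothetical.

```python
import numpy as np

def normalize_map(m):
    # Min-max normalize a saliency map to the range [0, 1].
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def motion_saliency(flow_u, flow_v):
    # Stand-in motion map: magnitude of the (foreground) optical flow field.
    # In the talk's model this would follow robust ICA separation and
    # mean shift segmentation; here we just use raw flow magnitude.
    return normalize_map(np.hypot(flow_u, flow_v))

def fuse_saliency(static_maps, motion_map, motion_weight=0.5):
    # Linear combination of averaged static feature maps (as in GBVS)
    # with the normalized motion saliency map.
    static = sum(normalize_map(m) for m in static_maps) / len(static_maps)
    return (1 - motion_weight) * static + motion_weight * normalize_map(motion_map)

# Toy example: two random static maps and a synthetic flow field in which
# only a small central region is moving.
h, w = 4, 4
static_maps = [np.random.rand(h, w), np.random.rand(h, w)]
u = np.zeros((h, w)); v = np.zeros((h, w))
u[1:3, 1:3] = 2.0; v[1:3, 1:3] = 1.0   # moving object region
motion_map = motion_saliency(u, v)
fused = fuse_saliency(static_maps, motion_map, motion_weight=0.6)
```

With the motion weight above 0.5, the fused map emphasizes the moving region even when the static maps assign it no special prominence, which mirrors the behavior the abstract reports for dynamic scenes.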

Acknowledgements:
This research is supported by The Royal Society of Edinburgh (RSE) and The National Natural Science Foundation of China (NNSFC) under the RSE-NNSFC joint project (2012-2014) [grant number 61211130309] with Anhui University, China, and the “Sino-UK Higher Education Research Partnership for PhD Studies” joint project (2013-2015) funded by the British Council China and The China Scholarship Council (CSC). Amir Hussain and Erfu Yang are also funded, in part, by the UK Engineering and Physical Sciences Research Council (EPSRC) [grant number EP/I009310/1], and the RSE-NNSFC joint project (2012-2014) [grant number 61211130210] with Beihang University, China.
Original language: English
Pages: 14-14
Number of pages: 1
Publication status: Unpublished - 31 May 2015
Event: Sixth China-Scotland SIPRA Workshop on Recent Advances in Signal and Image Processing - University of Stirling, Stirling, United Kingdom
Duration: 31 May 2015 - 1 Jun 2015

Workshop

Workshop: Sixth China-Scotland SIPRA Workshop on Recent Advances in Signal and Image Processing
Country: United Kingdom
City: Stirling
Period: 31/05/15 - 1/06/15

Keywords

  • visual attention model
  • computer vision
  • image processing
  • cognitive computation
  • optical flows
  • independent component analysis


Cite this

    Yang, E., Tu, Z., Luo, B., & Hussain, A. (2015). A new cognitive temporal-spatial visual attention model for video saliency detection. 14-14. Sixth China-Scotland SIPRA Workshop on Recent Advances in Signal and Image Processing, Stirling, United Kingdom.