Weakly supervised deep semantic segmentation using CNN and ELM with semantic candidate regions

Xinying Xu, Guiqing Li, Gang Xie, Jinchang Ren, Xinlin Xie

Research output: Contribution to journal › Article › peer-review



Semantic segmentation aims to assign a semantic label to every pixel in an image. In the fully supervised setting, this is achieved by a segmentation model trained on pixel-level annotations; producing such annotations, however, is expensive and time-consuming. To reduce this cost, the paper proposes a semantic-candidate-region-trained extreme learning machine (ELM) method that uses only image-level labels to recover pixel-level labels. The pixel-labelling problem is cast as a semantic inference problem over candidate regions. Specifically, each image is first segmented into a set of superpixels, which are then automatically merged into candidate regions according to the number of image-level labels. The semantic label of each candidate region is inferred from a neighborhood rough set that models the relationship between regions and semantic labels. Finally, an ELM is trained on the candidate regions with inferred labels and used to classify the candidate regions of test images. The method is evaluated on the MSRC and PASCAL VOC 2012 datasets, both widely used in semantic segmentation, and the experimental results show that it outperforms several state-of-the-art approaches to deep semantic segmentation.
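To illustrate the ELM component of the pipeline, the following is a minimal sketch of an extreme learning machine classifier, assuming NumPy. The class name, feature dimensions, and the toy two-cluster data (standing in for candidate-region features) are hypothetical illustrations, not the authors' implementation; the defining property shown is that the random hidden layer stays fixed and only the output weights are solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Minimal extreme learning machine (illustrative sketch).

    The hidden-layer weights are drawn at random and never trained;
    only the output weights `beta` are fitted, via a least-squares
    solve using the Moore-Penrose pseudoinverse.
    """

    def __init__(self, n_features, n_hidden, n_classes):
        self.W = rng.standard_normal((n_features, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.n_classes = n_classes
        self.beta = None

    def _hidden(self, X):
        # Random nonlinear feature map (tanh activation).
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]       # one-hot targets
        self.beta = np.linalg.pinv(H) @ T   # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage: two well-separated Gaussian clusters as stand-ins for
# candidate-region feature vectors with inferred labels.
X0 = rng.standard_normal((50, 8)) - 2.0
X1 = rng.standard_normal((50, 8)) + 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

elm = ELM(n_features=8, n_hidden=40, n_classes=2).fit(X, y)
acc = (elm.predict(X) == y).mean()
```

Because the hidden layer is fixed, training reduces to one linear solve, which is what makes the ELM fast to retrain on newly inferred candidate-region labels.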

Original language: English
Article number: 9180391
Number of pages: 12
Publication status: Published - 14 Mar 2019


  • semantic segmentation
  • image semantic segmentation
  • image detection


