TY - JOUR
T1 - Text to realistic image generation with attentional concatenation generative adversarial networks
AU - Li, Linyan
AU - Sun, Yu
AU - Hu, Fuyuan
AU - Zhou, Tao
AU - Xi, Xuefeng
AU - Ren, Jinchang
PY - 2020/10/28
Y1 - 2020/10/28
N2 - In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) that aims to generate 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and, at the same time, feed the outputs and word vectors from the previous layer as inputs to the next layer, generating high-resolution images with photo-realistic details. Second, we introduce the deep attentional multimodal similarity model into the network and match word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, the network can attend to fine-grained word-level semantic information. Finally, a diversity measure is added to the discriminator, which gives the generator more diverse gradient directions and improves the diversity of the generated samples. Experimental results show that the Inception Scores of the proposed model on the CUB and Oxford-102 datasets reach 4.48 and 4.16, improvements of 2.75% and 6.42% over the Attentional Generative Adversarial Network (AttnGAN). The ACGAN model thus performs better at text-to-image generation, and its generated images are closer to real images.
AB - In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) that aims to generate 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and, at the same time, feed the outputs and word vectors from the previous layer as inputs to the next layer, generating high-resolution images with photo-realistic details. Second, we introduce the deep attentional multimodal similarity model into the network and match word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, the network can attend to fine-grained word-level semantic information. Finally, a diversity measure is added to the discriminator, which gives the generator more diverse gradient directions and improves the diversity of the generated samples. Experimental results show that the Inception Scores of the proposed model on the CUB and Oxford-102 datasets reach 4.48 and 4.16, improvements of 2.75% and 6.42% over the Attentional Generative Adversarial Network (AttnGAN). The ACGAN model thus performs better at text-to-image generation, and its generated images are closer to real images.
KW - text
KW - image generation
KW - attentional
KW - concatenation
KW - generative adversarial networks (GANs)
UR - http://www.scopus.com/inward/record.url?scp=85096024300&partnerID=8YFLogxK
U2 - 10.1155/2020/6452536
DO - 10.1155/2020/6452536
M3 - Article
AN - SCOPUS:85096024300
SN - 1026-0226
VL - 2020
JO - Discrete Dynamics in Nature and Society
JF - Discrete Dynamics in Nature and Society
M1 - 6452536
ER -