Example-Guided Style-Consistent Image Synthesis from Semantic Labeling

Miao Wang¹   Guo-Ye Yang²   Ruilong Li²   Run-Ze Liang²
Song-Hai Zhang²   Peter M. Hall³   Shi-Min Hu²
¹Beihang University   ²Tsinghua University   ³University of Bath
CVPR 2019

Abstract

Example-guided image synthesis aims to synthesize an image from a semantic label map and an exemplary image indicating style. We use the term “style” in this problem to refer to implicit characteristics of images: in portraits, “style” includes gender, racial identity, age, and hairstyle; in full-body pictures it includes clothing; in street scenes it refers to weather, time of day, and the like. A semantic label map in these cases indicates facial expression, full-body pose, or scene segmentation. We propose a solution to the example-guided image synthesis problem using conditional generative adversarial networks with style consistency. Our key contributions are (i) a novel style consistency discriminator that determines whether a pair of images is consistent in style; (ii) an adaptive semantic consistency loss; and (iii) a training-data sampling strategy for synthesizing results that are style-consistent with the exemplar. We demonstrate the effectiveness of our method on face, dance, and street-view synthesis tasks.
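To make contribution (i) concrete: a style consistency discriminator scores a pair of images jointly rather than judging each image alone. The following PyTorch sketch illustrates one way such a pair-wise discriminator can be built; the class name, layer widths, and PatchGAN-style architecture are our own assumptions for illustration, not the authors' released implementation.

import torch
import torch.nn as nn

class StyleConsistencyDiscriminator(nn.Module):
    """Scores whether a pair of images is consistent in style.

    The two images are concatenated along the channel axis, so the
    network sees them jointly; a PatchGAN-style stack of strided
    convolutions maps the 6-channel input to patch-wise scores.
    """
    def __init__(self, in_channels: int = 3, base_width: int = 64):
        super().__init__()
        widths = [in_channels * 2, base_width, base_width * 2, base_width * 4]
        layers = []
        for c_in, c_out in zip(widths[:-1], widths[1:]):
            layers += [
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        layers.append(nn.Conv2d(widths[-1], 1, kernel_size=4, stride=1, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # During training, positive pairs would share a style (e.g., the
        # exemplar and a same-style ground-truth image), while negative
        # pairs would mix styles; the exact sampling is the paper's (iii).
        return self.net(torch.cat([img_a, img_b], dim=1))

if __name__ == "__main__":
    d = StyleConsistencyDiscriminator()
    a = torch.randn(1, 3, 256, 256)
    b = torch.randn(1, 3, 256, 256)
    print(d(a, b).shape)  # patch-wise scores, here (1, 1, 31, 31)

Trained with real/fake labels on such pairs, this kind of discriminator pushes the generator toward outputs whose style matches the exemplar, which is the role the paper assigns to its style consistency discriminator.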


Citation

@InProceedings{pix2pixSC2019,
  author    = {Wang, Miao and Yang, Guo-Ye and Li, Ruilong and Liang, Run-Ze and Zhang, Song-Hai and Hall, Peter M. and Hu, Shi-Min},
  title     = {Example-Guided Style-Consistent Image Synthesis from Semantic Labeling},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}