pp. 515–522
S&M 2824, Research Paper of Special Issue
https://doi.org/10.18494/SAM3493
Published: February 14, 2022

Image-to-image Translation via Contour-consistency Networks

Hsiang-Ying Wang, Hsin-Chun Lin, Chih-Hsien Hsia, Natnuntnita Siriphockpirom, Hsien-I Lin, and Yung-Yao Chen

(Received June 30, 2021; Accepted October 6, 2021)

Keywords: image-to-image translation, contour-consistency networks, inconsistency problem, attention feature map
In this paper, a novel framework for image-to-image translation is proposed in which contour-consistency networks are used to solve the problem of inconsistency between the contours of the generated and original images. The objective of this study was to address the lack of an adequate training set. At the generator end, the original image is downsampled by an encoder to obtain the encoder feature map, which the attention module then converts into an attention feature map. Using the attention feature map, the decoder can ascertain where more conversion is required. The mechanism at the discriminator end is similar: the input image is downsampled by an encoder to obtain the encoder feature map, which is then converted into an attention feature map. Finally, the classifier labels the image as real or fake. Experimental results demonstrate the effectiveness of the proposed method.
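The generator and discriminator described above both follow an encoder → attention module → head pattern. The following is a minimal sketch of that structure, assuming PyTorch; the module names, channel widths, and the sigmoid-gated form of the attention module are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed, not the authors' code) of the encoder/attention/decoder
# layout described in the abstract. Channel sizes and attention form are illustrative.
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Converts an encoder feature map into an attention feature map that
    weights the spatial locations where more conversion is required (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):
        attn = torch.sigmoid(self.score(feat))  # per-location weights in [0, 1]
        return feat * attn                      # attention feature map

class Generator(nn.Module):
    """Encoder -> attention module -> decoder, as described in the abstract."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.encoder = nn.Sequential(  # downsamples the input image
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.attention = AttentionModule(base * 2)
        self.decoder = nn.Sequential(  # upsamples back to image resolution
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        feat = self.encoder(x)       # encoder feature map
        feat = self.attention(feat)  # attention feature map
        return self.decoder(feat)    # translated image

class Discriminator(nn.Module):
    """Same encoder/attention pattern, followed by a real-vs-fake classifier."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.attention = AttentionModule(base * 2)
        self.classifier = nn.Conv2d(base * 2, 1, kernel_size=1)  # patch-wise real/fake logits

    def forward(self, x):
        return self.classifier(self.attention(self.encoder(x)))
```

Under these assumptions, the discriminator mirrors the generator's encoder and attention stages and differs only in its head, which outputs real/fake logits instead of a reconstructed image.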
Corresponding authors: Chih-Hsien Hsia, Yung-Yao Chen

This work is licensed under a Creative Commons Attribution 4.0 International License.

Cite this article: Hsiang-Ying Wang, Hsin-Chun Lin, Chih-Hsien Hsia, Natnuntnita Siriphockpirom, Hsien-I Lin, and Yung-Yao Chen, Image-to-image Translation via Contour-consistency Networks, Sens. Mater., Vol. 34, No. 2, 2022, pp. 515–522.