Keywords: image-to-image translation, deep neural networks, generative adversarial networks, apparel items
This paper deals with image-to-image translation of apparel items. Such images are difficult to translate because the items are photographed in various configurations: laid flat, placed on a mannequin, and so on. We investigate and improve on the previous work known as ‘pix2pix’, which is based on deep neural networks, in particular the deep convolutional generative adversarial network (DCGAN). We propose a new two-stage procedure. Experiments showed that our proposed method outperforms the previous work when evaluated with the structural similarity (SSIM) index. Moreover, visual inspection confirmed that it generates item details (zippers, buttons) and patterns (dots). This is important because a faulty generated image of an item that omits its buttons would look completely different from the original item image.
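To illustrate the evaluation metric mentioned above, here is a minimal sketch of SSIM computed over a whole image in a single window. This simplified global form is for illustration only; the paper presumably uses the standard windowed SSIM, and the function name and constants (the usual K1=0.01, K2=0.03) are assumptions, not the authors' code.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM over the whole image (illustrative sketch).

    Standard SSIM averages this statistic over local sliding windows;
    here we use global means/variances for brevity.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constant for luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizing constant for contrast term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
print(global_ssim(img, img))  # identical images score 1.0
```

A score of 1.0 indicates identical images; lower values indicate structural dissimilarity, which is why SSIM is a natural choice for comparing generated item images against their originals.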