3:40 PM - 4:00 PM
[1S4-IS-1-05] Customizable text-based visual content creation with self-supervised learning
Regular
Keywords: Text-to-image, Interface, Self-supervised learning
AI generation of images from textual descriptions has shown advanced and attractive capabilities. However, commonly trained machine-learning models and AI-based systems can fail to produce satisfying results for customized usage, owing to a deficient understanding of textual expressions or the low customizability of trained text-to-image models. We therefore assist users in creating flexible and diverse visual content from textual descriptions. In modeling, we synthesize images by capturing word-visual co-occurrence with a Transformer model and decoding visual tokens. To improve visual and textual representations and their relevance with greater diversity, we apply contrastive learning to texts, images, or text-image pairs. In experiments on a dataset of birds, we showed that rendering quality requires neural networks of a certain scale, as well as a training process with fine-tuning that applies relatively low learning rates until the end of training. We further showed that contrastive learning can improve visual and textual representations and their relevance.
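The contrastive learning on text-image pairs mentioned in the abstract can be illustrated with a symmetric InfoNCE-style loss, where matched pairs in a batch are positives and all other pairings are negatives. The sketch below is a minimal, hypothetical NumPy implementation (the function name, temperature value, and embedding shapes are assumptions, not taken from the paper):

```python
import numpy as np

def info_nce_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE loss over matched text-image pairs.

    Row i of text_emb and image_emb is treated as a positive pair;
    every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product becomes cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature            # (batch, batch) similarity matrix
    labels = np.arange(len(logits))           # positives lie on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the text-to-image and image-to-text directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each text embedding toward its paired image embedding and pushes it away from the other images in the batch, which is one common way to tighten the relevance between textual and visual representations.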