4:10 PM - 4:30 PM
[3Q5-IS-2b-03] Generative Image Synthesis as a Substitute for Real Images in Pre-training of Vision Transformers
Keywords: Stable Diffusion, Vision Transformer, Self-supervised Learning
Gathering data from the real world involves time-consuming web scraping, data cleaning, and labelling. To alleviate these costly tasks, this paper proposes using Stable Diffusion to rapidly synthesize images from text prompts, eliminating the need for manual data collection and mitigating the risks of bias and mislabelling. Through extensive experimentation with a small-scale vision transformer across four downstream classification tasks, our study compares models pre-trained on conventional datasets, datasets enriched with synthetic images, and entirely synthetic datasets. The outcomes show that images synthesized by Stable Diffusion yield consistent model generalization and accuracy. Beyond the immediate benefit of fast dataset creation, our approach offers a robust way to bolster the performance of computer vision models, and the findings highlight the potential of generative image synthesis as a new paradigm for pre-training in computer vision.
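The core idea of replacing collected data with prompt-synthesized images can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the class names, prompt templates, and the `build_prompts` helper are assumptions introduced here for clarity.

```python
from itertools import product

def build_prompts(class_names, templates):
    """Expand class labels into text prompts for an image-synthesis model.

    Each (template, class) pair yields one prompt; the class name doubles
    as the classification label, so no manual annotation step is needed.
    """
    return [(template.format(name), name)
            for template, name in product(templates, class_names)]

# Hypothetical classes and templates; the paper does not specify its prompt design.
classes = ["golden retriever", "tabby cat"]
templates = ["a photo of a {}", "a close-up photograph of a {}"]

prompt_label_pairs = build_prompts(classes, templates)
for prompt, label in prompt_label_pairs:
    print(f"{label}: {prompt}")

# Each prompt would then be fed to a text-to-image pipeline
# (e.g. Hugging Face diffusers' StableDiffusionPipeline) to render
# a training image that is labelled by construction.
```

Because the label is embedded in the prompt itself, the resulting dataset is labelled at generation time, which is the property the abstract leverages to avoid manual annotation.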