16:10 〜 16:30
[3Q5-IS-2b-03] Generative Image Synthesis as a Substitute for Real Images in Pre-training of Vision Transformers
Keywords: Stable Diffusion, Vision Transformer, Self-supervised Learning
Gathering data from the real world involves time-consuming web scraping, data cleaning, and labelling. To alleviate these costly tasks, this paper proposes using Stable Diffusion to synthesize images efficiently from text prompts, eliminating the need for manual data collection and mitigating the risks of bias and mislabelling. Through extensive experimentation with a small-scale Vision Transformer across four downstream classification tasks, our study compares models pre-trained on conventional datasets, on datasets enriched with synthetic images, and on entirely synthetic datasets. The outcomes underscore the efficacy of Stable Diffusion-synthesized images in yielding consistent model generalization and accuracy. Beyond the immediate benefit of fast dataset creation, our approach offers a robust means of bolstering the performance of computer vision models. The findings highlight the potential of generative image synthesis as a new paradigm for advancing machine learning in computer vision.
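As a rough sketch of the kind of pipeline the abstract describes, synthetic pre-training images could be generated from class-conditioned text prompts with the Hugging Face diffusers library. The class names, prompt template, model identifier, and file layout below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch: building a labelled synthetic dataset from text
# prompts with Stable Diffusion. Class names, prompt template, and the
# model id are assumptions for illustration only.

def build_prompts(class_names, per_class):
    """Return (prompt, label) pairs, one pair per synthetic image."""
    template = "a photo of a {}"
    return [
        (template.format(name), idx)
        for idx, name in enumerate(class_names)
        for _ in range(per_class)
    ]

def synthesize(prompts, out_dir="synthetic"):
    """Generate one image per prompt and save it under its class label."""
    # Heavy dependencies are imported lazily so the prompt logic above
    # stays standard-library only.
    import os
    import torch
    from diffusers import StableDiffusionPipeline  # pip install diffusers

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    os.makedirs(out_dir, exist_ok=True)
    for i, (prompt, label) in enumerate(prompts):
        image = pipe(prompt).images[0]
        image.save(f"{out_dir}/{label:03d}_{i:05d}.png")
```

The resulting directory of label-tagged images could then feed a standard ViT pre-training loop in place of, or mixed with, a scraped dataset.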