[2Win5-02] Improving Performance of Test-Time Adaptation under Distribution Shifts
Keywords: Distribution shift, Test-Time Adaptation, AI quality
Deep learning models achieve strong generalization performance but often degrade under distribution shifts between training and test data. Test-time adaptation (TTA) has emerged as a promising approach to mitigate this issue by adapting models during inference using unlabeled test data. A widely used TTA method, Test Entropy Minimization (Tent), improves robustness by minimizing the entropy of output predictions. However, its adaptation is limited to updating only the affine parameters of batch normalization, restricting its ability to handle complex distribution shifts. To address this limitation, we propose integrating Funnel Activation (FReLU), an activation function with an adjustable receptive field, into Tent to enhance its adaptability. Experimental results demonstrate that our method outperforms conventional approaches, achieving improved performance under distribution shifts.
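To make the approach concrete, the sketch below (not the authors' code) shows a Tent-style adaptation step in PyTorch: the entropy of the model's softmax predictions on an unlabeled test batch is minimized while only the affine parameters of batch normalization layers are updated, alongside a minimal FReLU module whose depthwise convolution gives the activation an adjustable spatial receptive field. Layer choices, hyperparameters, and how FReLU is wired into the adapted network are assumptions for illustration.

```python
import torch
import torch.nn as nn


class FReLU(nn.Module):
    """Funnel activation: y = max(x, T(x)), where T is a depthwise conv + BN
    that gives the activation a learnable spatial receptive field."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.max(x, self.bn(self.conv(x)))


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions (Tent's objective)."""
    probs = logits.softmax(dim=1)
    return -(probs * logits.log_softmax(dim=1)).sum(dim=1).mean()


def collect_bn_affine_params(model: nn.Module):
    """Select only the BatchNorm affine (weight/bias) parameters for adaptation."""
    params = []
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            params += [p for p in (module.weight, module.bias) if p is not None]
    return params


@torch.enable_grad()
def tent_adapt_step(model: nn.Module, x: torch.Tensor,
                    optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One test-time adaptation step on an unlabeled batch x."""
    logits = model(x)
    loss = entropy_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```

A typical usage would build the optimizer over `collect_bn_affine_params(model)` (e.g. `torch.optim.Adam(..., lr=1e-3)`) and call `tent_adapt_step` on each incoming test batch; in a FReLU-augmented variant one might additionally include the FReLU convolution parameters in the adapted set, though the exact parameter selection used in this work is not specified in the abstract.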