[3Yin2-15] Analysis of feature distributions using KLD between inputs and intermediate layer outputs in revetment images
Keywords: KL divergence, Batch Normalization, standardization
Scaling input data, for example by standardization or normalization, is a common technique in machine learning. Scaling reduces differences in the distribution of the input data. Batch Normalization serves to scale the distribution of features after convolution. In general, the output of a Batch Normalization layer is transformed non-linearly by the activation function. Therefore, the distribution of features passing through the Batch Normalization layer affects how features propagate. The task addressed in this study is crack segmentation of revetments. The output of the Batch Normalization layer is evaluated using the KL divergence, a measure of the difference between probability distributions. We then analyze the correlation between the feature distribution of the input image and the KL divergence.
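As a rough illustration of the kind of measurement described above, the sketch below estimates the KL divergence between the value distribution of an input image and the output distribution of a Batch Normalization layer via shared-bin histograms. This is not the authors' implementation: the model (`fcn_resnet50`), the choice of the first `BatchNorm2d` layer, the bin count, and the dummy input are all assumptions made for illustration.

```python
# Minimal sketch (assumed setup, not the paper's code): compare the distribution
# of an input image with the distribution of a Batch Normalization layer's
# output using the KL divergence estimated from histograms.
import numpy as np
import torch
import torchvision.models as models

def kl_divergence(p_samples, q_samples, bins=64):
    """KL(P || Q) between two sample sets, using histograms over shared bin edges."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p_hist, edges = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q_hist, _ = np.histogram(q_samples, bins=edges)
    eps = 1e-10  # avoid log(0) and division by zero
    p = p_hist / p_hist.sum() + eps
    q = q_hist / q_hist.sum() + eps
    return float(np.sum(p * np.log(p / q)))

# Hypothetical segmentation backbone; any network containing BatchNorm2d layers works.
model = models.segmentation.fcn_resnet50(weights=None).eval()

# Capture the output of the first BatchNorm layer with a forward hook.
captured = {}
bn_layer = next(m for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d))
bn_layer.register_forward_hook(lambda m, i, o: captured.update(bn_out=o.detach()))

# Dummy standardized input standing in for a revetment image.
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    model(x)

kld = kl_divergence(x.numpy().ravel(), captured["bn_out"].numpy().ravel())
print(f"KL(input || BN output) = {kld:.4f}")
```

In a study like this, the same measurement would presumably be repeated over a dataset of revetment images so that the per-image KL divergence can be correlated with properties of each input's feature distribution.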