[4Xin2-75] Extension of Contrastive Learning in Dialogue Systems: Indirect Adjustment Method for Negative Example Generation Probability
Keywords: dialogue system, contrastive learning
Generating appropriate responses and suppressing inappropriate ones are critical challenges in dialogue system development. This study proposes a method that extends the conventional contrastive learning framework by indirectly adjusting the generation probability of negative examples. By using a dedicated Bad Token, the method effectively suppresses the generation of inappropriate responses. Unlike conventional strategies that directly minimize the likelihood of negative examples, this indirect approach offers a new way of influencing their generation probability. Experimental results show that the method is as effective as conventional contrastive learning, opening new prospects for negative example control in dialogue systems.
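The abstract does not specify the training objective, so the following is only a minimal sketch of one plausible reading of the idea: rather than directly penalizing the tokens of a negative example (as in unlikelihood-style contrastive objectives), the model is trained to emit a special Bad Token on negative responses, so that raising the probability of that token indirectly lowers the probability of the inappropriate tokens through the softmax normalization. The function name, the HF-style model interface, and the `[BAD]` token id are all hypothetical assumptions, not the authors' implementation.

```python
# Hedged sketch: indirect suppression of negative examples via a Bad Token.
# Assumes a causal LM whose forward pass returns `.logits`, and batches with
# already-shifted `labels` where non-target positions are set to -100.
import torch
import torch.nn.functional as F

def contrastive_bad_token_loss(model, pos_batch, neg_batch, bad_token_id):
    """pos_batch / neg_batch: dicts with 'input_ids' and 'labels'.
    bad_token_id: id of a special [BAD] token added to the vocabulary
    (hypothetical; not specified in the paper)."""
    # Positive examples: ordinary maximum-likelihood training.
    pos_logits = model(pos_batch["input_ids"]).logits
    pos_loss = F.cross_entropy(
        pos_logits.view(-1, pos_logits.size(-1)),
        pos_batch["labels"].view(-1),
        ignore_index=-100,
    )

    # Negative examples: replace every response-token label with [BAD].
    # Maximizing P([BAD]) at these positions indirectly reduces the
    # probability of the original inappropriate tokens, instead of
    # minimizing their likelihood directly.
    neg_labels = neg_batch["labels"].clone()
    neg_labels[neg_labels != -100] = bad_token_id
    neg_logits = model(neg_batch["input_ids"]).logits
    neg_loss = F.cross_entropy(
        neg_logits.view(-1, neg_logits.size(-1)),
        neg_labels.view(-1),
        ignore_index=-100,
    )
    return pos_loss + neg_loss
```

At inference time, under this reading, responses whose decoding assigns high probability to the Bad Token could simply be filtered or re-ranked; that step is likewise an assumption rather than a detail stated in the abstract.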