10:40 AM - 11:00 AM
[1D1-GS-2-03] Integrating Fuzzy Control and Reinforcement Learning
Learning membership functions and rule weights
[[Online]]
Keywords: Fuzzy control, Reinforcement learning, Policy gradient algorithm, Neural network model
One of the recent issues in AI is the black-box nature of machine learning inference. An effective approach to this problem is to combine fuzzy inference, which is based on rules that reflect human subjectivity, with reinforcement learning. Igarashi et al. proposed a policy gradient method that uses fuzzy control rules as policies. Within their framework, we previously approximated the membership functions with sigmoid functions and learned the sigmoid parameters and rule weights on a car speed-control problem, confirming that appropriate parameter values were obtained. However, even in that case, the functional form of the membership function was designed by a human. We therefore approximate the membership functions with a neural network to examine whether their shape can be learned from scratch. In our learning experiments, starting from randomly initialized parameters, we obtained function shapes that closely resemble the human-designed membership functions. This suggests that the proposed learning method can acquire human fuzzy concepts from scratch.
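To illustrate the two membership-function parameterizations mentioned above, the following is a minimal Python sketch, not the authors' implementation: the sigmoid parameters (slope a, center c), the small network architecture, the rule weights, and the weighted-average defuzzification step are all illustrative assumptions.

import numpy as np

# Sigmoid-parameterized membership function: mu(x) = 1 / (1 + exp(-a * (x - c))).
# The slope a and center c are the learnable parameters (illustrative form).
def sigmoid_membership(x, a, c):
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

# Neural-network membership function: a small one-hidden-layer net mapping a
# scalar input to a degree in [0, 1], so the shape can be learned from scratch.
class NNMembership:
    def __init__(self, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(size=hidden)   # input-to-hidden weights
        self.b1 = rng.normal(size=hidden)   # hidden biases
        self.w2 = rng.normal(size=hidden)   # hidden-to-output weights
        self.b2 = 0.0

    def __call__(self, x):
        h = np.tanh(self.w1 * x + self.b1)                      # hidden layer
        return 1.0 / (1.0 + np.exp(-(self.w2 @ h + self.b2)))   # degree in [0, 1]

# Fuzzy-rule policy (hypothetical): rule firing strengths come from the
# membership degrees, are scaled by learnable rule weights, and the action is
# their weighted average. In the paper's setting, these parameters would be
# updated by a policy gradient method on the speed-control reward.
def fuzzy_policy(speed_error, memberships, rule_weights, rule_actions):
    degrees = np.array([mu(speed_error) for mu in memberships])
    firing = degrees * rule_weights
    return float(firing @ rule_actions / (firing.sum() + 1e-8))

if __name__ == "__main__":
    memberships = [lambda x: sigmoid_membership(x, a=2.0, c=-1.0),
                   NNMembership(seed=0)]
    rule_weights = np.array([0.7, 0.3])    # learnable rule weights
    rule_actions = np.array([-1.0, 1.0])   # e.g. decelerate / accelerate
    print(fuzzy_policy(0.5, memberships, rule_weights, rule_actions))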