Function Approximation for Reinforcement Learning using Fuzzy Clustering


The KIPS Transactions: Part B, Vol. 10, No. 6, pp. 587-592, Oct. 2003
DOI: 10.3745/KIPSTB.2003.10.6.587

Abstract

Many real-world control problems have continuous states and actions. When the state space is continuous, reinforcement learning problems involve a very large state space and suffer from the memory and time required to learn all individual state-action values. Such problems need function approximators that can infer an action for a new state from previously experienced states. We introduce Fuzzy Q-Map, a function approximator for one-step Q-learning based on fuzzy clustering. Fuzzy Q-Map groups similar states, then chooses an action and looks up a Q-value according to membership degree. The centroid and Q-value of the winner cluster are updated using the membership degree and the TD (temporal difference) error. We applied Fuzzy Q-Map to the mountain car problem and observed accelerated learning.
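The abstract's update rule can be illustrated with a minimal sketch. The cluster count, the Gaussian membership function, and all learning-rate values below are illustrative assumptions, not the paper's exact design; the sketch only shows the general idea of membership-weighted Q-value lookup and winner-cluster updates scaled by membership degree and TD error.

```python
import numpy as np

class FuzzyQMap:
    """Sketch of a fuzzy-clustering function approximator for one-step
    Q-learning. Hyperparameters and the Gaussian membership function are
    assumptions for illustration, not the paper's exact choices."""

    def __init__(self, n_clusters, state_dim, n_actions,
                 width=0.1, alpha=0.2, beta=0.05, gamma=0.99, seed=0):
        rng = np.random.default_rng(seed)
        # Cluster centroids in (normalized) state space, one Q row per cluster.
        self.centroids = rng.uniform(0.0, 1.0, size=(n_clusters, state_dim))
        self.q = np.zeros((n_clusters, n_actions))
        self.width = width    # Gaussian membership width (assumed)
        self.alpha = alpha    # Q-value learning rate (assumed)
        self.beta = beta      # centroid learning rate (assumed)
        self.gamma = gamma    # discount factor

    def membership(self, state):
        # Gaussian membership degree of `state` in each cluster,
        # normalized so the degrees sum to one.
        d2 = np.sum((self.centroids - state) ** 2, axis=1)
        m = np.exp(-d2 / (2.0 * self.width ** 2))
        return m / (m.sum() + 1e-12)

    def q_values(self, state):
        # Q-value of each action: membership-weighted mix of cluster Q-values.
        return self.membership(state) @ self.q

    def update(self, state, action, reward, next_state, done):
        m = self.membership(state)
        winner = int(np.argmax(m))
        target = reward if done else reward + self.gamma * self.q_values(next_state).max()
        td_error = target - self.q_values(state)[action]
        # Update the winner cluster's Q-value and centroid, each scaled
        # by its membership degree, as the abstract describes.
        self.q[winner, action] += self.alpha * m[winner] * td_error
        self.centroids[winner] += self.beta * m[winner] * (state - self.centroids[winner])
        return td_error
```

In a mountain car setting the two-dimensional state (position, velocity) would be normalized to the unit square before being passed to `membership`, and the greedy action would be `np.argmax(fq.q_values(state))`.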




Cite this article
[IEEE Style]
L. Y. A, J. G. Sug, J. T. Chung, "Function Approximation for Reinforcement Learning using Fuzzy Clustering," The KIPS Transactions: Part B, vol. 10, no. 6, pp. 587-592, 2003. DOI: 10.3745/KIPSTB.2003.10.6.587.

[ACM Style]
Lee Yeong A, Jeong Gyeong Sug, and Jeong Tae Chung. 2003. Function Approximation for Reinforcement Learning using Fuzzy Clustering. The KIPS Transactions: Part B, 10, 6, (2003), 587-592. DOI: 10.3745/KIPSTB.2003.10.6.587.