Bibliography | Akash, Ravi: Model-Based Reinforcement Learning under Sparse Rewards. University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Master's Thesis No. 120 (2023). 87 pages, English.
|
Abstract | Reinforcement Learning (RL) has seen significant advances over the last decade in simulated and controlled environments, achieving impressive results on difficult decision-making problems such as playing video games or controlling robot arms. However, most methods require many interactions with the system to achieve good performance, which can be costly and time-consuming, especially in industrial applications. Model-Based Reinforcement Learning (MBRL) promises to close this gap by leveraging learned environment models for data generation and/or planning while, at the same time, aiming for sample efficiency. Nevertheless, learning with sparse rewards remains a significant challenge in RL, and the sparsity of rewards must be addressed to enable efficient learning. This thesis studies individual components of MBRL algorithms under sparse-reward settings and investigates different design choices to measure their impact on learning efficiency. Suitable Integral Probability Metrics (IPMs) are introduced to characterize the model's reward and observation-space distributions during training. These design combinations are evaluated on continuous control tasks with established benchmarks.
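The IPM-based comparison mentioned in the abstract can be illustrated with a minimal sketch. The 1-Wasserstein distance is one common Integral Probability Metric; the reward samples below are hypothetical stand-ins (not from the thesis) for rewards drawn from the real environment and from a learned model.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical reward samples: from the true environment and from a learned model.
rng = np.random.default_rng(0)
env_rewards = rng.normal(loc=1.0, scale=0.5, size=1000)
model_rewards = rng.normal(loc=0.8, scale=0.6, size=1000)

# The 1-Wasserstein distance is an IPM: the supremum over 1-Lipschitz
# functions f of |E_p[f(x)] - E_q[f(x)]|. A small value indicates the
# model's reward distribution closely matches the environment's.
dist = wasserstein_distance(env_rewards, model_rewards)
print(f"1-Wasserstein distance between reward distributions: {dist:.3f}")
```

The same measurement can be applied per training epoch to track how the model's predicted distributions drift from the ground truth.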
|
Department(s) | University of Stuttgart, Institute of Artificial Intelligence, Machine Learning for Simulation Science
|
Supervisor(s) | Niepert, Prof. Mathias; Luis, Carlos E.; Berkenkamp, Dr. Felix |
Entry date | September 17, 2024 |