Bachelor Thesis BCLR-2023-41

Bibliography: Mayer, Paul: Distributed Deep Reinforcement Learning for Learn-to-optimize.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Bachelor Thesis No. 41 (2023).
47 pages, English.
Abstract

In the context of increasingly complex applications, e.g., robust performance tuning in Integrated Circuit Design, conventional optimization methods struggle to achieve satisfactory results within a limited time budget. Learning optimization algorithms therefore becomes increasingly attractive as a replacement for the established practice of hand-crafting or tweaking algorithms. Learned algorithms reduce the assumptions and expert knowledge required to create state-of-the-art solvers by lessening the need for hand-crafted heuristics and hyper-parameter tuning. First approaches based on Reinforcement Learning have shown great success in outperforming typical zeroth- and first-order optimization algorithms, especially with respect to generalization capabilities. However, training remains very time consuming. Particularly challenging is training models on functions with free parameters: changing these parameters (which could represent, e.g., conditions in a real-world application) changes the underlying objective function. Robust solutions therefore depend on thorough sampling, which tends to be the runtime bottleneck. In this thesis we identified the runtime bottleneck of the Reinforcement Learning algorithm and decreased runtime drastically by distributing data collection. Additionally, we studied the effects of combining sampling strategies with regard to the generalization capabilities of the learned algorithm.
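To illustrate the idea of distributing data collection, the following is a minimal sketch (not the thesis implementation): several worker processes sample task instances of a parameterized objective function, roll out an update policy on them in parallel, and return transitions for the learner to aggregate. The toy quadratic objective, the random placeholder policy, and all function names are illustrative assumptions.

```python
# Minimal sketch of parallel rollout collection for a learned optimizer
# trained with reinforcement learning. Not the thesis implementation:
# the quadratic objective and the random "policy" are placeholders.
import multiprocessing as mp
import random


def sample_objective(seed):
    """Draw the free parameter (here: the minimum of a toy quadratic) of one task instance."""
    rng = random.Random(seed)
    return rng.uniform(-5.0, 5.0)


def collect_rollout(seed, steps=20):
    """One worker: run the update policy on a sampled objective and
    return (objective value, action, reward) transitions."""
    rng = random.Random(seed)
    minimum = sample_objective(seed)
    x = rng.uniform(-10.0, 10.0)
    transitions = []
    for _ in range(steps):
        f_before = (x - minimum) ** 2
        action = rng.uniform(-1.0, 1.0)  # placeholder for the policy's proposed update step
        x += action
        f_after = (x - minimum) ** 2
        transitions.append((f_before, action, f_before - f_after))  # reward = improvement
    return transitions


if __name__ == "__main__":
    # Distribute data collection across worker processes; the learner only aggregates.
    with mp.Pool(processes=4) as pool:
        batches = pool.map(collect_rollout, range(16))
    data = [t for batch in batches for t in batch]
    print(f"collected {len(data)} transitions from {len(batches)} parallel rollouts")
```

In this sketch the expensive part, sampling objective instances and rolling out the policy, runs in parallel, while the policy update itself stays centralized; this mirrors the setting where data collection, not learning, is the runtime bottleneck.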

Department(s): University of Stuttgart, Institute of Parallel and Distributed Systems, Scientific Computing
Supervisor(s): Pflüger, Prof. Dirk; Domanski, Peter
Entry date: October 24, 2023