Bachelor Thesis BCLR-2021-27

Bibliography: Hubatscheck, Thomas: Distributed neural networks for continuous simulations on mobile devices.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Bachelor Thesis No. 27 (2021).
55 pages, English.
Abstract

Due to the increasing complexity of numerical simulations, their results are usually computed on a server with access to large computational resources. To allow real-time visualization for users in an augmented reality (AR) setting, these simulations should instead run on the mobile device itself, which requires a way to execute them on a resource-constrained device. The goal is to compute the simulation results with a surrogate model in the form of a neural network (NN) that complies with the latency and quality requirements of an accurate visualization of the results.

This thesis proposes a distributed network architecture: the interaction of an NN on the local device with an NN on a nearby server was simulated. LSTM layers and their suitability for a continuous setting were studied in order to choose the type of network that replaces the simulation. During execution, the mobile device was able to request accurate updates from the server. Two operators were derived by analyzing the behavior of received updates in input regions that are crucial for the mobile device: a decision operator determined the frequency of update requests, while a merging operator combined the outputs with respect to a predicted quality and the current delay of received updates. For the merging operator, the local results are decoupled from the execution and serve to adjust the received update. Different approaches to continue a delayed update with the corresponding local changes, so that it fits the current local step, are proposed and evaluated under different artificial connection delays and offloading settings.

Using LSTM NNs increased accuracy and led to a more stable execution compared to NNs without these layers. With updates every 10 steps and an assumed delay of 10 steps, the proposed merging methods decreased the overall mean absolute error (MAE) from the 5% of the purely local NN to 2%, an improvement of 60% over local execution without updates. The quality-sensitive merging operator also prevented a loss of quality under poor connection settings by switching to local-only execution when it detected that the quality of updates had decreased. The average time to produce a single output on the mobile device with the ability to request updates decreased by 63.5% compared to the average inference time of the LSTM NN.
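As an illustration of the mechanism the abstract describes, the following Python sketch buffers local NN outputs, continues a delayed server update with the local change accumulated since the update's step, and blends the result by a predicted quality, falling back to local-only execution when that quality drops. It is a minimal reading of the abstract, not the thesis's implementation: the class and method names, the fixed request interval, the delta-based continuation, and the linear quality weighting are all assumptions.

    from collections import deque

    class DelayCompensatingMerger:
        """Illustrative only: buffers local NN outputs so a delayed server
        update can be continued to the current step and blended in."""

        def __init__(self, history_len=32, quality_floor=0.5, interval=10):
            self.local_history = deque(maxlen=history_len)  # (step, output) pairs
            self.quality_floor = quality_floor  # below this, stay local-only
            self.interval = interval            # assumed fixed request period

        def record_local(self, step, output):
            # Local results are kept decoupled from execution so they can
            # later adjust a delayed update (as described in the abstract).
            self.local_history.append((step, output))

        def should_request(self, step):
            # Simplistic stand-in for the thesis's decision operator:
            # request a server update every `interval` local steps.
            return step % self.interval == 0

        def merge(self, current_step, update_step, server_output, quality):
            # Quality-sensitive fallback: ignore poor or stale updates.
            if quality < self.quality_floor:
                return self._local_at(current_step)
            # Continue the delayed update with the local change accumulated
            # between the update's step and now (one conceivable scheme).
            delta = self._local_at(current_step) - self._local_at(update_step)
            continued = server_output + delta
            # Linear quality-weighted blend of continued update and local output.
            return quality * continued + (1.0 - quality) * self._local_at(current_step)

        def _local_at(self, step):
            # Most recent buffered local output at or before `step`.
            for s, out in reversed(self.local_history):
                if s <= step:
                    return out
            raise KeyError(f"no local output buffered for step {step}")

Under this reading, the switch to local-only execution for bad connection settings falls out directly from the quality threshold.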

Full text and other links: Full text
Department(s): University of Stuttgart, Institute of Parallel and Distributed Systems, Distributed Systems
Supervisor(s): Rothermel, Prof. Kurt; Kässinger, Johannes
Entry date: July 27, 2021