Bachelor Thesis BCLR-2021-27

Bibliographic data
Hubatscheck, Thomas: Distributed neural networks for continuous simulations on mobile devices.
Universität Stuttgart, Faculty of Computer Science, Electrical Engineering and Information Technology, Bachelor Thesis No. 27 (2021).
55 pages, English.
Abstract

Due to the increasing complexity of numerical simulations, their results are usually computed on a server with access to large computational resources. To allow real-time visualization for users in an AR setting, these simulations shall run on the mobile device itself. Therefore, a way to enable their execution on a resource-constrained device is necessary. The goal is to compute the results of the simulation with a surrogate model in the form of a neural network (NN). The model has to comply with latency and quality requirements for an accurate visualization of results. This thesis proposes the use of a distributed network architecture. To this end, the interaction of an NN on the local device with an NN on a nearby server was simulated. LSTM layers and their suitability in a continuous setting were studied to choose the type of network that replaces the simulation. The mobile device was able to request accurate updates from the server during execution. Two operators were derived by analyzing the behavior of received updates in input regions that are crucial for the mobile device. A decision operator determined the frequency of update requests. The merging operator combined the outputs with respect to a predicted quality and the current delay of received updates. For the latter, the local results are decoupled from the execution and serve to adjust the received update. Different approaches to advance delayed updates by the corresponding local changes so that they fit the current local step are proposed and evaluated. For this, different artificial connection delays and offloading settings are considered. Using LSTM NNs increased the accuracy and led to a more stable execution compared to NNs without these layers. The proposed merging methods decreased the overall MAE of the local NN from 5% down to 2% with the help of updates every 10 steps, assuming a delay of 10 steps. This is an improvement of 60% compared to the local execution without updates. The quality-sensitive merging operator was also able to prevent a decrease in quality under bad connection settings by switching to local-only execution when it detected that the quality of updates decreased. The average time to produce a single output on the mobile device, with the ability to request updates, decreased by 63.5% compared to the average inference time of the LSTM NN.
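The merging idea summarized above can be illustrated with a short sketch. The following is a minimal, hypothetical example and not code from the thesis: it assumes scalar outputs, a simple quality-weighted blend, and a buffer of local per-step changes that is replayed to advance a delayed server update to the current step. The class and parameter names (DelayAwareMerger, quality_threshold) are invented for illustration only.

```python
"""Illustrative sketch (not from the thesis) of merging a delayed,
higher-quality server update with locally produced NN outputs."""

from collections import deque


class DelayAwareMerger:
    def __init__(self, history_len: int = 32, quality_threshold: float = 0.5):
        # Buffer of local per-step increments (the decoupled local results).
        self.local_deltas = deque(maxlen=history_len)
        self.quality_threshold = quality_threshold

    def record_local_step(self, previous_output: float, current_output: float) -> None:
        """Store the local change produced in one inference step."""
        self.local_deltas.append(current_output - previous_output)

    def merge(self, local_output: float, server_output: float,
              delay_steps: int, predicted_quality: float) -> float:
        """Combine the current local output with a server update that is
        `delay_steps` steps old, weighted by its predicted quality."""
        if predicted_quality < self.quality_threshold:
            # Low-quality update (e.g. bad connection): fall back to local-only execution.
            return local_output

        # Continue the delayed update with the local changes accumulated
        # since the step the server update refers to.
        recent = list(self.local_deltas)[-delay_steps:] if delay_steps else []
        advanced_server = server_output + sum(recent)

        # Blend local and advanced server results; this weighting is an assumption.
        w = predicted_quality
        return w * advanced_server + (1.0 - w) * local_output


if __name__ == "__main__":
    merger = DelayAwareMerger()
    outputs = [0.0]
    for step in range(1, 11):
        local = outputs[-1] + 0.1          # stand-in for a local LSTM prediction
        merger.record_local_step(outputs[-1], local)
        outputs.append(local)
    # A server update computed for step 5 arrives at step 10 (delay of 5 steps).
    merged = merger.merge(local_output=outputs[-1], server_output=0.48,
                          delay_steps=5, predicted_quality=0.8)
    print(f"merged output at step 10: {merged:.3f}")
```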

Full text and other links
Full text
Department(s): Universität Stuttgart, Institut für Parallele und Verteilte Systeme, Verteilte Systeme
Supervisors: Rothermel, Prof. Kurt; Kässinger, Johannes
Entry date: July 27, 2021