Bachelor's Thesis BCLR-2018-107

Bibliographic data
Tagscherer, Jan: A visual approach for probing learned models.
Universität Stuttgart, Fakultät Informatik, Elektrotechnik und Informationstechnik, Bachelor's Thesis No. 107 (2018).
107 pages, English.
Abstract

Deep learning models are complex neural networks that are able to accomplish a large range of tasks effectively, including machine translation, speech recognition, and image classification. However, recent research has shown that transformations of the input data can dramatically deteriorate the performance of these models. This effect is especially startling with adversarial perturbations, which aim to fool a deep neural network while being barely perceptible. The complexity of these networks makes it hard to understand where and why they fail. Previous work has attempted to provide insights into the inner workings of these models in various ways. A survey of these existing systems concludes that they fail to provide an integrated approach for probing how specific changes to the input data are represented within a trained model. This thesis introduces Advis, a visualization system for analyzing the impact of input data transformations on a model's performance and on its internal representations. For performance analysis, it displays various metrics of prediction quality and robustness using lists and a radar chart. An interactive confusion matrix supports pattern detection and input image selection. Insights into the impact of data distortions on internal representations can be gained through the combination of a color-coded computation graph and detailed activation visualizations. The system is based on a highly flexible architecture that enables users to adapt it to the specific requirements of their task. Three use cases demonstrate the usefulness of the system for probing and comparing the impact of input transformations on the performance metrics and internal representations of various networks. The insights gained through this system show that interactive visual approaches for understanding the effect of input perturbations on deep learning models are an area worth further investigation.

Full text and other links
Full text
Department(s): Universität Stuttgart, Institut für Visualisierung und Interaktive Systeme, Visualisierung und Interaktive Systeme
Supervisors: Ertl, Prof. Thomas; Han, Qi; Thom, Dr. Dennis
Submission date: 22 May 2019