Bachelor Thesis BCLR-2018-107

Bibliography: Tagscherer, Jan: A visual approach for probing learned models.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Bachelor Thesis No. 107 (2018).
107 pages, English.
Abstract

Deep learning models are complex neural networks that are able to accomplish a large range of tasks effectively, including machine translation, speech recognition, and image classification. However, recent research has shown that transformations of the input data can dramatically degrade the performance of these models. This effect is especially startling with adversarial perturbations, which aim to fool a deep neural network while being barely perceptible. The complexity of these networks makes it hard to understand where and why they fail. Previous work has attempted to provide insights into the inner workings of these models in various ways. A survey of these existing systems concludes that they fail to provide an integrated approach for probing how specific changes to the input data are represented within a trained model. This thesis introduces Advis, a visualization system for analyzing the impact of input data transformations on a model's performance and on its internal representations. For performance analysis, it displays various metrics of prediction quality and robustness using lists and a radar chart. An interactive confusion matrix supports pattern detection and input image selection. Insights into the impact of data distortions on internal representations can be gained through the combination of a color-coded computation graph and detailed activation visualizations. The system is based on a highly flexible architecture that enables users to adapt it to the specific requirements of their task. Three use cases demonstrate the usefulness of the system for probing and comparing the impact of input transformations on performance metrics and internal representations of various networks. The insights gained through this system show that interactive visual approaches for understanding the effect of input perturbations on deep learning models are an area worth further investigation.
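
The barely perceptible adversarial perturbations mentioned in the abstract can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), one common way to construct such inputs. This sketch is not taken from the thesis; the placeholder classifier, dummy image, and epsilon value are assumptions chosen purely for illustration.

    # Minimal FGSM sketch (illustrative only; model, input, and epsilon are placeholders).
    import torch
    import torch.nn as nn

    # Placeholder classifier: any differentiable image model would work here.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    image = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image
    label = torch.tensor([3])                              # dummy target class
    epsilon = 0.03                                         # perturbation budget

    # Compute the loss gradient with respect to the input pixels.
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()

    # FGSM step: move each pixel by epsilon in the direction that increases the loss.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

    print("max pixel change:", (adversarial - image.detach()).abs().max().item())

Even with a small budget such as epsilon = 0.03, a perturbation of this kind can flip the prediction of a trained classifier while remaining nearly invisible, which is the kind of effect the thesis sets out to make visible and explorable.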

Full text and other links: Volltext (full text)
Department(s): University of Stuttgart, Institute of Visualisation and Interactive Systems, Visualisation and Interactive Systems
Supervisor(s): Ertl, Prof. Thomas; Han, Qi; Thom, Dr. Dennis
Entry date: May 22, 2019