Master Thesis MSTR-2022-125

Bibliography
Gemander, Jan: Explanation-based Learning with Feedforward Neural Networks.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Master Thesis No. 125 (2022).
60 pages, English.
Abstract

The idea of this thesis is to adapt the method of Možina et al. [13], which exploits experts' arguments, to neural networks. We do this by incorporating the method of Ross et al. [17], which adds an explanatory loss that penalises attention on the wrong features. More specifically, we present a novel approach that, in addition to recognising positively influencing features, also distinguishes between negative and neutral ones. We further propose new loss variants that reinforce correct explanations. Additionally, we aim to improve results by using Shapley value contributions, which have many desirable properties. In doing so we guide the neural network towards learning the reasons for its predictions that were specified in the experts' arguments. This leads to more predictable explanations generated from our network, which do not rely on spurious dependencies.
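As an illustration of the kind of explanatory loss described above, the following is a minimal sketch in the spirit of Ross et al.'s input-gradient penalty, extended to a three-way expert annotation (positive / negative / neutral features). The function name `explanatory_loss`, the encoding of the annotation, and the exact form of the penalty terms are assumptions for illustration, not the thesis's actual formulation.

```python
import numpy as np

def explanatory_loss(input_grads, annotation, lam=1.0):
    """Illustrative explanation penalty for one example.

    input_grads: gradient of the predicted log-probability w.r.t. each
        input feature (a per-feature attribution of the model's attention).
    annotation: expert label per feature: +1 (positive influence expected),
        -1 (negative influence expected), 0 (neutral, should not matter).
    lam: weight of the explanation penalty relative to the prediction loss.
    """
    grads = np.asarray(input_grads, dtype=float)
    ann = np.asarray(annotation)
    # Penalise any attention on neutral features (squared gradient,
    # as in Ross et al.'s "right for the right reasons" penalty).
    neutral_pen = np.sum((ann == 0) * grads ** 2)
    # Penalise influence with the wrong sign: a feature annotated as
    # positive should not push the prediction down, and vice versa.
    wrong_sign = np.sum(np.clip(-ann * grads, 0.0, None) ** 2)
    return lam * (neutral_pen + wrong_sign)
```

In training, such a penalty would be added to the ordinary prediction loss, so the network is simultaneously pushed towards correct outputs and towards explanations consistent with the experts' arguments.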

Department(s): University of Stuttgart, Institute of Artificial Intelligence, Analytic Computing
Supervisor(s): Staab, Prof. Steffen; Mainprice, Dr. Jim; Wang, Zihao
Entry date: September 18, 2024
Publ.: Computer Science