Master Thesis MSTR-2022-67

Bibliography: Zilch, Markus: Evaluation of explainability in autoscaling frameworks.
University of Stuttgart, Faculty of Computer Science, Electrical Engineering, and Information Technology, Master Thesis No. 67 (2022).
50 pages, English.
Abstract

With the introduction of container- and microservice-based software architectures, operators face increasing workloads for monitoring and administering these systems. Operators increasingly need software support to keep up with the complexity of these architectures, and this growing reliance on support software makes explainability ever more important. The effect is amplified as the adoption of machine-learning-based autoscaling for container-based systems steadily grows. Explainability is vital for expert users to verify, correct, and reason about these new machine-learning approaches. However, many machine-learning-based scaling approaches do not offer suitable methods for explainability. As a result, expert users have difficulty identifying the reasons behind problems such as suboptimal resource utilization in these scaling approaches, which in turn prevents them from effectively improving resource utilization. This thesis aims to improve the tools developers and operators have to build and evaluate autoscaling frameworks with explainability. Our first objective is to build a base autoscaling framework for Kubernetes that can be easily enhanced with machine-learning and explainability approaches. Our second objective is to elicit requirements that capture the explainability needs of operators in an autoscaling environment. Our third objective is to build an evaluation scheme that helps operators evaluate autoscaling frameworks regarding their explainability capabilities. We re-implement the autoscaler “Custom Autoscaler” (CAUS) developed by Klinaku et al. in 2018. We also conduct an expert user survey with industry experts and researchers to gather the data needed for eliciting the requirements. Additionally, we use these requirements to build the evaluation scheme for autoscaling frameworks with explainability. Finally, we discuss the benefits and limitations of our research and how it can be extended.
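To make the idea of a base autoscaling framework for Kubernetes more concrete, the following is a minimal sketch of a CAUS-style reconciliation loop using the official Kubernetes Python client. It is not the thesis's implementation: the metric source (get_current_load), the per-pod capacity value, the buffer size, and the deployment name are illustrative assumptions; CAUS-style here means scaling to the measured demand plus a fixed spare-capacity buffer.

```python
# Minimal sketch of a CAUS-style autoscaling loop for Kubernetes.
# NOT the thesis's implementation: the metric source, capacity value,
# and buffer size below are illustrative assumptions.
import math
import time

from kubernetes import client, config


def get_current_load() -> float:
    """Hypothetical metric source, e.g. requests/s from a monitoring system."""
    raise NotImplementedError("wire up Prometheus or the Kubernetes metrics API here")


def desired_replicas(load: float, capacity_per_pod: float, buffer: int) -> int:
    # CAUS-style rule: enough pods for the measured demand, plus a
    # fixed buffer of spare pods to absorb sudden bursts.
    return max(1, math.ceil(load / capacity_per_pod) + buffer)


def reconcile(apps: client.AppsV1Api, name: str, namespace: str) -> None:
    load = get_current_load()
    target = desired_replicas(load, capacity_per_pod=100.0, buffer=1)
    scale = apps.read_namespaced_deployment_scale(name, namespace)
    if scale.spec.replicas != target:
        scale.spec.replicas = target
        apps.replace_namespaced_deployment_scale(name, namespace, scale)


if __name__ == "__main__":
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    api = client.AppsV1Api()
    while True:
        reconcile(api, name="demo-app", namespace="default")
        time.sleep(30)
```

A loop of this shape is a natural extension point for the thesis's two concerns: the replica computation can be swapped for a machine-learning model, and each scaling decision (inputs, rule, resulting replica count) can be logged to support explainability.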

Full text and other links: Volltext (full text)
Department(s): University of Stuttgart, Institute of Software Technology, Software Quality and Architecture
Supervisor(s): Becker, Prof. Steffen; Klinaku, Floriment; Speth, Sandro
Entry date: March 17, 2023