Masterarbeit MSTR-2024-63

Bibliographic data
Yang, Haoran: Task-Specific Instruction Tuning for Precise Generation of Driven Software Requirements with Large Language Models.
Universität Stuttgart, Faculty of Computer Science, Electrical Engineering and Information Technology, Master's Thesis No. 63 (2024).
69 pages, English.
Abstract

In this thesis, we fine-tune different pre-trained large language models to assist people in writing precise, concrete requirements. To this end, we choose two criteria (ISO 29148 and transformational effects) to define a 'well-defined' requirement. For training, we fine-tuned three open-source 7-billion-parameter LLMs (Zephyr-7b-beta, Llama2-7b-chat-hf, Gemma-1.1-7b-it) on the same customized dataset. Based on automatic evaluation metrics (BERTScore, FrugalScore, TER, BLEU, ROUGE, Exact Match) and manual evaluation, we conclude that Gemma-1.1-7b-it performs best on our task. It can largely accomplish our generation and rewriting tasks while detecting some of the transformational effects in a requirement.
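To illustrate the kind of automatic evaluation mentioned above, the following is a minimal sketch (not the thesis code): it compares a generated requirement against a reference using Exact Match and a simplified unigram precision, the idea underlying BLEU-1 (without brevity penalty or higher-order n-grams). The example sentences are hypothetical.

```python
from collections import Counter

def exact_match(candidate: str, reference: str) -> float:
    # 1.0 if the strings are identical after whitespace normalization, else 0.0
    return float(" ".join(candidate.split()) == " ".join(reference.split()))

def unigram_precision(candidate: str, reference: str) -> float:
    # Fraction of candidate tokens that also occur in the reference,
    # with clipped counts (as in BLEU-1, ignoring the brevity penalty).
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(count, ref[tok]) for tok, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

reference = "The system shall log every failed login attempt."
print(exact_match("The system shall log every failed login attempt.", reference))  # 1.0
print(unigram_precision("The system shall log errors.", reference))  # 4 of 5 tokens match: 0.8
```

Metrics such as BERTScore and FrugalScore instead compare embedding similarity rather than surface tokens, which is why the thesis combines several metrics with manual evaluation.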

Department(s): Universität Stuttgart, Institut für Softwaretechnologie, Empirical Software Engineering
Supervisors: Wagner, Prof. Stefan; Habib, Mohammad Kasra
Submission date: 17 December 2024