Master's Thesis MSTR-2010-03

Bibliographic Data
An, Lihua: Test Automation and Evaluation of Graphical User Interfaces in a Continuous Integration Environment.
Universität Stuttgart, Fakultät Informatik, Master's Thesis No. 3 (2010).
83 pages, English.
Abstract

Graphical user interfaces (GUIs) are widely used in software projects to enable easy interaction, and GUI testing has become a significant part of software verification and validation. However, traditional manual testing of GUI applications has many problems. Especially for complex GUIs, testing all functionalities and their combinations is very time-consuming. Moreover, manual testing turns out to be error-prone because of the heavy testing workload; for example, testers may mistype test data. This leads to unreliable test results and can therefore become a serious problem for the quality of software products. These problems of manual GUI testing become particularly apparent when testing software with short update cycles. Today, more and more software projects are developed with continuous integration, a key element of agile development processes. Requirements are refined frequently during agile development, so manual testers have to run GUI tests more quickly and more often than in traditional development models. As a result, more resources and time are consumed, and the high testing demands invite additional human errors. Moreover, GUI testing cannot verify the development consistently: it is isolated from the continuous integration process and therefore out of synchronization with other development activities such as compiling the source code and unit testing. All these problems motivate automating GUI testing and integrating it into the continuous integration process.

So far, GUI testing has been automated with various techniques. First, the simplest and most direct approach is capture-and-replay: all actions that the tester performs on the GUI application are recorded, and test cases are generated from the recording. The recorded actions can then be played back, simulating the tester working through the GUI application. Test cases can be developed very easily and quickly with this method, but the recorded tests are fragile, especially for GUI applications that change frequently during development. Second, in contrast to capture-and-replay, model-based approaches have emerged recently, generating test cases automatically from a model that describes the GUI application. Model-based testing is, however, not widely applied in industry because creating the model is expensive. Third, test cases can be created by simulating a novice user in order to exercise the GUI application along unexpected paths. Finally, test cases can be developed with test automation frameworks such as data-driven and keyword-driven frameworks. The framework approach is quite popular today because of its low maintenance cost. In industry, test automation generally refers to automated test execution and reporting: test cases are executed and their results reported automatically by the testing tools. However, all the automated GUI testing approaches mentioned above are performed outside the continuous integration process, so GUI testing cannot provide consistent test results for frequent updates.
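To make the data-driven idea concrete, the following minimal Java sketch separates the test logic from externally supplied test data; the login scenario, the CSV layout, and the performLogin stand-in are illustrative assumptions and do not come from the thesis.

    // LoginDataDrivenTest.java -- illustrative sketch only; names and the CSV
    // layout are assumptions, not the thesis's actual test suite.
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.StringReader;

    public class LoginDataDrivenTest {

        // Stand-in for the real GUI interaction (e.g. a test script that fills
        // a login dialog); here it only validates inputs so the sketch runs.
        static boolean performLogin(String user, String password) {
            return !user.isEmpty() && password.length() >= 8;
        }

        public static void main(String[] args) throws IOException {
            // Test data kept separate from the test logic; in a real
            // data-driven framework this would come from a CSV file or
            // spreadsheet rather than an inline string.
            String csv = "user,password,expected\n"
                       + "alice,secret-password,true\n"
                       + "alice,short,false\n"
                       + ",secret-password,false\n";

            try (BufferedReader reader = new BufferedReader(new StringReader(csv))) {
                reader.readLine();                      // skip header row
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] cols = line.split(",", -1);
                    boolean expected = Boolean.parseBoolean(cols[2]);
                    boolean actual = performLogin(cols[0], cols[1]);
                    System.out.printf("user=%-6s expected=%-5s actual=%-5s %s%n",
                            cols[0], expected, actual,
                            expected == actual ? "PASS" : "FAIL");
                }
            }
        }
    }

In a data-driven framework of this shape, new test cases are added by editing the data table alone, which is what keeps the maintenance cost of the automation code low.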

In this thesis, automated test cases are developed with the framework approach. First, IBM Rational Functional Tester (RFT) is selected as the testing tool according to the selection process described in the thesis. RFT supports many types of frameworks; a data-driven framework is applied in this thesis. Second, the automated GUI test execution and reporting are integrated into the continuous integration process. As a case study, GUI testing for an industrial software project at IBM is automated with RFT and integrated into a continuous integration environment. Automated GUI test execution is triggered by the build process, and a smart test report is generated and deployed automatically as well. The smart test report is more readable than the raw RFT test result because exceptions are mapped to corresponding cause categories, and for each category specific people are responsible for resolving the exceptions. It is shown that automated GUI testing runs much faster than manual GUI testing and that the return on investment (ROI) improves, especially after GUI testing is integrated into the continuous integration process. In addition, defects in the GUI application can be quickly recognized and resolved with the help of the smart test report.
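The exception-to-cause mapping behind the smart test report can be pictured with a small Java sketch like the one below; the cause categories, the matching keywords, and the responsible roles are assumptions made for illustration, not the classification actually used in the thesis.

    // SmartReportSketch.java -- minimal sketch of mapping raw exceptions to
    // cause categories and owners; all rules below are illustrative.
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class SmartReportSketch {

        // Hypothetical rules: exception text pattern -> {cause category, owner}.
        private static final Map<String, String[]> RULES = new LinkedHashMap<>();
        static {
            RULES.put("ObjectNotFoundException", new String[]{"GUI change", "Test automation team"});
            RULES.put("Connection refused",      new String[]{"Environment", "Infrastructure team"});
            RULES.put("AssertionError",          new String[]{"Application defect", "Development team"});
        }

        static String[] classify(String exceptionMessage) {
            for (Map.Entry<String, String[]> rule : RULES.entrySet()) {
                if (exceptionMessage.contains(rule.getKey())) {
                    return rule.getValue();
                }
            }
            return new String[]{"Unclassified", "Test automation team"};
        }

        public static void main(String[] args) {
            // Example raw exception lines as they might appear in a test log.
            List<String> rawExceptions = List.of(
                    "ObjectNotFoundException: button 'Save' not found",
                    "java.net.ConnectException: Connection refused",
                    "AssertionError: expected <42> but was <41>");

            for (String e : rawExceptions) {
                String[] result = classify(e);
                System.out.printf("cause=%-18s owner=%-22s %s%n", result[0], result[1], e);
            }
        }
    }

Keeping the rules in one table routes each failure to the team responsible for its cause category, which is what makes such a report easier to act on than a raw exception log.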

Department(s): Universität Stuttgart, Institut für Informatik, Visualisierung und Interaktive Systeme
Supervisors: Ertl, Prof. Thomas; Schlegel, Prof. Thomas; Heim, Philipp; Stäbler, Benno
Entry date: 18 November 2019