| Abstract | This study explores the potential of AI-enhanced testing tools to improve the efficiency and effectiveness of software testing processes. As software applications grow more complex, traditional manual testing methods are often time-consuming and resource-intensive. This research evaluates AI-enhanced testing tools by comparing their performance against manual testing in terms of test creation, execution, and maintenance. The study focuses on three tools (Mabl, Tricentis Testim, and BrowserStack), selected through a comprehensive literature review and specific inclusion criteria. The evaluation was conducted as a two-part case study involving a modified Angular Jira Clone application, in which the tools were assessed on configuration effort, tech stack support, test report generation, and integration capabilities. The results indicate that while AI-enhanced tools show promise in automating and streamlining testing processes, they also exhibit limitations such as AI-generated errors and security concerns related to cloud-based testing. Interviews with developers at LF Consult revealed mixed opinions on the tools' effectiveness: significant value was placed on features such as no-code test creation and self-healing functionality, but concerns about data privacy and the reliability of AI-generated tests were also raised. Overall, the findings suggest that, given adequate training and experience, AI-enhanced testing tools can significantly improve testing efficiency and reduce costs, making them a valuable addition to the software testing toolkit of companies like LF Consult.
|