Simone Bugni Duch
Application of Large Language Models in Software Testing: An Analysis of Method-Level Bug Detection.
Supervisors: Flavio Giobergia, Alexander Felfernig, Denis Helic. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2025
Full text: PDF (Tesi_di_laurea), 3MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Software testing is a crucial phase of the software development lifecycle, essential for delivering secure and reliable software systems. Detecting bugs before deployment keeps systems robust for end users, avoiding unexpected behavior and maintaining high software quality. However, traditional testing methods frequently rely on manual effort or static rule-based tools, approaches that can be time-consuming and resource-intensive. With the advancement of Artificial Intelligence (AI), Large Language Models (LLMs) have emerged as a breakthrough, demonstrating impressive capabilities in scenarios such as machine translation, text summarization, and natural language understanding. Motivated by these promising results, researchers have begun exploring the potential of LLMs for a wide range of software engineering tasks, including applications within software testing.
This thesis explores the application of LLMs to the task of method-level bug detection in source code.
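As a rough illustration of what method-level bug detection with an LLM can look like (this is a hypothetical sketch, not the thesis's actual pipeline; the prompt wording, the `fake_llm` stub, and all function names are assumptions made for this example), the task can be framed as binary classification of a single method:

```python
def build_prompt(method_source: str) -> str:
    """Wrap one method in a binary-classification prompt."""
    return (
        "You are a code reviewer. Answer with exactly one word, "
        "BUGGY or CORRECT.\n\n"
        f"Method:\n{method_source}\n\nAnswer:"
    )

def parse_verdict(model_output: str) -> bool:
    """Return True if the model flags the method as buggy."""
    return model_output.strip().upper().startswith("BUGGY")

def fake_llm(prompt: str) -> str:
    """Offline stub standing in for a real LLM API call.

    Here it naively flags the classic off-by-one pattern `<=` in a
    loop bound, just so the sketch runs without network access.
    """
    return "BUGGY" if "<=" in prompt else "CORRECT"

# A Java method with an off-by-one error (reads past the array end).
buggy_method = """int sum(int[] a) {
    int s = 0;
    for (int i = 0; i <= a.length; i++) s += a[i];
    return s;
}"""

print(parse_verdict(fake_llm(build_prompt(buggy_method))))  # True
```

In a real setting the stub would be replaced by a call to an actual model, and the single-word verdict format makes the output trivially machine-parseable, at the cost of discarding any explanation the model could give.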
