Alessio Maria Palermo
AI Explainability in Cybersecurity: A Framework for Explainability Evaluation of Language Models in Cybersecurity Decision-Making
Supervisor: Cataldo Basile. Politecnico di Torino, Master's degree programme in Cybersecurity, 2026
License: Creative Commons Attribution Non-Commercial No Derivatives
Abstract
The increasing deployment of sophisticated AI systems, particularly in high-stakes domains such as cybersecurity, is driven by the need to cope with rapidly growing volumes of heterogeneous data, increasingly complex infrastructures, and persistent shortages of specialised human resources. Organisations adopt data-driven and learning-based solutions to automate detection, prioritisation, and response activities, with the aim of improving coverage, speed, and consistency of security operations. At the same time, this shift towards deep neural architectures and large-scale language models has been accompanied by a marked rise in model complexity and opacity. As organisations adopt these models to maximise predictive performance and coverage, the internal decision processes of the resulting systems often become less accessible to human stakeholders, widening the gap between effectiveness and interpretability.
This thesis presents a structured analysis of the Explainable Artificial Intelligence (XAI) paradigm to bridge this gap.