Giacomo Zuliani
EXPLAINABILITY METHODS IN MUSIC EMOTION RECOGNITION.
Supervisor: Cristina Emma Margherita Rottondi. Politecnico di Torino, Master's degree programme in Data Science and Engineering, 2025
PDF (Tesi_di_laurea), 6MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
This thesis explores how explainability can be introduced into Music Emotion Recognition (MER) models, which are usually hard to interpret despite their good performance. While many deep learning models can predict the emotional content of music with high accuracy, they often operate as black boxes, giving little to no insight into how they reach their conclusions. The goal of this work is to make these models more understandable, especially for users who want to exploit them not just as tools, but also to learn something from them. To this end, the thesis develops and tests two different approaches. The first is based on musical features, some taken from the literature and others introduced as a novel contribution.
It starts from an existing deep learning framework that uses mid-level features, such as melodic or rhythmic descriptors, to explain predictions, and then extends it by adding simpler, more intuitive features, such as chords or notes, which may be easier to interpret and potentially helpful to composers and researchers.
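The mid-level-feature idea can be sketched as follows. A common way such frameworks expose their reasoning is to map interpretable features to emotion scores through a linear layer, so that each weight directly states how much a feature contributes to each emotion. The feature names, emotion dimensions, and weight values below are purely illustrative, not taken from the thesis:

```python
import numpy as np

# Hypothetical mid-level features predicted by a deep model for one track
# (names and values are illustrative assumptions).
feature_names = ["melodiousness", "rhythmic_stability", "dissonance"]
features = np.array([0.8, 0.6, 0.2])

# Linear layer mapping mid-level features to emotion scores. Because the
# mapping is linear, each weight is a readable statement of how much a
# feature pushes an emotion up or down.
emotion_names = ["valence", "arousal"]
W = np.array([[0.9, 0.3, -0.7],   # valence weights
              [0.1, 0.8,  0.4]])  # arousal weights

scores = W @ features  # emotion predictions

# Per-feature contributions: the "explanation" for each prediction.
contributions = W * features  # shape (2, 3)
for e, emotion in enumerate(emotion_names):
    parts = ", ".join(f"{fn}: {c:+.2f}"
                      for fn, c in zip(feature_names, contributions[e]))
    print(f"{emotion} = {scores[e]:.2f} ({parts})")
```

The sum of a row of `contributions` reproduces that emotion's score, which is what makes this kind of decomposition a faithful explanation of the linear stage (the deep feature extractor underneath remains a black box).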
