Victor Sinapi
An innovative methodology to experimentally compare explainable AI solutions for Natural Language Processing.
Supervisor: Tania Cerquitelli. Politecnico di Torino, Master's degree programme in Computer Engineering (Ingegneria Informatica), 2021
PDF (Tesi_di_laurea) - Thesis
Licence: Creative Commons Attribution Non-commercial No Derivatives. (4MB)
Abstract:
In recent years, artificial intelligence (AI) has developed rapidly. From agriculture to finance to healthcare, the potential benefits of these algorithms are substantial. AI decisions influence our daily lives: consider recommendation systems for films and TV series, purchase suggestions in e-commerce, or targeted advertising. An important distinction must be made, however: although AI models can achieve very high performance, there are areas in which, for various reasons, their decisions cannot be accepted at face value. The reason lies in how these models are built. They work like black boxes: they take an input and return an output, and while the results are often excellent, there is no way of knowing why a given decision was made, which is fundamental in many sectors. For these reasons, the demand for interpretability and explainability of AI models has increased; open questions remain about how to evaluate the quality of an explanation and how different explanation methods can be compared. The goal of this thesis was to define a methodology for comparing various techniques for explaining AI models in the field of natural language processing, in both quantitative and qualitative terms. The most appropriate criteria for comparing explanations were identified, and metrics were defined to measure these criteria quantitatively. In addition, the qualitative criteria were assessed through a user survey. The methodology was applied to three explanation frameworks: LIME, T-EBAnO, and SHAP. For each of them, comparison experiments were performed on three tasks: sentiment analysis on movie reviews, toxicity detection on comments, and topic classification on news articles.
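To illustrate the kind of local, per-word explanation that frameworks such as LIME, T-EBAnO, and SHAP produce, here is a minimal, hypothetical sketch (not code from the thesis): a leave-one-out explainer that scores each word by how much a black-box classifier's output drops when that word is removed. The toy lexicon-based classifier stands in for a real NLP model.

```python
def predict_positive(words):
    """Toy black-box classifier: the probability of the 'positive'
    class grows with the number of lexicon hits. A stand-in for a
    real sentiment model; the lexicon here is purely illustrative."""
    lexicon = {"great", "excellent", "enjoyable"}
    hits = sum(w in lexicon for w in words)
    return hits / (hits + 1)

def leave_one_out_importance(words, predict):
    """Score each word by the drop in the prediction when it is
    removed: a crude local explanation of a single decision."""
    base = predict(words)
    return {
        w: base - predict(words[:i] + words[i + 1:])
        for i, w in enumerate(words)
    }

scores = leave_one_out_importance("a great movie".split(), predict_positive)
print(scores)  # 'great' carries all the importance for this input
```

Real frameworks refine this idea: LIME fits a weighted linear surrogate over many random word-masking perturbations, and SHAP averages contributions over all subsets of words, but the underlying question ("how does the output change when this input part is perturbed?") is the same.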
Supervisors: Tania Cerquitelli
Academic year: 2021/22
Publication type: Electronic
Number of pages: 100
Degree programme: Master's degree programme in Computer Engineering (Ingegneria Informatica)
Degree class: New system > Master's degree > LM-32 - Computer Engineering
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/20577