Victor Sinapi
An innovative methodology to experimentally compare explainable AI solutions for Natural Language Processing.
Supervisor: Tania Cerquitelli. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2021
PDF (Tesi_di_laurea) — License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
In recent years, artificial intelligence (AI) has developed rapidly. From agriculture to finance to healthcare, the potential advantages of these algorithms are substantial. AI decisions influence our daily lives: consider recommendation systems for films and TV series, purchase suggestions in e-commerce, or targeted advertising. An important distinction must be made, however: although AI systems can achieve very high performance, there are domains in which, for various reasons, their decisions cannot simply be accepted. The root of the problem lies in how these models are built: they work as black boxes, taking an input and returning an output. The results are often excellent, but there is no way of knowing why a particular decision was made, and in many sectors that understanding is essential.
For these reasons, the demand for interpretability and explainability of artificial intelligence models has grown. An open question remains how to evaluate the quality of an explanation and how different explanation models can be compared.
