Niccolò Manfredi Selvaggi
A new metric for the interpretability of artificial neural networks in medical diagnosis applications.
Advisor: Alfredo Braunstein. Politecnico di Torino, Master's degree programme in Physics of Complex Systems (Fisica Dei Sistemi Complessi), 2022
PDF (Tesi_di_laurea), 4MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Machine learning is a powerful tool for automating tasks that humans accomplish easily but that are difficult to implement with "classic" algorithms. In recent years, thanks to the growth in computing power and in the amount of data collected, algorithms have become more complex and more effective at solving increasingly difficult tasks. This increase in complexity, exemplified by Deep Learning, has turned the models into black boxes: machines capable of performing a task faster, and sometimes even better, than humans, but without any way of understanding the criteria and the computations by which the algorithm produced a given output.
One of the most promising application areas for these technologies is medical diagnosis, because the identification of pathologies reduces to a classification of medical data that can be collected in databases and that often consists of numerical values and images, formats easily processed by modern machine learning algorithms.
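To make the black-box contrast concrete, the following is a minimal, self-contained sketch (not the metric developed in the thesis): a logistic-regression classifier trained on synthetic "medical" data, where the learned weight magnitudes give a directly interpretable readout of which features drive the prediction. All feature names and data are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "medical" data: 200 patients, 3 numeric features
# (hypothetical stand-ins for measurements such as age or lab values).
X = rng.normal(size=(200, 3))
# Ground truth: only the first two features influence the diagnosis.
y = (X[:, 0] + 2.0 * X[:, 1] > 0).astype(float)

# Logistic-regression classifier trained by plain gradient descent.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# A simple interpretability readout: the magnitude of each learned weight
# indicates how strongly that feature drives the model's output.
importance = np.abs(w)
print("feature importances:", importance)
```

Unlike this linear model, a deep network offers no such direct weight-to-feature reading, which is precisely the interpretability gap the abstract describes.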
