
A new metric for the interpretability of artificial neural networks in medical diagnosis applications.

Niccolo' Manfredi Selvaggi

Supervisor: Alfredo Braunstein. Politecnico di Torino, Master's degree programme in Physics of Complex Systems (Fisica dei Sistemi Complessi), 2022

Full text: PDF (Tesi_di_laurea) - Thesis, 4MB.
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Machine learning is a powerful tool for automating tasks that humans accomplish easily but that are difficult to implement with "classic" algorithms. In recent years, thanks to the growth in computing power and in the amount of data collected, algorithms have become more complex and more effective at solving increasingly difficult tasks. This increase in complexity, exemplified by Deep Learning, has turned models into black boxes: machines capable of performing a task faster, and sometimes better, than humans, but without any way of understanding by which criteria and computations the algorithm produced a given output. One of the most promising application areas for these technologies is medical diagnosis, because identifying a pathology is essentially a classification of medical data that can be collected in databases and that often consists of numerical values and images, formats easily processed by modern machine learning. In this sector, however, trust in the outcome of a diagnosis and the legal responsibility of the physician, both for their professionalism and for any errors made, are of fundamental importance. However thoroughly an algorithm is tested, and even though it can often be shown to have a lower error rate than a human, it will inevitably make mistakes and misdiagnoses too; moreover, there remains the problem of entrusting responsibility for patients' lives to a non-human agent that uses non-interpretable methods. This thesis proposes an empirical mathematical analysis of artificial neural networks from which a new metric can be developed to evaluate diagnostic models from the standpoint of interpretability. The purpose of the thesis is to extract statistical information about the reliability of the output of ML models, through which rational decision-making protocols can be implemented. Underlying the proposed metric is a mathematical formalism developed by Professor Audun Jøsang called Subjective Logic, which provides a set of tools for decision making under uncertainty.
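The Subjective Logic formalism mentioned above represents a proposition (for instance "this diagnosis is correct") as a binomial opinion, a tuple (b, d, u, a) of belief, disbelief, uncertainty and base rate with b + d + u = 1, whose projected probability is P = b + a·u; opinions can be formed from counts of supporting and contradicting evidence. The Python sketch below illustrates this standard evidence-to-opinion mapping from Jøsang's formalism. It is a minimal illustration of the building blocks the thesis draws on, not the thesis's own metric, and the audit numbers at the end are hypothetical.

```python
from dataclasses import dataclass

# Non-informative prior weight; W = 2 is the standard choice in Subjective Logic.
W = 2.0

@dataclass
class BinomialOpinion:
    belief: float       # b: mass supporting the proposition
    disbelief: float    # d: mass against the proposition
    uncertainty: float  # u: mass reflecting lack of evidence (b + d + u = 1)
    base_rate: float    # a: prior probability in the absence of evidence

    def projected_probability(self) -> float:
        # P(x) = b + a * u: uncertainty mass is apportioned by the base rate.
        return self.belief + self.base_rate * self.uncertainty

def opinion_from_evidence(r: float, s: float, base_rate: float = 0.5) -> BinomialOpinion:
    """Evidence-to-opinion mapping: r supporting and s contradicting
    observations give b = r/(r+s+W), d = s/(r+s+W), u = W/(r+s+W)."""
    total = r + s + W
    return BinomialOpinion(r / total, s / total, W / total, base_rate)

# Hypothetical audit: a diagnostic model judged correct on 90 of 100 reviewed cases.
op = opinion_from_evidence(r=90, s=10)
print(op)                          # belief≈0.882, disbelief≈0.098, uncertainty≈0.020
print(op.projected_probability())  # ≈0.892, with u quantifying residual doubt
```

Unlike a bare accuracy estimate, the uncertainty component u shrinks as evidence accumulates, which is what makes this representation suitable for the kind of rational decision-making protocols the abstract refers to.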

Supervisor: Alfredo Braunstein
Academic year: 2022/23
Publication type: Electronic
Number of pages: 74
Subjects:
Degree programme: Master's degree programme in Physics of Complex Systems (Fisica dei Sistemi Complessi)
Degree class: New regulations > Master's degree > LM-44 - Modellistica Matematico-Fisica per l'Ingegneria (Mathematical-Physical Modelling for Engineering)
Collaborating companies: QUATERNION TECHNOLOGY S.R.L.
URI: http://webthesis.biblio.polito.it/id/eprint/24521