
Explainable AI for Clustering Algorithms

Marcello Cannone

Explainable AI for Clustering Algorithms.

Supervisors: Elena Maria Baralis, Eliana Pastor. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering), 2020

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.
Download (31MB)
Abstract:

Technological progress has brought artificial intelligence closer to people, and it now plays an important supporting role in many fields. Artificial Intelligence (AI) is a transversal technology, applied in domains ranging from medicine to finance, from law to security, from autonomous driving to the military. As AI becomes involved in contexts of high sensitivity and risk, users increasingly need to understand what the AI decision-making process suggests. Understanding, and therefore comprehensibility, of a result is closely linked to the interpretability that the model can provide through explanations of its outputs. Although AI systems are becoming more useful and provide substantial benefits, their adoption is limited by the inability of many models to explain a given decision or action to users, which leads many users to consider them untrustworthy. Today's challenge is to make AI explainable, gaining users' trust and helping them understand and manage AI outcomes. Not all AI models lack interpretability: some of the simpler ones are interpretable by nature and, although less accurate, are often preferable to users for this reason. Trust and understanding are therefore key to a growing adoption of AI models. To achieve them, the upcoming generation of AI models is putting greater effort into interpretability, not only to understand a result but also to validate a model and find possible issues. Inspecting the internal processes of a model is not always possible; it depends on the model, which may be a black box. This is the case for the majority of the most widely used machine learning algorithms, for which current efforts focus on developing tools and libraries that reveal what lies behind the model outcome in order to provide a reasonable explanation. Explanation methods are divided into two types, model-dependent and model-agnostic: the former are limited to a single class of models, while the latter are independent of the applied model. As the explanation is the basis and essence of the interpretability of a result, it gives the field its name, eXplainable Artificial Intelligence (XAI).

This thesis explores the currently available tools and solutions for explaining unsupervised clustering. State-of-the-art explanation techniques for supervised learning are tailored and adapted to unsupervised clustering applications. The proposed approach is model-agnostic, i.e. it can explain the clustering results of any unsupervised technique. Clustering results are first learned and modeled with supervised techniques; state-of-the-art explainers are then applied to provide explanations. The proposed approach allows clustering results to be understood at different scopes: it provides (i) a global understanding of the clustering results, (ii) individual cluster interpretability, highlighting which attribute values contribute most to a specific cluster under analysis, and (iii) a local explanation for a single cluster instance. Experimental results on artificial and real datasets compare multiple explainers and indicate which is most suitable for each scope of interpretability of interest.
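The workflow described in the abstract (cluster the data, fit a supervised surrogate on the cluster labels, then explain at global, per-cluster, and local scope) can be sketched roughly as follows. This is a minimal illustration under assumed choices, not the pipeline actually used in the thesis: KMeans, a random-forest surrogate, scikit-learn's permutation importance, and LIME are placeholders for the clustering algorithm and explainers compared in the work.

# Minimal sketch of the model-agnostic workflow outlined in the abstract.
# KMeans, RandomForestClassifier, permutation importance, and LIME are
# illustrative choices, not necessarily those evaluated in the thesis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, feature_names = data.data, data.feature_names

# (1) Unsupervised clustering: any algorithm works, the approach is model-agnostic.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# (2) Learn the clustering result with a supervised surrogate model.
surrogate = RandomForestClassifier(random_state=0).fit(X, labels)

# (3a) Global scope: which features drive the cluster assignment overall.
global_imp = permutation_importance(surrogate, X, labels, n_repeats=10, random_state=0)
print("global importance:", dict(zip(feature_names, global_imp.importances_mean.round(3))))

# (3b) Cluster scope: importance restricted to the instances of one cluster.
cluster_id = 0
mask = labels == cluster_id
cluster_imp = permutation_importance(surrogate, X[mask], labels[mask], n_repeats=10, random_state=0)
print("cluster 0 importance:", dict(zip(feature_names, cluster_imp.importances_mean.round(3))))

# (3c) Local scope: explanation of a single instance via LIME applied to the surrogate.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["cluster 0", "cluster 1", "cluster 2"])
instance = X[0]
predicted = int(surrogate.predict(instance.reshape(1, -1))[0])
exp = lime_explainer.explain_instance(instance, surrogate.predict_proba,
                                      labels=[predicted], num_features=4)
print("local explanation:", exp.as_list(label=predicted))

In this sketch the surrogate classifier stands in for the clustering result, so any explainer designed for supervised models can be reused unchanged; swapping the explainer (e.g. SHAP instead of LIME) or the clustering algorithm does not alter the overall scheme.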

Supervisors: Elena Maria Baralis, Eliana Pastor
Academic year: 2020/21
Publication type: Electronic
Number of pages: 138
Subjects:
Degree programme: Corso di laurea magistrale in Ingegneria Informatica (Computer Engineering)
Degree class: Nuovo ordinamento > Laurea magistrale > LM-32 - INGEGNERIA INFORMATICA
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/15868