Explainable AI for business decision-making

Gianluigi Lopardo

Explainable AI for business decision-making. Supervisors: Elena Maria Baralis, Frédéric Precioso, Damien Garreau, Greger Ottosson. Politecnico di Torino, Corso di laurea magistrale in Ingegneria Matematica, 2021.

PDF (Tesi_di_laurea), 2MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Machine Learning is increasingly being leveraged in business processes to make automated decisions. Nevertheless, a decision is rarely made by a standalone machine learning model; rather, it is the result of an orchestration of predictive models, each predicting key quantities for the problem at hand, which are then combined through decision rules to produce the final decision. For example, a mobile phone company aiming to reduce customer churn would use machine learning to predict churn risk and rank potential retention offers, and then apply eligibility rules and other policies to decide whether a retention offer is worth proposing to a given customer and, if so, which one. Companies typically apply decision rules on top of machine learning predictions or classifications to deliver better conformance, adaptability, and transparency. Interpretability is a pressing question in these situations: in the example above, it is fundamental for the sales representative to know, even roughly, why a decision was made.

While the field of interpretable machine learning is full of open challenges in itself, a number of additional challenges arise when trying to explain a decision that relies on both business rules and multiple machine learning models. First, the business rules surrounding the models introduce non-linearities that cause problems for attribution-based interpretability methods such as LIME and SHAP. Second, the already transparent business rules represent knowledge that, unless exploited, will cause problems for sampling-based explanation methods. Third, machine learning models with overlapping features will produce conflicting explanation weights. As a result, applying current methods to these real-world decision systems produces unreliable and brittle explanations.

In this configuration, there is knowledge that we can exploit to make our explanations process-aware: we know which variables are involved in the decision policy, and we know its rules. It is worth exploiting this information instead of treating the whole system as a black box and being completely model-agnostic. In this thesis, we present SMACE (Semi-Model-Agnostic Contextual Explainer), a new interpretability method that combines a geometric approach (for business rules) with existing interpretability solutions (for machine learning models) to generate feature-importance-based explanations.

Specifically, SMACE provides two levels of explanation, for the different users involved in the decision-making process. The first level, aimed at the business user, provides a ranking of importance for all the variables used, whether they are input attributes or internally computed values. This is useful, for example, to the sales representative, who has access to and knowledge of company policies. By interpreting the process, the business user can explain, modify, override, or validate the specific decision. The second level is intended for the end customer, who has access neither to the internal policy rules nor to the way the decision-making process is managed. She therefore requires explanations based solely on information she is aware of, i.e., input features such as her personal details or service usage values. We show that while LIME and SHAP produce poor results when applied to such a decision system, SMACE provides intuitive feature rankings tailored to business needs.
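To make the setting concrete, below is a minimal, hypothetical Python sketch of the kind of decision system described in the abstract: two machine learning models with overlapping input features, whose predictions are combined through transparent eligibility rules. All feature names, thresholds, and models here are illustrative assumptions, not taken from the thesis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic customers: [tenure_months, monthly_usage_gb, num_complaints]
X = rng.uniform(low=[0, 0, 0], high=[72, 50, 10], size=(500, 3))
churn = (X[:, 2] / 10 + rng.normal(0, 0.2, size=500) > 0.5).astype(int)
offer_value = X[:, 0] / 72 + X[:, 1] / 50  # toy "offer worthiness" target

# Two models sharing input features -- one source of the conflicting
# explanation weights mentioned in the abstract.
churn_model = RandomForestClassifier(random_state=0).fit(X, churn)
offer_model = RandomForestRegressor(random_state=0).fit(X, offer_value)

def decide(customer):
    """Combine model outputs through transparent business rules."""
    x = np.asarray(customer).reshape(1, -1)
    risk = churn_model.predict_proba(x)[0, 1]  # predicted churn risk
    score = offer_model.predict(x)[0]          # retention offer score
    tenure = customer[0]
    # Eligibility rules: hard, non-linear thresholds wrapped around
    # the models' outputs.
    if risk > 0.5 and tenure >= 6:
        return "premium offer" if score > 1.0 else "standard offer"
    return "no offer"

print(decide([24.0, 30.0, 7.0]))  # e.g. "standard offer"

The if-branches in decide() are exactly the kind of non-linearity that degrades attribution methods such as LIME and SHAP when the whole pipeline is treated as a single black box; the semi-model-agnostic idea is to exploit the known rules while remaining agnostic about the inner workings of churn_model and offer_model.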

Supervisors: Elena Maria Baralis, Frédéric Precioso, Damien Garreau, Greger Ottosson
Academic year: 2021/22
Publication type: Electronic
Number of pages: 83
Degree course: Corso di laurea magistrale in Ingegneria Matematica
Degree class: Nuovo ordinamento > Laurea magistrale > LM-44 - Modellistica matematico-fisica per l'ingegneria
Institution in joint supervision: Inria - Institut national de recherche en informatique et en automatique (FRANCE)
Collaborating companies: INRIA
URI: http://webthesis.biblio.polito.it/id/eprint/19854