Sensitive attributes disproportion as a risk indicator of algorithmic unfairness
Federico D'Asaro
Supervisors: Antonio Vetrò, Juan Carlos De Martin. Politecnico di Torino, Master's degree programme in Data Science and Engineering, 2021
PDF (Tesi_di_laurea) — Thesis. Licence: Creative Commons Attribution Non-commercial No Derivatives. (5MB)
Archive (ZIP) (Documenti_allegati) — Other. Licence: Creative Commons Attribution Non-commercial No Derivatives. (43MB)
Abstract
Defence date: 07/09/2021. AI is increasingly used in highly sensitive areas such as health care and hiring, so growing attention is being paid to the implications of the bias and unfairness embedded in it. One might assume that using data to automate decisions makes everything fair, but that is not the case: bias can enter through societal bias embedded in training datasets, through decisions made during the machine learning development process, and in other ways. Our aim is to anticipate unfairness phenomena, before any algorithm is applied, by studying the balance characteristics of protected attributes such as age, ethnicity, and gender.
We start by replicating the results of [1], analyzing the relationships between balance indices and unfairness indices.
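As an illustration of the kind of balance characteristic the thesis studies (this is a generic sketch, not necessarily the specific index used in the work), one common choice is the normalized Shannon entropy of a categorical protected attribute, which is 1.0 when the groups are perfectly balanced and approaches 0 as one group dominates:

```python
from collections import Counter
import math

def balance_index(values):
    """Normalized Shannon entropy of a categorical attribute.

    Returns 1.0 for perfectly balanced groups, values near 0.0 for
    highly imbalanced ones. `values` is any iterable of group labels.
    """
    counts = Counter(values)
    n = sum(counts.values())
    k = len(counts)
    if k < 2:
        return 0.0  # a single group carries no balance information
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)  # divide by max entropy to normalize

# Toy example: a gender attribute skewed 90/10 vs. balanced 50/50
skewed = ["F"] * 90 + ["M"] * 10
balanced = ["F"] * 50 + ["M"] * 50
print(round(balance_index(skewed), 3))    # 0.469
print(round(balance_index(balanced), 3))  # 1.0
```

A low value of such an index on a sensitive attribute would flag the dataset as at risk of producing unfair outcomes before any model is trained.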
