
Understanding traffic matrix estimation with eXplainable AI (XAI)

Cristian Zilli

Understanding traffic matrix estimation with eXplainable AI (XAI).

Supervisors: Guido Marchetto, Alessio Sacco. Politecnico di Torino, Master's degree programme in Ingegneria Informatica (Computer Engineering), 2022

PDF (Tesi_di_laurea), 4MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:

Diagnostics are a foundation for the management, maintenance, and improvement of computer networks. To this end, traffic matrices are an effective diagnostic element: they compactly represent directed traffic flows between pairs of networked nodes. In detail, a traffic matrix is a two-dimensional array in which each row (column) corresponds to a node, and each cell contains the volume of traffic flowing from the row node to the column node, obtained by aggregating link load measurements over a sampling interval. Collecting this data enables informed strategies for infrastructural enhancement and traffic engineering.

However, such information is often only partially available. This is the case, for example, in networks handling traffic volumes so large that telemetry operations place heavy computational strain on the measuring devices, degrading the performance of their networking functions (e.g. forwarding throughput). Simply scaling up computational resources to absorb this overhead is not always feasible and is an expensive solution. This motivates the strong interest in traffic matrix estimation and completion, namely the problem of inferring missing traffic flows via statistical or Artificial Intelligence (AI)-based techniques. This work focuses mostly on AI- and data-driven techniques used as regression tools to tackle this problem. At the same time, we aim to address another issue that stems directly from the intrinsic nature of AI: its lack of human interpretability. The main contribution of this thesis is the comparison (in terms of several error metrics) of multiple models for traffic matrix completion, and the explanation of the decision process of the black-box techniques via eXplainable Artificial Intelligence (XAI) methods such as saliency maps.
Experimental results show that the accuracy of Machine Learning (ML)-based and statistical models strongly depends on the network conditions, i.e., the dataset used. The complexity of the traffic and the absence of clear patterns limit the models' ability to generalize findings across different network traces. On the other hand, the study of the model decision process via XAI shows that models are influenced in their inference mainly by the square neighborhood surrounding the missing traffic matrix cell.
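The structure described above can be illustrated with a minimal sketch: a toy traffic matrix with an unmeasured cell, estimated from the square neighborhood around it. This is only an illustrative interpolation baseline on made-up numbers, not one of the models evaluated in the thesis; the function name and the 4-node example are assumptions for demonstration.

```python
import numpy as np

# Toy 4-node traffic matrix (rows = source nodes, columns = destination
# nodes); each cell is an aggregated link-load value for one sampling
# interval. All numbers are illustrative, not from the thesis dataset.
tm = np.array([
    [0.0, 5.0, 3.0, 2.0],
    [4.0, 0.0, 6.0, 1.0],
    [2.0, 7.0, 0.0, 5.0],
    [3.0, 1.0, 4.0, 0.0],
])

def estimate_missing(tm, i, j, radius=1):
    """Estimate the missing cell (i, j) as the mean of the square
    neighborhood of side 2*radius + 1 around it (a hypothetical
    baseline, clipped at the matrix borders)."""
    r0, r1 = max(i - radius, 0), min(i + radius + 1, tm.shape[0])
    c0, c1 = max(j - radius, 0), min(j + radius + 1, tm.shape[1])
    window = tm[r0:r1, c0:c1].copy()
    window[i - r0, j - c0] = np.nan  # exclude the missing cell itself
    return np.nanmean(window)

# Pretend cell (1, 2) was not measured and estimate it from its neighbors.
estimate = estimate_missing(tm, 1, 2)
```

The XAI finding quoted above is consistent with such a locality assumption: saliency maps attribute most of the models' output to exactly this kind of surrounding window rather than to distant cells.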

Supervisors: Guido Marchetto, Alessio Sacco
Academic year: 2022/23
Publication type: Electronic
Number of pages: 58
Subjects:
Degree course: Master's degree programme in Ingegneria Informatica (Computer Engineering)
Degree class: New regulations > Master's degree > LM-32 - INGEGNERIA INFORMATICA
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/25663