
Incorporating prototypes into a Neural-Symbolic architecture

Simone Martone

Incorporating prototypes into a Neural-Symbolic architecture.

Advisors: Fabrizio Lamberti, Lia Morra. Politecnico di Torino, Master's degree program in Mathematical Engineering, 2021

Abstract:

In the field of Deep Learning, highly complex models are designed to maximize performance metrics, while little attention is often paid to interpreting their results or reasoning about them. As a consequence, many models are extremely reliable in scenarios they have repeatedly experienced during training but generalize poorly to new situations. Humans, by contrast, can leverage logical reasoning to make guesses about new circumstances and can infer knowledge from few or even zero examples. To address this fundamental issue, novel research areas are emerging. Among the most dynamic are Neural-Symbolic Integration, concerned inter alia with the assimilation of logic into deep architectures, and Few-Shot Learning, which extends the traditional classification problem to settings affected by scarcity or absence of labelled examples. Investigating the extent to which elements from these two fields can be combined may therefore prove useful.

In this thesis, we consider two frameworks from the aforementioned fields, Logic Tensor Networks (LTN) and Prototypical Networks (PNs), and explore the possibility of integrating ideas from the latter into the former. LTN is a neural-symbolic architecture that replaces the classic notion of a training set with a knowledge base of logical axioms, ultimately interpreted in a fuzzy way, i.e., as truth values between 0 and 1. Once a set of differentiable operators is defined to approximate the role of connectives, predicates, functions and quantifiers, a loss function is automatically specified, so that LTNs can learn to satisfy the knowledge base. PNs, on the other hand, handle few- and zero-shot classification tasks by defining suitable class prototypes in a high-dimensional embedding space. Items are assigned to the class of their nearest prototype, according to some distance measure. Since the embedding space is the focus of the learning procedure, such prototypes can also be defined for classes that are not seen at training time.

If we limit ourselves to Few-Shot Learning, mixing PNs with LTNs could help improve embedding robustness by extending the knowledge base with axioms that account, e.g., for hierarchies between classes or multi-level relationships between them. However, representing classes as parametrized prototypes rather than integer labels may also be of interest to several applications of LTNs that go beyond classification, such as Semantic Image Interpretation. In addition, prototypes are more interpretable than plain labels, as their placement in the embedding space can be easily visualized with dimensionality reduction methods such as t-SNE.

The purpose of this work is not a thorough exploration of these potential applications, but rather the construction of a theoretical background that can serve as a starting point for future research. We therefore perform experiments on simple tasks, in which comparability with alternative architectures is preserved while notable aspects of the model can still be highlighted. More specifically, we examine two toy examples (MNIST and Fashion-MNIST) and a Zero-Shot Learning benchmark (Animals with Attributes 2, or AwA2).
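
To make the fuzzy interpretation of the knowledge base described above concrete, the following is a minimal, self-contained Python sketch of the mechanism: differentiable fuzzy operators stand in for connectives and quantifiers, and training minimizes one minus the satisfaction of the knowledge base. Everything here is illustrative only: the operator choices (product t-norm for conjunction, Reichenbach implication, mean aggregation for the universal quantifier) and the toy predicate P are assumptions made for exposition, not the LTN library's API or the thesis's implementation.

import numpy as np

# Fuzzy connectives: truth values live in [0, 1].
# These operator choices are illustrative assumptions, not those of the thesis.
def Not(a):            # standard fuzzy negation
    return 1.0 - a

def And(a, b):         # product t-norm
    return a * b

def Implies(a, b):     # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

def Forall(values):    # universal quantifier as a mean aggregator
    return np.mean(values)

# A hypothetical predicate: P(x, c) = "x is close to the prototype c",
# implemented as a smooth similarity score in [0, 1].
def P(x, c):
    return np.exp(-np.linalg.norm(x - c) ** 2)

# Toy knowledge base: forall x in the batch, P(x, c_A) -> not P(x, c_B).
rng = np.random.default_rng(0)
batch = rng.normal(size=(8, 2))
c_A, c_B = np.array([0.0, 0.0]), np.array([3.0, 3.0])

sat = Forall([Implies(P(x, c_A), Not(P(x, c_B))) for x in batch])
loss = 1.0 - sat   # LTN-style training minimizes (1 - satisfaction)
print(f"satisfaction={sat:.3f}, loss={loss:.3f}")

In an actual LTN, the prototypes and the embedding network would be trainable parameters, and the gradient of this loss would be backpropagated through the fuzzy operators.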
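
For the Prototypical Networks side, the standard formulation from Snell et al. (2017) may help fix ideas. With f_phi the learned embedding, S_k the support set of class k, and d a distance (typically squared Euclidean), prototypes and class posteriors are defined as:

\[
  \mathbf{c}_k = \frac{1}{|S_k|} \sum_{(x_i, y_i) \in S_k} f_\phi(x_i),
  \qquad
  p_\phi(y = k \mid x) = \frac{\exp\big(-d(f_\phi(x), \mathbf{c}_k)\big)}{\sum_{k'} \exp\big(-d(f_\phi(x), \mathbf{c}_{k'})\big)}
\]

Note that the thesis considers parametrized prototypes, so the mean-based definition on the left should be read as the original PN baseline it departs from, not as the method proposed here.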

Advisors: Fabrizio Lamberti, Lia Morra
Academic year: 2020/21
Publication type: Electronic
Number of pages: 86
Additional information: Restricted thesis. Full text not available
Subjects:
Degree program: Master's degree program in Mathematical Engineering
Degree class: New system > Master of Science > LM-44 - MATHEMATICAL-PHYSICAL MODELLING FOR ENGINEERING
Partner companies: NOT SPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/18795