Carlo Orientale Caputo
Plasticity across neural hierarchies in artificial neural networks.
Rel. Andrea Pagnani, Matteo Marsili. Politecnico di Torino, Corso di laurea magistrale in Physics Of Complex Systems (Fisica Dei Sistemi Complessi), 2023
License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Deep neural networks can extract a hierarchy of relevant features from the data that can be used for both classification and generation tasks, reaching state-of-the-art performance in object/speech recognition and language translation. However, many characteristics of the way these networks process information or, more generally, the reasons why they work so well are still unclear. In this work we analyze some features of a deep belief network during training across different layers in an unsupervised setting. First of all, we study how plasticity varies across the network's layers, computing the variation of the architecture's weights when the dataset to be learned is changed. We observe that plasticity increases across layers, meaning that the features learned in deep layers are more dataset-dependent, whereas the shallow ones are more generic.
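The plasticity measure described in the abstract can be illustrated with a minimal sketch: compare each layer's weights after training on one dataset with the weights after retraining on another, and report the relative change per layer. The layer names, the choice of the Frobenius norm, and the toy data below are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def layer_plasticity(weights_a, weights_b):
    """Relative change of each layer's weight matrix when the dataset changes.

    A larger value means the layer's features were reshaped more by the
    new dataset, i.e. the layer is more plastic.
    """
    return {
        name: np.linalg.norm(weights_b[name] - weights_a[name])
        / np.linalg.norm(weights_a[name])
        for name in weights_a
    }

# Toy example: deeper layers are perturbed more strongly, mimicking the
# increasing-plasticity trend reported in the abstract.
rng = np.random.default_rng(0)
w_a = {f"layer_{i}": rng.normal(size=(50, 50)) for i in range(3)}
w_b = {
    name: w + 0.1 * (i + 1) * rng.normal(size=w.shape)
    for i, (name, w) in enumerate(w_a.items())
}

plasticity = layer_plasticity(w_a, w_b)
for name, value in plasticity.items():
    print(f"{name}: {value:.3f}")
```

In this toy setting the reported values grow with depth, echoing the finding that shallow layers encode generic features while deep layers adapt more to the specific dataset.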
