Cognitive Aware Incremental Knowledge Update of Large Language Models
Simone Clemente
Supervisor: Marco Mellia. Politecnico di Torino, Master's degree programme in Data Science and Engineering, 2024
PDF (Tesi_di_laurea), 2MB. Master's thesis. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Despite remarkable capabilities, large language models (LLMs) struggle to update their knowledge incrementally without catastrophic forgetting or indiscriminate learning. In contrast, humans effortlessly integrate new information, detect conflicts with existing beliefs, and selectively update their knowledge. This work introduces a novel paradigm inspired by the human brain: Cognitive Aware Incremental Knowledge Update. We implement and evaluate two key components within existing LLM architectures: (1) Inner State Awareness, allowing LLMs to classify new information as novel, familiar, or conflicting; and (2) targeted updates through Differentiated Plasticity, distinguishing between neurons containing previous knowledge (busy) and rarely used neurons (free). Through a series of controlled experiments, we demonstrate the potential benefits of this approach, including improved preservation of prior knowledge during updates, more effective handling of conflicting information, and an enhanced ability to target specific knowledge for updates.
While challenges remain, particularly in scaling to full-size LLMs and real-world scenarios, our work provides a promising direction for developing more flexible and adaptable language models.
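The abstract describes Differentiated Plasticity only at a high level. One way to picture the busy/free distinction is as a per-neuron learning-rate mask: neurons that fire often during prior training are treated as "busy" and receive damped updates, while rarely used "free" neurons absorb new knowledge at full strength. The sketch below is purely illustrative; the function names, usage counts, threshold, and scaling rule are assumptions, not the thesis implementation.

```python
# Illustrative sketch of a "Differentiated Plasticity"-style update rule.
# All names, thresholds, and learning rates here are assumptions.

def classify_neurons(usage_counts, busy_threshold=10):
    """Label each neuron 'busy' (frequently activated) or 'free' (rarely used)."""
    return ["busy" if count >= busy_threshold else "free" for count in usage_counts]

def plastic_update(weights, gradients, labels, free_lr=1.0, busy_lr=0.1):
    """Apply full-strength updates to free neurons and damped updates to busy
    neurons, so knowledge stored in busy neurons is largely preserved."""
    updated = []
    for w, g, label in zip(weights, gradients, labels):
        lr = free_lr if label == "free" else busy_lr
        updated.append(w - lr * g)
    return updated

# Toy example: two frequently used neurons and two rarely used ones.
usage = [50, 3, 42, 0]
labels = classify_neurons(usage)   # ['busy', 'free', 'busy', 'free']
weights = [1.0, 1.0, 1.0, 1.0]
grads = [0.5, 0.5, 0.5, 0.5]
new_weights = plastic_update(weights, grads, labels)
# Busy neurons move a tenth as far as free ones, protecting prior knowledge.
```

In a full-size LLM the same idea would operate on layer weight matrices rather than a flat list, but the principle is identical: the gradient is scaled per neuron according to how much prior knowledge that neuron is presumed to hold.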
