
Thesis defence: Miguel SOLINAS

Defence

9 December 2021

Dual memory system to overcome catastrophic forgetting

One of the main characteristics that make human beings unique is their ability to learn continually. It is part of individual development and is vital to progress and to avoid stagnation. In order to evolve, human beings need to gain experience and acquire competencies to broaden their skills constantly. Artificial neural networks, by contrast, lack the capacity to store memories and to learn incrementally, and much research is devoted to enabling them to learn continually and to avoid catastrophic forgetting. For almost three decades, researchers have addressed the problem of catastrophic forgetting by studying the neurogenesis of the brain, synaptic consolidation and replay systems. Deep learning has yielded remarkable results in many applications, yet artificial neural networks still suffer from catastrophic forgetting of old knowledge as new information is learned. Modelling true continual learning, as humans do, remains a challenge and requires appropriate solutions to this problem.

First, neurogenesis-based approaches evolve the neural network architecture to adapt to different training experiences using independent sets of parameters. Second, synaptic consolidation-based approaches limit the changes to parameters that are important for previously learned tasks, so new tasks employ neurons that are less useful for previous tasks. A third family of approaches, replay, rehearses previously learned information in two ways: with real samples (rehearsal) or with synthetic samples (pseudo-rehearsal). Rehearsal methods overcome catastrophic forgetting by replaying a subset of previously learned examples stored in dedicated memory buffers. Alternatively, pseudo-rehearsal methods generate pseudo-samples to emulate previously learned data, removing the need for dedicated buffers. Reviewing what has been previously learned, through examples or pseudo-samples, while learning new tasks allows the global set of parameters to be adapted for both past and new tasks. In this way, catastrophic forgetting can be overcome much as in classical deep learning training, where the entire dataset is present. Since replay methods often rely on limited memory buffers or on rough generative models, their biggest challenge is to represent correctly and globally what has been previously learned.

This thesis brings together contributions on continual learning, on the properties of autoencoders and on knowledge transfer. First, we distinguish continual learning from catastrophic forgetting, highlight limitations of the settings used to evaluate continual learning approaches, and outline future research directions. Second, we introduce an auto-associative memory module and a sampling method that generate synthetic samples for capturing and transferring knowledge, which replay methods can employ. Third, we propose a continual learning model for settings where privacy issues exist; we then improve and extend this model by combining pseudo-rehearsal and rehearsal methods into an efficient and competitive solution that improves on state-of-the-art results. Finally, in a comprehensive investigation, we attempt to determine which examples to use in replay methods to alleviate catastrophic forgetting. We detail the methodological aspects of each contribution and provide evidence on datasets such as MNIST, SVHN, CIFAR-10 and CIFAR-100.
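To make the rehearsal idea above concrete, here is a minimal, self-contained Python sketch of a fixed-size replay buffer interleaved with a training stream. It is illustrative only and not taken from the thesis: the RehearsalBuffer class, its capacity, and the reservoir-sampling policy are assumptions introduced for this example.

import random

class RehearsalBuffer:
    """Fixed-size memory of past (input, label) pairs for replay (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling: every example seen so far has an equal
        # chance of remaining in the fixed-size buffer.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        # Draw a replay mini-batch of previously learned examples.
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)

# Usage: interleave replayed examples with the new task's data so the
# network rehearses old knowledge while it learns the new task.
buffer = RehearsalBuffer(capacity=200)
stream = ((f"x{i}", i % 10) for i in range(1000))  # stand-in for a data stream
for new_example in stream:
    replay_batch = buffer.sample(batch_size=8)     # old examples to rehearse
    training_batch = [new_example] + replay_batch  # train the network on both
    buffer.add(new_example)

In practice, the replayed examples would be fed, together with the new task's mini-batch, to the same loss used for ordinary training; pseudo-rehearsal replaces the stored examples with samples drawn from a generative or auto-associative model.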

Supervisors
- Thesis supervisor: Martial MERMILLOD - martial.mermillod@univ-grenoble-alpes.fr
- Co-supervisor: Marina REYBOZ - marina.reyboz@cea.fr
 
Keywords: Incremental learning, Catastrophic forgetting, Continual learning
 

Date

9 December 2021

Funding

CEA: Dotation des EPIC

01/10/2018-09/12/2021

Published on 28 August 2023

Updated on 20 November 2023