
Thesis Pierre COURSIMAULT

Thesis

From 1 October 2022 to 30 September 2025

HW/SW co-design for on-chip incremental learning

A thorough reading of the state of the art reveals several limitations in the intrinsic performance of algorithms based on artificial neural networks. First, at the algorithmic level, artificial neural networks are very efficient at classification tasks, but they suffer from catastrophic forgetting. Consequently, they cannot learn incrementally: when learning proceeds sequentially, the famous plasticity/stability dilemma cannot be resolved, since the plasticity of the system dominates. It should be noted that the methods proposed so far therefore concern static, image-type data. However, incremental learning takes on its full meaning when the environment is changing, and therefore particularly with dynamic data such as time series.
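To make the phenomenon concrete, the minimal PyTorch sketch below trains a small network on the first five MNIST digits, then on the last five, and measures how accuracy on the first subset collapses. The dataset split, architecture and hyper-parameters are illustrative assumptions only, not choices prescribed by the thesis project.

```python
# Illustrative sketch of catastrophic forgetting on split MNIST.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

def subset_by_classes(ds, classes):
    # Keep only the samples whose label belongs to the given class list.
    mask = torch.isin(ds.targets, torch.tensor(classes))
    return Subset(ds, torch.nonzero(mask).flatten().tolist())

tfm = transforms.ToTensor()
train = datasets.MNIST("data", train=True, download=True, transform=tfm)
test = datasets.MNIST("data", train=False, download=True, transform=tfm)

task_a, task_b = [0, 1, 2, 3, 4], [5, 6, 7, 8, 9]
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fit(loader, epochs=1):
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

@torch.no_grad()
def accuracy(loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

loader = lambda ds: DataLoader(ds, batch_size=128, shuffle=True)
test_a = DataLoader(subset_by_classes(test, task_a), batch_size=256)

fit(loader(subset_by_classes(train, task_a)))   # learn task A (digits 0-4)
print("task A accuracy after task A:", accuracy(test_a))
fit(loader(subset_by_classes(train, task_b)))   # then learn task B (digits 5-9)
print("task A accuracy after task B:", accuracy(test_a))  # typically collapses
```

The second accuracy print illustrates the dominance of plasticity: without any mechanism to stabilise old knowledge, training on task B overwrites what was learned on task A.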
During the 3 years of the thesis, the candidate will have to:
- Define the application framework of the subject, i.e. the database(s) to work on. The candidate can start with an image database (MNIST, CIFAR or ImageNet) on which the laboratory's bio-inspired algorithms already operate, allowing comparison with other state-of-the-art methods. It will then be necessary to choose a dynamic, time-series-type database, which will highlight the full interest of incremental learning methods. Indeed, it is in a changing environment that these algorithms are most relevant.
- Adapt the algorithms developed in the laboratory to the chosen application framework, making sure to find the right compromises between their application-level performance and their hardware frugality. To do so, it will be necessary to define the hardware constraints associated with the chosen application. Questions to be addressed include:
> How is knowledge distributed in the network? How does it evolve as new information is learned?
> What is a good pseudo-example? How can such good pseudo-examples be generated to optimize the inference phase? (see the pseudo-rehearsal sketch below)
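One common reading of the pseudo-example question is pseudo-rehearsal: before learning a new task, a frozen copy of the previous model labels synthetic inputs, and these pairs are replayed alongside the new data to limit forgetting. The sketch below only illustrates that idea, continuing the split-MNIST example above; the random-noise pseudo-inputs, the distillation loss and all hyper-parameters are assumptions, not the laboratory's actual algorithms.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def pseudo_rehearsal_step(model, old_model, new_loader, opt, loss_fn,
                          n_pseudo=2048, in_shape=(1, 28, 28)):
    """One epoch on the new task, mixed with pseudo-examples.

    Pseudo-examples are random inputs labeled by a frozen copy of the
    previous model; replaying them penalizes drift on old knowledge.
    (Naive choice for illustration: better pseudo-inputs are an open question.)
    """
    old_model.eval()
    with torch.no_grad():
        pseudo_x = torch.rand(n_pseudo, *in_shape)        # naive pseudo-inputs
        pseudo_y = old_model(pseudo_x).softmax(dim=1)     # soft targets from old model
    pseudo_loader = DataLoader(TensorDataset(pseudo_x, pseudo_y),
                               batch_size=128, shuffle=True)

    model.train()
    for (x_new, y_new), (x_old, y_old) in zip(new_loader, pseudo_loader):
        opt.zero_grad()
        loss_new = loss_fn(model(x_new), y_new)           # learn the new classes
        # Distillation-style term keeps responses on pseudo-examples stable.
        loss_old = nn.functional.kl_div(model(x_old).log_softmax(dim=1),
                                        y_old, reduction="batchmean")
        (loss_new + loss_old).backward()
        opt.step()

# Usage sketch: take old_model = copy.deepcopy(model) before starting task B,
# then call pseudo_rehearsal_step(model, old_model, loader_task_b, opt, loss_fn).
```

How such pseudo-inputs should actually be generated, and how well they cover the knowledge distributed in the network, are exactly the questions raised above.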

Contacts:
- Martial MERMILLOD - martial.mermillod@univ-grenoble-alpes.fr
- Marina REYBOZ - marina.reyboz@cea.fr

Keywords: deep learning, incremental learning, lifelong learning


Funding

CEA - Endowment for EPICs and EPAs (including CEA)

Submitted on 17 November 2023

Updated on 17 November 2023