Hello, I’m Corentin, a Research Engineer at Huawei Noah's Ark Lab, where I currently work on applying LLM agents to automate Data Science tasks. I have a strong foundation in Machine Learning and Software Development, and I am particularly interested in Reinforcement Learning, Evolutionary Strategies, multi-agent systems, and LLMs.
Before that, I was a Research Engineer in the Inria Flowers team, where I developed and studied multi-agent systems. This involved analyzing how text evolves in populations of LLMs and building a multi-agent simulator.
I hold an MSc in Computer and Cognitive Sciences from ENSC (GPA 4.0), complemented by exchange programs in AI. I also interned at Connectiv-IT as a Data Scientist, and at Inria as a researcher working with both the Flowers and Mnemosyne teams on Meta-RL.
Among many other things, I love sports and play volleyball at the national level (bronze medal at the 2023 French University Championship!).
Contact: corentin.lger@gmail.com

Publications
See my Google Scholar profile for more details about these publications.

When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions
J Perez, G Kovač, C Léger, C Colas, G Molinaro, M Derex, PY Oudeyer, C Moulin-Frier
ICLR, 2025
@misc{perez2024llmsplaytelephonegame,
  title={When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions},
  author={Jérémy Perez and Corentin Léger and Grgur Kovač and Cédric Colas and Gaia Molinaro and Maxime Derex and Pierre-Yves Oudeyer and Clément Moulin-Frier},
  year={2024},
  eprint={2407.04503},
  archivePrefix={arXiv},
  primaryClass={physics.soc-ph},
  url={https://arxiv.org/abs/2407.04503},
}

Cultural evolution in populations of Large Language Models
J Perez, C Léger, M Ovando-Tellez, C Foulon, J Dussauld, PY Oudeyer, C Moulin-Frier
arXiv, 2024
@article{perez2024cultural,
  title={Cultural evolution in populations of Large Language Models},
  author={Perez, J{\'e}r{\'e}my and L{\'e}ger, Corentin and Ovando-Tellez, Marcela and Foulon, Chris and Dussauld, Joan and Oudeyer, Pierre-Yves and Moulin-Frier, Cl{\'e}ment},
  journal={arXiv preprint arXiv:2403.08882},
  year={2024}
}

Evolving reservoirs for Meta Reinforcement Learning
*C Léger, *G Hamon, E Nisioti, X Hinaut, C Moulin-Frier
EvoStar [Long Talk], 2024
@inproceedings{leger2024evolving,
  title={Evolving Reservoirs for Meta Reinforcement Learning},
  author={L{\'e}ger, Corentin and Hamon, Gautier and Nisioti, Eleni and Hinaut, Xavier and Moulin-Frier, Cl{\'e}ment},
  booktitle={International Conference on the Applications of Evolutionary Computation (Part of EvoStar)},
  pages={36--60},
  year={2024},
  organization={Springer}
}

Early Empirical Results on Reinforcement Symbolic Learning
W Radji, C Léger, L Bardisbanian
HAL Inria, 2023

Open Source
Here is a list of open source projects I have contributed to; check my GitHub profile for more details.

LLM-Culture
This software simulates networks of LLM agents that generate text over multiple generations, based on their neighbors' inputs, their personality, and their task. The project also provides tools for analyzing the resulting text dynamics through a web interface.
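As a rough illustration of this kind of simulation loop (a minimal sketch, not the actual LLM-Culture API: `query_llm`, `Agent`, and the network layout below are hypothetical placeholders), one generation of a transmission network might look like this:

```python
import random

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (API request or local model)."""
    words = prompt.split()
    return " ".join(random.sample(words, k=min(10, len(words))))

class Agent:
    def __init__(self, personality: str, task: str):
        self.personality = personality
        self.task = task
        self.text = "Once upon a time..."  # initial seed text

def generation_step(agents, neighbors):
    """One generation: every agent rewrites its text from its neighbors' previous texts."""
    prompts = []
    for i, agent in enumerate(agents):
        context = "\n".join(agents[j].text for j in neighbors[i])
        prompts.append(
            f"Personality: {agent.personality}\nTask: {agent.task}\n"
            f"Texts from your neighbors:\n{context}"
        )
    new_texts = [query_llm(p) for p in prompts]  # all agents update simultaneously
    for agent, text in zip(agents, new_texts):
        agent.text = text

# Tiny fully connected network of three agents, run for a few generations.
agents = [Agent("an optimistic storyteller", "continue the story") for _ in range(3)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(5):
    generation_step(agents, neighbors)
print(agents[0].text)
```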


KanRL
A project studying the combination of RL and Kolmogorov-Arnold Networks (KANs). I helped create a Hugging Face app to interpret RL policies, and benchmarked the performance of Policy Gradient and PPO algorithms using both KANs and MLPs.
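For a sense of what such a benchmark looks like (a minimal REINFORCE-style sketch, not KanRL's actual code; the environment, hyperparameters, and network sizes are illustrative), the policy network can be treated as a swappable module so that a KAN with matching input/output sizes can replace the MLP:

```python
import gymnasium as gym
import torch
import torch.nn as nn

def make_mlp(obs_dim: int, act_dim: int, hidden: int = 64) -> nn.Module:
    # A KAN with the same input/output dimensions could be returned here instead.
    return nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, act_dim))

def reinforce(policy, env_id="CartPole-v1", episodes=200, gamma=0.99, lr=1e-2):
    """Vanilla policy gradient (REINFORCE) using whatever network is passed in."""
    env = gym.make(env_id)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        obs, _ = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, terminated, truncated, _ = env.step(action.item())
            rewards.append(reward)
            done = terminated or truncated
        # Discounted returns, normalized for stability.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns = torch.tensor(returns[::-1])
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)
        loss = -(torch.stack(log_probs) * returns).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    env.close()

if __name__ == "__main__":
    reinforce(make_mlp(obs_dim=4, act_dim=2))  # CartPole-v1 observation/action sizes
```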
- Stable-Baselines3: Fixed a few issues in the Reinforcement Learning libraries Stable-Baselines3 and Stable-Baselines3-Contrib.
- ReservoirPy: Developed a tutorial on parallelized hyperparameter search with Optuna in the ReservoirPy ML library. It covers running the search on local machines as well as scaling it up with Slurm jobs on remote clusters (see the sketch below).