Hello, I’m Corentin, a Research Engineer at Huawei Noah's Ark Lab Paris. I have a strong foundation in Machine Learning and Software Engineering, and I'm particularly interested in Reinforcement Learning, Evolution Strategies, Multi-Agent Systems, and LLMs.
I currently work on applying LLMs to automate Data Science tasks, under the supervision of Balázs Kégl. Prior to that, I was a Research Engineer at Inria in the Flowers lab, where I worked on multi-agent systems, supervised by Clément Moulin-Frier.
During my MSc in Computer and Cognitive Sciences at ENSC, I also interned at Connectiv-IT as a Data Scientist, and at Inria Flowers doing Meta-RL research.
Among many other things, I love sports and play volleyball at a national level (bronze medal at the 2023 French University Championship!).
Contact: corentin.lger@gmail.com

Publications
You can check my Google Scholar profile for more details about my publications.

When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions
J Perez, G Kovač, C Léger, C Colas, G Molinaro, M Derex, PY Oudeyer, C Moulin-Frier
ICLR, 2025
@misc{perez2024llmsplaytelephonegame,
  title={When LLMs Play the Telephone Game: Cumulative Changes and Attractors in Iterated Cultural Transmissions},
  author={Jérémy Perez and Corentin Léger and Grgur Kovač and Cédric Colas and Gaia Molinaro and Maxime Derex and Pierre-Yves Oudeyer and Clément Moulin-Frier},
  year={2024},
  eprint={2407.04503},
  archivePrefix={arXiv},
  primaryClass={physics.soc-ph},
  url={https://arxiv.org/abs/2407.04503},
}

Cultural evolution in populations of Large Language Models
J Perez, C Léger, M Ovando-Tellez, C Foulon, J Dussauld, PY Oudeyer, C Moulin-Frier
arXiv, 2024
@article{perez2024cultural,
  title={Cultural evolution in populations of Large Language Models},
  author={Perez, J{\'e}r{\'e}my and L{\'e}ger, Corentin and Ovando-Tellez, Marcela and Foulon, Chris and Dussauld, Joan and Oudeyer, Pierre-Yves and Moulin-Frier, Cl{\'e}ment},
  journal={arXiv preprint arXiv:2403.08882},
  year={2024}
}

Evolving Reservoirs for Meta Reinforcement Learning
*C Léger, *G Hamon, E Nisioti, X Hinaut, C Moulin-Frier
EvoStar [Long Talk], 2024
@inproceedings{leger2024evolving,
  title={Evolving Reservoirs for Meta Reinforcement Learning},
  author={L{\'e}ger, Corentin and Hamon, Gautier and Nisioti, Eleni and Hinaut, Xavier and Moulin-Frier, Cl{\'e}ment},
  booktitle={International Conference on the Applications of Evolutionary Computation (Part of EvoStar)},
  pages={36--60},
  year={2024},
  organization={Springer}
}

Early Empirical Results on Reinforcement Symbolic Learning
W Radji, C Léger, L Bardisbanian
HAL Inria, 2023

Open Source
Here is a list of open-source projects I have contributed to; you can check my GitHub profile for more details.

LLM-Culture
This software enables simulating networks of LLM agents that generate text over multiple generations, based on their neighbors' input, their personality, and their task. The project also provides tools for analyzing the resulting text dynamics, as well as a web interface.


KanRL
A project studying the combination of RL and Kolmogorov-Arnold Networks (KANs). I helped create a Hugging Face app to interpret RL policies, and benchmarked the performance of Policy Gradient and PPO algorithms using both KANs and MLPs.

Other contributions
- Stable-Baselines3: Fixed a few issues in the popular Reinforcement Learning libraries Stable-Baselines3 and Stable-Baselines3-Contrib.
- ReservoirPy: Created a tutorial on parallelized hyperparameter search in the ReservoirPy ML library. It covers running the search on local machines, as well as scaling it to remote clusters.

Hackathons
- 🧠 Hack1Robo 2024 (first place): Optimized the persuasion skills of LLMs in debate tournaments via prompt evolution, using a Quality Diversity method to evolve the strategies of debater LLMs.
- 🤖 Hugging Face LeRobot: Assembled a robotic arm and created a real-world RL environment for object manipulation. Trained the arm using both Behavioral Cloning and online Reinforcement Learning.
- 📚 Hack1Robo 2023: Evolved and analyzed texts in populations of Large Language Models, inspired by work on cultural evolution. This later led to the publication of two papers.
- 🧬 Inria Hackatech 2023: Optimized the strategies of multi-LLM agent systems via prompt evolution, reaching GPT-4 level on math tasks with evolved systems of GPT-3.5 agents. This led to the creation of a startup: Ebiose.