Since 2020, I have been an assistant professor at the VU Amsterdam. Previously, I was a post-doc at McGill University/Mila, where I worked with Joelle Pineau and Doina Precup, and I also completed a short research stay at the University of Louvain. Before that, I did a PhD in machine learning at the University of Liege with Raphael Fonteneau and Damien Ernst, from which I graduated in September 2017.
More information on my professional experience is available on my LinkedIn profile.
I'm interested in deep reinforcement learning, particularly in generalization from limited data, learning abstract representations, and integrating model-based and model-free learning. One of my recent works describes how the model-based and model-free approaches can be combined via a shared low-dimensional learned encoding of the environment that captures summarizing abstractions (AAAI-19). Another publication uses a similar approach for novelty search in representational space, enabling sample-efficient exploration (NeurIPS-2020).
With a particular focus on generalization, I also wrote an introduction to deep reinforcement learning (Foundations and Trends in Machine Learning) as well as a paper providing theoretical insights into bias and overfitting in the general context of POMDPs (JAIR and IJCAI-2020).
The full list of my publications can be found here.
The title of my PhD thesis is "Contributions to deep reinforcement learning and its applications in smartgrids".
Reinforcement learning and its extension with deep learning have led to a field of research called deep reinforcement learning. Applications of that research have recently shown that it is possible to solve complex decision-making tasks that were previously believed to be extremely difficult for a computer. Yet, deep reinforcement learning requires caution and an understanding of its inner mechanisms in order to be applied successfully in different settings.
As an introduction, we provide a general overview of the field of deep reinforcement learning. The thesis is then divided into two parts.
In the first part of this thesis, we provide an analysis of reinforcement learning in the particular setting of a limited amount of data and in the general context of partial observability. In this setting, we focus on the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data), and we theoretically show that, while potentially increasing the asymptotic bias, a smaller state representation decreases the risk of overfitting. One original theoretical contribution is to express the quality of a state representation by bounding L1 error terms of the associated belief states. We also discuss and empirically illustrate the role of other parameters in optimizing the bias-overfitting tradeoff: the function approximator (in particular deep learning) and the discount factor. In addition, we investigate the specific case of the discount factor in the deep reinforcement learning setting, where additional data can be gathered through learning.
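As an illustration of the variance side of this tradeoff, here is a minimal Monte Carlo sketch (all numbers and the toy MDP are invented for illustration, not taken from the thesis): with only a handful of trajectories, a smaller discount factor yields a lower-variance value estimate, at the cost of a bias with respect to the long-horizon objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-state MDP: reward ~ N(1, 1) at every step, horizon H.
# We estimate the discounted return from only n_traj trajectories and
# compare a small and a large discount factor.
H, n_traj, n_trials = 50, 5, 2000

def estimate_value(gamma, rewards):
    """Monte Carlo estimate of the discounted return, averaged over trajectories."""
    discounts = gamma ** np.arange(H)
    return (rewards @ discounts).mean()

estimates = {0.5: [], 0.99: []}
for _ in range(n_trials):
    rewards = rng.normal(1.0, 1.0, size=(n_traj, H))  # limited data
    for gamma in estimates:
        estimates[gamma].append(estimate_value(gamma, rewards))

for gamma, vals in estimates.items():
    true_value = (1 - gamma ** H) / (1 - gamma)  # expected return with mean reward 1
    print(f"gamma={gamma}: true value={true_value:.2f}, "
          f"estimator variance={np.var(vals):.3f}")
```

The variance of the gamma=0.99 estimator is roughly an order of magnitude larger than that of the gamma=0.5 estimator, which is the overfitting-flavored risk that a shorter effective horizon mitigates.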
In the second part of this thesis, we focus on a smartgrids application that is partially observable and where only a limited amount of data is available (the setting studied in the first part of the thesis). We consider the case of microgrids featuring photovoltaic (PV) panels associated with both long-term (hydrogen) and short-term (batteries) storage devices. We propose a novel formalization of the problem of building and operating microgrids interacting with their surrounding environment. Under a deterministic assumption, we show how to optimally operate and size microgrids using linear programming techniques. We then show how to use deep reinforcement learning to solve the operation of microgrids under uncertainty, where, at every time-step, the uncertainty comes from the lack of knowledge about future electricity consumption and weather-dependent PV production.
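To give a flavor of the deterministic operation problem, here is a hedged sketch of a toy battery-dispatch linear program using `scipy.optimize.linprog` (all data and parameters are invented for illustration; the actual formalization in the thesis is richer, including hydrogen storage and sizing decisions):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 4-step horizon (all values hypothetical).
T = 4
pv     = np.array([0.0, 3.0, 4.0, 0.0])   # PV production (kWh)
demand = np.array([2.0, 1.0, 1.0, 3.0])   # consumption (kWh)
price  = np.array([0.2, 0.1, 0.1, 0.3])   # grid price (EUR/kWh)
cap, rate, s0 = 5.0, 2.0, 1.0             # battery capacity, power limit, initial charge

# Decision variables x = [g_0..g_{T-1}, u_0..u_{T-1}]:
# g_t >= 0 is the grid import, u_t is the battery discharge (negative = charging).
c = np.concatenate([price, np.zeros(T)])  # minimize the grid electricity cost

A_ub, b_ub = [], []
for t in range(T):
    # Power balance: g_t + pv_t + u_t >= demand_t  <=>  -g_t - u_t <= pv_t - demand_t
    row = np.zeros(2 * T)
    row[t] = -1.0
    row[T + t] = -1.0
    A_ub.append(row); b_ub.append(pv[t] - demand[t])
    # State of charge after step t, s0 - sum_{k<=t} u_k, must stay in [0, cap].
    row_hi = np.zeros(2 * T); row_hi[T:T + t + 1] = -1.0  # -sum u_k <= cap - s0
    A_ub.append(row_hi); b_ub.append(cap - s0)
    row_lo = np.zeros(2 * T); row_lo[T:T + t + 1] = 1.0   #  sum u_k <= s0
    A_ub.append(row_lo); b_ub.append(s0)

bounds = [(0, None)] * T + [(-rate, rate)] * T
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print(f"optimal cost: {res.fun:.2f} EUR")
```

With perfect knowledge of future PV production and demand, the LP dispatches the battery so that grid purchases happen only when storage and PV cannot cover the load; it is precisely this perfect-foresight assumption that the deep reinforcement learning approach relaxes.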
I have taught the following courses: