I am currently a post-doctoral research fellow in the Reasoning and Learning Lab at McGill University, where I work with Joelle Pineau and Doina Precup. Before that, I completed a PhD in machine learning (deep reinforcement learning) at the University of Liège with Raphaël Fonteneau and Damien Ernst, graduating in September 2017.
More information on my research is provided below.
The title of my PhD thesis is "Contributions to deep reinforcement learning and its applications in smartgrids".
Reinforcement learning and its extension with deep learning have given rise to a field of research called deep reinforcement learning. Applications of that research have recently shown that it is possible to solve complex decision-making tasks previously believed to be extremely difficult for a computer. Yet, deep reinforcement learning requires caution and an understanding of its inner mechanisms in order to be applied successfully in different settings.
As an introduction, we provide a general overview of the field of deep reinforcement learning. The thesis is then divided into two parts.
In the first part of this thesis, we provide an analysis of reinforcement learning in the particular setting of a limited amount of data and in the general context of partial observability. In this setting, we focus on the tradeoff between asymptotic bias (suboptimality with unlimited data) and overfitting (additional suboptimality due to limited data), and theoretically show that while potentially increasing the asymptotic bias, a smaller state representation decreases the risk of overfitting. An original theoretical contribution lies in expressing the quality of a state representation by bounding L1 error terms of the associated belief states. We also discuss and empirically illustrate the role of other parameters in optimizing the bias-overfitting tradeoff: the function approximator (in particular deep learning) and the discount factor. In addition, we investigate the specific case of the discount factor in the deep reinforcement learning setting, where additional data can be gathered through learning.
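As a toy illustration of one side of this tradeoff (a self-contained sketch, not taken from the thesis), a smaller discount factor shortens the effective horizon of the return, so Monte Carlo estimates built from a fixed budget of sampled trajectories have lower variance, at the price of bias with respect to the long-horizon objective:

```python
import numpy as np

# Hypothetical setup: 500 sampled trajectories of i.i.d. rewards over 100 steps.
rng = np.random.default_rng(0)
horizon, n_episodes = 100, 500
rewards = rng.normal(loc=1.0, scale=1.0, size=(n_episodes, horizon))

def discounted_returns(rewards, gamma):
    """Per-episode discounted return sum_t gamma^t * r_t."""
    discounts = gamma ** np.arange(rewards.shape[1])
    return rewards @ discounts

# Smaller gamma -> lower-variance return estimates (less overfitting risk),
# but the estimated objective is biased relative to the gamma=0.99 one.
var_low = discounted_returns(rewards, 0.5).var()
var_high = discounted_returns(rewards, 0.99).var()
print(var_low, var_high)
```

With i.i.d. rewards the variance of the return scales with the sum of squared discounts, which grows sharply with gamma, so `var_low` is markedly smaller than `var_high` here.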
In the second part of this thesis, we focus on a smartgrids application that is partially observable and for which only a limited amount of data is available (as studied in the first part of the thesis). We consider the case of microgrids featuring photovoltaic (PV) panels associated with both long-term (hydrogen) and short-term (batteries) storage devices. We propose a novel formalization of the problem of building and operating microgrids interacting with their surrounding environment. Under the assumption of a deterministic environment, we show how to optimally operate and size microgrids using linear programming techniques. We then show how to use deep reinforcement learning to operate microgrids under uncertainty, where, at every time step, the uncertainty comes from the lack of knowledge about future electricity consumption and weather-dependent PV production.
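To give a flavor of the deterministic case, here is a minimal linear-programming sketch of battery dispatch in a PV microgrid (all numbers are illustrative and simplified: a single lossless battery, no hydrogen storage, known PV and load, surplus PV curtailed; this is not the formalization from the thesis):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data over 4 time steps (kWh and EUR/kWh).
pv = np.array([0.0, 3.0, 4.0, 0.0])      # known PV production
load = np.array([2.0, 1.0, 1.0, 3.0])    # known consumption
price = np.array([0.3, 0.1, 0.1, 0.4])   # grid import price
T = len(pv)
cap, p_max, s0 = 5.0, 2.0, 1.0           # battery capacity, power limit, initial charge

# Decision variables x = [grid imports | battery charges | battery discharges].
c_obj = np.concatenate([price, np.zeros(T), np.zeros(T)])

# Supply must cover demand: grid + pv + discharge - charge >= load,
# written as -grid + charge - discharge <= pv - load (surplus PV is curtailed).
I = np.eye(T)
A_bal = np.hstack([-I, I, -I])
b_bal = pv - load

# State of charge s_t = s0 + cumsum(charge - discharge) must stay in [0, cap].
L = np.tril(np.ones((T, T)))             # cumulative-sum operator
A_soc_hi = np.hstack([np.zeros((T, T)), L, -L])
A_ub = np.vstack([A_bal, A_soc_hi, -A_soc_hi])
b_ub = np.concatenate([b_bal, np.full(T, cap - s0), np.full(T, s0)])

bounds = [(0, None)] * T + [(0, p_max)] * (2 * T)
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.status, round(res.fun, 3))     # minimal imported-energy cost
```

For these numbers the optimum buys 1 kWh in the first and last (expensive) steps and shifts the cheap midday PV surplus into the battery, for a total cost of 0.70. Sizing can be handled in the same framework by making `cap` and `p_max` decision variables with investment costs in the objective.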