Research

Research Interests

My research is in machine learning, with a focus on reinforcement learning. I am particularly interested in representation learning, brain-inspired algorithms, abstractions, and generalization.

  • Reinforcement Learning
  • Combinatorial Generalization
  • Brain-inspired AI
  • Unsupervised Abstractions

Current Research Projects

Combinatorial Generalization with Tensorial Representations

RL-Map

The RL-Map project addresses a fundamental challenge in artificial intelligence: enabling reinforcement learning (RL) agents to learn effectively in real-world environments rather than just simulations. While AI has revolutionized image recognition and language processing, it still struggles with sequential decision-making tasks in the physical world, such as controlling robots for everyday tasks.

This research introduces a novel fourth component to RL agents – a dynamic, abstract mapping module inspired by the mammalian brain's hippocampal-entorhinal cartographic circuits – to complement the traditional policy, value function, and world model components. By decoupling the agent's capabilities from environmental configurations through this mapping system, the project aims to dramatically improve sample efficiency (reducing the amount of experience needed to learn) and robustness to environmental changes, two major obstacles preventing real-world deployment of RL.

The research will develop and test these mapping algorithms across diverse simulated environments representing real-world challenges, with potential applications in Quebec industries such as forestry robotics, ultimately working toward RL agents that can be trained directly in the field without requiring extensive simulations.
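As a rough illustration of the four-component architecture described above, the sketch below pairs a toy tabular agent (policy, value function, world model) with a hypothetical mapping module that translates raw observations into abstract states. All class and variable names here are illustrative assumptions, not the project's actual design; the mapping module is reduced to simple nearest-prototype clustering for concreteness.

```python
import numpy as np

class MappingModule:
    """Hypothetical abstract map: assigns each raw observation to a
    discrete abstract state, decoupling the rest of the agent from
    the environment's raw configuration."""
    def __init__(self, n_states, obs_dim, rng):
        self.prototypes = rng.standard_normal((n_states, obs_dim))

    def abstract_state(self, obs):
        # Index of the nearest prototype to this observation.
        return int(np.argmin(np.linalg.norm(self.prototypes - obs, axis=1)))

class Agent:
    """Toy four-component agent: policy, value function, world model,
    plus the mapping module sketched above."""
    def __init__(self, n_states, n_actions, obs_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.map = MappingModule(n_states, obs_dim, rng)
        self.policy = np.full((n_states, n_actions), 1.0 / n_actions)  # pi(a|s)
        self.value = np.zeros(n_states)                                # V(s)
        self.model = np.zeros((n_states, n_actions, n_states))         # P(s'|s,a)

    def act(self, obs, rng):
        s = self.map.abstract_state(obs)
        return int(rng.choice(len(self.policy[s]), p=self.policy[s]))
```

The point of the structure is that the policy, value function, and world model operate only on abstract states, so a change in the environment's layout would, in principle, require relearning only the map.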

Deep Local Unsupervised Learning

Contemporary deep learning is built on three foundational pillars:

1) Artificial neural networks with at least one hidden layer (that is, at least one layer of neurons between the input and the output),

2) The formulation of an objective function that takes the data and the learnable parameters (the “synapses”) as input and outputs a real number measuring performance; the smaller this number, the better the model,

3) The adjustment of the network’s “synapses” using gradient backpropagation, which means minimizing the objective function via gradient descent.
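The three pillars can be made concrete on a toy regression problem. The sketch below (a minimal example, not tied to any specific project code) builds a network with one hidden layer, defines a mean-squared-error objective, and minimizes it by backpropagating gradients by hand:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pillar 1: a network with one hidden layer.
W1 = rng.standard_normal((3, 8)) * 0.1   # input -> hidden "synapses"
W2 = rng.standard_normal((8, 1)) * 0.1   # hidden -> output "synapses"

# Toy data: learn y = x1 + x2 + x3.
X = rng.standard_normal((64, 3))
y = X.sum(axis=1, keepdims=True)

# Pillar 2: objective = mean squared error (smaller is better).
def loss(W1, W2):
    h = np.tanh(X @ W1)
    return float(np.mean((h @ W2 - y) ** 2))

initial = loss(W1, W2)
lr = 0.1
for _ in range(200):
    # Forward pass.
    h = np.tanh(X @ W1)
    err = h @ W2 - y
    # Pillar 3: backpropagate the gradient of the objective,
    # then take a gradient-descent step on each weight matrix.
    gW2 = h.T @ err * (2 / len(X))
    gh = err @ W2.T * (1 - h ** 2)       # through the tanh nonlinearity
    gW1 = X.T @ gh * (2 / len(X))
    W1 -= lr * gW1
    W2 -= lr * gW2
```

After training, `loss(W1, W2)` is far below its initial value: the global objective has been minimized by gradient descent, exactly the recipe the three pillars describe.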

However, we know that the human brain does not learn by minimizing a global objective function, but instead through local learning rules such as Hebbian learning.

Local learning rules are more robust, generalize better beyond training data, and are more resistant to catastrophic forgetting.

Yet implementing Hebbian or other unsupervised local learning rules in deep artificial neural networks remains challenging, and this project aims to address that challenge.
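To illustrate what a local rule looks like, the sketch below implements Oja's rule, a classic stabilized variant of Hebbian learning. Each weight update uses only the neuron's own input and output (no global objective, no backpropagated error), yet the neuron converges to the leading principal component of the data. This is a standard textbook example, not a description of this project's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Centered data with one dominant direction of variance (the first axis).
d = rng.standard_normal((500, 5))
d[:, 0] *= 5.0
X = d - d.mean(axis=0)

# Oja's rule: dw = lr * y * (x - y * w), with output y = w . x.
# Purely local: the update depends only on this neuron's input and output.
w = rng.standard_normal(5) * 0.1
lr = 0.01
for _ in range(20):
    for x in X:
        y = w @ x
        w += lr * y * (x - y * w)

# w now points (up to sign) along the leading principal component,
# i.e. the first coordinate axis, with roughly unit norm.
alignment = abs(w[0]) / np.linalg.norm(w)
```

The contrast with the backpropagation recipe is the point: no objective function is ever written down, and no gradient crosses a layer boundary, yet the rule extracts meaningful structure from the data.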

Publications