Institut des Systèmes Intelligents et de Robotique

Short bio

Khamassi Mehdi
Title: Research Director
Address: 4 place Jussieu, CC 173, 75252 Paris cedex 05
Phone: +33 (0) 1 44 27 28 85
Email: khamassi(at)isir.upmc.fr
Group: AMAC

Brief biography

In 2003, I graduated from both Université Pierre et Marie Curie, Paris (Master in Cognitive Science; Cogmaster) and the engineering school ENSIIE, Evry (Master in Electrical and Computer Engineering). From 2003 to 2007, I prepared a PhD thesis between Université Pierre et Marie Curie and Collège de France, under the supervision of Agnès Guillot and Sidney I. Wiener, on learning and navigation in animals and robots. In 2008, I spent a short period in Kenji Doya's lab at the Okinawa Institute of Science and Technology, Japan. I then pursued a postdoctoral fellowship at INSERM in Lyon, where my work was at the interface between Emmanuel Procyk's neurophysiology team and Peter F. Dominey's modelling and robotics team.

From 2010 to 2020, I held a tenured research scientist position at the French National Center for Scientific Research (CNRS), in the Institute of Intelligent Systems and Robotics at Sorbonne Université (formerly Université Pierre et Marie Curie), Paris, France. I am also co-director of studies and a pedagogical council member for the Cogmaster program at Ecole Normale Supérieure (Paris Sciences Lettres) / EHESS / Paris University. I obtained my Habilitation to Direct Research in Biology from Université Pierre et Marie Curie, Paris 6, on May 6th, 2014. I am also Associate Editor for the journals Frontiers in Neurorobotics and Frontiers in Decision Neuroscience, and an Editorial Board Member for the journals Intellectica and Neurons, Behavior, Data analysis, and Theory (NBDT). I was an invited researcher at the Center for Mind/Brain Sciences, University of Trento, Italy, in 2014-2015, where I mainly collaborated with Giorgio Coricelli, Nadège Bault, David Pascucci and Massimo Turatto. I was also an invited researcher at the Department of Experimental Psychology, University of Oxford, in 2017-2020, where I mainly collaborated with Matthew Rushworth, Marios Panayi, Jérôme Sallet, Chris Summerfield and Marco Wittmann. Since January 2016, I have been an invited researcher at the Intelligent Robotics and Automation Laboratory of the National Technical University of Athens, Greece, where I mainly collaborate with Petros Maragos and Costas Tzafestas.

In 2020, I was promoted to Director of Research by the CNRS.

Download full CV or short CV.

 


Research Activities

My work is at the interface between Cognitive Science (understanding the human mind), Neuroscience (understanding how the brain works), Artificial Intelligence (designing algorithms enabling an agent to make sense of its perception, to act and to learn), and Robotics (designing bio-inspired robots that can interact more naturally with humans, especially for healthcare applications).
 
The goal of my research is twofold: (1) To better understand how decision-making and reinforcement learning processes are organized in the mammalian brain: What are the underlying neural mechanisms in the prefrontal cortex, basal ganglia, hippocampus, and dopamine system? How do they enable humans to adapt so flexibly to new situations? Why and how are they impaired in some neurodegenerative diseases or psychiatric conditions? (2) To take inspiration from biology to improve current robots' flexibility and autonomy in decision-making. Among our current healthcare applications, we use small social robots as assistive tools for therapies with children with autism: the robot is playful and interactive, which helps to better engage the child in the therapy and to mediate and encourage his/her interactions with other children.
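Much of this work relies on reinforcement learning models in which action values are updated from a reward prediction error (a signal often related to dopaminergic activity) and actions are selected through an exploration-exploitation trade-off. As a purely illustrative sketch, and not the actual models used in the publications listed below, the following Python fragment implements tabular Q-learning with softmax (Boltzmann) action selection; the environment interface (env_step), the parameter values and all names are hypothetical.

    import numpy as np

    def softmax(q_values, beta):
        # Boltzmann action selection: larger beta favours exploitation,
        # smaller beta favours exploration.
        prefs = np.exp(beta * (q_values - q_values.max()))
        return prefs / prefs.sum()

    def run_agent(n_states, n_actions, env_step, n_trials=1000,
                  alpha=0.1, gamma=0.9, beta=3.0, seed=0):
        # Tabular Q-learning with softmax exploration (illustrative only).
        rng = np.random.default_rng(seed)
        q = np.zeros((n_states, n_actions))
        state = 0
        for _ in range(n_trials):
            action = rng.choice(n_actions, p=softmax(q[state], beta))
            next_state, reward = env_step(state, action)  # hypothetical environment
            # Temporal-difference (reward prediction) error drives the value update.
            delta = reward + gamma * q[next_state].max() - q[state, action]
            q[state, action] += alpha * delta
            state = next_state
        return q

In models of this family, the learning rate alpha and the inverse temperature beta are the main levers for studying how animals, humans or robots balance exploration against exploitation.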
 
One of our central current research questions is whether similar learning mechanisms and similar reward processing principles apply in both social and non-social contexts. This question is key, on the one hand, to better understand what is special about the social dimension of learning mechanisms in the brain and, on the other hand, to establish more adaptive and efficient human-robot interactions.
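To make the link between such learning models and adaptive human-robot interaction concrete, one highly simplified illustration (an assumption for this page, not the method of the publications listed below) is to modulate the exploration parameter of the agent sketched above according to a measured engagement signal: when the human's engagement drops, the robot explores alternative behaviours more; when engagement is high, it exploits what already works.

    def adapt_exploration(engagement, beta_min=0.5, beta_max=5.0):
        # Toy heuristic (hypothetical): map an engagement estimate in [0, 1]
        # to the inverse temperature beta used for softmax action selection.
        # Low engagement -> low beta (more exploration);
        # high engagement -> high beta (more exploitation).
        engagement = min(max(float(engagement), 0.0), 1.0)
        return beta_min + (beta_max - beta_min) * engagement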
 
Keywords: reinforcement learning; decision-making; set-shifting; self-evaluation; structure learning; navigation; prefrontal cortex; basal ganglia; dopamine; hippocampus; machine learning; computational neuroscience; autonomous robotics; social robotics; autism; cognitive architectures; artificial intelligence.

 

Selected publications

Neuroscience
  • Khamassi, M. and Girard, B. (2020). Modeling awake hippocampal reactivations with model-based bidirectional search. Biological Cybernetics, 114:231-248.
  • Wittmann, M.K., Fouragnan, E., Folloni, D., Klein-Flügge, M.C., Chau, B., Khamassi, M. and Rushworth, M.F.S. (2020). Global reward state affects learning, the raphe nucleus, and anterior insula in monkeys. Nature Communications. To appear.
  • Cinotti, F.*, Fresno, V.*, Aklil, N., Coutureau, E., Girard, B., Marchand, A.° and Khamassi, M.° (2019). Dopamine blockade impairs the exploration-exploitation trade-off in rats. Scientific Reports, 9:6770. (* equally contributing authors) (° equally contributing senior authors)
  • Lee, B., Gentry, R., Bissonette, G.B., Herman, R.J., Mallon, J.J., Bryden, D.W., Calu, D.J., Schoenbaum, G., Coutureau, E., Marchand, A., Khamassi, M. and Roesch, M.R. (2018). Manipulating the revision of reward value during the intertrial interval increases sign tracking and dopamine release. PLoS Biology, 16(9): e2004015. Commented by Eshel, N. & Steinberg, E.E. in the same issue.
  • Dollé, L., Chavarriaga, R., Guillot, A. and Khamassi, M. (2018). Interactions of spatial strategies producing generalization gradient and blocking: a computational approach. PLoS Computational Biology, 14(4):e1006092.
  • Bavard, S., Lebreton, M., Khamassi, M., Coricelli, G. and Palminteri, S. (2018). Reference point and range-adaptation produce both rational and irrational choices in human reinforcement learning. Nature Communications, 9(1):4503.
  • Khamassi, M., Quilodran, R., Enel, P., Dominey, P.F. and Procyk, E. (2015). Behavioral regulation and the modulation of information coding in the lateral prefrontal and cingulate cortex. Cerebral Cortex, 25(9):3197-218.
  • Palminteri, S., Khamassi, M., Joffily, M. and Coricelli, G. (2015). Contextual modulation of value signals in reward and punishment learning. Nature Communications, 6:8096.
  • Lesaint, F., Sigaud, O., Flagel, S.B., Robinson, T.E. and Khamassi, M. (2014). Modelling individual differences observed in Pavlovian autoshaping in rats using a dual learning systems approach and factored representations. PLoS Computational Biology, 10(2):e1003466.
  • Benchenane, K., Peyrache, A., Khamassi, M., Tierney, P.L., Gioanni, Y., Battaglia, F.P. and Wiener, S.I. (2010). Coherent theta oscillations and reorganization of spike timing in the hippocampal-prefrontal network upon learning. Neuron, 66(6):912-36.
  • Peyrache, A., Khamassi, M., Benchenane, K., Wiener, S.I. and Battaglia, F.P. (2009). Replay of rule-learning related neural patterns in the prefrontal cortex during sleep. Nature Neuroscience, 12(7):919-26.
Robotics
  • Staffa, M., Rossi, S., Tapus, A. and Khamassi, M. (2020). Behavior Adaptation, Interaction and Artificial Perception for Assistive Robotics (Editorial). International Journal of Social Robotics. To appear.
  • Zaraki, A.*, Khamassi, M.*, Wood, L., Lakatos, G., Tzafestas, C., Amirabdollahian, F., Robins, B. and Dautenhahn, K. (2019). A Novel Reinforcement-Based Paradigm for Children to Teach the Humanoid Kaspar Robot. International Journal of Social Robotics, 12:709-720. (* equally contributing authors).
  • Velentzas, G., Tsitsimis, T., Rano, I., Tzafestas, C. and Khamassi, M. (2018). Adaptive reinforcement learning with active state-specific exploration for engagement maximization during simulated child-robot interaction. Paladyn Journal of Behavioral Robotics, 9:235-253.
  • Chatila, R., Renaudo, E., Andries, M., Chavez Garcia, R.O., Luce-Vayrac, P., Gottstein, R., Alami, R., Clodic, A., Devin, S., Girard, B. and Khamassi, M. (2018). Towards Self-Aware Robots. Frontiers in Robotics and AI, 5:88.
  • Khamassi, M., Velentzas, G., Tsitsimis, T. and Tzafestas, C. (2018). Robot fast adaptation to changes in human engagement during simulated dynamic social interaction with active exploration in parameterized reinforcement learning. IEEE Transactions on Cognitive and Developmental Systems, 10(4):881-893.
  • Aklil, N., Girard, B., Denoyer, L. and Khamassi, M. (2018). Sequential action selection and active sensing for budgeted localization in robot navigation. International Journal of Semantic Computing, 12(1):102-127.
  • Khamassi, M., Girard, B., Clodic, A., Devin, S., Renaudo, E., Pacherie, E., Alami, R. and Chatila, R. (2016). Integration of Action, Joint Action and Learning in Robot Cognitive Architectures. Intellectica, 2016(65):169-203.
  • Caluwaerts, K., Staffa, M., N'Guyen, S., Grand, C., Dollé, L., Favre-Félix, A., Girard, B. and Khamassi, M. (2012). A biologically inspired meta-control navigation system for the Psikharpax rat robot. Bioinspiration & Biomimetics, 7(2):025009.
  • Khamassi, M., Lallée, S., Enel, P., Procyk, E. and Dominey P.F. (2011). Robot cognitive control with a neurophysiologically inspired reinforcement learning model. Frontiers in Neurorobotics, 5:1.
  • Khamassi, M., Lachèze, L., Girard, B., Berthoz, A. and Guillot, A. (2005). Actor-Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats. Adaptive Behavior, 13(2):131-148.
  • Meyer, J.-A., Guillot, A., Girard, B., Khamassi, M., Pirim, P. and Berthoz, A. (2005). The Psikharpax Project: Towards Building an Artificial Rat. Robotics and Autonomous Systems, 50(4):211-223.