Multimodal Integration, Interaction and Social Signal


Objectives and scientific project

The "multimodal integration, interaction and social signal" group study methods and analysis for recognition, interpretation and prediction of signals and socio-emotional behaviors during development throughout the life (from childhood to very old). Particular attention is paid to their dysfunction. In addition, the robot plays a central role in these activities through its integrative character, agentivity and mobility for applications ranging from the assistance to the clinical investigation.


Given the metacognitive dimension of the phenomena under study (information integration and human social interactions), the imi2s group adopts an integrative approach at the interface of social signal processing, psychopathology and clinical practice. Our interdisciplinary theoretical and applied research focuses on the signals received and produced by humans: with virtual and robotic agents; in early mother-infant interactions; with children with pervasive developmental disorders; during interactions between adolescents and their psychotherapists; and with adults with neurological disorders. This work relates to emotion processing (prosody, facial emotions, theory of mind), multimodal integration, brain connectivity and plasticity, and temporal dynamics (synchrony, engagement, pragmatic communication).

The specificity of the imi2s group lies in combining complementary scientific disciplines that are rarely gathered within a single team, namely automatic social signal processing, neuropsychology and psychopathology. Research on the automatic analysis of verbal and non-verbal communication signals builds on methodological and theoretical contributions in signal and image processing, pattern recognition and machine learning. The characterization of the social dimension of signals and behaviors is central to our contributions, whether in facial analysis or in the study of gestures and speech.


Particular attention is paid to the dynamics of human communication, which leads to the characterization and modeling of temporal signals at multiple scales: individual (e.g., speech), multimodal (e.g., gesture + speech) and inter-personal (e.g., imitation). Modeling this interpersonal dynamics is identified as a major challenge in social signal processing and personal robotics, and it is central to many disciplines ranging from developmental psychology to the design of virtual agents. It results in a coordination of the individuals' behavior and is known to be both a marker and a vector of interaction quality. We develop a systematic approach to synchrony and imitation based on the detection of non-verbal cues, on unsupervised multimodal integration (correlation matrices, non-negative matrix factorization) and on new representations ("soft" hierarchical clustering) suitable for interpretation by non-experts.
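
As an illustration only, the sketch below shows one common way to quantify interpersonal synchrony from detected non-verbal cues: a windowed, lagged correlation between two behavioral streams. The feature choice (head-movement energy), sampling rate, window length and lag range are assumptions made for this example, not the group's actual pipeline.

```python
import numpy as np


def windowed_synchrony(signal_a, signal_b, fs=25.0, win_s=3.0, max_lag_s=1.0):
    """Windowed cross-correlation between two behavioral signals.

    signal_a, signal_b : 1-D arrays of equal length (e.g. per-frame
        head-movement energy of two interaction partners).
    fs, win_s, max_lag_s : sampling rate (Hz), window length and maximum
        explored lag (seconds); default values are illustrative only.

    Returns, for each window, the peak correlation (synchrony strength)
    and the lag (in seconds) at which it occurs (the sign indicates who leads).
    """
    win, max_lag = int(win_s * fs), int(max_lag_s * fs)
    strengths, lags = [], []

    for start in range(max_lag, len(signal_a) - win - max_lag, win):
        a = signal_a[start:start + win]
        best_r, best_lag = 0.0, 0
        for lag in range(-max_lag, max_lag + 1):
            # Pearson correlation between partner A's window and a
            # lag-shifted window of partner B.
            b = signal_b[start + lag:start + lag + win]
            r = np.corrcoef(a, b)[0, 1]
            if abs(r) > abs(best_r):
                best_r, best_lag = r, lag
        strengths.append(best_r)
        lags.append(best_lag / fs)
    return np.array(strengths), np.array(lags)


# Minimal usage example on synthetic data: partner B imitates partner A
# with a ~0.4 s delay (10 frames at 25 Hz), so the estimated lags should
# cluster around +0.4 s.
rng = np.random.default_rng(0)
a = rng.standard_normal(2000)
b = np.roll(a, 10) + 0.3 * rng.standard_normal(2000)
strength, lag = windowed_synchrony(a, b)
print(strength.mean(), lag.mean())
```

The per-window cue matrices built this way could also be fed to an unsupervised decomposition such as non-negative matrix factorization (e.g., sklearn.decomposition.NMF) to extract interpretable multimodal components, but that step is omitted from this sketch.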


This approach allows us to handle different situations: human-robot interaction, and interaction with impaired or naive users. These methodological approaches have been applied in clinical settings to monitor interactive behaviors over long periods (several months), for example to study early signs of autism or to estimate the developmental age of a child.