Today I read a paper titled “Starting engagement detection towards a companion robot using multimodal features”.
The abstract is:
Recognition of intentions is a subconscious cognitive process vital to human communication.
This skill enables anticipation and increases the quality of interactions between humans.
Within the context of engagement, non-verbal signals are used to communicate the intention of starting the interaction with a partner.
In this paper, we investigated methods to detect these signals in order to allow a robot to know when it is about to be addressed.
The originality of our approach lies in drawing inspiration from the social and cognitive sciences to perform our perception task.
We investigate meaningful features, i.e., human-readable features, and identify which of them are important for recognizing someone’s intention to start an interaction.
Classically, spatial information such as the human’s position and speed and the human-robot distance is used to detect engagement.
Our approach integrates multimodal features gathered using a companion robot equipped with a Kinect.
The evaluation on our corpus, collected in spontaneous conditions, highlights the robustness of our approach and validates its use in a real environment.
Experimental validation shows that the multimodal feature set gives better precision and recall than spatial and speed features alone.
We also demonstrate that 7 selected features are sufficient to achieve a good detection score for starting engagement.
In our last investigation, we show that, for our full set of 99 features, feature-space reduction is not a solved task.
This result opens new research perspectives on multimodal engagement detection.
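The idea of ranking 99 candidate features and keeping only the 7 most discriminative ones can be illustrated with a simple univariate feature-ranking sketch. This is not the paper's actual pipeline (the authors' features, classifier, and selection method are not detailed in the abstract); it is a minimal stand-in using a Fisher-style score on synthetic data, where the feature count and subset size are taken from the abstract:

```python
# Hedged sketch: univariate feature ranking to pick a small subset of
# discriminative features (hypothetical stand-in for the paper's
# feature-selection step; the scoring function and data are invented here).
import random

random.seed(0)

N_FEATURES = 99   # size of the full feature set mentioned in the abstract
TOP_K = 7         # the abstract reports that 7 features suffice

def fisher_score(pos, neg):
    """Squared gap between class means over pooled variance."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    denom = var(pos) + var(neg) + 1e-9
    return (mean(pos) - mean(neg)) ** 2 / denom

# Synthetic data: features 0-6 are made informative, the rest are noise.
def sample(label):
    return [random.gauss(2.0 * label if i < TOP_K else 0.0, 1.0)
            for i in range(N_FEATURES)]

pos = [sample(1) for _ in range(200)]  # "starting engagement" examples
neg = [sample(0) for _ in range(200)]  # background examples

scores = [fisher_score([s[i] for s in pos], [s[i] for s in neg])
          for i in range(N_FEATURES)]
top = sorted(range(N_FEATURES), key=lambda i: scores[i], reverse=True)[:TOP_K]
print(sorted(top))
```

On this synthetic data the seven informative features dominate the ranking; the paper's point that reducing the real 99-feature space is *not* solved suggests that on real multimodal data such simple univariate scores are unlikely to be enough.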