By 2018, more than 35 million private and non-industrial robots are expected to be in use worldwide, a market of 19 billion euros. However, autonomous robot technology in Europe is not yet ready to meet this demand, owing to the lack of robust functionality in uncertain environments. In particular, safe interaction is an essential requirement, and a basic skill still missing is the robot's awareness of its own body and its ability to perceive other agents. The goal of the SELFCEPTION research project is to build a synthetic model that allows robots to learn to recognize their own body and distinguish it from other elements in the environment.
Recent evidence suggests that self/other distinction will be a major breakthrough for improving interaction and may be the link between low-level sensorimotor abilities and voluntary actions, or even abstract thinking. The project follows the hypothesis that learning a "sensorimotor self" will enable humanoid robots to distinguish their own body from other agents during interaction. To this end, SELFCEPTION combines advanced sensorimotor learning with new multimodal sensing devices, such as artificial skin, so that the robot can acquire its own perceptual representation.
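The core intuition behind sensorimotor self-detection can be sketched in a few lines: body parts are the sensed regions whose observed motion correlates strongly with the robot's own motor commands, while independent agents do not. The toy example below (all names, signals, and the 0.8 threshold are illustrative assumptions, not the project's actual method) labels simulated regions by that correlation:

```python
import random

def self_other_labels(commands, region_signals, threshold=0.8):
    """Label each sensed region 'self' or 'other' by how strongly its
    observed motion correlates with the robot's own motor commands.
    threshold: illustrative correlation cut-off for 'self'."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    return {name: ('self' if abs(pearson(commands, sig)) >= threshold else 'other')
            for name, sig in region_signals.items()}

random.seed(0)
commands = [random.uniform(-1, 1) for _ in range(200)]  # motor command history
regions = {
    'own_arm': [c + random.gauss(0, 0.05) for c in commands],  # moves with the commands
    'person':  [random.uniform(-1, 1) for _ in commands],      # moves independently
}
print(self_other_labels(commands, regions))  # → {'own_arm': 'self', 'person': 'other'}
```

A real implementation would of course work on visual and tactile streams with temporal delays and learned contingencies, but the self-as-correlated-consequence idea is the same.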
SELFCEPTION is an interdisciplinary project combining robotics and cognitive psychology. To this end, the principal researcher will be trained under the supervision of the renowned cognitive psychologist Bernhard Hommel at the Leiden Institute for Brain and Cognition (LIBC). The synthetic model will be tested on a whole-body sensing humanoid and validated on a service robot in collaboration with the Spanish company PAL Robotics.
SELFCEPTION will accelerate the arrival of the next generation of perceptive robots: multisensory machines able to build their perceptual body schema and distinguish their own actions from those of other entities. We already have robots that navigate; now it is time to develop robots that interact.
This EU-funded project is led by Pablo Lanillos and coordinated by Gordon Cheng, director of the Institute for Cognitive Systems at the Technical University of Munich (TUM). The project is funded through a Marie Skłodowska-Curie action granted by the European Union.
Lanillos, P., Dean-Leon, E., Cheng, G. (2016). Yielding self-perception in robots through sensorimotor contingencies. IEEE Transactions on Cognitive and Developmental Systems.
Lanillos, P., Dean-Leon, E., Cheng, G. (2016). Multisensory Object Discovery via Self-detection and Artificial Attention. IEEE Int. Conf. on Developmental Learning and Epigenetic Robotics (ICDL-EpiRob), Sept 2016. Best paper presentation distinction award.
The REM project aims to achieve a major breakthrough in social robotics by enhancing robots' multisensory active perception as well as their action-reasoning response. Current social robots are still unable to behave coherently enough to meet human expectations, which considerably diminishes the interaction. This project seeks to bring symbolic-level semantic reasoning closer to the robot's real perception, improving reciprocity and awareness and yielding better human-robot interaction (HRI). The societal impact pursued in this research is to come closer to a socially capable robot for healthcare, assistive, and social applications (e.g., assisting the elderly population), thus enhancing people's quality of life and helping robots enter the end-user market.
There are three main lines of research:
- Multisensory attention: real time bottom-up attention of visual and tactile cues
- Aware robots: intentional state modelling through inference
- Non-verbal communication: visual and haptic message communication
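To give a flavour of the first line of research, bottom-up attention is typically driven by a saliency map: locations whose features deviate most from their surroundings attract the robot's gaze. The minimal sketch below (a center-surround contrast on a toy intensity grid; the function names and the single-feature setup are illustrative assumptions, not the project's actual attention system) picks the most salient location:

```python
def saliency_map(image, radius=1):
    """Center-surround contrast: a pixel's salience is how much its
    intensity deviates from the mean of its neighbourhood."""
    h, w = len(image), len(image[0])
    sal = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [image[a][b]
                     for a in range(max(0, i - radius), min(h, i + radius + 1))
                     for b in range(max(0, j - radius), min(w, j + radius + 1))
                     if (a, b) != (i, j)]
            sal[i][j] = abs(image[i][j] - sum(neigh) / len(neigh))
    return sal

def focus_of_attention(image):
    """Return the (row, col) of the most salient location."""
    sal = saliency_map(image)
    return max(((i, j) for i in range(len(sal)) for j in range(len(sal[0]))),
               key=lambda p: sal[p[0]][p[1]])

img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 0],   # a bright blob that should pop out
       [0, 0, 0, 0]]
print(focus_of_attention(img))  # → (2, 2)
```

A multisensory version would fuse such maps across modalities (e.g., vision and touch) before selecting the focus of attention, which is where the real-time engineering challenge lies.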
Ferreira, J. F., Lanillos, P., & Dias, J. (2015). Fast Exact Bayesian Inference for High-Dimensional Models. In Workshop on Unconventional Computing for Bayesian Inference (UCBI), IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Oliveira, B., Lanillos, P., Ferreira, J.F.: Gaze Tracing in a Bounded Log-spherical Space for Artificial Attention Systems. To appear in Second Iberian Robotics Conference ROBOT’2015
Lanillos, P., Ferreira, J.F., Dias, J.: Designing an Artificial Attention System for Social Robots. In: Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on. IEEE (2015)
Lanillos, P., Ferreira, J.F., Dias, J.: Multisensory 3D Saliency for Artificial Attention Systems. In: 3rd Workshop on Recognition and Action for Scene Understanding (REACTS), 16th International Conference on Computer Analysis of Images and Patterns (CAIP) (2015)
P. Lanillos, J. F. Ferreira, and J. Dias, "Evaluating the influence of automatic attentional mechanisms in human-robot interaction," in Workshop "A Bridge between Robotics and Neuroscience," 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Bielefeld, Germany, March 2014.
Lanillos, P., Besada-Portas, E., Lopez-Orozco, J. A., & de la Cruz, J. M. (2014). Minimum time search in uncertain dynamic domains with complex sensorial platforms. Sensors, 14(8), 14131-14179.