SELFCEPTION – Self/other distinction for interaction under uncertainty

http://www.selfception.eu

By 2018, more than 35 million private or non-industrial robots are expected to be in use worldwide, a market of 19 billion euros. However, autonomous robot technology in Europe is not yet ready to meet this expectation, due to the lack of robust functionality in uncertain environments. In particular, safe interaction is an essential requirement. A basic skill, still unachieved, is for the robot to be aware of its own body and to perceive other agents. The goal of the SELFCEPTION research project is to build a synthetic model that allows robots to learn to recognize their own body and distinguish it from other elements in the environment.

Recent evidence suggests that self/other distinction will be a major breakthrough for improving interaction, and it might be the connection between low-level sensorimotor abilities and voluntary actions, or even abstract thinking. The project follows the hypothesis that learning a “sensorimotor self” will enable humanoid robots to distinguish themselves from other agents during interaction. For that purpose, SELFCEPTION proposes to combine advanced sensorimotor learning with new multimodal sensing devices, such as artificial skin, so that the robot can acquire its own perceptual representation.
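As a rough illustration of the underlying idea (a toy sketch, not the SELFCEPTION model itself): sensory channels whose signals correlate with the robot's own motor commands can be attributed to the self, while uncorrelated channels are attributed to other agents. All signals, names, and the threshold below are invented for illustration.

    # Toy illustration (not the SELFCEPTION model): sensory channels whose
    # signals correlate with the robot's own motor commands are attributed
    # to the "self"; uncorrelated channels are attributed to "other".
    import numpy as np

    rng = np.random.default_rng(0)
    T = 500
    motor = rng.standard_normal(T)  # issued motor commands

    # Hypothetical channels: one driven by the robot's own motion,
    # one driven by an independent external agent.
    self_channel = 0.9 * motor + 0.1 * rng.standard_normal(T)
    other_channel = rng.standard_normal(T)

    def attribute(sensor, motor, threshold=0.5):
        """Label a channel 'self' if it correlates with the motor output."""
        r = np.corrcoef(sensor, motor)[0, 1]
        return ("self" if abs(r) > threshold else "other"), r

    for name, channel in [("skin patch", self_channel), ("external cue", other_channel)]:
        label, r = attribute(channel, motor)
        print(f"{name}: correlation {r:+.2f} -> {label}")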

SELFCEPTION is an interdisciplinary project that combines robotics and cognitive psychology. To this end, the main researcher will be trained under the supervision of the renowned cognitive psychologist Bernhard Hommel at the Leiden Institute for Brain and Cognition (LIBC). The developed synthetic model will be tested on a whole-body-sensing humanoid and validated on a service robot in collaboration with the Spanish company PAL Robotics.

SELFCEPTION will boost the materialization of the next generation of perceptive robots: multisensory machines able to build their perceptual body schema and to distinguish their own actions from those of other entities. We already have robots that navigate; now it is time to develop robots that interact.

This EU-funded project is led by Pablo Lanillos and coordinated by Gordon Cheng, director of the Institute for Cognitive Systems at the Technical University of Munich (TUM). It is funded by the European Union through a Marie Skłodowska-Curie action.


REM – active perception for Reasoning in a Embodied robotic Mind


The REM project aims to achieve a major breakthrough in social robotics by enhancing both the robot's multisensory active perception and its action-reasoning response. Current social robots are still incapable of deploying behaviour that is coherent with human expectations, which diminishes the interaction considerably. This project seeks to bring semantic reasoning at the symbolic level closer to the robot's real perception, improving reciprocity and awareness and yielding better human-robot interaction (HRI). The societal impact pursued in this research is to get closer to a socially capable robot for healthcare, assistive, and social applications (e.g., assisting the elderly), thus enhancing people's quality of life and helping robots enter the end-user market.

There are three main lines of research:

  • Multisensory attention: real-time bottom-up attention to visual and tactile cues (a minimal sketch follows this list)
  • Aware robots: intentional state modelling through inference
  • Non-verbal communication: visual and haptic message communication
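The sketch below illustrates the first line of research only in spirit: toy visual and tactile saliency maps are normalised, fused with assumed reliability weights, and the most salient location wins the attentional focus. The maps, weights, and normalisation are illustrative assumptions, not REM's actual pipeline.

    # Toy multisensory bottom-up attention (illustrative, not REM's pipeline):
    # normalise visual and tactile saliency maps, fuse them with assumed
    # reliability weights, and attend to the most salient location.
    import numpy as np

    rng = np.random.default_rng(1)
    visual = rng.random((8, 8))   # e.g., contrast/motion saliency over the image
    tactile = np.zeros((8, 8))    # tactile saliency projected to the same frame
    tactile[2, 5] = 1.0           # a strong touch event

    def normalise(m):
        return (m - m.min()) / (np.ptp(m) + 1e-9)

    w_visual, w_tactile = 0.4, 0.6  # assumed reliability weights
    fused = w_visual * normalise(visual) + w_tactile * normalise(tactile)

    focus = np.unravel_index(np.argmax(fused), fused.shape)
    print("attend to location:", focus)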


Coordinated Attention for Social Interaction with Robots


When interacting in socially relevant applications, robots are expected to engage with humans while displaying attentional behaviours that resemble those of their interlocutors; in fact, they are expected to assess intentionality and to be intentional agents themselves.

Several solutions have been proposed to provide social robots with the ability to engage in joint attention, i.e., sharing attention with another agent towards the same object or event, one of the most primal social interactions. However, these solutions have yet to appropriately capture some of the most crucial skills involved, such as the multisensory nature of active perception and attention, its inherent uncertainty, or the processes responsible for the emergence of an intentional stance. Consequently, social robots have only been able to instil a sense of intentionality and reciprocity in very specific and constrained social scenarios.

We therefore propose to research an integrated probabilistic framework to deal with the endogenous and exogenous coordinated control of stimulus-driven and goal-directed multisensory attention within the context of social interaction.
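A minimal sketch of the kind of fusion such a framework involves (illustrative only, not the proposed framework itself): treat the stimulus-driven saliency as evidence, the goal-directed bias as a prior, and combine them into a posterior over where to fixate. The distributions below are invented for illustration.

    # Toy fusion of exogenous and endogenous attention (illustrative only):
    # stimulus-driven saliency acts as evidence, the goal-directed bias as a
    # prior, and their normalised product gives a posterior over fixations.
    import numpy as np

    rng = np.random.default_rng(2)
    cells = 10                            # discretised gaze directions
    saliency = rng.random(cells)          # exogenous, bottom-up evidence
    goal_prior = np.ones(cells)
    goal_prior[7] = 5.0                   # the task favours direction 7

    posterior = saliency * goal_prior
    posterior /= posterior.sum()

    print("fixate direction:", int(np.argmax(posterior)))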

Research highlights: designing a full-fledged artificial attention system; gaze detection.

System for surveillance, search and rescue in the sea using autonomous marine and air vehicles

Search for lost targets

The minimum time search problem consists in determining the best sequence of actions (observations) to find a target (object) with uncertain location in the minimum possible time. More colloquially: where do we have to look to find a lost object as soon as possible? I propose a Bayesian approach to efficiently find the target using several moving agents with constrained dynamics, equipped with sensors that provide information about the environment. The whole task involves two processes: estimating the target location from the information collected by the agents, and planning the search routes that the agents must follow to find it. The agents' trajectory planning is posed as a sequential decision-making problem in which, given the prior estimate of the target location, the best actions for the agents are computed. For that purpose, three Bayesian strategies are proposed: minimizing the local expected time of detection, maximizing the discounted-time probability of detection, and optimizing a probabilistic function that incorporates a heuristic approximation of the expected observation. Minimum time search problems lie at the core of many real applications, such as search and rescue emergency operations (e.g. shipwreck accidents).
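To make the two processes concrete, the toy sketch below runs a greedy Bayesian search for a single agent on a one-dimensional grid: the belief over the target cell is updated after every non-detection, and the agent myopically moves towards the side holding more probability mass. The grid size and detection probability are illustrative assumptions, not values from the actual system.

    # Toy greedy Bayesian search on a 1-D grid (illustrative parameters):
    # update the target belief after every non-detection and myopically move
    # towards the side that holds more remaining probability mass.
    import numpy as np

    rng = np.random.default_rng(3)
    cells, p_detect = 20, 0.8
    belief = np.ones(cells) / cells       # uniform prior over the target cell
    target = int(rng.integers(cells))     # hidden true target location
    pos = 0

    for t in range(1, 201):
        # Observe the current cell; detection is imperfect.
        if pos == target and rng.random() < p_detect:
            print(f"target found in cell {pos} at step {t}")
            break
        # Bayesian non-detection update: down-weight the searched cell.
        belief[pos] *= 1 - p_detect
        belief /= belief.sum()
        # Myopic move: step towards the side with more probability mass.
        right_mass = belief[pos + 1:].sum()
        left_mass = belief[:pos].sum()
        pos = min(pos + 1, cells - 1) if right_mass >= left_mass else max(pos - 1, 0)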