Tutorial on Embodiment

4.2.1. Visual perception case study*

 

We will pick some robotic case studies to illustrate how sensory-motor coordinated behavior induces information structure. Lungarella & Sporns (2006) used an upper-torso humanoid robot (Fig. 4.2.1.1, A) to evaluate the contribution of sensory-motor coupling to different informational measures by comparing two experimental conditions. In both conditions, the robot arm, with an orange ball attached to its end-effector, followed a preprogrammed trajectory; the movement of the ball displaces it relative to the head and thus produces physical stimulation in the head-mounted camera. In the first condition, which we will refer to as ‘fov’, the controller of the robot head exploits this sensory feedback to track the end-effector (the orange ball). In other words, the sensory-motor loop (Fig. 4.2.1.1, B) keeps the orange ball at the center of the visual field, the fovea. In the second condition, ‘rnd’, the movement of the camera is unrelated to the movement of the ball, so the sensory-motor coupling is disrupted. The amount of information structure in the sequence of camera images was measured for both conditions (Fig. 4.2.1.1, C). All measures reveal more information structure in the foveation condition; for example, the dark region in the center of the entropy panel shows that entropy is clearly diminished in the center of the visual field (disorder has been reduced, or in other words, information structure has been induced), which is due to foveation being a sensory-motor coordinated behavior.

Similar results were reported by Martinez et al. (2010a), who used a head with two cameras. In their case, the coordinated behavior consisted of vergence, i.e., both eyes tracking salient objects. Moreover, Martinez et al. (2010a) also showed that it is not arbitrary coordinated behavior that generates information structure. A different behavior, in which one eye tracked the object while the other merely followed its movements, i.e., without vergence, did not generate more information structure than random behavior. Although this behavior may seem sensory-motor coordinated to an outside observer, it does not match the robot's morphology, in this case its sensory apparatus. This illustrates the point that morphology and active perception cannot be considered in isolation.


Fig. 4.2.1.1. Information self-structuring. (A) Picture of the robot, a small humanoid with a pan-tilt head equipped with a camera. (B) Schematic representation of the experimental setup. (C) Various measures to capture information structure: entropy (the amount of disorder in the system), mutual information (the extent to which the activity of one pixel can be predicted from the combined activities of neighboring pixels), integration (a measure of global coherence), and complexity (a measure that captures global coherence and local variation). The measures are applied to the camera image in the case of the foveation condition (top) and random condition (bottom). (From Pfeifer et al., 2007; there adapted from Lungarella & Sporns, 2006)
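The intuition behind the entropy panel can be illustrated with a minimal sketch. The code below is not the authors' analysis pipeline; it simply simulates two toy image sequences (the patch size, jitter level, and bin count are arbitrary assumptions) and computes per-pixel Shannon entropy over time. In the ‘fov’-like sequence the tracked ball pins the central pixels to a nearly constant intensity, so their entropy drops, while in the ‘rnd’-like sequence every pixel remains noisy.

```python
import numpy as np

def pixel_entropy(frames, bins=16):
    """Shannon entropy (bits) of each pixel's intensity over time.

    frames: array of shape (T, H, W) with values in [0, 1).
    Returns an (H, W) entropy map."""
    T, H, W = frames.shape
    # Discretize intensities into `bins` levels.
    levels = np.clip((frames * bins).astype(int), 0, bins - 1)
    ent = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            counts = np.bincount(levels[:, i, j], minlength=bins)
            p = counts / T
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
T, H, W = 500, 9, 9

# 'rnd'-like condition: every pixel is independent noise -> high entropy everywhere.
rnd = rng.random((T, H, W))

# 'fov'-like condition: foveation keeps the central pixels at a nearly
# constant intensity (small jitter), while the periphery stays noisy.
fov = rng.random((T, H, W))
fov[:, 3:6, 3:6] = 0.5 + 0.02 * rng.standard_normal((T, 3, 3))

e_rnd = pixel_entropy(rnd)
e_fov = pixel_entropy(fov)
print(f"centre entropy: rnd {e_rnd[4, 4]:.2f} bits, fov {e_fov[4, 4]:.2f} bits")
```

Running this shows a markedly lower entropy at the centre of the ‘fov’-like map, mirroring the dark central region in the entropy panel of Fig. 4.2.1.1, C.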


Information structure in individual sensory modalities, such as the visual modality shown above, is a prerequisite for subsequent processing. However, for effective control of behavior we are also interested in relations between modalities, and in relations in time. In particular, we are interested in directed relations in time, such as those between motor and sensory modalities, which may indicate causal relations. Sensory-motor coordinated behavior increases the directed information flow, as measured using transfer entropy (Lungarella & Sporns, 2006; Martinez et al., 2010b). Such relations can be further exploited by the agent to learn to predict the consequences of its behavior. Moreover, predictability in the sensory-motor loop can be used to drive development (e.g., Oudeyer et al., 2007). Learning and representing the relations that exist between sensory and motor modalities constitute the first traces of cognition and will be the subject of a separate section ("Embodied Cognition").
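Transfer entropy from a motor series M to a sensory series S quantifies how much knowing the current motor command reduces uncertainty about the next sensory value, beyond what the sensor's own past already tells us: TE(M→S) = Σ p(s′, s, m) log₂[ p(s′|s, m) / p(s′|s) ]. The sketch below is a simplified illustration, not the estimator used in the cited studies: it uses binary symbols, history length one, and a toy coupling probability of 0.9, all of which are assumptions chosen for brevity.

```python
import numpy as np

def transfer_entropy(src, dst, k=2):
    """Transfer entropy TE(src -> dst) in bits for discrete series over k symbols.

    TE = sum p(d', d, s) log2[ p(d'|d, s) / p(d'|d) ],
    with d' the next value of dst, d its current value, s the current src."""
    d_next, d, s = dst[1:], dst[:-1], src[:-1]
    # Joint distribution over (d', d, s) triples, estimated by counting.
    joint = np.zeros((k, k, k))
    for a, b, c in zip(d_next, d, s):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_ds = joint.sum(axis=0)          # p(d, s)
    p_d = joint.sum(axis=(0, 2))      # p(d)
    p_dnext_d = joint.sum(axis=2)     # p(d', d)
    te = 0.0
    for a in range(k):
        for b in range(k):
            for c in range(k):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log2((p / p_ds[b, c]) / (p_dnext_d[a, b] / p_d[b]))
    return te

rng = np.random.default_rng(1)
T = 20000
motor = rng.integers(0, 2, T)

# Coupled sensor: mostly reflects the previous motor command (sensory feedback).
coupled = np.empty(T, dtype=int)
coupled[0] = 0
for t in range(1, T):
    coupled[t] = motor[t - 1] if rng.random() < 0.9 else rng.integers(0, 2)

# Decoupled sensor: unrelated noise, analogous to the 'rnd' condition.
decoupled = rng.integers(0, 2, T)

print(f"TE(motor->coupled)   = {transfer_entropy(motor, coupled):.3f} bits")
print(f"TE(motor->decoupled) = {transfer_entropy(motor, decoupled):.3f} bits")
```

The coupled pair yields a substantial positive transfer entropy, while the decoupled pair stays near zero, echoing the finding that coordinated behavior increases directed information flow from the motor to the sensory stream.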

 

* adapted from Hoffmann and Pfeifer, 2011

References

Lungarella, M. & Sporns, O. (2006), 'Mapping information flow in sensorimotor networks', PLoS Comput Biol 2, 1301-12.
Martinez, H.; Sumioka, H.; Lungarella, M. & Pfeifer, R. (2010a), 'On the influence of sensor morphology on vergence', in 'Proc. From Animals to Animats, Int. Conf. Sim. Adaptive Beh. (SAB)'.
Martinez, H.; Lungarella, M. & Pfeifer, R. (2010b), 'On the influence of sensor morphology on eye motion coordination', in 'Proc. Int. Conf. Development and Learning (ICDL)'.
Oudeyer, P.-Y.; Kaplan, F. & Hafner, V. (2007), 'Intrinsic motivation systems for autonomous mental development', IEEE Trans. on Evol. Comp. 11, 265-286.
Pfeifer, R.; Lungarella, M. & Iida, F. (2007), 'Self-organization, embodiment, and biologically inspired robotics', Science 318, 1088-1093.
Hoffmann, M. & Pfeifer, R. (2011), The implications of embodiment for behavior and cognition: animal and robotic case studies, in W. Tschacher & C. Bergomi, ed., The Implications of Embodiment: Cognition and Communication, Exeter: Imprint Academic, pp. 31-58.
