Tutorial on Embodiment

5.2.2. Categorization in artificial agents

 

Traditional Categories

Following the approach of traditional symbolic AI (see section Traditional AI and its problems), one starts with categories - symbols - that represent objects in the world. However, the instances of those objects in the real world have to be mapped onto the internal representation - a non-trivial task. There are a number of reasons for this: "First, based on the stimulation impinging on its sensory arrays (sensation) the agent has to rapidly determine and attend to what needs to be categorized. Second, the appearance and properties of objects or events in the environment being classified fluctuate continuously, for example owing to occlusions, or changes of distances and orientations with respect to the agent. And third, the environmental conditions (e.g., illumination, viewpoint, and background noise) vary considerably. There is much relevant work in computer vision that has been devoted to extracting scale- and translation-invariant low-level visual features and high-level multidimensional representations for the purpose of robust perceptual categorization (Riesenhuber & Poggio, 2002). Following this approach, however, categorization often turns out to be a very difficult if not an impossible computational feat, especially when sufficiently detailed information is lacking." (Pfeifer et al., 2008)
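
A minimal sketch of this classical route (our own illustration, not code from Pfeifer et al., 2008; the focal length, threshold, and object size are all assumed values): a single raw visual feature, the apparent width of the object in the image, is mapped directly onto a category symbol. Without invariant features, the very same object receives different symbols as soon as the viewing distance changes.

    # Naive symbolic categorizer: map a raw image feature onto a symbol.
    # All numbers are assumed for illustration only.
    FOCAL_LENGTH_PX = 500.0  # assumed pinhole-camera focal length in pixels

    def apparent_width(true_diameter_m, distance_m):
        """Projected width (pixels) of an object seen from a given distance."""
        return FOCAL_LENGTH_PX * true_diameter_m / distance_m

    def classify(width_px, threshold_px=40.0):
        """Map the raw visual feature onto a category symbol."""
        return "big-cylinder" if width_px > threshold_px else "small-cylinder"

    # The same big cylinder (0.16 m across) is assigned two different symbols:
    print(classify(apparent_width(0.16, 1.0)))  # 80 px  -> 'big-cylinder'
    print(classify(apparent_width(0.16, 3.0)))  # ~27 px -> 'small-cylinder'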

 

Embodied Categories

Embodied agents can, however, follow an alternative strategy: they can exploit the interaction with the environment to generate raw data that greatly simplifies the discrimination. One example is the sensory-motor coordination (SMC) agents (Pfeifer & Scheier, 1997; see Video 5.2.2.1.). These robots need to discriminate between small and big wooden cylinders, so that they can later pick up the small ones and bring them to a container at the side of their arena while ignoring the big ones. Telling apart otherwise identical cylinders that differ only in size is very difficult by visual means alone. However, the SMC agents wonderfully demonstrate how sensory-motor coordination can be exploited: the robots have a built-in reflex to circle around objects. In this way, the discrimination becomes much easier - the robot's angular velocity alone is enough to disambiguate the cylinders. That is, thanks to sensory-motor coordination, new information is generated (information that is not available when merely looking at an object; cf. section Information self-structuring through sensory-motor coordination). Note the crucial importance of action. Beer (2003) illustrates a similar point with even simpler simulated agents that discriminate between circle and diamond objects.
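
A toy reconstruction of the idea (not the actual SMC controller; speeds, distances, and radii are assumed values): while the robot circles an object at a roughly constant tangential speed, its angular velocity around the object is inversely related to the radius of the circling path, so the object's size can be read off a quantity that the agent generates through its own movement.

    # Toy illustration of categorization through circling (not the original
    # SMC architecture; all numbers are assumed for illustration).
    TANGENTIAL_SPEED = 0.10  # m/s, assumed constant speed while circling
    WALL_CLEARANCE = 0.05    # m, assumed distance kept from the cylinder wall

    def angular_velocity(cylinder_radius_m):
        """Angular velocity (rad/s) while circling a cylinder of this radius."""
        return TANGENTIAL_SPEED / (cylinder_radius_m + WALL_CLEARANCE)

    def categorize_by_circling(cylinder_radius_m, threshold=1.0):
        """'small' if the circling motion is fast, 'big' if it is slow."""
        return "small" if angular_velocity(cylinder_radius_m) > threshold else "big"

    for radius in (0.02, 0.08):  # hypothetical small and big cylinder radii (m)
        print(radius, categorize_by_circling(radius))

Note how the distance-dependence that broke the passive classifier above is neutralized here: the circling reflex brings the robot into a standard relation to the object before the relevant quantity is "measured".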

 

Video 5.2.2.1. Sensory-motor coordination agents.

 

Classical vs. embodied categorization*

Let us compare the categories that we have just come across with categories as symbols as we know them from classical symbolic AI. Taking the SMC agents' case study, if it were realized in a symbolic architecture, we should find a ‘big cylinder' symbol, which represents the bigger cylinders and onto which the instances of big cylinders in the real world need to be mapped (a nontrivial task, as described above). Moreover, once again, the pitfall of this approach is that cognitive processing becomes detached from real-world interaction and from meaning for the agent (the symbol grounding problem). On the other hand, when one examines the control architectures used by Pfeifer & Scheier (1997) or by Beer (2003), it is not possible to identify a site where the categories (big vs. small cylinders, or circles vs. diamonds) reside. Beer's dynamical systems analysis of the behaving agent does not reveal clear neural correlates of ‘circles' or ‘diamonds' either. Rather than corresponding to ‘labels' defined from the outside, the categories are in fact behaviors. A small cylinder can be grasped, whereas a big one cannot; a circle is caught by the agent, whereas a diamond is avoided. Thus, categories are intrinsically meaningful to the agent, and they emerge from complex system-environment dynamics (see also Kuniyoshi et al., 2004).
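
Schematically, the contrast can be put as follows (our own illustration, not the architectures of Pfeifer & Scheier, 1997, or Beer, 2003; the thresholds are assumed): in the symbolic pipeline there is an explicit variable where the category lives; in the reactive loop there is none, and ‘small' versus ‘big' shows up only in what the agent ends up doing.

    # Symbolic pipeline: perceive -> map onto a symbol -> act on the symbol.
    def symbolic_agent(apparent_width_px):
        symbol = "small-cylinder" if apparent_width_px < 40 else "big-cylinder"
        return "pick-up" if symbol == "small-cylinder" else "ignore"

    # Reactive loop: the motor command follows directly from the ongoing
    # interaction (the circling speed); no category variable exists anywhere.
    def reactive_agent(own_angular_velocity):
        return "pick-up" if own_angular_velocity > 1.0 else "keep-circling"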

 

*This section has been adapted from Hoffmann & Pfeifer, 2011.

 

References

Beer, R. (2003). The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior 11, 209-43.
Hoffmann, M. & Pfeifer, R. (2011). The implications of embodiment for behavior and cognition: animal and robotic case studies. In W. Tschacher & C. Bergomi (eds.), The Implications of Embodiment: Cognition and Communication (pp. 31-58). Exeter: Imprint Academic.
Kuniyoshi, Y., Yorozu, Y., Ohmura, Y., Terada, K., Otani, T., Nagakubo, A. & Yamamoto, T. (2004). From humanoid embodiment to theory of mind. In F. Iida, R. Pfeifer, L. Steels, & Y. Kuniyoshi (eds.), Embodied Artificial Intelligence (pp. 202-18). Berlin: Springer.
Pfeifer, R. & Scheier, C. (1997). Sensory-motor coordination: The metaphor and beyond. Robotics and Autonomous Systems 20, 157-78.
Pfeifer, R., Lungarella, M. & Sporns, O. (2008). The synthetic approach to embodied cognition: a primer. In P. Calvo & A. Gomila (eds.), Handbook of Cognitive Science (pp. 121-137). Amsterdam: Elsevier.
Riesenhuber, M. & Poggio, T. (2002). Neural mechanisms of object recognition. Current Opinion in Neurobiology 12, 162-8.

 
