Tutorial on Embodiment

4.1.1. Eye morphology*


Let us start with visual perception. Vision is the key sensory modality for many animals. However, the morphology or "design" of visual sensors varies greatly across species. We will use the human eye and the insect eye to illustrate how sensors are optimized to the particular needs of an animal.

 

Human eye

The retina of a human eye is a variable resolution sensor: the distribution of photoreceptors is non-homogeneous. The density of cones, which are used for high acuity vision, is greatest in the center (fovea) (Fig. 4.1.1.1) (e.g., Curcio et al., 1990). Through this morphological arrangement, a limited number of sensing and processing elements can provide both high acuity in the center of the visual field and a wide field of view. In robots, the retinal morphology can be emulated by the log-polar transformation (e.g., Sandini & Metta, 2002), and the degree of variable resolution can be scaled arbitrarily. Martinez et al. (2010) investigated this effect in a robot with two eyes performing vergence behavior (simultaneous movement of both eyes in opposite directions to obtain single binocular vision). The sensor morphology as represented by the log-polar transform clearly manifests itself in the information structure calculated on a sequence of images obtained from the robot. A similar phenomenon was observed by Lungarella & Sporns (2006), where a simulated wheeled robot (but with a human-inspired eye) drove around colored objects and foveated on them.
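
To make the log-polar emulation more concrete, the short Python sketch below resamples an ordinary camera image onto logarithmically spaced rings and evenly spaced wedges, so that a fixed budget of sampling elements is concentrated near the center, much like cones in the fovea. This is only a minimal illustration of the principle; the function name, the ring and wedge counts, and the nearest-neighbour sampling are assumptions for the example, not the implementation used by Sandini & Metta (2002).

```python
import numpy as np

def log_polar_sample(image, n_rings=32, n_wedges=64, rho_min=1.0):
    """Resample a grayscale image onto a log-polar grid.

    Rings are spaced logarithmically in radius, so sampling is densest
    near the image center (the 'fovea') and coarsens toward the
    periphery: high central acuity plus a wide field of view from a
    fixed number of sensing elements.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.geomspace(rho_min, min(cy, cx), n_rings)   # log-spaced rings
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]                                   # (n_rings, n_wedges)

# Example: a 256x256 random test image reduced to a 32x64 log-polar map.
img = np.random.rand(256, 256)
print(log_polar_sample(img).shape)                         # (32, 64)
```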

Fig. 4.1.1.1. A diagram of a human eye. The density of cones, which are used for high acuity vision, is greatest in the center (fovea). (Image source: National Eye Institute, National Institutes of Health)

 

* This section has been adapted from Hoffmann and Pfeifer (2011).

References:

Curcio, C. A.; Sloan, K. R.; Kalina, R. E. & Hendrickson, A. E. (1990), 'Human photoreceptor topography', J Comp Neurol 292, 497-523.
Lungarella, M. & Sporns, O. (2006), 'Mapping information flow in sensorimotor networks', PLoS Comput Biol 2, 1301-12.
Martinez, H.; Lungarella, M. & Pfeifer, R. (2010), On the influence of sensor morphology on eye motion coordination, in 'Proc. Int. Conf. Development and Learning (ICDL)'.

Sandini, G. & Metta, G. (2002), Retina-like sensors: Motivations, technology and applications, in T. W. Secomb, F. Barth & P. Humphrey, ed., Sensors and Sensing in Biology and Engineering, Springer, pp. 379-392.
Hoffmann, M. & Pfeifer, R. (2011), The implications of embodiment for behavior and cognition: animal and robotic case studies, in W. Tschacher & C. Bergomi, ed., The Implications of Embodiment: Cognition and Communication, Exeter: Imprint Academic, pp. 31-58.

 

 

Insect eye#

It has been shown that for many objectives (e.g. obstacle avoidance), motion detection is all that is required. Motion detection can often be simplified if the light-sensitive cells are not spaced evenly but arranged non-homogeneously. For instance, Franceschini and co-workers (1992) found that in the compound eye of the house fly the spacing of the facets is denser toward the front of the animal. This non-homogeneous arrangement, in a sense, compensates for the phenomenon of motion parallax, i.e. the fact that at constant speed, objects on the side travel faster across the visual field than objects towards the front: it performs the 'morphological computation', so to speak. Allowing for some idealization, this implies that under the condition of straight flight, the same motion detection circuitry - the elementary motion detectors, or EMDs - can be employed for the entire eye, a principle that has also been applied to the construction of navigating robots (e.g., Hoshino et al., 2000). In experiments with artificial evolution on real robots, it has been shown that certain aims, e.g. keeping a constant lateral distance to an obstacle, can be solved by a proper morphological arrangement of the ommatidia, i.e. denser frontally than laterally, without changing anything inside the neural controller (Lichtensteiger, 2004; Fig. 4.1.1.2 and Video 4.1.1.1). Because the sensory stimulation is only induced when the robot (or the insect) moves in a particular way, this is also called information self-structuring (or more precisely, self-structuring of the sensory stimulation). We will devote a separate section to this phenomenon later (Information self-structuring through sensory-motor coordination).
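
The geometry behind this parallax compensation can be spelled out with a small calculation. In the idealized case from the text (straight motion at constant speed v past an object at a fixed lateral distance d), a point seen at viewing angle theta drifts across the visual field at angular velocity omega(theta) = (v/d) * sin^2(theta), i.e. slowly in front and quickly at the side. The Python sketch below is a toy model with illustrative numbers, not the fly's or the Eyebot's actual layout: it compares evenly spaced facets with a frontally denser layout (cot(theta) equally spaced), for which the transit time of an image feature between neighbouring facets becomes constant, so that a single fixed-delay EMD circuit can serve the whole eye.

```python
import numpy as np

V, D = 1.0, 1.0   # forward speed and lateral distance to the object (arbitrary units)
N = 9             # facets covering viewing angles from 10 to 90 degrees

def transit_time(theta_a, theta_b, v=V, d=D):
    """Time for a point at lateral distance d to drift from viewing angle
    theta_a to theta_b (radians, 0 = straight ahead) while the agent moves
    straight at speed v: t = d * (cot(theta_a) - cot(theta_b)) / v."""
    return d * (1.0 / np.tan(theta_a) - 1.0 / np.tan(theta_b)) / v

# (a) Evenly spaced facets: the inter-facet transit time shrinks toward the
#     side, so a single fixed-delay motion detector cannot fit all facet pairs.
even = np.linspace(np.radians(10), np.radians(90), N)
print(np.round(transit_time(even[:-1], even[1:]), 3))

# (b) Frontally denser layout (cot(theta) equally spaced): every neighbouring
#     pair sees the same transit time, so one fixed-delay EMD serves the eye.
dense_front = np.arctan2(1.0, np.linspace(1.0 / np.tan(np.radians(10)), 0.0, N))
print(np.round(transit_time(dense_front[:-1], dense_front[1:]), 3))
```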

Fig. 4.1.1.2. Morphological computation through sensor morphology - the Eyebot. The specific non-homogeneous arrangement of the facets compensates for motion parallax, thereby facilitating neural processing. (A) Insect eye. (B) Picture of the Eyebot. (C) Front view: the Eyebot consists of a chassis, an on-board controller, and sixteen independently-controllable facet units, which are all mounted on a common vertical axis. A schematic drawing of the facet is shown on the right. Each facet unit consists of a motor, a potentiometer, two cog-wheels and a thin tube containing a sensor (a photo diode) at the inner end. These tubes are the primitive equivalent of the facets.


Video 4.1.1.1. Eyebot - online evolution of a multi-facet eye. The video depicts the process of artificial evolution. In every trial, the robot moves forward and the fitness of the current sensor morphology is evaluated. The fitness is given by the ability of the robot to detect an obstacle to the left of the robot; the subsequent sensory processing remains fixed. Then, a new facet distribution is generated, the robot is moved back, and a new trial is started (Lichtensteiger, 2004).
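
To make the evolutionary loop in the video more tangible, the sketch below implements a minimal (1+1)-style evolution over the facet angles alone, with the processing held fixed. On the real Eyebot the fitness of each candidate layout is measured in a physical trial (how well the fixed circuitry detects the obstacle on the left); the stand-in fitness used here is an assumption for illustration, rewarding layouts whose neighbouring facets see equal image transit times during straight driving, in line with the parallax argument above. Only the general mutate-evaluate-select loop mirrors the procedure shown in the video.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FACETS, D, V = 16, 1.0, 1.0      # facet count, obstacle distance, forward speed

def toy_fitness(angles):
    """Hypothetical stand-in for one Eyebot trial.  On the robot, fitness is how
    well the fixed motion-detection circuitry spots the obstacle on the left;
    here we simply reward layouts whose neighbouring facets see equal transit
    times during straight driving (higher is better, maximum is 0)."""
    a = np.sort(np.clip(angles, np.radians(5), np.radians(90)))
    transit = -D * np.diff(1.0 / np.tan(a)) / V   # per-pair transit times
    return -np.var(transit)

def evolve(generations=500, sigma=np.radians(1.5)):
    """Minimal (1+1) evolution strategy: only the morphology (the facet angles)
    is mutated; the evaluation, standing in for the controller, never changes."""
    parent = np.linspace(np.radians(5), np.radians(90), N_FACETS)
    best = toy_fitness(parent)
    for _ in range(generations):
        child = parent + rng.normal(0.0, sigma, N_FACETS)   # mutate the layout
        score = toy_fitness(child)                          # run the "trial"
        if score >= best:                                   # keep if no worse
            parent, best = child, score
    return np.degrees(np.sort(np.clip(parent, np.radians(5), np.radians(90))))

# Under this toy fitness, the evolved layout tends toward denser frontal spacing.
print(np.round(evolve(), 1))
```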

 

 

# This case study has been adapted from Pfeifer & Gomez, 2009.

References

Franceschini, N.; Pichon, J. & Blanes, C. (1992), 'From insect vision to robot vision', Phil. Trans. R. Soc. London B 337, 283-294.
Hoshino, K.; Mura, F. & Shimoyama, I. (2000), 'Design and performance of a micro-sized biomorphic compound eye with a scanning retina', J. Microelectromechanical Systems 9, 32-37.
Lichtensteiger, L. (2004), 'On the interdependence of morphology and control for intelligent behavior', PhD thesis, University of Zurich.
Pfeifer, R. & Gomez, G. (2009), Morphological computation - connecting brain, body, and environment, in B. Sendhoff, O. Sporns, E. Körner, H. Ritter & K. Doya, ed., Creating Brain-like Intelligence: From Basic Principles to Complex Intelligent Systems, Berlin: Springer, pp. 66-83.

 
