Tutorial on Embodiment
2.3. Problems of Traditional AI
"However, the original intention of artificial intelligence was not only to develop clever algorithms, but also to understand natural forms of intelligence, which requires a direct interaction with the real world. It is now generally agreed that the classical approach has failed to deepen our understanding of many intelligent processes. How do we make sense of an everyday scene or recognize a face in a crowd, for example? How do we manipulate objects, especially flexible and soft objects and materials like clothes, string, and paper? How do we walk, run, ride a bicycle, and dance? What is common sense all about, and how are we able to understand and produce everyday natural language? Needless to say, trying to answer these questions requires us to consider not just the brain, but how the body and brain of an intelligent agent interact with the real world.
Classical approaches to computer vision (which is one form of artificial perception), for example, have been successful in factory environments where the lighting conditions are constant, the geometry of the situation is precisely known (i.e., the camera is always in the same place, the objects always appear on the conveyor belt in the same position, the types of possible objects are known and can therefore be modeled), and there is always ample energy supply. However, when these conditions do not hold, such systems fail miserably, and in the real world, stable and benign conditions are never assured: the distance from an object to your eyes changes constantly, one of the many consequences of moving around; lighting conditions and orientation are always changing; objects are often entirely or partially blocked from view; objects themselves move; and they appear against very different and changing backgrounds." (Pfeifer and Bongard 2007, p. 30)
Real world vs. virtual worlds
"Classical models, that is, models developed within the cognitivistic paradigm, focus on high-level processes like problem solving, reasoning, making inferences, and playing chess. Much progress has been made, for example, in the case of chess, with computers able to play well enough to defeat world champions. In other areas, progress has been less rapid; for example, in computer vision. It has turned out to be far more involved than expected to extract information from camera images, typically in the form of a pixel array, and map it onto internal representations of the world. The main reason for these difficulties, and for the fundamental problems of AI in general, is that the models do not take the real world sufficiently into account. Much work in classical AI has been devoted to abstract, virtual worlds with precisely defined states and operations, quite unlike the real world.
To illustrate our argument, let us return to the game of chess (Fig. 2.3.1a). Chess is a formal game. It represents a virtual world with discrete, clearly defined states, board positions, operations, and legal moves. It is also a game involving complete information: If you know the board position, it is possible to know all you need to know to play the game, because given a certain board position, the possible moves are precisely defined and finite in number. Even though you may not know the particular move your opponent will make, you know that he will make a legal move; if he did not, he would no longer be playing chess. (Breaking the chess board over the opponent's head is not part of the game itself.) Chess is also a static game, in the sense that if no one makes a move, nothing changes. Moreover, the types of possible moves do not change over time.
By contrast, consider soccer (Fig. 2.3.1b). Soccer is clearly a nonformal game. It takes place in the real world, where there are no uniquely defined states. The world of soccer, the real world, is continuous. As humans, we can make a model of a soccer game, and that model may have states, but not the soccer game as such. Having no uniquely defined states also implies that two situations in the real world are never identical. Moreover, in contrast to virtual worlds, the available information an agent can acquire about the real world is always incomplete. A soccer player cannot know about the activities of all other players at the same time, and those activities are drawn from a nearly infinite range of possibilities. In fact, it is not even defined what "complete" information means where a game like soccer is concerned. Completeness can be defined only within a closed, formal world. Since completeness is not defined, it is better to talk in terms of limited information. A soccer player has only limited information about the overall situation. In fact, information that can be acquired about the real world is always limited because of embodiment: the field of view is restricted, the range of the sensors is limited, and the sensory and motor systems take time to operate. Moreover, in the real world there is time pressure: things happen even if we do not do anything, and they happen in real time." (Pfeifer and Scheier 1999, p. 58-61)
Fig. 2.3.1. Real worlds and virtual worlds. (a) Chess is a formal game. It represents a virtual world with precisely defined states, board positions, and operations, that is, the legal moves. (b) Soccer is an example of a nonformal game. There are no precisely defined states and operations. In contrast to chess, two situations in soccer are never exactly identical.
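The contrast between chess and soccer can be made concrete in code. The following sketch (an illustration of the idea, not anything from the quoted source; `legal_moves`, `sense`, and the noise and field-of-view parameters are all hypothetical) shows a discrete board whose legal moves can be enumerated exhaustively, next to a "soccer-like" continuous quantity that an embodied agent can only sample through a restricted, noisy sensor:

```python
import random

# Formal world: a discrete game state (a 3x3 grid here, standing in for a
# chess board) whose legal moves are precisely defined and finite in number.
def legal_moves(board):
    """Given the state, every possible move can be enumerated."""
    return [i for i, cell in enumerate(board) if cell == " "]

board = ["X", " ", "O", " ", " ", " ", "O", " ", "X"]
print(len(legal_moves(board)))  # prints 5: complete information, finite moves

# Nonformal world: the true, continuous state is never fully available; the
# agent gets only limited, noisy readings through its embodied sensors.
def sense(ball_position, field_of_view=(0.0, 50.0), noise=0.5):
    """Return a noisy reading, or None if the ball is outside the field of view."""
    lo, hi = field_of_view
    if not (lo <= ball_position <= hi):
        return None  # limited information: the ball is simply not seen
    return ball_position + random.uniform(-noise, noise)

print(sense(30.0))  # an approximate reading, never the exact state
print(sense(80.0))  # None: the restricted field of view hides it
```

Note that the soccer side cannot even define "complete" information: the sensor returns an approximation or nothing, and two calls with the same true state generally give different readings, mirroring the point that two real-world situations are never identical.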
The symbol-grounding problem
"The symbol-grounding problem, which refers to how symbols relate to the real world, was first discussed by Stevan Harnad (1990). In traditional AI, symbols are typically defined in a purely syntactic way by how they relate to other symbols and how they are processed by some interpreter (Newell and Simon 1976; Quillian 1968); the relation of the symbols to the outside world is rarely discussed explicitly. In other words, we are dealing with closed systems, not only in AI but in computer science in general. Except in real-time applications, the relation of symbols (e.g., in database applications) to the outside world is never discussed; it is assumed as somehow given, with the (typically implicit) assumption that designers and potential users know what the symbols mean (e.g., the price of a product). This idea is also predominant in linguistics: it is taken for granted that the symbols or sentences correspond in some way with the outside world. The study of meaning then relates to the translation of sentences into some kind of logic-based representation whose semantics are clearly defined (Winograd and Flores 1986, p. 18).
Using symbols in a computer system is no problem as long as there is a human interpreter who can be safely expected to be capable of establishing the appropriate relations to some outside world: the mapping is "grounded" in the human's experience of his or her interaction with the real world. However, once we remove the human interpreter from the loop, as in the case of autonomous agents, we have to take into account that the system needs to interact with the environment on its own. Thus, the meaning of the symbols must be grounded in the system's own interaction with the real world. Symbol systems, such as computer programs, in which symbols refer only to other symbols are not grounded because they do not connect the symbols they employ to the outside world. The symbols have meaning only to a designer or a user, not to the system itself." (Pfeifer and Scheier 1999, p. 69-70)
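The distinction between ungrounded and grounded symbols can be sketched in a few lines. This is only an illustration of the argument, not code from the quoted source; the names `ungrounded`, `read_temperature`, and `classify`, and the 30-degree threshold, are all hypothetical:

```python
# Ungrounded symbol system: every symbol is defined only by other symbols,
# so following the definitions never leaves the system. Any "meaning" exists
# only in the head of the designer or user who reads the tokens.
ungrounded = {
    "zebra": ["horse", "stripes"],
    "horse": ["animal", "legs"],
    "stripes": ["pattern", "lines"],
    # ... each definition bottoms out in yet more symbols
}

# Grounded symbol: the token "hot" is anchored in the system's own sensor
# reading, not in another symbol. read_temperature is a hypothetical sensor
# interface, simulated here with a constant instead of real hardware.
def read_temperature():
    return 42.0  # stand-in for an actual sensorimotor interaction

def classify():
    """Map a raw measurement onto a symbol: the symbol-to-world link."""
    return "hot" if read_temperature() > 30.0 else "cold"

print(classify())  # prints "hot": derived from the reading, not a definition
```

In the first structure, a lookup of "zebra" yields only further symbols; in the second, the symbol emitted by `classify` covaries with a measurement of the environment, which is the minimal sense in which an autonomous agent's symbols could be grounded in its own interaction with the world.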
Pfeifer, R., & Bongard, J. C. (2007). How the Body Shapes the Way We Think: A New View of Intelligence. Cambridge, MA: MIT Press.
Pfeifer, R., & Scheier, C. (1999). Understanding Intelligence. Cambridge, MA: MIT Press.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19, 113-126.
Quillian, R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic Information Processing (pp. 18-29). Cambridge, MA: MIT Press.
Winograd, T., & Flores, F. (1986). Understanding Computers and Cognition. Reading, MA: Addison-Wesley.