From EUCog Wiki
Outcome of the Rapperswil meeting
On 28-30 January 2011, EUCogII organized a workshop on "Challenges for Cognitive Systems" in Rapperswil (Switzerland). Among other topics, this meeting addressed the main challenges of cognitive systems research in the so-called "Whole Iguana" group.
Outcome of the “Whole Iguana” group
The task of the "Whole Iguana" group was to focus on challenges associated with the design and theory of "putting it all together" in cognitive systems research. We introduced two graphical depictions of progress in cognitive systems research, proposed a reformulation of the Challenge, and formulated a mission statement.
The first depiction focuses on improving the relation between agent and environment and can be used to conceptualize progress. The second focuses on the integration of functional modules in cognitive systems and suggests a research strategy. The two depictions are closely related because both rest on the conclusion that cognitive functions are not primarily the result of excellent modules, but of a highly theory-dependent integration. This conclusion is based on the observation that changes in the complexity and structure of the task environment have profound, even catastrophic, effects on the performance of an artificial cognitive system. Apparently the agent's coping ability is closely associated with, if not identical to, the agent's adaptability to the environment.
Agent adaptability vs Environmental complexity
Cognitive systems are systems that pursue goals in environments, and progress is signified by systems that successfully pursue more ambitious goals in more complex environments. To depict this graphically, we suggest using "Environmental complexity" as the horizontal axis and "Agent adaptability" (or "Agent coping ability") as the vertical axis. This leads to the following graph:
The horizontal axis depicts monotonically increasing task environment complexity, starting with a controlled task environment on the left and ending, in the limit, with open, uncontrolled real-world environments on the right. The vertical axis depicts the agent's coping capacity and/or adaptability. When approached from below, the diagonal marks the point at which problems caused by environmental complexity, such as the inability to adapt to unseen or novel situations, are no longer the main issue and problems with optimal task/goal execution become more important.
Systems below the diagonal need user control to keep environmental complexity manageable, while systems well above the diagonal have autonomy in their task environment. Typically we want systems to be situated well above the diagonal, because that is the only place where excess adaptability enhances system performance and user benefit. Of course, in situations in which the user can reduce task environment complexity, the system can perform its tasks (reach its goals) with less adaptability.
In more human terms, agents below the diagonal exist in a world they cannot control and need (and may seek) guidance. They might be called "fearful" and benefit most from strategies that reduce task environment complexity. Agents above the diagonal exist in a world they can control and cope with; their emotional state might be described as "interested", and they benefit most from option-exploration strategies that further improve task performance and goal achievement. Note that the upper-right area well above the diagonal represents highly open task environments. Systems positioned here may have started from many different initial tasks but have since been generalized into highly versatile and reliable cognitive systems.
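The role of the diagonal can be made concrete in a few lines of code. The following is a minimal sketch, not from the original text: the normalization of both axes to a common scale and the names `CAPoint` and `regime` are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CAPoint:
    """A position on the complexity/adaptability (C/A) plane.
    Both axes are assumed to be normalized to a comparable scale;
    that scaling is a modelling choice, not given in the text."""
    complexity: float    # task environment complexity (horizontal axis)
    adaptability: float  # agent adaptability / coping ability (vertical axis)

def regime(p: CAPoint, margin: float = 0.0) -> str:
    """Classify a system relative to the diagonal of the C/A plane.

    Systems below the diagonal need user control to keep environmental
    complexity manageable; systems well above it (by at least `margin`)
    are autonomous in their task environment.
    """
    if p.adaptability > p.complexity + margin:
        return "autonomous"          # well above the diagonal
    elif p.adaptability < p.complexity:
        return "needs user control"  # below the diagonal
    else:
        return "borderline"          # near the diagonal

# Example: the same agent in a controlled lab vs. an open outdoor setting.
lab = CAPoint(complexity=0.2, adaptability=0.5)
outdoors = CAPoint(complexity=0.8, adaptability=0.5)
print(regime(lab, margin=0.1))       # autonomous
print(regime(outdoors, margin=0.1))  # needs user control
```

The point of the sketch is that the classification depends only on the agent's position relative to the diagonal, not on its absolute adaptability: the same agent can be autonomous in one environment and dependent in another.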
Fundamental progress in cognitive systems research is typically associated with preventing catastrophic performance effects when the cognitive system is exposed to increased task environment complexity, i.e. when it needs to become "aware" of more conditions in which it can and cannot perform as intended. The desired development is to maintain a sufficient coping ability (the same distance above the diagonal) while operating in more complex and diverse environments. In practice, however, performance in the new environmental conditions is highly degraded. This entails that research needs to be directed towards increasing the adaptability and/or coping ability in the new environment. This may require improving the system's learning ability, representing the task environment better, introducing improved reasoning abilities, or giving it new abilities such as language. When this leads to a new well-operating system, a new attempt can be made to generalize the system to less constrained task environments.
This suggests an iterative design/research approach in which the task environment's complexity is increased gradually. At each level of complexity, one should ensure that the agent's control mechanism is as environment-independent as possible, i.e. as adaptable as possible. This allows the design of systems for which it is unnecessary to specify every aspect of the environment at design time, because the agent learns much of its control strategy at run-time.
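The iterative approach can be sketched as a simple loop. Everything concrete here is a hypothetical stand-in: the text prescribes only the loop structure (increase complexity, then restore the coping margin), not any agent API or numeric scale.

```python
class ToyAgent:
    """Hypothetical stand-in agent: 'improving' simply raises a scalar
    coping score. A real system would retrain or redesign components
    (better learning, better representations, new abilities)."""
    def __init__(self):
        self.score = 0.0

    def coping_ability(self, complexity):
        return self.score

    def improve(self, complexity):
        self.score += 0.05  # one unit of research/design effort

def iterative_development(agent, complexity_levels, coping_margin=0.1):
    """Increase task-environment complexity step by step; at each level,
    adapt the agent until it copes with a sufficient margin, i.e. until
    it sits the desired distance above the diagonal again."""
    for complexity in complexity_levels:
        # Performance typically degrades sharply in the new, more
        # complex environment ...
        while agent.coping_ability(complexity) < complexity + coping_margin:
            # ... so effort goes into restoring adaptability.
            agent.improve(complexity)
        # Only once the system operates well at this level do we move
        # on to a less constrained task environment.
    return agent

agent = iterative_development(ToyAgent(), [0.2, 0.5, 0.8])
```

The design choice worth noting is that the outer loop never skips a level: generalization to a less constrained environment is attempted only after the coping margin has been re-established at the current one.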
“Emerging Theory of Cognitive Systems”
Research on artificial cognitive systems focuses on the design of autonomous agents that act in real-world environments. The real-world setting challenges classical approaches in robotics, which assume known, predictable, restricted environments. This has brought the robotics community to the fundamental problem of dealing with the complexity of the physics and dynamics of real-world environments. Reductionist approaches that are based on simple models and assume that functional parts add up to a coherent whole through simple static couplings evidently fail in complex environments.
As technological progress demands more comprehensive systems with more and more capabilities, the non-additivity of the functional submodules limits progress in this direction.
To advance research on cognitive systems, we suggest an iterative approach with close cooperation between theory and technology development. This approach is based on gradually growing the ability of an artificial cognitive agent to deal, in increasingly flexible ways, with increasingly complex task environments.
- Benchmark definition. A set of technological (benchmark) challenges should be identified (blue ellipses on the figure).
- Each challenge may be addressed at different levels of environmental complexity. In the limit, the challenge can be formulated as a desirable "cognitive artifact" (such as an assistant at home, a wise alter ego, a tennis player, a dancer, a shopper, etc.). The complexity of the task environment should be estimated and quantified, as should the adaptability of the desired agent. This sets the long-term goal for this particular benchmark challenge.
- The state-of-the-art system should be analyzed in terms of the capabilities crucial to the task, placing each capability (scene perception, object recognition, categorization, motor control of grasping, action selection, etc.) on the task environment complexity/adaptability plane (C/A-plane).
- Now the relative "urgency" of the capabilities can be estimated, e.g. object recognition should become more flexible, whereas grasping is adequate for now. Thus "bottleneck research areas", or systemic challenges, can be specified within each of the benchmark challenges.
- Systemic challenges. If several benchmark challenges share a systemic challenge (red circles on the figure), the researchers working on that systemic challenge within the different benchmark challenges should coordinate. They should meet regularly to discuss the critical aspects of the area that become apparent in the different benchmark scenarios, trying to identify principled problems and bottlenecks and to find principled solutions inspired by natural cognitive systems (thus consulting experts from theoretical and experimental neuroscience and from the behavioral and cognitive sciences). In this way, within each systemic challenge a coherent scientific picture, a theoretical basis of the particular field, should emerge that is connected, through the benchmark work, to the other systemic challenges.
- Benchmark progress monitoring. Progress can be monitored at the level of benchmark challenges as movement above the diagonal of the complexity/adaptability diagram; it should be estimated for each capability (and thus for each systemic challenge involved in the benchmark) as well as overall.
- Systemic progress monitoring. Progress within systemic challenges is expressed in improvements of our understanding of the underlying phenomena at different levels (theoretical, modeling, and computational) and is probably harder to quantify. But such progress will be easy to detect, because it leads to important reductions in design time for novel applications. The main role of the "red circles" of systemic challenges is to promote theoretical thinking and the gradual evolution of the "whole iguana".
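The bookkeeping behind the blue ellipses and red circles can be sketched as a simple mapping. The benchmark names and capability sets below are purely illustrative (borrowed from the examples in the text), not an actual project inventory.

```python
from collections import defaultdict

# Benchmark challenge ("blue ellipse") -> capabilities it requires.
benchmarks = {
    "assistant at home": {"object recognition", "grasping", "action selection"},
    "shopper":           {"object recognition", "navigation", "action selection"},
    "tennis player":     {"motor control", "action selection"},
}

def systemic_challenges(benchmarks, min_shared=2):
    """Return capabilities required by at least `min_shared` benchmark
    challenges ("red circles"), with the benchmarks whose researchers
    should coordinate on each."""
    by_capability = defaultdict(set)
    for bench, caps in benchmarks.items():
        for cap in caps:
            by_capability[cap].add(bench)
    return {cap: sorted(group) for cap, group in by_capability.items()
            if len(group) >= min_shared}

shared = systemic_challenges(benchmarks)
# "action selection" is required by all three benchmarks, so its
# researchers form one coordination group; "navigation" appears in
# only one benchmark and is not (yet) a systemic challenge.
```

The inversion from benchmarks to capabilities is the whole point: systemic challenges are not chosen up front but emerge wherever several benchmark scenarios hit the same bottleneck.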
Reformulating the Cognitive Systems Challenge
In FP7, Challenge 2 of the ICT call is described as:
Challenge 2 focuses on artificial cognitive systems and robots that operate in dynamic, nondeterministic, real-life environments. Such systems must be capable of responding in a timely and sensible manner and with a suitable degree of autonomy to gaps in their knowledge, and to situations not anticipated at design time. Actions under this Challenge support research on engineering robotic systems and on endowing artificial systems with cognitive capabilities.
This formulation has a number of drawbacks. The most important one is that it describes a virtually impossible aim: it is always possible to find a situation or reason why any developed cognitive system fails at one or even all of the requirements. However, in the previous sections we described a progressive path towards systems that can deal with ever more complex environments. This leads to the following proposal for FP8:
Challenge 2 focuses on artificial cognitive systems and robots that operate in more dynamic and complex environments, exploit predictability better, and deal better with the novelty of non-deterministic environments. Such systems must be capable of responding in a more timely and more sensible manner, and with a suitable degree of autonomy, to gaps in their knowledge as well as to ever more situations not anticipated at design time. Actions under this Challenge support research on engineering robotic systems and on endowing artificial systems with cognitive capabilities.
Towards a mission statement
In addition, we proposed a concept mission statement for our field. It is important that we defend the term "cognitive"; otherwise it will be adopted and inflated (just as has happened with "intelligent") by ill-informed or opportunistic activities in related fields. We also need a popular-science description. The mission statement might be useful for both.
Concept mission statement of cognitive systems research. Cognitive systems researchers
- combine results from different scientific domains,
- integrate these results into working systems,
- test these in ever more complex dynamic environments,
- while making them less dependent on designer and user, and
- keeping them understandable for the user so that
- user and system can communicate and cooperate in shared environments.
The resulting cognitive systems function safely and ethically in human, natural, or artificial environments. In addition the scientific process advances the theory of what it means to be cognitive.