Challenges for Artificial Cognitive Systems II

From EUCog Wiki


→ Workshop "Challenges for Artificial Cognitive Systems II", Oxford, 20-22 January 2012


Organisers

→ Organisers' Report

Other "Challenges" Events in EUCogII

Working Groups: Formulations of Challenges

Group A

Challenge(s)

  1. Learn from the examples of deeply integrated systems found in biology.
  2. Build a methodology for establishing the conceptual or mathematical basis for understanding the behaviour of populations of endogenously active systems.
  3. Raise interest by engaging with the public, and learn from public engagement through education, entertainment, and debate.

Explanations

  1. As we develop increasingly complex solutions to particular problems, biology provides examples of entire systems in which all functions are deeply integrated and embedded, whereas engineering inevitably produces specialized solutions to individual problems. Engineers, in seeking solutions to specific problems, repeatedly encounter asymptotic limits to performance; speech recognition based on Markovian assumptions provides a familiar example. A robust reciprocal dialogue between engineers and scientists can help both sides to reformulate the statement of problems in each domain. An important source of information from biology is the sequence in which functions evolved in organisms. This may guide our development of profoundly integrated complex functions on the basis of simpler interacting constituents.
  2. Unlike artefacts, living systems consist of components that self-organise, self-regulate and contribute to collective behaviour, from swarms and ant-hills through to brains and societies. This opens the possibility of self-organised, emergent representation and cognition. At present these are problems we are not in a position to understand or analyse; however, the emergence of semantics is one example of a 'hard problem' whose solution is bypassed by the eventual construction of such systems: living systems do not need to ground symbols, as their semantics are not injected or engineered. The eventual benefit of meeting this challenge could be the ability to understand, model and design systems with intrinsic cognition and semantics.
  3. Raising interest is possible via a cognitive zoo analogy. Like a zoo, it has scientific and support staff, and it serves three main purposes. (1) Developing appealing cognitive systems. 'Appealing' can be in terms of functionality, aesthetics, usability, and maintaining interest through self-initiated playful behaviour. This should offer the scientific community opportunities for two-way engagement, for example by allowing hobbyists to actively engage with science. Target audiences are the media first, followed by the general public and politicians. It also allows the cognitive systems community to learn from the full diversity of ways in which the public interacts with cognitive systems. (2) Benefits for the community: allowing the community to show off, and offering opportunities for other fields -- such as robotics, gaming, and animat communities -- to connect to the cognitive systems community. (3) Challenges for the community: a focus on interest, both as a basic property of the cognitive systems and in the public (e.g., playfulness, novel affordances, interaction, appropriateness of different behaviours in the environment); the development of very robust systems that work for a long time in open environments, with severe constraints on resources, while requiring minimal staff support; answering what a world optimized for cognitive systems would look like; interaction between cognitive systems; and examples of a first business completely run by cognitive systems, such as an artists' workshop or a restaurant (with the practical, legal, and moral issues associated).


Group Members

Tjeerd Andringa, Nicola Bellotto, Simon Bensasson, Mark Bickhard, John Bishop, Fabio Bonsignorio, Lola Cañamero, Chih-Chun Chen, Fred Cummins, Torbjorn Dahl, Yulia Sandamirskaya, Patrick Courtney

Group B

Challenge 1

Following on from cognitive vision, we should develop more cognitive approaches to active perception and control, including multisensory integration, and merge them into a cognitive architecture that integrates perception with goal-directed reasoning.

Explanations: Cognition refers to mental processes, including attention, remembering, producing and understanding language, solving problems, and making decisions. We need a toolkit that allows active perception to be integrated with decision making and reasoning in a single system.

Cognitive vision refers to computer vision systems that can pursue goals, adapt to unexpected changes in the visual environment, and anticipate the occurrence of objects or events. Such vision systems are more than just vision: they are active and context-sensitive.
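A minimal sketch of the kind of integration meant here, assuming an invented toy interface (none of the names or thresholds below come from the workshop material or any existing architecture): percepts are fused across modalities, and the resulting beliefs feed goal-directed decisions that in turn redirect the sensors.

```python
# Minimal, hypothetical sketch of a loop that combines active perception,
# multisensory integration and goal-directed decision making.

class Agent:
    def __init__(self, goal):
        self.goal = goal        # e.g. the label of an object the agent must find
        self.beliefs = {}       # label -> fused confidence in [0, 1]

    def integrate(self, percepts):
        """Multisensory integration: fuse (label, confidence) pairs coming from
        different sensors, keeping the highest confidence seen so far per label."""
        for label, confidence in percepts:
            self.beliefs[label] = max(confidence, self.beliefs.get(label, 0.0))

    def decide(self):
        """Goal-directed reasoning: act if the goal is confidently perceived,
        otherwise redirect perception (e.g. gaze) to gather more evidence."""
        if self.beliefs.get(self.goal, 0.0) > 0.8:
            return "act-on-goal"
        return "refocus-sensors"   # the active-perception step

agent = Agent(goal="cup")
agent.integrate([("cup", 0.6), ("table", 0.9)])   # e.g. visual detections
agent.integrate([("cup", 0.85)])                  # e.g. a second modality (touch, sound)
print(agent.decide())                             # -> "act-on-goal"
```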


Challenge 2

Develop cognitive systems that can use the social signals humans use, in order to facilitate interactions with artificial systems, and that can understand people's ethical concerns.

Explanations:

A lot of work has been done on understanding emotions and intentions, but it has to be related to the meaning of information that humans understand at the mental level. Signals should include culturally sensitive information, non-verbal behavioural cues, eye gaze direction, gestures and physical interactions.


Individual takes:

Maurice: reaching human-level intelligence, using semantic knowledge on the web, exploring virtual environments, building cognitive systems that evolve in such environments.

John: in 5 years' time, cognitive systems with language and gesture understanding should reason within a context, define their goals, achieve them autonomously, act, report, and give reasons for the results obtained. Schemes should be extensible to more complex things. Such a system does not have to replicate the human brain, it may just emulate it; it should be able to understand humans and should not compromise ethical dimensions …

Honghai: we need a clear understanding of cognitive capabilities and of the objectives in implementing them, of how to design a cognitive architecture accommodating inference, etc., and we need to define benchmarks.

Paul: we need an understanding of human motor activity, to build humanoid robots that act like people, to improve human-robot interaction, to mirror how people act, to understand context, and to understand people.

Hagen: understand the gaze patterns people would be comfortable with, for robot companions. Using gaze to find relevant information. Understanding the intentionality of humans and their internal mental and emotional states.

Catriona: what matters for cognitive systems is to reach understanding, ethics, disaster scenarios, human rights and needs, theory of mind, understanding our own mental states, metareasoning and metacognition, and how to reason about what other people understand.

Tony: machine ethics; top-level control of morality, emotions and autonomy; reward learning; valuing situations; emotions as guidance for action; more autonomy = less reliability; balancing restricted autonomy.

Margarita: emotion recognition from speech; how does the user feel? Hard to determine. Personality, fast changes of emotion; prosodic information is not sufficient; changes, energy, shivering … body language?

Włodek: building cognitive systems that understand humans and that help humans to understand themselves; understanding the human internal point of view and specific psychological feelings; showing human-like qualities in problem solving and design: creativity, insight, imagery, intuition, goals leading to focus of attention; a neurocognitive approach to language. In the long run some cognitive systems will be a part of us ... so they need to be closely adapted to our brains and capabilities. Methodologies: pattern calculus, spaces that change dimensions, use of dynamical systems, geometrical models, cooperative learning systems ranging from simple threshold neurons to society-of-mind-like agents, modules that are specialized yet flexible. A cognitive operating system (like MS Windows or Mac OS, but one that understands us) is a challenge. We may need a better phenomenology of human mental states to know how to interact with people – and this is difficult.


Group Members

Włodzisław Duch | Antoni Gomila | Catriona Kennedy | Margarita Kotti | Hagen Lehmann | Paul E. Hemeren | Hong-hai Liu | John Gray | Maurice Grinberg

Presentation PPTX | Notes of the B group | Cognitive Architectures links and reviews

Group C

Challenge

Building systems to understand, quantify, simulate and exploit the behaviours exhibited by autonomous systems operating in the natural world.

Explanation

Cognitive systems exhibit the following interlinked internal characteristics and behaviours as major dimensions:

  1. Interaction with the environment, including communication with other agents
  2. Information processing, such as knowledge or reasoning
  3. Adaptation, learning, flexibility
  4. Goal directed autonomous behaviour

A prominent example of the interlinked nature of these dimensions is perception, linking interaction and information processing.

The multidimensional space can be represented in a cobweb diagram, where performance is measured along each of these dimensions. Each cognitive system occupies a region in this space (red or blue in the diagram), and we would expect a cognitive system to involve a significant component in each of these dimensions. Critical features of these dimensions might be measured in benchmarks.

File:Cogdysdiag.png
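A minimal sketch of how such a profile might be represented, assuming invented benchmark scores, a threshold, and example systems (none of which come from the workshop material):

```python
# Illustrative sketch only: a cognitive system as a point in the four-dimensional
# space listed above.  Dimension names follow the list; scores and systems are invented.

DIMENSIONS = ("interaction", "information_processing", "adaptation", "goal_directed_autonomy")

def profile(**scores):
    """Benchmark scores in [0, 1] along each dimension (the axes of the cobweb diagram)."""
    return {d: float(scores.get(d, 0.0)) for d in DIMENSIONS}

def is_cognitive(system, threshold=0.2):
    """A cognitive system is expected to show a significant component on every axis."""
    return all(score >= threshold for score in system.values())

robot = profile(interaction=0.7, information_processing=0.5,
                adaptation=0.4, goal_directed_autonomy=0.6)
chess_engine = profile(information_processing=0.9, goal_directed_autonomy=0.8)

print(is_cognitive(robot))         # True  - significant on all four axes
print(is_cognitive(chess_engine))  # False - negligible interaction and adaptation
```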

Group Members

Nikolaos Mavridis, Georg Meyer, Marcin Miłkowski, Roger Moore, Vincent C. Müller, Slawomir Nasuto, Dimitri Ognibene, Fiora Pirri, Edwige Pissaloux, Frank Pollick

Group D

Challenge(s)

  1. Identify what kinds of tasks, defined in terms of a platform/domain, would require internal representations/symbols/classifications grounded in or linked to physical realities, and for which solutions based on acting in a purely reactive fashion or using simple signal processing are impossible or brittle.
  2. Invent ways to assess whether a particular solution to such a task would indeed need such meanings/internal representations (and does not merely appear to because of mistakes in our judgment).

Explanations

We need a list of classes of cognitive systems, perhaps a bio-inspired list, that would correspond to milestones on a road towards building cognitive systems through incremental design or through evolution and/or development. We would like the classes phrased in such a way that it is easy to identify the complexity of the robotic/agent platform one has in mind, and the complexity of the environment it would be expected to deal with. We hope that relating this roadmap to biology will provide not only an intuition of what is meant, but also a way to benchmark the achievements and to approximate the difficulty of particular challenges. The ordering we have in mind does not need to be one-dimensional; it could be partial, or indeed tree-like. But we would like to identify or reconsider what could be considered "major transitions" in the evolution of cognitive abilities, in other words: classes of problems that require inherently different cognitive skills. The list should not be very long; if it becomes long, we advise creating a hierarchical structure.

It is a different issue that increased computational abilities and memory capacity can allow for increased precision, dealing with larger-scale problems, and dealing with more uncertainty/noise/distractors. Whether the physical world changes at a rate comparable to the temporal scale relevant to an individual agent adds another dimension to the problem (requiring individual learning/adaptation).

Our first take is as follows:

  1. Sensing with simple feedback(s). For example, one could imagine an agent that searches for or avoids objects, guided by vision, or by smell if the objects are sources of diffusive substances (a minimal illustrative sketch of this class appears after this list). Performance would be assessed by placing the agent at a random position in the environment, with the sources of the substances also at random positions. Adaptation, and/or choosing an appropriate action from a pre-specified repertoire, would be required if the physical characteristics of the environment or the categories of substances or objects may change at a time scale relevant to an individual agent.
  2. Taking advantage of spatial/temporal structure (regularities of the neighbourhood relations) of the environment (a task that requires an internal representation of the environment). An example may be an agent able to associate a certain direction at certain times as relevant to its goal. Another example (and perhaps a different subclass) would be an agent able to remember the positions of objects on a map and to navigate on the map regardless of its initial position.
  3. The ability to manipulate/shape the external physical environment. An example may be an agent that stores objects belonging to different categories at different locations (possibly because of internal storage limitations) for further use.
  4. Taking advantage of the regularities in the behaviour of other agents. An example may be an agent of type A that associates the fact that another agent in its environment, B, in certain situations looks for objects of interest, and that learns to follow B should that situation arise. This may allow A to offload to B the sensing of different objects (assuming that finding objects is more difficult than finding agent B), all of interest to A, or to take advantage of resources accumulated by B either internally (predation) or externally (for example, if B stores objects as described above).
  5. Taking advantage of the knowledge about the cognitive abilities and limitations of other agents in order to influence their behaviour. For example, A would make B collect objects of interest to A. This task can be framed in an egoistic fashion or in a framework of cooperation (reciprocal altruism).
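As a purely illustrative sketch of class 1, assuming a toy one-dimensional environment and invented parameters: an agent approaches a source of a diffusive substance using only simple sensory feedback (gradient following), with no internal representation of the environment.

```python
# Hypothetical sketch of class 1: sensing with simple feedback.  The diffusion
# model, parameters and assessment protocol are invented for illustration.

import random

def concentration(pos, source):
    """Toy diffusion field: concentration falls off with distance from the source."""
    return 1.0 / (1.0 + abs(pos - source))

def gradient_follower(start, source, steps=100, step_size=1.0):
    """Move towards higher concentration until no neighbouring position is better."""
    pos = start
    for _ in range(steps):
        here = concentration(pos, source)
        left = concentration(pos - step_size, source)
        right = concentration(pos + step_size, source)
        if max(left, right) <= here:
            break                      # at (or within one step of) the source
        pos += step_size if right > left else -step_size
    return pos

# Assessment as described above: random agent and source positions.
source = random.uniform(-20.0, 20.0)
start = random.uniform(-20.0, 20.0)
final = gradient_follower(start, source)
print(abs(final - source) <= 1.0)      # True: the agent ends within one step of the source
```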

We think the community has been addressing these issues, but only with partial success. There is a hope that increasing the complexity of the tasks in these classes (along the added dimensions mentioned above) would require solutions that are not brittle, that would involve internal representations, association of these internal symbols with features of the external environment (symbol grounding/linking), manipulation of these symbols, and that would require communication. We feel that in order to progress towards human-equivalent cognitive abilities it may not be necessary to be committed to features of human language. Rather, we believe that in order to be efficient in a complex world, any animat would need to cluster objects (or the different forms of the signal about the same type of object that reaches the sensors in different environmental conditions), and to represent these clusters internally. The hope of this programme is that efficient inference about temporal (causal) and spatial relations between events or objects in a complex physical world may only be possible if symbols are manipulated internally. Manipulating them in a non-restricted but structured fashion may be necessary for internal redescription/reinterpretation and for efficient communication.


Group Members

Ricardo Sanz, Bill Sharpe, Aaron Sloman, Kostas Stathis, Ricardo Tellez, Serge Thill, Sandor Veres, Georg von Wichert, Borys Wrobel

Discussion

"Cognitive Robotics" - A contribution by J.O. Gray

Current humanoid robotic platforms in the U.S.A., Europe and Japan are reaching a high degree of engineering development, with ever-increasing functionality and sophisticated control architectures. In contrast, demonstrable cognitive development, however defined, still appears to be severely limited. Rather than enter into a long discussion as to the definition of cognition in such engineering platforms, it is simpler and more practical to define a set of functionalities that would greatly increase the use of these devices in the real world. These could be defined as:

  1. The ability to accept natural language/gesture commands.
  2. The ability to reason in order to identify the goal required within the situation context.
  3. To achieve the goal autonomously and in safety within a semi-structured and variable environment.
  4. To report on success/failure with reasons and, if necessary, determine suggested alternative strategies from live situation data.

Clearly there is a requirement for memory, a learning capability, and a development path that allows extension from simple to more complex operations as the functionality of the platform is enhanced. Practical realisation will require the development of some implementable schema for engineering/software architecture design. This need not necessarily be a unique architecture, as we only need to imitate perceived human cognitive processes and not to replicate them in detail. Given that these devices will be physically interacting with humans, there must be some ethical dimension to any cognitive architecture, and this aspect is now being studied in the U.S.A.
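A minimal sketch of how the four functionalities above could be organised as one pipeline, assuming invented function names, a toy context and toy rules (this is not drawn from any existing robot architecture):

```python
# Hypothetical sketch: command -> goal -> plan -> execution -> report.
# Every name and rule is an invented placeholder.

def interpret(command):
    """1. Accept a natural language/gesture command and extract a goal."""
    return command.strip().lower()                  # e.g. "fetch the red box"

def plan(goal, context):
    """2. Reason within the situation context to turn the goal into steps."""
    if goal not in context["known_tasks"]:
        return None                                 # goal cannot be grounded
    return ["locate", "approach", "grasp", "return"]

def execute(steps, context):
    """3. Pursue the goal autonomously and safely in a semi-structured environment."""
    if steps is None:
        return {"success": False, "reason": "goal not understood",
                "alternative": "ask the operator to rephrase the command"}
    for step in steps:
        if context["obstacle_detected"]:
            return {"success": False, "reason": f"aborted at '{step}': obstacle detected",
                    "alternative": "wait and retry, or request human assistance"}
    return {"success": True, "reason": "all steps completed"}

# 4. Report success/failure with reasons and, if needed, an alternative strategy.
context = {"known_tasks": {"fetch the red box"}, "obstacle_detected": False}
print(execute(plan(interpret("Fetch the red box"), context), context))
```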

Humanoid robotics studies were given an impetus in the 1980s by the Chernobyl event, and the Fukushima event last year underlined the lack of progress made in developing practical machines that can be deployed usefully in unstructured environments. As indicated, we have come a long way in engineering, but limitations in deployable cognitive functionality still remain.

J.O.Gray January 2012

The Challenge of Knowing What We've Accomplished

As difficult as it is to build cognitive systems (and I know it's hard, I've been building them since 1991), it strikes me looking at this page that one of the hardest things is taking on board what has been learned and moving forward. We need more review articles that demonstrate our positive progress, and more books and tutorials that help make sure new PhDs (and professors!) don't reinvent the wheel, or worse, spend their time dismissing the accomplishments of the past rather than learning from them. Part of this is understanding natural intelligence. Much of what we've learned is that human and animal intelligence doesn't work in quite any of the various extreme ways we thought it might. Without relating what we learn about cognitive systems to what we learn about natural intelligence, I think the field necessarily becomes boring to those who come into it (or look into it), because it is inaccessible and alien. But most of what makes up minds is itself alien to us: our awareness is only a tiny part of our intelligence.

--Joanna Bryson 22:49, 6 February 2012 (UTC)


Report: "Challenges for Cognitive Systems Research" A. Gomila & V.C. Müller
