ECAI Workshop on Machine Learning for Interactive Systems (MLIS 2012): Bridging the Gap among Language, Motor Control and Vision

Interactive systems such as multimodal interfaces or robots must perceive, act, and interact in the environment in which they are embedded. Perception, action, and interaction are naturally interdependent and affect one another.

Event dates: 27-28 August 2012

This is particularly true in many hands-free and eyes-free mobile applications of interactive systems. Machine learning offers the attractive capability of making interactive systems more adaptive to the user and the environment. Machine learning techniques have been applied widely to each of perception, action, and interaction individually, but holistic approaches that tackle these fields in a unified way remain rare. How to integrate language, motor control, and vision in machine learning interfaces both efficiently and effectively has been a long-standing problem and is the main topic of this workshop.

For further information please visit: