Invited talk: Mathieu LEFORT (LORIA Laboratory, University of Lorraine, Nancy 2)

When:
2012-09-26 at 14:00

Where:
UMR 7102 - Neurobiology of Adaptive Processes, University Pierre and Marie Curie, Paris 6, Building B, 5th floor, Room 501

Details:

TITLE:
Spatial learning of multimodal correlations in a cortically inspired way

ABSTRACT:
An artificial or biological agent perceives its internal state and the state of its external environment through numerous sensors. These sensors provide data from several modalities (proprioception, vision, audition, ...). Psychological experiments show that the unification of these many modal stimuli relies on the detection of multimodal invariants (i.e. co-occurring modal stimuli) in the environment; the ventriloquist effect is a well-known illustration. From that point of view, the unification of multimodal stimuli is related to sensorimotor theories. During my thesis, I worked on the unification of multimodal data based on the detection of correlations (i.e. temporally recurrent spatial patterns that appear in the multimodal data flow). This detection is achieved by learning some of the correlations present in the input flow and generalizing them. To study this problem, I proposed functional paradigms of multimodal data processing that led to SOMMA (Self-Organizing Maps for Multimodal Association), a connectionist, generic and modular architecture. In SOMMA, each modal stimulus is processed in a cortical map, and the interconnection of these maps provides the multimodal data processing. Learning and generalization of correlations are based on the constrained self-organization of each map. At the model level, learning is gradual: monomodal properties are necessary for the emergence of multimodal ones, and the learning of correlations in each map precedes the self-organization of the maps. Furthermore, the connectionist architecture gives SOMMA plasticity and robustness properties that classical artificial intelligence models generally lack. Finally, the learning rules in SOMMA are continuous (online) and unsupervised.
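
To make the general idea concrete, here is a minimal Python/NumPy sketch of the kind of mechanism the abstract describes: one self-organizing map per modality, trained online and unsupervised, with a Hebbian association matrix that learns which units of the two maps tend to co-activate (a simple stand-in for multimodal correlation learning). The class names, parameters, and toy stimuli below are illustrative assumptions only and do not reproduce the actual SOMMA architecture or its constrained self-organization.

```python
import numpy as np

class SOM:
    """Minimal 1-D self-organizing map with online, unsupervised updates."""
    def __init__(self, n_units, dim, lr=0.1, sigma=1.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_units, dim))   # prototype vectors
        self.lr, self.sigma = lr, sigma

    def activity(self, x):
        """Gaussian activity profile centred on the best-matching unit."""
        d = np.linalg.norm(self.w - x, axis=1)
        bmu = np.argmin(d)
        idx = np.arange(len(self.w))
        return np.exp(-((idx - bmu) ** 2) / (2 * self.sigma ** 2))

    def update(self, x, a):
        """Move prototypes toward the stimulus, weighted by the activity profile."""
        self.w += self.lr * a[:, None] * (x - self.w)

# Two modal maps (e.g. "vision" and "audition") plus a Hebbian association
# matrix that learns which units of the two maps co-activate.
som_v, som_a = SOM(20, 2, seed=1), SOM(20, 2, seed=2)
assoc = np.zeros((20, 20))
eta = 0.05                                   # Hebbian learning rate

for t in range(5000):
    # Correlated multimodal stimulus: both modalities are driven by the
    # same latent variable z (the "multimodal invariant").
    z = np.random.rand()
    xv = np.array([z, z ** 2]) + 0.02 * np.random.randn(2)
    xa = np.array([np.sin(z), z]) + 0.02 * np.random.randn(2)

    av, aa = som_v.activity(xv), som_a.activity(xa)
    som_v.update(xv, av)                     # online monomodal self-organization
    som_a.update(xa, aa)
    assoc += eta * np.outer(av, aa)          # online learning of co-activation

# After learning, activity in one map can be propagated through the
# association matrix to predict the expected activity in the other map,
# which is the flavour of multimodal unification discussed in the talk.
predicted_a = assoc.T @ som_v.activity(np.array([0.5, 0.25]))
```

In this toy setting, the association matrix plays the role of the inter-map connections: it is what allows a stimulus in one modality to bias or reconstruct activity in the other, in the spirit of the ventriloquist effect mentioned above.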