Self-Organizing Neurons: Toward Brain-Inspired Multimodal Association
Lyes Khacef a, Laurent Rodriguez b, Benoît Miramond b
a University of Groningen, Nijenborgh 4, Groningen, The Netherlands
b Université Côte d'Azur, France
Proceedings of Neural Interfaces and Artificial Senses (NIAS)
Online, Spain, 22nd–23rd September 2021
Organizers: Tiago Costa and Georgios Spyropoulos
Invited Speaker, Lyes Khacef, presentation 007
Publication date: 13th September 2021

Our brain-inspired computing approach attempts to reconsider AI and the von Neumann architecture simultaneously. Both are formidable tools that have driven digital and societal revolutions, but they have also become intellectual bottlenecks, tied to the ever-present desire to keep the system under control. The brain remains our only reference in terms of intelligence: we are still learning how it functions, but it appears to be built on a very different paradigm, one whose developmental autonomy gives it an efficiency that computing has not yet attained.

Our research focuses on cortical plasticity, the fundamental mechanism enabling the self-organization of the brain, which in turn leads to the emergence of consistent representations of the world. Indeed, the cerebral cortex self-organizes through local structural and synaptic plasticity mechanisms that are very likely at the basis of an extremely interesting characteristic of human brain development: multimodal association. Despite the diversity of the sensory modalities, such as sight, sound and touch, the brain arrives at the same concepts (convergence). Moreover, biological observations show that one modality can activate the internal representation of another modality when the two are correlated (divergence). In this work, we propose the Reentrant Self-Organizing Map (ReSOM), a brain-inspired neural system based on the reentry theory, built from Self-Organizing Maps (SOMs) and Hebbian-like learning. We propose and compare different computational methods for unsupervised learning and inference, then quantify the gain of the ReSOM in a multimodal classification task. The divergence mechanism is used to label one modality based on the other, while the convergence mechanism is used to improve the overall accuracy of the system.
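The idea above can be sketched in code. The following is a minimal, hypothetical illustration, not the ReSOM implementation itself: it trains two standard Kohonen SOMs on correlated toy data (stand-ins for two sensory modalities), learns cross-map "reentrant" connections with a simple Hebbian outer-product rule, and then uses them for divergence, i.e. activity in one map recalling a representation in the other. All names, grid sizes, and update rules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SOM:
    """Minimal Kohonen Self-Organizing Map on a 2-D neuron grid."""
    def __init__(self, grid=(4, 4), dim=2):
        self.w = rng.random((grid[0] * grid[1], dim))
        gy, gx = np.meshgrid(range(grid[0]), range(grid[1]), indexing="ij")
        self.pos = np.stack([gy.ravel(), gx.ravel()], 1).astype(float)

    def bmu(self, x):
        """Index of the Best Matching Unit for input x."""
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def activity(self, x):
        """Gaussian activity profile: high for neurons whose weights match x."""
        return np.exp(-np.linalg.norm(self.w - x, axis=1) ** 2)

    def train(self, X, epochs=20, lr=0.5, sigma=2.0):
        for t in range(epochs):
            eta = lr * (1.0 - t / epochs)            # decaying learning rate
            sig = max(sigma * (1.0 - t / epochs), 0.2)  # shrinking neighborhood
            for x in rng.permutation(X):
                b = self.bmu(x)
                g = np.exp(-np.linalg.norm(self.pos - self.pos[b], axis=1) ** 2
                           / (2 * sig ** 2))
                self.w += eta * g[:, None] * (x - self.w)

# Two unimodal maps (stand-ins for e.g. a visual and an auditory modality).
som_a, som_b = SOM(), SOM()

# Toy correlated dataset: the same latent class is observed by both modalities.
n = 200
labels = rng.integers(0, 2, n)
Xa = labels[:, None] + 0.1 * rng.standard_normal((n, 2))   # centers (0,0) / (1,1)
Xb = -labels[:, None] + 0.1 * rng.standard_normal((n, 2))  # centers (0,0) / (-1,-1)

som_a.train(Xa)
som_b.train(Xb)

# Hebbian-like learning of the reentrant (lateral) connections:
# neurons that are co-active across the two maps strengthen their link.
W = np.zeros((16, 16))
for xa, xb in zip(Xa, Xb):
    W += np.outer(som_a.activity(xa), som_b.activity(xb))
W /= W.max()

# Divergence: activity in map A induces activity in map B through W,
# so modality A can recall (label) the representation of modality B.
induced_b = W.T @ som_a.activity(Xa[0])
recalled = som_b.w[int(np.argmax(induced_b))]  # weights of the recalled B neuron
```

In this sketch the recalled neuron in map B lands in the cluster that shares the latent class of the map-A input, which is the property the divergence-based labeling relies on; convergence would instead combine the native and induced activities of both maps before classifying.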

We perform our experiments on a constructed written/spoken digits database and a Dynamic Vision Sensor (DVS)/ElectroMyoGraphy (EMG) hand-gestures database. The proposed model is implemented on a cellular neuromorphic architecture that enables distributed computing with local connectivity. We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but learned through self-organization over the course of the system's experience.
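A key consequence of local connectivity is that even a global operation such as finding the Best Matching Unit must be computed by neighbor-to-neighbor exchange. The sketch below illustrates that principle with a distributed min-reduction on a cellular grid; the 4-neighbor torus topology and iteration count are illustrative assumptions, not the authors' exact hardware design.

```python
import numpy as np

def distributed_argmin(dist, iters=None):
    """Distributed BMU search on a 2-D cellular grid.
    Each cell stores a (value, id) pair and exchanges it only with its
    4 torus neighbors, so the global minimum propagates one hop per
    iteration until every cell agrees on the winner."""
    h, w = dist.shape
    best_val = dist.copy()
    best_id = np.arange(h * w).reshape(h, w)
    iters = iters or (h + w)  # grid diameter bounds convergence time
    for _ in range(iters):
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            nb_val = np.roll(best_val, shift, axis=axis)  # neighbor's value
            nb_id = np.roll(best_id, shift, axis=axis)    # neighbor's candidate
            take = nb_val < best_val                      # keep the smaller one
            best_val = np.where(take, nb_val, best_val)
            best_id = np.where(take, nb_id, best_id)
    # After convergence all cells hold the same answer; read any cell.
    return int(best_id.flat[0]), float(best_val.flat[0])

# Toy per-neuron distance map: the minimum (0.1) sits at flat index 4.
dist = np.array([[0.9, 0.4, 0.7],
                 [0.8, 0.1, 0.6],
                 [0.5, 0.3, 0.2]])
bmu_id, bmu_val = distributed_argmin(dist)
```

The point of the exercise is that no cell ever sees the full distance map, so the same mechanism keeps working when neurons (and the ReSOM's learned lateral links) are added or rearranged, which is what makes the topology a product of learning rather than of a fixed user-defined wiring.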

© Fundació Scito