1.1-I1
Event-based sensors are bio-inspired vision sensors that encode visual information as sparse, asynchronous events. Each event encodes a change in the log-luminosity intensity at a given pixel location. As a consequence, event sensors capture information at extremely high temporal resolution and with high dynamic range, while keeping data rates and power consumption low.
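As a rough illustration of this encoding, a single idealized DVS pixel can be simulated in a few lines. This is a minimal sketch of the textbook pixel model, not any particular sensor: the function name `dvs_events` and the contrast threshold value are our own assumptions.

```python
import math

def dvs_events(intensities, timestamps, threshold=0.2):
    """Idealized DVS pixel: emit an event whenever the log-intensity
    has changed by more than `threshold` since the last event."""
    events = []                      # (timestamp, polarity) tuples
    ref = math.log(intensities[0])   # reference log-intensity
    for t, i in zip(timestamps[1:], intensities[1:]):
        delta = math.log(i) - ref
        while abs(delta) >= threshold:        # a large change fires several events
            polarity = 1 if delta > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold       # move the reference level
            delta = math.log(i) - ref
    return events
```

A brightness doubling, for instance, produces a short burst of ON events whose count depends only on the log-intensity change and the threshold, independently of the absolute brightness level.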
Converting a sparse event stream to a dense representation allows classical AI methods, such as convolutional neural networks, to be applied to event data. This is a practical way to leverage the well-established AI ecosystem, and it outperforms classical frame-based camera methods in latency-critical and high-dynamic-range scenarios. However, the low-power and low-data properties of the camera are lost.
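One of the simplest such dense representations is a per-pixel sum of signed event polarities. The sketch below illustrates the idea; the function name and the `(t, x, y, polarity)` event layout are our own convention, not a standard API:

```python
def events_to_frame(events, height, width):
    """Accumulate (t, x, y, polarity) events into a dense 2D frame by
    summing signed polarities per pixel -- one of the simplest dense
    representations that a CNN can consume."""
    frame = [[0] * width for _ in range(height)]
    for t, x, y, p in events:
        frame[y][x] += p          # p is +1 (ON) or -1 (OFF)
    return frame
```

Richer variants (time surfaces, voxel grids binned over time) follow the same pattern but preserve more of the temporal information that this plain count discards.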
Neuromorphic AI will be the cornerstone of the next generation of event-based vision pipelines. Indeed, the asynchronous and ultra-low-power computation paradigms of neuromorphic architectures are a perfect fit for processing event-based data. However, several research directions, on both the algorithmic and hardware implementation sides, remain open and need to be explored.
1.1-I2
Event cameras mimic the human visual pathway by sending image-intensity-change pulses to the neural system. They are a promising alternative to conventional frame-based cameras for detecting ultra-fast motion, thanks to their low latency, robustness to changes in illumination conditions, and low power consumption. These characteristics make them ideal for mobile robotic tasks. However, efficiently exploiting their unconventional sparse and asynchronous spatio-temporal data flow to its full capacity still challenges the computer vision community.
Deep Artificial Neural Networks (ANNs), especially the recent vision transformer (ViT) architecture, have achieved state-of-the-art performance on various visual tasks [1]. However, the straightforward use of ANNs on event input data requires a preprocessing step that constrains its sparse and asynchronous nature. Inspired by computational neuroscience, Spiking Neural Networks (SNNs) turn out to be a natural match for event cameras due to their sparse, event-driven, temporal processing characteristics. SNNs have been applied mostly to classification tasks [2]. Other works address regression tasks such as optical flow estimation [3], [4], depth estimation [5], angular velocity estimation [6], and video reconstruction [7]. However, limited work has been done to incorporate SNNs into full 3D ego-motion estimation.
We first present an optimization-based ego-motion estimation framework that exploits the event-based
optical flow outputs of a trained SNN model [8]. Our method successfully estimates pure rotation and
pure translation motion from input events only and shows the potential of using SNNs for continuous
ego-motion estimation tasks. Second, we present our Hybrid RNN-ViT architecture for optical flow estimation, which uses a ViT to learn global context. We further present preliminary results for its SNN counterpart, which uses SNNs to process the event data directly.
1.1-I3
Amirreza received his B.Sc. and M.Sc. degrees in electrical engineering from Tehran Polytechnic (2010) and Sharif University of Technology (2013), Tehran, Iran. In 2014, he was awarded an F.P.I. scholarship from the Spanish Research Council (CSIC). He received his Ph.D. in neuromorphic engineering from the Instituto de Microelectrónica de Sevilla (IMSE-CNM-CSIC), University of Seville, Seville, Spain, in 2018. He has been a visiting scholar/postdoctoral fellow at the University of Manchester (UK), the Brain and Mind Institute (CerCo, CNRS, Toulouse), the National University of Singapore, imec (Ghent), BrainChip (Toulouse), and GrAI-Matter-Labs (Eindhoven). Since 2020, he has been a neuromorphic architecture researcher at IMEC (Eindhoven, the Netherlands), focusing on the design of low-power neuromorphic processor architectures.
This talk covers the challenges and trade-offs in the design of digital, scalable neuromorphic processing architectures. We will discuss IMEC's achievements in neuromorphic sensory and event-based processing technologies, focusing specifically on the IMEC RISC-V-based neuromorphic processor (SENeCA) in comparison with other state-of-the-art architectures.
SENeCA is a RISC-V-based digital neuromorphic processor that accelerates bio-inspired Spiking Neural Networks for extreme-edge applications (from 1M to 100M neurons) in/near sensors, where ultra-low-power operation and adaptivity are required. SENeCA is optimized to exploit unstructured spatio-temporal sparsity in computation and data transfer. It improves on available solutions by 1) addressing the flexibility issue in neuromorphic processors, 2) improving area efficiency through a 3-level memory hierarchy, 3) enabling efficient deployment of advanced learning mechanisms and optimization algorithms by accelerating neural operations in three data types (int4, int8, and BrainFloat16), and 4) providing efficient event communication through a novel Network-on-Chip with multicasting, a compression mechanism, and source-based routing.
The last section of the talk will be about "why we think neuromorphic event-based processing is the future of data-driven computing", "why it is not yet on the market" and the "IMEC roadmap to address the main challenges and trade-offs both in software and hardware in the neuromorphic processing domain".
1.2-I1
To truly understand how the brain processes sensory information into decisions on actions to take, one needs mappings from measurable quantities to the elementary units of computation. While much is measurable, there is also much debate about what constitutes the elementary unit of computation in the brain: artificial neural networks represent only one such abstract mapping and imply fundamental choices about neural coding and functioning. Trained, large-scale ANNs do map to certain aspects of brain functioning. Still, data from experimental neuroscience strongly suggest that the ANN abstraction is too simple and omits important computational principles of real neuronal processing, such as the spiking nature of neural communication and the diversity and function of neuronal morphology. To investigate these computational principles, we need to be able to train large and complex networks of spiking neurons on specific tasks. In this talk, I will show how effective online learning rules enable the supervised training of large-scale networks of detailed spiking neuron models, and how these models can be integrated with brain-derived decision-making circuits to operate continuously. As I will argue, this approach opens up the investigation of both network and neuronal architectures based on functional principles rather than imputed connectivity patterns.
1.2-I2
Our brain relies on spiking neural networks for rapid, ultra-low-power information processing. Building artificial intelligence that leverages spiking networks with comparable efficiency requires instantiating vast spiking network models on neuromorphic hardware accelerators. However, direct end-to-end training of spiking neural networks remains challenging due to the non-differentiability of spiking neuron models.
Surrogate gradients have emerged as a widespread solution to this problem. In my talk, I will briefly introduce the notion of surrogate gradient learning, showcase its robustness, and illustrate its self-calibration capabilities on analog neuromorphic hardware. I will further discuss the importance of network initialization for deep spiking neural network training and introduce effective bio-inspired initialization strategies. Finally, I will sketch how biologically plausible online learning rules naturally emerge through local approximations of surrogate gradients that exploit block-sparse Jacobians. This step is essential for learning from long temporal sequences and paves the way for exciting future on-chip online learning applications.
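As a toy illustration of the surrogate-gradient idea (a minimal sketch, not the speaker's implementation): the forward pass keeps the hard spike threshold, while the backward pass substitutes a smooth stand-in derivative, here a SuperSpike-style fast sigmoid with an assumed steepness `beta`:

```python
def spike(v, v_th=1.0):
    """Forward pass: non-differentiable Heaviside step."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Backward pass: derivative of a fast sigmoid, used in place of the
    Heaviside's derivative, which is zero almost everywhere."""
    return 1.0 / (1.0 + beta * abs(v - v_th)) ** 2

# One gradient step for a single weight w driving membrane v = w * x,
# with target spike output y. The true gradient vanishes almost
# everywhere; the surrogate still provides a useful learning signal.
w, x, y, lr = 0.5, 1.0, 1.0, 0.5
v = w * x
err = spike(v) - y                       # loss = 0.5 * err**2
grad_w = err * surrogate_grad(v) * x     # chain rule with the surrogate
w -= lr * grad_w                         # w moves toward the threshold
```

In practice the same substitution is registered once as a custom backward function in an auto-differentiation framework, so entire deep SNNs can be trained with ordinary backpropagation.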
1.2-I3
Bernabé Linares-Barranco received the B.S. degree in electronic physics in June 1986 and the M.S. degree in microelectronics in September 1987, both from the University of Seville, Sevilla, Spain. From September 1988 until August 1991 he was a graduate student at the Dept. of Electrical Engineering of Texas A&M University. He received a first Ph.D. degree in high-frequency OTA-C oscillator design in June 1990 from the University of Seville, Spain, and a second Ph.D. degree in analog neural network design in December 1991 from Texas A&M University, College Station, USA.
Since June 1991, he has been a Tenured Scientist at the "Instituto de Microelectrónica de Sevilla" (IMSE-CNM-CSIC), Sevilla, Spain, which since 2015 has been a mixed center between the University of Sevilla and the Spanish Research Council (CSIC). From September 1996 to August 1997, he was on a sabbatical stay at the Department of Electrical and Computer Engineering of Johns Hopkins University. During Spring 2002 he was a Visiting Associate Professor at the Electrical Engineering Department of Texas A&M University, College Station, USA. In January 2003 he was promoted to Tenured Researcher, and in January 2004 to Full Professor. Since February 2018, he has been the Director of the "Instituto de Microelectrónica de Sevilla".
He has been involved in circuit design for telecommunications, VLSI emulators of biological neurons, VLSI neural-based pattern recognition systems, hearing aids, precision circuit design for instrumentation equipment, and VLSI transistor mismatch characterization; over the past 20 years he has been deeply involved in neuromorphic spiking circuits and systems, with a strong emphasis on vision and on exploiting nanoscale memristive devices for learning. He is a co-founder of two start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS (www.graimatterlabs.ai), both on neuromorphic hardware.
Dr. Linares-Barranco was co-recipient of the 1997 IEEE Transactions on VLSI Systems Best Paper Award for the paper "A Real-Time Clustering Microchip Neural Engine", and of the 2000 IEEE Transactions on Circuits and Systems Darlington Award for the paper "A General Translinear Principle for Subthreshold MOS Transistors". He organized the 1995 NIPS Post-Conference Workshop "Neural Hardware Engineering". From July 1997 until June 1999 he was Associate Editor of the IEEE Transactions on Circuits and Systems Part II, and from January 1998 until December 2009 he was also Associate Editor for IEEE Transactions on Neural Networks. Since April 2010 he has been Associate Editor for the journal "Frontiers in Neuromorphic Engineering", part of the open-access "Frontiers in Neuroscience" journal series (http://www.frontiersin.org/). Since January 2021 he has been Specialty Chief Editor of "Frontiers in Neuromorphic Engineering".
He is co-author of the book "Adaptive Resonance Theory Microchips". He was Chief Guest Editor of the IEEE Transactions on Neural Networks Special Issue on "Hardware Neural Networks Implementations". He has been an IEEE Fellow since January 2010. He is listed among the Stanford top 2% most-cited scientists worldwide in Electrical and Electronic Engineering (top 0.62% worldwide, 8th in Spain, 2nd in Andalucía, 1st in CSIC).
We will briefly give an overview of vision with Dynamic Vision Sensor (DVS) cameras and of processing with spiking-based hardware modules, and link them with emerging nanoscale synaptic-like devices that can exploit on-line bio-inspired learning. DVS cameras are frame-free, strongly bio-inspired vision sensors that provide highly energy-efficient visual information encoding, very well suited to processing with spiking neural networks. We will present techniques to process such signals with spiking neural network hardware that can be modularly expanded into scaled-up systems. Spike-Timing-Dependent Plasticity (STDP) is one type of learning rule for Spiking Neural Networks (SNNs). We will present how STDP can be implemented by exploiting novel nanoscale memristor devices, used as synapses, whose resistance changes as correlated spiking signals appear at their terminals. We will show experimental results from a CMOS chip with 4k monolithically integrated nanoscale memristors performing spiking computation and recognition of spiking patterns.
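For readers unfamiliar with STDP, the classic pair-based weight update can be sketched in a few lines. The parameter values below are illustrative textbook choices, not those of the memristor devices discussed in the talk:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair
    (times in ms): potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise, with exponentially decaying
    magnitude as the spikes move apart in time."""
    dt = t_post - t_pre
    if dt >= 0:                                   # pre before post -> LTP
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)    # post before pre -> LTD
```

In a memristive synapse this rule need not be computed explicitly: overlapping pre- and post-synaptic pulses at the device terminals produce a voltage difference whose sign and magnitude drive an analogous resistance change.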
1.2-I4
The recent discovery of surrogate gradient learning (SGL) has been a game changer for the more biology-inspired spiking neural networks (SNNs). In short, by solving non-differentiability issues, it reconciles SNNs with backpropagation, THE algorithm that caused the deep learning revolution. SNNs and conventional artificial neural networks (ANNs) can now be trained using the same algorithm and the same auto-differentiation-enabled tools (e.g. PyTorch or TensorFlow). This bridges the gap between SNNs and ANNs and makes comparisons between them fairer.
In this talk, I will review recent works in which we show that SNNs trained with SGL can solve a broad range of problems, just like ANNs, but potentially with orders of magnitude less energy once implemented on event-based hardware. These problems include image and sound classification, depth and optic flow estimation from event-based cameras, encrypted Internet traffic classification, epileptic seizure detection from electroencephalograms, and more.