Spiking neural networks trained with surrogate gradients as universal function approximators
Timothée Masquelier a
a CNRS, Université de Rennes 1, Campus de Beaulieu, Rennes, 35000, France
nanoGe Fall Meeting
Proceedings of Materials for Sustainable Development Conference (MAT-SUS) (NFM22)
Neuromorphic Sensory-Processing-Learning Systems inspired by Computational Neuroscience
Barcelona, Spain, October 24th-28th, 2022
Organizers: Bernabé Linares Barranco and Timothée Masquelier
Invited Speaker, Timothée Masquelier, presentation 259
DOI: https://doi.org/10.29363/nanoge.nfm.2022.259
Publication date: 11th July 2022

The recent discovery of surrogate gradient learning (SGL) has been a game changer for the more biologically inspired spiking neural networks (SNNs). In short, by solving non-differentiability issues, it reconciles SNNs with backpropagation, THE algorithm that caused the deep learning revolution. SNNs and conventional artificial neural networks (ANNs) can now be trained using the same algorithm and the same auto-differentiation-enabled tools (e.g. PyTorch or TensorFlow). This bridges the gap between SNNs and ANNs, and makes comparisons between them fairer.
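To illustrate the core idea, here is a minimal, hypothetical sketch (not the speaker's actual code) of how SGL is typically expressed in PyTorch: the forward pass uses the non-differentiable Heaviside step to emit spikes, while the backward pass substitutes a smooth surrogate derivative (here a fast-sigmoid-style 1/(1+|v|)^2, one common choice) so that backpropagation can flow through the spiking nonlinearity.

```python
# Hypothetical minimal sketch of surrogate gradient learning in PyTorch.
import torch


class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()  # Heaviside step: spike if membrane potential v > 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Surrogate derivative 1 / (1 + |v|)^2 replaces the true derivative
        # (a Dirac delta), which would otherwise block gradient flow.
        return grad_output / (1.0 + v.abs()) ** 2


spike = SpikeFn.apply

# Toy usage: spikes are binary, yet gradients w.r.t. v are nonzero everywhere.
v = torch.tensor([0.5, -0.3, 1.2], requires_grad=True)
s = spike(v)
s.sum().backward()
print(s)       # tensor([1., 0., 1.])
print(v.grad)  # smooth surrogate gradients, nonzero for all entries
```

Because `SpikeFn` plugs into PyTorch's autograd like any other operation, an SNN built from such spiking layers can be trained with the exact same optimizers and training loops as a conventional ANN.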

In this talk, I will review recent works in which we show that SNNs trained with SGL can solve a broad range of problems, just like ANNs, but potentially with orders of magnitude less energy once implemented on event-based hardware. These problems include image and sound classification, depth and optical flow estimation from event-based cameras, encrypted Internet traffic classification, and epileptic seizure detection from electroencephalograms.
