Sequence learning in a memristive crossbar array
Sebastian Siegel a, Younes Bouhadjar b, Tom Tetzlaff b, Regina Dittmann c, Dirk Wouters d
a Peter Grünberg Institute (PGI-10), Forschungszentrum Jülich GmbH, Germany
b Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany
c Peter Grünberg Institute (PGI-7), Forschungszentrum Jülich GmbH, Germany
d Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
Proceedings of Materials, devices and systems for neuromorphic computing 2022 (MatNeC22)
Groningen, Netherlands, March 28-29, 2022
Organizers: Jasper van der Velde, Elisabetta Chicca, Yoeri van de Burgt and Beatriz Noheda
Contributed talk, Sebastian Siegel, presentation 014
DOI: https://doi.org/10.29363/nanoge.matnec.2022.014
Publication date: 23rd February 2022

Sequential learning, i.e., the training on and correct prediction of recurring sequences of events, is one of the core capabilities of biological brains and has many potential technological applications, e.g., in natural language processing or anomaly detection. Here we focus on a concept for sequence learning inspired by the Spiking Temporal Memory (Spiking TM) algorithm [1]. This algorithm allows for continuous learning in a sparsely active network, which promises energy efficiency and makes it attractive for edge IoT applications. To overcome the shortfalls and limitations of implementing this algorithm on conventional von Neumann machines, in this work we propose mapping it onto a CMOS-co-integrated resistive switching (ReRAM) crossbar array. ReRAM devices are two-terminal resistive switches whose resistance depends on the voltage history; they are considered low-power, low-area realizations of weights in neural networks.
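As a rough illustration of this crossbar mapping, the following Python sketch (ours, not taken from the paper; the element names, conductance window, and read scheme are assumptions) shows the basic idea that weights stored as ReRAM conductances turn a prediction into a single analog read, i.e., a matrix-vector multiply carried out by Ohm's law and Kirchhoff current summation:

```python
import numpy as np

# Minimal sketch of the crossbar idea (illustration only): weights are
# stored as ReRAM conductances, and a prediction is one analog read.

G_min, G_max = 10e-6, 100e-6          # assumed device conductance window (S)
elements = ["A", "B", "C", "D"]
n = len(elements)

# One trained transition per element pair: conductance G[post, pre]
G = np.full((n, n), G_min)
for pre, post in zip(range(n - 1), range(1, n)):
    G[post, pre] = G_max              # strengthened device for "pre -> post"

V_read = 0.2                          # assumed read voltage (V)
x = np.zeros(n)
x[elements.index("B")] = V_read       # present element "B" on its column

I = G @ x                             # row currents = analog dot products
print("predicted:", elements[int(I.argmax())])   # -> "C"
```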

In a first step, we illustrate the biologically inspired core principles of the Spiking TM algorithm, including prediction, inhibition, and firing-rate homeostasis, and explain how they are adapted into an array-wide spiking learning rule. This Hebbian-type learning rule is implemented solely by the neuron circuitry outside of the ReRAM crossbar array; the weights can therefore be realized by a simple one-transistor-one-ReRAM-device (1T1R) structure. Since the number of weights scales quadratically with the number of neurons, this compact structure benefits the scaling behavior of the architecture. Importantly, the spatial and temporal locality of the Spiking TM algorithm is nevertheless preserved in the proposed learning rule (see the sketch below).
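The outer-product form of a Hebbian update is what makes such a rule crossbar-friendly: the peripheral neuron circuits only need to drive one pulse per row (post-synaptic) and one per column (pre-synaptic), and only devices seeing both pulses are updated. A minimal sketch, with neuron count, conductance window, and pulse scheme assumed by us rather than taken from the circuit of this work:

```python
import numpy as np

# Sketch of an outer-product Hebbian update on a 1T1R crossbar
# (illustration only): row/column pulses from the periphery select
# which devices are potentiated; the array itself stores the weights.

n_neurons = 8
G_min, G_max = 10e-6, 100e-6                 # assumed device window (S)
G = np.full((n_neurons, n_neurons), G_min)   # conductances, all at HRS

pre_spikes = np.zeros(n_neurons)
pre_spikes[2] = 1.0                          # column pulse for pre-neuron 2
post_spikes = np.zeros(n_neurons)
post_spikes[5] = 1.0                         # row pulse for post-neuron 5

dG = 5e-6                                    # assumed update per pulse pair
# Only the device at the crossing of an active row and an active
# column is potentiated, clipped to the physical conductance range:
G = np.clip(G + dG * np.outer(post_spikes, pre_spikes), G_min, G_max)

# The array holds n_neurons**2 weights -> quadratic scaling with neurons
print("devices:", G.size)                    # 64 for 8 neurons
```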

Circuit-level simulations of the array and its periphery are conducted using the physics-based JART VCM v1b model [2]. After illustrating the training procedure, we show how the system successfully learns context sensitivity, i.e., it discriminates between sequences that share a subset of sequence elements (high-order sequences), and we investigate the energy consumption during training.
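To see why such high-order sequences require context, consider the following toy context-cell construction (our simplified illustration in the spirit of temporal-memory-style learning, not the Spiking TM network of [1] nor the simulated circuit): each element owns a column of cells, and which cell fires within a column encodes the context, so two sequences sharing the subsequence B, C stay distinguishable.

```python
import numpy as np

# Toy context-cell mechanism (illustration only): the identity of the
# active cell within an element's column carries the sequence context.

elements = ["A", "B", "C", "D", "X", "Y"]
k = 4                                    # cells per column (assumed)
n_cells = len(elements) * k
cells = {e: [i * k + j for j in range(k)] for i, e in enumerate(elements)}
W = np.zeros((n_cells, n_cells))         # W[post, pre], binary links

def step(element, prev, learn):
    """Activate one cell in `element`'s column given the previous cell."""
    col = cells[element]
    predicted = [c for c in col if prev is not None and W[c, prev] > 0]
    if predicted:
        active = predicted[0]
    else:
        # no prediction ("bursting"): allocate a fresh cell for this context
        unused = [c for c in col if not W[c, :].any()]
        active = unused[0] if unused else col[0]
    if learn and prev is not None:
        W[active, prev] = 1.0            # Hebbian link: prev -> active
    return active

def train(seq, epochs=3):
    for _ in range(epochs):
        prev = None
        for e in seq:
            prev = step(e, prev, learn=True)

def predict_after(seq):
    prev = None
    for e in seq:
        prev = step(e, prev, learn=False)
    successors = np.flatnonzero(W[:, prev])
    return sorted({elements[c // k] for c in successors})

# Two high-order sequences sharing the subsequence B, C:
train(["A", "B", "C", "D"])
train(["X", "B", "C", "Y"])
print(predict_after(["A", "B", "C"]))    # ['D']
print(predict_after(["X", "B", "C"]))    # ['Y']
```

A first-order rule would tie between D and Y after C; because different C-cells are active in the A-context and the X-context, the prediction is unambiguous.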
