Publication date: 21st July 2025
Facing the energy and computational demands of large artificial intelligence (AI) models, significant efforts have focused on overcoming the memory-bandwidth bottleneck by integrating memory and processing units [1]. Analog in-memory computing (AIMC), particularly with resistive array-based architectures, is a promising approach, enabling massively parallel, energy-efficient vector-matrix multiplication (VMM) operations directly where the data reside [2]. Resistive crossbar arrays map deep neural network (DNN) architectures efficiently onto hardware, realizing the synaptic interconnects in which the cross-point resistive devices store the synaptic weights as conductance values [3].
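As a rough illustration of the crossbar VMM principle described above (not part of the presented work), the sketch below models an idealized array in Python: each signed weight is encoded as a differential pair of conductances, input activations are applied as row voltages, and the column currents give the vector-matrix product in one parallel step. The function name, the differential-pair mapping, and the ideal linear-device assumption are illustrative choices, not details of the CMO/HfOx technology.

```python
import numpy as np

def crossbar_vmm(voltages, g_pos, g_neg):
    """Idealized analog VMM on a resistive crossbar.

    A signed weight W[i, j] is encoded as a differential pair of
    conductances, W ~ g_pos - g_neg. Applying input voltages on the rows
    and summing currents on the columns (Ohm's and Kirchhoff's laws)
    yields the vector-matrix product in a single parallel operation.
    """
    i_pos = voltages @ g_pos   # column currents of the "positive" array
    i_neg = voltages @ g_neg   # column currents of the "negative" array
    return i_pos - i_neg       # differential read-out ~ voltages @ W

# Toy check: map a small weight matrix onto conductances and compare
# against the digital matrix product (exact only for ideal devices).
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))           # synaptic weights (arbitrary units)
g_max = 1.0                                 # assumed maximum device conductance
g_pos = np.clip(weights, 0, None) * g_max   # positive part -> one device
g_neg = np.clip(-weights, 0, None) * g_max  # negative part -> paired device

x = rng.normal(size=4)                      # input activations as row voltages
print(np.allclose(crossbar_vmm(x, g_pos, g_neg), x @ weights))  # True
```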
In this talk, we present a CMOS-integrated analog resistive memory (ReRAM) technology based on fab-friendly conductive metal oxide and HfOx materials [4], enabling fully parallel in-memory compute operations (inference and training) in crossbar circuits. The results highlight the potential of our technology as scalable, energy-efficient analog AI hardware supporting both inference and training on a single platform.
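For the training side, the following minimal sketch shows how a crossbar can apply a fully parallel, rank-one (outer-product) weight update by pulsing rows with the activations and columns with the errors. The ideal additive conductance change and the clipping to a range [0, g_max] are simplifying assumptions for illustration; they do not describe the actual update physics or programming scheme of the presented CMO/HfOx devices.

```python
import numpy as np

def outer_product_update(g_pos, g_neg, x, err, lr=0.01, g_max=1.0):
    """One idealized in-memory training step.

    The rank-one update lr * outer(x, err) is applied to all devices in
    parallel; positive changes go to the "positive" array, negative changes
    to the paired "negative" array, so that (g_pos - g_neg) tracks the
    weight. Conductances are clipped to the assumed device range [0, g_max].
    """
    dw = lr * np.outer(x, err)                            # desired weight change
    g_pos = np.clip(g_pos + np.clip(dw, 0, None), 0, g_max)
    g_neg = np.clip(g_neg + np.clip(-dw, 0, None), 0, g_max)
    return g_pos, g_neg
```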
This work is funded by the EU within the PHASTRAC project (grant ID: 101092096). The authors also acknowledge the Binnig and Rohrer Nanotechnology Center (BRNC) at IBM Research Europe.