H4-42-I1
Gianluca Milano, PI of the ERC project MEMBRAIN, is currently a Senior Researcher at the Italian National Institute of Metrological Research (INRiM). He holds a Ph.D. in Physics cum laude from Politecnico di Torino, obtained in collaboration with the Italian Institute of Technology (IIT), and a bachelor's and master's degree in Physics from the University of Torino. He is the author of more than 50 articles in peer-reviewed international journals and 2 patents, and has given more than 10 invited talks at international conferences. Besides being responsible for national and European projects at INRiM, he was coordinator of the European project MEMQuD, which involved 15 European partners including universities, research centers, and industries. He is associate editor of APL Machine Learning and co-organizer of the workshop Deep Learning meets Neuromorphic Hardware. He was selected as “Emerging Leader 2022” by the Journal of Physics D: Applied Physics Editorial Board, received the Advanced Materials Award 2025 from the International Association of Advanced Materials for his notable and outstanding research contribution in the field of Advanced Materials Science & Technology, and received the NEST Prize for Nanoscience in 2021.
Self-organizing memristive networks have been demonstrated as physical substrates for the in materia implementation of unconventional computing paradigms such as reservoir computing [1,2]. However, a detailed understanding of the relationship between electrical properties, emergent dynamics, and computational properties in these multiterminal systems remains challenging. Here, we report on emergent functionalities in multiterminal nanowire networks through a combined experimental and modeling approach. Beyond extending the traditional two-terminal memristive concept to multiterminal architectures, we discuss experimental characterization techniques used to analyze these emergent behaviors, including conductance matrices, voltage maps, and conductivity maps obtained via electrical resistance tomography. By linking self-organizing memristive systems to dynamical systems theory, we introduce a new framework that enables the description of self-organizing nanowire networks as stochastic dynamical systems [3]. In this framework, emergent dynamics are modeled as an Ornstein-Uhlenbeck process, where stimulus-dependent deterministic trajectories coexist with stochastic effects such as noise and discrete jumps. Within this context, we demonstrate that the emergent dynamics of nanowire-based self-organizing systems can be harnessed for computation. We evaluate their computational capabilities through benchmark tasks, including nonlinear autoregressive moving average (NARMA) and nonlinear transformation (NLT) tasks, and discuss the implementation of pattern recognition, speech recognition, and time-series prediction.
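To make the Ornstein-Uhlenbeck picture concrete, the following minimal Python sketch (an illustration under assumed parameters, not the model of Ref. [3]; names such as `g_inf`, `tau`, `sigma`, and `jump_rate` are placeholders) integrates a network conductance whose deterministic relaxation towards a stimulus-dependent target coexists with Gaussian noise and rare discrete jumps:

```python
import numpy as np

def simulate_ou_with_jumps(g0=1.0, g_inf=5.0, tau=1.0, sigma=0.2,
                           jump_rate=0.5, jump_scale=0.5,
                           dt=1e-3, t_max=10.0, seed=0):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process with
    Poisson-distributed discrete jumps: the conductance g(t) relaxes towards
    the stimulus-dependent target g_inf (all parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    g = np.empty(n)
    g[0] = g0
    for k in range(1, n):
        drift = (g_inf - g[k - 1]) / tau           # deterministic, stimulus-dependent part
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        jump = 0.0
        if rng.random() < jump_rate * dt:          # rare discrete conductance jump
            jump = jump_scale * rng.standard_normal()
        g[k] = g[k - 1] + drift * dt + noise + jump
    return np.arange(n) * dt, g

t, g = simulate_ou_with_jumps()
print(f"final conductance ~ {g[-1]:.2f}")
```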
H4-42-I2
Susanne Hoffmann-Eifert holds a degree in physics and a doctorate in electrical engineering from RWTH Aachen University. She currently works as a senior scientist at the Peter Grünberg Institute for Electronic Materials at Forschungszentrum Jülich, where she heads a research group on functional thin films for novel electronic devices. She has been engaged in research and development of high-dielectric-constant and ferroelectric thin films for volatile and non-volatile random-access memories. Her current research projects focus on the fabrication, optimisation and CMOS integration of nanoscale memristive devices for use in neuromorphic computing circuits. She is the author and co-author of more than 145 peer-reviewed publications. ORCID: 0000-0003-1682-826X
Neuromorphic computing implemented with spiking neural networks (SNNs) using volatile threshold switching devices is a promising computing paradigm that may overcome future limitations of the von Neumann architecture. Computation based on dynamics and timing requires new device technologies with rich internal dynamics, for which the volatile electrochemical memory (v-ECM), also known as the diffusive memristor, is very promising [1,2]. v-ECM devices switch from the high-resistance to the low-resistance state when the voltage rises above the threshold voltage and automatically restore the high-resistance state after a relaxation time once the voltage falls below the holding voltage [3]. The advantages of v-ECMs compared to insulator-to-metal-based threshold switches are their extremely low leakage current and low threshold voltages, while a drawback is their higher switching variability. We have investigated in detail the set kinetics and relaxation dynamics of v-ECM threshold switches based on analysis of the transient current in pulsed-voltage measurements of Ag/HfO2/Pt crossbar devices with 3 nm of amorphous HfO2 grown by atomic layer deposition. The results show a clear correlation between the relaxation time and the operating parameters with reference to the set kinetics [4,5]. This is crucial for the application of v-ECM cells in neuromorphic circuits. There are several approaches to understanding the physical origin of the relaxation behaviour, ranging from surface energy minimization [6] to the effect of an electromotive force [7]. The presentation will summarize different approaches to the fabrication, physical understanding, and application of v-ECM devices, also known as diffusive memristors. In addition, recent results based on integrate-and-fire circuits will be discussed.
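The switching behaviour described above can be captured qualitatively by a simple phenomenological model. The sketch below is a minimal illustration with assumed, not measured, values for `v_th`, `v_hold`, `r_on`, `r_off`, and `t_relax`; it reproduces the set event above the threshold voltage and the spontaneous relaxation back to the high-resistance state once the voltage stays below the holding voltage:

```python
import numpy as np

def vecm_response(v, dt=1e-6, v_th=0.4, v_hold=0.1,
                  r_on=1e3, r_off=1e10, t_relax=1e-4):
    """Phenomenological volatile threshold switch: sets to the low-resistance
    state when the voltage exceeds v_th, and spontaneously relaxes back to the
    high-resistance state t_relax after the voltage drops below v_hold.
    All values are illustrative, not fitted device parameters."""
    r = r_off
    t_below = 0.0          # time spent below the holding voltage since the last set
    current = np.empty_like(v)
    for k, vk in enumerate(v):
        if vk > v_th:
            r, t_below = r_on, 0.0        # set event
        elif r == r_on:
            if vk < v_hold:
                t_below += dt
                if t_below >= t_relax:    # spontaneous relaxation (volatility)
                    r = r_off
            else:
                t_below = 0.0
        current[k] = vk / r
    return current

# Example: a single voltage pulse followed by a low read voltage
t = np.arange(0, 1e-3, 1e-6)
v = np.where((t > 1e-4) & (t < 3e-4), 0.6, 0.05)
i = vecm_response(v)
print(f"peak current {i.max():.2e} A, final current {i[-1]:.2e} A")
```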
H4-42-O1

Nowadays, the increasing popularity of extremely energy-demanding AI models raises serious concerns in terms of environmental sustainability [1]. At the same time, the need to process vast amounts of often unstructured data highlights the urgency of exploring new and efficient computational paradigms. The mammalian brain performs complex tasks with remarkable efficiency compared to digital computers, exhibiting adaptability and learning capabilities. These features arise from the brain's peculiar architecture, comprising hundreds of billions of neurons interconnected through an enormous number of synapses. Neurons integrate signals in a nonlinear fashion, while synapses modulate their transmission efficacy through plasticity, thus enabling learning. Neuromorphic Computing (NC) draws inspiration from the brain to develop viable alternatives to conventional digital computing.
Recently, gold cluster-assembled films (ns-Au) have attracted significant attention due to their topological complexity, memory effects, and non-linear electrical response, making them promising physical substrates for NC applications. Their intriguing neuromorphic properties arise from Resistive Switching (RS) phenomena displayed by the nano-junctions distributed throughout the network [2][3]. Notably, the implementation of a reconfigurable nonlinear threshold logic gate based on ns-Au has already been demonstrated [4].
We focused our study on the characterization of the switching behaviour exhibited by ns-Au, with the aim of investigating the emergent electrical response of these systems. We employed a novel microthermography-based imaging technique to track the regions responsible for the RS activity during electrical stimulation, referred to as “switching sites”. This analysis allowed us to reliably study the correlation between different switching sites, gaining insight into the functional connectivity of the network [5]. Moreover, we highlighted that the geometry of the deposit also has a strong impact on the RS activity, particularly influencing the number of switching sites and the statistics of the RS, providing meaningful insight for the future design and engineering of ns-Au-based devices.
Furthermore, our experiments allowed us to optimize electrical stimulation protocols for reliably tuning the electrical resistance map of multi-electrode devices and the connectivity of the network by selectively triggering the formation and destruction of conductive paths. In this way, we demonstrate fine, reversible control over the conduction state of the devices, which had not been previously reported for self-assembled materials. This characterization paves the way for optimized stimulation protocols that implement specific computing functionalities in reconfigurable ns-Au-based multiterminal devices.
H4-42-O2

H4-43-I1
Juan Bisquert (PhD, Universitat de València, 1991) is a Distinguished Research Professor at the Instituto de Tecnología Química (Universitat Politècnica de València-Consejo Superior de Investigaciones Científicas). He is Executive Editor for Europe of the Journal of Physical Chemistry Letters. He has been included in the list of Highly Cited Researchers from 2014 to 2024. His research activity has focused on the application of measurement techniques and physical modeling to several areas of energy device materials, using organic and hybrid semiconductors such as those in halide perovskite solar cells. Currently, his main research topic aims to create miniature devices that operate as neurons and synapses for bio-inspired neuromorphic computation related to data sensing and image processing. The work on this topic combines the harnessing of hysteresis and memory properties of ionic-electronic conducting devices, such as memristors and transistors, towards computational networks. The work is supported by a European Research Council Advanced Grant.
Nonlinear oscillators are increasingly recognized as core building blocks for physics-based computation, where information is processed through dynamical behaviors such as spiking, synchronization, and phase relationships. A wide variety of material platforms—including phase-change compounds, mixed ionic–electronic conductors, van der Waals materials, and nanoscale fluidic channels—have been explored for such dynamical devices. Yet across these diverse systems, a unifying challenge emerges: computational functionality is determined not only by materials properties, but by the underlying dynamical structure of the device. In this work, I present a cross-platform dynamical description of oscillator devices—spanning silicon thyristors, organic electrochemical transistors, and rectifying nanopores—demonstrating how nonlinear dynamics and bifurcation theory provide a general framework for designing, tuning, and optimizing oscillatory behavior for physical computation.
Despite their disparate physical mechanisms, these systems share a common architecture: a nonlinear element exhibiting negative differential resistance or negative transconductance, coupled to slow dynamical variables arising from capacitive charging, ionic motion, or configurational relaxation. Together these ingredients form slow–fast dynamical systems capable of self-sustained oscillations through a Hopf bifurcation, generating limit cycles whose amplitude, waveform, and frequency can be engineered through device parameters and external circuit elements.
I first discuss a compact two-terminal silicon thyristor oscillator with an ultrasmooth S-type NDR characteristic, enabling robust and hysteresis-free oscillations. Combined experimental and analytical work identifies a tunable Hopf bifurcation governed by input current and capacitance, producing a continuous transition from sinusoidal oscillations near onset to relaxation waveforms at higher capacitances. When operated close to the bifurcation threshold, the system exhibits stochastic resonance, illustrating how intrinsic nonlinearities can amplify weak signals and support temporal processing tasks.
I then extend the same dynamical framework to a single-transistor organic electrochemical oscillator, where the coupling between ion transport and electronic conduction creates a peaked transfer curve and an effective negative transconductance region. Despite the distinct physics of mixed ionic–electronic materials, the device maps onto the same two-variable oscillator class: a fast destabilizing electronic response combined with a slow ionic recovery. The Hopf criterion accurately predicts the emergence of autonomous oscillations without external amplifiers, showing how minimal circuit motifs can generate functional dynamical behavior in soft and biointegrated materials.
Finally, I show that fluidic nanopores—rectifying channels with hysteresis and deactivation dynamics—exhibit an analogous bifurcation structure. When described with fast activation and slow deactivation variables, the nanopore displays a negative resistance sector and undergoes a Hopf bifurcation, thereby functioning as a minimal liquid-phase oscillator. This demonstrates that the same mathematical formulation governing solid-state and organic oscillators extends naturally to fluidic systems, enabling dynamical computation in aqueous environments.
Across these platforms, a unified dynamical-systems perspective yields general design rules for oscillatory devices: (i) the shape and smoothness of the stationary I–V or transfer curve dictate the onset of instability; (ii) the ratio of fast to slow timescales controls the transition between harmonic and relaxation dynamics; (iii) external resistive–capacitive elements act as tunable parameters for frequency, amplitude, phase, and noise sensitivity; and (iv) operating near bifurcation points provides regimes of enhanced responsiveness, valuable for sensing, synchronization, and oscillator-based computational architectures.
Overall, this work argues that nonlinear dynamics offers a materials-agnostic design principle for next-generation dynamical devices. By treating oscillators as physical–computational entities governed by universal bifurcation structures, we open new pathways for engineering scalable, tunable, and energy-efficient computation across solid-state, organic, and fluidic technologies [1,2].
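As a generic illustration of the two-variable slow-fast oscillator class discussed above (not a model of the specific thyristor, organic electrochemical, or nanopore devices), the following Python sketch uses a FitzHugh-Nagumo-type unit, whose fast N-shaped nullcline and slow recovery variable reproduce the Hopf transition from a quiescent fixed point to a limit cycle as the bias is increased:

```python
import numpy as np

def fitzhugh_nagumo(i_ext, eps=0.08, a=0.7, b=0.8, dt=0.01, t_max=400.0):
    """Minimal slow-fast oscillator (FitzHugh-Nagumo form): a fast variable v
    with an N-shaped nullcline and a slow recovery variable w. Sweeping the
    bias i_ext moves the system through a Hopf bifurcation from a stable
    fixed point to a limit cycle. Parameters are textbook values, not fits
    to any of the devices discussed in the abstract."""
    n = int(t_max / dt)
    v, w = -1.2, -0.6
    trace = np.empty(n)
    for k in range(n):
        dv = v - v**3 / 3 - w + i_ext             # fast destabilizing branch
        dw = eps * (v + a - b * w)                # slow recovery
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    return trace

for i_ext in (0.0, 0.5):                          # below vs. above the oscillation onset
    v = fitzhugh_nagumo(i_ext)
    amp = v[len(v)//2:].max() - v[len(v)//2:].min()
    print(f"I = {i_ext}: steady-state peak-to-peak amplitude ~ {amp:.2f}")
```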
H4-43-I2
Artificial neural network (ANN)-based computing [e.g., deep learning with a multi-layer neural network (NN)] can provide excellent learning, classification, and inference characteristics that are close to, and in some cases beyond, those found in natural intelligence (i.e., the human brain), whereas the enormous amounts of power required by ANN (as in a typical multi-layer NN) are far higher than that required by human beings. To overcome the low energy efficiency of ANN computing, in-materio computing, which harnesses the inherent properties of materials to perform computation, has recently attracted attention. Among various types of computing, physical reservoir computing (PRC) is particularly attractive because it can significantly reduce the computational resources required to process time-series data by leveraging the nonlinear responses of a ‘reservoir’ (a material or device acting as a dynamical system) to input signals. Realizing nonlinear and diverse dynamics with nanomaterials and/or nanospace is thus a major challenge for the development of low-power, highly integrated ANN-based computing devices. Recently, we have developed high-performance PRC devices based on iontronic phenomena. One example is an ion-gating reservoir (IGR), which utilizes ion-electron coupled dynamics in the vicinity of a solid electric double layer [1,2]. Another example is a magnonic PRC device that utilizes the chaotic dynamics of interfered spin waves in a ferrimagnetic Y3Fe5O12 (YIG) single crystal [3]. In the presentation, the device performance and the operating mechanisms of the two PRC systems will be discussed.
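To illustrate the PRC scheme at the algorithmic level, independently of any specific physical reservoir, the sketch below uses a random tanh echo-state network as a software stand-in (all sizes and hyperparameters are arbitrary assumptions) and trains only a linear ridge-regression readout on the reservoir states to reproduce a second-order NARMA target:

```python
import numpy as np

def narma2(u):
    """Second-order NARMA target commonly used to benchmark reservoirs."""
    y = np.zeros_like(u)
    for t in range(1, len(u) - 1):
        y[t + 1] = 0.4 * y[t] + 0.4 * y[t] * y[t - 1] + 0.6 * u[t] ** 3 + 0.1
    return y

rng = np.random.default_rng(1)
n_steps, n_nodes = 2000, 50
u = rng.uniform(0.0, 0.5, n_steps)
target = narma2(u)

# Software stand-in for the physical reservoir: a random tanh echo-state network.
w_in = rng.uniform(-1, 1, n_nodes)
w_res = rng.normal(0, 1, (n_nodes, n_nodes))
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))   # spectral radius < 1
x = np.zeros(n_nodes)
states = np.empty((n_steps, n_nodes))
for t in range(n_steps):
    x = np.tanh(w_in * u[t] + w_res @ x)
    states[t] = x

# Only the linear readout is trained (ridge regression), as in reservoir computing.
split, lam = 1500, 1e-6
A = states[100:split]
w_out = np.linalg.solve(A.T @ A + lam * np.eye(n_nodes), A.T @ target[100:split])
pred = states[split:] @ w_out
nrmse = np.sqrt(np.mean((pred - target[split:]) ** 2)) / np.std(target[split:])
print(f"NARMA-2 test NRMSE ~ {nrmse:.3f}")
```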
H4-43-O1

Randomly assembled materials composed of nanoparticles or nanowires exhibit remarkable functional properties that can be leveraged to mimic brain-like data processing. This capability stems from the inherent memristive character and the intricate wiring of the junctions connecting the building blocks within the nanostructured network.
Specifically, nanostructured cluster-assembled metallic films, such as those fabricated by assembling gas-phase-produced platinum clusters, demonstrate nonlinear conduction properties. These arise from the extremely high density of grain boundaries and the resulting complex arrangement of nanojunctions, which give rise to a current response to applied voltage ramps that reveals the presence of negative differential resistance.
Crucially, different stable values of electrical resistance can be accessed and reversibly set using unipolar voltage pulses. This process enables a long-term memory effect due to the stability of these distinct resistive levels over time. Such traits have allowed the integration of these nanostructured films with conventional electronic components in programmable analog circuits, including devices like gain amplifiers and relaxation oscillators. Exploiting the unique properties inherent in these platinum cluster-assembled films, a novel class of neuromorphic systems can be realized for unconventional electronic devices. The reprogrammability of the electrical resistance, the complex connectivity of the network that results in nonlocal modification of the resistance map, and the long-term memory effects can be exploited to implement electrical components that function as reconfigurable nonlinear threshold logic gates. Furthermore, the peculiar nonlinear electrical response of platinum-based cluster-assembled devices is demonstrated to be well suited to hardware solutions that can significantly decrease the energy and time consumption of data processing, particularly for classification tasks.
H4-43-I3

H4-43-O2

Oscillatory dynamics are a cornerstone of computational physics and emerging neuromorphic hardware, where information processing relies on synchronization and collective temporal behavior. Despite their importance, predicting the onset of self-sustained oscillations in physical devices remains a nontrivial task, particularly in systems governed by strong nonlinearities and local activity. In this work, we establish practical and physically grounded criteria for the emergence of oscillations in neuronic units by applying bifurcation theory to systems exhibiting nonlinear and locally active behavior, including both S-type and N-type negative differential resistance [1].
By analyzing stationary states through the Jacobian matrix and tracking stability transitions in the trace–determinant plane, we systematically identify the parameter regimes that give rise to Hopf bifurcations and sustained oscillatory dynamics. These analytical predictions are corroborated by numerical simulations, revealing clear connections between device parameters, oscillation frequency, and stability. In addition, we identify characteristic signatures in the impedance spectra associated with the onset of oscillations, providing experimentally accessible markers of dynamical transitions.
This unified dynamical framework enables reliable prediction and control of oscillatory behavior across a broad class of ionic–electronic and memristive devices, and offers concrete design guidelines for the development of robust, tunable neuromorphic oscillators and scalable oscillator-based computing networks.
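The trace-determinant analysis described above can be prototyped in a few lines. The sketch below is an illustration only: the vector field is a generic N-type FitzHugh-Nagumo-like unit with assumed parameters, not one of the ionic-electronic device models from this work. It sweeps a bias parameter, evaluates the Jacobian numerically at each stationary state, and flags the regime change where the trace changes sign while the determinant remains positive (the Hopf onset):

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of a vector field f at the point x."""
    n = len(x)
    J = np.empty((n, n))
    f0 = np.asarray(f(x))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - f0) / eps
    return J

def hopf_onset(field, fixed_point, param_values):
    """Track trace and determinant of the Jacobian along a parameter sweep;
    a Hopf bifurcation is signalled when the trace changes sign while the
    determinant stays positive (complex-conjugate eigenvalues crossing the
    imaginary axis)."""
    for p in param_values:
        J = jacobian(lambda x: field(x, p), fixed_point(p))
        tr, det = np.trace(J), np.linalg.det(J)
        stable = "stable" if tr < 0 and det > 0 else "unstable"
        print(f"p = {p:5.2f}: tr = {tr:+.3f}, det = {det:+.3f} -> {stable}")

# Illustrative example (not a specific device model): an N-type unit whose
# bias p moves the stationary state across the Hopf threshold.
def field(x, p, eps=0.08, a=0.7, b=0.8):
    v, w = x
    return [v - v**3 / 3 - w + p, eps * (v + a - b * w)]

def fixed_point(p, a=0.7, b=0.8):
    # Stationary v solves v - v^3/3 - (v + a)/b + p = 0; w follows from w = (v + a)/b.
    coeffs = [-1.0 / 3.0, 0.0, 1.0 - 1.0 / b, p - a / b]
    v = float(np.real([r for r in np.roots(coeffs) if abs(r.imag) < 1e-6][0]))
    return np.array([v, (v + a) / b])

hopf_onset(field, fixed_point, np.linspace(0.0, 0.6, 7))
```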
H4-52-I1
Tiny processing entities, such as microcontrollers, are at the basis of the Internet of Things paradigm. The integration of Machine Learning models on small computing units requires the development of architectures and hardware solutions that do not need processing support from the cloud or the training processes typical of Artificial Intelligence, which are power-consuming and involve data security and privacy risks [1]. To face this challenge, a novel threshold logic gate design, called the Receptron, has recently been proposed as an alternative to multilayer architectures based on the perceptrons typical of artificial neural networks [2]. The receptron is based on the use of nonlinear weights, thus widening the spectrum of computable Boolean functions while simplifying training thanks to a random search protocol [3,4]. The hardware implementation of the Receptron model has been demonstrated to run on standard microcontrollers and CMOS components.
The reconfigurability and functional completeness of the receptron allow faster, more efficient data processing, possibly at the edge, compared to traditional architectures used for artificial neural networks. Here we report the fabrication of a receptron-based electronic board for edge data processing and classification that works on analog inputs and is capable of learning with extremely reduced training.
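To convey the idea of input-dependent (nonlinear) weights and random-search training, the Python sketch below is a schematic caricature only, not the published Receptron model of Refs. [2-4]: the lookup-table weights, the threshold, and the trial budget are all assumptions for illustration. It searches at random for a configuration that realizes a non-linearly-separable Boolean function, here 3-input parity:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
inputs = np.array(list(product([0, 1], repeat=3)))          # all 3-bit input patterns
target = np.logical_xor.reduce(inputs, axis=1).astype(int)  # 3-input parity (not linearly separable)

def gate_output(x, W, theta):
    """Toy threshold gate with input-dependent weights: the weights applied to
    x are looked up from a table indexed by the full input pattern, which is
    what makes non-linearly-separable functions reachable."""
    idx = int("".join(map(str, x)), 2)
    return int(W[idx] @ x > theta)

def random_search(n_trials=20000):
    """Random-search training: draw weight tables and thresholds at random and
    keep the first configuration that reproduces the target truth table."""
    for _ in range(n_trials):
        W = rng.uniform(-1, 1, (len(inputs), inputs.shape[1]))
        theta = rng.uniform(-1, 1)
        if all(gate_output(x, W, theta) == t for x, t in zip(inputs, target)):
            return W, theta
    return None, None

W, theta = random_search()
print("solution found" if W is not None else "no solution in budget")
```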
H4-52-I2

I am Principal Investigator at the International School of Advanced Studies (SISSA) of Trieste, Italy.
I was born in [Genova](https://en.wikipedia.org/wiki/Genoa) (Italy) in 1974, I received my *scientific* high-school diploma in 1992, and I graduated summa cum laude in *Electronic Engineering* in 1997 at the [Univ. of Genova](http://www.unige.it) (Italy), specializing in *Biomedical* *Engineering*. In 2001, after I received a [PhD in *Bioengineering*](http://www.dottorato.polimi.it/corsi-di-dottorato/corsi-di-dottorato-attivi/bioingegneria/), with a thesis in Computational Neuroscience, from the Polytechnic of Milan (Italy), I decided to move abroad to continue my academic training.
In the same year, I received an award from the [Human Frontiers Science Program Organization](http://www.hfsp.org) to pursue postdoctoral training in experimental Electrophysiology and Neurobiology at the [Inst. of Physiology](http://www.physio.unibe.ch) of the Univ. of Bern (Switzerland),
where I had the opportunity to work with Prof. Hans-Rudolf Luescher and [Prof. Stefano Fusi](http://neuroscience.columbia.edu/profile/stefanofusi). In 2005, I moved to the [Brain Mind Institute](http://bmi.epfl.ch) at the Swiss Federal Institute of Technology of Lausanne where I joined the experimental lab of [Prof. Henry Markram](https://en.wikipedia.org/wiki/Henry_Markram) as junior group leader.
Three years later, in 2008, I was appointed faculty member at the [University of Antwerp](https://www.uantwerpen.be/en/) (Belgium), taking over the
Theoretical Neurobiology lab as a successor of [Prof. Erik De Schutter](https://loop.frontiersin.org/people/132/bio), to extend its scope to
interdisciplinary research in experimental Neuroscience and Neuroengineering. During the period 2013-2015, I was also visiting scientist at the [Neuroelectronics Flanders Institute](http://www.nerf.be) at IMEC, Leuven (Belgium). Over the years, I received visiting appointments at the [Department of Computer Science](https://www.sheffield.ac.uk/dcs) of the University of Sheffield (UK) and at the Brain Mind Institute of the EPFL (Switzerland). In 2012, I received my [tenure](https://en.wikipedia.org/wiki/Academic_tenure) and later, in 2016,
I was promoted to full professor.
From 2008 until 2019, I directed the Laboratory for Theoretical Neurobiology and Neuroengineering, founding in 2017 a new research unit on Molecular, Cellular, and Network Excitability.
In 2019, I moved to the International School of Advanced Studies (SISSA) of Trieste, where I became faculty in the Neuroscience Area and I started the Neuronal Dynamics Laboratory. In 2024, I was called by the University of Modena and Reggio Emilia to launch a new faculty of Bioengineering for Innovation in Medicine.
Both in Neuroscience and in AI, the input-output transfer function of the units composing a large network is a pivotal element. Known as the "activation function" (in Machine Learning) or the frequency-current curve (in Neurobiology), it is a key source of non-linearity as well as a biophysical primitive for neural computation.
We also know that knowledge of this curve is instrumental (in Computational Neuroscience) for analysing and predicting the collective behavior of real neuronal circuits by mean-field theories. Finally, in experimental Neuroscience, the estimate of the frequency-current curve of nerve cells has been used for over three decades to classify and make sense of neuronal diversity.
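As one textbook example of such a transfer function (the standard leaky integrate-and-fire expression, given here purely for illustration and not tied to the data discussed in this talk), the firing rate as a function of a constant input current I reads:

```latex
f(I) =
  \begin{cases}
    \left[\, t_{\mathrm{ref}} + \tau_m \ln\!\dfrac{R I}{R I - V_{\mathrm{th}}} \,\right]^{-1}, & R I > V_{\mathrm{th}},\\[6pt]
    0, & \text{otherwise},
  \end{cases}
```

where τ_m is the membrane time constant, R the input resistance, V_th the spike threshold measured from the reset potential, and t_ref the absolute refractory period.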
However, such a static description is inadequate to interpret how microcircuits and large networks of the brain process time-varying stimuli. Early theoretical studies and later experimental work from our group and others revealed that probing single-cell dynamical response properties is necessary to interpret ultra-fast ensemble responses and other collective network phenomena.
In this talk, I will review the results on the dynamical response properties of neurons and neuronal ensembles and put them in the context of the unexpected differences found between rodent and human cortical neurons.
Whether these generalised biological features should also be considered for (neuromorphic) implementations of AI systems based on spiking neural models is a question I will leave open to the audience.
References
Linaro D, Ocker GK, Doiron B, Giugliano M (2019) Correlation transfer by layer 5 cortical neurons under recreated synaptic inputs in vitro, J Neuroscience, 39 (39) 7648-7663, https://doi.org/10.1523/JNEUROSCI.3169-18.2019
Linaro D, Biró I, Giugliano M (2018) Dynamical response properties of neocortical neurons to conductance-driven time-varying inputs, European Journal of Neuroscience 47(1):17–32, https://doi.org/10.1111/ejn.13761
Testa-Silva, G., Verhoog, M.B., de Kock, C.P.J., Baayen, J.C., Meredith, R.M., Giugliano, M.*, Mansvelder, H.D.* (2014) High bandwidth synaptic communication and frequency tracking in human neocortex. PLoS Biology 12(11):e1002007. http://dx.doi.org/10.1371/journal.pbio.1002007
Köndgen, H., Geisler, C., Fusi, S., Wang, X.-J., Lüscher, H.-R., Giugliano, M. (2008) The dynamical response properties of neocortical neurons to temporally modulated noisy inputs in vitro, Cerebral Cortex 18(9), 665-670. http://dx.doi.org/10.1093/cercor/bhm235
H4-52-O1

The performance of computers based on the von Neumann architecture [1], with separate data-processing and memory units, is hindered by power dissipation, execution time, and sustainability issues [2]. This makes it challenging to process in real time the substantial amounts of data required for complex tasks, which are carried out with great efficiency by the human brain. The brain is a network composed of basic computing units, neurons, which are densely connected in a self-assembled and redundant way through synapses. The conductive strength of the synapses depends on the signals previously received, namely spike trains whose frequency encodes information; computation and memory thus take place together.
Computational models and engineering solutions emulating the biological nervous system are called neuromorphic; thin films made of metallic clusters, fabricated by Supersonic Cluster Beam Deposition [4], offer an experimental strategy for implementing such a paradigm in hardware. These systems show non-linear electrical properties and resistive switching [5,6]: their resistance exhibits spiking activity and explores distinct levels when a constant voltage is applied.
Here, we report how gold nanostructured networks change their conductive state, in a controllable and reproducible way, depending on the features of the voltage trains used as input signals.
We demonstrated that the device resistance is affected by the temporal features and voltage levels of pulsed signals, switching reversibly between levels several orders of magnitude apart; this switching mechanism is controlled by varying the timing and polarity of the signal. Furthermore, the devices preserve their conductive state over time and can cycle dozens of times between different resistance levels.
This behaviour, first observed in two-terminal devices, is also found in multielectrode ones. Not only are reversibility and input dependence preserved, but the more complex terminal geometry also allows studying the non-local electrical response of such devices: applying a proper signal between two terminals modifies not only the resistance of the path connecting them, but also that of the paths that do not cross it, in a non-trivial and reproducible way.
This type of device emulates the heterosynaptic plasticity featured in the brain, which makes it possible to change the network state instantaneously and globally depending on input signal features. We propose the use of multi-electrode devices based on cluster-assembled thin films both as neuromorphic multiplexers and as reconfigurable threshold logic gates.
To further engineer these devices, we plan to fabricate cluster-assembled films from tailored metallic materials featuring different thermal and electrical properties, as well as to study their real-time response to analog inputs.
H4-52-I3

Self-assembled conducting-polymer architectures offer a powerful route to create adaptive bioelectronic systems without relying on rigid templates or conventional patterning. Among these materials, PEDOT dendritic fibers formed through AC electropolymerization stand out for their ability to grow into hierarchical, branched networks whose morphology and conductivity emerge directly from local electric fields and ionic dynamics. This template-less growth not only simplifies fabrication but also yields architectures with intrinsic electronic responsiveness and mixed ionic/electronic transport.
In this talk, we present recent advances demonstrating how these dendritic networks can serve as adaptive interfaces for biological systems. Their complex microstructure provides both a physical scaffold and an active electrochemical environment that supports cell adhesion and organization. Because the fibers modulate—and are modulated by—the surrounding electrolyte, they enable a form of reciprocal bioelectronic feedback, where cellular activity influences the polymer’s state and vice versa. This creates opportunities for spatially localized stimulation, sensing, and computation within a single soft-matter platform.
To guide the rational design of such systems, we introduce a computational framework that models field-driven dendritic morphogenesis and predicts the resulting electrotactic landscape for cells. The simulations capture key experimental trends and identify regions that promote directed cell motion or alignment, providing a foundation for programming template-less interfaces capable of influencing cellular behavior.
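The abstract does not specify the modeling framework in detail; as one generic way such field-driven morphogenesis could be prototyped (a hypothetical field-biased diffusion-limited-aggregation toy model, with all parameters assumed and no claim to match the authors' simulations), consider the sketch below, which grows a branched aggregate from a bottom electrode under a drift that stands in for the applied field:

```python
import numpy as np

def biased_dla(n_particles=300, size=101, bias=0.25, seed=0):
    """Toy field-biased diffusion-limited aggregation on a 2D lattice.
    Random walkers launched from the top drift downward (a crude stand-in
    for an applied field) and stick when they touch the aggregate growing
    from the bottom electrode. Generic illustration only, not the modeling
    framework described in the abstract."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    grid[-1, :] = True                      # bottom row acts as the seed electrode
    moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    # downward steps are more likely than upward ones -> field bias
    p = np.array([0.25 - bias / 2, 0.25 + bias / 2, 0.25, 0.25])
    for _ in range(n_particles):
        r, c = 0, rng.integers(size)
        while True:
            dr, dc = moves[rng.choice(4, p=p)]
            r = min(max(r + dr, 0), size - 1)
            c = (c + dc) % size             # periodic sideways boundaries
            # stick if any 4-neighbour already belongs to the aggregate
            if ((r + 1 < size and grid[r + 1, c]) or (r > 0 and grid[r - 1, c])
                    or grid[r, (c - 1) % size] or grid[r, (c + 1) % size]):
                grid[r, c] = True
                break
    return grid

dendrite = biased_dla()
print(f"aggregate occupies {dendrite.sum()} lattice sites")
```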
Overall, these results outline a pathway toward electrochemically-assembled, biologically integrated, and computationally capable materials—advancing the vision of soft, adaptive bioelectronics rooted in emergent physical principles.