
Edge of Chaos in Random Recurrent Neural Networks

Category: Neuroscience

Summary: Finding where random recurrent neural networks cross from ordered to chaotic dynamics and how that transition aligns with memory capacity.


Random recurrent neural networks can either damp perturbations away or amplify them chaotically, and the boundary between those regimes is often linked to optimal information processing. This experiment asks where the largest Lyapunov exponent crosses zero as the spectral radius of the recurrent weight matrix is varied.
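As a concrete reference point, here is a minimal sketch of the kind of network involved, assuming tanh rate units and i.i.d. Gaussian recurrent weights rescaled to a target spectral radius; the page does not specify the exact model, so these choices (and the helper names) are illustrative:

```python
import numpy as np

def random_rnn(n, spectral_radius, rng):
    """Dense Gaussian recurrent matrix rescaled to a target spectral radius."""
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def step(W, x, u=0.0):
    """One update of the discrete-time dynamics x_{t+1} = tanh(W x_t + u_t)."""
    return np.tanh(W @ x + u)
```

The intuition behind the sweep: below spectral radius one, the linearization around the origin is contracting and small perturbations decay; well above one, they are amplified, and the Lyapunov measurement makes that boundary precise.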

The script tracks the divergence of nearby trajectories to estimate the largest Lyapunov exponent, and it separately measures memory capacity by training a linear readout to reconstruct past inputs from the hidden state. Comparing the two observables reveals whether memory performance peaks near the point where the dynamics leave the ordered regime.
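A hedged sketch of both observables, assuming the standard estimators rather than the project's actual script: the largest Lyapunov exponent from the growth rate of a repeatedly renormalized perturbation along an autonomous trajectory (an input-driven variant would give a conditional exponent instead), and memory capacity in Jaeger's sense, the sum over delays of the squared correlation between the delayed input and a ridge-regression readout. Function names and parameter values are illustrative:

```python
import numpy as np

def largest_lyapunov(W, steps=2000, eps=1e-8, seed=0):
    """Average log growth rate of a small perturbation along an
    autonomous trajectory, renormalized at every step."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = 0.1 * rng.standard_normal(n)
    d = rng.standard_normal(n)
    d *= eps / np.linalg.norm(d)
    log_growth = 0.0
    for _ in range(steps):
        x_next = np.tanh(W @ x)
        d_next = np.tanh(W @ (x + d)) - x_next
        norm = np.linalg.norm(d_next)
        log_growth += np.log(norm / eps)
        d = d_next * (eps / norm)  # rescale so the separation stays small
        x = x_next
    return log_growth / steps

def memory_capacity(W, max_delay=50, T=5000, washout=200, ridge=1e-6, seed=1):
    """Jaeger-style memory capacity: sum over delays k of the squared
    correlation between u_{t-k} and a linear readout of the state x_t."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    w_in = rng.standard_normal(n)
    u = rng.uniform(-1.0, 1.0, size=T)
    X = np.zeros((T, n))
    x = np.zeros(n)
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        X[t] = x
    Z = X[washout:]                          # discard the transient
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = u[washout - k : T - k]           # target: input k steps in the past
        w = np.linalg.solve(Z.T @ Z + ridge * np.eye(n), Z.T @ y)
        c = np.corrcoef(y, Z @ w)[0, 1]
        mc += c**2 if np.isfinite(c) else 0.0
    return mc
```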

That makes the project a direct numerical study of the edge-of-chaos hypothesis: it connects a dynamical stability marker, the sign of the largest Lyapunov exponent, to a functional measure of temporal information retention.

Method: Random-RNN simulations that sweep spectral radius while estimating Lyapunov exponents and memory capacity from the resulting trajectories.
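Tying the pieces together, a sweep along the lines the method describes might look like the following, reusing the hypothetical helpers sketched above; the grid, network size, and seed are illustrative, not taken from the project:

```python
import numpy as np

rng = np.random.default_rng(42)
for rho in np.linspace(0.5, 1.5, 11):
    W = random_rnn(200, rho, rng)      # from the sketch above
    lam = largest_lyapunov(W)          # sign marks order vs. chaos
    mc = memory_capacity(W)            # temporal reconstruction quality
    print(f"rho={rho:.2f}  lyapunov={lam:+.3f}  MC={mc:.1f}")
```

The zero crossing of the printed exponent, located for instance by interpolating between adjacent grid points, gives the spectral-radius position of the order-chaos boundary against which the memory-capacity curve is compared.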

What is measured: Largest Lyapunov exponent, memory capacity, spectral-radius location of the order-chaos boundary, and alignment of memory peaks with that boundary.

