Category: Machine Learning
Summary: Testing how reservoir size and spectral radius affect echo-state-network prediction quality on chaotic Lorenz time series.
Reservoir computing uses a fixed recurrent dynamical system plus a trained linear readout, but practical performance depends strongly on the reservoir's size and on its dynamical regime, commonly controlled through the spectral radius of the recurrent weight matrix. This experiment asks how prediction accuracy on the chaotic Lorenz attractor changes as reservoir size and spectral radius vary.
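The fixed-recurrence-plus-linear-readout idea can be sketched with the standard echo-state-network update x_t = tanh(W x_{t-1} + W_in u_t). The sketch below is illustrative, not the experiment's actual implementation; the reservoir size, input scaling, and tanh nonlinearity are assumptions, and rescaling the random recurrent matrix to a target spectral radius is the usual way that parameter is set.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=200, spectral_radius=0.9, input_dim=3, input_scale=0.5):
    """Random recurrent matrix rescaled to a target spectral radius,
    plus a random input matrix (illustrative parameter choices)."""
    W = rng.normal(size=(n, n))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scale, input_scale, size=(n, input_dim))
    return W, W_in

def run_reservoir(W, W_in, inputs):
    """Drive the fixed reservoir with an input sequence and collect states;
    only the linear readout on these states is ever trained."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)  # leakless tanh update
        states.append(x.copy())
    return np.array(states)
```

Rescaling by the largest eigenvalue magnitude fixes the linearized memory timescale of the reservoir, which is exactly the knob the sweep varies.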
The workflow generates Lorenz trajectories by numerical integration, builds echo state networks across a grid of reservoir settings, fits linear readouts with ridge regression, and evaluates predictions on held-out data. This isolates how recurrent memory and dynamical richness trade off in a standard chaotic forecasting benchmark.
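The two non-reservoir ingredients of that workflow can be sketched briefly: numerically integrating the Lorenz equations and fitting a ridge-regression readout in closed form. The integrator tolerances, step size, initial condition, and ridge penalty here are assumptions for illustration, not the experiment's actual settings.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system with the classic chaotic parameters."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def lorenz_series(n_steps=2000, dt=0.02, s0=(1.0, 1.0, 1.0)):
    """Integrate one Lorenz trajectory, sampled every dt."""
    t_eval = np.arange(n_steps) * dt
    sol = solve_ivp(lorenz, (0.0, t_eval[-1]), s0,
                    t_eval=t_eval, rtol=1e-9, atol=1e-9)
    return sol.y.T  # shape (n_steps, 3)

def fit_ridge(states, targets, alpha=1e-6):
    """Closed-form ridge readout: W_out = Y^T X (X^T X + alpha I)^{-1}."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + alpha * np.eye(n),
                           states.T @ targets).T
```

The ridge penalty keeps the readout well-conditioned even when reservoir states are highly correlated, which is common at large reservoir sizes.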
The broader scientific question is where useful nonlinear memory lies between reservoirs too weakly recurrent to retain information and reservoirs so strongly recurrent that their dynamics become unstable. The experiment turns that qualitative idea into a measurable performance map over the parameter grid.
Method: Lorenz time-series generation followed by echo-state-network sweeps over reservoir size and spectral radius with ridge-regression readouts.
What is measured: Prediction mean-squared error, dependence on reservoir size, dependence on spectral radius, and relative forecasting quality across configurations.
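The measurements above amount to a sweep that records one-step-ahead MSE per (reservoir size, spectral radius) cell. The self-contained sketch below is an assumed construction, not the experiment's code: it uses a fixed-step RK4 Lorenz integrator, an illustrative washout length and ridge penalty, and a tiny grid in place of whatever grid the experiment actually scans.

```python
import numpy as np

rng = np.random.default_rng(1)

def rk4_lorenz(n_steps, dt=0.02, s=(1.0, 1.0, 1.0)):
    """Fixed-step RK4 integration of the Lorenz system (illustrative)."""
    def f(s):
        x, y, z = s
        return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z])
    s, out = np.array(s, float), []
    for _ in range(n_steps):
        k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s.copy())
    return np.array(out)

def esn_mse(series, n, rho, washout=100, alpha=1e-6):
    """One-step-ahead prediction MSE for one (size, spectral radius) setting."""
    W = rng.normal(size=(n, n))
    W *= rho / max(abs(np.linalg.eigvals(W)))          # set spectral radius
    W_in = rng.uniform(-0.5, 0.5, size=(n, series.shape[1]))
    x, states = np.zeros(n), []
    for u in series[:-1]:                               # drive reservoir
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    X = np.array(states)[washout:]                      # drop transient
    Y = series[1:][washout:]                            # next-step targets
    split = len(X) * 3 // 4                             # train / held-out split
    A, B = X[:split], Y[:split]
    W_out = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ B).T
    pred = X[split:] @ W_out.T
    return float(np.mean((pred - Y[split:]) ** 2))

# Normalize the trajectory so tanh units are not saturated, then sweep a toy grid.
series = rk4_lorenz(1500)
series = (series - series.mean(0)) / series.std(0)
grid = {(n, r): esn_mse(series, n, r) for n in (50, 200) for r in (0.8, 1.2)}
```

Evaluating MSE only on the held-out tail of each run, after discarding the washout transient, mirrors the comparison the experiment makes across configurations.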
