Category: Machine Learning
Summary: Measuring how reservoir size and spectral radius shape echo-state-network performance across Lorenz, Mackey-Glass, and NARMA-10 forecasting tasks.
Reservoir computing is attractive because a fixed recurrent core can be paired with a simple trained readout, but performance depends strongly on the reservoir’s internal dynamical regime. This experiment asks how forecast error scales with reservoir size and spectral radius across several benchmark time-series tasks with different levels of complexity and memory demand.
The script trains echo state networks on Lorenz, Mackey-Glass, and NARMA-10 sequences over a grid of reservoir sizes and spectral radii. It then compares normalized mean squared error (NMSE), identifies the best settings per task, and fits scaling relationships between reservoir size and prediction quality.
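A minimal sketch of one cell in such a grid sweep: a standard echo state network with the recurrent weights rescaled to a target spectral radius, a ridge-regression readout, and NMSE as the score. This is an illustrative reconstruction, not the script's actual code; the names (`make_esn`, `spectral_radius`, the toy sine task standing in for Lorenz/Mackey-Glass/NARMA-10) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_esn(n, spectral_radius, in_dim=1):
    """Random reservoir rescaled so its largest eigenvalue magnitude
    equals the requested spectral radius."""
    W = rng.standard_normal((n, n))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n, in_dim))
    return W, W_in

def run_reservoir(W, W_in, u):
    """Drive the reservoir with input sequence u, collecting states."""
    x = np.zeros(W.shape[0])
    states = np.empty((len(u), W.shape[0]))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by target variance."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

# Toy one-step-ahead task on a noisy sine; the real study uses
# Lorenz, Mackey-Glass, and NARMA-10 sequences here.
T = 2000
series = np.sin(0.2 * np.arange(T)) + 0.05 * rng.standard_normal(T)
u, y = series[:-1], series[1:]

W, W_in = make_esn(n=200, spectral_radius=0.9)
X = run_reservoir(W, W_in, u)
washout = 100  # discard initial transient states
Xtr, ytr = X[washout:1500], y[washout:1500]
Xte, yte = X[1500:], y[1500:]

# Ridge readout: W_out = (X'X + lam*I)^-1 X'y
lam = 1e-6
W_out = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
print("test NMSE:", nmse(yte, Xte @ W_out))
```

The sweep then repeats this for every (size, radius) pair and every task, keeping the test NMSE for each cell.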
That turns a tuning exercise into a cross-task scaling study. The important question is whether the expected edge-of-chaos regime and power-law improvement with size appear consistently across qualitatively different dynamical systems.
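The power-law test above can be sketched as a log-log regression: if NMSE scales as a * N^(-b) with reservoir size N, the exponent b falls out as the negative slope of log(NMSE) against log(N). The sample numbers below are fabricated for illustration only, not results from the study.

```python
import numpy as np

# Hypothetical per-size NMSE values following a power law with
# multiplicative noise (exponent 0.7 is an arbitrary example).
rng = np.random.default_rng(1)
sizes = np.array([50, 100, 200, 400, 800])
nmse_vals = 2.0 * sizes ** -0.7 * np.exp(0.05 * rng.standard_normal(5))

# Fit log(NMSE) = log(a) - b * log(N); slope gives -b.
slope, intercept = np.polyfit(np.log(sizes), np.log(nmse_vals), 1)
print(f"fitted exponent b = {-slope:.2f}, prefactor a = {np.exp(intercept):.2f}")
```

A consistent exponent across Lorenz, Mackey-Glass, and NARMA-10 would support the cross-task scaling claim; divergent exponents would suggest the scaling is task-specific.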
Method: Echo-state-network sweeps across three chaotic or memory-demanding time-series tasks, five reservoir sizes, and four spectral radii, evaluated by normalized mean squared error.
What is measured: Test NMSE, best configuration per task, best spectral radius by size, scaling-law exponents, mean cross-task ranking of settings, and overall best reservoir configuration.
