Experiment: Double Descent v2


Category: Machine Learning

Summary: Measuring whether test error peaks near the interpolation threshold and then falls again as model width keeps increasing under label noise.


Classical bias-variance reasoning predicts a single U-shaped test-error curve, but modern theory suggests a second descent once models become heavily overparameterized. This experiment asks whether that double-descent pattern appears clearly in a controlled classification problem with noisy labels.

The script trains single-hidden-layer networks across a width sweep that crosses the estimated interpolation threshold, where the model first has enough capacity to fit the training data perfectly. By injecting label noise and training to convergence, it sharpens the expected peak and makes the transition easier to resolve.
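
The page does not give the actual dataset, width grid, or training budget, so the sketch below is only a hypothetical illustration of such a sweep: a single-hidden-layer tanh network trained by full-batch gradient descent on a tiny synthetic binary task with flipped labels, recording training error at each width. All sizes, the noise rate, and the learning rate here are placeholder choices, not the experiment's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic task (placeholder for the real data): 2-D points,
# labels from a linear rule, with a fraction of labels flipped.
n, d, noise = 200, 2, 0.15
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
flip = rng.random(n) < noise          # inject label noise
y[flip] = 1.0 - y[flip]

def train_mlp(width, steps=2000, lr=0.5):
    """Train a single-hidden-layer tanh net with a logistic output
    and return its final training error rate."""
    W1 = rng.normal(scale=1.0 / np.sqrt(d), size=(d, width))
    b1 = np.zeros(width)
    W2 = rng.normal(scale=1.0 / np.sqrt(width), size=width)
    b2 = 0.0
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))   # sigmoid output
        g = (p - y) / n                            # dLoss/dlogit (BCE)
        # Backprop through output layer, then hidden layer.
        W2 -= lr * (H.T @ g)
        b2 -= lr * g.sum()
        dH = np.outer(g, W2) * (1.0 - H**2)
        W1 -= lr * (X.T @ dH)
        b1 -= lr * dH.sum(axis=0)
    H = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))
    return float(np.mean((p > 0.5) != y))

# Width sweep; the real experiment crosses the interpolation threshold.
widths = [1, 2, 4, 8, 16, 32]
train_err = {w: train_mlp(w) for w in widths}
```

In the real experiment the same loop would also record held-out test error per width, and the sweep would extend far enough past the interpolation threshold to expose the second descent.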

That gives the experiment a mechanistic focus: not just whether bigger is better, but whether the worst generalization happens right at the boundary between underparameterized and overparameterized fitting.

Method: Width sweep of noisy-label MLP training across the interpolation threshold, measuring train and test performance after convergence.

What is measured: Test accuracy, test loss, training fit quality, location of the interpolation threshold, and strength of the double-descent peak.
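
A minimal post-processing sketch of two of those quantities, assuming the sweep has produced per-width train and test error arrays: the interpolation threshold is taken as the smallest width with zero training error, and peak strength as how far the test-error maximum rises above the best test error seen before it. The numbers below are illustrative placeholders, not measured results.

```python
import numpy as np

# Illustrative sweep output (not real measurements).
widths    = np.array([1, 2, 4, 8, 16, 32, 64, 128])
train_err = np.array([0.30, 0.22, 0.15, 0.08, 0.00, 0.00, 0.00, 0.00])
test_err  = np.array([0.32, 0.26, 0.24, 0.28, 0.35, 0.27, 0.22, 0.20])

# Interpolation threshold: smallest width that fits training data exactly.
thresh_idx = int(np.argmax(train_err == 0.0))
threshold = int(widths[thresh_idx])

# Peak strength: height of the test-error peak above the best test
# error achieved at any smaller width.
peak_idx = int(np.argmax(test_err))
peak_strength = float(test_err[peak_idx] - test_err[:peak_idx].min())
```

With these placeholder arrays the threshold lands at width 16 and the peak sits right at that width, which is the signature the experiment is looking for.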


Powered by BOINC
© 2026 Axiom Project