Experiment: Feature Learning Phase Transitions



Category: Machine Learning

Summary: Mapping where a simple neural network trained on spiral classification behaves in a lazy, kernel-like regime versus a rich feature-learning regime as width, learning rate, and initialization scale vary.


Neural networks can sometimes solve a task mostly by making small kernel-like adjustments around their initialization, while in other settings they substantially reorganize internal features. This experiment asks where that lazy-to-rich transition falls for a modest multilayer perceptron trained on spiral classification data.
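The spiral task itself can be sketched with a standard two-arm spiral generator. The function below is a hypothetical reconstruction (names and parameter values are assumptions, not taken from the experiment's script):

```python
import numpy as np

def make_spirals(n_per_class=200, noise=0.1, turns=2.0, seed=0):
    """Two-arm spiral classification data (hypothetical generator)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for label in (0, 1):
        t = np.linspace(0.25, 1.0, n_per_class)        # radius grows along the arm
        theta = 2 * np.pi * turns * t + np.pi * label  # second arm rotated by pi
        pts = np.stack([t * np.cos(theta), t * np.sin(theta)], axis=1)
        pts += noise * rng.standard_normal(pts.shape)  # jitter around the arm
        X.append(pts)
        y.append(np.full(n_per_class, label))
    return np.concatenate(X), np.concatenate(y)
```

The interleaved arms are not linearly separable, which is what makes the task a reasonable probe of whether the network must learn new features or can get by with small adjustments.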

The script sweeps network width, learning rate, and initialization scale, then measures final accuracy, relative weight change, and representation similarity between the trained hidden representations and their values at initialization. Runs are classified as lazy or rich depending on how much the parameters move, producing a phase map over the training hyperparameters.
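A sweep of this kind is typically a cross product over the three hyperparameters. The grid values below are placeholders, since the page does not state the actual ranges:

```python
import itertools

# Hypothetical sweep grid; the experiment's actual values are not specified here.
widths = [16, 64, 256, 1024]
lrs = [1e-3, 1e-2, 1e-1]
init_scales = [0.5, 1.0, 2.0]

configs = [
    {"width": w, "lr": lr, "init_scale": s}
    for w, lr, s in itertools.product(widths, lrs, init_scales)
]
# Each config would train one network and record accuracy,
# relative weight change, and representation similarity.
```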

That framing matters because the question is not just which configuration performs best. It is about whether qualitatively different training regimes appear in a systematic way, and whether those regimes line up with measurable representation change rather than only with accuracy.

Method: Parameter sweeps of a two-hidden-layer MLP on spiral data, with regime classification based on relative weight change and representation similarity.
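The regime classification described above can be sketched as follows. The norm-based definition of relative weight change and the cutoff value are assumptions chosen for illustration:

```python
import numpy as np

def relative_weight_change(w_init, w_final):
    """||W_T - W_0|| / ||W_0||, treating all parameter arrays as one vector."""
    num = np.sqrt(sum(np.sum((wf - wi) ** 2) for wi, wf in zip(w_init, w_final)))
    den = np.sqrt(sum(np.sum(wi ** 2) for wi in w_init))
    return num / den

def classify_regime(rel_change, threshold=0.1):
    """Label a run lazy or rich by how far the weights moved.

    The 0.1 threshold is a hypothetical choice; the experiment's cutoff
    is not stated on this page.
    """
    return "lazy" if rel_change < threshold else "rich"
```

A run that solves the task while barely moving from initialization lands in the lazy bucket; a run whose weights travel far lands in the rich bucket, regardless of final accuracy.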

What is measured: Train and test accuracy, relative weight change, representation similarity, phase-transition locations, and counts of lazy versus rich runs.
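Representation similarity between trained and initial hidden activations could be measured in several ways; linear CKA is one common choice, used here purely as an illustrative stand-in since the page does not name the exact measure:

```python
import numpy as np

def representation_similarity(H0, HT):
    """Linear CKA between hidden activations at init (H0) and after training (HT).

    Returns 1.0 for identical representations and values near 0 for
    unrelated ones. CKA is an assumed choice of similarity measure.
    """
    def center(H):
        return H - H.mean(axis=0, keepdims=True)

    A, B = center(H0), center(HT)
    hsic = np.linalg.norm(B.T @ A, "fro") ** 2
    return hsic / (np.linalg.norm(A.T @ A, "fro") * np.linalg.norm(B.T @ B, "fro"))
```

High similarity alongside small weight change would mark a lazy run; low similarity with large weight change would mark a rich one, which is the correspondence the phase map is meant to expose.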


Powered by BOINC
© 2026 Axiom Project