Experiment: Neural Collapse


Neural Collapse

Category: Machine Learning

Summary: Tracking the four classical neural-collapse signatures during the terminal phase of training across class count, depth, and hidden dimension.


Neural collapse refers to a distinctive geometric regime that emerges late in classifier training: within-class feature variability shrinks toward zero (NC1), the class means arrange themselves into a highly symmetric simplex equiangular tight frame (NC2), the classifier weights align with those means (NC3), and predictions reduce to a nearest-class-mean decision rule (NC4). This experiment asks how all four canonical signatures develop in multilayer perceptrons trained on synthetic Gaussian-mixture data.
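The first two signatures can be made concrete with a small sketch. The helper below is hypothetical (it is not taken from the project's script): it computes an NC1-style ratio of within-class to between-class feature variance, which should approach zero at collapse, and an NC2-style spread of pairwise cosines between centered class means, which should approach zero as the means converge to a simplex ETF with common cosine -1/(K-1).

```python
import numpy as np

def nc_metrics(H, y, K):
    """Sketch of NC1/NC2-style measurements on features H (n, d) with labels y.

    NC1 proxy: within-class variance divided by between-class variance (-> 0).
    NC2 proxy: std of pairwise cosines between centered class means (-> 0,
    with the common cosine tending to -1/(K-1) for a simplex ETF).
    """
    mu_G = H.mean(axis=0)                                   # global feature mean
    means = np.stack([H[y == k].mean(axis=0) for k in range(K)])
    M = means - mu_G                                        # centered class means
    within = np.mean([np.sum((H[y == k] - means[k]) ** 2) for k in range(K)])
    between = np.sum(M ** 2) / K
    nc1 = within / between
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)       # unit-normalize means
    cos = Mn @ Mn.T
    off_diag = cos[~np.eye(K, dtype=bool)]                  # pairwise cosines
    nc2 = np.std(off_diag)
    return nc1, nc2
```

On perfectly collapsed features (each sample sitting exactly on a simplex-arranged class mean), both quantities are zero.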

The script measures NC1 through NC4 throughout training while varying the number of classes, network depth, and hidden dimension, and it continues training beyond the point of zero training error to capture the terminal phase where the effect is expected to appear most clearly.

That makes the project a geometry study of late training rather than a standard performance benchmark. The result is a time-resolved map of when and under what architectural conditions the neural-collapse structure becomes visible.

Method: NumPy MLP training on synthetic Gaussian-mixture classification, with time-resolved measurements of NC1 through NC4 across architecture settings.
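A minimal sketch of such a synthetic task, assuming isotropic Gaussian blobs with random class centers (the function name and parameters are illustrative, not the project's actual generator):

```python
import numpy as np

def make_gaussian_mixture(K=4, d=8, n_per_class=100, sep=4.0, seed=0):
    """Generate a K-class Gaussian-mixture classification dataset in R^d.

    Each class is an isotropic unit-variance Gaussian blob; `sep` scales
    how far apart the randomly drawn class centers sit.
    """
    rng = np.random.default_rng(seed)
    means = rng.normal(size=(K, d)) * sep / np.sqrt(d)      # class centers
    X = np.concatenate([means[k] + rng.normal(size=(n_per_class, d))
                        for k in range(K)])
    y = np.repeat(np.arange(K), n_per_class)                # class labels
    return X, y
```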

What is measured: Within-class variability collapse, simplex-ETF structure of class means, alignment between classifier weights and class means, nearest-class-mean prediction behavior, and their dependence on class count, depth, and hidden size.
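The nearest-class-mean criterion (NC4) can be checked as below; this helper is a hypothetical sketch, not the project's code. It reports the fraction of samples on which the network's argmax prediction agrees with the nearest-class-mean decision in feature space, a quantity that approaches 1 at collapse.

```python
import numpy as np

def ncm_agreement(H, logits, y, K):
    """NC4 sketch: fraction of samples where the argmax of the network's
    logits matches the nearest-class-mean decision on features H (n, d)."""
    means = np.stack([H[y == k].mean(axis=0) for k in range(K)])
    # Squared distance from every sample to every class mean, shape (n, K).
    d2 = ((H[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return np.mean(d2.argmin(axis=1) == logits.argmax(axis=1))
```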


Powered by BOINC
© 2026 Axiom Project