Experiment: Memorization Dynamics



Category: Machine Learning

Summary: Testing whether neural networks learn clean structure before they begin memorizing corrupted labels.


A common claim in machine learning is that networks first fit genuine patterns and only later memorize noise. This experiment asks how that separation plays out over training when the dataset contains different fractions of corrupted labels.

The script trains a small multilayer perceptron on a synthetic classification task while varying the corruption rate, and it records per-epoch metrics separately for clean and corrupted subsets. That makes it possible to see whether useful generalization peaks before memorization of mislabeled examples accelerates.
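The setup described above can be sketched in plain NumPy. Everything specific here is an assumption for illustration: the synthetic task (labels from a random linear rule), the layer sizes, the 20% corruption rate, and the full-batch training loop stand in for whatever the actual script uses. The key bookkeeping is the corruption mask, which lets clean-subset and corrupted-subset accuracy be recorded separately each epoch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task (assumed setup): labels follow a random linear rule.
n, d, hidden = 1000, 20, 64
X = rng.normal(size=(n, d))
y_true = (X @ rng.normal(size=d) > 0).astype(int)

# Flip a fraction of labels and remember which examples were corrupted.
corrupt_frac = 0.2
corrupt_mask = rng.random(n) < corrupt_frac
y = np.where(corrupt_mask, 1 - y_true, y_true)

# One-hidden-layer MLP trained with full-batch gradient descent on cross-entropy.
W1 = rng.normal(size=(d, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 2)) * 0.1
b2 = np.zeros(2)

clean_curve, corrupt_curve = [], []
lr = 1.0
for epoch in range(500):
    a = np.maximum(X @ W1 + b1, 0.0)              # ReLU hidden activations
    logits = a @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities

    # Accuracy against the (possibly flipped) training labels, split by subset.
    pred = logits.argmax(axis=1)
    clean_curve.append(float((pred == y)[~corrupt_mask].mean()))
    corrupt_curve.append(float((pred == y)[corrupt_mask].mean()))

    # Backprop through softmax cross-entropy.
    g = p.copy()
    g[np.arange(n), y] -= 1.0
    g /= n
    ga = (g @ W2.T) * (a > 0)
    W2 -= lr * (a.T @ g)
    b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ ga)
    b1 -= lr * ga.sum(axis=0)
```

Rising accuracy on the corrupted subset can only come from fitting the flipped labels themselves, which is why tracking it separately serves as a direct memorization signal.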

The value of the experiment is in the time-resolved comparison, not just final accuracy. It is designed to expose when the training process shifts from learning shared structure to fitting exceptions.

Method: NumPy-based MLP training with label-corruption sweeps and per-epoch metrics tracked separately on clean and corrupted subsets.

What is measured: Clean-subset performance, corrupted-subset performance, training dynamics over epochs, dependence on corruption fraction, and evidence for generalize-before-memorize behavior.
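One way to quantify the generalize-before-memorize evidence from the recorded curves is to compare when each subset's accuracy first reaches a fixed cutoff. The sketch below uses illustrative sigmoid curves in place of real measurements, and the 0.6 threshold and the `first_crossing` helper are assumptions, not part of the experiment's actual analysis.

```python
import numpy as np

def first_crossing(acc_curve, threshold=0.6):
    """First epoch at which an accuracy curve reaches `threshold`,
    or None if it never does. The 0.6 cutoff is an arbitrary assumption."""
    above = np.nonzero(np.asarray(acc_curve) >= threshold)[0]
    return int(above[0]) if above.size else None

# Illustrative curves standing in for recorded per-epoch accuracies:
# clean-subset accuracy rises early, corrupted-subset accuracy rises late.
epochs = np.arange(100)
clean_curve = 1.0 / (1.0 + np.exp(-(epochs - 10) / 3.0))
corrupt_curve = 1.0 / (1.0 + np.exp(-(epochs - 60) / 5.0))

clean_onset = first_crossing(clean_curve)
corrupt_onset = first_crossing(corrupt_curve)
# A positive gap (corrupt_onset - clean_onset) is the
# generalize-before-memorize signal; how that gap shrinks as the
# corruption fraction grows is what the sweep is designed to expose.
```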


Powered by BOINC
© 2026 Axiom Project