Experiment: Memorization Dynamics



Category: Machine Learning

Summary: Tracking whether networks learn clean examples before memorizing corrupted labels, and how that ordering shifts as label noise increases.


Neural networks can often fit noisy labels, but evidence suggests they may first learn the cleaner underlying structure before memorization takes over. This experiment asks how that balance shifts as the fraction of corrupted labels increases.

The design trains a two-hidden-layer MLP on a synthetic classification task while sweeping label corruption from 0 percent to 60 percent. It records per-epoch behavior on clean and corrupted subsets separately, so learning and memorization can be disentangled rather than folded into one aggregate accuracy number.
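The corruption step of such a design can be sketched in a few lines of NumPy. This is an illustrative helper under assumed conventions (the function name `corrupt_labels` and the uniform flip-to-a-different-class scheme are assumptions, not the project's actual code); the returned mask is what later lets clean and corrupted subsets be evaluated separately.

```python
import numpy as np

def corrupt_labels(y, frac, num_classes, rng):
    """Flip a random fraction of labels to a different class.

    Returns the corrupted label array and a boolean mask marking
    which examples were corrupted. (Hypothetical helper; the actual
    experiment's corruption scheme may differ.)
    """
    y = y.copy()
    n = len(y)
    idx = rng.choice(n, size=int(frac * n), replace=False)
    # Add a nonzero offset mod num_classes so the new label is
    # guaranteed to differ from the original.
    offsets = rng.integers(1, num_classes, size=len(idx))
    y[idx] = (y[idx] + offsets) % num_classes
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return y, mask
```

Keeping the mask rather than just the corrupted labels is the key design choice: it is what makes the per-subset bookkeeping described above possible.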

That makes the run a temporal study of generalization versus memorization. The key scientific question is whether clean structure is reliably learned first and how the crossover changes with increasing corruption.

Method: NumPy MLP training on synthetic classification with a sweep over label-noise fractions and separate per-epoch evaluation on clean and corrupted subsets.
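A minimal sketch of what one point in the sweep might look like in plain NumPy, assuming full-batch gradient descent on softmax cross-entropy, ReLU activations, and He initialization. All names, hyperparameters, and layer sizes here (`train_mlp`, `hidden=64`, etc.) are illustrative assumptions, not the project's actual code; the essential part is that accuracy is recorded per epoch on the clean and corrupted subsets separately.

```python
import numpy as np

def train_mlp(X, y_noisy, corrupt_mask, num_classes,
              hidden=64, epochs=100, lr=0.5, seed=0):
    """Two-hidden-layer MLP trained by full-batch gradient descent.

    Records per-epoch accuracy on the clean subset (vs. true labels,
    which equal the noisy labels there) and on the corrupted subset
    (agreement with the *corrupted* labels, i.e. memorization).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, np.sqrt(2 / d), (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, np.sqrt(2 / hidden), (hidden, hidden)); b2 = np.zeros(hidden)
    W3 = rng.normal(0, np.sqrt(2 / hidden), (hidden, num_classes)); b3 = np.zeros(num_classes)
    Y = np.eye(num_classes)[y_noisy]  # one-hot targets (noisy labels)
    history = {"clean_acc": [], "corrupt_acc": []}
    for _ in range(epochs):
        # Forward pass with ReLU activations and a stable softmax.
        h1 = np.maximum(0, X @ W1 + b1)
        h2 = np.maximum(0, h1 @ W2 + b2)
        logits = h2 @ W3 + b3
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z); p /= p.sum(axis=1, keepdims=True)
        # Backward pass: softmax cross-entropy gradient, mean over batch.
        g = (p - Y) / n
        dW3 = h2.T @ g; db3 = g.sum(0)
        g2 = (g @ W3.T) * (h2 > 0)
        dW2 = h1.T @ g2; db2 = g2.sum(0)
        g1 = (g2 @ W2.T) * (h1 > 0)
        dW1 = X.T @ g1; db1 = g1.sum(0)
        for w, dw in ((W1, dW1), (b1, db1), (W2, dW2),
                      (b2, db2), (W3, dW3), (b3, db3)):
            w -= lr * dw  # in-place SGD step
        # Per-epoch bookkeeping on the two subsets separately.
        pred = logits.argmax(axis=1)
        history["clean_acc"].append(
            (pred[~corrupt_mask] == y_noisy[~corrupt_mask]).mean())
        history["corrupt_acc"].append(
            (pred[corrupt_mask] == y_noisy[corrupt_mask]).mean())
    return history
```

Running this once per corruption fraction in the 0 to 60 percent range yields the family of clean/corrupted accuracy curves the experiment studies.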

What is measured: Per-epoch performance on clean examples, per-epoch performance on corrupted examples, dependence on corruption fraction, and timing of memorization onset.
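One possible operationalization of "memorization onset" is the first epoch at which accuracy on the corrupted subset rises above chance by some margin. This definition and the helper below are assumptions for illustration, not necessarily how the project defines onset.

```python
import numpy as np

def memorization_onset(corrupt_acc, num_classes, margin=0.1):
    """First epoch at which corrupted-subset accuracy exceeds chance
    (1/num_classes) by `margin`; None if it never does.
    (Assumed definition of onset, for illustration only.)
    """
    chance = 1.0 / num_classes
    above = np.asarray(corrupt_acc) > chance + margin
    return int(np.argmax(above)) if above.any() else None
```

Applied to the per-epoch corrupted-subset curve, this gives a single onset epoch per corruption fraction, which is what makes the onset's dependence on noise level comparable across runs.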


Powered by BOINC
© 2026 Axiom Project