Experiment: Loss of Plasticity in Continual Learning


Category: Machine Learning

Summary: Measuring how a neural network's ability to learn new tasks deteriorates over a continual-learning sequence, and whether shrink-and-perturb resets mitigate that decline.


Continual learners often become harder to train as they move from one task to the next, even aside from ordinary forgetting. This experiment studies that loss of plasticity directly by asking how quickly a network can learn each new task in a sequence and how its internal structure degrades over time.

A small multilayer perceptron is trained on four synthetic classification tasks in succession. For each transition, the script measures learning speed, stable rank, dead-neuron fraction, gradient norms, and forgetting, and it compares a standard continual learner against a shrink-and-perturb variant that partially refreshes the weights before each new task.
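Shrink-and-perturb scales the existing weights toward zero and adds small Gaussian noise before each new task, keeping some accumulated structure while restoring trainability. A minimal sketch of that update in NumPy follows; the function name and the shrink/noise hyperparameters are illustrative choices, not the experiment's actual settings.

```python
import numpy as np

def shrink_and_perturb(weights, shrink=0.8, noise_std=0.01, rng=None):
    """Partially refresh weights before a new task: scale each weight
    matrix toward zero and add small Gaussian noise.

    shrink and noise_std are illustrative values; the experiment's own
    hyperparameters may differ."""
    rng = np.random.default_rng() if rng is None else rng
    return [shrink * w + noise_std * rng.standard_normal(w.shape)
            for w in weights]

# Example: refresh a two-layer MLP's weight matrices between tasks.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((32, 16)), rng.standard_normal((16, 4))]
refreshed = shrink_and_perturb(weights, rng=rng)
```

Because the noise is small relative to the shrink factor, the refreshed weights stay close to (a scaled-down copy of) the learned solution rather than a fresh random initialization.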

That setup separates two questions that are often mixed together: whether old knowledge is forgotten, and whether the model remains adaptable at all. The mitigation arm tests whether partial resetting can preserve plasticity without discarding all accumulated structure.

Method: Sequential training of a small MLP across multiple tasks, comparing a naive continual learner with a shrink-and-perturb reset strategy.

What is measured: Learning-speed ratio, stable rank, dead-neuron fraction, gradient norm, forgetting across previous tasks, and mitigation effects from shrink-and-perturb.
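Two of the structural metrics above have short standard definitions: the stable rank of a weight matrix is its squared Frobenius norm divided by its squared spectral norm, and the dead-neuron fraction is the share of (ReLU-style) units that never activate on a probe batch. A sketch of both, assuming NumPy arrays (the function names and the `eps` threshold are illustrative):

```python
import numpy as np

def stable_rank(W):
    # Stable rank = ||W||_F^2 / ||W||_2^2, computed from singular values.
    # It is at most the true rank and degrades smoothly as the
    # spectrum collapses onto a few directions.
    s = np.linalg.svd(W, compute_uv=False)
    return float(np.sum(s ** 2) / s[0] ** 2)

def dead_neuron_fraction(activations, eps=1e-8):
    # activations: (batch, units) post-ReLU outputs. A unit is "dead"
    # if it produces (near-)zero output on every example in the batch.
    return float(np.mean(np.max(activations, axis=0) <= eps))

# Example: an identity matrix uses all directions equally, so its
# stable rank equals its dimension.
print(stable_rank(np.eye(4)))          # 4.0
# One of two units never fires on this batch.
print(dead_neuron_fraction(np.array([[0.0, 1.0], [0.0, 2.0]])))  # 0.5
```

A falling stable rank and a rising dead-neuron fraction over the task sequence are the structural signatures of plasticity loss that the experiment tracks.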


Powered by BOINC
© 2026 Axiom Project