Experiment: Lottery Ticket Hypothesis



Category: Machine Learning

Summary: Testing whether sparse subnetworks found by pruning can match the full model when rewound to their original initialization, while equally sparse random reinitializations fail.


The lottery ticket hypothesis proposes that large networks contain small subnetworks that are especially well initialized for the task from the start. This experiment asks whether iterative pruning can uncover those subnetworks in a controlled multilayer-perceptron setting and whether rewinding to the original initialization preserves their advantage.

The script trains a baseline model, applies iterative magnitude pruning, and then compares rewound subnetworks with random reinitializations at the same sparsity. That comparison is central because it separates the value of the mask itself from the value of the particular initial weights it preserves.
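The core of that comparison can be sketched in NumPy. The helper below is a hypothetical stand-in for the script's actual code: it builds a magnitude mask from trained weights, then applies the same mask either to the saved original initialization (the "ticket") or to a fresh random draw at matched sparsity (the control). The weight values here are illustrative placeholders, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_mask(trained, sparsity):
    """Boolean mask keeping the largest-magnitude (1 - sparsity) fraction of weights."""
    flat = np.abs(trained).ravel()
    k = int(round(sparsity * flat.size))          # number of weights to prune
    if k == 0:
        return np.ones(trained.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(trained) > threshold

# Stand-ins for the real experiment's weights (hypothetical values).
w_init = rng.normal(size=(4, 4))                         # saved initialization
w_trained = w_init + rng.normal(scale=0.5, size=(4, 4))  # stand-in for training

mask = magnitude_mask(w_trained, sparsity=0.75)

# Lottery ticket: same mask, weights rewound to the original initialization.
ticket = w_init * mask

# Control: same mask (so matched sparsity), but a fresh random initialization.
control = rng.normal(size=(4, 4)) * mask
```

Both subnetworks share the mask, so any accuracy gap after retraining is attributable to the preserved initial weights rather than to the mask's structure.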

The broader significance is mechanistic. If rewound sparse subnetworks reliably outperform random ones, that supports the idea that training success partly depends on rare favorable initial configurations hidden inside larger models.

Method: Iterative magnitude pruning with weight rewinding in an MLP, compared against sparse random reinitializations at matched sparsity.

What is measured: Accuracy of rewound subnetworks, accuracy of random sparse reinitializations, effect of sparsity level, and evidence for lottery-ticket behavior.
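Because iterative magnitude pruning removes a fixed fraction of the surviving weights each round, the sparsity levels being compared compound geometrically. A minimal sketch of that schedule, assuming a 20% per-round pruning rate (a common choice, not stated by this experiment):

```python
def sparsity_after(rounds, prune_frac=0.2):
    """Overall sparsity after pruning `prune_frac` of the remaining weights each round."""
    return 1.0 - (1.0 - prune_frac) ** rounds

# Five rounds at 20% per round leaves ~32.8% of weights (~67.2% sparse).
schedule = [round(sparsity_after(r), 3) for r in range(1, 6)]
# → [0.2, 0.36, 0.488, 0.59, 0.672]
```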


Powered by BOINC
© 2026 Axiom Project