Experiment: Lottery Ticket Hypothesis v2



Category: Machine Learning

Summary: Testing whether iteratively pruned subnetworks with weight rewinding outperform equally sparse reinitialized or randomly pruned networks on the same task.


The lottery-ticket hypothesis proposes that large randomly initialized networks contain sparse subnetworks that train unusually well on their own, provided they are isolated and their weights are rewound to an early-training checkpoint. This experiment examines that idea with iterative magnitude pruning, asking how far sparsity can be pushed before rewound subnetworks clearly outperform both randomly pruned and fully reinitialized controls.

The script trains a baseline multilayer perceptron on a Gaussian-cluster classification task, repeatedly prunes the smallest remaining weights, rewinds survivors to an early checkpoint, and compares their behavior with equally sparse controls. It tracks how accuracy and loss evolve across pruning rounds and searches for a critical sparsity where the lottery-ticket advantage becomes visible.
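The prune-and-rewind cycle described above can be sketched in a few lines of NumPy. This is an illustrative skeleton, not the script itself: the training step is replaced by a stand-in noise update, and all names (`prune_smallest`, `w_init`, the 20% per-round pruning fraction) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_smallest(weights, mask, frac):
    """Zero out the smallest-magnitude fraction `frac` of surviving weights."""
    alive = np.abs(weights[mask])
    k = int(frac * alive.size)
    if k == 0:
        return mask
    # k-th smallest surviving magnitude becomes the pruning threshold
    threshold = np.partition(alive, k)[k]
    return mask & (np.abs(weights) >= threshold)

w_init = rng.normal(size=(100, 100))      # early checkpoint ("rewind point")
mask = np.ones_like(w_init, dtype=bool)   # every weight survives at first

w = w_init.copy()
for round_idx in range(5):
    w = w + 0.1 * rng.normal(size=w.shape)  # stand-in for training updates
    w *= mask                               # keep pruned weights at zero
    mask = prune_smallest(w, mask, frac=0.2)
    w = w_init * mask                       # rewind survivors to the checkpoint

sparsity = 1.0 - mask.mean()
```

Each round compounds the previous one, so five rounds at 20% leave roughly 0.8^5 ≈ 33% of the weights, which is how iterative pruning reaches the high-sparsity regime gradually rather than in a single cut.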

That makes the project a mechanistic comparison of sparsity procedures rather than a single compressed-model result. The goal is to locate where pruning begins to reveal special structure in the surviving network instead of merely reducing capacity.

Method: Iterative magnitude pruning with weight rewinding on a Gaussian-cluster MLP, compared against random pruning and reinitialized controls across many sparsity levels.
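The two controls named in the method can be made concrete with a short sketch. Assuming a surviving-weight mask from the pruning loop, a random-pruning control uses a fresh mask with exactly matched sparsity, and a reinitialization control reuses the ticket's mask but with fresh weights; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_mask_like(mask, rng):
    """Build a random mask with exactly the same number of survivors."""
    flat = np.zeros(mask.size, dtype=bool)
    keep = rng.choice(mask.size, size=int(mask.sum()), replace=False)
    flat[keep] = True
    return flat.reshape(mask.shape)

# Illustrative "ticket" mask at roughly 80% sparsity (not the script's own data).
ticket_mask = rng.random((64, 64)) < 0.2

# Control 1: random pruning -- same sparsity, unrelated structure.
rand_mask = random_mask_like(ticket_mask, rng)

# Control 2: reinitialization -- same mask, fresh weights instead of rewound ones.
reinit_weights = rng.normal(size=ticket_mask.shape) * ticket_mask
```

Matching sparsity exactly is what makes the comparison mechanistic: any remaining gap between the ticket and these controls must come from which weights survived and what values they were rewound to, not from parameter count.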

What is measured: Critical sparsity, train and test accuracy by sparsity, train and test loss, pruning-round performance, and lottery-ticket advantage over matched controls.
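One way to operationalize the critical sparsity from accuracy-by-sparsity curves is to take the first sparsity level at which the rewound ticket beats the better of the two controls by a fixed margin. The curves and the 0.02 margin below are purely illustrative numbers, not measured results.

```python
import numpy as np

def critical_sparsity(sparsities, ticket_acc, control_accs, margin=0.02):
    """First sparsity where the ticket beats the best control by `margin`."""
    best_control = np.max(control_accs, axis=0)
    ahead = (ticket_acc - best_control) > margin
    return sparsities[np.argmax(ahead)] if ahead.any() else None

# Purely illustrative curves (accuracy at each sparsity level).
sparsities = np.array([0.0, 0.5, 0.8, 0.9, 0.95, 0.98])
ticket  = np.array([0.95, 0.95, 0.94, 0.93, 0.90, 0.80])
random_ = np.array([0.95, 0.94, 0.90, 0.85, 0.75, 0.60])
reinit  = np.array([0.95, 0.94, 0.91, 0.86, 0.78, 0.62])

s_c = critical_sparsity(sparsities, ticket, np.stack([random_, reinit]))
```

With these made-up curves the ticket first clears the margin at 80% sparsity, which is the pattern the experiment looks for: at low sparsity all three procedures are interchangeable, and the advantage only appears once pruning starts removing most of the network.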


© 2026 Axiom Project