Experiment: Neural Network Pruning - Strong Lottery Ticket Hypothesis



Category: Machine Learning

Summary: Testing whether a sparse subnetwork found inside a random untrained network can approach the performance of a trained dense model.


The strong lottery ticket hypothesis proposes that useful subnetworks may already exist inside a randomly initialized neural network before any standard weight training occurs. This experiment asks how far that idea can be pushed by searching only for a sparse mask while keeping the underlying random weights fixed.

The script uses the edge-popup method to learn which connections to keep at a range of sparsity levels, then compares those subnetworks against both a dense trained baseline and randomly pruned controls at the same sparsities. That design separates the effect of structured mask discovery from the effect of sparsity alone.
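The core of the edge-popup method can be sketched as follows. This is a minimal, hypothetical NumPy illustration (not the experiment's actual script): a single linear layer with fixed random weights, a learnable score per connection, a top-k mask over the scores, and a straight-through estimator that passes the gradient of the loss with respect to the effective weights back to the scores.

```python
import numpy as np

def topk_mask(scores, k):
    """Binary mask keeping the k highest-scoring connections."""
    flat = scores.ravel()
    keep = np.argpartition(flat, -k)[-k:]
    mask = np.zeros_like(flat)
    mask[keep] = 1.0
    return mask.reshape(scores.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 8))        # fixed random weights, never updated
scores = rng.standard_normal((1, 8))   # learnable popup scores
k = 4                                  # keep 4 of 8 connections (50% sparsity)

# toy regression target that depends only on the first 4 inputs
X = rng.standard_normal((64, 8))
true_w = np.zeros((1, 8))
true_w[0, :4] = 2.0
y = X @ true_w.T

lr = 0.1
for _ in range(200):
    m = topk_mask(scores, k)
    pred = X @ (W * m).T               # forward pass uses masked fixed weights
    grad_out = 2 * (pred - y) / len(X) # d(MSE)/d(pred)
    # straight-through estimator: treat the top-k mask as identity, so each
    # score receives d(loss)/d(effective weight) * W for its connection
    grad_scores = (grad_out.T @ X) * W
    scores -= lr * grad_scores         # only the scores are updated

final_mask = topk_mask(scores, k)
```

Only `scores` changes during training; `W` stays at its random initialization, which is what lets the experiment attribute any accuracy to subnetwork selection rather than weight learning.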

The scientific interest is mechanistic rather than purely practical. If a highly sparse masked subnetwork can match most of the dense model's performance, that suggests that trainability is partly a selection problem over latent subnetworks, not only a weight-optimization problem.

Method: Edge-popup mask training on fixed random weights across multiple sparsity levels, compared with a dense trained baseline and random-pruning controls.

What is measured: Dense-baseline train and test performance, edge-popup accuracy by sparsity, random-pruning control accuracy, maximum sparsity that still matches dense performance, and popup-versus-random accuracy gains.
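The last two measurements can be computed from the per-sparsity accuracy curves. The helpers and numbers below are a hypothetical sketch (illustrative values only, not results from this experiment): a subnetwork "matches" the dense baseline if its accuracy is within a small tolerance of the dense accuracy.

```python
def max_matching_sparsity(sparsities, popup_accs, dense_acc, tol=0.005):
    """Highest sparsity whose edge-popup accuracy stays within tol of dense."""
    matching = [s for s, a in zip(sparsities, popup_accs) if a >= dense_acc - tol]
    return max(matching) if matching else None

def popup_vs_random_gains(popup_accs, random_accs):
    """Accuracy gain of learned masks over random pruning at each sparsity."""
    return [p - r for p, r in zip(popup_accs, random_accs)]

# illustrative numbers only
sparsities = [0.5, 0.7, 0.9, 0.95]
popup = [0.981, 0.978, 0.962, 0.930]
random_ = [0.965, 0.940, 0.880, 0.790]
dense = 0.982
```

With these example curves, `max_matching_sparsity(sparsities, popup, dense)` returns 0.7, and the popup-versus-random gains grow with sparsity, which is the pattern that would indicate structured mask discovery matters beyond sparsity alone.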


Powered by BOINC
© 2026 Axiom Project