Experiment: Mode Connectivity v2


Category: Machine Learning

Summary: Testing whether independently trained neural networks are connected by low-loss paths or separated by genuine barriers in parameter space.


Neural-network loss landscapes often look rugged locally, but many studies suggest that separately trained solutions can still be joined by surprisingly smooth low-loss paths. This experiment asks how often that happens in a controlled setting by training multiple models on the same task and linearly interpolating between their parameters.

The script trains several copies of the same multilayer perceptron from different random initializations, then measures train and test loss along linear interpolation paths between pairs of solutions. It records the midpoint loss, the maximum loss along the path, and whether that maximum rises meaningfully above the endpoint losses (i.e., whether a loss barrier is present).
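The interpolation step can be sketched as follows. This is a minimal illustration, not the project's actual script: it assumes each trained network's parameters have been flattened into a single vector, and that `loss_fn` evaluates the model's loss given such a vector. The barrier is measured here as the maximum excess of the path loss over the straight-line interpolation of the two endpoint losses, one common convention.

```python
import numpy as np

def interpolation_losses(theta_a, theta_b, loss_fn, n_points=21):
    """Evaluate loss_fn at evenly spaced points on the straight line
    between two flattened parameter vectors theta_a and theta_b."""
    alphas = np.linspace(0.0, 1.0, n_points)
    losses = np.array([loss_fn((1 - a) * theta_a + a * theta_b) for a in alphas])
    return alphas, losses

def barrier_height(alphas, losses):
    """Height of the loss barrier: maximum excess of the path loss over
    the linear interpolation of the two endpoint losses."""
    baseline = (1 - alphas) * losses[0] + alphas * losses[-1]
    return float(np.max(losses - baseline))
```

As a sanity check, interpolating between the two minima of a one-dimensional double-well loss produces a barrier of height 1 at the midpoint, while two points in the same basin produce a barrier of roughly zero.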

That makes the project a geometric study of optimization landscapes rather than an accuracy benchmark. The goal is to test whether good solutions live in isolated basins or in a broader connected region of parameter space.

Method: Repeated independent MLP training followed by linear interpolation through parameter space and loss evaluation along endpoint-to-endpoint paths.

What is measured: Endpoint losses, midpoint loss, maximum interpolation loss, barrier height, presence or absence of a loss barrier, and pairwise mode-connectivity statistics.
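The per-pair statistics listed above can be collected into a single summary record. The sketch below is a hypothetical helper, not the project's code: it assumes the loss values along one interpolation path are already computed, and `barrier_tol` is an assumed absolute threshold for declaring that a barrier is present.

```python
import numpy as np

def pair_summary(alphas, losses, barrier_tol=0.05):
    """Summarize one endpoint-to-endpoint interpolation path.
    barrier_tol is an assumed absolute threshold (in loss units)
    above which a bump counts as a genuine barrier."""
    baseline = (1 - alphas) * losses[0] + alphas * losses[-1]
    barrier = float(np.max(losses - baseline))
    return {
        "loss_a": float(losses[0]),
        "loss_b": float(losses[-1]),
        "midpoint_loss": float(losses[len(losses) // 2]),
        "max_interp_loss": float(np.max(losses)),
        "barrier_height": barrier,
        "has_barrier": barrier > barrier_tol,
    }
```

Aggregating these records over all pairs of trained models gives the pairwise mode-connectivity statistics: for example, the fraction of pairs with `has_barrier == False` estimates how often two independent solutions are linearly connected at low loss.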


Powered by BOINC
© 2026 Axiom Project