Category: Machine Learning
Summary: Testing whether two independently trained neural networks are connected by low-loss paths in weight space even when linear interpolation shows a barrier.
Independently trained neural networks often appear to sit in isolated minima when probed only by straight-line interpolation in parameter space, where the loss typically rises to a barrier between them. This experiment asks whether two such models can nevertheless be connected by a curved low-loss path, which would suggest that the loss landscape is flatter and more connected than naive interpolation implies.
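As a hedged illustration of the linear-interpolation probe, the sketch below uses a toy two-parameter loss whose minima form the unit circle, standing in for a network's loss surface; the function, weight vectors, and values are illustrative assumptions, not taken from the experiment:

```python
import numpy as np

# Toy stand-in for a network loss: minima form the unit circle, so two
# "independently trained" solutions can share a basin yet still show a
# barrier on the straight line between them.
def loss(w):
    return (np.sum(w**2) - 1.0) ** 2

w_a = np.array([-1.0, 0.0])  # hypothetical model A weights (a minimum)
w_b = np.array([1.0, 0.0])   # hypothetical model B weights (a minimum)

# Probe the loss along the linear segment (1 - t) * w_a + t * w_b.
ts = np.linspace(0.0, 1.0, 21)
line_losses = [loss((1 - t) * w_a + t * w_b) for t in ts]

# Barrier: worst loss on the segment minus the worse of the two endpoints.
barrier = max(line_losses) - max(loss(w_a), loss(w_b))
# Here the midpoint w = (0, 0) has loss 1 while both endpoints have loss 0,
# so the barrier is approximately 1.0.
```

In the actual script the same probe would flatten each network's parameters into a single vector and evaluate the training or test loss at each interpolated point.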
The script trains two multilayer perceptrons from different random seeds, probes the loss barrier along the linear segment between their weight vectors, and then fits a quadratic Bezier path through weight space by optimizing the curve's control point. It then checks whether the learned curve finds a route that stays in a low-loss region where the straight line does not.
That matters because mode connectivity bears on how isolated trained solutions really are, and on whether apparently different minima belong to a larger connected basin. The experiment turns that geometric idea into a measurable path-finding problem.
Method: Train two neural networks from different seeds, then compare straight-line and quadratic Bezier paths through weight space using loss evaluations along the path.
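The Bezier-fitting step can be sketched on the same kind of toy ring-shaped loss (the loss function, its hand-derived gradient, and all hyperparameters below are illustrative assumptions, not the script's actual code): parameterize a quadratic Bezier curve w(t) = (1-t)^2 w_a + 2t(1-t) theta + t^2 w_b with fixed endpoints, and run gradient descent on the average loss along the curve with respect to the control point theta.

```python
import numpy as np

# Toy loss whose minima form the unit circle, with a hand-derived gradient.
def loss(w):
    return (np.sum(w**2) - 1.0) ** 2

def grad(w):
    return 4.0 * (np.sum(w**2) - 1.0) * w

w_a = np.array([-1.0, 0.0])  # endpoint "models" (both are minima)
w_b = np.array([1.0, 0.0])

def bezier(t, theta):
    # Quadratic Bezier: endpoints fixed, control point theta is trainable.
    return (1 - t) ** 2 * w_a + 2 * t * (1 - t) * theta + t ** 2 * w_b

ts = np.linspace(0.0, 1.0, 21)
theta = np.array([0.0, 0.1])  # start near the midpoint, nudged off-axis

for _ in range(1000):
    # d loss(w(t)) / d theta = 2 t (1 - t) * d loss / d w, by the chain rule.
    g = np.mean([2 * t * (1 - t) * grad(bezier(t, theta)) for t in ts], axis=0)
    theta -= 0.1 * g

line_max = max(loss((1 - t) * w_a + t * w_b) for t in ts)
curve_max = max(loss(bezier(t, theta)) for t in ts)
# The trained curve bends around the high-loss region at the origin, so
# curve_max ends up far below the linear barrier line_max.
```

For real networks the control point is a full set of model weights and the expectation over t is estimated by sampling, but the chain-rule structure of the update is the same.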
What is measured: Loss barrier under linear interpolation, low-loss path quality, Bezier-path connectivity, and endpoint model performance.
