Experiment: Progressive Sharpening and Edge of Stability



Category: Machine Learning

Summary: Tracking how loss-landscape sharpness evolves during training and whether it stabilizes near the edge-of-stability prediction.


Modern training dynamics often show a rapid growth in loss-landscape sharpness before settling into a regime that appears neither fully stable nor explosively unstable. This experiment asks whether the top Hessian eigenvalue follows the edge-of-stability picture: after an initial progressive-sharpening phase, the sharpness hovers near the stability threshold 2/η set by the learning rate η.

The script estimates Hessian-vector products with finite differences, uses power iteration to track the leading curvature direction, and monitors how that quantity evolves over the course of training. Because it focuses on the trajectory rather than only the endpoint, it is designed to capture the temporal onset of sharpening and any later stabilization.
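The combination of finite-difference Hessian-vector products and power iteration described above can be sketched as follows. This is a minimal NumPy illustration, not the project's actual script: the quadratic loss (whose Hessian is known exactly, so the estimate can be checked), the step size `eps`, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def grad(w, A, b):
    # Gradient of the quadratic loss L(w) = 0.5 * w^T A w - b^T w,
    # whose Hessian is exactly A (so the estimate can be verified).
    return A @ w - b

def hvp(w, v, A, b, eps=1e-5):
    # Finite-difference Hessian-vector product:
    # H v ≈ (∇L(w + eps*v) - ∇L(w - eps*v)) / (2 eps)
    return (grad(w + eps * v, A, b) - grad(w - eps * v, A, b)) / (2 * eps)

def top_eigenvalue(w, A, b, iters=100, seed=0):
    # Power iteration on the HVP operator to estimate the top Hessian eigenvalue.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = hvp(w, v, A, b)
        lam = float(v @ hv)                    # Rayleigh quotient estimate
        v = hv / (np.linalg.norm(hv) + 1e-12)  # renormalize for the next step
    return lam

# Check against a known Hessian with eigenvalues 1..4: the estimate should be ≈ 4.
A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.zeros(4)
w = np.ones(4)
print(round(top_eigenvalue(w, A, b), 3))  # ≈ 4.0
```

In a real training loop the gradient would come from backpropagation and this estimate would be logged every few steps, which is what produces the sharpness trajectory rather than a single endpoint measurement.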

That makes the project a dynamical test of a current optimization hypothesis. The point is not just to measure sharpness, but to see whether its time dependence supports the claim that successful training operates near a marginally stable boundary.
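A minimal one-dimensional toy makes the hypothesized dynamics concrete. The quartic loss and learning rate below are illustrative assumptions, not the project's setup: the loss L(w) = (w² − 1)²/4 has curvature L″ = 2 at its minima, so a step size η > 1 puts the stability threshold 2/η below that curvature. Gradient descent then first sharpens and afterwards settles into a bounded oscillation whose average sharpness hovers near 2/η, a caricature of the edge-of-stability regime:

```python
# Toy edge-of-stability dynamics on L(w) = (w^2 - 1)^2 / 4 (an assumption
# for illustration, not the project's training setup).
eta = 1.2                    # step size; threshold 2/eta ≈ 1.67 < L''(±1) = 2
w = 0.3                      # start in a flat region where L''(0.3) < 0
sharpness = []
for _ in range(400):
    w = w - eta * (w**3 - w)          # gradient step on L
    sharpness.append(3 * w**2 - 1)    # exact Hessian L''(w)

late = sharpness[-100:]
print(f"late mean sharpness: {sum(late)/len(late):.2f}  (2/eta = {2/eta:.2f})")
```

The sharpness first rises well past 2/η (it peaks near 2.9 here) and then oscillates, with its late-time average landing close to the 2/η threshold; the experiment asks whether real networks show a comparable trajectory.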

Method: Neural-network training with finite-difference Hessian-vector products and power iteration to track the top Hessian eigenvalue over time.

What is measured: Top Hessian eigenvalue, training-time sharpness trajectory, relation to the 2/η threshold for learning rate η, and evidence for edge-of-stability behavior.


Powered by BOINC
© 2026 Axiom Project