Experiment: Gradient Descent Loss Landscapes

Category: Machine Learning

Summary: Mapping two-dimensional slices of a trained neural network's loss surface around a learned minimum and projecting the training trajectory into that plane.


How a neural network trains depends not only on the final accuracy it reaches but also on the geometry of the loss surface around the learned solution. This experiment asks what a local two-dimensional slice of that landscape looks like once a small multilayer perceptron (MLP) has been trained.

The script trains a compact MLP, chooses two random orthogonal directions in weight space, and evaluates the loss on a grid centered at the trained minimum. It also projects the optimization trajectory into the same plane so the path taken by gradient descent can be compared directly with nearby basins, ridges, and curvature changes.
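The direction-sampling and grid-evaluation steps can be sketched as follows. This is a minimal illustration, not the project's script: the MLP and its loss are stood in for by a hypothetical quadratic around a synthetic minimum `w_star`, and all names (`loss`, `d1`, `d2`, the grid ranges) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 50  # stand-in for the flattened weight count of the trained MLP

# Hypothetical trained minimum; a quadratic stands in for the MLP loss.
w_star = rng.normal(size=n_params)
H = rng.normal(size=(n_params, n_params))
H = H @ H.T / n_params  # positive semi-definite curvature

def loss(w):
    d = w - w_star
    return 0.5 * d @ H @ d

# Two random orthonormal directions in weight space (Gram-Schmidt).
d1 = rng.normal(size=n_params)
d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=n_params)
d2 -= (d2 @ d1) * d1
d2 /= np.linalg.norm(d2)

# Evaluate the loss on a grid centered at the trained minimum:
# Z[i, j] = loss(w_star + alphas[j] * d1 + betas[i] * d2)
alphas = np.linspace(-1.0, 1.0, 21)
betas = np.linspace(-1.0, 1.0, 21)
Z = np.array([[loss(w_star + a * d1 + b * d2) for a in alphas]
              for b in betas])
```

With a real network, `w_star` would be the flattened trained weights and `loss` would rebuild the model from a perturbed weight vector and evaluate it on a batch; the geometry of the slice is the same.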

This makes the output useful for interpreting optimization geometry rather than for reporting endpoint metrics alone. The resulting landscape slices give a compact view of how the trained minimum sits inside the surrounding loss surface.

Method: Train a small MLP, sample a 2D orthogonal plane in weight space around the learned minimum, and evaluate loss over a grid plus the projected training trajectory.

What is measured: Loss values on a 2D weight-space grid, grid coordinates, and the training trajectory projected onto the same plane.
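Projecting the training trajectory onto the same plane reduces to two dot products per recorded snapshot. A minimal sketch, assuming orthonormal directions `d1`, `d2` and a hypothetical list of weight snapshots (all names here are illustrative, not from the project's script):

```python
import numpy as np

rng = np.random.default_rng(1)
n_params = 50

# Hypothetical trained minimum and orthonormal plane directions.
w_star = rng.normal(size=n_params)
d1 = rng.normal(size=n_params)
d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=n_params)
d2 -= (d2 @ d1) * d1
d2 /= np.linalg.norm(d2)

# Hypothetical weight snapshots recorded during training.
trajectory = [w_star + rng.normal(scale=0.5, size=n_params) for _ in range(5)]
trajectory.append(w_star)  # training ends at the minimum

# Each snapshot maps to plane coordinates ((w - w*) . d1, (w - w*) . d2).
coords = np.array([[(w - w_star) @ d1, (w - w_star) @ d2]
                   for w in trajectory])
```

Because the plane is only two of many weight-space dimensions, the projected path is a shadow of the true trajectory: steps with large components orthogonal to the plane appear shortened.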


Powered by BOINC
© 2026 Axiom Project