Experiment: Representation Alignment Dynamics


Category: Machine Learning

Summary: Measuring whether independently trained neural networks converge toward similar internal representations, and how that depends on width and depth.


Neural networks can reach similar performance from different random initializations while still learning very different internal features. This experiment asks how much hidden representations actually align across seeds, whether wider networks converge to more similar solutions, and which layers become stable first during training.

The setup trains multiple networks on the same synthetic classification problem and measures hidden-state similarity with Centered Kernel Alignment at repeated checkpoints. By sweeping width and comparing shallow and deeper models, it builds a time-resolved picture of whether training pushes representations into a common subspace or leaves them strongly path-dependent.
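The similarity measure described above can be sketched as follows. This is a minimal NumPy implementation of linear CKA (the unbiased HSIC-based variants used in practice differ in detail); the matrix shapes and variable names are illustrative assumptions, not the experiment's actual code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation
    matrices of shape (n_samples, n_features). Invariant to orthogonal
    transforms and isotropic rescaling of either representation."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalised by
    # the self-covariance norms; the result lies in [0, 1].
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

rng = np.random.default_rng(0)
H = rng.normal(size=(200, 64))                 # hidden activations, seed A
Q, _ = np.linalg.qr(rng.normal(size=(64, 64))) # random orthogonal matrix
print(linear_cka(H, H @ Q))                    # rotated copy: CKA = 1.0
print(linear_cka(H, rng.normal(size=(200, 64))))  # unrelated reps score lower
```

The rotation-invariance check matters here: two seeds can learn the same subspace in different bases, and CKA still reports them as aligned.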

This matters because alignment is one way to distinguish richer feature learning from lazier, more kernel-like behavior. If wide networks align strongly across seeds, that would support the view that scale can make learning dynamics more constrained and less dependent on initialization details.

Method: Repeated multi-seed MLP training with checkpointed CKA comparisons between hidden representations across widths and depths.
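The multi-seed checkpointing loop might look like the sketch below, assuming a toy one-hidden-layer NumPy MLP stands in for the actual models; the task construction, hyperparameters, and helper names (`make_task`, `train_one_seed`) are hypothetical.

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between two (n_samples, width) representation matrices.
    X = X - X.mean(0); Y = Y - Y.mean(0)
    return (np.linalg.norm(Y.T @ X, "fro") ** 2
            / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")))

def make_task(n=512, seed=0):
    # Fixed synthetic two-class problem shared by all seeds.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 8))
    y = (X[:, :2].sum(1) > 0).astype(float)
    return X, y

def train_one_seed(X, y, width=64, steps=300, lr=0.5, seed=0):
    """One-hidden-layer tanh MLP trained with full-batch gradient
    descent; returns the hidden representation at each checkpoint."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=X.shape[1] ** -0.5, size=(X.shape[1], width))
    w2 = rng.normal(scale=width ** -0.5, size=width)
    checkpoints = []
    for step in range(steps):
        H = np.tanh(X @ W1)                  # hidden activations
        p = 1 / (1 + np.exp(-(H @ w2)))      # sigmoid output
        g = (p - y) / len(y)                 # grad of logistic loss wrt logits
        gH = np.outer(g, w2) * (1 - H ** 2)  # backprop through tanh
        w2 -= lr * (H.T @ g)
        W1 -= lr * (X.T @ gH)
        if step % 100 == 0 or step == steps - 1:
            checkpoints.append(H.copy())
    return checkpoints

X, y = make_task()
ckpts_a = train_one_seed(X, y, seed=1)
ckpts_b = train_one_seed(X, y, seed=2)
# Alignment trajectory: CKA between the two seeds at matched checkpoints.
trajectory = [linear_cka(Ha, Hb) for Ha, Hb in zip(ckpts_a, ckpts_b)]
print(trajectory)
```

Sweeping `width` and repeating over more seed pairs turns this trajectory into the width- and time-resolved alignment picture the experiment is after.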

What is measured: CKA alignment across seeds, layerwise alignment over time, width and depth dependence, and associated task accuracy.


Powered by BOINC
© 2026 Axiom Project