Experiment: Benford's Law and Neural Weights



Category: Machine Learning

Summary: Testing whether neural-network weights, gradients, and updates develop Benford-like leading-digit distributions during training.


Benford's law describes the uneven frequencies of leading digits in many naturally occurring numerical datasets. This experiment asks whether gradient-based learning produces a similar pattern in neural-network parameters and related quantities such as gradients and weight updates.
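For reference, Benford's law gives the probability of leading digit d (1 through 9) as log10(1 + 1/d), so a leading 1 is expected about 30.1% of the time and a leading 9 only about 4.6%. A minimal sketch of the expected distribution in Python (illustrative only, not project code):

import math

# Expected Benford frequency for each leading digit d = 1..9: P(d) = log10(1 + 1/d)
benford_expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
# benford_expected[1] ≈ 0.301, benford_expected[9] ≈ 0.046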

The workflow trains networks of different sizes, samples parameters at several stages of training, computes leading-digit distributions, and compares them with Benford expectations using goodness-of-fit tests. Layer-wise and time-resolved measurements help distinguish whether any agreement is global, local, or transient.
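As an illustration of the leading-digit step (a sketch only; the function names and the assumption that sampled parameters arrive as a flat array are mine, not the project's), leading digits can be read off with NumPy by discarding exact zeros, rescaling each magnitude into [1, 10), and truncating:

import numpy as np

def leading_digits(values):
    # Leading (most significant) digit of each nonzero value, as integers 1..9.
    v = np.abs(np.asarray(values, dtype=np.float64))
    v = v[v > 0]  # exact zeros have no leading digit
    # Scale each value into [1, 10), then truncate to get the first digit.
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

def digit_frequencies(values):
    # Observed relative frequency of leading digits 1 through 9.
    counts = np.bincount(leading_digits(values), minlength=10)[1:10]
    return counts / counts.sum()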

This makes the project a statistical probe of training dynamics rather than a performance benchmark: the central question is whether multiplicative effects in optimization leave a measurable leading-digit signature in the learned weights.

Method: Repeated neural-network training with leading-digit analysis and goodness-of-fit testing for weights, gradients, and updates across training stages.

What is measured: Leading-digit distributions, chi-squared goodness-of-fit to Benford's law, layer-wise deviations, and changes from initialization to final training.
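A compact sketch of the chi-squared comparison, assuming SciPy is available (the helper name benford_chi2 and the lognormal example are illustrative, not project results):

import math
import numpy as np
from scipy.stats import chisquare

def benford_chi2(values):
    # Chi-squared goodness of fit of observed leading digits against Benford's law.
    v = np.abs(np.asarray(values, dtype=np.float64))
    v = v[v > 0]
    digits = (v / 10.0 ** np.floor(np.log10(v))).astype(int)  # leading digits 1..9
    observed = np.bincount(digits, minlength=10)[1:10]
    expected = observed.sum() * np.array([math.log10(1 + 1 / d) for d in range(1, 10)])
    return chisquare(f_obs=observed, f_exp=expected)

# Values spanning many orders of magnitude (e.g. a wide lognormal) tend to fit
# Benford closely, while uniformly distributed values typically do not.
stat, p = benford_chi2(np.random.lognormal(mean=0.0, sigma=3.0, size=100_000))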

