Experiment: Feature Competition Dynamics

Category: Machine Learning

Summary: Testing whether easy, high-signal features suppress the learning of harder, low-signal features during neural-network training.


Neural networks do not always learn all useful patterns at once. This experiment asks whether strong, easy-to-learn features can dominate gradient updates so thoroughly that weaker but still predictive features are delayed or effectively ignored.

The script trains NumPy-only networks from scratch and tracks learning dynamics when features have different signal strengths. The design probes gradient starvation: a mechanism in which one set of features captures the gradient signal early, leaving little pressure to learn the harder structure.
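A minimal sketch of how such competition can be reproduced (this is an illustration, not the project's actual script): a single logistic unit trained with NumPy on synthetic data where both features predict the label, but the "easy" feature carries roughly ten times the signal of the "hard" one. The feature scales (3.0 vs. 0.3) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary task: both features predict the label, but the
# easy feature carries a much larger signal than the hard one.
n = 2000
y = rng.integers(0, 2, n) * 2 - 1              # labels in {-1, +1}
easy = y * 3.0 + rng.normal(0, 1, n)           # high-signal feature
hard = y * 0.3 + rng.normal(0, 1, n)           # low-signal feature
X = np.stack([easy, hard], axis=1)

# Single logistic unit, full-batch gradient descent on logistic loss.
w = np.zeros(2)
lr = 0.1
history = []
for step in range(200):
    p = 1 / (1 + np.exp(-y * (X @ w)))         # P(correct) per example
    grad = -((1 - p) * y) @ X / n              # logistic-loss gradient
    w -= lr * grad
    history.append(w.copy())

history = np.array(history)
# Early in training, nearly all weight mass goes to the easy feature.
print("weights after 10 steps: ", history[9])
print("weights after 200 steps:", history[-1])
```

Because the easy feature's gradient component is about ten times larger at initialization, its weight grows first; once it fits most examples, the residual term (1 - p) collapses, starving the hard feature of gradient signal.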

That matters because many failures of robustness and generalization may come from which features are learned first, not just from final model size or loss value. The experiment turns that intuition into measurable training-time competition.

Method: From-scratch NumPy neural-network training on controlled data with competing feature strengths, measuring learning trajectories over time.

What is measured: Relative learning speed of easy and hard features, evidence for gradient starvation, training-time feature dominance, and resulting generalization behavior.
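The starvation measurement above can be sketched directly: track the per-feature gradient magnitude over training and watch the hard feature's gradient collapse as the easy feature fits the data. This is a hedged illustration under the same assumed synthetic setup (feature scales 3.0 and 0.3), not the project's measurement code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same assumed setup: easy high-signal and hard low-signal features.
n = 2000
y = rng.integers(0, 2, n) * 2 - 1
X = np.stack([y * 3.0 + rng.normal(0, 1, n),   # easy feature
              y * 0.3 + rng.normal(0, 1, n)],  # hard feature
             axis=1)

w = np.zeros(2)
lr = 0.1
grad_hist = []
for step in range(300):
    p = 1 / (1 + np.exp(-y * (X @ w)))
    grad = ((1 - p) * y) @ X / n               # per-feature gradient
    grad_hist.append(np.abs(grad))
    w += lr * grad

grad_hist = np.array(grad_hist)
# Evidence of gradient starvation: once the easy feature drives
# (1 - p) toward zero, the hard feature's gradient shrinks too,
# even though the hard feature was never learned well.
print(f"hard-feature gradient, step 0:   {grad_hist[0, 1]:.4f}")
print(f"hard-feature gradient, step 299: {grad_hist[-1, 1]:.4f}")
```

Comparing the two columns of `grad_hist` over time gives the relative learning speed; the ratio of final weight magnitudes is one simple proxy for training-time feature dominance.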


Powered by BOINC
© 2026 Axiom Project