Experiment: Curriculum Learning Dynamics


Category: Machine Learning

Summary: Testing how ordering training examples by difficulty changes learning speed, generalization, and width dependence in neural-network training.


Neural networks may already follow an implicit curriculum by fitting easy examples before hard ones, which raises a practical and scientific question: does an explicit curriculum help, or does it interfere with the order SGD would naturally discover? This experiment compares easy-first, hard-first, mixed, and randomly shuffled training sequences on the same synthetic task.
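The four orderings compared above can be sketched as a small scheduling helper. This is an illustrative implementation, not the project's actual code; the function name, the "mixed" interleaving rule (alternating easiest and hardest remaining examples), and the seed are assumptions.

```python
import random

def order_examples(examples, difficulties, schedule, seed=0):
    """Order training examples under one of four curriculum schedules.

    difficulties: per-example difficulty scores (higher = harder).
    schedule: "easy_first", "hard_first", "mixed", or "random".
    """
    rng = random.Random(seed)
    indexed = list(zip(examples, difficulties))
    if schedule == "easy_first":
        indexed.sort(key=lambda pair: pair[1])
    elif schedule == "hard_first":
        indexed.sort(key=lambda pair: -pair[1])
    elif schedule == "mixed":
        # Assumed interleaving: alternate easiest-remaining and
        # hardest-remaining examples after sorting by difficulty.
        indexed.sort(key=lambda pair: pair[1])
        mixed, lo, hi = [], 0, len(indexed) - 1
        while lo <= hi:
            mixed.append(indexed[lo]); lo += 1
            if lo <= hi:
                mixed.append(indexed[hi]); hi -= 1
        indexed = mixed
    elif schedule == "random":
        rng.shuffle(indexed)
    else:
        raise ValueError(f"unknown schedule: {schedule}")
    return [ex for ex, _ in indexed]
```

With this interface, all four conditions differ only in the order of the same examples, which is the property the comparison relies on.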

The design controls example difficulty with noise, then sweeps network width and learning rate while recording train and test accuracy, convergence timing, loss curves, and accuracy by difficulty level. That allows the study to separate faster optimization from genuinely better learning dynamics.
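A sweep like the one described crosses every schedule with every width and learning rate. A minimal sketch of the configuration grid follows; the specific width and learning-rate values are placeholders, not the ones the project actually uses.

```python
from itertools import product

# Hypothetical grid values; the project's actual sweep ranges are not published here.
SCHEDULES = ["easy_first", "hard_first", "mixed", "random"]
WIDTHS = [16, 64, 256]
LEARNING_RATES = [0.01, 0.1]

def sweep_configs():
    """Enumerate every (schedule, width, learning-rate) training configuration."""
    return [
        {"schedule": s, "width": w, "lr": lr}
        for s, w, lr in product(SCHEDULES, WIDTHS, LEARNING_RATES)
    ]
```

Running each configuration with identical data and logging (train/test accuracy, convergence epoch, loss curves, per-difficulty accuracy) is what lets the analysis attribute differences to the schedule rather than to the optimizer settings.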

The broader goal is to test whether ordering examples reshapes feature competition in a predictable way. If explicit curricula help only in some width or learning-rate regimes, that would suggest the benefit depends on the underlying optimization dynamics rather than on a universal rule of presentation order.

Method: Synthetic classification training sweeps crossing example-order schedule, width, and learning rate while tracking per-difficulty performance over 150 epochs.
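One way to realize "difficulty controlled with noise" on a synthetic classification task is to flip labels with a probability that grows with an example's difficulty level. The generator below is a sketch under that assumption; the dimensionality, the five difficulty levels, and the 0.4 flip-rate ceiling are all illustrative choices, not the project's parameters.

```python
import random

def make_example(difficulty, rng):
    """Generate one 2-D synthetic example whose label noise scales with difficulty.

    difficulty is in [0, 1]; the clean label is the sign of the first
    coordinate, flipped with probability 0.4 * difficulty (assumed scheme).
    """
    x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    label = 1 if x[0] > 0 else 0
    if rng.random() < 0.4 * difficulty:
        label = 1 - label  # harder examples are noisier
    return x, label, difficulty

def make_dataset(n, rng=None):
    """Build n examples cycling through assumed discrete difficulty levels."""
    rng = rng or random.Random(0)
    levels = [0.0, 0.25, 0.5, 0.75, 1.0]
    return [make_example(levels[i % len(levels)], rng) for i in range(n)]
```

Because each example carries its difficulty tag, the same dataset supports every schedule and the per-difficulty accuracy tracking described below.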

What is measured: Train and test accuracy, convergence epoch, loss curves, per-difficulty accuracy, and dependence on width and learning rate.
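Of the quantities listed, per-difficulty accuracy is the least standard; it amounts to grouping test predictions by each example's difficulty tag before averaging. A minimal sketch (helper name and interface are assumptions):

```python
from collections import defaultdict

def accuracy_by_difficulty(predictions, labels, difficulties):
    """Return {difficulty_level: accuracy} over aligned prediction/label lists."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, d in zip(predictions, labels, difficulties):
        total[d] += 1
        correct[d] += int(p == y)
    return {d: correct[d] / total[d] for d in total}
```

Tracking this per epoch is what reveals whether a network fits easy examples first, the implicit-curriculum behavior the experiment is probing.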


© 2026 Axiom Project