Experiment: Compositionality Critical Period



Category: Machine Learning

Summary: Identifying whether compositional generalization has a training-time inflection point and whether targeted interventions at that moment can preserve it.


Compositional generalization can degrade during training even when in-distribution performance keeps improving. This experiment asks whether there is a distinct critical period where out-of-distribution compositional accuracy peaks and then begins to collapse, especially in wider or deeper networks.

The script first trains baseline models while monitoring out-of-distribution accuracy to locate an inflection epoch. It then branches from that checkpoint with interventions such as learning-rate reduction or weight-decay injection and compares the final compositional gap against the unmodified continuation.

That makes the project a timing-sensitive intervention study rather than a static architecture comparison. The aim is to determine whether preserving compositionality depends on acting at the right training moment, not just choosing the right regularizer in advance.

Method: Baseline training to detect the OOD-accuracy inflection epoch, followed by branch interventions such as learning-rate reduction and weight-decay injection from that checkpoint.
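One way the branching step might be organized is a table of intervention transforms applied to a shared base configuration at the checkpoint. This is a hypothetical sketch: the intervention names, the 10x learning-rate drop, and the injected weight-decay value are placeholders, not parameters taken from the experiment.

```python
# Hypothetical intervention registry; names and magnitudes are illustrative.
INTERVENTIONS = {
    "baseline":  lambda cfg: dict(cfg),                                   # unmodified continuation
    "lr_drop":   lambda cfg: {**cfg, "lr": cfg["lr"] * 0.1},              # learning-rate reduction
    "wd_inject": lambda cfg: {**cfg, "weight_decay": cfg["weight_decay"] + 1e-4},
}

def branch_configs(base_cfg, names):
    """Produce one training config per intervention, all branching from
    the same inflection-epoch checkpoint config."""
    return {name: INTERVENTIONS[name](base_cfg) for name in names}
```

Each branched config would resume training from the shared checkpoint, so any difference in the final compositional gap is attributable to the intervention rather than to earlier training history.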

What is measured: Inflection epoch, out-of-distribution accuracy, compositional gap, effect of intervention strategy, and dependence on width and depth.
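The compositional gap and the effect of an intervention could be computed as below, assuming the gap is defined as in-distribution accuracy minus out-of-distribution compositional accuracy at the same point in training; that definition and the function names are assumptions for illustration.

```python
def compositional_gap(id_acc, ood_acc):
    """In-distribution accuracy minus OOD compositional accuracy."""
    return id_acc - ood_acc

def intervention_effect(baseline_final, branch_final):
    """Reduction in final compositional gap relative to the unmodified
    continuation; positive values mean the intervention helped.
    Each argument is an (id_acc, ood_acc) pair at the end of training."""
    return compositional_gap(*baseline_final) - compositional_gap(*branch_final)
```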


Powered by BOINC
© 2026 Axiom Project