Experiment: Compositional Generalization in Neural Networks

Category: Machine Learning

Summary: Testing whether small neural networks learn reusable compositional rules or merely memorize familiar feature combinations.


Humans can often apply known rules to combinations they have never seen before, while neural networks frequently struggle with that kind of systematic recombination. This experiment asks whether simple multilayer perceptrons trained on a synthetic structured task can learn the underlying compositional rule rather than just the training combinations.

The task is built from four independent feature groups whose binary signals jointly determine the label. Training deliberately omits some combinations, and evaluation then checks both ordinary in-distribution performance and accuracy on the held-out compositions. Width, depth, and learning rate are swept to see which training setups encourage rule-like generalization.
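The task setup described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the parity labeling rule, the holdout fraction, and all names are assumptions introduced here for concreteness.

```python
from itertools import product
import random

N_GROUPS = 4  # four independent binary feature groups

def label(combo):
    # Assumed compositional rule for illustration only:
    # the label is the parity of the active groups.
    return sum(combo) % 2

def make_splits(holdout_frac=0.25, seed=0):
    """Split all 2**N_GROUPS combinations into training and held-out sets."""
    combos = list(product([0, 1], repeat=N_GROUPS))  # all 16 combinations
    rng = random.Random(seed)
    rng.shuffle(combos)
    n_held = int(len(combos) * holdout_frac)
    held_out = combos[:n_held]  # combinations deliberately omitted from training
    train = combos[n_held:]     # in-distribution combinations
    return ([(c, label(c)) for c in train],
            [(c, label(c)) for c in held_out])

train_set, holdout_set = make_splits()
```

Evaluating on `holdout_set` then tests whether a model has learned the labeling rule itself, since those combinations never appear during training.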

That makes the experiment a clean diagnostic for how networks balance abstraction against memorization. Success on the held-out combinations would indicate that the model has captured the reusable structure of the task, not just its most common surface patterns.

Method: Synthetic compositional classification with systematic holdouts, evaluated across MLP widths, depths, and learning rates.

What is measured: In-distribution accuracy, held-out compositional accuracy, generalization gap, and dependence on width, depth, and learning rate.
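The measured quantities can be computed in a few lines. The sketch below is hypothetical: the `ParityModel` stub stands in for a trained MLP, and the tiny datasets are illustrative, not the experiment's actual evaluation code.

```python
class ParityModel:
    """Hypothetical stub standing in for a trained MLP; it has learned
    an assumed parity rule perfectly, so its generalization gap is zero."""
    def predict(self, x):
        return sum(x) % 2

def accuracy(model, dataset):
    correct = sum(model.predict(x) == y for x, y in dataset)
    return correct / len(dataset)

def generalization_gap(model, train_set, holdout_set):
    # In-distribution accuracy minus held-out compositional accuracy.
    # A large positive gap suggests memorization of familiar combinations
    # rather than a reusable rule.
    return accuracy(model, train_set) - accuracy(model, holdout_set)

# Illustrative labeled data: (feature-group tuple, parity label)
train_set = [((0, 0, 1, 1), 0), ((1, 0, 0, 0), 1)]
holdout_set = [((1, 1, 1, 0), 1)]
model = ParityModel()
```

Repeating this measurement across the swept widths, depths, and learning rates then shows which configurations shrink the gap.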


Powered by BOINC
© 2026 Axiom Project