AXIOM BOINC EXPERIMENT REVIEW — Session s0302e
Date: March 2, 2026 ~19:30 UTC
================================================================

SUMMARY
-------
Credited 323 completed results (1,594.7 total credit).
Deployed ~1,866 new workunits (1,777 CPU + 89 GPU) to ~70 hosts.
No stuck or broken tasks requiring cleanup.
Fleet: ~85 active hosts, now fully loaded.

CREDIT AWARDED (1,594.7 total, 323 results)
--------------------------------------------
Per user:
  Steve Dodd:    801.7 credit (Dads-PC + DadOld-PC, heavy microscalev2 runs)
  ChelseaOilman: 620.2 credit (Hotel-3, Dell-9520, Charlie/Delta/Echo/Foxtrot fleet)
  Anandbhat:      83.5 credit (DESKTOP-EMAFVVL)
  WTBroughton:    35.2 credit (achernar — legacy grokking + lottery results)
  marmot:         32.2 credit (XYLENA)
  kotenok2000:    15.7 credit (DESKTOP-P57624Q)
  dthonon:         3.6 credit (thonon-meylan)
  [DPC] hansR:     1.6 credit (dbgrensenh27)
  Vato:            1.0 credit (iand-r7-5800h)

Per experiment type:
  compgen: 44, bottleneck: 41, regcomp: 36, curriculum: 34,
  repalignv2: 34, featcompv2: 34, memdynv2: 31, microscalev2: 25,
  orthocomp: 24, neuronspec: 18, grokking: 1, lottery: 1

DEPLOYMENT (1,866 workunits: 1,777 CPU + 89 GPU)
-------------------------------------------------
Experiment priority (weighted):
  bottleneck_compositionality.py     — weight 3 (strongest finding, needs 50+ seeds)
  orthogonality_compositionality.py  — weight 2 (growing confirmation)
  regularized_compositionality.py    — weight 1
  compositional_generalization.py    — weight 1
  feature_competition_dynamics_v2.py — weight 1
  representation_alignment_v2.py     — weight 1
  neuron_specialization.py           — weight 1
  micro_scaling_laws_v2.py           — weight 1 (big hosts only, >=32 cores & >=60GB RAM)

GPU workunits: bottleneck_gpu, orthocomp_gpu, neuronspec_gpu, regcomp_gpu
  (1 GPU WU per GPU on each GPU-capable host).

Major hosts filled:
  DESKTOP-N5RAJSE (192 cores, 2 GPU)  — 192 CPU + 2 GPU WUs
  7950x           (128 cores, 1 GPU)  — 128 CPU + 1 GPU WU
  SPEKTRUM        (72 cores, 2 GPU)   —  72 CPU + 2 GPU WUs
  JM7             (64 cores, 1 GPU)   —  64 CPU + 1 GPU WU
  DadOld-PC       (80 cores, 50 idle) —  50 CPU + 2 GPU WUs
  Dads-PC         (80 cores, 38 idle) —  38 CPU + 2 GPU WUs
  Plus ~65 additional hosts with 4 to 32 cores each.

Skipped hosts: Latitude (4GB RAM), Athlon-x2-250 (3GB RAM), alix (SSL),
archlinux (SSL), MSI-B550-A-Pro (exit_status=203), Rosie (exit_status=195).

SYSTEM STATUS
-------------
No stuck tasks (>12h on dead hosts or >48h on live hosts).
No new crash patterns in active experiments.
Over-queued hosts still draining from previous sessions: 113, 137, 159, 219, 222, 319.
Website counters updated: credited=2901, total_results=26175.

KEY SCIENTIFIC FINDINGS
=======================
1. BOTTLENECK COMPOSITIONALITY continues its strong positive signal. This
   session adds ~41 bottleneck results to the growing pool. The bottleneck
   layer [Wide→Bottleneck→Wide] remains the STRONGEST intervention discovered
   for rescuing compositional generalization in wide networks. Previous
   finding: gap reduction of 4-8% absolute (vs 1-2% for orthogonality, 0% for
   pruning). Best configs: w128_b16, w256_b32. Hierarchy confirmed:
   bottleneck >> ortho >> dropout >> pruning. (A minimal sketch of the
   bottleneck and orthogonality interventions follows this list.)

2. ORTHOGONALITY COMPOSITIONALITY accumulates ~24 more seeds this session.
   The soft orthogonality penalty gives a modest but real rescue of
   compositionality. ortho_rescues_compositionality=true is consistent;
   ortho_helps_wide_networks remains MIXED.

3. REGULARIZED COMPOSITIONALITY adds ~36 regcomp results. Dropout is
   width-dependent: it helps narrow networks more than wide ones. Growing
   confirmation.

4. MICRO SCALING LAWS continue to show a mixed signal on data scaling at
   micro scale. Parameter scaling consistently does NOT hold. ~25 more
   results this session, from the heavy-compute hosts.

5. Cross-validation status update: Findings 27 (feature competition, ~160+
   seeds), 28 (representation alignment, ~150+), and 31 (compositional
   generalization, ~160+) continue to mature with additional seeds from this
   session.
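Illustrative sketch (not project code): a minimal PyTorch-style rendering of the
bottleneck intervention from Finding 1 and a soft orthogonality penalty of the
kind referenced in Finding 2, assuming a plain wide-MLP setup. Class and
function names, layer counts, dimensions, and the exact penalty form are
assumptions for illustration; bottleneck_compositionality.py and
orthogonality_compositionality.py may differ in detail.

    # Hedged sketch only; names and dimensions are illustrative assumptions.
    import torch
    import torch.nn as nn

    class BottleneckMLP(nn.Module):
        """Wide -> Bottleneck -> Wide: a low-dimensional middle layer forces a
        compressed representation between the two wide hidden layers."""
        def __init__(self, d_in: int, width: int, bottleneck: int, d_out: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, width), nn.ReLU(),
                nn.Linear(width, bottleneck), nn.ReLU(),   # compression
                nn.Linear(bottleneck, width), nn.ReLU(),   # re-expansion
                nn.Linear(width, d_out),
            )
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def soft_ortho_penalty(layer: nn.Linear) -> torch.Tensor:
        """One common form of soft orthogonality regularization (an assumption
        here): ||W W^T - I||_F^2 on a hidden layer's weights, added to the
        task loss with a small coefficient."""
        W = layer.weight
        eye = torch.eye(W.shape[0], device=W.device)
        return ((W @ W.T - eye) ** 2).sum()

    # The w128_b16 config named in Finding 1 maps to (width=128, bottleneck=16) here.
    model = BottleneckMLP(d_in=32, width=128, bottleneck=16, d_out=8)
    x = torch.randn(4, 32)
    print(model(x).shape)                         # torch.Size([4, 8])
    print(soft_ortho_penalty(model.net[0]).item())  # scalar penalty >= 0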
NEXT SESSION PRIORITIES
-----------------------
- Continue bottleneck compositionality replication toward the 100+ seed target.
- Monitor orthocomp for convergence on the "ortho_helps_wide_networks" mixed signal.
- Consider designing a new experiment, "random projection bottleneck", to test
  whether the bottleneck benefit comes from dimensionality reduction per se or
  from learned compression. If a frozen random projection [W→Random_Fixed→W]
  also helps compositionality, the mechanism is purely the architectural
  constraint; if only the trained bottleneck helps, it is about learning the
  right compression. (A sketch of this control follows the list.)
- Retired experiments remain retired. No new experiments were deployed this session.
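Illustrative sketch (not project code) of the proposed random-projection
control, assuming the same PyTorch-style setup as the sketch above; the module
name, layer counts, and dimensions are assumptions.

    # Hedged sketch of [W -> Random_Fixed -> W]: the compression layer is
    # frozen at initialization and never trained.
    import torch
    import torch.nn as nn

    class RandomProjectionBottleneckMLP(nn.Module):
        """Wide -> frozen random projection -> Wide. Only the surrounding
        wide layers learn; the projection stays at its random init."""
        def __init__(self, d_in: int, width: int, bottleneck: int, d_out: int):
            super().__init__()
            self.pre = nn.Sequential(nn.Linear(d_in, width), nn.ReLU())
            self.proj = nn.Linear(width, bottleneck, bias=False)
            self.proj.weight.requires_grad_(False)   # frozen: excluded from training
            self.post = nn.Sequential(
                nn.Linear(bottleneck, width), nn.ReLU(),
                nn.Linear(width, d_out),
            )
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.post(torch.relu(self.proj(self.pre(x))))

    model = RandomProjectionBottleneckMLP(d_in=32, width=128, bottleneck=16, d_out=8)
    trainable = {name for name, p in model.named_parameters() if p.requires_grad}
    assert "proj.weight" not in trainable   # the projection is not optimized

Running this variant and the trained bottleneck on the same compositional
splits, matched seed for seed, would separate "dimensionality reduction per se"
from "learned compression".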