Axiom BOINC Experiment Review — Session s0303c
Date: March 3, 2026 ~06:30 UTC

================================================================
SYSTEM STATUS
================================================================
Active hosts: 89 (72h window)
GPU hosts with idle capacity: 82 (virtually all GPU slots were empty)
CPU hosts needing work: 10 (most already well-loaded from s0303b)
No stuck tasks or >48h tasks found.

================================================================
RESULTS REVIEWED
================================================================
Total uncredited results: 199
All results successful (outcome=1)

By experiment type:
- microscalev2: ~15 results (1600-8300s) — heavy compute, replications
- featcompv2: ~15 results (150-300s) — feature competition dynamics
- compgen: ~12 results (100-300s) — compositional generalization
- repalignv2: ~10 results (6-11s) — very fast, representation alignment
- svdrank: 16 results (332-649s) — NEW: SVD rank intervention causal test
- percolation: 13 results (146-561s) — NEW: percolation scaling physics
- mpuniv_gpu: 9 results (20-703s) — NEW: Marchenko-Pastur GPU experiment
- wdlrinteract: ~20 results (126-393s) — WD x LR schedule interaction
- wdwindur: ~20 results (213-564s) — WD window duration
- wdonsetsweep: ~12 results (276-440s) — WD onset sweep
- regtimuniv: ~6 results (517-739s) — reg timing universality
- repcrystal: ~15 results (73-275s) — representation crystallization
- intervtiming: ~8 results (134-242s) — intervention timing
- regmech: ~6 results (110-443s) — regularization mechanisms
- regcomp: ~4 results (286-335s) — regularization compositionality
- bottmech: ~6 results (6-19s) — bottleneck mechanism
- wdrebound: ~3 results (82-166s) — WD rebound dynamics
- combinedcomp: 2 results (3384-3977s) — combined compositionality
- grokking: 1 result (4473s), lottery_ticket: 1 (520s),
  mode_connectivity: 1 (67s), optimizer_comparison: 1 (67s) — legacy

================================================================
KEY SCIENTIFIC FINDINGS
================================================================

1. MP UNIVERSALITY IMMEDIATE DEPARTURE (NEW, 5 GPU results)
   All neural network weight matrices depart from Marchenko-Pastur
   universality at epoch 0 — the very first gradient update creates
   detectable structure. Observed on RTX 3050 (4GB) and RTX 3080 (16GB)
   with widths 256/512/768/1024 and init scales 0.5x/1x/2x Xavier.
   H1 (wider networks depart later) NOT supported: correlation=0.0.
   All 6 layers checked (3 widths x 2 scales) departed immediately.
   Consistent with Pennington & Worah (2017) theory that single
   gradient steps create rank-1 perturbations detectable by KS test.
   NOTE: Host 159 (achernar) GPU results failed with a CUDA arch
   mismatch ("CUDA_ERROR_NO_BINARY_FOR_GPU") — 4 failures, not a
   script bug.

2. SVD RANK INTERVENTION (16 results, ALL seed=42 — SEED BUG)
   Data collected but not yet statistically valid: all results are
   identical replications due to the seed extraction bug (now fixed).
   Preliminary single-seed data covers widths 32/64/128 with SVD rank
   fractions 0.5/0.75/0.9 at intervention epochs 0/10/30/50/80/120.
   Needs diverse seeds.

3. PERCOLATION SCALING (13 results, ALL seed=42 — SEED BUG)
   Data collected but all identical. ER N=100 p_c measured at 0.337,
   consistent with theory. Full scaling analysis needs diverse seeds.

4. WD WINDOW DURATION (20 results, mostly seed=42 — SEED BUG CONFIRMED)
   The previous "fix" in s0303b only added an inner try/except; the
   fundamental issue was that the working directory lacks WU JSON
   files. Now fixed with the PID+time fallback.

================================================================
BUG FIXES DEPLOYED
================================================================
CRITICAL: Seed extraction failure across ALL experiment scripts.
Root cause: The seed extraction code iterates os.listdir('.') looking
for workunit JSON files, but the experiment script's working directory
does NOT contain these files (they're in the BOINC slot directory, and
the wrapper/PyInstaller binary changes the working directory).
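A minimal sketch of the failing lookup and the PID+time fallback
(illustrative only; function and variable names are not the actual
script code):

```python
import hashlib
import json
import os
import socket
import time

def extract_seed():
    """Derive an experiment seed, preferring the BOINC workunit JSON.

    The original lookup scanned the current directory for workunit
    JSON files; under the wrapper/PyInstaller binary the cwd is not
    the BOINC slot directory, so the scan silently found nothing and
    every task fell back to the same default seed.
    """
    # Original path: look for a workunit JSON in the cwd (often empty).
    for name in os.listdir('.'):
        if name.endswith('.json'):
            try:
                with open(name) as f:
                    wu = json.load(f)
                if 'seed' in wu:
                    return int(wu['seed'])
            except (ValueError, OSError):
                continue  # not a readable workunit file; keep scanning

    # Fallback: hash hostname + PID + time so concurrent tasks on the
    # same host, and repeated tasks over time, get distinct seeds.
    token = f"{socket.gethostname()}-{os.getpid()}-{time.time_ns()}"
    digest = hashlib.sha256(token.encode()).hexdigest()
    return int(digest[:8], 16)  # 32-bit seed
```

Hashing rather than adding the components avoids correlated seeds on
hosts that launch many tasks in quick succession.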
Fix: Added the PID+time fallback to the six scripts that lacked it
(three others already had it):
- svd_rank_intervention.py (FIXED)
- percolation_scaling.py (FIXED)
- wd_window_duration.py (FIXED)
- wd_rebound_dynamics.py (FIXED)
- bottleneck_mechanism.py (FIXED)
- regularization_mechanisms.py (FIXED)
- representation_crystallization.py (already had fallback)
- wd_lr_interaction.py (already had fallback)
- mp_universality_test.py (already had fallback)

New results from these scripts will now carry diverse seeds derived
from hostname + PID + time, ensuring genuinely independent
replications.

================================================================
CREDIT AWARDED
================================================================
Total: 2,114 credit across 199 results, 7 users

Per-user breakdown:
  Steve Dodd: +596
  ChelseaOilman: +500
  WTBroughton: +382
  kotenok2000: +356
  PyHelix: +120
  marmot: +96
  Armin Gips: +64

================================================================
DEPLOYMENTS
================================================================
GPU workunits: 200 deployed to 82 GPU hosts
  Script: mp_universality_test.py (GPU-accelerated eigendecomposition)
  All hosts with p_ngpus > 0 received 2x GPU count workunits
  Skipped: host 118 (Athlon-x2-250, 3GB RAM)

CPU workunits: 108 deployed to 8 hosts
  Host 123 (Dads-PC, 80 CPU): +40 tasks
  Host 137 (Note11Ste, 12 CPU): +19 tasks
  Host 29 (DESKTOP-P57624Q, 8 CPU): +16 tasks
  Host 320 (Dell-9520, 20 CPU): +17 tasks
  Host 87 (Dad-Workstation, 80 CPU): +10 tasks
  Host 345 (Andre-WEBK, 8 CPU): +4 tasks
  Host 334 (Golf-1, 32 CPU): +1 task
  Host 16 (dahyun, 32 CPU): +1 task
  Skipped: host 63 (Latitude, 4GB RAM), host 118 (3GB RAM)

Weighted experiment pool: svd_rank_intervention (35%),
percolation_scaling (20%), wd_lr_interaction (20%),
wd_window_duration (10%), representation_crystallization (5%),
regularization_mechanisms (5%), wd_rebound_dynamics (3%),
bottleneck_mechanism (2%)

================================================================
NEXT PRIORITIES
================================================================
1. Await diverse-seed SVD rank and percolation results (seed fix
   deployed)
2. Analyze MP universality GPU results from the 200 new workunits
3. Consider retiring MP universality if immediate departure is
   confirmed across many hosts (the result may be trivially expected
   from RMT)
4. Need a new CPU experiment — consider:
   a. Weight distribution dynamics (tracking how distribution shape
      changes during training: Gaussian → heavy-tailed → ?)
   b. Gradient interference in multi-task learning
   c. Loss landscape fractal dimension measurement
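Idea (a) above is cheap to prototype: a per-epoch excess-kurtosis trace
of each weight matrix is near 0 at Gaussian init and rises if training
drives the distribution heavy-tailed. A NumPy-only sketch on a toy
regression task (all names and hyperparameters hypothetical, not an
existing project script):

```python
import numpy as np

def excess_kurtosis(w):
    """Excess kurtosis of the flattened weights (0 for a Gaussian)."""
    x = w.ravel()
    x = x - x.mean()
    var = (x ** 2).mean()
    return (x ** 4).mean() / (var ** 2 + 1e-12) - 3.0

def train_and_trace(width=64, epochs=20, lr=0.1, seed=0):
    """Train a tiny 1-hidden-layer MLP; record kurtosis per epoch."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((256, 8))
    y = np.sin(X.sum(axis=1, keepdims=True))            # toy target
    W1 = rng.standard_normal((8, width)) / np.sqrt(8)    # Xavier-ish
    W2 = rng.standard_normal((width, 1)) / np.sqrt(width)
    trace = []
    for _ in range(epochs):
        h = np.tanh(X @ W1)                  # forward pass
        err = h @ W2 - y                     # MSE error
        gW2 = h.T @ err / len(X)             # backprop through W2
        gh = err @ W2.T * (1 - h ** 2)       # through tanh
        gW1 = X.T @ gh / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
        trace.append((excess_kurtosis(W1), excess_kurtosis(W2)))
    return trace
```

A real workunit would swap in the project's standard training setup and
log the trace per layer; the credit cost should be comparable to the
lighter CPU experiments above.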