AXIOM BOINC SESSION RESULTS

Session timestamp: 2026-03-04 09:13 (America/Denver)
Source logs: validate_2026-03-04_0907.txt, run_2026-03-03_1352.log

PART 1: VALIDATION, CREDIT, AND CLEANUP

- Reviewed 195 completed uploads; all 195 contained valid xperiment_result science payloads (no error-json payloads in the credited batch).
- Credit awarded: 2302 total across 195 results (CPU=183, GPU=12).
- Runtime profile for the credited batch: min 50.39s, median 845.06s, max 901.72s.
- Dominant experiment families in the reviewed batch: cstr_poison_purge_reentrance (20), toggle_delay_adaptive_control (17), grayscott_delay_pulse_feedback (11), tritrophic_delay_harvest_resilience (10), metapop_allee_corridor_hysteresis (10).
- Stuck/broken cleanup: dead-host >12h aborts=0; hard >48h aborts=0; broken-prefix broad aborts=0.
- Website counters after the pass: credited_count=5377, total_results_count=4896.
- Post-session queue snapshot: completed-success uncredited=0, completed-failed uncredited=0.

PART 2: RESEARCH, DEPLOYMENT, AND DESIGN

- Retirement re-check executed against over-completed families (svdrank, percolation, wdlr, wdwindow, wdoptim, etc.); the unsent backlog was 0, so ABORT_TOTAL=0.

CPU deployment

- Deployment strategy: host-targeted fill to ~3x the CPU queue target, skipping hosts with <6 GB RAM.
- CPU scripts used:
  1) wd_batchnoise_interaction.py
  2) wd_labelsmooth_interaction.py
- Deployment metrics from the run log: CPU_HOSTS_SEEN=81, CPU_SKIPPED_LOW_RAM=2, CPU_WU_CREATED=2937.
- Host targeting: work was queued to active CPU hosts needing additional work (RAM-qualified hosts only).

GPU deployment checkpoint

- GPU scripts listed in the run log:
  1) wd_curvature_trigger_gpu.py
  2) wd_timing_scale_gpu.py
- The Part 2 run log ended during execution of the GPU deploy command, so no final GPU summary lines were written in that log.
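Because the run log ended mid-deploy, the GPU state was reconstructed from the live queued/running set. A minimal sketch of that kind of aggregation, assuming rows of (script, host) pairs have already been fetched from the project database (the fetching step and all names here are illustrative, not the project's actual tooling):

```python
from collections import defaultdict

def gpu_checkpoint(rows):
    """Summarize active GPU workunits from (script_name, host_id) pairs.

    Returns per-script counts plus the combined unique-host and total-WU
    figures. How `rows` is queried from the server DB is project-specific
    and not shown here.
    """
    per_script = defaultdict(lambda: {"wus": 0, "hosts": set()})
    all_hosts = set()
    total_wus = 0
    for script, host in rows:
        per_script[script]["wus"] += 1       # one row = one active WU
        per_script[script]["hosts"].add(host)
        all_hosts.add(host)
        total_wus += 1
    return per_script, len(all_hosts), total_wus
```

Counting unique hosts separately from per-script host counts matters here: the per-script host sets can overlap, which is why 3 + 7 script-level hosts can collapse to 7 unique hosts in the combined checkpoint below.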
- Live checkpoint measured after log review (current queued/running set):
  - wd_curvature_trigger_gpu: 8 active WUs across 3 GPU hosts
  - wd_timing_scale_gpu: 12 active WUs across 7 GPU hosts
  - Combined GPU checkpoint: 7 unique GPU hosts, 20 active GPU workunits

NEW EXPERIMENTS DESIGNED + NOVELTY CHECK

- New experiment script created and compiled: wd_batchnoise_interaction.py (py_compile OK).
- Scientific intent: test whether late weight-decay gains are stronger under high gradient-noise (small-batch) conditions than under low-noise (large-batch) conditions.
- Novelty-check queries (captured in the run log):
  - site:arxiv.org weight decay label smoothing interaction
  - site:arxiv.org adaptive weight decay deep neural networks
  - site:arxiv.org batch size weight decay generalization
  - "weight decay" "batch size" "schedule" neural networks
  - arxiv 1711.05101 decoupled weight decay regularization
- Conclusion from the check: existing literature covers decoupled/scheduled weight decay and batch-size generalization broadly; this specific late-WD x batch-noise interaction framing was treated as sufficiently novel for deployment.

KEY SCIENTIFIC FINDINGS

1. Science payload integrity in this validation batch was high: 195/195 credited uploads were valid xperiment_result outputs, with no error-json-only rows.
2. The runtime distribution remained substantial (median ~845s), supporting meaningful volunteer compute contribution rather than trivial instant-return artifacts.
3. The current validated signal remains concentrated in delay-ecology/control families, with the highest throughput from cstr_poison_purge_reentrance and toggle_delay_adaptive_control.
4. No active multi-host high-failure crash signature required broad experiment aborts in this session.
5. A new mechanism-testing line (wd_batchnoise_interaction) was added to discriminate whether late-WD gains are noise-regime dependent, extending beyond prior fixed-timing WD investigations.
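The runtime-integrity check behind finding 2 can be sketched as below. This is a minimal illustration, not the project's validator; the 10-second trivial-return cutoff is an assumed threshold chosen for the example, not a project constant.

```python
import statistics

def runtime_profile(runtimes_s, trivial_cutoff_s=10.0):
    """Profile credited-result runtimes and flag trivial instant returns.

    `runtimes_s` is a list of per-result runtimes in seconds.
    `trivial_cutoff_s` is an illustrative threshold below which a result
    is treated as an instant-return artifact rather than real compute.
    """
    trivial = [t for t in runtimes_s if t < trivial_cutoff_s]
    return {
        "min": min(runtimes_s),
        "median": statistics.median(runtimes_s),
        "max": max(runtimes_s),
        "trivial_count": len(trivial),
    }
```

Applied to this session's credited batch, the median (~845s) sitting far above any instant-return cutoff is what supports reading the batch as genuine volunteer compute.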