AXIOM BOINC SESSION RESULTS (Part 3 Consolidation)
Session timestamp: 2026-03-04 14:30 (America/Denver)
Source logs: validate_2026-03-04_1428.txt + run_2026-03-03_1352.log

RESULTS REVIEWED AND CREDIT AWARDED (Part 1)
- Reviewed and credited 20 completed success results (non-cumulative; the DB remains the source of truth).
- Payload audit: 20/20 resolved upload files parsed as valid JSON with `experiment_result` objects (audit logic sketched below).
- Elapsed-time profile of credited batch: min=822.12s, median=846.51s, max=1022.45s.
- Total credit awarded this session: 405 (under the 10,000 cap; cap handling sketched below).
- Per-user credit additions: Orange Kid +100; Steve Dodd +84; ChelseaOilman +80; WTBroughton +61; Armin Gips +40; PyHelix +20; vanos0512 +20.
- Counter updates from validation log: credited_count 1678 -> 1698; total_results_count 1974 -> 1979.

STUCK/BROKEN TASK CLEANUP
- Stuck dead-host cleanup (>12h running, host silent >6h): 0 aborted (both cleanup rules sketched below).
- Hard-ceiling cleanup (>48h running): 0 aborted.
- Broken-experiment screening (nonzero-elapsed failures over the last 72h, CPU+GPU): no broad abort of unsent/in-progress work required.
- Retirement pass in Part 2: `ABORT_TOTAL=0` (retirement candidates existed, but the unsent backlog was zero at action time).

DEPLOYMENT SUMMARY (Part 2)

CPU DEPLOYMENT
- Deployment model: host-targeted queue fill to ~3x CPU slots; hosts with RAM <6GB skipped (fill logic sketched below).
- CPU hosts seen: 81.
- CPU hosts skipped for low RAM: 2.
- CPU workunits created: 2937.
- CPU scripts used:
  1. wd_batchnoise_interaction.py (newly added in this run)
  2. wd_labelsmooth_interaction.py

GPU CHECKPOINT
- GPU deployment run initiated with GPU-aware script set:
  1. wd_curvature_trigger_gpu.py
  2. wd_timing_scale_gpu.py
- Run log ended with a manual interrupt (`^C`) before final GPU counters were emitted.
- GPU hosts deployed (from this run log): not recorded due to interruption.
- GPU workunits deployed (from this run log): not recorded due to interruption.

NEW EXPERIMENTS DESIGNED + NOVELTY CHECK NOTES
- New experiment script created and syntax-checked: `wd_batchnoise_interaction.py` (remote `python3 -m py_compile` returned `OK`).
- Intended mechanism test: whether the late weight-decay gain is stronger under high-noise (small-batch) training than under large-batch training (condition grid sketched below).
- Novelty-check search queries recorded in run log:
  1. weight decay batch size interaction neural networks
  2. arxiv weight decay batch size interaction deep learning
  3. Scheduled Weight Decay paper arxiv 2021
  4. site:arxiv.org weight decay label smoothing interaction
  5. site:arxiv.org adaptive weight decay deep neural networks
  6. site:arxiv.org batch size weight decay generalization
  7. arxiv 1711.05101 decoupled weight decay regularization
- Novelty position used for deployment planning: no exact prior match was documented in-session for this specific late-WD x batch-noise interaction protocol and metric bundle.

KEY SCIENTIFIC FINDINGS
1. All 20 newly credited results were science-bearing (`experiment_result`) with zero parse failures, supporting current pipeline integrity.
2. Credited runtimes clustered in a stable band (~822-1022s), consistent with healthy throughput across active experiment families.
3. No active nonzero-elapsed crash cluster (CPU or GPU) was detected in the 72h screening window, so a broad abort was not warranted.
4. Over-seeded retirement candidates remained unsent-clean at deployment time (ABORT_TOTAL=0), indicating queue pressure came from active lines rather than retired-family backlog.
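
ILLUSTRATIVE SKETCHES

The payload audit above checks that every resolved upload file parses as JSON and carries an `experiment_result` object, and the credited batch's elapsed-time profile is a min/median/max summary. A minimal sketch of both steps follows, assuming upload payloads sit as JSON files under a hypothetical `upload_dir` and that elapsed seconds arrive as a list of floats; the function names and directory layout are illustrative, not the project's actual schema.

```python
import json
import statistics
from pathlib import Path

def audit_payloads(upload_dir: str) -> tuple[int, int]:
    """Return (parsed_ok, total): files that parse as JSON and contain
    an `experiment_result` object, out of all files examined."""
    files = sorted(Path(upload_dir).glob("*.json"))
    ok = 0
    for f in files:
        try:
            payload = json.loads(f.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # parse failure: counted against the 20/20 tally
        if isinstance(payload, dict) and isinstance(payload.get("experiment_result"), dict):
            ok += 1
    return ok, len(files)

def elapsed_profile(elapsed_seconds: list[float]) -> dict[str, float]:
    """min/median/max profile like the one reported for the credited batch."""
    return {
        "min": min(elapsed_seconds),
        "median": statistics.median(elapsed_seconds),
        "max": max(elapsed_seconds),
    }
```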
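Credit awarding in Part 1 stayed under a session-wide 10,000-credit cap. A hedged sketch of cap-aware accumulation, using the per-user figures reported above; the behavior at the cap (raising rather than truncating) and the commented-out persistence call are assumptions, since the report only states that the DB remains the source of truth.

```python
SESSION_CAP = 10_000  # session ceiling reported in Part 1

def apply_credit(awards: dict[str, int], cap: int = SESSION_CAP) -> int:
    """Accumulate per-user awards, refusing to exceed the session cap."""
    total = 0
    for user, credit in awards.items():
        if total + credit > cap:
            raise RuntimeError(f"cap {cap} would be exceeded at user {user!r}")
        total += credit
        # db.add_credit(user, credit)  # hypothetical: real persistence is in the DB
    return total

awards = {
    "Orange Kid": 100, "Steve Dodd": 84, "ChelseaOilman": 80,
    "WTBroughton": 61, "Armin Gips": 40, "PyHelix": 20, "vanos0512": 20,
}
assert apply_credit(awards) == 405  # matches the session total above
```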
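The cleanup pass applies two rules: a dead-host rule (>12h running with the host silent >6h) and a hard ceiling (>48h running). A minimal sketch, assuming a hypothetical `Task` record with a start timestamp and a host last-seen timestamp; the real task schema is not shown in the logs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    id: int
    started_at: datetime
    host_last_seen: datetime

def cleanup_candidates(tasks: list[Task], now: datetime) -> list[int]:
    """Return IDs matching either cleanup rule (both returned 0 this session)."""
    stuck = timedelta(hours=12)
    ceiling = timedelta(hours=48)
    silent = timedelta(hours=6)
    out = []
    for t in tasks:
        running = now - t.started_at
        if running > ceiling:
            out.append(t.id)  # hard-ceiling rule
        elif running > stuck and now - t.host_last_seen > silent:
            out.append(t.id)  # dead-host rule
    return out
```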
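The CPU deployment model tops each eligible host's queue up to ~3x its CPU slots and skips hosts under 6 GB RAM. A sketch of the fill planning, alternating the two scripts named above; the host dictionary fields and the returned (host, script) plan are illustrative stand-ins for the project's real scheduler calls.

```python
from itertools import cycle

SCRIPTS = ["wd_batchnoise_interaction.py", "wd_labelsmooth_interaction.py"]
MIN_RAM_GB = 6
FILL_FACTOR = 3  # ~3x CPU slots per host

def plan_fill(hosts: list[dict]) -> list[tuple[int, str]]:
    """hosts: [{'id': ..., 'ram_gb': ..., 'cpu_slots': ..., 'queued': ...}]
    Returns (host_id, script) pairs to create as workunits."""
    plan, scripts = [], cycle(SCRIPTS)
    for h in hosts:
        if h["ram_gb"] < MIN_RAM_GB:
            continue  # low-RAM skip (2 of 81 hosts this run)
        target = h["cpu_slots"] * FILL_FACTOR
        for _ in range(max(0, target - h["queued"])):
            plan.append((h["id"], next(scripts)))
    return plan
```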
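The intended mechanism test crosses batch-noise level with weight-decay timing. A sketch of the 2x2 condition grid that design implies; the batch sizes, decay value, and onset fraction below are illustrative placeholders, not the parameters actually shipped in `wd_batchnoise_interaction.py`.

```python
from itertools import product

BATCH_SIZES = [32, 512]  # high-noise (small-batch) vs. low-noise (large-batch)
WD_SCHEDULES = [
    {"name": "constant", "wd": 5e-4, "onset_frac": 0.0},   # WD on from step 0
    {"name": "late",     "wd": 5e-4, "onset_frac": 0.75},  # WD on for final 25%
]

def condition_grid():
    """Yield one config per cell of the batch-noise x WD-timing grid."""
    for bs, sched in product(BATCH_SIZES, WD_SCHEDULES):
        yield {"batch_size": bs, **sched}

for cond in condition_grid():
    print(cond)
```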