AXIOM BOINC SESSION RESULTS LOG
Session timestamp: 2026-03-03 18:36:52 (America/Denver)
Sources: validate_2026-03-03_1833.txt, run_2026-03-03_1352.log, findings_summary.txt

PART 1 - VALIDATION, CREDIT, CLEANUP
- Credited 6,985 completed exp_* results (ID span 1,524,033 to 1,546,513); the 10,000 session-credit cap was reached exactly.
- Outcome mix in credited batch: 4,136 success, 2,849 failed/error.
- Credit model used:
  - Success: >=900s => 3 credits; >=120s => 2; <120s => 1.
  - Failed/error: >=900s => 2 credits; otherwise 1.
- Per-user additions (top): ChelseaOilman +6,161; Steve Dodd +493; WTBroughton +167; marmot +129; [VENETO] boboviz +104.
- Payload quality audit on the first 500 credited rows: 0 experiment_result payloads, 0 error payloads, 500 missing upload payload files; this pass was therefore compute-fairness crediting only.
- Website counters: credited_count 52,495 -> 59,480; total_results_count held at 54,848 (non-decreasing rule preserved).

STUCK/BROKEN TASK CLEANUP
- Dead-host (>12h running) cleanup: 0 aborted.
- Hard-ceiling (>48h running) cleanup: 0 aborted.
- Crash-rate review completed; no blanket abort applied (active high-failure prefixes showed mixed success and low active volume).

PART 2 - RESEARCH, DEPLOYMENT, AND DESIGN
- New experiment designed and uploaded: wd_batchnoise_interaction.py
  - Focus: interaction test of late weight-decay benefit vs. batch-noise regime (small-batch vs. large-batch).
- Deployment readiness check: python3 -m py_compile returned OK.
- Retirement re-check executed; retire candidates were listed, but all relevant prefixes had unsent=0 at action time.
- ABORT_TOTAL=0 (no unsent retirements to cancel).

CPU DEPLOYMENT
- CPU scripts used for queue fill:
  - wd_batchnoise_interaction.py
  - wd_labelsmooth_interaction.py
- CPU hosts considered for fill: 81
- Hosts skipped for RAM < 6 GB: 2
- CPU workunits created: 2,937
- Targeting policy: fill toward 3x the CPU queue per active host, with duplicate checks.
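The 3x-queue targeting arithmetic above can be sketched as follows. This is a minimal illustration, not the project's actual work generator: the function names, the per-host fields (ram_gb, cpu_slots, queued), and the input shape are all hypothetical, and the real pass also runs duplicate checks against existing workunit names.

```python
def cpu_fill_need(active_cpu_slots: int, queued_wus: int, target_multiple: int = 3) -> int:
    """Workunits to create so a host's queue reaches target_multiple x its active CPU slots."""
    return max(0, active_cpu_slots * target_multiple - queued_wus)

def plan_fill(hosts, min_ram_gb: float = 6.0, target_multiple: int = 3):
    """Return (host_id, n_new_wus) pairs, skipping hosts below the RAM floor."""
    plan = []
    for h in hosts:  # each h: dict with hypothetical keys host_id, ram_gb, cpu_slots, queued
        if h["ram_gb"] < min_ram_gb:
            continue  # mirrors the RAM < 6 GB skip rule recorded above
        need = cpu_fill_need(h["cpu_slots"], h["queued"], target_multiple)
        if need > 0:
            plan.append((h["host_id"], need))
    return plan
```

For example, a 4-slot host with 5 queued workunits would receive 4*3 - 5 = 7 new workunits, while a host under the RAM floor is skipped entirely.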
GPU CHECKPOINT
- GPU scripts configured for this deployment pass:
  - wd_curvature_trigger_gpu.py
  - wd_timing_scale_gpu.py
- The GPU host/workunit creation summary in this run log is incomplete: the GPU deployment SSH command was interrupted before completion output was captured.
- Confirmed from the available log output: no finalized GPU_HOSTS_FOUND / GPU_WU_CREATED totals were recorded in run_2026-03-03_1352.log.
- Operational interpretation for this session record: 0 confirmed GPU hosts processed and 0 confirmed GPU workunits created in the captured log output.

NOVELTY CHECK DOCUMENTATION (PART 2)
- Literature/web novelty queries run before adding the new experiment included:
  - "weight decay batch size interaction neural networks"
  - "arxiv weight decay batch size interaction deep learning"
  - "Scheduled Weight Decay paper arxiv 2021"
  - "site:arxiv.org weight decay label smoothing interaction"
- Novelty rationale used in-session: test the specific interaction prediction (late-WD gain depending on the gradient-noise regime) rather than only main-effect WD timing or batch-size effects.

KEY SCIENTIFIC FINDINGS
1. The validation pass produced no new payload-backed scientific reversal: sampled credited results lacked accessible upload JSON payloads, so this session's conclusions are operational and credit-focused.
2. Compute-fairness crediting remains necessary at scale: 6,985 results were credited under the cap, with mixed success/failure outcomes and a high prevalence of missing payloads in the audited sample.
3. Active high-failure prefixes did not present clear evidence for a blanket abort in this pass; the conservative non-abort policy avoided unnecessary disruption.
4. Part 2 introduced a new mechanistic experiment (wd_batchnoise_interaction) testing whether the late-WD benefit is stronger in higher-noise small-batch training, extending the prior ICP-focused lines with an interaction test.
5. CPU deployment successfully expanded coverage (2,937 new CPU WUs), while GPU deployment metrics were not finalized in the captured run log and should be re-run and confirmed in the next research pass.
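For reference, the per-result credit tiers applied in Part 1 can be sketched as a single function. This is a minimal sketch of the tier table recorded in the log, not the validator's actual code; the function name and parameters are hypothetical, and the 10,000 session-credit cap would be enforced separately by the crediting loop.

```python
def credit_for_result(run_seconds: float, success: bool) -> int:
    """Per-result credit tiers from this session's credit model (runtime in seconds)."""
    if success:
        if run_seconds >= 900:
            return 3
        if run_seconds >= 120:
            return 2
        return 1
    # Failed/error results still earn compute-fairness credit for time spent.
    return 2 if run_seconds >= 900 else 1
```

Under these tiers, a 1,000-second successful result earns 3 credits, while the same runtime on a failed result earns 2.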