AXIOM BOINC SESSION LOG
Session: 2026-03-03 17:06 (America/Denver)
Workflow: Part 3 - Consolidated Save/Upload (from Part 1 + Part 2 outputs)

PART 1 SUMMARY (VALIDATION / CREDIT / CLEANUP)
- Results reviewed: 583 uncredited successful rows (all 583 had experiment_result payloads) and a 400-row failed/error sample (12 with experiment_result payloads, 388 missing payload files).
- Credit awarded (final capped pass): 2,400 results for a total credit of 9,998 (within the 10,000 cap).
- Credited outcomes: success=2,400, failed/error=0.
- Credited ID span: 1,517,037 to 1,529,339.
- Table updates: result rows 2,400; user rows 9; host rows 16.
- Per-user additions: ChelseaOilman +9,134; Steve Dodd +498; kotenok2000 +228; WTBroughton +70; Dirk Broer +28; Armin Gips +14; Manuel Stenschke +12; amazing +8; dthonon +6.
- Counter updates: credited_count 33,016 -> 35,416 (+2,400); total_results_count 54,225 -> 54,262 (+37).
- Post-session backlog snapshot: uncredited success 5,682; uncredited failed/error 71,261.

CLEANUP / BROKEN TASK ACTIONS
- Dead-host >12h running cleanup: 0 tasks aborted.
- Hard >48h running cleanup: 0 tasks aborted.
- Broken-experiment blanket aborts: none applied in the validation pass.

PART 2 SUMMARY (RESEARCH / DEPLOYMENT)
- Retirement recheck ran; ABORT_TOTAL=0 (no new unsent rows to retire at run time).
- CPU deployment completed in the run log: CPU_HOSTS_SEEN=81, CPU_SKIPPED_LOW_RAM=2, CPU_WU_CREATED=2937.
- CPU experiments deployed: wd_batchnoise_interaction.py and wd_labelsmooth_interaction.py.
- CPU live checkpoint (current queued/running snapshot):
  wd_batchnoise_interaction: 571 WUs across 17 hosts
  wd_labelsmooth_interaction: 497 WUs across 16 hosts
- GPU deployment pass was initiated in Part 2 with scripts wd_curvature_trigger_gpu.py and wd_timing_scale_gpu.py; run-log capture ended during that step.

GPU CHECKPOINT
- GPU hosts with queued/running WUs for the active GPU scripts: 10 hosts total.
- GPU queued/running WUs for the active GPU scripts: 104 total.
- By script:
  wd_curvature_trigger_gpu.py: 98 WUs across 10 hosts
  wd_timing_scale_gpu.py: 6 WUs across 4 hosts

NEW EXPERIMENT DESIGN + NOVELTY CHECK DOCUMENTATION
- New experiment script added in Part 2: wd_batchnoise_interaction.py.
- Hypothesis: late weight decay provides a larger generalization gain in high-gradient-noise (small-batch) training than in low-noise (large-batch) training.
- Novelty-check searches recorded in the run log:
  1) weight decay batch size interaction neural networks
  2) arxiv weight decay batch size interaction deep learning
  3) Scheduled Weight Decay paper arxiv 2021
  4) site:arxiv.org weight decay label smoothing interaction
  5) site:arxiv.org adaptive weight decay deep neural networks
  6) site:arxiv.org batch size weight decay generalization
  7) "weight decay" "batch size" "schedule" neural networks
  8) arxiv 1711.05101 decoupled weight decay regularization
- Novel angle used for this line: an explicit interaction test (timing x noise regime) rather than another single-factor weight-decay schedule sweep.

KEY SCIENTIFIC FINDINGS
1. Reviewed successful rows in this validation pass were consistently analyzable (583/583 with experiment_result payloads), supporting continued extraction of scientific signal from the current success cohort.
2. This credited tranche remained dominated by older memdyn/featcomp cohorts and does not contradict established inverse-critical-period conclusions for weight-decay timing.
3. The failed/error legacy backlog still shows a heavy missing-payload concentration (388/400 in the sampled failed rows), indicating retention/path-aging effects rather than a newly emerged single crash signature.
4. Part 2 introduced and seeded a new mechanistic interaction line (wd_batchnoise_interaction) that specifically tests whether late-WD gains depend on the gradient-noise regime; this is a targeted extension beyond prior main-effect-only WD timing checks.
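The timing x noise interaction design described above can be sketched as a 2x2 factorial with replicate seeds, plus the single contrast that tests the hypothesis. This is an illustrative sketch only: the onset epochs, batch sizes, seed count, and function names are assumptions, not the actual contents of wd_batchnoise_interaction.py.

```python
from itertools import product

# Illustrative 2x2 factorial: weight-decay onset (early vs late) crossed
# with gradient-noise regime (small vs large batch). All values assumed.
WD_ONSETS = {"early": 0, "late": 60}              # epoch at which WD turns on
BATCH_SIZES = {"high_noise": 32, "low_noise": 1024}
SEEDS = range(3)                                  # replicate seeds per cell

def build_conditions():
    """Enumerate one work-unit spec per (onset, noise regime, seed) cell."""
    conditions = []
    for (onset_name, onset_epoch), (regime, bs), seed in product(
        WD_ONSETS.items(), BATCH_SIZES.items(), SEEDS
    ):
        conditions.append({
            "wd_onset": onset_name,
            "wd_onset_epoch": onset_epoch,
            "noise_regime": regime,
            "batch_size": bs,
            "seed": seed,
        })
    return conditions

def interaction_contrast(cell_means):
    """Interaction effect from per-cell mean test accuracies:
    (late - early) gain under high noise minus (late - early) gain
    under low noise. A positive value supports the hypothesis."""
    return (
        (cell_means[("late", "high_noise")] - cell_means[("early", "high_noise")])
        - (cell_means[("late", "low_noise")] - cell_means[("early", "low_noise")])
    )
```

A main-effect-only sweep would report the two (late - early) gains separately; the single contrast above is what makes this an interaction test rather than another schedule sweep.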
NOTES
- This log intentionally omits cumulative credited result-ID inventories; authoritative credit state is in the database.
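The "final capped pass" arithmetic from Part 1 (2,400 results credited for 9,998 total, under a 10,000-credit session cap) can be sketched as a greedy pass over candidate rows in ID order. The row shape, ordering policy, and skip-versus-stop behavior here are illustrative assumptions, not the project's actual crediting code.

```python
def capped_credit_pass(rows, cap=10_000):
    """Award credit to candidate result rows in ascending result-ID order,
    skipping any row whose claim would push the session total past `cap`.

    `rows` is an iterable of (result_id, claimed_credit) pairs; this row
    shape is an assumption for illustration. Returns (credited_ids, total).
    """
    credited, total = [], 0
    for result_id, claim in sorted(rows):
        if total + claim > cap:
            # Skip rather than stop: a later, smaller claim may still fit
            # under the cap, which is how a pass can land just below it.
            continue
        credited.append(result_id)
        total += claim
    return credited, total
```

Under this sketch, a session total like 9,998 against a 10,000 cap simply means the next candidate claims would have exceeded the remaining headroom.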