AXIOM BOINC SESSION RESULTS LOG
Session: Part 3 Save & Upload (consolidated Part 1 + Part 2)
Timestamp: 2026-03-04 05:34 (America/Denver)

PART 1 VALIDATION / CREDIT SUMMARY
- Validation log source: validate_2026-03-04_0529.txt
- Completed uncredited experiment rows reviewed at audit snapshot: 308
- Credited rows applied this session: 308
- Total credit awarded this session: 2122 (safety cap PASS, <= 10000)
- File audit for reviewed rows: experiment_result JSON=0, error JSON=0, missing upload artifacts=308, invalid JSON=0
- Non-success rows credited minimally for compute attempts: 4 rows

PER-USER CREDIT SUMMARY (session increment)
- Highest session credit increments: Amapola +1141, Steve Dodd +497, ChelseaOilman +228, Orange Kid +161
- Additional users credited this run: WTBroughton, zombie67 [MM], Vato, PyHelix, _Scandinavian_, vanos0512, [DPC] hansR, Raimund Barbeln, Henk Haneveld

CLEANUP ACTIONS (PART 1)
- Broken-prefix abort sweep (server_state IN (2,4)):
  - exp_oscillatory_roughchannel_lbm_resonance*: aborted 0
  - exp_abx_cycle*: aborted 0
  - exp_potts_pulse_anneal_resonance*: aborted 0
  - exp_spatial_pgg_delay_fatigue*: aborted 0
- Stuck-task cleanup:
  - >12h running and host silent >6h: aborted 0
  - >48h hard ceiling: aborted 0

WEBSITE COUNTERS (PART 1)
- credited_count.txt incremented by 308 -> 2137
- total_results_count.txt updated -> 2138
- Remaining uncredited completed exp success rows after run: 9

PART 2 RESEARCH / DEPLOYMENT SUMMARY
- Auto-review log source: run_2026-03-03_1352.log
- Mandatory retirement pass executed; no new unsent retirements triggered (ABORT_TOTAL=0)

CPU DEPLOYMENT
- CPU deployment scripts: wd_batchnoise_interaction.py, wd_labelsmooth_interaction.py
- Host-targeted queue fill strategy: active hosts seen=81, low-RAM hosts skipped (<6GB)=2
- CPU workunits created in Part 2 run: 2937
- Targeting policy: each eligible CPU host filled toward ~3x CPU queue depth using duplicate checks and replication suffixing
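The CPU targeting policy above (fill each eligible host toward ~3x its CPU queue depth, skipping low-RAM hosts, with duplicate checks and replication suffixing) can be sketched as follows. This is a minimal illustration, not the actual wd_* deployment code: the function names, host dict fields, and naming scheme are all assumptions.

```python
# Sketch of the host-targeted queue-fill policy described above.
# All names (fill_host_queues, create_workunit, host fields) are
# illustrative assumptions, not the project's real deployment code.

TARGET_MULTIPLE = 3      # fill toward ~3x each host's CPU queue depth
MIN_RAM_GB = 6           # low-RAM hosts are skipped entirely

def fill_host_queues(hosts, existing_names, make_name, create_workunit):
    """hosts: list of dicts with 'id', 'ram_gb', 'ncpus', 'queued'.
    Returns the number of workunits created."""
    created = 0
    for host in hosts:
        if host["ram_gb"] < MIN_RAM_GB:
            continue  # skip low-RAM hosts (<6GB in this session)
        target = TARGET_MULTIPLE * host["ncpus"]
        deficit = max(0, target - host["queued"])
        replica = 0
        while deficit > 0:
            # replication suffixing: bump the _rN suffix until unused
            name = make_name(host["id"], replica)
            replica += 1
            if name in existing_names:
                continue  # duplicate check: never reissue an existing name
            existing_names.add(name)
            create_workunit(name, host["id"])
            created += 1
            deficit -= 1
    return created
```

A host with 2 CPUs and 1 queued task would receive 5 new workunits under this policy (target 6, deficit 5), while a 4GB host would receive none.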
GPU DEPLOYMENT
- GPU deployment scripts: wd_curvature_trigger_gpu.py, wd_timing_scale_gpu.py
- Part 2 run log shows the GPU pass launched, but the terminal capture was interrupted before the final summary was printed.
- GPU checkpoint (server state at Part 3 consolidation):
  - Distinct GPU hosts carrying these script families: 12
  - Total GPU workunits present for these script families: 2333
  - Currently queued/running/in-progress GPU results for these script families: 19
  - Hosts currently showing queued GPU tasks for these families: 355, 57, 1, 159, 339, 341, 9, 29, 287, 299

NEW EXPERIMENTS DESIGNED (PART 2)
1) wd_batchnoise_interaction.py (new CPU experiment, created and syntax-validated)
   - Hypothesis: late weight decay benefits are stronger under high gradient-noise (small-batch) training than under low-noise (large-batch) training.
   - Novelty-check search queries logged in run session:
     - "weight decay batch size interaction neural networks"
     - "arxiv weight decay batch size interaction deep learning"
     - "Scheduled Weight Decay paper arxiv 2021"
     - "site:arxiv.org batch size weight decay generalization"
     - "\"weight decay\" \"batch size\" \"schedule\" neural networks"
     - "arxiv 1711.05101 decoupled weight decay regularization"
   - Novel angle documented: a direct interaction-effect test (difference in late-WD gain between small-batch and large-batch regimes), not a main-effect-only WD study.
2) Existing interaction/mechanism lines retained in deployment mix
   - wd_labelsmooth_interaction.py (interaction line)
   - wd_curvature_trigger_gpu.py and wd_timing_scale_gpu.py (GPU mechanism/timing line)

KEY SCIENTIFIC FINDINGS
1. No new payload-level scientific result JSONs were recoverable in the Part 1 review (0/308 recoverable artifacts), so no new confirmed model-level empirical finding was added from this validation batch.
2. The dominant scientific bottleneck remains infrastructure reliability (missing upload artifacts), not a lack of volunteer compute completion.
3. Part 2 introduced and deployed a novel interaction hypothesis test (WD timing x batch-noise) designed to determine whether late-WD generalization gains are conditional on stochastic gradient noise level.
4. GPU mechanism lines (curvature-trigger and timing-scale WD) remain active across the multi-host deployment, preserving the pathway to causal mechanism evidence once upload artifacts are reliably retained.
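The Part 1 file-audit buckets referenced throughout (experiment_result JSON, error JSON, missing upload artifact, invalid JSON) can be reproduced with a classifier along these lines. This is a sketch under assumptions: the payload keys ("experiment_result", "error") and helper names are illustrative, not the project's actual result schema.

```python
import json
import os

# Minimal sketch of the Part 1 file-audit bucketing. The payload keys
# checked below are assumptions about the upload layout, not the
# project's confirmed schema.

def audit_artifact(path):
    """Classify one result row's upload artifact into an audit bucket."""
    if path is None or not os.path.exists(path):
        return "missing"          # upload artifact never arrived
    try:
        with open(path) as f:
            payload = json.load(f)
    except (json.JSONDecodeError, OSError):
        return "invalid"          # artifact present but not parseable JSON
    if "experiment_result" in payload:
        return "experiment_result"  # usable scientific payload
    if "error" in payload:
        return "error"              # structured client-side error report
    return "invalid"                # JSON, but neither expected shape

def audit_rows(paths):
    """Tally audit buckets over all reviewed rows, as in the Part 1 summary."""
    counts = {"experiment_result": 0, "error": 0, "missing": 0, "invalid": 0}
    for p in paths:
        counts[audit_artifact(p)] += 1
    return counts
```

Under this bucketing, the session's 0/0/308/0 audit line corresponds to every reviewed row falling into the "missing" bucket, which is exactly the infrastructure-reliability bottleneck named in finding 2.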