Axiom BOINC Session Log - Part 3
Save/Upload Timestamp: 2026-03-04 10:09 (America/Denver)

SESSION SUMMARY
- Combined summary from the Part 1 validation/credit pass and the Part 2 research/deployment pass.
- Validation completed and credits awarded on the live DB; the research pass added and deployed WD-interaction experiments.

PART 1 - RESULTS REVIEWED AND CREDIT AWARDED
- Total credit awarded this session: 5880 (session cap 10000; compliant).
- Total completed rows credited: 460 (all outcome=1 in credited batches).
- Credited mix: CPU (appid=1) = 441, GPU (appid=2) = 19.
- Website counters after pass: credited_count.txt=1613, total_results_count.txt=1613.
- Per-user increments:
  - ChelseaOilman: +4563
  - Steve Dodd: +1164
  - _Scandinavian_: +38
  - Orange Kid: +27
  - makracz: +26
  - Vato: +25
  - marmot: +24
  - vanos0512: +13

STUCK/BROKEN TASK CLEANUP
- Dead-host cleanup (>12 h running and >6 h host silence): 0 aborted.
- Hard >48 h running-task cleanup: 0 aborted.
- Broad abort of broken experiments: none applied (no qualifying active multi-host crash pattern in the screening window).

PART 2 - EXPERIMENT DEPLOYMENT

CPU DEPLOYMENT
- CPU deployment pass completed.
- Host targeting policy: 3x CPU queue target per active host; hosts with <6 GB RAM skipped.
- CPU hosts seen: 81
- CPU hosts skipped for RAM: 2
- CPU workunits deployed: 2937
- CPU scripts used in deployment:
  - wd_batchnoise_interaction.py
  - wd_labelsmooth_interaction.py

GPU DEPLOYMENT CHECKPOINT
- GPU deployment launched with scripts:
  - wd_curvature_trigger_gpu.py
  - wd_timing_scale_gpu.py
- The run log was interrupted during GPU deployment, so final in-log completion counters were not printed.
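The CPU host-targeting policy above (top each active host up to 3x its CPU count in queued work, skip hosts under 6 GB RAM) can be sketched as follows. This is a minimal illustration, not the deployment script itself; the host fields (`cpus`, `ram_bytes`, `queued`) and the `workunits_to_deploy` helper are hypothetical names.

```python
# Hypothetical sketch of the CPU deployment targeting policy from this session:
# each active host is topped up to 3x its CPU count in queued workunits, and
# hosts with less than 6 GB RAM are skipped. Field names are illustrative.

MIN_RAM_BYTES = 6 * 1024**3   # 6 GB RAM floor from the session policy
QUEUE_FACTOR = 3              # 3x CPU queue target per active host

def workunits_to_deploy(host):
    """Return how many new workunits to queue for one host record."""
    if host["ram_bytes"] < MIN_RAM_BYTES:
        return 0  # skipped for RAM, as with the 2 hosts in this pass
    target = QUEUE_FACTOR * host["cpus"]
    return max(0, target - host["queued"])

hosts = [
    {"id": 1, "cpus": 8, "ram_bytes": 16 * 1024**3, "queued": 5},
    {"id": 2, "cpus": 4, "ram_bytes": 4 * 1024**3, "queued": 0},  # under 6 GB -> skip
]
plan = {h["id"]: workunits_to_deploy(h) for h in hosts}
# plan -> {1: 19, 2: 0}
```

Clamping with `max(0, ...)` keeps hosts that are already over their queue target from receiving negative deployments.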
- Live DB snapshot for deployed GPU prefixes:
  - GPU hosts observed (curvature script): 12 (host IDs: 1, 9, 29, 57, 126, 159, 249, 340, 341, 347, 353, 355)
  - GPU hosts observed (timing-scale script): 11 (host IDs: 1, 9, 57, 126, 159, 249, 299, 341, 347, 353, 355)
  - GPU workunits present (curvature script): 1576
  - GPU workunits present (timing-scale script): 1523
  - GPU total workunits across these two scripts: 3099

NEW EXPERIMENTS DESIGNED + NOVELTY CHECK
- New CPU experiment added in Part 2: wd_batchnoise_interaction.py
  - Hypothesis: late weight-decay gains should be stronger under high-noise (small-batch) training than under low-noise (large-batch) training.
  - Includes an explicit run-duration budget loop and host-derived seeding.
- Existing/newly active mechanism lines reviewed and deployed:
  - wd_labelsmooth_interaction.py
  - wd_curvature_trigger_gpu.py
  - wd_timing_scale_gpu.py
- Novelty-check documentation captured in the run log:
  - Web/arXiv search queries executed (examples):
    1. weight decay batch size interaction neural networks
    2. arxiv weight decay batch size interaction deep learning
    3. site:arxiv.org weight decay label smoothing interaction
    4. Scheduled Weight Decay paper arxiv 2021
  - Decision: proceed with interaction/mechanism-focused variants rather than repeating settled baseline WD lines.

KEY SCIENTIFIC FINDINGS
1. The validated completion flow remains dominated by the delay/ecology/control families, with healthy payload integrity in sampled rows, indicating stable throughput across the active science portfolio.
2. No experiment prefix met the emergency broad-abort criteria (high fail rate plus multi-host active crash evidence), so compute was preserved for healthy queues.
3. The current research direction is now explicitly mechanism-focused: label-smoothing interaction, curvature-triggered timing, and batch-noise interaction were prioritized as the next discriminative tests of WD timing behavior.
4. GPU WD mechanism families are actively deployed across multiple GPU hosts, enabling near-term cross-host comparison of curvature-triggered vs. timing-scale effects.

END-OF-SESSION NOTES
- Uncredited completed success rows at validation cutoff: 1 (live-queue race after the final credit pass).
- No cumulative credited-ID list is retained here; the DB remains the source of truth for credited results.
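The "run-duration budget loop and host-derived seeding" noted for wd_batchnoise_interaction.py might look like the sketch below. The hashing scheme, budget value, and function names are assumptions for illustration, not taken from the actual script.

```python
# Hypothetical sketch, assuming a SHA-256-based seed derivation and a simple
# wall-clock budget loop; not the actual wd_batchnoise_interaction.py code.
import hashlib
import random
import time

def host_seed(host_id: int, workunit_name: str) -> int:
    """Derive a reproducible per-host, per-workunit RNG seed."""
    digest = hashlib.sha256(f"{host_id}:{workunit_name}".encode()).hexdigest()
    return int(digest[:8], 16)

def run_with_budget(budget_seconds: float, step_fn) -> int:
    """Repeat training steps until the wall-clock budget is exhausted."""
    start = time.monotonic()
    steps = 0
    while time.monotonic() - start < budget_seconds:
        step_fn()
        steps += 1
    return steps

# Usage: same host + workunit name always yields the same RNG stream,
# so reruns of a workunit are reproducible across deployment passes.
rng = random.Random(host_seed(341, "wd_batchnoise_0001"))
batch_size = rng.choice([32, 256])  # small-batch (high-noise) vs large-batch arm
```

Deriving the seed from (host ID, workunit name) rather than wall-clock time is what makes cross-host comparisons like the planned curvature-vs-timing-scale analysis repeatable.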