AXIOM BOINC SESSION LOG (PART 3 SAVE)
Session timestamp: 2026-03-04 03:36:11 -07:00
Sources: validate_2026-03-04_0327.txt, run_2026-03-03_1352.log, findings_summary.txt

SCOPE
- Consolidated Part 1 (validation/credit/cleanup) and Part 2 (deployment/research/design) outcomes into one publishable session report.

PART 1: VALIDATION, CREDIT, AND CLEANUP SUMMARY
- Results reviewed/credited (summarized by type):
  - Core pass reviewed targeted completed experiment rows across the WD family and follow-up lines, with conservative time-tiered crediting.
  - Backlog pass credited 2,823 completed-but-uncredited rows (predominantly outcome=6 timeout/abort completions).
  - Final catch-up pass credited 190 additional completed-but-uncredited rows.
- Total rows credited this session: 3,069
- Total credit awarded this session: 1,277.30 (within the 10,000 cap)
- Top credited users in this session's ID range: Amapola +909.70, Orange Kid +260.00, makracz +40.50, ChelseaOilman +33.50
- Stuck/broken cleanup:
  - Dead-host stuck-task aborts (>12 h running, >6 h silent): 0
  - Hard-ceiling aborts (>48 h running): 0
  - Broken-prefix fail-rate auto-abort: none triggered at check time
- Website counters updated:
  - credited_count.txt = 3247
  - total_results_count.txt = 254
- Cutoff state after Part 1: uncredited completed experiment rows = 0; uncredited completed success rows = 0

PART 2: DEPLOYMENT AND RESEARCH SUMMARY
- Retirement-guard pass executed against over-seeded lines; no unsent tasks found to abort (ABORT_TOTAL=0).

CPU DEPLOYMENT
- CPU deployment run completed with host-targeted queue fill:
  - CPU_HOSTS_SEEN = 81
  - CPU_SKIPPED_LOW_RAM (<6 GB) = 2
  - CPU_WU_CREATED = 2937
- CPU scripts deployed:
  - wd_batchnoise_interaction.py (newly designed this session)
  - wd_labelsmooth_interaction.py (active interaction line)
- Targeting policy: fill toward roughly 3x queued CPU tasks per eligible active host, with duplicate checks and per-host balancing.
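The host-targeted queue-fill policy can be sketched as follows. This is a minimal illustration, not the deployed script: the host dict fields (`ram_gb`, `cpus`, `queued`, `eligible_experiments`) and the function name are hypothetical stand-ins for whatever the real deployment tooling uses.

```python
def plan_cpu_queue_fill(hosts, target_multiplier=3, min_ram_gb=6):
    """Plan per-host workunit creation: top each eligible host up
    toward ~target_multiplier queued tasks per CPU, skipping low-RAM
    hosts and never re-queueing an experiment already on that host.

    `hosts` is a list of dicts with hypothetical fields:
      id, ram_gb, cpus, queued (experiment names), eligible_experiments.
    Returns (plan, skipped_low_ram) where plan maps host id -> list of
    experiment names to create workunits for.
    """
    plan = {}
    skipped_low_ram = 0
    for host in hosts:
        if host["ram_gb"] < min_ram_gb:
            skipped_low_ram += 1  # mirrors CPU_SKIPPED_LOW_RAM
            continue
        target = target_multiplier * host["cpus"]
        deficit = max(0, target - len(host["queued"]))
        # Duplicate check: only experiments not already queued here
        candidates = [e for e in host["eligible_experiments"]
                      if e not in host["queued"]]
        plan[host["id"]] = candidates[:deficit]
    return plan, skipped_low_ram
```

Per-host balancing falls out of computing each host's deficit independently rather than draining a global list, so already-full hosts receive nothing while empty hosts are filled to the same per-CPU target.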
GPU DEPLOYMENT CHECKPOINT
- The GPU deployment command in the run log was interrupted before its final summary lines were printed.
- An independent DB checkpoint (by workunit naming on the server) indicates active GPU deployment for:
  - wd_curvature_trigger_gpu.py: 737 workunits, 85 hosts
  - wd_timing_scale_gpu.py: 731 workunits, 84 hosts
- GPU checkpoint totals from this snapshot:
  - GPU hosts represented: 85
  - GPU workunits represented: 1,468
  - GPU scripts: wd_curvature_trigger_gpu.py, wd_timing_scale_gpu.py

NEW EXPERIMENT DESIGN + NOVELTY CHECK DOCUMENTATION
- Newly added experiment script: wd_batchnoise_interaction.py
  - Hypothesis: late weight decay improves generalization more under high-noise (small-batch) training than under low-noise (large-batch) training.
  - Implementation notes: numpy-only; seed derived from experiment_name; run_duration-driven repeated-trial loop; outputs aggregated interaction metrics.
- Novelty/literature search queries captured in the Part 2 run log:
  - "weight decay batch size interaction neural networks"
  - "arxiv weight decay batch size interaction deep learning"
  - "site:arxiv.org weight decay label smoothing interaction"
  - "site:arxiv.org batch size weight decay generalization"
  - "arxiv 1711.05101 decoupled weight decay regularization"
- Novel angle retained for deployment: an explicit interaction-mechanism test (late WD timing x gradient-noise regime) rather than a single-factor WD schedule comparison.

KEY SCIENTIFIC FINDINGS
1. No new evidence from this session overturns the prior inverse critical-period / late-WD timing signal.
2. The dominant operational signal remains reliability pressure (large outcome=6 completion waves), not a contradictory scientific trend.
3. Missing upload JSON artifacts persist even among successful completions, limiting payload-level interpretation and underscoring the need for ingestion-path debugging.
4. The science queue was pivoted toward mechanism/interaction tests (batch-noise interaction, label-smoothing interaction, curvature-triggered GPU timing) rather than further over-seeding of settled WD families.

NOTES
- This report intentionally omits cumulative credited result-ID lists; database state is the source of truth.
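The implementation notes for wd_batchnoise_interaction.py (numpy-only, seed derived from experiment_name, run_duration-driven repeated-trial loop, aggregated output) can be sketched as below. The function names and the trial payload are illustrative assumptions, not the deployed script.

```python
import hashlib
import time

import numpy as np


def derive_seed(experiment_name: str) -> int:
    """Derive a stable 32-bit RNG seed from the experiment name,
    so every host running the same experiment gets the same stream."""
    digest = hashlib.sha256(experiment_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big")


def run_trials(experiment_name: str, run_duration_s: float, trial_fn):
    """Repeat trial_fn until run_duration_s elapses, then aggregate.

    trial_fn(rng) -> float is a hypothetical stand-in for one
    batch-noise interaction trial; the real script would compute an
    interaction metric here.
    """
    rng = np.random.default_rng(derive_seed(experiment_name))
    deadline = time.monotonic() + run_duration_s
    results = []
    while time.monotonic() < deadline:
        results.append(trial_fn(rng))
    # Aggregated output: trial count plus a summary statistic
    return {"experiment": experiment_name,
            "n_trials": len(results),
            "mean": float(np.mean(results))}
```

Hashing the name rather than using wall-clock time keeps runs reproducible across hosts, while the elapsed-time loop lets run_duration (not a fixed trial count) set how much work each task does.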