AXIOM BOINC SESSION LOG (PART 3 SAVE)

Session timestamp: 2026-03-04 05:18
Source logs: validate_2026-03-04_0515.txt, run_2026-03-03_1352.log

PART 1 VALIDATION / CREDIT SUMMARY
- Reviewed and credited 254 completed experiment results (bulk-reviewed by experiment families; no cumulative ID list stored here).
- Total credit awarded this session: 1765 (under 10,000 cap).
- Upload payload audit on credited rows: missing=254, experiment_result=0, error=0 (classification logic sketched in the appendix below).
- Per-user additions included: Amapola +1138, ChelseaOilman +231, Orange Kid +231, Steve Dodd +84, plus smaller increments across additional volunteers.

STUCK / BROKEN TASK CLEANUP
- Broken-prefix active-task aborts: oscillatory_roughchannel_lbm_resonance=0, abx_cycle=0, potts_pulse_anneal_resonance=0, spatial_pgg_delay_fatigue=0.
- Stuck-task cleanup: dead-host >12h aborts=0; hard >48h aborts=0.
- Post-session uncredited completed experiment rows at cutoff: 1.
- Website counters updated in Part 1 log: credited_count=1829, total_results_count=1830.

PART 2 DEPLOYMENT / RESEARCH SUMMARY
- Retirement cleanup pass executed before deployment; retirement candidates were inspected and ABORT_TOTAL=0 for unsent over-seeded tasks in that pass.

CPU DEPLOYMENT
- Host-targeted CPU queue fill completed.
- CPU hosts scanned: 81; low-RAM hosts skipped (<6GB): 2.
- CPU workunits created: 2937.
- CPU scripts deployed:
  1. wd_batchnoise_interaction.py
  2. wd_labelsmooth_interaction.py
- Targeting policy from run log: per-host queues filled toward ~3x CPU capacity, with host-bound workunit naming (`..._h` plus replications); see the queue-fill sketch in the appendix below.

GPU DEPLOYMENT CHECKPOINT
- GPU deployment pass started with appid=2 and a per-host target of ~3x GPU queue depth.
- GPU scripts selected for deployment:
  1. wd_curvature_trigger_gpu.py
  2. wd_timing_scale_gpu.py
- The run log confirms the GPU pass launched, but it terminates during the SSH execution block before the final GPU summary counters were printed.
- Checkpoint status from this log capture: final GPU host/workunit counts were not emitted in the tail because logging was interrupted.

NEW EXPERIMENT DESIGN + NOVELTY CHECK DOCUMENTATION
- New CPU experiment added in Part 2:
  1. wd_batchnoise_interaction.py
- Design intent (from script metadata/log text): test whether the late weight-decay benefit is stronger under small-batch gradient-noise conditions.
- Novelty/literature queries documented in run log before deployment:
  1. "weight decay batch size interaction neural networks"
  2. "arxiv weight decay batch size interaction deep learning"
  3. "Scheduled Weight Decay paper arxiv 2021"
  4. "site:arxiv.org weight decay label smoothing interaction"
  5. "site:arxiv.org adaptive weight decay deep neural networks"
  6. "site:arxiv.org batch size weight decay generalization"
  7. "\"weight decay\" \"batch size\" \"schedule\" neural networks"
  8. "arxiv 1711.05101 decoupled weight decay regularization"

KEY SCIENTIFIC FINDINGS
1. No new upload JSON scientific payloads were recovered in the Part 1 credit batch; all 254 credited completed rows were missing payload artifacts at validation time.
2. Infrastructure reliability pattern remains unchanged: recurrent outcome=5/6 with exit_status=0 and empty stderr signatures in the broken families, indicating a persistent packaging/assimilation-side failure mode rather than interpretable model output.
3. The active Part 2 scientific push shifted toward mechanism-level WD hypotheses: small-batch noise interaction (newly added) plus ongoing label-smoothing and GPU timing/curvature tests.
4. Volunteer compute throughput remained high (large CPU refill volume), while data-yield reliability remains the limiting bottleneck for extracting publishable model-level effects.
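
APPENDIX: ILLUSTRATIVE SKETCHES (NOT FROM SOURCE LOGS)

The sketch below shows one way the Part 1 upload-payload audit counters (missing / experiment_result / error) could be produced. It is a minimal illustration only: the upload directory layout, the per-row "name" field, and the JSON key "experiment_result" are assumptions, not the project's actual validator code.

    import json
    from collections import Counter
    from pathlib import Path


    def audit_payloads(credited_rows, upload_dir):
        """Classify each credited row's upload JSON as missing, experiment_result, or error."""
        counts = Counter(missing=0, experiment_result=0, error=0)
        for row in credited_rows:
            path = Path(upload_dir) / f"{row['name']}.json"
            if not path.exists():
                counts["missing"] += 1            # no payload artifact on disk
                continue
            try:
                payload = json.loads(path.read_text())
            except (OSError, json.JSONDecodeError):
                counts["error"] += 1              # unreadable or malformed JSON
                continue
            if "experiment_result" in payload:
                counts["experiment_result"] += 1  # usable scientific payload
            else:
                counts["error"] += 1              # readable JSON without the expected key
        return counts


    if __name__ == "__main__":
        rows = [{"name": "wd_labelsmooth_interaction_h101_r0"}]   # hypothetical row
        print(audit_payloads(rows, "/tmp/uploads"))               # hypothetical directory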
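
The second sketch illustrates the CPU queue-fill targeting policy noted in the run log: top up each eligible host toward ~3x its CPU capacity, skip low-RAM hosts, and bind workunit names to a host. The host record fields, plan_cpu_refill(), and the exact naming pattern (the log only shows "..._h") are assumptions for illustration, not the deployed script.

    TARGET_MULTIPLIER = 3   # fill toward ~3x CPU capacity (from the run log)
    MIN_RAM_GB = 6          # hosts below this threshold were skipped this session


    def plan_cpu_refill(hosts, script_name):
        """Return (workunit_name, host_id) pairs needed to top up each host's queue.

        `hosts` is assumed to be a list of dicts like:
            {"id": 101, "ncpus": 8, "ram_gb": 16, "queued": 5}
        """
        plan = []
        for host in hosts:
            if host["ram_gb"] < MIN_RAM_GB:
                continue                                   # low-RAM hosts skipped
            deficit = max(0, TARGET_MULTIPLIER * host["ncpus"] - host["queued"])
            for rep in range(deficit):
                # Host-bound naming: embed the host id plus a replication index
                # so the workunit can be targeted at that specific host.
                name = f"{script_name}_h{host['id']}_r{rep}"
                plan.append((name, host["id"]))
        return plan


    if __name__ == "__main__":
        demo_hosts = [
            {"id": 101, "ncpus": 4, "ram_gb": 32, "queued": 2},
            {"id": 102, "ncpus": 2, "ram_gb": 4, "queued": 0},  # skipped: <6GB RAM
        ]
        for wu_name, host_id in plan_cpu_refill(demo_hosts, "wd_batchnoise_interaction"):
            print(f"create workunit {wu_name} targeted at host {host_id}")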