| Field | Value |
|---|---|
| Name | wu_22_g32300_0 |
| Workunit | 274346 |
| Created | 30 Jan 2026, 11:52:49 UTC |
| Sent | 30 Jan 2026, 13:15:44 UTC |
| Report deadline | 6 Feb 2026, 13:15:44 UTC |
| Received | 30 Jan 2026, 18:29:54 UTC |
| Server state | Over |
| Outcome | Success |
| Client state | Done |
| Exit status | 0 (0x00000000) |
| Computer ID | 51 |
| Run time | 13 min 40 sec |
| CPU time | 6 min 1 sec |
| Priority | 0 |
| Validate state | Valid |
| Credit | 5.00 |
| Device peak FLOPS | 4.17 GFLOPS |
| Application version | Axiom Federated AI v2.17 windows_x86_64 |
| Peak working set size | 3.58 GB |
| Peak swap size | 7.50 GB |
| Peak disk usage | 162.44 MB |
<core_client_version>8.0.2</core_client_version>
<![CDATA[
<stderr_txt>
14:33:20 (13864): wrapper (7.7.26016): starting
14:33:20 (13864): wrapper: running ../../projects/axiom.heliex.net/axiom_worker_2.17_windows_x86_64.exe (wu.json result.json)
[CPU] Using NumPy
[Worker] Saved 5 gradient files to contribute folder (gradient sharing)
INFO: Downloading model: https://axiom.heliex.net/pyhelix/model/expert_22_v2.0.bin
INFO: Downloaded and cached model: expert_22_v2.0.bin (42580994 params)
INFO: Found 5 local files to train on.
WARNING: Could not load local files:
============================================================
WARNING: No training data found!
Please add files to your Axiom contribute folder:
C:\Users\thoma\Axiom\contribute
The AI learns from YOUR data - add text, code, docs, etc.
Submitting zero gradient (no model impact).
============================================================
[Worker] Running 30 batches for expert 22...
[Worker] Batch 10/30 complete
[Worker] Batch 20/30 complete
[Worker] Batch 30/30 complete
18:34:12 (13864): axiom_worker_2.17_windows_x86_64.exe exited; CPU time 361.265625
18:34:12 (13864): called boinc_finish(0)
</stderr_txt>
]]>
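
For context on the "Downloaded and cached model" line: the worker fetches the expert's parameter file from the project server once and reuses a local copy on later runs. Below is a minimal sketch of that fetch-and-cache step, assuming a plain HTTP download via urllib and a cache directory named in the code (an assumption; the worker's real cache layout and any integrity checks are not shown in the log). The URL is the one reported above.

```python
import os
import urllib.request

MODEL_URL = "https://axiom.heliex.net/pyhelix/model/expert_22_v2.0.bin"  # URL from the log
CACHE_DIR = os.path.join(os.path.expanduser("~"), ".axiom_cache")        # assumed location


def fetch_model(url: str = MODEL_URL, cache_dir: str = CACHE_DIR) -> str:
    """Return a local path to the model file, downloading it only if not already cached."""
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(local_path):
        # First run on this host: pull the expert weights from the project server.
        urllib.request.urlretrieve(url, local_path)
    return local_path
```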
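
The block of warnings is the data-loading step: the worker scans the local contribute folder, and when none of the files can actually be read it submits an all-zero gradient so the run has no effect on the shared expert, while the 30-batch loop still runs (the batch messages appear after the zero-gradient notice). Here is a minimal sketch of that fallback, assuming hypothetical helper names (load_training_texts, run_batches, compute_update) and a flat NumPy gradient sized to the 42,580,994-parameter expert reported above; the actual worker source is not part of this page.

```python
import glob
import os

import numpy as np

CONTRIBUTE_DIR = r"C:\Users\thoma\Axiom\contribute"  # folder named in the warning
PARAM_COUNT = 42_580_994                             # parameter count reported for expert_22_v2.0.bin
NUM_BATCHES = 30                                     # batches per run, per the log


def load_training_texts(folder: str) -> list[str]:
    """Collect whatever readable text files the user placed in the contribute folder."""
    texts = []
    for path in glob.glob(os.path.join(folder, "*")):
        try:
            with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                texts.append(fh.read())
        except OSError:
            # Mirrors the "Could not load local files" warning: skip unreadable entries.
            continue
    return texts


def run_batches(texts: list[str]) -> np.ndarray:
    """Stand-in for the real training loop; the actual forward/backward pass is not shown."""
    gradient = np.zeros(PARAM_COUNT, dtype=np.float32)
    for batch in range(1, NUM_BATCHES + 1):
        # ... a real batch over a slice of `texts` would accumulate into `gradient` here ...
        if batch % 10 == 0:
            print(f"[Worker] Batch {batch}/{NUM_BATCHES} complete")
    return gradient


def compute_update() -> np.ndarray:
    texts = load_training_texts(CONTRIBUTE_DIR)
    gradient = run_batches(texts)  # the log shows the 30 batches running either way
    if not texts:
        # No usable local data: force an all-zero gradient so the shared model is untouched.
        print("Submitting zero gradient (no model impact).")
        gradient = np.zeros(PARAM_COUNT, dtype=np.float32)
    return gradient
```

A zero gradient is the natural no-op in this kind of federated setup: the result still validates and earns credit, but, as the log itself says, it has "no model impact" when aggregated with contributions from hosts that did supply training data.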