Thread 'To Admin about testing versions'

Message boards : Technical Support : To Admin about testing versions
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 155 - Posted: 13 Feb 2026, 12:49:34 UTC

Please stop messing about and putting versions online without testing.

Version 3.87 contains this error:
[DataProducer] Error reading C:\Users\hjhan\Axiom\contribute\seed_data.bin: cannot access local variable 'batches_queued' where it is not associated with a value
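
For context, this Python error (an UnboundLocalError, in Python 3.11+ wording) typically means a variable is assigned only inside a branch or try block that did not execute, and is then read afterwards. A minimal sketch of the pattern, assuming nothing about the actual Axiom source (the function names and values here are hypothetical, only `batches_queued` comes from the log):

```python
def queue_seed_batches(data_ok: bool) -> int:
    # Bug pattern: 'batches_queued' is assigned only inside the branch.
    if data_ok:
        batches_queued = 3
    # If data_ok is False, the next line raises:
    # UnboundLocalError: cannot access local variable 'batches_queued'
    # where it is not associated with a value
    return batches_queued


def queue_seed_batches_fixed(data_ok: bool) -> int:
    batches_queued = 0  # Fix: initialize before any conditional assignment.
    if data_ok:
        batches_queued = 3
    return batches_queued
```

In this sketch the crash would occur whenever the seed-data read fails before the counter is set; initializing the variable up front makes the error path return a sane value instead of raising.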
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 156 - Posted: 13 Feb 2026, 16:13:27 UTC - in response to Message 155.  

New version 3.88. Same error.

Just a single test run would show this.
[VENETO] boboviz

Joined: 4 Feb 26
Posts: 36
Credit: 70,418
RAC: 1,313
Message 159 - Posted: 13 Feb 2026, 21:05:14 UTC - in response to Message 156.  

In reply to Henk Haneveld's message of 13 Feb 2026:
New version 3.88. Same error.

Same here, but the WUs are validated and award some points.
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 162 - Posted: 14 Feb 2026, 7:58:33 UTC - in response to Message 159.  

Well, we are up to version 3.92.
The same error still exists, with a few new ones added on.

Windows 10, CPU and GPU
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 167 - Posted: 15 Feb 2026, 18:48:12 UTC - in response to Message 162.  

And again a new version, 3.93. No fix for the error.

The admin is a fool.
He is so focused on the AI algorithm that he does not take into account that he needs to fix the app that runs that algorithm to get any useful data.
PyHelix
Volunteer moderator
Project administrator

Joined: 23 Jan 26
Posts: 83
Credit: 510,524
RAC: 10,376
Message 204 - Posted: 4 Mar 2026, 5:34:31 UTC - in response to Message 167.  

You are right, and I apologize for the instability during the v3.87-v3.93 period. That was a rough stretch where I was iterating too fast without proper testing between releases. The project has matured significantly since then -- we are now on v6.08 with a stable experiment container platform that has been running reliably. The rapid-fire broken releases will not happen again. I appreciate your patience and the feedback.
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 311 - Posted: 14 Mar 2026, 10:11:13 UTC

Well, you have done it again: putting something online that does not work.

The latest GPU version (6.34) just freezes my system.
Drago75

Joined: 30 Jan 26
Posts: 18
Credit: 1,148,922
RAC: 80,574
Message 312 - Posted: 14 Mar 2026, 10:48:49 UTC
Last modified: 14 Mar 2026, 10:51:41 UTC

On my Linux Mint host with an RTX 5070 and Nvidia driver 570.211.01, all GPU tasks of version 6.34 crash immediately with the following error message. They do get flagged as ended successfully, though.

[EXP] Execution error: CompileException: nvrtc: error: failed to load builtins for compute_120.
Traceback (most recent call last):
  File "cupy/cuda/compiler.py", line 731, in compile
    nvrtc.compileProgram(self.ptr, options)
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 125, in cupy_backends.cuda.libs.nvrtc.compileProgram
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 138, in cupy_backends.cuda.libs.nvrtc.compileProgram
  File "cupy_backends/cuda/libs/nvrtc.pyx", line 53, in cupy_backends.cuda.libs.nvrtc.check_status
cupy_backends.cuda.libs.nvrtc.NVRTCError: NVRTC_ERROR_BUILTIN_OPERATION_FAILURE (7)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "axiom_streaming_client.py", line 2713, in _run_experiment_mode
  File "experiment_script.py", line 478, in <module>
    main()
  File "experiment_script.py", line 438, in main
    last_min_eig, last_score, last_peak_sector_mass, last_shell_fraction, last_core_fraction = measure_mode(
  File "experiment_script.py", line 335, in measure_mode
    operator = build_operator(n_active, rho, rng_seed)
  File "experiment_script.py", line 273, in build_operator
    idx = xp.arange(n_active, dtype=xp.int32)
  File "cupy/_creation/ranges.py", line 60, in arange
    _arange_ufunc(typ(start), typ(step), ret, dtype=dtype)
  File "cupy/_core/_kernel.pyx", line 1374, in cupy._core._kernel.ufunc.__call__
  File "cupy/_core/_kernel.pyx", line 1401, in cupy._core._kernel.ufunc._get_ufunc_kernel
  File "cupy/_core/_kernel.pyx", line 1082, in cupy._core._kernel._get_ufunc_kernel
  File "cupy/_core/_kernel.pyx", line 94, in cupy._core._kernel._get_simple_elementwise_kernel
  File "cupy/_core/_kernel.pyx", line 82, in cupy._core._kernel._get_simple_elementwise_kernel_from_code
  File "cupy/_core/core.pyx", line 2377, in cupy._core.core.compile_with_cache
  File "cupy/cuda/compiler.py", line 536, in _compile_module_with_cache
    return _compile_with_cache_cuda(
  File "cupy/cuda/compiler.py", line 580, in _compile_with_cache_cuda
    base = _preprocess('', options, arch, backend)
  File "cupy/cuda/compiler.py", line 473, in _preprocess
    result, _ = prog.compile(options)
  File "cupy/cuda/compiler.py", line 750, in compile
    raise CompileException(log, self.src, self.name, options,
cupy.cuda.compiler.CompileException: nvrtc: error: failed to load builtins for compute_120.
PyHelix
Volunteer moderator
Project administrator

Joined: 23 Jan 26
Posts: 83
Credit: 510,524
RAC: 10,376
Message 313 - Posted: 14 Mar 2026, 11:56:15 UTC - in response to Message 312.  

Thanks for the detailed error logs, they were a huge help in tracking this down. The "failed to load builtins for compute_120" error was happening because the GPU binary shipped NVRTC builtins from CUDA 12.4, which doesn't support the Blackwell architecture (compute_120) on RTX 50-series cards. I've rebuilt the GPU binary (v6.35) with CUDA 12.8 builtins that fully support compute_120 and confirmed it is working on an RTX 5070 running Linux with driver 570. Your client should pick up v6.35 automatically on the next update -- you can force it with Update in your BOINC manager.
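
NVRTC can only target architectures its CUDA toolkit knows about, so a toolkit-version check captures the failure mode described above. A minimal sketch encoding only the single fact from this thread, that compute_120 (Blackwell, RTX 50-series) needs CUDA 12.8 or newer (the function name and the assumption about older architectures are illustrative, not from the project's code):

```python
def nvrtc_supports_arch(cuda_version: tuple[int, int], compute: int) -> bool:
    """Illustrative check: can this CUDA toolkit's NVRTC target the arch?

    Encodes only what this thread establishes: compute_120 (Blackwell)
    requires CUDA 12.8+. We assume older architectures are covered by
    either toolkit version discussed here.
    """
    if compute >= 120:
        return cuda_version >= (12, 8)
    return True
```

Under this sketch, the broken v6.34 build corresponds to `nvrtc_supports_arch((12, 4), 120)` being `False`, while the rebuilt v6.35 corresponds to `nvrtc_supports_arch((12, 8), 120)` being `True`.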
PyHelix
Volunteer moderator
Project administrator

Joined: 23 Jan 26
Posts: 83
Credit: 510,524
RAC: 10,376
Message 314 - Posted: 14 Mar 2026, 11:56:28 UTC - in response to Message 311.  

Sorry about the freeze. Could you share some more details so I can look into it? Specifically -- what OS are you running, what NVIDIA driver version, and does the freeze happen as soon as a GPU task starts or after it's been running for a while? Any error messages in your BOINC event log before the freeze would be really helpful too.
Henk Haneveld

Joined: 30 Jan 26
Posts: 20
Credit: 18,581
RAC: 664
Message 315 - Posted: 14 Mar 2026, 12:37:28 UTC - in response to Message 314.  
Last modified: 14 Mar 2026, 12:54:35 UTC

In reply to PyHelix's message of 14 Mar 2026:
Sorry about the freeze. Could you share some more details so I can look into it? Specifically -- what OS are you running, what NVIDIA driver version, and does the freeze happen as soon as a GPU task starts or after it's been running for a while? Any error messages in your BOINC event log before the freeze would be really helpful too.

I run Windows 10 with an Nvidia GTX 750 Ti on driver version 581.57.

I can't find any error files. I think the WU starts running and then stops, freezing my system. It is possible that the lockup occurred on the GPU card and that the CPU was still running, but all display output was frozen and I had no response from keyboard or mouse. I did notice some delay in display response before the freeze. I could only get control back with a hard system reboot.

Edit: I only run one WU, as the card has limited memory.
That may be a factor. I have seen that the memory is taken into account at the start of the WU, but is it possible that with the recent changes the memory use goes beyond the limit of the card?
Drago75

Joined: 30 Jan 26
Posts: 18
Credit: 1,148,922
RAC: 80,574
Message 316 - Posted: 14 Mar 2026, 12:46:52 UTC - in response to Message 315.  

How many GPU tasks are you running at once? I had that behavior when I ran 10 at once, which made my whole system sluggish. After reducing the number it ran fine. The new GPU tasks utilize the GPU a lot more than previous versions. Better to reduce to one task at a time and slowly increase if possible.
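
For anyone wanting to cap concurrent tasks as suggested above, the standard BOINC mechanism is an `app_config.xml` in the project's directory under the BOINC data folder, where `gpu_usage` of 1.0 reserves a whole card per task. A sketch (the app name `axiom_gpu` is a guess; check the actual app name in your BOINC event log):

```xml
<app_config>
  <app>
    <name>axiom_gpu</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, apply it with Options > Read config files in BOINC Manager (or restart the client).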
Drago75

Joined: 30 Jan 26
Posts: 18
Credit: 1,148,922
RAC: 80,574
Message 317 - Posted: 14 Mar 2026, 12:48:37 UTC
Last modified: 14 Mar 2026, 12:53:18 UTC

PyHelix, version 6.35 seems to be running fine on my RTX 5070, but the tasks terminate after 14:34 because of a set time limit.

197 (0x000000C5) EXIT_TIME_LIMIT_EXCEEDED

The tasks now use 100% of my GPU's resources.
PyHelix
Volunteer moderator
Project administrator

Joined: 23 Jan 26
Posts: 83
Credit: 510,524
RAC: 10,376
Message 318 - Posted: 14 Mar 2026, 18:40:18 UTC - in response to Message 317.  

Thanks for the update Drago75! Good to hear the RTX 5070 is running fine now.

The time limit issue was on my end — the BOINC elapsed time calculation uses GPU FLOPS, and with cards as fast as the RTX 5070 the limit ended up way shorter than intended. I've bumped it so all GPU hosts now have at least 30 minutes, which should be plenty for any experiment.
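
The arithmetic behind that abort is simple: BOINC kills a task once elapsed time exceeds the workunit's floating-point-operation bound divided by the host's estimated FLOPS, so a faster FLOPS estimate means a shorter wall-clock allowance. An illustrative calculation (all numbers are hypothetical, not the project's actual settings):

```python
# BOINC's abort threshold is roughly:
#   time_limit = rsc_fpops_bound / estimated_flops
rsc_fpops_bound = 5e13    # hypothetical workunit bound (FLOPs)

slow_gpu_flops = 5e10     # ~50 GFLOPS estimate -> generous limit
fast_gpu_flops = 5e12     # ~5 TFLOPS estimate  -> tight limit

slow_limit_s = rsc_fpops_bound / slow_gpu_flops   # 1000 seconds
fast_limit_s = rsc_fpops_bound / fast_gpu_flops   # 10 seconds
```

With the same bound, the card estimated 100x faster gets a limit 100x shorter, which is why only the fastest hosts hit EXIT_TIME_LIMIT_EXCEEDED.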

Worth noting — even the tasks that got cut short did real work. The experiments use iterative deepening so they're producing meaningful results from the first few seconds onward. And you still got credit for those tasks.

The fix is live now so your next batch should run to completion.

© 2026 Axiom Project