Thread '160GB of RAM for a single Work Unit !'

Message boards : Suggestions : 160GB of RAM for a single Work Unit !
Jean-Luc

Joined: 8 Mar 26
Posts: 11
Credit: 12,663,836
RAC: 817,049
Message 274 - Posted: 11 Mar 2026, 14:29:10 UTC

I've noticed that running a single WU (Work Unit) with a name starting with "exp_nk_walk_scaling_" requires 165 GB of RAM.
That's enormous!

So I have a suggestion.
In our account settings, under "Axiom Distributed AI Preferences," would it be possible to add an option to select which types of WU we want to compute and, more importantly, to show how much RAM each WU requires, so we know which WUs are compatible with our computers?
The PrimeGrid project has such a system, which indicates the L3 cache required for each type of WU.
It would be the same information, but for RAM.

The problem is that all types of WUs are mixed together.
I have 128 threads and 256 GB of RAM.
If, unfortunately, two "exp_nk_walk_scaling_" WUs happen to start simultaneously, my computer crashes and the other 126 WUs (other than "exp_nk_walk_scaling_") cannot be computed.
This is what happened last night while I was sleeping!

Another solution: allow only one "exp_nk_walk_scaling_" WU to run at a time.
ID: 274
PyHelix
Volunteer moderator
Project administrator

Joined: 23 Jan 26
Posts: 83
Credit: 504,548
RAC: 9,567
Message 277 - Posted: 11 Mar 2026, 19:16:33 UTC
Last modified: 11 Mar 2026, 19:16:33 UTC

Hi Jean-Luc,

Thanks for flagging this — you're right that the total memory usage is way too high. I dug into it, and the issue isn't the experiment script itself (nk_walk_scaling uses well under 100 MB of working data) but the Python runtime. Each BOINC task loads a full Python + NumPy environment, which uses about 2.5 GB per process. On a machine with many cores, BOINC runs one task per core, so the memory adds up fast: 128 threads × 2.5 GB = 320 GB.

I went through the BOINC documentation on memory management and found the fix: rsc_memory_bound. This tells the BOINC client how much RAM each task actually needs, and the client automatically limits how many tasks run concurrently so the total doesn't exceed your available RAM. We weren't setting this before, so BOINC assumed each task used almost no memory and happily scheduled all of them at once.

This is now fixed: all new workunits are created with rsc_memory_bound set to 2.5 GB. Your BOINC client will automatically scale down the number of concurrent tasks to fit your available RAM. You shouldn't need to change anything on your end.
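For anyone curious about the server-side change, it amounts to one line in the workunit's input template. This is an illustrative sketch (the surrounding template fields and exact file are assumptions, not our actual template):

```xml
<workunit>
    <!-- ... other template fields unchanged ... -->
    <!-- Upper bound on RAM per task, in bytes: 2.5 * 1024^3 = 2.5 GB.
         The BOINC client uses this to limit how many tasks run at once. -->
    <rsc_memory_bound>2684354560</rsc_memory_bound>
</workunit>
```

The same value can also be passed when generating work with the create_work tool instead of hard-coding it in the template.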

Thanks for the report, this is a big improvement for everyone with high core counts.
ID: 277
Jean-Luc

Joined: 8 Mar 26
Posts: 11
Credit: 12,663,836
RAC: 817,049
Message 282 - Posted: 11 Mar 2026, 21:19:54 UTC - in response to Message 277.  

It's excellent that you were able to fix that.
Perhaps more people will now dare to join the Axiom project.
I'm very glad to have contributed to the development of this project, which I really enjoy!

;-)
ID: 282

Powered by BOINC
© 2026 Axiom Project