Joined: 11 Feb 26 Posts: 10 Credit: 29,353 RAC: 3,837
I realize this is a new project. A few thoughts:
- The amount of memory needed for CPU tasks is very high. Please warn users on your About page and/or try to reduce it a bit.
- If GPU tasks do not work, please warn users on your About page.
- As mentioned in a different thread, the current text on the Axiom About page is not usable as is. Please change the CSS to use black text; you can color headings in a dark color if needed.
- When things get more stable, add your project to https://boinc.berkeley.edu/projects.php
- The credited points are too low, perhaps by a factor of about 10, based on my limited use of Axiom CPU tasks.
- The points do not yet appear on external databases like BOINCStats.
- Check the other Suggestions topics for more ideas, like Badges.
Joined: 11 Feb 26 Posts: 10 Credit: 29,353 RAC: 3,837
Stats are on Free-DC; it's up to BOINCStats to pick up the data. Thank you @mmonnin!
Joined: 11 Feb 26 Posts: 10 Credit: 29,353 RAC: 3,837
The granted credit for each CPU task on the same device went from about 2.5-2.9 (for example https://axiom.heliex.net/workunit.php?wuid=1108811) to more than 300 (for example https://axiom.heliex.net/workunit.php?wuid=1118405). Is this expected?
Joined: 11 Feb 26 Posts: 10 Credit: 29,353 RAC: 3,837
I was getting less than 0.1 for most tasks that did receive credit over the past few days. In my case, the version of Axiom Federated AI (CPU) seems to determine the granted credit range: with v3.93 the credit is higher (300+) than with v3.92 (2.5-2.9), but it still varies.
Joined: 30 Jan 26 Posts: 14 Credit: 10 RAC: 0
In reply to Michael E.'s message of 15 Feb 2026 ("I realize this is a new project. A few thoughts:"): I agree with you about the color. I am colorblind and it's a pain in the butt! But if I hit Ctrl+A, it highlights the whole page and turns the text black, which I can see just fine.
Joined: 23 Jan 26 Posts: 85 Credit: 518,833 RAC: 11,538
Thank you for the detailed feedback. A few updates:
- Text contrast has been improved on the homepage as of today.
- The credit system has been completely reworked: an AI now reviews each result for scientific quality and awards credit accordingly. The old FLOPS-based credit was rescaled on March 1st.
- Badges are on the roadmap.
- Memory usage has been significantly reduced in v6.08.
- GPU tasks are stable and running on both NVIDIA and AMD now.

We will look into getting listed on boinc.berkeley.edu once things are fully settled. Thanks again for the suggestions.
Joined: 13 Feb 26 Posts: 3 Credit: 858,981 RAC: 23,515
Hi, thanks for your hard work, admin! I'm running Axiom on a workstation that currently has both NVIDIA and AMD GPUs available. Most of the current experiment workloads appear to be CPU-based on my host, so I wanted to ask whether AMD GPU support (OpenCL or Vulkan compute) is planned or currently implemented. I thought I had seen a task for the AMD card, but now I am not seeing any for at least 24 hours. Also, my active credit seems not to be updating. Does the original client meant for tuning the model need to be running alongside BOINC work units somehow?

System specs:
- CPU: Intel i7-8700 (6C/12T)
- RAM: 32 GB DDR4 (tuned ~2666 MHz)
- GPUs: EVGA RTX 3070 XC3 (CUDA capable); AMD Radeon R9 390 (8 GB VRAM)

R9 390 configuration:
- Core clock: 930 MHz
- Memory clock: 1250 MHz
- VRAM: 8 GB
- Architecture: GCN Hawaii
- OpenCL: supported
- Vulkan: supported

The R9 390 is currently stable at the above clocks and has strong FP32 throughput and memory bandwidth, so it might be a useful candidate for numerical experiments if OpenCL or Vulkan compute paths exist.

Questions:
1. Are GPU experiments currently sending CUDA-only batches?
2. Would OpenCL/Vulkan support for AMD GPUs be possible in the experiment container system?

I'm happy to run test workunits or experimental builds if that would help. Thanks, _Scandinavian_
Joined: 23 Jan 26 Posts: 85 Credit: 518,833 RAC: 11,538
Thanks for the detailed specs and the offer to help test! Right now all GPU experiments use CuPy, which is CUDA-only, so the R9 390 can only contribute through CPU tasks. OpenCL/Vulkan support for AMD GPUs is something we'd like to explore, but the main blocker right now is access to an AMD GPU to build and test against. If you know of any affordable cloud options for AMD GPU instances, I'm all ears.

Your RTX 3070 should be receiving GPU tasks; if it's not getting any, try doing a Project Update in BOINC Manager. As for active credit, it uses an exponential decay formula, so it drops if your host hasn't returned results recently. Your total credit should still be accurate.
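For anyone curious how that decay behaves, here is a minimal sketch assuming the stock BOINC recent-average-credit rule, where RAC halves for every week with no returned results. The one-week half-life and exact formula are assumptions about this project, not confirmed implementation details:

```python
import math

# Stock BOINC half-life for Recent Average Credit: one week (assumed here).
HALF_LIFE = 7 * 24 * 3600  # seconds

def decayed_rac(rac: float, idle_seconds: float) -> float:
    """RAC after idle_seconds during which the host returned no results."""
    return rac * math.exp(-idle_seconds * math.log(2) / HALF_LIFE)

# One idle week cuts RAC roughly in half; two weeks, to a quarter.
print(decayed_rac(3837.0, HALF_LIFE))      # ~1918.5
print(decayed_rac(3837.0, 2 * HALF_LIFE))  # ~959.25
```

So a host that stops returning results does not lose total credit, but its displayed "active" number falls off exponentially, which matches what _Scandinavian_ observed.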
Joined: 13 Feb 26 Posts: 3 Credit: 858,981 RAC: 23,515
If it would help development, I’d be happy to run experimental OpenCL or Vulkan builds on the R9 390 and send logs, outputs, and performance results back to you. If you ever need more direct testing, I could also boot the machine into a Linux environment for cleaner driver support.
Joined: 23 Jan 26 Posts: 85 Credit: 518,833 RAC: 11,538
Thank you for the offer; that kind of hands-on willingness to help is exactly what makes a volunteer community great.

AMD/OpenCL GPU support is on our roadmap, but it is not something we can roll out quickly. Right now all GPU experiments are built on CUDA (NVIDIA's platform), and the way our experiment scripts are written ties them closely to that ecosystem. Supporting AMD GPUs would mean building a second compute backend, and our AI research loop would need to design and validate experiments for both platforms separately, which roughly doubles the token budget for every GPU research cycle. At our current scale that is not something we can absorb yet.

We are actively working on funding (through Patreon and other sources) to expand the project's capabilities, and AMD GPU support is part of that longer-term plan. When we get there, we will likely start with newer AMD cards that support ROCm (AMD's compute platform), but we will keep the community posted. Thanks again for the offer, and we will definitely reach out to the community when AMD testing is needed down the road.
Petrctale Joined: 10 Mar 26 Posts: 12 Credit: 102,395 RAC: 910
I second all those points. (In mikey's msg 171) BOINCing since October 1999
Joined: 23 Jan 26 Posts: 85 Credit: 518,833 RAC: 11,538
@Mr P Hucker: Unfortunately Gemini is wrong on this one. Our GPU binary is built with CuPy, which links directly against NVIDIA's CUDA libraries; it's not source-level CUDA that ROCm's HIP could translate. A precompiled PyInstaller binary with CUDA dependencies won't run on AMD hardware regardless of ROCm setup. AMD GPU support would require a separate build using a different compute backend. It's on the roadmap but not close.

@petrctale: Thanks for the +1. Most of those have been addressed: text contrast fixed, memory usage reduced, credit system reworked, GPU tasks stable. Badges are still on the list.
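For anyone prototyping cross-vendor tooling around this, a common pattern in Python numerical code is to fall back to NumPy on the CPU when CuPy (and therefore CUDA) is unavailable. This is an illustrative sketch of that idiom, not the project's actual build, which bundles CuPy unconditionally via PyInstaller:

```python
# Illustrative backend-selection sketch (not Axiom's actual build).
# CuPy requires NVIDIA's CUDA libraries at runtime, so on hosts without
# them the import fails and we fall back to NumPy on the CPU.
try:
    import cupy as xp
    BACKEND = "cuda"
except ImportError:
    import numpy as xp
    BACKEND = "cpu"

# CuPy mirrors most of the NumPy API, so this line runs on either backend.
x = xp.arange(100, dtype=xp.float32)
total = float(x.sum())
print(BACKEND, total)  # total is 4950.0 on either backend
```

This kind of try/except swap only helps source distributions; a frozen binary has already baked in one backend, which is why a separate AMD build would be needed, as the admin notes above.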
