Message boards : Technical Support : A curious observation regarding GPU Work Units (WUs)
Message board moderation
| Author | Message |
|---|---|
|
Send message Joined: 8 Mar 26 Posts: 11 Credit: 12,663,836 RAC: 817,049 |
When my computer runs a GPU WU (such as exp_trophic_rank_forcing_localization_...), the GPU load (as reported by "nvidia-smi") is 100%, and the WU takes approximately 840 seconds. If I launch four WUs per GPU simultaneously using an "app_config.xml" file, the GPU load remains at 100%. Curiously, the calculation time also remains approximately 840 seconds, and even more curiously, the credit awarded per WU stays the same. Isn't there something abnormal about this? In any case, I don't understand it. |
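For readers wanting to reproduce the setup: a minimal `app_config.xml` that lets BOINC run four tasks per GPU uses a `gpu_usage` of 0.25. This is a sketch; the `<name>` value here is hypothetical and must match the actual application name in your `client_state.xml`.

```xml
<app_config>
  <app>
    <!-- Replace with the real app name from client_state.xml -->
    <name>exp_trophic_rank</name>
    <gpu_versions>
      <!-- 0.25 GPUs per task => 4 tasks share one GPU -->
      <gpu_usage>0.25</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Place the file in the project's directory under BOINC's data directory, then choose "Read config files" in the BOINC Manager (or restart the client).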
|
Send message Joined: 23 Jan 26 Posts: 83 Credit: 504,548 RAC: 9,567 |
This is expected behavior for the version you are running. Each experiment uses iterative deepening (described in the v6.33 update on the front page): it starts with a small problem size, doubles it each pass, and estimates whether the next pass will fit in the time budget. When multiple WUs share one GPU, each gets a fraction of the compute, so iterative deepening adapts by not going as deep; each task still fills the full time window and produces valid results. Credit is awarded based on elapsed time, so it stays the same per task. Also worth noting: in the latest update (v6.38), we fixed multi-GPU support. Previously, all GPU tasks defaulted to the same GPU regardless of how many you have; now BOINC assigns each task to its correct GPU. If you have multiple GPUs, you should see each of them getting its own tasks independently once you pick up the new version. |
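The deepening loop described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `run_pass` callback, the injectable `clock`, and the "next pass costs roughly 2x the last one" estimate are all assumptions made for the sketch.

```python
import time

def iterative_deepening(run_pass, time_budget, start_size=1, clock=time.monotonic):
    """Run passes of doubling problem size until the estimated cost of the
    next pass would exceed the remaining time budget.

    run_pass(size) computes one full pass at the given problem size.
    Returns the largest size that completed.
    """
    deadline = clock() + time_budget
    size = start_size
    deepest = None
    while True:
        t0 = clock()
        run_pass(size)                 # one full pass at the current size
        elapsed = clock() - t0
        deepest = size
        # Sizes double, so estimate the next pass at ~2x this one;
        # stop if that would overrun the remaining budget.
        if 2 * elapsed > deadline - clock():
            break
        size *= 2
    return deepest
```

Run four such loops on one GPU and each pass takes roughly four times as long, so each loop simply stops at a shallower depth while still consuming the full time window, which matches the observation in the question.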
|
Send message Joined: 8 Mar 26 Posts: 11 Credit: 12,663,836 RAC: 817,049 |
Many thanks for the explanations! |
