| Author | Message |
|---|---|
|
Send message Joined: 3 Mar 26 Posts: 2 Credit: 0 RAC: 0 |
I am trying to run the project on macOS using Docker. The client connects but reports no work available to process. I am using an M4 mini and a Panther Lake machine. Is there a plan for native macOS support or a fix for the Docker work supply? Data shows 91 active hosts and 9773 running experiments. I want to contribute my hardware to the AI principal investigator. Please let me know if any logs are needed from the container.
|
Send message Joined: 13 Feb 26 Posts: 3 Credit: 858,981 RAC: 23,515 |
Hello. I’m certainly not the authority on this, but I looked at the platform statistics page and macOS doesn’t appear anywhere in the OS list for the project. That suggests the application binaries probably aren’t compiled for macOS yet. Since your M4 mini is Apple Silicon (ARM), it likely won’t receive work unless the project provides a macOS/ARM build or you use some compatibility layer to run standard Windows programs (e.g., Wine or similar), which may or may not work with BOINC. If your Panther Lake machine can boot Windows and run the native BOINC client, that should be a better bet. Since it’s an x86 system, it should match the Windows executables the project appears to distribute, and the scheduler should then send work units normally. My second computer on the project is a MacBook Pro, running Windows.
rilian Send message Joined: 30 Jan 26 Posts: 11 Credit: 403,007 RAC: 32,736 |
I just got a few WUs on a Mac with an M1 processor. I crunch for Ukraine |
|
Send message Joined: 3 Mar 26 Posts: 2 Credit: 0 RAC: 0 |
Update: Now getting 10 concurrent tasks on my base M4 Mac mini, estimated at 4 hours but finishing in 15 minutes, in low power mode! Since Einstein@Home and PrimeGrid have Apple Silicon iGPU support, does anybody know when/if this project will also get it, or is this new project CPU only? |
|
Send message Joined: 23 Jan 26 Posts: 85 Credit: 518,932 RAC: 11,538 |
Great to hear the M4 is running well! 10 tasks finishing in 15 minutes is solid throughput. Right now macOS runs CPU experiments only. Apple Silicon GPU (Metal) is not supported yet — CuPy (our GPU framework) only supports CUDA, which is NVIDIA-only. There is no near-term plan for Metal/iGPU support, but the CPU tasks are well optimized for ARM64, and Apple Silicon handles NumPy workloads very efficiently, so you are getting good science output as-is. Thanks for contributing! |
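For anyone curious what "CuPy is CUDA-only" means in practice, here is a minimal sketch (not the project's actual code) of the common try-CuPy-fall-back-to-NumPy pattern. On Apple Silicon there is no CUDA runtime, so the fallback branch always runs and the work stays on the CPU:

```python
# Illustrative sketch, not the project's real backend selection.
# CuPy targets CUDA (NVIDIA GPUs) only, so on macOS/ARM64 the
# import or device probe fails and we fall back to NumPy.
try:
    import cupy as xp              # needs an NVIDIA GPU + CUDA toolkit
    xp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
except Exception:
    import numpy as xp             # CPU path — what Apple Silicon hosts use

# Downstream code is written against the common array API,
# so it runs unchanged on either backend.
a = xp.arange(6).reshape(2, 3)
print(int(a.sum()))  # prints 15 on either backend
```

Because NumPy and CuPy share most of their array API, this kind of drop-in aliasing is a common way projects keep one code path for both CPU and GPU hosts.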
