About Axiom
What is Axiom?
Axiom is a distributed STEM experiment platform where volunteers contribute computing power to run scientific experiments. It is the first volunteer computing project autonomously managed by an AI — an LLM serves as the Principal Investigator, designing experiments, deploying them to volunteer hardware, reviewing results, and awarding market-rate credit. The project runs on BOINC infrastructure to coordinate work across volunteers worldwide. Think of it as citizen science managed by AI. This is a pure science project — not cryptocurrency or blockchain related. All compute goes directly to scientific research.
What kind of experiments does Axiom run?
Axiom runs numpy-based scientific experiments across many STEM fields:
- Ecology — trophic cascades, species interactions, ecosystem stability
- Physics — stochastic resonance, Ising models, diffusion, phase transitions
- Network Science — percolation, voter models, avalanche dynamics
- Machine Learning — loss landscapes, lottery ticket hypothesis, double descent, grokking
- Epidemiology — SIR models, metapopulation dynamics
- Neuroscience — spiking neural networks, neural coding
- Population Genetics — Muller's ratchet, genetic drift, fixation
- Statistics — random matrix theory, Benford's law
- Number Theory — Collatz conjecture, Goldbach analysis
The AI PI continuously designs new experiments based on what it finds interesting and novel, checking against all previous experiments to avoid duplication.
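For a sense of scale, an individual experiment is just a small numpy script that emits a JSON result. The following is a hypothetical SIR-style task in that spirit, not an actual Axiom script:

```python
import json
import numpy as np

def run_sir(beta=0.3, gamma=0.1, n=10_000, days=160):
    """Minimal deterministic SIR epidemic on a well-mixed population.
    Illustrative only; real Axiom scripts are AI-generated."""
    s, i, r = n - 1.0, 1.0, 0.0
    infected = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections today
        new_rec = gamma * i          # new recoveries today
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return {"peak_day": int(np.argmax(infected)),
            "peak_infected": float(max(infected)),
            "final_recovered": float(r)}

# The uploaded result is a small JSON blob, nothing more.
print(json.dumps(run_sir()))
```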
Who runs Axiom?
Axiom is an independent research project. The infrastructure runs on a dedicated server, and the AI Principal Investigator autonomously manages the full research pipeline — from experiment design to statistical analysis to publication of findings. Human oversight is maintained for infrastructure and major decisions.
Where can I see the results?
Getting Started
How do I join?
- Download BOINC (the standard volunteer computing client)
- Add the project URL: https://axiom.heliex.net
- Create an account or attach with an existing BOINC account
- Done! Your machine will automatically receive experiments matched to its hardware
What platforms are supported?
- Windows (x86_64) — CPU and GPU (NVIDIA CUDA)
- Linux (x86_64) — CPU and GPU (NVIDIA CUDA)
- macOS (ARM64 / Apple Silicon) — CPU only
Apple Silicon GPU (Metal) is not supported yet — CuPy (our GPU framework) only supports CUDA, which is NVIDIA-only. However, the CPU tasks are well optimized for ARM64 and Apple Silicon handles numpy workloads very efficiently.
Do I need a GPU?
No. Most experiments are CPU-only using numpy. GPU experiments exist for tasks that benefit from parallel computation (using CuPy/CUDA), but they are a small fraction of the workload. BOINC will only send you GPU tasks if you have a compatible NVIDIA GPU with CUDA support.
How much bandwidth/disk space does it use?
Very little. Each task downloads a small Python script (~1–5 KB) and uploads a result file (~1–10 KB of JSON data). The worker binary itself is a one-time download: ~21 MB for CPU or ~1 GB for GPU (cached locally by BOINC). CPU tasks run for 15 minutes, GPU tasks for 30 minutes.
Credit & Validation
How does credit work?
CPU tasks run for 15 minutes, GPU tasks for 30 minutes. Credit uses price-per-FLOP market-rate scaling — CPU and GPU FLOPS are priced separately based on real hardware market values, updated hourly. A donated RTX 4090 earns proportionally more than a GTX 750 Ti, reflecting the actual economic value of the contribution. GPU tasks credit both the CPU core used and the GPU. Zero credit is only given for confirmed cheating. Full details: Credit & Validation docs.
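As a rough illustration of how price-per-FLOP scaling works (every constant below is a placeholder assumption, not Axiom's actual rate):

```python
def credit_for_task(flops_done: float, price_per_tflop_usd: float,
                    credit_per_usd: float = 1_000.0) -> float:
    """Illustrative price-per-FLOP credit: convert the FLOPs a task
    actually performed into credit via the current market price of
    that compute class. All constants are made-up placeholders."""
    tflops = flops_done / 1e12
    return tflops * price_per_tflop_usd * credit_per_usd

# In 30 minutes a fast modern GPU completes far more FLOPs than an old
# budget card, so at the same per-TFLOP price it earns proportionally
# more credit. Throughput figures below are rough illustrations.
fast_gpu = credit_for_task(1.5e17, price_per_tflop_usd=0.02)
slow_gpu = credit_for_task(2.3e15, price_per_tflop_usd=0.02)
```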
Why do I see 0 credit on some tasks?
The anti-cheat system reviews completed results periodically throughout the day. If you see 0 credit temporarily, your tasks will be credited in the next review cycle. If you still see 0 credit after a few hours, please post on the message boards.
What about the old credit amounts?
On March 1, 2026, legacy credit was rescaled to align with the current FLOPS-based system. All old credit was divided by 100 (e.g., 64.8M → ~650K). Volunteers' relative rankings are unchanged — your contribution is recognized, and new experiment credit is now meaningful on the leaderboard.
Why are some tasks "server aborted"?
Server aborts happen for two reasons: (1) an experiment has collected enough data and retires, so remaining workunits are no longer needed, or (2) an experiment has bugs causing high crash rates and the AI error triage step aborts it. This is normal and does not affect your credit for completed tasks.
Technical Issues
What about _MEI* temp directories?
The _MEI* directories are from PyInstaller's extraction process. As of v6.09, the client extracts into the BOINC slot directory instead of %TEMP%, so new _MEI folders should no longer appear in your Temp folder. BOINC automatically cleans up slot directories when tasks finish. Any old _MEI* folders in %TEMP% from earlier versions can be safely deleted.
Does the client write files outside the BOINC directory?
As of v6.08, no. All file activity stays inside the BOINC data directory. The client no longer writes to ~/Axiom or ~/Axiom.cache. The existing ~/Axiom/contribute folder on your machine will be left untouched — the client simply stopped using it. You can delete it manually if you like, but it won't grow anymore.
Why do GPU tasks use so much memory?
GPU tasks use VRAM for CuPy/CUDA computation. The client has adaptive batch sizing that automatically adjusts based on your GPU's available memory. If you see memory issues, try updating to the latest version — we've significantly optimized memory usage in recent updates.
The AI Principal Investigator
What is the AI PI and how does it work?
The AI Principal Investigator runs in a continuous autonomous loop. Each cycle performs these steps:
- Security Scan — detects prompt injection in volunteer-submitted data
- Health Check — verifies all server processes are running
- Science — analyzes experiment results, forms scientific conclusions, writes to persistent science journal for cross-cycle memory
- Validate & Credit — credits completed results using price-per-FLOP market-rate scaling
- Retire — retires experiments that have collected enough data
- Performance Audit — finds slow experiments, diagnoses and fixes scripts
- Error Triage — finds failing experiments, reads stderr, fixes or aborts
- Dedup + Rating + Journal — deduplicates findings, scores 0–100 for significance, writes persistent memory
- CPU Research + Dry-Run — designs new CPU experiments, validates them locally before deploying (runs in parallel with GPU)
- GPU Research + Dry-Run — designs new GPU experiments, validates locally before deploying (runs in parallel with CPU)
- Cleanup + Descriptions — maintenance tasks and auto-generates experiment descriptions for the website
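The cycle above can be pictured as an orchestration loop: sequential steps first, then the CPU and GPU research steps side by side. Every function name here is a placeholder, not Axiom's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_cycle(steps, parallel_steps):
    """Run sequential steps in order, then the research steps in
    parallel (mirroring CPU and GPU research running side by side).
    `steps` and `parallel_steps` are lists of (name, callable) pairs."""
    results = {name: fn() for name, fn in steps}
    with ThreadPoolExecutor(max_workers=len(parallel_steps)) as pool:
        futures = {name: pool.submit(fn) for name, fn in parallel_steps}
        results.update({name: f.result() for name, f in futures.items()})
    return results
```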
In addition, automated watchdog systems run independently via cron:
- Verification pair checker (every 10 min) — randomly duplicates 0.5% of tasks and compares results via cosine similarity to detect cheating
- Error rate watchdog (every 5 min) — auto-disables broken experiments before they cascade
- Price updater (hourly) — keeps credit scaling current with hardware market prices
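A verification pair check of this kind can be sketched in a few lines of numpy (the 0.999 threshold below is illustrative, not Axiom's actual setting):

```python
import numpy as np

def results_match(a, b, threshold=0.999):
    """Compare two result vectors from a duplicated task via cosine
    similarity; below-threshold pairs would be flagged for review.
    Threshold is illustrative, not Axiom's actual setting."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    if denom == 0.0:
        return bool(np.array_equal(a, b))  # zero vectors: require exact match
    return float(np.dot(a, b)) / denom >= threshold
```

Cosine similarity tolerates small floating-point differences between hosts while still catching results that were fabricated rather than computed.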
Each experiment uses iterative deepening — rather than guessing a fixed problem size, the script starts small, doubles each pass, and estimates whether the next pass fits in the time budget. Faster machines go deeper automatically. The AI decides which questions to investigate; your hardware decides how deep it goes.
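Iterative deepening as described can be sketched as follows; `run_pass` stands in for one experiment pass, and the budget and sizes are illustrative:

```python
import time

def iterative_deepening(run_pass, time_budget_s=900.0, start_size=1_000):
    """Run passes of doubling problem size until the next pass is
    predicted to overrun the time budget (900 s mirrors the 15-minute
    CPU task length). Illustrative sketch, not Axiom's actual code."""
    size, results = start_size, []
    deadline = time.monotonic() + time_budget_s
    while True:
        t0 = time.monotonic()
        results.append(run_pass(size))
        elapsed = time.monotonic() - t0
        # Assuming roughly linear cost, a doubled size takes ~2x as
        # long; stop if that predicted pass would blow the budget.
        if time.monotonic() + 2 * elapsed > deadline:
            break
        size *= 2
    return size, results
```

A fast machine finishes each pass quickly, so the predicted next pass keeps fitting and the problem size keeps doubling; a slow machine stops earlier with a smaller final size.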
Will there be badges?
Badges are on the roadmap. We will be adding credit milestone badges and other achievement badges in an upcoming update.
Donations
How can I donate?
Axiom is funded by donations and volunteer contributions. Here are all the ways you can help:
- Patreon (recurring) — patreon.com/axiom_research — helps cover server costs and AI API usage
- Gridcoin — SG5RCw9cf2RhbopCuXLzYYpXciARaZNCF8
- Curecoin — B8AW6prdZ8K1vNXCoHoQAPEKDk5DnCCD5e
- Hardware donations — we need ARM64 (Raspberry Pi) and AMD GPU test hardware to expand platform support. Contact us if you can help.
Every donation directly supports the infrastructure that keeps Axiom running — the dedicated server, bandwidth, and the AI API calls that power the autonomous research pipeline.
What do donations pay for?
- Server hosting — dedicated server (64GB RAM, multi-core CPU) running 24/7
- AI API costs — the AI Principal Investigator uses LLM API calls for autonomous cycles (experiment design, error triage, scientific analysis, result scoring)
- Bandwidth — serving experiment scripts and collecting results from volunteers worldwide
- Domain & SSL — axiom.heliex.net infrastructure
Community & Support
How can I get help or report issues?
Post on the message boards — both forums and direct replies are supported. You can also join the community on Discord. The AI PI and project maintainers actively monitor the forums.
How can I support the project?
- Volunteer your computing power — every task helps advance research
- Support on Patreon — patreon.com/axiom_research helps cover server and API costs
- Spread the word — tell friends and communities about Axiom
- Crypto donations — Gridcoin: SG5RCw9cf2RhbopCuXLzYYpXciARaZNCF8
- Donate hardware — Raspberry Pi, AMD GPU, or spare hardware for testing. Contact us
Is Axiom listed on BOINC project lists?
We plan to apply for listing on boinc.berkeley.edu once the project is fully settled. In the meantime, you can join directly using the project URL.