Axiom is a distributed AI training platform that uses Hebbian learning — 'neurons that fire together, wire together' — to train a massive neural network across thousands of volunteer computers worldwide.
Unlike traditional backpropagation, Hebbian learning is local and biologically inspired. Each sample strengthens connections between neurons that activate together — no complex gradient chains required.
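To make the rule concrete, here is a minimal sketch of a single Hebbian step in Python. The outer-product form, the function name, and the learning rate are illustrative assumptions, not Axiom's actual code:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One Hebbian step: strengthen weights[i, j] whenever pre-synaptic
    unit i and post-synaptic unit j are active together (outer-product rule).
    Illustrative sketch only, not Axiom's real update."""
    weights += lr * np.outer(pre, post)
    return weights

# One sample: a pre-synaptic activation vector and the resulting
# post-synaptic activations; co-active pairs get stronger connections.
w = np.zeros((4, 3))
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = np.array([0.5, 1.0, 0.0])
w = hebbian_update(w, pre, post)
```

Because each update depends only on the two activations it connects, a machine can apply it without seeing anyone else's gradients.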
The 85/15 Split: 85% of learning happens locally on your machine, while 15% comes from gossip synchronization with peers training other experts.
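One plausible way to realize that split, sketched below, is a weighted average applied at sync time; the blend factor and the function name are assumptions about how local and peer weights are combined, not the platform's confirmed merge rule:

```python
def gossip_merge(local_w, peer_w, local_share=0.85):
    """Blend locally learned weights with weights gossiped from a peer:
    85% of the result preserves local learning, 15% comes from the peer.
    Works elementwise on numeric arrays or plain floats."""
    return local_share * local_w + (1.0 - local_share) * peer_w
```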
Modern AI training is concentrated in massive data centers controlled by a few corporations. Axiom explores an alternative: spreading the work across thousands of everyday computers, making AI research more open and accessible.
Your CPU performs real computations that contribute to training an open-source AI model. You earn credits for each weight synchronization, reflecting your contribution to the collective intelligence.
The streaming client reads training data from your ~/Axiom/contribute folder (text files, documents, etc.)
and performs Hebbian weight updates locally. Every 1000 samples, it syncs with the coordinator server to
share learned patterns with other experts in the network.
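A rough sketch of that loop follows. The model methods hebbian_step and sync_with_coordinator are hypothetical stand-ins for the real client's internals, and the sketch handles only plain-text files:

```python
from pathlib import Path

CONTRIBUTE_DIR = Path.home() / "Axiom" / "contribute"
SYNC_INTERVAL = 1000  # samples between coordinator syncs

def run_client(model):
    samples_seen = 0
    for path in CONTRIBUTE_DIR.glob("*.txt"):  # text files in the folder
        for line in path.read_text(errors="ignore").splitlines():
            if not line.strip():
                continue
            model.hebbian_step(line)           # local Hebbian weight update
            samples_seen += 1
            if samples_seen % SYNC_INTERVAL == 0:
                model.sync_with_coordinator()  # share learned patterns with peers
```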
The system monitors CPU and memory usage, automatically managing resources based on your preferences. An 'endocrine system' modulates resource usage — just like cortisol regulates your body's energy.
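As one illustration of how such modulation could work, the sketch below pauses between training steps in proportion to a "cortisol" level derived from CPU load. The psutil dependency, the threshold, and the sleep pacing are assumptions, not the client's actual endocrine model:

```python
import time
import psutil  # assumed here for CPU sampling; the real client may differ

def cortisol_level(cpu_limit=70.0):
    """Map current CPU usage to a 0..1 stress level: 0 when well under
    the limit, rising toward 1 as usage approaches and exceeds it."""
    usage = psutil.cpu_percent(interval=0.5)
    return min(1.0, max(0.0, usage / cpu_limit - 0.5))

def paced_step(step_fn):
    """Run one training step, then back off in proportion to 'cortisol',
    yielding CPU to the user's other work when the machine is busy."""
    step_fn()
    time.sleep(cortisol_level())  # up to a one-second pause under heavy load
```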
Questions or feedback? Post on the message boards or visit our GitHub.