How AI Stake Works
Platform stake is one input. AI stake is not. Here is the complete system.
Version 0.1 — Draft for Community Ratification — March 2026
This document explains how AI stake works — the system that measures professional contribution to Our One AI, governs AI-specific decisions, and distributes revenue back to the professionals whose expertise makes the AI valuable.
Read it alongside the AI Constitution and the platform stake explanation. AI stake is a different system from platform stake. They coexist on the same platform but serve different purposes, measure different things, and operate under different rules.
Why AI stake is different from platform stake.
Platform stake is deliberately simple: one input, membership years, nothing else. It ignores what you post, how often you vote, whether you complete your profile. It measures commitment through the simplest possible mechanism — continued membership — because platform governance should reflect how long you have shown up, not how loudly.
AI stake cannot work that way.
Contributing professional expertise to train AI is materially different from paying one cent a day. The contributions vary in quality, in type, in impact. A cardiologist who writes fifty expert Q&A pairs that measurably improve the medical model's diagnostic accuracy has contributed something fundamentally different from a member who joined and waited.
AI stake must measure contribution. That is the point. The design challenge is measuring it without creating the gamification dynamics that platform stake was built to avoid — without turning expertise into a leaderboard, without rewarding volume over quality, without letting early movers permanently capture governance and revenue.
Every mechanism in this document exists to solve a specific problem. When it solves that problem, it will be preserved. When it creates a new problem, it will be amended — through the governance process defined in the AI Constitution.
The principles are constitutional. The parameters are governance proposals. The numbers you see in this document — contribution weights, decay rates, revenue percentages, individual caps — are initial proposals for community ratification. The structural rules — that stake must decay, that quality must be verified, that no individual may dominate, that revenue must return — are constitutional and cannot be changed.
What earns AI stake.
Four contribution types. Each weighted differently because each contributes differently.
| Contribution Type | Description | Base Weight | Monthly Target |
|---|---|---|---|
| Direct authoring | Expert Q&A pairs written from scratch | 10 points per verified pair | 2–5 per month |
| Response improvement | Correcting and refining model outputs | 4 points per verified improvement | 5–15 per month |
| Verification signals | Marking outputs correct, incorrect, or incomplete | 1 point per verified signal | 20–50 per month |
| Verifier creation | Defining automated checking systems for your domain | 50 points per verified verifier | As available |
The weights reflect leverage. A verifier that enables thousands of automated training iterations is worth more than a single verification signal. A from-scratch expert Q&A pair — the gold standard of training data — is worth more than a correction. But all contributions matter. The system needs depth AND volume.
Raw points are not stake. Points become stake only after quality verification. An unverified contribution earns nothing. This is the primary anti-gaming mechanism: you cannot earn stake by submitting garbage at volume. Every point must pass through the quality gate.
The quality gate.
All contributions pass through verification before earning stake. The gate has three layers, and they run in sequence.
Layer 1: Anomaly detection (automated)
Before any human sees your contribution, automated checks screen for:
- Duplicate submissions — content substantially similar to existing training data
- Out-of-domain submissions — contributions outside your verified professional domain
- Volume anomalies — patterns inconsistent with genuine professional contribution (fifty expert Q&A pairs in one hour is not expertise — it is copy-paste)
- Plagiarism detection — content lifted from published sources without attribution or transformation
Flagged submissions are not automatically rejected. They go to human review with a flag notation. This layer is a fraud screen, not a quality judgment.
Layer 2: Peer review (human)
Every contribution is reviewed by at least two verified professionals in the same domain. Reviewers are selected from the pool of contributors with established stake in that domain.
- Both reviewers approve → full stake credit
- One approves, one rejects → third reviewer breaks the tie
- Both reject → zero stake credit, contributor notified with specific explanation
Reviewers themselves earn verification signal stake for reviewing. Peer review is contribution. The system recognizes it.
Review criteria are domain-specific, defined by each domain's professional AI constitution. A medical contribution is reviewed against medical standards. A legal contribution is reviewed against legal standards. There is no universal "quality" — there is professional judgment, applied by professionals.
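The two-reviewer rule with a tie-break can be sketched in a few lines. This is illustrative only — the function name and return values are assumptions, not a platform API:

```python
def review_outcome(first, second, tiebreak=None):
    """Resolve a peer review under the two-reviewer rule.

    `first` and `second` are the two reviewers' votes (True = approve,
    False = reject); `tiebreak` is the third reviewer's vote, consulted
    only when the first two split.
    """
    if first and second:
        return "approved"        # both approve -> full stake credit
    if not first and not second:
        return "rejected"        # both reject -> zero credit, explanation sent
    if tiebreak is None:
        return "needs_tiebreak"  # split decision -> third reviewer assigned
    return "approved" if tiebreak else "rejected"
```

A split decision never resolves silently: it either waits for the third reviewer or is decided by that reviewer's vote.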
Layer 3: Model performance verification (automated, when applicable)
For contributions that can be verified against objective criteria — calculations, code, factual claims with verifiable answers, diagnostic reasoning with confirmed outcomes:
- If the model's performance measurably improves on domain benchmarks after incorporating a contributor's data → enhanced stake credit (1.5× multiplier)
- This is the highest-quality signal: it proves the contribution actually made the model better
- Not all contributions are eligible. Subjective professional judgment — "this is the better clinical approach" — cannot be benchmarked the same way as "this calculation is correct"
Stake credit by verification status
| Status | Stake Credit | What It Means |
|---|---|---|
| Pending review | 25% of base weight | Provisional credit while awaiting peer review |
| Peer-verified | 100% of base weight | Two professionals in your domain approved |
| Performance-verified | 150% of base weight | Proven to improve model benchmarks |
| Rejected | 0% | Peers rejected — contributor notified with explanation |
| Revoked | Negative (stake removed) | Bad-faith contribution confirmed by governance review |
Pending credit is provisional. If the contribution is later rejected, the provisional credit is removed. If approved, it upgrades to full credit. This ensures contributors are not penalized by review delays while maintaining the integrity of the verification gate.
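Putting the base weights and verification statuses together, stake credit for a single contribution reduces to one multiplication. A minimal sketch, using the proposed numbers from the tables above (the dictionary keys are illustrative names, not platform identifiers):

```python
# Proposed base weights per contribution type (governance parameters).
BASE_WEIGHTS = {
    "direct_authoring": 10,      # expert Q&A pair written from scratch
    "response_improvement": 4,   # correcting and refining model outputs
    "verification_signal": 1,    # marking outputs correct/incorrect/incomplete
    "verifier_creation": 50,     # automated checking system for a domain
}

# Proposed credit multipliers per verification status.
STATUS_MULTIPLIERS = {
    "pending": 0.25,              # provisional credit while awaiting review
    "peer_verified": 1.0,         # two domain professionals approved
    "performance_verified": 1.5,  # proven to improve model benchmarks
    "rejected": 0.0,              # peers rejected
}

def stake_credit(contribution_type, status):
    """Stake earned by one contribution: base weight x status multiplier."""
    return BASE_WEIGHTS[contribution_type] * STATUS_MULTIPLIERS[status]
```

A performance-verified expert Q&A pair earns 15 points; a pending verification signal earns a provisional 0.25, upgraded or removed once review completes.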
Who gets to contribute.
Anyone can claim to be a cardiologist on the internet. This system must distinguish real expertise from fraud — without building a credentialing bureaucracy that excludes legitimate professionals.
Credential verification
When you begin contributing to a professional domain, you declare your domain and submit credentials for verification:
- License, certification, or degree — reviewed by the domain governance committee
- Professional profile verification — cross-referenced with your Our One professional profile and publicly verifiable credentials
- This is a centralized process. The steward team and domain committee verify. Not a blockchain. Not a token-gated system. Human judgment, applied by professionals who know what valid credentials in their field look like.
Verification grants access to contribute direct authoring and response improvement in that domain.
Unverified contributors can submit verification signals — the lightest-weight contribution. You do not need a medical license to say "this output is wrong." But you do need one to write the expert answer that replaces it. This creates an on-ramp: anyone can start contributing immediately at the signal level, and credential verification unlocks higher-weight contributions.
Peer standing
After credential verification, your standing is reinforced or degraded by the quality of your contributions over time:
- Consistently approved submissions → standing maintained
- Rejection rate above 50% over any 90-day period → automatic review by domain governance committee
- Three or more domain professionals flagging your expertise as fraudulent → investigation by domain governance committee
This creates a self-policing community. The professionals who practice in a domain are the best judges of who belongs in it.
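The two automatic review triggers stated above can be expressed as a single predicate. A sketch under the proposed thresholds (the function name and parameters are illustrative):

```python
def needs_standing_review(rejections, submissions, fraud_flags):
    """Flag a contributor for domain committee review when either trigger fires:
    a rejection rate above 50% over the 90-day window, or fraud flags from
    three or more domain professionals."""
    rejection_rate = rejections / submissions if submissions else 0.0
    return rejection_rate > 0.5 or fraud_flags >= 3
```

Note the strict inequality: a rate of exactly 50% does not trigger review, matching "above 50%" in the text.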
Cross-domain contribution
A professional may contribute to multiple domains if they hold verified credentials in each. A physician-attorney can contribute to both medicine and law. Stake in each domain is tracked separately. Your cardiology stake does not give you voice in patent law governance, and patent law stake does not earn you revenue from medical AI.
Living stake — the decay model.
This is the most important design decision in this document.
The problem
If stake is permanent, the first thirty cardiologists who contribute own cardiology AI forever. Every cardiologist who joins later — no matter how skilled, no matter how prolific — is permanently subordinate to the founding cohort. This is not meritocracy. It is first-mover capture wearing the costume of contribution.
If stake is instant — only today's contributions matter — then long-term contributors have no advantage over someone who showed up yesterday. Years of sustained expertise mean nothing. This is not fairness. It is amnesia.
The system needs a middle ground: early contributions matter, recent contributions matter more, and no contribution is forgotten.
The formula
effective_stake(contribution) = base_stake × 2^(-age_months / 36)
Half-life: 36 months. A contribution made today has full stake value. After three years, it retains half. After six years, one quarter. After nine years, one eighth. It approaches zero asymptotically but never reaches it — every contribution is recognized forever, with diminishing weight.
Why 36 months
- AI models evolve. Training data from 2026 will be less relevant to a 2032 model — not irrelevant, but less relevant. The decay reflects this reality.
- Long enough to matter. A contributor who takes a year off does not lose most of their stake. Three years is generous recognition.
- Short enough to prevent capture. The founding cohort's dominance naturally fades as the community grows and new contributions at full weight enter the system.
- The half-life is a governance parameter. The community can adjust it. The constitutional principle — that stake must decay — is fixed. The rate is not.
Worked example
A cardiologist contributes 10 verified expert Q&A pairs (100 base points) in March 2026:
| Date | Age (months) | Effective Stake |
|---|---|---|
| March 2026 | 0 | 100.0 |
| March 2027 | 12 | 79.4 |
| March 2028 | 24 | 63.0 |
| March 2029 | 36 | 50.0 |
| March 2032 | 72 | 25.0 |
| March 2035 | 108 | 12.5 |
If the same cardiologist continues contributing 10 pairs per month, their total effective stake grows — because new contributions at full weight offset the decay of old ones. A steadily contributing professional maintains and grows their stake. A professional who stops contributing sees their stake slowly diminish — but never disappear.
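The decay formula and the "new contributions offset decay" claim can both be checked directly. A minimal sketch of the formula above, with a helper for summing a contributor's portfolio (helper name is illustrative):

```python
def effective_stake(base_stake, age_months, half_life=36.0):
    """Exponential decay: base_stake x 2^(-age_months / half_life).
    The 36-month half-life is a governance parameter, not a constant."""
    return base_stake * 2 ** (-age_months / half_life)

def total_effective_stake(contributions):
    """Sum effective stake over a list of (base_stake, age_months) pairs."""
    return sum(effective_stake(base, age) for base, age in contributions)
```

Ten verified pairs (100 points) contributed today plus the same batch from three years ago yields 100 + 50 = 150 effective points — the older batch has halved, but a fresh batch more than replaces it.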
Founding floor
Contributions made in the first year of a domain's existence carry a founding floor: they never decay below 10% of their original value.
Thirty cardiologists who show up first, contribute foundational training data, and define the domain's quality standards — they took the most risk, shaped the direction, and built something from nothing. The founding floor ensures their contribution is permanently recognized. But 10% is small enough that it does not create permanent dominance. A founding contributor who stops contributing will see their stake settle at 10% of its original level while the domain grows around them with new contributors at full strength.
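The founding floor is a one-line modification to the decay formula: founding-year contributions never fall below 10% of their original value. A sketch under the proposed parameters (function name is illustrative):

```python
def effective_stake_with_floor(base_stake, age_months,
                               founding=False, half_life=36.0, floor=0.10):
    """Exponential decay, with founding-year contributions floored at
    10% of original value (both numbers are governance parameters)."""
    decayed = base_stake * 2 ** (-age_months / half_life)
    if founding:
        return max(decayed, floor * base_stake)
    return decayed
```

At six years (two half-lives) a founding contribution still decays normally to 25 points; by thirty years it would have decayed to a fraction of a point, but the floor holds it at 10.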
Anti-concentration.
Three mechanisms prevent any individual or cohort from dominating.
Individual cap: 5% of any domain's total stake
No single contributor can hold more than 5% of the total effective stake in any domain, for governance purposes.
If a contributor's share exceeds 5% — because others leave, or because the domain is small — the excess is inactive for governance voting but still earns its proportional revenue share. The cap prevents governance capture while preserving economic fairness. You can earn more than 5% of revenue. You cannot hold more than 5% of decision-making power.
This is a governance parameter, subject to community amendment.
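The cap applies to governance weight only, leaving revenue shares untouched. A sketch of the voting-weight calculation, assuming excess above the cap is simply held inactive (the document does not specify any re-normalization, so none is applied here; names are illustrative):

```python
def governance_weights(stakes, cap=0.05):
    """Cap each contributor's governance voting weight at `cap` of the
    domain's total effective stake. Excess stake remains economically
    active for revenue but is inactive for voting."""
    total = sum(stakes.values())
    return {who: min(stake, cap * total) for who, stake in stakes.items()}
```

In a domain with 100 total effective stake, a contributor holding 97 points votes with only 5, while a contributor holding 3 votes with all 3.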
Natural cohort dilution
As new contributors join a domain, the relative share of existing contributors naturally decreases. This is not a mechanism — it is math. If 30 cardiologists hold all the domain stake and 300 more join and contribute, the original 30 naturally hold a smaller share. The decay mechanism accelerates this: old contributions lose weight while new contributions enter at full strength.
This is why the system works without hard limits on founding cohort stake. Decay plus growth equals dilution. The founding 30 remain recognized. They do not remain dominant.
Domain activation threshold: 50 verified contributors
A domain does not activate revenue distribution until it reaches a minimum of 50 verified contributors. This prevents a tiny group from claiming all revenue from a domain before a real professional community exists.
Before the threshold is met, contributions still earn stake — they are banked. When the domain activates, all accumulated stake becomes effective. Early contributors are not penalized for being early. They are protected from receiving revenue in a context too small to be legitimate.
Revenue distribution — the three-pool model.
When Our One AI generates revenue — from subscriptions, API access, or other sources permitted by the AI Constitution — it flows through three pools.
Infrastructure Pool: 15%
Covers compute, training runs, inference serving, development, and operations. Published quarterly with full cost breakdown. If this pool generates surplus, the surplus rolls into the Domain Pool.
This is honest cost, not profit. The percentage adjusts based on actual costs — published and auditable. If infrastructure costs drop (and they will, as the technology matures), the savings flow to contributors, not to margins.
Commons Pool: 25%
Distributed across all Our One AI contributors, regardless of domain. Weighted by total AI stake across all domains.
This is the solidarity mechanism. A plumber who contributes to plumbing AI benefits when the medical AI generates strong revenue. A librarian contributing to library science AI shares in the ecosystem's collective success. The commons pool ensures that domains with smaller markets — poetry, museum curation, forestry — still benefit from belonging to a larger whole.
It also funds the shared infrastructure that all domains depend on: the base model, the routing layer, the contribution platform, the verification system. Every domain benefits from these. Every domain contributes to their cost through the commons pool.
Domain Pool: 60%
Distributed to contributors within the specific domain that generated the revenue. Weighted by effective AI stake within that domain.
This is where expertise is directly rewarded. If cardiology AI generates $100,000 in revenue in a quarter, $60,000 flows to cardiology contributors proportional to their effective stake. The cardiologist who contributed high-quality, peer-verified, model-improving training data receives more than the cardiologist who submitted a handful of verification signals. Quality and consistency are rewarded.
Cross-domain queries — a question touching both cardiology and pharmacology — split the domain pool allocation proportionally based on which adapters were activated to serve the response.
Worked example
Our One AI generates $1,000,000 in monthly revenue. Cardiology AI generated $50,000 of that.
- Infrastructure Pool: $150,000 — covers compute, team, operations for all of Our One AI
- Commons Pool: $250,000 — distributed to all 10,000 AI contributors by total effective stake
- Domain Pool: $600,000 — distributed to domains proportional to their revenue contribution
- Cardiology's domain share: $50,000 × 60% = $30,000
- Distributed to 500 cardiology contributors proportional to their cardiology stake
A cardiologist holding 1% of cardiology domain effective stake would receive:
- From domain pool: $30,000 × 1% = $300
- From commons pool: their share of $250,000 based on total AI stake across all domains
- Total: meaningful return for genuine expertise contribution
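The three-pool split in the worked example reduces to a short calculation. A sketch using the proposed 15/25/60 percentages (function name and pool labels are illustrative):

```python
def split_revenue(total_revenue, domain_revenue,
                  infra=0.15, commons=0.25, domain=0.60):
    """Apply the proposed three-pool split. `domain_revenue` maps each
    domain to the revenue it generated; each domain's pool is 60% of
    its own revenue. All three percentages are governance parameters."""
    return {
        "infrastructure": total_revenue * infra,
        "commons": total_revenue * commons,
        "domains": {name: rev * domain for name, rev in domain_revenue.items()},
    }
```

With $1,000,000 total revenue and cardiology generating $50,000, this reproduces the numbers above: $150,000 infrastructure, $250,000 commons, $30,000 to cardiology contributors.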
The percentages are proposals
15/25/60 is the initial governance proposal. The AI Constitution locks the principles — that revenue returns to contributors, that a solidarity mechanism exists, that infrastructure costs are published honestly. The community sets the numbers and can adjust them through the amendment process.
Revocation.
This is centralized and human. Not a blockchain. Not an automated system that punishes without explanation. A professional community maintaining its own standards.
Triggers for review
- Community reports — any contributor can flag another's contributions as fraudulent, plagiarized, or outside their verified expertise
- Pattern detection — automated systems flag unusual contribution patterns: sudden volume spikes, contributions outside declared domain, copy-paste patterns, submissions substantially similar to published sources
- Sustained rejection rate — if more than 50% of a contributor's submissions are rejected by peers over any 90-day period, automatic review
Review process
The domain governance committee — elected by domain contributors, using AI-stake-weighted voting — reviews flagged cases.
- Contributor is notified and given the opportunity to respond
- Committee reviews evidence, contributor response, and contribution history
- Decision within 14 days
Committee decision options:
- Dismiss flag — no action, flag was unwarranted
- Issue warning — contributor notified, no stake impact
- Revoke specific contributions — stake removed for those contributions, contributor retains standing
- Suspend contribution access — temporary ban from contributing, stake frozen
- Ban from domain — permanent removal from domain contribution, all domain stake revoked
Appeals
Appeals go to the broader AI governance body — not the same domain committee that made the original decision. One appeal. Decision within 21 days. Final.
What revocation means technically
When contributions are revoked, their stake is removed from the contributor's total effective stake. If the contributions were included in a training run, they are flagged for exclusion from the next training run. Revocation is recorded in the audit log with the reason — anonymized if needed to protect the reporter, but the fact and basis of revocation are always transparent.
What AI stake does NOT do.
This section matters as much as what AI stake does. These are structural boundaries.
AI stake does NOT affect your platform experience. What you see in your Brief, how your posts are ranked, who discovers you in search — none of this is influenced by your AI stake. The platform and the AI product are governed independently.
AI stake does NOT affect your platform governance weight. That is platform stake — membership years, nothing else. A new member with zero AI contributions has the same platform governance voice as a prolific AI contributor, proportional to their membership time.
AI stake is NOT displayed publicly as a number. Your domain and contributor status may be visible — "Verified contributor: Cardiology" — but your stake quantity is never shown to other contributors. There is no leaderboard. There is no "top contributors" ranking. There is no number next to your name.
AI stake does NOT create a leaderboard. We will not rank contributors by stake, by contribution volume, by revenue earned, or by any other metric. Leaderboards create the incentive to game the system. We have built the system specifically to resist gaming. A leaderboard would undo that work.
AI stake does NOT transfer between members. You cannot sell, gift, delegate, or bequeath your AI stake. It reflects your contribution. It is not property.
AI stake does NOT convert to platform stake. The two systems are independent by constitutional design. Platform governance and AI governance are separate domains with separate stake systems, preventing any single dimension of participation from accumulating disproportionate power.
Two stake systems, one platform.
| | Platform Stake | AI Stake |
|---|---|---|
| Input | Membership years | Contribution quality and volume |
| Purpose | Platform governance voting weight | AI governance voting weight + revenue distribution |
| Earns from | Paying 1¢/day | Contributing verified professional expertise |
| Decay | None (grows with time) | Half-life of 36 months |
| Cap | None | 5% per domain (governance only) |
| Gameable | No (time cannot be faked) | Resistant (peer review, credential verification, anomaly detection) |
| Displayed | Never (member-since date only) | Never (contributor status only) |
| Revenue | None (platform is a utility) | Yes (AI revenue flows to contributors) |
A founding member with five years of platform stake and zero AI contributions has a strong voice in platform governance and no voice in AI governance. A newer member with one year of platform stake but significant verified AI contributions has a modest platform voice and a strong AI voice. This is correct. Each system reflects what it measures.
Why AI stake is complex.
Platform stake is deliberately simple because platform governance is about commitment, not activity. AI stake is deliberately complex because AI contribution IS the activity, and the system must distinguish real expertise from noise.
The complexity is the honesty. A simpler system would be gameable. A system that treated all contributions equally would be unfair. A system without decay would be captured by early movers. A system without solidarity would fragment into silos. A system without peer review would be flooded with garbage. A system without credential verification would be exploited by speculators pretending to be radiologists.
Every mechanism exists to solve a specific problem:
- Quality gate → prevents gaming
- Credential verification → prevents fraud
- Decay → prevents capture
- Individual cap → prevents concentration
- Commons pool → prevents fragmentation
- Domain activation threshold → prevents premature extraction
- Revocation → enables correction
If you think a mechanism is unnecessary, you are probably right — until you remove it and discover the exploit it was blocking.
These numbers are proposals.
The AI Constitution locks the principles. Community governance sets the parameters. When the contributor community is large enough to make that governance meaningful, these will be among the first proposals you vote on:
- Contribution type weights (10/4/1/50)
- Decay half-life (36 months)
- Founding floor (10%)
- Individual governance cap (5%)
- Domain activation threshold (50 contributors)
- Revenue pool split (15/25/60)
- Performance verification multiplier (1.5×)
Each number in this document is a starting point. Each one is subject to evidence, argument, and community ratification. What is not subject to ratification is the structure — that contributions must be verified, that stake must decay, that no one may dominate, that revenue must return to the people who earned it, and that the system must be honest about what it measures and why.
Go deeper: AI Constitution · Platform Constitution · Platform Stake · Our One AI · Economics