The AI Constitution
Part I adopted unchanged. Part II: the binding rules for community-trained AI.
Version 0.1 — Draft for Community Ratification — March 2026
This is the second constitutional document in the Our One framework.
The first — the platform Constitution — protects you as a member. It governs your identity, your feed, your messages, your data. This document protects you as a contributor — a professional whose expertise trains the AI that serves your profession.
If you have not read the platform Constitution, read it first. This document builds on it. It adopts Part I unchanged and defines its own Part II — the rules specific to Our One AI.
This is Version 0.1. A draft. Published before the product launches, because the Constitution requires it: "Every Our One product publishes its constitution before launch — not after it has leverage, not after users depend on it, but before, when binding commitments cost something to make."
The principles in this document are binding. The parameters — revenue percentages, decay rates, contribution weights — are initial proposals, subject to community ratification through the governance process defined below. When the contributor community is large enough to make that governance meaningful, these will be among the first proposals you vote on.
Part I — Universal Principles
Adopted unchanged from the platform Constitution.
The five Universal Principles apply to Our One AI without modification:
- Human knowledge is a commons.
- Ownership is 100%.
- A constitution comes before the product.
- Stewards serve. They do not own.
- Anti-capture is structural, not personal.
These cannot be amended by Our One AI's governance process alone. Changing them requires cross-community ratification by all active Our One products. The full text lives in the platform Constitution.
Part II — AI Rules
These govern Our One AI: community-trained domain-expert artificial intelligence. They may be amended through the process defined in Section 12. They may not contradict Part I.
6. What this product is for.
Our One AI exists to build domain-expert artificial intelligence — trained by the professionals whose expertise makes AI valuable, governed by them, owned by them constitutionally.
Not general-purpose AI. Not a chatbot. Not a replacement for ChatGPT's ability to write poems or summarize Wikipedia articles. Domain-expert AI that is better than frontier models in the specific areas where professional accuracy matters — because it is trained by the people who actually practice the profession, reviewed by the people who actually practice it, and verified by the people who actually practice it.
The purpose is not to build a product and sell it. The purpose is to demonstrate that the people who created the knowledge AI depends on can own the AI that depends on it. The platform organizes those people. This product organizes their expertise.
7. What every contributor is entitled to.
Consent is explicit, individual, and revocable. No training data enters the system without your informed consent. The default is exclusion: nothing is used unless you opt in. You may withdraw your contributions at any time. Withdrawal triggers exclusion from subsequent training runs. This is not a promise. It is a provision that cannot be changed, ever, by any process defined in this document.
Your intellectual property remains yours. You license your contributions to Our One AI under this Constitution. You do not transfer ownership. You retain all rights to your expertise. You may use, publish, or license the same knowledge elsewhere without restriction. You are not giving us your knowledge. You are letting the community learn from it, under rules you read before you agreed.
Provenance is immutable. Every contribution is hashed at creation. The record of who contributed what, when, under which version of this Constitution, whether it was peer-reviewed, and by whom — is permanent and traceable. You can audit the lineage of any training data that includes your work. No contribution is anonymous to the system, even if it is anonymous to other contributors.
Revenue returns to contributors. When Our One AI generates revenue, a share flows back to the contributors whose expertise made it possible. Not to founders. Not to investors. To the professionals who built it. The distribution is governed by the AI Stake system — published separately, subject to community governance. The principle is constitutional: contributors are compensated. The parameters are governance: the community sets the percentages.
Domain governance. Your profession governs its own AI. Medicine defines what medical AI may and may not do. Engineering defines what engineering AI may and may not do. Law defines its own constraints. Each domain publishes its own professional AI constitution — subject to Part I of this document and Part I of the platform Constitution, but otherwise sovereign in its domain-specific rules.
Transparency. You have the right to know what data trained any model that serves your profession. Training data composition, contributor counts, quality metrics, performance benchmarks — all published. No black box. If you cannot verify a claim about the model, the claim is not valid.
Exit. You may withdraw your contributions and leave at any time. Withdrawal is not punitive. Your contributions are excluded from future training runs. Your stake ceases to accrue but is not retroactively erased — revenue earned before withdrawal is yours. Leaving Our One AI does not affect your platform membership. The two are independent.
8. What Our One AI will never do.
These are not policies. They do not change with product updates or business pressure. They are constitutional.
No non-consensual training. No contribution enters the training pipeline without explicit, individual, revocable consent. No exceptions for "model improvement," "safety research," "evaluation," or "platform operations." The default is exclusion; contributors must opt in. This restates the platform Constitution's prohibition with AI-specific force: even internal testing, even benchmarking, even quality assurance requires contributor consent for the data used.
No sale of training data. Contributor data is not an asset to be monetized separately from the AI product it trains. It may not be sold, licensed, or transferred to third parties. Not to other AI labs. Not to acquirers. Not to partners. Not to researchers. The data exists to train a community-owned model. That is its only permitted use.
No opaque models. Every model served under the Our One AI brand must be auditable: base model identity, training data provenance, adapter composition, performance benchmarks. If it cannot be audited, it cannot be served. We do not deploy models whose training lineage is unknown or unverifiable.
No false expertise. Our One AI will not present itself as a substitute for professional judgment. It will clearly state its confidence level, its training data basis, and the limitations of its domain coverage. It will not imply credentials it does not have. A model trained by cardiologists is a tool for cardiologists — not a cardiologist.
No concentration of AI governance. No single contributor, domain, or cohort may hold permanent governing authority over the AI system. Governance rotates. Stake decays. New contributors earn voice. The system is designed so that early advantage diminishes over time — not to punish early contributors, but to prevent capture. The specific mechanisms are defined in the AI Stake system. The principle is constitutional.
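The decay principle above can be sketched with a simple exponential model. The four-year half-life is a purely hypothetical parameter, as is the function itself: under this Constitution the actual decay mechanism is defined by the AI Stake system and set through governance, not fixed in code.

```python
def decayed_stake(initial_stake: float, years_since_earned: float,
                  half_life_years: float = 4.0) -> float:
    """Exponential decay of governance stake over time.

    The half-life value is illustrative only -- the real decay rate
    is a governance parameter set by the contributor community.
    """
    return initial_stake * 0.5 ** (years_since_earned / half_life_years)

# An early contributor's voting weight halves each half-life period,
# so early advantage diminishes without being erased.
print(decayed_stake(100.0, 0.0))  # 100.0
print(decayed_stake(100.0, 4.0))  # 50.0
print(decayed_stake(100.0, 8.0))  # 25.0
```

The point of the sketch is the shape, not the numbers: stake never goes negative and never vanishes abruptly, but sustained new contribution is the only way to hold sustained voice.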
No private capture of AI assets. The trained adapters, the training data, the contributor records, and the stake ledger are community property. They may not be sold to a private acquirer without community ratification. No transaction that converts community-trained AI into private capital is valid without the process defined in Section 12.
No closed base model. Our One AI must always be built on a fully open base model — open weights, open training data, open training code. If no available open model meets our transparency standard, we contribute to building one. We do not use proprietary base models, regardless of performance advantage. Auditability is not optional. It is constitutional.
9. Revenue distribution principles.
Revenue is the test of every promise. Here is how it works — in principle. The specific percentages are governance proposals, not constitutional provisions. The community sets them. What is constitutional is the structure.
Surplus returns to contributors. After infrastructure, compute, and steward costs are covered, revenue surplus flows back to the professionals whose expertise generated it. This is not generosity. It is the constitutional consequence of 100% ownership.
Contribution is weighted by quality, not volume. A hundred low-quality submissions do not outweigh ten expert-verified, model-improving contributions. The AI Stake system defines how quality is measured. The constitutional principle is that quality matters more than quantity.
A solidarity mechanism exists. No domain is an island. A portion of domain-specific revenue flows to the broader Our One AI commons — so that professionals contributing to smaller-market domains still benefit from the ecosystem's collective success. The constitutional principle is solidarity. The percentage is governance.
Infrastructure costs are published honestly. The portion that covers compute, training, inference, development, and operations is published quarterly with full cost breakdown. If this portion generates surplus, the surplus rolls back to contributors. There is no hidden margin. There is no extraction premium.
Stewards receive no equity stake or profit participation in AI revenue beyond their published compensation. This mirrors the platform Constitution. The people who build and maintain the system are paid competitively. They do not own what they maintain.
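The structure of this section can be illustrated in a few lines. Everything numeric below is a hypothetical placeholder: the 10% solidarity rate, the cost figures, and the quality weights are governance parameters, not constitutional values. Only the shape — costs first, solidarity portion to the commons, quality-weighted remainder to contributors — is binding.

```python
def distribute_surplus(revenue: float, infrastructure_costs: float,
                       steward_compensation: float,
                       quality_weights: dict[str, float],
                       solidarity_rate: float = 0.10) -> dict[str, float]:
    """Sketch of the constitutional revenue structure.

    All percentages and weights are illustrative: the structure is
    constitutional, the parameters are governance.
    """
    surplus = revenue - infrastructure_costs - steward_compensation
    if surplus <= 0:
        return {contributor: 0.0 for contributor in quality_weights}
    commons = surplus * solidarity_rate        # solidarity share to the commons
    distributable = surplus - commons          # remainder returns to contributors
    total_weight = sum(quality_weights.values())
    return {contributor: distributable * weight / total_weight
            for contributor, weight in quality_weights.items()}

# Quality-weighted, not volume-weighted: a higher weight, not a higher
# submission count, earns a larger share.
payouts = distribute_surplus(
    revenue=100_000, infrastructure_costs=40_000, steward_compensation=20_000,
    quality_weights={"alice": 3.0, "bob": 1.0})
print(payouts)  # {'alice': 27000.0, 'bob': 9000.0}
```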
10. Data governance.
All training data is stored in a centralized, auditable system. Not a blockchain — a database with immutable audit logs. We chose this because it is simpler, faster, and more honest than pretending a distributed ledger solves governance problems that require human judgment.
Every training record carries:
- Contributor ID — linked to a verified professional profile
- Timestamp — when the contribution was made
- Consent version — which version of this Constitution was active when consent was given
- Peer review status — whether it has been reviewed, by whom, and the outcome
- Quality tier — unverified, peer-verified, model-performance-verified, or revoked
- Domain classification — which professional domain(s) the contribution belongs to
- Hash — immutable, computed at creation, tamper-evident
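A minimal sketch of such a record, assuming Python and a SHA-256 hash over a canonical serialization. The field names and hash scheme are illustrative; the actual schema is defined by the system, not by this document.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainingRecord:
    """One training record, mirroring the fields listed above.
    Schema and hash construction here are illustrative only."""
    contributor_id: str          # linked to a verified professional profile
    timestamp: str               # ISO 8601, when the contribution was made
    consent_version: str         # Constitution version active at consent
    peer_review_status: str      # e.g. "unreviewed" or "approved"
    quality_tier: str            # unverified / peer-verified / ...
    domains: tuple[str, ...]     # professional domain(s)
    content: str

    def content_hash(self) -> str:
        """Tamper-evident hash computed over the canonical record:
        any change to the hashed fields changes the digest."""
        canonical = json.dumps({
            "contributor_id": self.contributor_id,
            "timestamp": self.timestamp,
            "consent_version": self.consent_version,
            "domains": list(self.domains),
            "content": self.content,
        }, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

rec = TrainingRecord("prof-00123", "2026-03-01T12:00:00Z", "0.1",
                     "unreviewed", "unverified", ("cardiology",),
                     "Example contribution text.")
print(rec.content_hash())  # stable 64-char hex digest; any edit changes it
```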
Contributors can query their own contribution history at any time — every record, every review, every stake credit earned.
Aggregate statistics — contributions per domain, quality distributions, contributor counts, model performance benchmarks — are published quarterly. Individual contribution details are private to the contributor unless they choose to make them public.
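A "database with immutable audit logs" can be understood as a hash chain over an append-only list: each entry carries the hash of the previous one, so any retroactive edit breaks the chain and is detectable. This is an illustrative toy under that assumption, not the production design.

```python
import hashlib
import json

class AuditLog:
    """Minimal sketch of an append-only, tamper-evident log."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> None:
        """Append an event, chaining its hash to the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(event, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": entry_hash})

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append({"action": "consent_given", "contributor": "prof-00123"})
log.append({"action": "contribution_added", "contributor": "prof-00123"})
print(log.verify())  # True
log.entries[0]["event"]["action"] = "consent_revoked"  # tampering...
print(log.verify())  # ...breaks the chain: False
```

This is why a distributed ledger is not required for auditability: the guarantee comes from the hash chain plus published records, which human governance can verify.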
11. Professional domain constitutions.
When a professional domain reaches a governance threshold — initial proposal: 100 active verified contributors — it may draft and ratify its own professional AI constitution.
Domain constitutions must adopt Part I of the platform Constitution and Part I of this AI Constitution unchanged. Within those constraints, they are sovereign in their domain-specific rules.
A domain constitution defines:
- Quality standards — what counts as a valid contribution in this domain
- Prohibited outputs — what the model must never say or imply in this domain
- Verification requirements — what credentials or peer validation are required to contribute
- Ethical constraints — domain-specific boundaries on AI behavior
Examples of what domain constitutions might specify:
A medical AI constitution might require: Do not overstate diagnostic certainty. Separate triage guidance from diagnostic claims. Always surface emergency red flags. State when a question exceeds the model's training depth.
A legal AI constitution might require: Separate issue spotting from legal advice. Clearly state jurisdiction dependence. Do not imply attorney-client relationship. Flag when precedent is unsettled.
An engineering AI constitution might require: State applicable codes and standards. Flag when calculations approach safety margins. Do not present estimates as verified computations. Require units on all numerical outputs.
Domain constitutions are ratified by domain contributors using AI-stake-weighted voting, with the same thresholds as the platform Constitution: two-thirds supermajority for constitutional provisions, minimum 30-day deliberation period, full vote record published.
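The ratification thresholds above can be expressed as a short check: a two-thirds supermajority of participating stake and a minimum 30-day deliberation window. The vote-record format below is a hypothetical sketch, not the real system's schema.

```python
from datetime import date

def ratifies(votes: dict[str, tuple[float, bool]],
             opened: date, closed: date,
             threshold: float = 2 / 3, min_days: int = 30) -> bool:
    """Check a stake-weighted constitutional vote.

    `votes` maps contributor ID to (stake, in_favor); the format is
    an illustrative assumption. The 2/3 threshold and 30-day minimum
    come from the thresholds stated in the text.
    """
    if (closed - opened).days < min_days:
        return False  # deliberation period not met
    total = sum(stake for stake, _ in votes.values())
    in_favor = sum(stake for stake, yes in votes.values() if yes)
    return total > 0 and in_favor / total >= threshold

votes = {"a": (50.0, True), "b": (30.0, True), "c": (20.0, False)}
print(ratifies(votes, date(2026, 4, 1), date(2026, 5, 5)))  # True: 80% of stake, 34 days
```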
12. How this document changes.
Cannot change, ever: Sections 7 and 8 in their entirety. The right to consent. The right to revoke. The right to provenance. The right to revenue return. The right to exit. The prohibition on non-consensual training. The prohibition on sale of training data. The prohibition on opaque models. The prohibition on private capture. The requirement for an open base model.
These provisions exist precisely for the moment when someone has a compelling reason to remove them. There is no compelling reason. They stand.
Can change, with supermajority: All other AI Rules — revenue distribution parameters, domain governance thresholds, quality tier definitions, stake system parameters — may be amended by a two-thirds vote of participating AI contributors (weighted by AI stake), with a minimum 30-day deliberation period, with the full vote record published.
Cannot contradict: No amendment to Part II may contradict Part I. The Universal Principles are the ceiling.
Every version of this document is archived. The full history of what changed, when, and why is part of the permanent record.
Our One — AI Constitution
Version 0.1 — Draft for Community Ratification
Effective upon ratification by the contributor community
Steward: Rado Sukala, Founding Steward
You have read it. Now you can contribute, knowing exactly what you are contributing to.
Go deeper: AI Stake System · Platform Constitution · Our One AI · Platform Stake