Our One — The AI Document
On who should own the most powerful technology in human history, and why the answer is not who currently owns it.
The question nobody is asking loudly enough
There is a conversation happening right now in boardrooms, in government offices, in the corridors of universities and research labs, about artificial intelligence. It is mostly a conversation about capability. How powerful will it become? How fast? What will it be able to do that humans cannot?
These are real questions. But they come second.
The first question — the one that will determine whether AI becomes the greatest liberation of human potential in history or the most comprehensive concentration of power ever achieved — is simpler and more uncomfortable:
Who owns it?
Not who built the first version. Not who raised the most capital. Not who currently controls the weights of the most capable models. But who, in a permanent and structural sense, will own the AI systems that replace human labor, generate human knowledge, and mediate human experience at civilizational scale.
Right now, the answer is: five companies. Maybe eight. Certainly fewer than twenty.
That answer is not inevitable. It is a choice. And the window to make a different choice is open right now, and closing.
What was taken, and from whom
To understand why a different answer is possible, you need to understand what AI actually is, beneath the marketing.
A large language model is, at its foundation, a compression of human knowledge. It was trained on text — on the accumulated written output of millions of people over decades. Books and papers and forum posts and code repositories and documentation and tutorials and arguments and explanations and stories. Human beings writing down what they knew, what they figured out, what they wanted to communicate to other human beings.
Most of that writing was created in a spirit of sharing. The developer who answered a question on Stack Overflow was not thinking about AI training data. They were thinking about the next developer who would have the same problem. The researcher who published their findings open-access was thinking about science as a collective enterprise. The programmer who committed code to a public repository was thinking about building something useful that others could learn from and improve.
They built a commons. Deliberately, enthusiastically, over thirty years.
Then, buried in terms of service written by lawyers for a purpose no user imagined, that commons was enclosed. The content was scraped, processed, and used to train models that are now being sold — at extraordinary valuations, with extraordinary returns flowing to a very small number of people — to automate the work of the very people who created the training data.
We are not saying this was illegal. We are saying it was a transfer. A massive, largely invisible transfer of collective intellectual labor into private capital. And we are saying that this transfer does not have to be the permanent structure of the AI age.
The lie about who can build AI
For the last three years, the story told about AI has been a story about scale.
You need billions of parameters. You need millions of GPU hours. You need hundreds of the world's best researchers. You need the kind of capital that only the largest technology companies and the most aggressive venture funds can provide. AI is not a game for small players. This is frontier technology, and the frontier requires frontier resources.
This story served a purpose. It justified the concentration. It made the enclosure seem like a natural consequence of the technology's requirements rather than a deliberate structural choice.
Then DeepSeek trained a model competitive with GPT-4 for a reported training-compute cost of roughly six million dollars.
Six million dollars. In an industry where companies raise ten billion dollar rounds.
The capability gap between open models and closed frontier models is real, but it is narrowing faster than the labs want to acknowledge. Llama 4, Mistral, Qwen, Falcon — open-weight models that can be downloaded, fine-tuned, and deployed by anyone with modest infrastructure. Models that, with the right training data and the right human feedback, can perform at levels that would have seemed impossible for open-source development two years ago.
The story about AI requiring trillion-dollar companies is becoming less true every quarter.
Which means the window to build a different ownership structure is still open.
The leverage point: who teaches the model
Here is what the scaling story never mentions, because it is inconvenient.
Raw compute is not the primary determinant of AI quality for most real-world applications. The primary determinant is the quality of human feedback during training.
Reinforcement learning from human feedback — RLHF — is the process by which models learn what good looks like. Not just grammatically correct. Not just logically consistent. Actually good. Useful to real people in real situations. Reflecting real expertise. Getting the nuances right that matter to practitioners.
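In outline, the preference step at the heart of RLHF can be sketched as pairwise comparisons fitted with a Bradley-Terry model. The function below is an illustrative sketch of that likelihood, not any lab's actual pipeline; the reward values are made up for the example.

```python
import math

def preference_probability(reward_a: float, reward_b: float) -> float:
    """P(a human prefers output A over output B) under the
    Bradley-Terry model used in standard RLHF reward modeling."""
    return 1.0 / (1.0 + math.exp(reward_b - reward_a))

# A rater judged output A (assumed reward 2.0) better than B (0.5).
# Training adjusts the reward model so this probability moves toward 1
# for every pair where humans preferred A.
p = preference_probability(2.0, 0.5)
```

Whatever the architecture around it, this is the point where human judgment enters the system: the model's sense of "good" is fitted to whoever supplies the comparisons.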
This process currently works like this: AI labs hire thousands of workers, largely in the Philippines, Kenya, and Venezuela, paying them a few dollars per hour to rate model outputs. Is this answer better than that one? Is this code correct? Is this explanation accurate?
These workers are doing their best. But they are generalists evaluating outputs in domains they are not expert in. A medical diagnosis. A legal argument. An engineering decision. A pedagogical approach. They can assess surface quality — grammar, clarity, obvious errors — but they cannot assess the deeper correctness that comes from genuine domain expertise.
Now ask a different question.
What if a million professionals — doctors, engineers, lawyers, teachers, scientists, designers, tradespeople, researchers — contributed their expertise to training an AI model they collectively owned?
What if the RLHF process was not outsourced to low-wage workers in the developing world, but crowdsourced to genuine experts who had skin in the game because they owned the outcome?
What if the model improved because a cardiologist in Prague and a civil engineer in Lagos and a constitutional lawyer in Berlin and a high school teacher in São Paulo all contributed their real knowledge, their real corrections, their real sense of what good looks like in their domain — not because they were paid three dollars an hour, but because they owned a stake in the model they were training?
The quality of that model would be different in kind, not just degree. And it would belong to them.
What Our One AI actually is
Our One AI is not a plan to build a competitor to GPT-5. It is not a moonshot requiring billions of dollars and years of research before anything useful exists.
It is a governance layer over the AI development process that already exists in the open-source ecosystem.
Here is what it looks like in practice.
We start with an open-weight foundation model. Llama. Mistral. Any of several models that are already trained, already capable, already available to build on. We do not need to solve the hardest problems in AI research. We need to solve the governance problem: how do you build a model that belongs to its community, improves through genuine expertise, and cannot be enclosed or sold against the interests of the people who built it?
We build a constitutional framework for the AI — the same framework that governs Our One products. The model's training data is documented and transparent. The human feedback process is open to community participation. The weights are owned collectively. The commercial applications are governed by the same principles as other Our One products: user-owned, constitution-first, anti-capture by design.
We build tools for professional contribution. Not just "vote on this output" but structured domain expertise pipelines. A nurse practitioner can contribute feedback on medical questions in a way that is weighted by their credential and their track record of helpful contributions. A senior software engineer can contribute to code generation quality in ways that accumulate into genuine improvement. A primary school teacher can shape how the model explains concepts to children.
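A minimal sketch of what credential-weighted feedback could look like, assuming a simple multiplicative weighting scheme. The `Feedback` fields, the credential multiplier, and the track-record factor are all assumptions chosen for illustration, not a specification of the actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    rating: float        # 0.0 (wrong) to 1.0 (excellent)
    credentialed: bool   # verified domain credential for this topic
    track_record: float  # historical helpfulness score, 0.0 to 1.0

def contribution_weight(fb: Feedback) -> float:
    """Illustrative weighting: a verified credential and a strong track
    record both raise a contributor's influence. The factors (3x for a
    credential, 0.5-1.5x for track record) are assumed, not prescribed."""
    base = 3.0 if fb.credentialed else 1.0
    return base * (0.5 + fb.track_record)

def aggregate(feedback: list[Feedback]) -> float:
    """Weighted mean rating: the training signal for this domain."""
    total = sum(contribution_weight(fb) for fb in feedback)
    return sum(contribution_weight(fb) * fb.rating for fb in feedback) / total
```

The design point is that a credentialed expert with a strong track record can outweigh several generalist votes in their own domain, which is exactly the inversion of the current low-wage rating model.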
The model improves. The people who improved it own the model. The benefits flow back to the community.
Why this is different from everything that came before
The open-source AI community is not new. Hugging Face has existed for years. Meta open-sourced Llama. Hundreds of fine-tuned models exist on every imaginable topic.
What has been missing is not the technology. It is the economic and governance structure.
When a developer fine-tunes an open model and improves it, where does that improvement go? Usually to another model release that someone else builds on, with no connection between the contributor and the benefit. There is no mechanism for the people who do the work of improvement to own the result of that improvement.
When a thousand domain experts contribute their knowledge to making an AI more accurate in their field, who captures the value? Currently, the answer is whoever has the capital and compute to integrate those contributions into a product people pay for.
Our One AI changes this. The governance layer ensures that contribution creates ownership. That improvement creates stake. That the person who made the model better in their domain holds a meaningful claim on the model's value in that domain.
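The ownership record at the center of this could be as simple as a per-domain ledger in which verified contributions mint stake. The class below is a toy sketch under that assumption; a real system would need contribution verification, governance over valuation, and dispute resolution, none of which is modeled here.

```python
from collections import defaultdict

class OwnershipLedger:
    """Toy sketch: each verified contribution mints stake in the
    domain it improved, and a contributor's share is their fraction
    of the total stake minted in that domain."""

    def __init__(self) -> None:
        # domain -> contributor -> accumulated stake
        self.stakes: dict[str, dict[str, float]] = defaultdict(
            lambda: defaultdict(float)
        )

    def record_contribution(self, contributor: str, domain: str,
                            value: float) -> None:
        """Credit a verified, valued contribution to a contributor."""
        self.stakes[domain][contributor] += value

    def share(self, contributor: str, domain: str) -> float:
        """This contributor's fractional claim on the domain's value."""
        total = sum(self.stakes[domain].values())
        return self.stakes[domain][contributor] / total if total else 0.0
```

The institutional difficulty the essay names lives almost entirely outside this class: deciding what counts as a verified contribution and how much stake it mints is governance, not code.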
This is not complicated technically. It is complicated institutionally. Building the trust infrastructure, the contribution verification, the ownership record, the governance process for model updates — these are hard governance problems, not hard engineering problems.
They are exactly the kind of problems Our One was designed to solve.
The window
We want to be honest about timing, because it matters.
The open-weight model ecosystem is strong now. The capability gap is narrow now. The public understanding of what was taken from the commons is growing now. The political will to think differently about AI ownership is forming now.
In three years, possibly less, the frontier models may be so capable and so entrenched that the gap becomes practically unbridgeable. The companies may lobby successfully for intellectual property protections that make building on open weights legally uncertain. The concentration may become self-reinforcing in ways that make alternatives marginal.
Or the open ecosystem may win. The trajectory of cost reduction may continue. The political backlash against concentration may create space for alternatives. The developers and professionals who understood what was taken may choose to put their expertise into something they own rather than something that extracts from them.
Both futures are possible. The determining factor is whether a credible, trustworthy, well-governed alternative exists before the window closes.
That is what we are building.
What we are asking
Not your data. Not your code. Not your content handed over to be enclosed.
Your expertise, contributed on terms you control, to a model governed by a constitution you can read, owned by a community you belong to.
A cardiologist making cardiac AI better because cardiac AI that is excellent serves patients and reflects their knowledge back to the world in a form that helps people.
A programmer making code generation more trustworthy because trustworthy code generation is something they have spent their career caring about.
A teacher making educational AI more genuinely pedagogical because they have spent thirty years understanding how children actually learn, not how engagement metrics would suggest they should be served.
The AI that will shape the next century of human experience is being trained right now.
The only question is: by whom, for whom, owned by whom.
We are answering that question differently.
The commons was built by millions of people who believed that knowledge shared is knowledge multiplied.
The AI built on that commons should belong to them.