AI Tools — Category Research Report
Your questions, your creative work, your business strategies — fed into models you don't own, improving products you don't control. This is the landscape, the data, and the opportunity.
The Landscape
The AI tools market barely existed three years ago. Projections now put it at $300B-$540B annually by 2026 (estimates vary by methodology), with Bain & Company projecting the total AI hardware and software market could reach $780B-$990B by 2027.
The Major Players (March 2026)
| Product | Owner | Est. Users | Pricing (Monthly) | Revenue Model |
|---|---|---|---|---|
| ChatGPT | OpenAI (PBC) | ~800-900M weekly | Free / $8 Go / $20 Plus / $200 Pro / $25-30 Business / Custom Enterprise | Freemium + ads on free tier |
| Gemini | Google/Alphabet | ~350M+ | Free / $19.99 Pro / ~$42 Ultra / Bundled in Workspace | Bundled + subscription |
| Claude | Anthropic | ~19M MAU (web), ~300M incl. API | Free / $20 Pro / $100-200 Max / $25-150 Team / Custom Enterprise | Freemium subscription |
| Copilot | Microsoft/GitHub | 20M+ total, 1.3M paying | Free / $10 Pro / $39 Pro+ / $19 Business / $39 Enterprise | Freemium subscription |
| Grok | xAI | ~64M MAU | Free (on X) / $30 SuperGrok / $300 SuperGrok Heavy / $40 X Premium+ | Platform-bundled + subscription |
| Perplexity | Perplexity AI | ~45M active | Free / $20 Pro / $200 Max | Freemium subscription |
| Midjourney | Midjourney Inc. | ~20M registered | $10-120/mo | Subscription only |
| Cursor | Anysphere | ~1M+ (360K paying) | Free / $20 Pro / $40 Business / $200 Ultra | Freemium subscription |
Sources: DemandSage ChatGPT Statistics, Backlinko Claude Users, Business of Apps Grok Statistics, TechCrunch GitHub Copilot, Sacra Cursor, Business of Apps Perplexity, DemandSage Midjourney.
Notable: nearly every product still charges $20/month for premium access. This price convergence is not coincidence -- it reflects the cost structure from 2023-2024 that has since collapsed. Inference costs have fallen 10x-50x per year (see cost analysis below), but consumer prices remain static. The margin between cost and price is now enormous and widening every quarter.
The $20/month ceiling is breaking: OpenAI launched an $8/month "Go" tier in January 2026 and began testing ads on the Free and Go tiers in February 2026. The race to the bottom has begun -- but the gap is being filled with ads and data extraction rather than genuine price reduction.
Revenue & Valuation Context (Early 2026)
| Company | Est. 2025 Revenue | Valuation | Key Metric |
|---|---|---|---|
| OpenAI | ~$10B ARR (projected $29.4B 2026) | ~$500B+ (PBC) | 800-900M weekly users, 10M+ paid subscribers |
| Anthropic | $3.3-9B ARR (projected $26B 2026) | ~$380B (Feb 2026) | 300K+ business customers, Claude Code at $2.5B run-rate |
| xAI (Grok) | ~$500M standalone | Raised $42B total | 64M MAU, spending ~$1B/month on infrastructure |
| Perplexity | ~$200M ARR | ~$20B | 45M active users, 780M+ queries/month |
| Cursor | ~$1.2B ARR | ~$29.3B (Nov 2025) | 360K paying customers, 50%+ Fortune 500 adoption |
| Midjourney | ~$500M | ~$5-7B (est.) | ~20M users, self-funded/profitable |
Sources: ChatGPT Stats Index.dev, Backlinko Claude Users, Business of Apps Grok Statistics, Sacra Perplexity, Sacra Cursor, Fueler Midjourney.
Open-weight alternatives: The gap between open-weight and proprietary models has narrowed dramatically. Meta's Llama 4, DeepSeek V3/R1, Alibaba's Qwen 3, and Mistral Large 3 now rival or exceed many closed models on benchmarks -- often at a fraction of the cost. DeepSeek R1 operates under the MIT license (fully permissive commercial use), Qwen 3 and Mistral Large 3 under Apache 2.0. A Your 99 AI product does not need to build its own foundation model -- it can build on this open-weight infrastructure at dramatically lower cost than even 12 months ago.
The Enshittification Timeline
This category is unique -- it's enshittifying before it even matures.
- 2022-2023: The free era. ChatGPT launches free. Captures 100 million users in two months -- the fastest consumer adoption in history. The free tier is genuinely good. Users form habits, integrate AI into their workflows, become dependent.
- 2023: Monetization begins. ChatGPT Plus launches at $20/month. The free tier starts degrading -- slower responses, limited model access, usage caps. The pattern is familiar: give generously to create dependency, then charge.
- 2024: The squeeze. Free tiers across all platforms become functionally limited. GPT-4 access on free is heavily throttled. Gemini Advanced and Claude Pro gate the best models behind $20/month. Meanwhile, the cost of running these models drops by 10-50x due to hardware and efficiency improvements. Prices do not drop.
- 2024-2025: Enterprise pivot. AI companies shift focus to enterprise sales ($25-60/user/month). Consumer features stagnate. The consumer product becomes a funnel for enterprise sales, not a product built for consumers. OpenAI restructures as a Public Benefit Corporation (October 2025), completing its transition from nonprofit to for-profit entity. Microsoft gets 27% ownership; SoftBank invests $40B. OpenAI removes the word "safely" from its mission statement.
- 2025: The data policy reversal. In August 2025, Anthropic -- previously marketed as the "privacy-first" alternative to ChatGPT -- reversed its stance and began using consumer chat data for model training by default, giving users until October 2025 to opt out. Google made a similar change for Gemini. The entire industry converged on the same strategy: make data collection the default and require users to actively opt out. Users who do not allow training maintain a 30-day retention period; those who do (or fail to opt out) have data retained for up to five years.
- 2025-2026: Ads arrive. In February 2026, OpenAI began testing ads in ChatGPT on Free and Go tiers. First-wave advertisers include Target, Ford, Adobe, and Expedia, with a $60 CPM and $200,000 minimum commitment. Ads appear as early as the first prompt. Anthropic ran a Super Bowl ad promising to stay ad-free -- but it too now collects training data by default. The message is clear: if the product is free, you are both the customer and the product.
- 2025-2026: Price-performance chasm. Inference costs have fallen ~80% year-over-year. DeepSeek R1 API costs $0.55/$2.19 per million tokens -- undercutting competitors by ~90%. GPT-4-equivalent performance now costs $0.40/million tokens vs. $20 in late 2022. Yet subscription prices remain $20/month. Some companies are adding $200/month and $300/month tiers above the standard price rather than reducing it. The gap between what AI costs to run and what users pay has never been wider.
Sources: OpenAI Ads Announcement, TechCrunch Anthropic Privacy, OpenAI For-Profit via TechCrunch, Epoch AI Inference Prices, a16z LLMflation.
The Data Audit
What AI companies collect from your conversations:
- Every prompt you type (your questions, instructions, context)
- Every response generated (often containing your proprietary information reflected back)
- Conversation metadata (timing, frequency, topics, patterns)
- File uploads (documents, images, code, spreadsheets)
- Usage patterns (which features, how often, what workflows)
- Feedback signals (thumbs up/down, regenerations, edits to outputs)
The training data reality (updated March 2026):
OpenAI (ChatGPT): For Free, Plus, and Pro users, data is used for training by default. OpenAI states: "When you use OpenAI's services for individuals such as ChatGPT, Sora, or Operator, we may use your content to train our models." Users can opt out via Settings > Data Controls > "Improve the model for everyone" toggle, or through the privacy portal. However: "If you do not want us to use your Content to train our models, you can opt out... In some cases this may limit the ability of our Services to better address your specific use case." Even after opting out, if you provide thumbs-up/down feedback, the entire conversation associated with that feedback may be used for training. Business, Enterprise, and API data is NOT used for training by default. Service Terms updated January 9, 2026; Usage Policies updated October 29, 2025.
Sources: OpenAI Data Usage Policy, OpenAI Terms of Use, OpenAI Help Center.
Anthropic (Claude): As of August 2025, Anthropic reversed its previous privacy-first position. Consumer chat data (Free, Pro, Max plans) is now used for training by default unless users opt out. Users who do not allow training maintain a 30-day data retention period; those who allow it face retention of up to five years. Previous chats with no new activity and deleted conversations are not used for training. Business, Enterprise, API, and education plan data is not affected. Anthropic states data will "never be sold to third parties" and will undergo filtering to reduce sensitive data exposure.
Sources: Anthropic Consumer Terms Update, TechCrunch Anthropic Privacy Shift, Bitdefender Analysis.
Google (Gemini): Similar opt-out policy announced September 2025, covering user-uploaded files, photos, videos, and screenshots. Gemini AI features are now bundled into all Google Workspace plans (16-22% price increase in 2025), with no opt-out from the price increase even if you don't use AI features. Starting March 2026, expanded AI access requires an additional paid add-on.
Sources: Google Workspace Pricing, Google Workspace AI Update Blog, Cumulus Global.
The bottom line: Every major consumer AI tool now uses your data for training by default. Opting out requires active steps that most users never take. Business tiers offer privacy protections -- at 2-5x the consumer price. Your data is the price of the "affordable" tier.
What happens at acquisition or pivot:
OpenAI completed its for-profit restructuring in October 2025. It is now a Public Benefit Corporation controlled by the OpenAI Foundation (26% ownership), with Microsoft holding 27% and other investors/employees holding 47%. The company removed "safely" from its mission statement. SoftBank invested $40B; OpenAI's valuation exceeds $500B. An IPO is expected. When this happens, fiduciary duty to shareholders will further pressure data monetization strategies. Users have no governance rights, no consent mechanism beyond a settings toggle, and often no awareness of what's being collected.
Sources: TechCrunch OpenAI Restructuring, Fortune OpenAI Restructuring, TIME OpenAI Timeline.
Security & Privacy Incidents (2024-2026)
The rapid deployment of AI tools has significantly outpaced security practices:
- ChatGPT credential theft (2024): Security firm Group-IB found over 100,000 stolen ChatGPT credentials on the dark web, lifted from malware-compromised devices. A separate incident found 225,000 OpenAI credentials for sale, stolen by infostealer malware.
- OpenAI vendor breach (November 2025): Hackers breached an OpenAI vendor and stole sensitive information about business customers, including names, emails, locations, and technical details about their systems.
- ChatGPT Deep Research exploit (2025): A "zero-click" exploit in ChatGPT's Deep Research mode allowed attackers to exfiltrate data without target interaction -- a malicious actor could send an HTML-formatted email containing hidden instructions.
- Salesloft-Drift AI chatbot breach (August 2025): A single AI chatbot breach exposed data from 700+ companies, including security leaders like Palo Alto Networks, Cloudflare, and Zscaler. Attackers systematically exfiltrated data from connected Salesforce instances.
- Chat & Ask AI exposure: This popular AI app (50M+ claimed users) left hundreds of millions of private messages exposed in an unsecured database.
- AI-assisted government hack (2026): An attacker used Claude and ChatGPT to breach Mexico's government networks, stealing 150GB of data including 195 million taxpayer records.
- Microsoft Copilot data exposure (2025): Concentric AI found that GenAI tools such as Microsoft Copilot exposed ~3 million sensitive records per organization during H1 2025.
- Systemic app insecurity: Between January 2025 and February 2026, at least 20 documented security incidents exposed data of tens of millions of users. CovertLabs' 2026 scan found 196 out of 198 iOS AI apps actively leaking data. 20% of global organizations reported data breaches due to shadow AI (IBM).
- Italian GDPR enforcement: Italy fined OpenAI for processing users' personal data without adequate legal basis under GDPR and failing to prevent children under 13 from accessing the platform.
- Apple Intelligence privacy concerns: Black Hat 2025 research showed Siri transmits dictated message content (including WhatsApp messages) to Apple servers even when such transmission isn't necessary, and even when users disable Siri "learning" settings. Apple disputed the characterization.
Sources: Wald.ai ChatGPT Incidents, Barrack.ai AI Breaches, Trend Micro AI Breach, Malwarebytes AI Chat Leak, CyberScoop Apple Intelligence.
The AI Coding Tools Race
The AI coding assistant market reached $7.37B in 2025 and is projected to reach $30.1B by 2032 (27.1% CAGR). Two players dominate:
GitHub Copilot (Microsoft): 20M+ total users, 1.3M paying subscribers, 50,000+ organizations. 42% market share. 90% of Fortune 100 companies use it. Five tiers from Free (2,000 completions/month) to Enterprise ($39/user/month). Revenue growing 40% YoY.
Cursor (Anysphere): The fastest-growing SaaS company on record, going from $1M to $500M ARR faster than any prior software company and hitting $1.2B ARR in 2025. 1M+ users, 360K paying; 18% market share. Valued at $29.3B after its November 2025 Series D ($2.3B round). A famously lean team (reportedly around a dozen people during its early hypergrowth). 50%+ Fortune 500 adoption; users include developers at OpenAI, Midjourney, Perplexity, and Shopify. Launched Background Agents for autonomous coding and its own proprietary coding model, "Composer" (4x faster than comparable models). A controversial June 2025 switch from request-based to credit-based billing effectively halved Pro user allowances.
Sources: Sacra Cursor, TechCrunch GitHub Copilot, Cursor Series D, UserJot GitHub Copilot Pricing.
AI Image & Video Generation (2025-2026)
| Company | Est. 2025 Revenue | Valuation | Status |
|---|---|---|---|
| Midjourney | ~$500M | ~$5-7B (est.) | Self-funded, profitable. V8 current. 20M users. Transitioned from Discord to web interface. Now includes video and 3D. |
| Runway ML | ~$90M ARR (mid-2025) | ~$1.5B | Gen-4 and Aleph models. Lionsgate partnership. Credits-based pricing $0-95/month. |
| Stability AI | ~$50M (2024) | ~$1B | New CEO (June 2024). Eliminated debt. EA and Warner Music partnerships. Won copyright case vs Getty Images (Nov 2025). 190 employees. |
Sources: DemandSage Midjourney Statistics, Fueler Midjourney, Sacra Stability AI.
The Open-Weight Model Landscape (Early 2026)
The gap between proprietary and open-weight models is closing at remarkable speed. Four families dominate:
| Model Family | Key Model | Parameters | License | Strength |
|---|---|---|---|---|
| DeepSeek | V3.2 / R1 | 671B total, 37B active | MIT (fully permissive) | Math, coding, reasoning. Undercuts competitors ~90% on cost. |
| Qwen (Alibaba) | Qwen 3 / 3.5 | 235B total, 22B active | Apache 2.0 | Multilingual (119 langs), hybrid thinking modes. 92.3% on AIME25. |
| Llama (Meta) | Llama 4 Scout/Maverick | Up to 2T (Behemoth) | Meta custom license | 10M+ context window, strong general performance. |
| Mistral | Large 3 | Various | Apache 2.0 | Balanced performance, easy deployment, building sovereign/air-gapped versions. |
Other notable models: Microsoft Phi-4 (14B parameters, MIT license) approaches DeepSeek R1 performance on math benchmarks -- a 14B model rivaling a 671B model. Kimi K2 Thinking is arguably the best open model by benchmark score (per Artificial Analysis). Qwen3-Coder-Next (80B, 3B active) outperforms much larger models on coding tasks.
What's coming: DeepSeek R2 (reasoning successor to R1), Llama 4 Behemoth (2T+ parameters), and Qwen 4.0 are all expected by Q2 2026.
Why this matters for Your 99: These models are free or nearly free to use commercially. Running them on efficient infrastructure is dramatically cheaper than paying API fees to closed providers. A DeepSeek R1 deployment costs roughly $0.55/$2.19 per million tokens vs. OpenAI's GPT-5.2 at $1.75/$14. The "build vs. buy" equation has shifted decisively toward build.
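The cost comparison can be made concrete. A minimal sketch using the per-million-token prices quoted above; the workload size (3M input, 1M output tokens per month) is an illustrative assumption, not a measured figure:

```python
# Hedged sketch: monthly API cost for an illustrative workload under the
# per-million-token prices quoted in this report (input, output).
PRICES = {
    "deepseek-r1": (0.55, 2.19),
    "gpt-5.2": (1.75, 14.00),
}

def monthly_cost(model: str, input_mtok: float, output_mtok: float) -> float:
    """USD cost for one month of usage, measured in millions of tokens."""
    p_in, p_out = PRICES[model]
    return input_mtok * p_in + output_mtok * p_out

# Illustrative heavy user: 3M input tokens, 1M output tokens per month.
ds = monthly_cost("deepseek-r1", 3, 1)   # 3*0.55 + 1*2.19 = 3.84
oa = monthly_cost("gpt-5.2", 3, 1)       # 3*1.75 + 1*14.00 = 19.25
print(f"DeepSeek R1: ${ds:.2f}  GPT-5.2: ${oa:.2f}  ratio: {oa / ds:.1f}x")
```

Even under this modest assumption, the closed-model bill is roughly 5x the open-weight one for the same token volume.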
Sources: Sebastian Raschka Open-Weight LLMs, Onyx Self-Hosted LLM Leaderboard, Understanding AI, Shakudo Top LLMs.
AI Inference Cost Trends (The "LLMflation" Effect)
Andreessen Horowitz coined the term "LLMflation" for this phenomenon: the cost of AI inference is falling faster than compute costs fell during the PC revolution or bandwidth costs fell during the dotcom boom.
Key data points:
- For an LLM of equivalent performance, cost decreases 10x every year (a16z, 2025).
- Epoch AI research: across all benchmarks, prices declined between 9x and 900x per year, with a median of 50x per year. The median recently accelerated to 200x per year.
- GPT-4-equivalent performance: $20/million tokens (late 2022) to $0.40/million tokens (December 2025) -- a 50x reduction.
- DeepSeek R1 debuted at $0.55/$2.19 per million tokens, undercutting competitors by ~90%.
- Cloud H100 GPU prices: 64-75% decline from peaks, stabilizing at $2.85-$3.50/hour.
- NVIDIA Blackwell GPUs enabled another 4x improvement in cost per token vs. Hopper.
- Quantization techniques reduce costs 60-70%; speculative decoding cuts latency 2-3x.
- Total inference cost decline: ~80% year-over-year through 2025 into 2026.
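The decline rates above compound quickly. A minimal sketch of the arithmetic, using the a16z 10x/year figure (real declines vary widely by benchmark, 9x-900x per Epoch AI):

```python
# Project a per-token price under a constant annual decline factor.
# The 10x/year factor is the a16z figure quoted above; it is an
# extrapolation, not a guarantee.

def projected_price(price_now: float, decline_per_year: float, years: float) -> float:
    """Price after `years` if cost falls by `decline_per_year`x annually."""
    return price_now / (decline_per_year ** years)

# GPT-4-equivalent output fell from $20/Mtok (late 2022) to $0.40/Mtok
# (late 2025), i.e. ~3.7x/year realized; at a sustained 10x/year it
# would have reached $0.02/Mtok over the same three years.
p = projected_price(20.0, 10.0, 3)
print(p)  # 0.02
```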
The paradox: Despite plummeting per-token costs, total enterprise AI spending is skyrocketing as companies move from experimental chatbots to thousands of autonomous agentic workflows running 24/7. Inference now accounts for 85% of the enterprise AI budget (up from a training-dominated split in 2024).
What this means for subscription pricing: The $20/month subscription that was cost-justified in 2023 now carries enormous margins. A heavy user generating ~1M tokens/month costs the provider roughly $0.40-$3.00 to serve (depending on model). The $20 price is 7x-50x the actual cost. This is the clearest signal that a cheaper, user-owned alternative is viable.
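The margin arithmetic above, made explicit. The per-token serving costs are the report's estimates; the 1M tokens/month usage level is an assumption:

```python
# Margin check: what a $20/month subscriber costs to serve, under the
# serving-cost range quoted above ($0.40-$3.00 per million tokens).
SUBSCRIPTION = 20.00   # USD/month, the prevailing consumer price
TOKENS_MTOK = 1.0      # millions of tokens/month for a heavy user (assumed)

serving_costs = {"cheap open model": 0.40, "pricier model": 3.00}  # $/Mtok
multiples = {name: SUBSCRIPTION / (TOKENS_MTOK * c)
             for name, c in serving_costs.items()}
for name, m in multiples.items():
    print(f"{name}: price is {m:.0f}x serving cost")
```

The output reproduces the 7x-50x range cited in the text.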
Sources: a16z LLMflation, Epoch AI Price Trends, NVIDIA Blackwell Blog, AnalyticsWeek Inference Economics.
AI Market Size (2026-2027 Projections)
| Source | 2026 Projection | 2027 Projection | Methodology |
|---|---|---|---|
| Fortune Business Insights | $375.9B | -- | 26.6% CAGR |
| Grand View Research | $539.5B | -- | SW + services |
| MarketsandMarkets | $310B | $407B | 39.7% CAGR |
| Statista | $312B | -- | 27.7% CAGR |
| Bain & Company | -- | $780B-$990B | HW + SW + services, 40-55% annual growth |
Regional breakdown (2026): North America 31.8% market share. U.S. alone: $83.2B (16.2% of global). Asia Pacific: $112.2B (34.7% CAGR, fastest-growing). Europe: $82.0B. Long-term: Grand View Research projects the global AI market at $3,497B by 2033.
Enterprise spending trend: BCG reports corporations expect to double AI spending in 2026, from ~0.8% of revenues to ~1.7%.
Sources: Fortune Business Insights AI Market, Grand View Research AI Market, Bain & Company Global Technology Report, MarketsandMarkets AI.
Regulatory Landscape: EU AI Act (Status March 2026)
The EU AI Act is the first comprehensive AI regulation worldwide. It entered into force in 2024 with phased implementation:
- February 2, 2025 (in effect): Banned AI systems -- manipulative AI, predictive policing, social scoring, real-time biometric identification. These are now illegal in the EU.
- August 2, 2025 (in effect): Governance infrastructure activated. General-purpose AI model (GPAI) obligations began. AI Office and AI Board operational. Member states designated national competent authorities. Penalty regime activated: up to EUR 35M or 7% of global turnover for prohibited practices; up to EUR 15M or 3% for other violations; up to EUR 7.5M or 1% for misleading information. However, many enforcement powers don't begin until August 2026 -- creating a gap between penalties on paper and enforcement in practice.
- August 2, 2026 (upcoming -- major deadline): Full general application. High-risk AI system requirements take effect (law enforcement, healthcare, education, critical infrastructure). Transparency duties apply. Conformity assessments, technical documentation, and CE marking required. EU-level fines for GPAI providers apply. Each member state must have at least one regulatory sandbox operational.
- August 2, 2027: Full scope applies to all risk categories. Final deadline for GPAI that was already in use before August 2025.
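The penalty caps in the timeline apply, under Article 99 of the Act, as the higher of the fixed amount and the turnover percentage (the rule flips to the lower of the two for SMEs). A sketch of the general rule for the three tiers:

```python
# EU AI Act penalty caps (Article 99): the cap is the HIGHER of the
# fixed amount and the share of global annual turnover. This models the
# general rule only; the SME variant (lower of the two) is omitted.
TIERS = {
    "prohibited practices":   (35_000_000, 0.07),
    "other violations":       (15_000_000, 0.03),
    "misleading information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# A firm with EUR 10B global turnover: 7% cap for prohibited practices.
print(f"EUR {max_fine('prohibited practices', 10_000_000_000):,.0f}")
```

For large providers the percentage dominates: at $10B turnover the prohibited-practices cap is EUR 700M, twenty times the fixed amount.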
Notable developments: Italy enacted national AI law (October 2025) with fines up to EUR 774,685. The Commission published the GPAI Code of Practice (July 2025). A proposed "Digital Omnibus" simplification (November 2025) may adjust some high-risk timelines.
Sources: EU AI Act Service Desk Timeline, EU AI Act Implementation Timeline, LegalNodes EU AI Act 2026, DLA Piper Analysis.
Apple Intelligence (2025-2026)
Apple is the outlier in this market. Rather than building a standalone AI product, Apple is embedding AI into its device ecosystem:
Features: On-device processing via Apple Silicon for many AI tasks. Live Translation, visual intelligence, Genmoji, Image Playground, intelligent Shortcuts. Foundation Models framework allows third-party developers to access the on-device model with as few as three lines of Swift code -- inference is free of cost and works offline. Next-generation Siri with deeper contextual awareness expected in 2026. Apple-Google partnership (January 2026) to incorporate Gemini models into future Apple Intelligence features.
Privacy approach: Apple's model processes most tasks on-device. For complex tasks, Private Cloud Compute processes data only for the task at hand and does not store it afterward. Apple states it has "never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone." Training data comes from licensed sources and public web content (via AppleBot), with PII filters.
Privacy concerns: Black Hat 2025 research revealed Siri sends dictated message content to Apple servers even when transmission isn't necessary and even when users disable Siri learning settings. Data flows occur outside Private Cloud Compute. Apple "respectfully disagrees" with the characterization.
Legal issues: Apple settled a class-action lawsuit (December 2025) over allegations that Apple Intelligence features advertised at product launch did not exist at the time and wouldn't be available until 2026.
Sources: Apple Newsroom Intelligence, CyberScoop Apple Intelligence Privacy, AppleMagazine 2026 Forecast, Corellium Apple Intelligence Security.
Vulnerability Score
| Criterion | Rating | Explanation |
|---|---|---|
| User resentment | Very High | Ads arriving on free tier. Privacy reversals (Anthropic, Google). $20/month pricing unchanged despite 80%+ cost drops. OpenAI's nonprofit-to-profit conversion. "You are the training data" awareness growing. |
| Switching cost | Very Low | Conversations have no network effects. No social graph. Most users can switch AI assistants instantly. Model quality convergence means less lock-in to any specific provider. |
| Technical feasibility | High | Open-weight models (DeepSeek R1, Llama 4, Qwen 3) now match or exceed many proprietary models. MIT and Apache 2.0 licenses allow unrestricted commercial use. Inference costs have dropped 80%+ YoY. A 14B-parameter model can rival a 671B model on key benchmarks. |
| Monetization clarity | Very High | Users already pay $20/month. A $10/month alternative is immediately compelling given the 7x-50x margin incumbents enjoy. Proven willingness to pay across 10M+ ChatGPT subscribers. |
| Data sensitivity | Extreme | AI conversations contain business strategies, medical questions, personal reflections, creative work, proprietary code, legal documents. Every major provider now trains on this data by default. 20+ documented data breaches in 13 months. This is the most intimate data stream in technology history. |
| Network effects | Very Low | AI tools are almost purely single-user. Your experience doesn't depend on other users. No platform lock-in. |
| Regulatory tailwind | Strong | EU AI Act creates compliance requirements that favor transparent, privacy-respecting alternatives. Penalties up to 7% of global turnover for violations. |
Overall vulnerability: Very High. Very low switching costs, zero network effects, proven willingness to pay at $20/month, surging resentment over ads and privacy reversals, extreme data sensitivity, viable open-weight alternatives at a fraction of the cost, and regulatory tailwinds favoring transparent approaches. This category is among the most vulnerable to a user-owned alternative.
The Your 99 Blueprint
Revenue model: $10/month (half of ChatGPT Plus) or pay-per-use for heavy consumers. Built on open-weight models (DeepSeek R1, Llama 4, Qwen 3, Mistral) run on efficient infrastructure. As costs continue falling (10x per year), the price can decrease further -- the opposite of the incumbent pattern. At current inference costs, serving a heavy user costs $0.40-$3.00/month -- the margin at $10/month is healthy even before scale efficiencies.
Draft Contribution Map:
| Contribution | Stake per month |
|---|---|
| Active use (10+ sessions/month) | 10 base units |
| Paid subscription ($10/month) | 30 base units |
| AI output feedback (quality ratings, corrections) | 10-50 units (scaled by value) |
| Bug reports (verified) | 5 bonus units |
| Prompt templates shared (used by others) | 10 bonus units |
| Referral (becomes active 30+ day user) | 15 bonus units |
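A sketch of how a single user's monthly stake might be tallied under the draft map. The unit values come from the table above; the example user profile is an assumption:

```python
# Draft Contribution Map tally for one illustrative user-month.
# Ranged contributions (e.g. feedback at 10-50 units, "scaled by value")
# are represented by a caller-supplied value within the range.

def monthly_stake(active: bool, paid: bool, feedback_units: int = 0,
                  bug_reports: int = 0, templates_used: int = 0,
                  referrals: int = 0) -> int:
    units = 0
    if active:
        units += 10                  # active use, 10+ sessions/month
    if paid:
        units += 30                  # paid subscription ($10/month)
    units += feedback_units          # AI output feedback, 10-50 by value
    units += 5 * bug_reports         # verified bug reports
    units += 10 * templates_used     # shared prompt templates used by others
    units += 15 * referrals          # referrals who become 30+ day users
    return units

# Active paying user with one high-value correction and one referral:
print(monthly_stake(active=True, paid=True, feedback_units=40, referrals=1))  # 95
```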
The RLHF insight: The Contribution Map for AI tools includes something unique: feedback on AI outputs as a first-class contribution. When you tell the AI "this answer was wrong, here's why" or "this code has a bug, here's the fix" -- that is RLHF data. The same data that companies pay millions to collect from contractors. In Your 99, your expertise earns you ownership.
Economics at scale:
| Scale | Users | Paying % | Monthly Revenue | Compute Costs | Distributable | Builder 1% | Per Paying User |
|---|---|---|---|---|---|---|---|
| Small | 10,000 | 50% | $50,000 | $15,000 | $29,750 | $298 | $5.29 |
| Medium | 100,000 | 50% | $500,000 | $150,000 | $297,500 | $2,975 | $5.29 |
| Large | 500,000 | 50% | $2,500,000 | $750,000 | $1,487,500 | $14,875 | $5.29 |
(Assumes $10/month, compute costs at ~30% of revenue, other operating costs at ~15% of post-compute revenue, and the standard 1%/10%/89% split of the distributable pool)
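The rows follow a simple pipeline: revenue, minus compute, minus other operating costs, then the 1%/10%/89% split. A sketch reproducing the Small row, with cost shares inferred from the table's own numbers (compute ~30% of revenue, other costs ~15% of what remains):

```python
# Reproduces the "Small" row of the economics table. The cost shares are
# inferred from the table's numbers, not stated authoritatively.

def economics(users: int, paying_pct: float, price: float = 10.0):
    revenue = users * paying_pct * price
    compute = 0.30 * revenue                    # ~30% of revenue
    distributable = (revenue - compute) * 0.85  # ~15% other operating costs
    builder = 0.01 * distributable              # 1% builder share
    per_paying_user = 0.89 * distributable / (users * paying_pct)
    return revenue, compute, distributable, builder, per_paying_user

rev, comp, dist, builder, per_user = economics(10_000, 0.50)
print(f"${rev:,.0f} revenue, ${comp:,.0f} compute, ${dist:,.0f} distributable, "
      f"${builder:.2f} builder, ${per_user:.4f}/paying user")
# Table shows the builder share rounded to $298 and per-user as $5.29.
```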
The pitch in one line: You pay $10. You get ~$5.29 back. Half-price AI -- and you own it. And your feedback actually improves the product for everyone, including you.
Key differentiator beyond ownership: Transparent data policy -- your conversations are yours, period. No training on user data without explicit community governance approval. Full conversation export. Model choice (use whichever open-weight model suits your task). No ads, ever. And the RLHF loop: your expertise improves the product, and you're rewarded for that improvement.
Minimum viable feature set: Chat interface with model selection, conversation history, file/image upload, code highlighting, conversation sharing (user-controlled), conversation export. Phase 2: custom instructions/personas, API access, specialized tools (writing, coding, analysis). Phase 3: community-trained model improvements, specialized domain models.
Open Questions
- Open-weight models now genuinely compete with GPT-5/Claude Opus on most everyday tasks. The remaining gap is in frontier reasoning and specialized domains. Is this gap material for 90% of consumer use cases? Evidence suggests no.
- How do compute costs scale with users? At 500,000 users doing 10+ sessions/month, what's the real infrastructure cost? With inference costs falling 10x/year, today's cost estimates will be 10x too high within 12 months.
- Is the real product a general AI assistant, or specialized AI tools (AI for writing, AI for code, AI for health, AI for education)? Specialization might create stronger Contribution Maps and clearer ownership value.
- Could the Your 99 community become a source of high-quality RLHF data at scale? Millions of domain experts providing feedback in their fields of expertise -- this would be unprecedented and enormously valuable.
- Should this be a standalone product or an AI layer integrated into other Your 99 products (notes, social, productivity)? The "AI for everything" approach vs. the focused tool approach.
- What about the environmental cost of running AI models? Should the community governance include compute budget decisions?
- How should Your 99 position relative to the EU AI Act? Full compliance from day one could be a competitive advantage against slower-moving incumbents facing August 2026 deadlines.
- With Anthropic and Google reversing their privacy stances, is there a first-mover advantage in being the only AI tool that guarantees no training on user data by default, with no opt-out gymnastics required?
Report version 0.2
Last updated 2026-03-03