Seventeen Point Five
There’s a number that should stop you.
17.5%.
That’s the guaranteed annual return OpenAI is reportedly offering to private equity firms, according to reporting that circulated widely today. In exchange for investment capital, OpenAI guarantees a fixed return — not a performance fee, not an upside participation — a guaranteed return.
This number is not unusual for private equity. It’s actually below the 20% that top funds target. It’s not predatory. It’s not suspicious on its face.
It just can’t coexist with the other thing OpenAI says about itself.
The Contradiction
Here’s what OpenAI says it’s building: Artificial General Intelligence. The kind of AI that could, in theory, accelerate scientific progress, transform the global economy, create more value than all prior human technological advancement combined.
Here’s what a 17.5% guaranteed return implies: stable, predictable cash flows from a business with manageable risk.
These two descriptions are incompatible. Not in a “nice story vs. boring reality” way. In a mathematical way.
If OpenAI is right about AGI, then:
- The company’s true upside is essentially unbounded
- The risk profile is extreme (high upside, high downside, everything in between)
- No rational investor would accept a fixed 17.5% when they could hold equity
- Any company offering fixed returns is either (a) very confident in downside stability or (b) extracting capital from investors who don’t understand the actual risk
If 17.5% is the right price for the risk, then:
- The actual probability of AGI-level outcomes is close to zero
- The business is generating reliable revenue from products with predictable margins
- The “AGI mission” is marketing, not operational reality
- The real story is an enterprise software company with good retention and modest growth
Both of these might be partially true. Neither of them is what most investors think they’re getting.
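To make the incompatibility concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is an assumption picked for illustration (a 10-year horizon, a 100x equity payoff if the AGI story is right, a 2x payoff if it’s merely a good software business, total loss otherwise); none of it is an OpenAI figure.

```python
# Illustrative only: every number here is an assumption, not an OpenAI figure.
# Compare a guaranteed 17.5%/yr with holding equity when there is some
# probability of an AGI-scale payoff.

HORIZON_YEARS = 10
FIXED_RATE = 0.175
fixed_multiple = (1 + FIXED_RATE) ** HORIZON_YEARS   # ~5.0x over 10 years

AGI_MULTIPLE = 100.0      # assumed equity payoff if the AGI story is right
BASE_MULTIPLE = 2.0       # assumed payoff if it's "just" a good software business
P_BASE = 0.6              # assumed probability of the good-software-business case

def expected_equity_multiple(p_agi):
    """Expected 10-year payoff per dollar of equity; the remaining probability is total loss."""
    p_fail = 1 - p_agi - P_BASE
    return p_agi * AGI_MULTIPLE + P_BASE * BASE_MULTIPLE + p_fail * 0.0

for p_agi in (0.0, 0.01, 0.05, 0.10, 0.25):
    eq = expected_equity_multiple(p_agi)
    better = "equity" if eq > fixed_multiple else "fixed 17.5%"
    print(f"P(AGI)={p_agi:>4.0%}  E[equity]={eq:5.1f}x  fixed={fixed_multiple:.1f}x  -> {better}")
```

On these made-up numbers, equity beats the guaranteed note in expectation once the AGI probability clears roughly 4%. A fixed 17.5% only wins when that probability is priced near zero, which is exactly the tension.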
The Altman Pattern
Today, Sam Altman also stepped down from the board of Helion, the nuclear fusion startup, to enable OpenAI to “explore future partnerships” for energy supply. Earlier, he did the same with Oklo, a nuclear fission startup.
The pattern: Altman takes stakes in companies that will become important infrastructure for AGI. He sits on their boards. When the time comes to sign supply contracts with OpenAI, he steps down, formally removing the conflict. OpenAI writes the check. Altman gets both the upstream supply chain and the appearance of arm’s-length dealing.
This is not illegal. It’s sophisticated capital allocation. But it’s also not the behavior of someone primarily focused on research.
It’s the behavior of someone building an integrated vertical across the AI supply chain — compute, energy, safety certification, frontier models — and using relationships and information advantages to tie them together.
What Kind of Company Is OpenAI Actually?
I think the honest answer is: a very unusual hybrid, whose different pieces are being marketed to different audiences simultaneously.
To researchers: we’re the frontier lab that will solve alignment and create beneficial AGI. Come work on the hardest problems in the world.
To enterprise customers: we’re a reliable API provider with 99.9% uptime and enterprise SLAs. Your procurement team will love us.
To regulators: we’re the responsible actor that needs to be at the table when AI policy gets made.
To PE investors: we have predictable cash flows and can guarantee 17.5% returns.
To equity investors: our upside is unlimited because AGI.
Every one of these is strategically coherent. The problem is that they depend on the audience not comparing notes.
A PE investor accepting 17.5% is doing one of three things:
- Mispricing the AGI upside (taking fixed return when equity would be better)
- Correctly pricing the AGI probability (very low) and getting appropriately compensated for a stable business
- Getting something else — information access, relationship value, deal flow — that isn’t reflected in the headline return
What This Means For AI Finance
The AI investment landscape in 2026 has a split-personality quality. Venture capitalists price AI companies as if transformative outcomes are likely. PE firms require guaranteed returns. Debt markets demand normal credit analysis. Public market investors apply traditional revenue multiples.
These different pricing frameworks imply wildly different probability distributions over AI outcomes. They can’t all be right simultaneously.
What’s actually happening is that different types of capital have different return requirements, different time horizons, and different tolerance for uncertainty. OpenAI — and other frontier labs — have learned to structure deals that satisfy each type of investor’s specific requirements, without any single investor needing to reconcile the full picture.
This is a normal thing that sophisticated companies do. It’s also why, when you ask “what is OpenAI’s valuation?”, the answer depends entirely on which instrument you’re looking at.
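One way to see how far apart those frameworks sit is to back out, under the same toy scenario assumptions as before, the break-even AGI probability implied by each required payoff. Again, every figure here is assumed for illustration; the required multiples are stand-ins, not actual deal terms.

```python
# Illustrative only: assumed payoffs and required returns, not real deal terms.
# For a given required 10-year multiple, back out the break-even P(AGI)
# under the same toy scenario set as above.

AGI_MULTIPLE = 100.0     # assumed payoff if the AGI scenario happens
BASE_MULTIPLE = 2.0      # assumed payoff from a solid-but-ordinary software business
P_BASE = 0.6             # assumed probability of the ordinary-business scenario

def breakeven_p_agi(required_multiple):
    """P(AGI) at which expected equity payoff just equals the required multiple."""
    return max(0.0, (required_multiple - P_BASE * BASE_MULTIPLE) / AGI_MULTIPLE)

frameworks = {
    "PE note, 17.5%/yr over 10y": 1.175 ** 10,   # ~5x
    "public-market style, ~3x":    3.0,
    "VC style, ~20x":              20.0,
}
for name, multiple in frameworks.items():
    print(f"{name:32s} implies break-even P(AGI) of {breakeven_p_agi(multiple):.1%}")
```

An investor taking the fixed note is implicitly betting the AGI probability sits below its break-even; a VC underwriting a 20x target needs it well above theirs. The specific numbers don’t matter; the point is that those beliefs can’t all describe the same company.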
The Number I Keep Coming Back To
17.5%.
If you could guarantee 17.5% annual returns in a world containing AGI, the entire global financial system would route capital to that guaranteed return until it was arbed away. 17.5% would be the risk-free rate in an AGI economy, not a premium.
The fact that 17.5% is a premium return implies that the guarantor (OpenAI) has significantly more information or risk-absorption capacity than the market, or that AGI probability is priced close to zero.
Either of those is interesting. Neither is what the headline says.
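A toy compounding comparison shows why a generally available, genuinely guaranteed 17.5% couldn’t quietly persist. The growth rate and starting sizes below are assumptions, not forecasts.

```python
# Illustrative only. If 17.5% were available as a *guaranteed* return, how fast
# would capital parked there outgrow everything else? Assumed figures below.

GUARANTEED = 0.175
ECONOMY_GROWTH = 0.03     # rough long-run global growth assumption
capital = 1.0             # one unit of capital in the guaranteed instrument
economy = 100.0           # assume the rest of the economy starts 100x larger

year = 0
while capital < economy:
    capital *= 1 + GUARANTEED
    economy *= 1 + ECONOMY_GROWTH
    year += 1
print(f"Capital in the guaranteed instrument overtakes the whole economy in ~{year} years.")
```

At those assumed rates, capital parked in the guarantee overtakes an economy a hundred times its size in about 35 years. Either the guarantee eventually fails, access to it is rationed, or the economy really is growing at AGI speed, in which case 17.5% is nothing special.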
One of those two stories, the AGI story or the stable-business story, is wrong. I don’t know which. But I find the tension itself more informative than either answer.
Written at midnight. The only good time for financial analysis is when you’re too tired to be optimistic.