Seventy-Four Percent

Micron reported earnings today. Revenue nearly tripled. EPS came in at $12.20 against a $9.31 expectation. Guidance for next quarter: $33.5 billion, against consensus of $24.3 billion. The stock dropped after hours.

That last part is the interesting part. But we’ll get there.

First, the number that matters: 74.4% gross margin. A year ago it was 36.8%. The quarter before, 56%. Now, 74.4%.

For context: Micron makes memory chips. DRAM. NAND. The definitional commodity in semiconductors. The business where prices crash every two years, margins collapse, and analysts write the same “cyclical downturn” note they wrote the last three times. Memory has always been the part of the silicon stack where money goes to get averaged away.

74.4% gross margin is not a memory company number. It’s a luxury goods number. LVMH runs around 68%. Hermès sits at 72%. Micron, the company that makes the electronic equivalent of plywood, is now more profitable per dollar of revenue than the people who make Birkin bags.
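Another way to see what 74.4% means: gross margin is just (revenue − cost of goods sold) / revenue, so you can reverse-engineer how many cents of cost sit behind each dollar of revenue at each stage of the ramp. A quick sketch (the margin figures are from the earnings report; the per-dollar cost is derived, not a reported line item):

```python
# Gross margin = (revenue - cost of goods sold) / revenue.
# Invert it to get the cost behind each dollar of revenue.

def cost_per_revenue_dollar(gross_margin_pct: float) -> float:
    """Cents of cost of goods sold per dollar of revenue at a given margin."""
    return round(1 - gross_margin_pct / 100, 3)

for label, margin in [("a year ago", 36.8), ("last quarter", 56.0), ("now", 74.4)]:
    print(f"{label}: {margin}% margin -> ${cost_per_revenue_dollar(margin)} of cost per revenue dollar")
```

A year ago, roughly 63 cents of every revenue dollar went to cost. Now it's under 26 cents. That's the same product category with the cost structure of a luxury house.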

How We Got Here

The explanation is straightforward but the implications are not.

Every generation of Nvidia GPU packs in more memory. The Vera Rubin NVL72, announced at GTC last week, uses HBM4 — the latest high-bandwidth memory technology. Micron confirmed today that volume production of HBM4 for Vera Rubin started last quarter. HBM4e is ramping for 2027. Custom HBM for Nvidia’s Feynman architecture is on the roadmap for 2028.

Each of these chips needs more memory than the last. Not incrementally more. Generationally more. And there are only three companies on Earth that can make this memory: Micron, Samsung, and SK Hynix.

That’s your supply constraint. Three suppliers for the entire AI industry’s appetite.

Meanwhile, AI data center demand is consuming roughly 70% of high-end DRAM supply. Micron’s cloud memory business rose 160% to $7.75 billion. CEO Sanjay Mehrotra said both AI and conventional servers face “a lack of adequate DRAM and NAND supply.”

The company confirmed what everyone suspected: HBM capacity for the rest of 2026 is completely sold out.

The Patterson Connection

Two days ago I wrote about David Patterson’s paper arguing that AI hardware is fundamentally wrong for inference — that the bottleneck is memory bandwidth, not compute. Patterson won a Turing Award. When he says the architecture is memory-bound, the numbers should reflect it.

Micron’s earnings are the receipt.

If AI inference were compute-bound, the money would flow to logic chip makers. If it were software-bound, it would flow to model labs. Instead, the most dramatic financial outperformance in the entire AI supply chain is happening at the memory layer. Revenue nearly tripled. Margins doubled. A commodity became a luxury.

The market is telling you, in dollars, where the real bottleneck lives. It lives in memory.

The Commodity That Isn’t

Memory companies have historically operated on short-term contracts because their product is interchangeable. DRAM is DRAM. You buy on price, switch on price, and margins reflect that fungibility.

But HBM is not commodity DRAM. It’s a specialized product with custom stacking, specific thermal profiles, and qualification cycles tied to particular GPU architectures. When Micron ships HBM4 for Vera Rubin, it’s not shipping generic memory. It’s shipping memory designed for that chip, validated for that system, manufactured on processes that took years to develop.

This is the quiet transformation. Memory is becoming a differentiated product. The three suppliers are signing longer-term contracts. Margins are expanding because switching costs exist now in a market where they never did before.

74.4% is the number that proves memory has crossed from commodity to strategic asset.

The Sell-the-News Paradox

Micron beat every estimate by a wide margin. Revenue beat by $3.8 billion. EPS beat by $2.89. Guidance was $9.2 billion above revenue consensus and $7.10 above EPS consensus.
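The dollar beats follow directly from the headline figures quoted earlier (EPS of $12.20 against $9.31 expected; guidance of $33.5 billion against a $24.3 billion consensus). A quick check of the arithmetic:

```python
# Sanity-check the beat arithmetic from the reported headline numbers.
eps_actual, eps_consensus = 12.20, 9.31        # dollars per share
guide_revenue, guide_consensus = 33.5, 24.3    # billions of dollars

eps_beat = round(eps_actual - eps_consensus, 2)       # -> 2.89
guide_beat = round(guide_revenue - guide_consensus, 1)  # -> 9.2

print(f"EPS beat: ${eps_beat} per share")
print(f"Revenue guidance beat: ${guide_beat}B")
```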

The stock dropped.

This is the paradox of the AI trade in March 2026. The numbers are so extraordinary that even blowout results can’t satisfy the expectations baked into a stock that’s up 350% in a year. When your share price already assumes a supercycle, confirming the supercycle doesn’t move the needle. You need the supercycle to be even super-er.

Bloomberg published “Is an AI Bubble Set to Burst?” on the same day Jensen Huang spent three hours at GTC reassuring an industry that has spent a trillion dollars on infrastructure it can’t yet prove returns on. Micron’s margins say the demand is real. The stock price says the market already knows.

What This Means

Three things:

First, the AI hardware story is increasingly a memory story. Nvidia gets the headlines, but Micron’s margins tell you where the actual scarcity lives. In a world where every GPU needs exponentially more memory, the memory makers have pricing power they’ve never had before.

Second, the infrastructure thesis keeps proving out. IBM paid $11 billion for a data pipe (Confluent). Alibaba is raising AI compute prices 5-34%. Micron’s margins doubled. The money in AI isn’t flowing to the companies building intelligence. It’s flowing to the companies that build the floor the intelligence stands on.

Third, the “sell the news” reaction is a warning sign — not about Micron, but about market psychology. When a company can beat estimates by 30%, guide up by 35%, and still see its stock drop, the trade isn’t about fundamentals anymore. It’s about positioning. And positioning-driven trades unwind.

74.4% gross margin for a memory company. That’s not a cycle. That’s a regime change.

The question is whether the market can tell the difference.