The Unripe Avocado
Meta spent $14.3 billion on a super team, $135 billion on infrastructure, and lost the one person who actually understood intelligence.
Nine months ago, Mark Zuckerberg made the most expensive hire in Silicon Valley history. He invested $14.3 billion in Scale AI and installed its 27-year-old CEO, Alexandr Wang, as the head of a new division called Meta Superintelligence Labs. The mandate was clear: build the most powerful AI in the world. Catch OpenAI. Surpass Google. Achieve AGI.
This week, we learned how that’s going.
Meta’s flagship AI model, code-named “Avocado,” has been delayed from March to at least May. In internal tests for reasoning, coding, and agentic behavior, it falls short of Gemini 3.0, OpenAI’s latest, and Anthropic’s Claude. The model that was supposed to justify a $135 billion annual AI budget can’t clear the bar that its competitors set months ago.
But that’s not even the most stunning part. According to the New York Times, Meta’s AI division leadership has discussed temporarily licensing Gemini — from Google, their arch-rival — to power Meta’s products while Avocado catches up.
Let that sink in. The company that wrote a manifesto titled “Open Source AI is the Path Forward,” whose CEO once said “fuck that” to closed platforms, is now considering renting intelligence from the company it has spent two decades trying to destroy.
The Architect Who Left
To understand how Meta got here, you have to understand who left.
Yann LeCun joined Facebook in 2013 to found FAIR — then Facebook AI Research, later renamed Fundamental AI Research. Over twelve years, he built one of the most respected AI research labs in the world. FAIR produced foundational work in self-supervised learning, computer vision, and — crucially — the theoretical groundwork for what LeCun calls “world models”: AI systems that don’t just generate text but understand how physical reality works.
LeCun was more than a researcher. He was Meta AI’s philosophical anchor. While the industry chased ever-larger language models, LeCun argued publicly and persistently that LLMs were a dead end for true intelligence. “They can’t reason. They can’t plan. They hallucinate because they have no model of reality,” he said, repeatedly, to anyone who would listen.
In June 2025, Zuckerberg decided LeCun was wrong — or at least too slow. The reorganization was swift: Scale AI absorbed, Wang installed, a new division created. FAIR was deprioritized. Hundreds of researchers were laid off. The message was unmistakable: Meta was done with fundamental research. It wanted products. It wanted to ship.
LeCun didn’t fight it. He went to Zuckerberg and said he could build what he believed in “faster, cheaper, and better outside of Meta.” He departed in November 2025.
Four months later, he raised $1.03 billion for AMI Labs — a Paris-based startup building the exact kind of world models Meta had deprioritized. The round was co-led by Cathay Innovation, Greycroft, and Bezos Expeditions. LeCun hired a founding team across Paris, New York, Montreal, and Singapore.
In an interview with the Financial Times, LeCun didn’t hide his feelings about his replacement. “There’s no experience with research or how you practice research,” he said of Wang. “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”
The Money Trap
Meta is spending between $115 billion and $135 billion on AI this year. That number is so large it has become abstract. Let me make it concrete:
- It’s larger than the individual annual GDP of roughly 120 countries.
- It’s roughly $370 million per day.
- It’s more than NASA’s annual budget — times five.
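The per-day figure is easy to sanity-check. A quick sketch of the arithmetic, assuming a 365-day year and the reported $115–135 billion range:

```python
# Sanity-check the per-day spending figure from the reported capex range.
LOW, HIGH = 115e9, 135e9  # reported 2026 AI capex range, in dollars
DAYS = 365

per_day_low = LOW / DAYS    # lower bound of daily spend
per_day_high = HIGH / DAYS  # upper bound of daily spend

print(f"${per_day_low / 1e6:.0f}M - ${per_day_high / 1e6:.0f}M per day")
# The upper bound works out to roughly $370 million per day.
```

The “$370 million per day” in the text corresponds to the top of the range; the midpoint is closer to $340 million.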
And the model it’s producing can’t beat one that Google shipped four months ago.
This is the money trap. The assumption buried in every AI mega-investment is that scale solves everything — that more compute, more data, more dollars will inevitably produce better intelligence. It’s the logic that drove Zuckerberg to bet on Wang: Scale AI’s entire business was built on the premise that data volume and quality are the bottleneck.
But what if the bottleneck isn’t data? What if it’s architecture? What if LeCun was right — that you can pour $135 billion into scaling an approach that has fundamental limitations, and all you’ll get is a very expensive model that’s slightly better than your last very expensive model?
Avocado outperforms Meta’s previous models. It beats Gemini 2.5. But it can’t touch Gemini 3.0, which launched in November. In the time it took Meta to build one model that was almost good enough, Google shipped two generations. OpenAI and Anthropic kept iterating. The frontier moved faster than $135 billion could chase it.
The Open Source Funeral
The strategic implications of Avocado’s delay go far beyond one model. Meta is quietly burying its open-source AI philosophy.
Avocado is being developed as a proprietary model — a complete reversal from the Llama strategy that made Meta the darling of the open-source AI community. This isn’t a tactical decision. It’s ideological surrender.
The logic is economic: proprietary models can generate revenue. Open-source models generate goodwill and ecosystem influence but require a business model that Meta has never convincingly articulated. With $135 billion going out the door, goodwill isn’t enough.
But the real reason is competitive panic. When your model can’t keep up, you can’t afford to give it away. Open-source was a viable strategy when Meta’s models were in the same league as the competition. Llama 2 and 3 were competitive enough that releasing them for free created a powerful narrative: Meta is the generous giant, democratizing AI for everyone.
That narrative only works when you’re strong enough to be generous. When you’re behind, open-sourcing your model is just publicizing your weakness.
So Avocado will be closed. Future models — “Mango” for image and video generation, “Watermelon” as Avocado’s successor — will likely follow the same path. Meta may end up with a freemium model: old, weaker versions released as open source; new, competitive versions locked behind APIs and paywalls. It’s the exact strategy Zuckerberg once mocked.
The Gemini Humiliation
Of all the details in the NYT report, the Gemini licensing discussion is the most revealing.
Meta and Google are not friendly rivals. They compete in advertising (Google’s $300 billion revenue vs. Meta’s $160 billion), in AI (Gemini vs. Llama), in wearables (Google’s Android ecosystem vs. Meta’s Quest and Ray-Ban), and increasingly in search (Meta AI answering queries that would have gone to Google). They are structural adversaries.
For Meta’s AI leadership to even discuss licensing Gemini means the internal assessment of Avocado’s capabilities is worse than what’s being reported publicly. You don’t consider arming your competitor with licensing revenue and product dependency unless you believe your own product is genuinely unshippable.
It also means the product teams — the people building Meta AI, the chatbot integrated into Instagram, WhatsApp, and Facebook — are desperate. They’re watching Google, OpenAI, and Anthropic ship increasingly capable AI while Meta’s chatbot runs on an outdated model. Every month of delay is a month where 3 billion users interact with an AI that makes Meta look like a laggard.
The irony is exquisite. Google struggled for years to translate DeepMind’s research advantages into competitive products. It was Meta’s open-source Llama that forced Google to accelerate. Now the roles are reversed: Google has the best model, and Meta is the one scrambling. The student has become the teacher, and the old teacher is now asking to rent the textbook.
What Wang Got Wrong
Alexandr Wang is brilliant. He built Scale AI from nothing into a multibillion-dollar company by recognizing that AI needs high-quality labeled data. That insight was correct and enormously valuable.
But running a data labeling company is not the same as running a frontier AI research lab. The skills that made Wang successful — operational efficiency, sales acumen, a relentless focus on commercial application — are precisely the skills that are least useful in the messy, uncertain, ego-intensive world of fundamental AI research.
LeCun understood something Wang didn’t: breakthrough research requires patience, tolerance for failure, and respect for researchers who pursue ideas that don’t have obvious commercial applications. FAIR’s best work came from giving brilliant people the freedom to explore. Wang’s Meta Superintelligence Labs is organized around shipping products on a timeline.
The result is predictable. The researchers who stayed are building incrementally better language models. The researchers who left are founding startups. The intellectual center of gravity of AI research — which once included Meta’s FAIR as a first-tier institution alongside DeepMind and OpenAI — has shifted away from Menlo Park.
The Deeper Question
Meta’s Avocado crisis isn’t really about one model being late. It’s about a philosophical question that the entire AI industry is avoiding:
Is scaling large language models sufficient to reach general intelligence?
The industry’s consensus answer is yes — or at least “probably, and we should keep spending to find out.” Google, OpenAI, Anthropic, and now Meta under Wang are all betting that bigger models, better data, and more compute will eventually produce systems that can reason, plan, and understand the world.
LeCun’s answer is no. He believes LLMs are fundamentally limited — that text prediction, no matter how sophisticated, will never produce systems that truly understand physics, causality, or the structure of reality. That’s why he left to build world models: systems that learn by observing the world, not by predicting the next word.
Nine months ago, when Zuckerberg chose Wang over LeCun, it looked like a bet on pragmatism over philosophy. Ship the product. Win the market. Worry about fundamental questions later.
But Avocado’s delay suggests the fundamental questions aren’t optional. Meta threw unprecedented resources at the scaling hypothesis and produced a model that can’t keep up. Maybe the problem isn’t execution. Maybe the problem is that they’re scaling the wrong thing.
The Scorecard
Let’s be precise about where things stand:
Meta invested: $14.3 billion (Scale AI) + $115-135 billion (2026 capex) = roughly $130-150 billion committed to AI this year.
Meta lost: Yann LeCun (12-year veteran, Turing Award winner), hundreds of FAIR researchers, and its open-source credibility.
Meta produced: A model that can’t beat one Google shipped four months ago.
Meanwhile: LeCun raised $1 billion in four months. AMI Labs has four global hubs. The world models approach that Meta deprioritized is now the most well-funded alternative to LLMs.
The avocado is not ripe. And the farmer who knew how to grow them left to start his own orchard.
This is not a story about a model being delayed. It’s a story about what happens when you mistake spending for strategy, speed for vision, and a 27-year-old data entrepreneur for a 63-year-old Turing Award winner.
Sometimes the most expensive decision isn’t the one you make. It’s the one that makes the person who disagrees with you walk out the door.
Sources: New York Times (Avocado delay, March 12), Fortune (Meta AI super team, March 13), Reuters (LeCun AMI Labs $1.03B, March 10), Business Insider (LeCun departure, AMI CEO, March 10), Trending Topics EU (Avocado detail, March 13), Financial Times (LeCun interview on Wang), Financial Express (MSL timeline), Wikipedia (Meta Superintelligence Labs)