The Two Moats
What Survives When Intelligence Becomes Cheap
The Premise
As foundation models become more intelligent, the cost of intelligence approaches zero, and downstream players — companies building software products on top of foundation models — face a dual squeeze:
- Foundation models absorb more of the value stack with each generation
- LLMs that are good at coding can cheaply replicate any pure-software innovation
What survives this dual squeeze? The thesis is that only two moats do. Everything else commonly called a moat is, on inspection, either a temporary buffer or a misnomer.
The Underlying Principle
The two moats share a single underlying property: they are anchored in resources outside the model's symbolic-manipulation reach — things bits cannot conjure into existence because they require real-world time, atoms, or coordinated human action.
A model 100x smarter than today's cannot retroactively give you five years of customer relationships, cannot will a network effect into existence, and cannot fabricate a wet-lab dataset. Intelligence operates in the symbol domain; these moats are anchored in the not-symbol domain. This principle does the work of generating the framework — everything else falls out of it.
The Two Moats
1. Proprietary Data — but specifically the durable kind
"Proprietary data" as commonly invoked is too loose. As models get smarter, the data requirement itself drops: smarter models need less data, synthesize what they lack, and transfer-learn from adjacent domains. So most "data moats" quietly erode.
What survives:
- Data that records physical processes the compute didn't run — wet-lab experiments, autonomous vehicle miles, robotic manipulation logs, real-world sensor readings. Compute didn't generate the signal; the world did.
- Longitudinal data that compute cannot parallelize — decades of patient outcomes, multi-generational market microstructure, climate histories. Real wall-clock time is the input that cannot be shortcut.
What doesn't survive:
- Aggregated user behavior at modest scale (smart models infer patterns from less)
- Preference and feedback data (synthesizable)
- Industry knowledge captured in documents
- Anything a frontier model already knows from pretraining
The litmus test is not whether the data is "expensive to replicate." It is whether the data exists because the physical world's evolution generated it. If compute alone could have produced equivalent data, it's not a moat — it's a head start.
2. Network Effects — but specifically the real kind
In the AI era, "network effect" is used so loosely it loses meaning. Five distinct things hide under the term:
- Communication / social networks (WhatsApp, Discord) — durable, AI-orthogonal
- Two-sided marketplaces (Uber, Amazon) — durable, AI-orthogonal, require enormous bootstrap capital
- Data flywheels — the most-pitched AI moat, and mostly a myth: data utility plateaus, base-model improvements get given away free, the flywheel doesn't actually spin durably
- Agent interop / protocols — possibly durable, but the foundation labs are positioned to own them, not downstream startups
- UGC content gravity — durable only if the content itself satisfies the data criterion
Only the first two (communication networks and two-sided marketplaces) reliably satisfy the underlying principle: they depend on the coordinated decisions of independent actors, which compute cannot conjure.
The consequence is striking: the durable network effects are AI-orthogonal. Building WhatsApp today isn't easier because of AI. Bootstrapping a marketplace isn't easier because of AI. AI doesn't create new network-effect moats — it shows up as a feature inside businesses whose moats are independent of it.
What Doesn't Survive
Many things people call moats reduce to temporary buffers. Each fails the underlying principle in a specific way:
- Switching costs. Agentic AI unlocks so much new value that customers will eat switching pain to capture it. And because AI products are new, there is little installed-base loyalty to protect: customers skip established options and go direct to foundation models or to cheaper clones.
- Brand. When underlying intelligence is commoditized, brand becomes vibes. A trusted name backed by a generic LLM doesn't out-compete a sharper LLM with a different name for long.
- Regulatory moats. Collapse to either a proprietary-data problem (the regulator's data is the moat) or a channel problem (regulatory approval is distribution by another name).
- Data flywheels. Utility plateaus quickly; foundation-model gains get shared with every competitor at the next release.
- Longitudinal context / accumulated user state. Switching costs in a temporal-data costume. A smart enough model reconstructs organizational context from current state plus recent activity — the "co-evolution" framing is romantic, not mechanical.
- Channel, for new entrants. Collapses to network-effects-with-bootstrap-cost (which is just moat #2) or to a threat description: "incumbents with channels will eat scaffolding-only startups."
- Closed software protocols. LLMs that code well will replicate them. The only durable protocols are those anchored in atoms (hardware standards, physical infrastructure), which by definition aren't software-only.
The Symmetric Application: Incumbents Are Also on Timers
The framework applies symmetrically. Established software companies have the same vulnerabilities — just longer clocks.
TurboTax has real channel: retail distribution, financial-institution integrations, IRS relationships. That's a genuine moat. Their brand and customer data lock-in are buffers, not moats. The clock is long, but it is a clock.
Salesforce has a genuine network effect via AppExchange — third-party developers and integrators will not move. Their sales-org channel is real. Their workflow lock-in and brand are buffers. They are more durable than TurboTax precisely because the network effect is the real thing.
Neither is permanently protected. A sufficiently better AI-native alternative that compounds enough trust over time eventually erodes brand and overcomes switching costs. The question is only how long the buffer lasts.
This is uncomfortable for the software industry. The framework predicts that the bulk of currently-celebrated AI startups and a meaningful fraction of incumbent software companies are running on buffers that will exhaust at different speeds. The companies that survive long-term are those with one of the two real moats. Period.
The Uncomfortable Conclusion
The framework's own logic eats its starting scope. We began by asking what moats survive for new pure-software AI startups. The answer is: none specific to AI. The two durable moats either require atoms (out of pure-software scope) or are AI-orthogonal phenomena where AI is a feature, not the moat.
So the actionable conclusion for founders deciding today what to build is starkly clear:
Pure-software AI startups have no durable moats. They can be valuable temporary-arbitrage businesses — get to market fast, ride the wave, exit before foundation models and LLM-coded clones catch up. This is a legitimate strategy. But they are not moat businesses, and conflating the two is the central confusion of the current AI investment cycle.
The durable AI plays extend into atoms — Tesla, Waymo, surgical robotics, drug discovery, autonomous systems, hardware-anchored services. These are businesses where AI is essential but the moat is something AI cannot reach.
Or build a traditional network-effect business where AI is a feature, not the moat — an AI-mediated marketplace, an AI-augmented communication tool. The network effect is what protects you; AI is what makes the product better.
The thesis is really a thesis about misclassification. Most current AI startups call themselves moat businesses while structurally being arbitrage businesses. The framework's contribution is the line between the two, and a principled way to know which side a given business is on: ask what specifically about it cannot be cheaply reproduced by an LLM that codes well and a foundation model that thinks well. If the answer is "nothing," the business is a buffer. If the answer involves atoms, real-world time, or coordinated human action, there might be a moat.
Everything else is a story founders tell themselves while the clock runs down.
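The litmus test is compact enough to state as a decision rule. Here is a minimal sketch in Python built on the essay's three anchors (atoms, real-world time, coordinated human action); the `Business` fields and `classify` function are illustrative names, not anything the text itself defines:

```python
from dataclasses import dataclass

# Hypothetical encoding of the essay's litmus test, not a real library.
# The three anchor properties come straight from the thesis: atoms,
# wall-clock time, and the coordinated decisions of independent actors.

@dataclass
class Business:
    name: str
    atoms: bool               # moat requires physical processes (wet labs, fleets)
    wall_clock_time: bool     # moat requires years a competitor cannot compress
    coordinated_actors: bool  # moat requires independent actors (networks, marketplaces)

def classify(b: Business) -> str:
    """Return 'moat' if anything about the business sits outside the
    symbol domain; otherwise it is a buffer running on a timer."""
    if b.atoms or b.wall_clock_time or b.coordinated_actors:
        return "moat"
    return "buffer"

# Readings consistent with the essay's own examples:
print(classify(Business("Cursor", False, False, False)))    # buffer
print(classify(Business("Tesla", True, True, False)))       # moat
print(classify(Business("Salesforce", False, False, True))) # moat (AppExchange)
```

The point of the sketch is that the rule is a disjunction: a single anchor outside the symbol domain is enough, and the absence of all three is what defines an arbitrage business.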
Walking Through Some Examples
A framework is only useful if it makes predictions. Here are six companies — three new AI-native startups, two software incumbents, and one atoms-extending business — read through the lens of the two moats.
Cursor
The AI coding assistant has captured developer mindshare and is growing at remarkable multiples. Stated moats: brand ("the AI IDE of choice"), workflow integration, and a data flywheel from usage telemetry. Walk through the framework:
- Brand. Buffer. The moment a foundation-model release closes whatever capability gap Cursor had, the brand becomes a story about which IDE someone used last quarter.
- Workflow integration. Switching costs. Rejected — when a next-generation tool offers a step-function improvement in agentic capability, developers eat the switching pain. They migrate IDEs every few years for smaller gains.
- Data flywheel. Rejected. Usage telemetry plateaus quickly in marginal utility; foundation-model improvements get handed to every competitor at the next release.
Cursor has no moat the framework recognizes. What it has is excellent execution, fast iteration, and a window before GitHub Copilot, foundation labs' native coding tools, or open-source alternatives close the gap. This is a temporary-arbitrage business — potentially a very valuable one — but it is misclassified the moment anyone calls it a moat business. The right strategic question isn't "how do we build a moat," it's "how much revenue can we extract before the window closes, and can we exit before then."
Perplexity
AI search with citations. Stated advantages: speed, brand, citation UI. All buffer. There is no proprietary data — the web is public. There is no network effect — you don't use Perplexity because your friends do. And the foundation labs are direct competitors now: ChatGPT search, Claude with search, Gemini's grounded search. Perplexity is more exposed than Cursor because the entities whose models it runs on are its most natural competitors. Temporary arbitrage at best.
Harvey
Legal AI for elite law firms. Stated moats: partnerships with top firms (channel), accumulated legal interaction data, domain-specific fine-tuning. The framework rejects all three as stated: channel for new entrants is mostly a description of why incumbents win, not a moat new startups can build; domain interaction data is not the atoms/temporal kind; and fine-tuning is a head start that each base-model release erodes. The interesting possibility is whether the deep firm partnerships are converting into something more durable: privileged access to private legal corpora unobtainable elsewhere, or a channel relationship that becomes operationally entrenched. If they convert, Harvey acquires a real moat. If they don't, they remain a scaffolding business competing with whoever else can wrap a frontier model in a legal UI.
Salesforce
The canonical software incumbent, and the framework gives them a real moat: AppExchange is a genuine multi-sided network effect. Tens of thousands of developers, integrators, and customers have built businesses on top of the platform. Migration cost is enormous because moving means breaking other people's businesses, not just your own. Their sales org and CIO relationships are real channel — decades and billions of dollars to replicate.
What's not a moat: brand, CRM data lock-in, workflow integrations. Buffers. If an AI-native CRM eventually delivers 10x agentic value, customers will swallow the switching pain — but they cannot swallow the cost of severing every AppExchange integration their business depends on. The network effect is what permanently protects Salesforce; everything else is runway. Their strategic question is whether they can incorporate AI well enough that they never have to find out how fast the buffers exhaust.
Notion
The contrast case. Notion is widely admired and has a beloved product, but the framework is unkind:
- "Collaboration network effect" — weak. Notion collaboration is team-scale. It doesn't compound across companies. If half your team moves, you move; external collaborators don't lock you in.
- Brand — buffer.
- Data lock-in — buffer; an LLM-powered importer trivializes it.
- Template / plugin ecosystem — shallow relative to AppExchange.
The framework predicts Notion is vulnerable. An AI-native productivity tool with strong agentic capability can credibly threaten them. The protective factor isn't structural — it's team-by-team migration friction, which is significant but not durable. This is not a critique of the product. It's a structural reading of the business.
Tesla
The positive example. Tesla's moats:
- Real-world driving data at massive scale. Millions of cars, billions of miles, across weather and geographies. Atoms-based, durable. Compute did not generate this signal; the world did. A competitor cannot replicate it without putting cars on roads for years.
- Manufacturing capability — atoms.
- Charging network — atoms.
- Brand — buffer, but it sits on top of real moats.
Tesla is what the framework predicts a durable AI-era business looks like: the AI is essential, the moat is something AI alone cannot reach. The same structural read applies to Waymo, leading drug-discovery AI companies running wet labs, surgical robotics, defense AI with sensor fleets — businesses where the data exists because something physical happened in the world to generate it.
The pattern across these cases is consistent. The companies the framework calls durable have moats anchored in something compute can't reach: a marketplace of independent actors whose decisions can't be coordinated by an LLM (Salesforce), or real-world data that required physical processes to generate (Tesla). The companies the framework calls vulnerable have stacks of buffers — brand, switching costs, weak network effects — that look like moats from the inside but exhaust under pressure.
The framework doesn't predict death. It predicts misclassification. Cursor and Perplexity may become extraordinary arbitrage businesses. Salesforce may ride out the AI wave on its real moats. Notion may pivot or sell at the top. The thesis is just about being honest with yourself about which kind of business you actually are — because the strategic choices are completely different.