Gears Within Gears
AI isn't a value-add for software; software is a value-add for AI

When a company hits a ceiling on its core business, it often tries to grow demand. The Michelin Guide wasn’t about restaurants; it was about miles driven. Facebook’s internet.org wasn’t about altruism; it unlocked expansion by growing the size of the entire internet. The ceiling on the business was the medium, so they expanded it.

AI companies face the same structural problem: their constraint on inference revenue is the volume of tasks that flow through models. From where they sit, every piece of software that still runs without model calls is unrealized market.

Software and inference are in a demand loop

One way companies stimulate demand for their own products is by making complementary products cheaper and more abundant: highlighting better places to eat makes them more accessible, leading to more miles driven (and more tire wear). Joel Spolsky famously put his finger on the tech playbook for this back in 2002: you commoditize your complements.

The relationship between software and inference now reflects that dynamic. More software means more inference calls, and better inference means more software worth building. The two are locked in a demand loop, but the model vendors are on the side with pricing power. The rational move is to make the software layer as cheap and abundant as possible, because it shifts value toward the inference layer.

AI companies are doing exactly this. They are building software, acquiring software, and making AI-native alternatives so compelling that the old versions wither. The commoditization of the software layer isn’t the goal; it’s the byproduct. Their goal is to make the market for tokens bigger, and software is the means.

The product serves the meter

The present narrative, especially in SaaS, goes like this: AI is a feature. You sprinkle it hither and yon to make the product better, maybe stickier, maybe richer, and maybe to please the board. Every SaaS company on earth is scrambling to add an AI layer, under the premise that AI capabilities will drive retention and upsells and margins. Likewise, a hundred thousand software engineers have rediscovered their inner maker¹ and are shipping their side projects. In this story, AI is the lever.

This story gets the causality backwards. From the model vendor’s perspective, software is not the product; it is the surface that drives inference demand. Claude Code is free; it is almost certainly the most aggressive token-consuming product Anthropic has shipped so far. OpenAI hired the creator of OpenClaw not because they needed a task-automation agent but because products like it burn tokens. The entrepreneurs flooding the market have figured this out from the other direction: the gold rush is not just about chat replacing the interface; it is about the chance to rebuild existing products with a radically different cost structure, one where AI-native products route work through models by default.

Every feature the model vendors ship, every tool they acquire, every agent built by them or someone else is a mechanism for converting engagement into inference calls. The product serves the meter.

The oil builds the cars

The natural objection here is that this is just how commodity markets work, and commodity producers do fine without encroaching on their customers. Exxon doesn’t build cars any more than Saudi Aramco runs a taxi service. They sell the fuel, let other people build whatever they want on top, and do spectacularly well because everything downstream depends on them. AI companies could, in theory, follow the same playbook: sell the tokens, collect revenue on every API call, stay mostly neutral otherwise.

But there’s a difference between oil and tokens. Oil is fuel: you need it to run the car, but you can’t use it to design the car, manufacture the car, or improve the car. Tokens are commoditized thought work²: you can use them to run software, and you can also use them to build software, design software, and replicate software. The AI company doesn’t just sell the fuel; it sells the thing that can replace the factory.

Exxon never threatened GM because oil can’t build cars. Anthropic and OpenAI are already building software products that compete with their own customers, using the same resource they’re selling to those customers. Cursor is built on Claude’s API; Claude Code competes with Cursor. The complement doesn’t just drive demand for the core product; the complement can be generated by the core product.

Jevons paradox is their growth strategy, not your safety net

Jevons paradox isn’t a counterargument to the AI companies’ strategy; it is the AI companies’ strategy. More software gets built, all of it consumes tokens, every line metered. The model providers get paid for every inference call regardless of whether the application succeeds or fails. The volume increase in software construction is, from their perspective, TAM expansion working as intended.

The company expanding the market has the means, and the incentive, to eventually capture the margins on the expanded market. Jevons paradox suggests more software gets built. It does not guarantee that you’re the one building it, or that you’re the one getting paid.

The inversion is already here

Most of the industry is still treating AI as a feature you add to software, but the model vendors are treating software as a feature you add to AI. Those are not the same bet. The product is inference; the software is the surface area. And the surface area has gotten very, very cheap to produce, because the product can produce it.


  1. Which is delightful, by the way. And also, same. ↩︎

  2. Shoutout to all my commoditized Thoughtworkers out there. ↩︎