The mistake I keep seeing: founders bolt an LLM onto their product as a feature, announce "now with AI," and wonder why it doesn't change outcomes.
The distinction that matters is between AI as a surface and AI as a substrate.
AI as a surface
A chat interface added to a product that wasn't designed for conversation. A "summarize this" button. A search bar that now uses semantic search instead of keyword matching. These are real improvements and sometimes they're the right call. They're also discrete additions that don't change how the rest of the product works.
Products built with AI as a surface are still fundamentally the same product with AI features attached. The product logic, the data model, the user flows: all designed before LLMs were part of the picture. The AI fits into the gaps.
AI as a substrate
The product is designed from the ground up around what LLMs can do. The data model, the user flows, the product logic: all built to work with and through the model. The AI isn't in the gaps. It's the structure.
Cursor is a clear example. The IDE is designed around the assumption that the model has context about your codebase and can act on it. The AI isn't a feature in Cursor. It's why Cursor exists as a different thing from existing IDEs.
Most enterprise software built with AI as a substrate doesn't look like a chatbot. It looks like workflows where the model handles the parts of the work that are language-shaped, and humans handle the parts that require judgment, approval, or accountability.
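The shape of that kind of workflow can be sketched in a few lines. This is an illustrative toy, not any particular product: `draft_reply` stands in for a real model call, and the status fields are invented for the example. The point is structural, the model produces a draft, and nothing ships until a human decision gates it.

```python
from dataclasses import dataclass

@dataclass
class Task:
    input_text: str
    draft: str = ""
    status: str = "pending"   # pending -> drafted -> approved/rejected

def draft_reply(text: str) -> str:
    # Stand-in for the language-shaped work; a real system would
    # prompt an LLM here instead of templating a string.
    return f"Draft reply to: {text}"

def run_model_step(task: Task) -> Task:
    task.draft = draft_reply(task.input_text)
    task.status = "drafted"
    return task

def human_review(task: Task, approve: bool) -> Task:
    # The human decision is the gate; accountability stays with a person.
    task.status = "approved" if approve else "rejected"
    return task

t = human_review(run_model_step(Task("customer refund request")), approve=True)
print(t.status)  # approved
```

Notice that the chatbot is nowhere in this sketch: the model sits inside a pipeline step, and the interface the human sees is an approval queue.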
Why the distinction matters
Products with AI as a surface can be built by adding AI to what you already have. Products with AI as a substrate require rethinking what you're building.
The failure mode of the surface approach: you build a good product with AI features that could be removed without changing the fundamental value proposition. Competitors who build with AI as substrate build products that can't be described without the AI. They're not competing on the same dimension.
The failure mode of the substrate approach: you build around AI capabilities before you've validated that the AI capabilities are reliable enough for your use case. You ship a product whose quality is directly dependent on model behavior you don't control. When the model changes or fails, your product changes or fails.
What I'm watching at Countercheck
Our core product is computer vision, not language models. But GPT-4 is already in our workflow for the tasks I wrote about last month. The question I'm sitting with: are there parts of the Countercheck product where the right move is to design around LLM capabilities rather than add them later?
The candidate: anomaly explanation. When our system flags a potential counterfeit, we currently surface the confidence score and the detection result. The next useful thing for the client is an explanation of why it was flagged that a non-ML person can act on. That's a language task. It's a place where building for the language model from the start, rather than adding it later, would produce a different product.
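To make the idea concrete, here is a minimal sketch of what "designing for the language model from the start" might mean at the data-model level: the detection output carries the signals the model needs to explain itself, rather than bolting an explanation prompt onto a bare score after the fact. The field names and wording are hypothetical, not Countercheck's actual schema.

```python
def build_explanation_prompt(detection: dict) -> str:
    """Format a CV detection result as a prompt asking the model for a
    plain-language explanation a non-ML reviewer can act on."""
    # Each contributing signal is listed explicitly so the model
    # explains the flag rather than inventing reasons for it.
    features = "\n".join(f"- {f}" for f in detection["flagged_features"])
    return (
        "A computer-vision system flagged a shipment as a possible "
        f"counterfeit with confidence {detection['score']:.0%}.\n"
        f"Signals that contributed to the flag:\n{features}\n"
        "Explain in two or three sentences, for a logistics reviewer "
        "with no ML background, why this item was flagged and what to "
        "check next. Do not overstate certainty."
    )

prompt = build_explanation_prompt({
    "score": 0.87,
    "flagged_features": [
        "logo proportions deviate from reference",
        "stitching pattern mismatch",
    ],
})
print(prompt)
```

The design choice hiding in this sketch is the substrate one: the detection pipeline has to emit human-legible signals, not just a score, which is a change to the product's data model, not an add-on feature.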
We haven't built it yet. The observation is that the product design question and the AI integration question are the same question. The founders who will build the most interesting things in the next few years are the ones asking both at once.
With gusto, Fatih.