Build what is obvious in retrospect and unthinkable before

Twenty years of building has convinced me this is the only bar worth holding.

April 2026 · 8 min read

This is the sentence I couldn't write for years. It names something every serious builder recognises but almost nobody condenses. Read it slowly. Then read it again.

It isn't a call to be contrarian. It isn't a call to wait for genius. It isn't a call to ignore users. It's a call to build toward inevitability.

I've been building in digital for twenty years. I've watched dozens of SaaS categories congeal around the same three or four feature sets. I've seen agencies buy the same dashboards from four different vendors and pretend one of them was meaningfully different. I've sat through user interviews where the feedback was, almost verbatim, a description of the tool the user left six months ago. Most software is recursive. It's the same product built again, slightly improved, wrapped in whatever design trend is current.

Recursion isn't progress. It's the shape of mediocrity compounding.

The sentence above is what breaks out of the recursion. Everything that follows is my argument for why it's the only bar that matters.

Why benchmarks are a trap

I'm not a strong proponent of benchmarks. They measure how well you've caught up to what's already been built — not whether what you're building deserves to exist.

"We hit the industry average for conversion rate." "We match competitor X's feature set." "Our NPS is on par with the sector." All of this describes a product that hasn't failed. None of it describes a product that matters.

Agencies live inside benchmarks. Every marketing platform sells against them. Conversion benchmarks, CPA benchmarks, engagement benchmarks, retention benchmarks. The framing is always: here's what everyone else is achieving; aim to be slightly above average. And the tools those agencies use — analytics platforms, reporting products, AI-generated insight engines — are built to the benchmarks the sector already codified.

The result is an entire industry pointed at "slightly better than average," which is the same thing as "the same as everyone else, just this year's version of it."

The products that actually mattered in the last decade weren't graded against existing benchmarks. They rendered the benchmarks obsolete. The iPhone didn't meet the smartphone benchmarks of the mid-2000s; it made the old benchmarks irrelevant. Figma didn't beat Sketch on Sketch's terms; it introduced a mode of work that Sketch couldn't be measured against. Stripe didn't compete with PayPal on PayPal's metrics; it changed what developers expected from a payments API. Notion didn't win on features; it rejected the category's framing entirely.

The bar for these products wasn't a benchmark. It was inevitability. They felt obvious the moment they existed and unthinkable until they did.

That gap — between "of course" and "unthinkable" — is where products that matter live. Everything else is variation on what already exists, optimised slightly better.

What twenty years has shown me

Every product category I've watched evolve has followed the same arc.

One product defines the category. It does something that, before it existed, nobody was asking for. After it exists, the market realises it was the obvious answer. Then a second product shows up that's 80% the same but adds one twist. Then a third, and a fourth. Each new entrant benchmarks against the leader, adds features the leader lacks, and positions itself as "X but for [segment]." Each one is incrementally better on some axis. None of them are inevitable. None of them would be missed if they vanished from the market.

Dashboards are a perfect example. AgencyAnalytics exists. So does DashThis. So does Whatagraph. So does Looker Studio. So do another twenty I won't name. They're all the same product. Each one optimises for a slightly different slice of the agency market. None of them are genuinely different. None of them will be studied in ten years as category-shaping.

The reason is not the builders. Most of those teams are smart. The reason is the bar. They're building to benchmarks. To competitor parity. To "what users asked for." To what VCs pattern-match as investible. All of these are retrospective anchors. They pull products back toward the mean.

Inevitability doesn't come from retrospective anchors. It comes from refusal.

Why AI-assisted building makes this worse

AI has made a specific kind of mediocrity cheap. You can clone a competitor in a weekend. Prompt through a feature list and ship something that looks considered, tests fine, and gets no one fired.

None of it reaches the bar.

The default of AI-assisted building is shipping the acceptable version of a familiar pattern, faster. More dashboards. More filter bars. More "insights unlocked with AI" copy. More settings screens asking users to translate their questions into a UI dialect. The pattern reinforces itself, because it's always easier to improve a familiar thing than refuse it.

The AI-build boom is producing an enormous amount of software that is exactly this. Competent. Shipped fast. Indistinguishable. Built by people who are smart enough to follow the defaults and not yet at the point of refusing them.

The bar is the refusal.

What this looks like in practice

I'm building LDOO — conversational analytics for marketing agencies. Every existing product in the space is a dashboard. Every user interview I did pointed toward the dashboard pattern. Let users filter by date. Pick their chart. Slice by channel. Users expect this. They ask for it when it's missing. Shipping it would have tested well.

It would also have built another dashboard product. The entire bet — that asking your data in plain English is a category, not a feature — would have died in the first release.

So I refused.

Removed the drag-and-drop builder entirely. Made "Client Portal" the term, not "dashboard." The word "dashboard" is banned across every product surface, every line of marketing copy, every piece of UI microcopy. Wrote copy that starts with what the agency operator gets, not what the platform does. Rejected every UI pattern that required operators to translate their questions into configuration. Each decision individually trivial. Together, a product that does something a dashboard product structurally can't.
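
A ban like that only holds if something mechanical enforces it. As a minimal sketch of what that could look like (the script, file globs, and repo layout here are hypothetical illustrations, not LDOO's actual tooling), a CI step can fail the build whenever a banned term leaks into user-facing copy:

```python
#!/usr/bin/env python3
"""Fail CI when a banned term leaks into user-facing copy.

Hypothetical sketch: the globs and repo layout are illustrative
assumptions, not a real product's tooling.
"""
import re
import sys
from pathlib import Path

# Terms the product has refused. Matched case-insensitively, whole word.
BANNED = ["dashboard"]

# Where user-facing text might live in a hypothetical repo layout.
COPY_GLOBS = ["marketing/**/*.md", "src/**/*.json", "src/**/*.tsx"]

def main() -> int:
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b", re.IGNORECASE
    )
    failures = []
    for glob in COPY_GLOBS:
        for path in Path(".").glob(glob):
            for lineno, line in enumerate(
                path.read_text(encoding="utf-8").splitlines(), start=1
            ):
                if pattern.search(line):
                    failures.append(f"{path}:{lineno}: {line.strip()}")
    if failures:
        print("Banned term found in user-facing copy:")
        print("\n".join(failures))
        return 1  # non-zero exit fails the CI step
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point isn't the fifteen lines. It's that a refusal survives longer when the build breaks the moment someone forgets it.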

I don't know yet if the bet lands. I'll know when customers start paying. But I know the product isn't acceptable. Acceptable would have been a better dashboard. This is trying to be obvious in retrospect.

The question I ask every time now

A proposal passes the usual gates — helpful, reliable, trustworthy. That used to be the end of the test. Twenty years of shipping has taught me it isn't enough. Now there's one more question:

Would a competent team building a different product also ship this?

If yes, it's not the version worth shipping. Find the version only this product would ship.

Most things will fail that test. Most features will be pattern-matched versions of competitor moves. Most copy will read like SaaS copy. Most UX will default to what the market has already normalised. That's not a problem to solve — it's the entire landscape. The question is whether you refuse it often enough to build something that feels genuinely different.

If you're building with AI right now, "ship fast" is not the bar. Everyone has that. Velocity is table stakes; it doesn't distinguish anything anymore.

The bar is: ship the version that wasn't available to you from the start. The version that required refusing defaults at every turn. The version where the user thinks "of course" the first time they experience it, and cannot imagine how the previous generation of products felt workable.

Most products will be acceptable.

Acceptable is the ceiling of not having refused to be.

Twenty years in, I'm only interested in building toward inevitability.