The Weekend Startup Ritual
Spend enough time in today’s startup ecosystem and you’ll notice a new ritual. An entrepreneur opens a laptop on Friday evening, prompts an AI coding assistant, registers a domain name, and by Sunday has a working web app. The interface looks professional. The copy is crisp. The demo feels persuasive.
By Monday, almost nobody wants it.
This is not a failure of coding. It is a misunderstanding of innovation.
AI as a Mutation Accelerator
Large language models (LLMs) are astonishing at generating variation. They can propose features, pricing schemes, landing page drafts and product narratives in seconds. They distill patterns from huge amounts of text into outputs that sound coherent and strategic.
In evolutionary terms, they are mutation accelerators.
But evolution has two halves: variation and selection. And innovation lives or dies in the second.
Markets Select for Causality, Not Fluency
Markets do not reward coherence or fluency. They reward causal fit.
A product succeeds not because it is elegantly described, but because it changes behavior for specific reasons. It relieves a pressure. It resolves a constraint. It satisfies what innovation theorists call a “job to be done”.
The distinction matters because LLMs are, at their core, predictive systems. They are trained to guess the next word given a context. That objective encourages mastery of correlation and association. It does not automatically confer an understanding of cause and effect.
Talking About Causality Is Not the Same as Learning It
This is not to say that LLMs cannot discuss causality. They can. They can reproduce scientific explanations and reason through causal scenarios. But their knowledge is grounded primarily in observational text, not in interventions.
Causality, in the scientific sense, requires counterfactuals: what would happen if we changed X and held everything else constant? It requires structured experimentation.
Innovation Is Applied Falsification
Innovation, when done rigorously, resembles the scientific method more than brainstorming.
Entrepreneurs form conjectures about why customers behave the way they do. They test those conjectures through minimal viable products and market experiments. The Lean Startup feedback loop — build, measure, learn — is a form of applied falsification. Hypotheses that survive are refined. Those that fail are discarded.
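To make that loop concrete, here is a minimal sketch in Python. The hypotheses, metrics, and kill thresholds are all hypothetical; the point is only that each conjecture declares, before the experiment runs, the evidence that would refute it.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str            # the causal conjecture
    metric: str           # what the MVP experiment measures
    refuted_below: float  # kill threshold, pre-registered before the test

def run_experiment(h: Hypothesis) -> float:
    # Stand-in for a real MVP test; in practice this is weeks of fieldwork.
    observed = {"activation_rate": 0.11, "paid_conversion": 0.031}
    return observed[h.metric]

backlog = [
    Hypothesis("Users churn because onboarding is confusing",
               metric="activation_rate", refuted_below=0.25),
    Hypothesis("Teams will pay to eliminate manual exports",
               metric="paid_conversion", refuted_below=0.02),
]

for h in backlog:
    result = run_experiment(h)
    verdict = "discarded" if result < h.refuted_below else "survives, refine next"
    print(f"{h.claim!r}: {h.metric} = {result} -> {verdict}")
```

The mechanics are trivial; the discipline lies in committing to the threshold before seeing the data.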
LLMs are excellent at generating conjectures. They are less reliable at enforcing falsification discipline.
The Rise of the Polished but Shallow App
This imbalance is already visible. The ease of AI-assisted coding has lowered the cost of building superficial products. Many of these are thin interfaces wrapped around a general-purpose model. They automate something that is technically possible but not deeply demanded.
The result is a proliferation of polished but shallow applications: functional, yet misaligned with the underlying drivers of customer behavior.
The problem is not that AI cannot innovate. It is that today’s AI is rarely embedded in systems that track and update causal beliefs over time.
Bayesian Updating Is Not Enough
Bayesian reasoning offers a clue to what might be missing. In principle, Bayesian updating allows beliefs to shift as new evidence arrives. In startup terms, each experiment should revise our confidence in a hypothesis about customer needs.
But Bayesian updating alone does not guarantee causal insight. If the data are observational and confounded, we may simply become more confident in the wrong story.
Causal inference requires explicit structural assumptions: which variables influence which, what counts as an intervention, and how to interpret counterfactuals. These are not automatically encoded in current LLM architectures. They must be imposed, either by humans or by additional modelling layers.
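A small simulation makes the danger concrete. In the sketch below, with all numbers hypothetical, a hidden confounder, highly engaged "power users", drives both feature adoption and retention. Bayesian updating on the observational data becomes confident that the feature boosts retention even though its true causal effect is zero; only a randomized intervention reveals this.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def simulate(randomize: bool):
    # Hidden confounder: "power users" are more engaged across the board.
    power_user = rng.random(N) < 0.3
    if randomize:
        adopted = rng.random(N) < 0.5  # do(adopt): random assignment
    else:
        adopted = rng.random(N) < np.where(power_user, 0.8, 0.2)  # self-selection
    # Ground truth: adoption has NO causal effect on retention;
    # only the confounder moves it.
    retained = rng.random(N) < np.where(power_user, 0.7, 0.3)
    return adopted, retained

def posterior_mean(successes: int, trials: int, a: int = 1, b: int = 1) -> float:
    # Mean of a Beta(a, b) prior updated with binomial evidence.
    return (a + successes) / (a + b + trials)

for label, randomize in [("observational", False), ("randomized", True)]:
    adopted, retained = simulate(randomize)
    p_yes = posterior_mean(int(retained[adopted].sum()), int(adopted.sum()))
    p_no = posterior_mean(int(retained[~adopted].sum()), int((~adopted).sum()))
    print(f"{label:>13}: retention | adopted = {p_yes:.2f}, not adopted = {p_no:.2f}")
```

Run as written, the observational arm shows a retention gap of roughly twenty points while the randomized arm shows essentially none. The posterior is tight in both cases; only the intervention makes it causally meaningful.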
What Real AI-Assisted Innovation Would Look Like
A more serious form of AI-assisted innovation would combine several components:
- Language models to articulate hypotheses
- Structured causal models to formalize assumptions
- Real-world feedback loops to test interventions
- Systematic belief updating across iterations
Such a system would not merely generate ideas. It would manage experiments and update beliefs about what truly drives behavior.
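A skeletal sketch of that loop might look like the following. Every interface here is hypothetical: the LLM call, the causal graph, the experiment, and the likelihood ratio are stand-ins, meant only to show the shape of a system that proposes, constrains, intervenes, and updates.

```python
def llm_propose(context: str) -> list[str]:
    # Stand-in for a language-model call that articulates causal conjectures.
    return ["confusing pricing suppresses paid conversion",
            "missing exports block team adoption"]

# Structured causal assumptions, stated explicitly rather than left implicit.
causal_graph = {
    "pricing_clarity": ["paid_conversion"],
    "export_feature": ["team_adoption"],
}

def intervene(hypothesis: str) -> bool:
    # Stand-in for a randomized, real-world experiment.
    return "export" in hypothesis  # pretend only this conjecture survives

def update(prior: float, survived: bool, likelihood_ratio: float = 3.0) -> float:
    # Odds-form Bayesian update under an assumed likelihood ratio.
    odds = prior / (1 - prior)
    odds *= likelihood_ratio if survived else 1 / likelihood_ratio
    return odds / (1 + odds)

beliefs = {h: 0.5 for h in llm_propose("B2B SaaS churn analysis")}
for hypothesis in beliefs:
    beliefs[hypothesis] = update(beliefs[hypothesis], intervene(hypothesis))
print(beliefs)  # surviving conjectures gain credence; refuted ones lose it
```

Nothing here is production machinery; the design point is that the causal assumptions and the belief state live outside the language model, where experiments can discipline them.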
Until then, AI will remain a powerful engine for variation, but not for selection.
The Asymmetry of Consequences
There is a deeper difference between AI and entrepreneurs, one that is rarely acknowledged.
When an AI system proposes ten product ideas and all ten fail, nothing happens.
- No capital is lost.
- No years are wasted.
- No team disbands.
- No reputation is damaged.
The model simply generates more text.
But when a human entrepreneur forms the same ten conjectures and they fail, the cost is real. Money disappears. Time is consumed. Energy drains. Motivation erodes. Sometimes entire careers shift. For humans, therefore, innovation is not just an epistemic exercise. It is a costly one.
Humans operate under financial, emotional, and temporal constraints. These constraints force prioritization and seriousness; they create discipline, because every experiment has a price.
But AI does not bear that price.
This asymmetry matters because cost sharpens causal thinking. When failure is expensive, you try harder to isolate the real driver before you commit. You seek discriminating tests. You question your assumptions. You feel the weight of being wrong.
An LLM does not feel that weight. It can generate conjectures without ranking them by survival value. It does not allocate scarce capital, and it does not experience regret.
The Irreducible Human Risk
Entrepreneurs still bear epistemic risk. They decide which hypotheses are worth testing. They allocate scarce resources. They absorb the consequences of being wrong. An LLM can propose ten elegant explanations for why users churn. It does not suffer if all ten are false.
The current wave of AI tools has democratized production. It has not eliminated the need to understand why customers act. Innovation advantage still lies in causal compression: identifying the few variables that genuinely move the system.
AI can generate possibilities at unprecedented speed. But markets do not reward possibilities. They reward explanations that survive contact with reality.
And until our AI systems are grounded in intervention as well as prediction — and embedded in systems where errors carry consequences — the scientific core of innovation will remain stubbornly, and necessarily, human.