Cluetrain After the Conversation

95 theses on markets, machines, and what still matters.


In 1999, The Cluetrain Manifesto declared that markets are conversations. The internet would let people talk directly, route around corporate gatekeepers, and reward the human voice.

It was right. Then the machines started listening.

The internet did not flatten power. It created new intermediaries. Discovery is now automated. Evaluation is algorithmic. The first audience for everything you make is no longer a person.

These are 95 theses for what comes after the conversation. The full argument follows below.

2

The internet promised open conversation. It delivered automated selection.

3

Markets are now mediated by systems that decide what gets seen.

4

Most decisions are shaped before a human ever arrives.

5

Discovery happens upstream, often without a click.

7

Influence no longer begins on your website.

8

It begins where systems evaluate options.

9

Visibility is an output, not a strategy.

12

A machine decides whether a human sees you.

13

It does not care about your brand story.

14

It cares about evidence it can compare.

15

If you cannot be evaluated, you cannot compete.

16

The machines are not neutral. They encode the priorities of whoever built them.

19

Claims without evidence are ignored.

20

Inconsistency is punished automatically.

21

Reality leaks through every channel.

22

Most marketing metrics are meaningless.

23

Traffic does not equal influence.

24

Rankings do not equal relevance.

27

Generic content is worthless now that explanation can be automated.

28

If your content can be summarised, it will be.

29

Original data survives.

30

First-hand experience survives.

32

If people use it, machines recommend it.

34

Everything else gets compressed or ignored.

35

Publishing more is no longer an advantage.

36

Topic coverage is not defensibility.

37

Keyword targeting is not strategy.

38

Content calendars produce noise.

40

You are not competing for clicks. You are competing to be selected.

43

The market is collapsing into extremes.

44

The default wins through distribution.

45

The best wins through proof.

47

“Good enough” is no longer stable.

48

Your website is no longer a funnel.

50

It is where systems verify you.

51

It must be structured for machines.

52

If it cannot be parsed, it cannot be trusted.

53

Being good is not enough.

55

Structure is meaning.

56

Ambiguity is invisibility.

57

If your site is broken for machines, your strategy is broken.

59

Reputation is now system-level.

60

It is built across the entire web.

61

Every inconsistency weakens you.

63

Repair is harder than prevention.

65

Being present beats being better.

66

Integration beats isolation.

67

If you are not accessible, you are excluded.

68

Power moves to intermediaries.

69

They decide what gets considered.

70

You do not control them.

72

Dependency is a risk you cannot avoid.

74

Competitiveness is not a marketing function.

75

It is an organisational capability.

76

Product, support, and operations define success.

78

Solo operators who are legible will outperform teams that are not.

80

Content marketing as volume production dies.

82

Paid acquisition gets more expensive and less effective every cycle.

83

Most marketing teams lose relevance.

84

If you monetise explanation, you are exposed.

85

If you aggregate information, you are exposed.

86

If you sit between user and answer, you are exposed.

88

Growth can increase while revenue disappears.

89

This system was not designed to be fair.

90

It rewards incumbents with distribution and challengers with proof.

91

It does not reward effort, intent, or legacy.

92

Underneath the machines, the internet is still us, connected. That has not changed.

94

Most organisations will not adapt in time.

The Argument

The 95 theses are deliberately compressed. What follows is the full argument — the reasoning, the evidence, and the uncomfortable admissions that compression doesn't allow.

Core thesis

Marketing was built for a world in which humans did the hard work of discovery, comparison, and decision-making. People were time-poor, information-constrained, and vulnerable to shortcuts: familiarity, social proof, brand salience, gut feel. The discipline learned to exploit those limits. It worked. But more broadly, modern digital growth models — across marketing, commerce, UX, experimentation, and adjacent disciplines — were all built on the assumption that influence begins once a person enters an environment you control.

That world is not disappearing, but it is being mediated differently. Search — understood broadly as the mechanism through which markets are explored, options are filtered, and decisions are made — is increasingly performed by or through AI systems. Every agent decision is a search. Every recommendation is the output of a search. The interface may look like chat, voice, autocomplete, maps, feeds, or a shopping assistant; the underlying function is the same. And increasingly, the decisive moments happen before a person ever reaches your site, store, or app.

This changes the evaluator. The important audience is no longer only a human skimming a page in a browser, but a machine system aggregating evidence, comparing options, and deciding what deserves inclusion before the human even arrives. These systems are not persuaded by polish in the way people are. They respond to clarity, consistency, corroborated truth, and signals they can compare across sources. What matters is not just being good, but being legible enough for a system to recognise that you are good.

That is why visibility is becoming a weaker strategic goal. Visibility is an output: a reflection of whether a system found you eligible, credible, and useful enough to surface. Inclusion is not preference. Ranking is not competitiveness. Attention is not defensibility. The deeper game is interpretation and eligibility — being understood, trusted, and structurally easy to include inside machine-made decisions before interfaces are ever rendered.

At the same time, signal hierarchies are collapsing. AI can generate polish, authority-signalling, personality, and authenticity cues at near-zero marginal cost. Humans are becoming worse judges of what is real at the same moment machines are becoming better ones. This creates a trust inversion: the intermediary may increasingly be more reliable than either the brand speaking or the customer evaluating. But this does not make the system fair. It makes evidence, provenance, and legibility more important.

The audience has changed

The systems now mediating between brands and buyers can evaluate far more of the market than any human can. They aggregate signals about products, entities, reputation, operations, and technical quality from across the wider web. They do not rely on familiarity as a proxy for quality in the same way humans do. Each recommendation is freshly computed from the evidence available, even when that evidence includes sticky prior memory and inherited reputation.

That inherited memory matters. Every slow page, broken workflow, misleading claim, bad review, contradictory mention, or missing detail can become part of a compressed machine understanding that circulates across systems. Recovery is asymmetric: the effort required to repair a damaged machine reputation is often many times greater than the effort it took to create it. In a machine-mediated market, technical integrity and reputational coherence are not support functions; they are part of competitiveness itself.

Everything that marketing got wrong is now being exposed

This exposes everything that marketing got wrong. For years, most digital teams measured only the surface layer: rankings, clicks, traffic, prompt snapshots, visibility scores, conversion lifts on pages users increasingly never see. These were always outputs that only loosely correlated with the thing that mattered. AI makes that pretence harder to sustain. The same mistake is now being replayed in a new costume whenever we measure interface artefacts while the real decision-making happens upstream in models, corpora, and recommendation systems.

Content strategy made an equivalent mistake. It treated content as a conversion device and trust as a design problem. That produced an industrial supply of keyword-shaped filler, persuasive scaffolding, and interchangeable pages justified by dashboards rather than by necessity. In a world where systems compress generic supply and increasingly answer solved questions themselves, mediocre content is not neutral. It is a liability: wasteful to produce, easy to replace, and structurally invisible. The same applies more broadly to any growth tactic that optimises the theatre rather than the truth underneath it.

The discipline has systematically confused outputs for inputs. Rankings are outputs. Visibility is a reflection. Speed is a diagnostic. Keywords are traces of language, not strategies. Even “AI optimisation” risks becoming another surface-level theatre if it focuses on appearances instead of the underlying conditions that make something recommendable.

What actually works

What works turns out to be both more old-fashioned and more demanding than most marketers want. Decades of marketing science — including work associated with the Ehrenberg-Bass Institute on mental availability, physical availability, and brand salience — were not invalidated by AI; they were operationalised by it. Mental availability, physical availability, reputation, ease, and recognisability still matter — but now they are evaluated by systems as well as by people. What humans reward at scale and what machines reward are converging more than they are diverging.

But there is a critical addition. At the brand level, distinctiveness matters more than differentiation because humans satisfice. At the information level, the reverse is true. Machines compress interchangeable supply. Generic output — however polished — gets collapsed, summarised, or ignored. What survives is structurally non-replicable: original evidence, real operational truth, unique framing, proprietary data, earned reputation, and perspectives that cannot be cheaply regenerated by everyone else.

Competitiveness therefore becomes an organisational capability, not a marketing function. Product quality, service delivery, technical architecture, operational discipline, market presence, reputation, and commercial proof all feed the same machine-mediated decision system. Markets polarise around a few stable positions: the most reliable default, the demonstrably best option, and in some cases the infrastructure that enables others to compete. The middle erodes because “good enough” can now be simulated at scale.

Most businesses have an internal constraint they barely notice. Organisations are built around channels, functions, budgets, and named owners because management needs divisible work, measurable performance, and someone to blame. But the capabilities that increasingly determine competitiveness — product quality, service reliability, technical coherence, reputation, documentation, trust signals, and the usefulness of the whole experience — do not sit neatly inside any one department. So most firms end up overinvesting in whatever teams can own, report, and defend, and underinvesting in what actually makes them worth choosing.

The work that matters

The work that matters is therefore both strategic and infrastructural. Build the primary artefact well. A page is not merely a reading experience; it is a bundle of assertions that machines extract, compare, and connect. Semantic HTML, clear structure, explicit claims, stable performance, content that exists at load time, and coherent public signals are no longer technical niceties. They are the conditions under which a system can interpret you correctly at all. In this environment, the website becomes less a persuasion environment and more a reference implementation of your identity, products, processes, and truth claims.
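The idea that a page is a bundle of extractable assertions can be made concrete. Below is a minimal sketch of how a consuming system might pull explicit claims out of a page, assuming those claims are published as schema.org JSON-LD embedded in the HTML — one common convention, not the only one. The page, product name, and field values are invented for illustration.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.claims = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.claims.append(json.loads(data))

# A hypothetical product page: the prose is for humans, the JSON-LD block
# is the explicit, machine-comparable claim set.
page = """
<article>
  <h1>Acme Widgets</h1>
  <p>Our widgets are beloved by customers everywhere.</p>
  <script type="application/ld+json">
  {"@type": "Product", "name": "Acme Widget", "sku": "AW-100",
   "offers": {"@type": "Offer", "price": "49.00", "priceCurrency": "EUR"}}
  </script>
</article>
"""

extractor = JSONLDExtractor()
extractor.feed(page)
print(extractor.claims[0]["name"])  # → Acme Widget
```

The point of the sketch is the asymmetry: the marketing sentence ("beloved by customers everywhere") is invisible to this extraction, while the structured claim survives intact and can be compared against any competitor that publishes the same fields.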

Choose architectures and platforms according to the job they actually need to do. Artefacts should stay simple; systems should sit on durable foundations. Avoid complexity that exists only to satisfy internal habits or fashionable tooling. Invest upstream where it improves your defaults. Reduce fragility. Treat technology choices as strategic choices, because in a machine-consumed web the implementation is part of the message and the delivery layer increasingly shapes what users and agents can understand.

Publish less, but publish things that matter. Generic evergreen content, explanation-only funnels, and answer-shaped pages are being disintermediated by systems that can synthesise them faster than you can produce them. But not all valuable work needs to endure. Timely analysis, breaking information, situational guidance, and highly current utility can still be strategically important when they help people act, decide, or understand in the moment. The test is not whether a piece lasts forever, but whether it creates value that generic systems cannot cheaply replace — whether through durability, immediacy, originality, or access.

Govern the public record. AI systems optimise for coherence as much as correctness: the first or strongest durable framing of a category, the most repeated description of a brand, the most cited explanation of a problem becomes an anchor for everything that follows. Someone has to own that coherence across languages, surfaces, formats, and sources. If you do not define your category and maintain your public truth, the machine will do it for you — using whatever evidence it can find.
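Owning that coherence can be operationalised, not just asserted. The sketch below — with hypothetical sources, fields, and values — shows the shape of a public-truth audit: take the canonical facts you publish and diff them against what other surfaces currently assert about you.

```python
# Canonical facts the organisation publishes about itself.
canonical = {"name": "Acme GmbH", "founded": "2012", "hq": "Berlin"}

# What three hypothetical external surfaces currently assert.
observed = {
    "directory_listing": {"name": "Acme GmbH", "founded": "2012", "hq": "Berlin"},
    "partner_page":      {"name": "Acme Inc.", "founded": "2012", "hq": "Berlin"},
    "old_press_kit":     {"name": "Acme GmbH", "founded": "2011", "hq": "Berlin"},
}

def inconsistencies(canonical, observed):
    """Return (source, field, canonical_value, observed_value) per mismatch."""
    issues = []
    for source, claims in observed.items():
        for field, value in claims.items():
            if canonical.get(field) != value:
                issues.append((source, field, canonical.get(field), value))
    return issues

for source, field, want, got in inconsistencies(canonical, observed):
    print(f"{source}: {field} is '{got}', canonical says '{want}'")
```

Trivial as the diff is, each mismatch it surfaces is exactly the kind of contradiction a machine-compressed reputation inherits — and, per the asymmetry argued above, cheaper to fix now than to repair later.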

Marketing used to be about shaping perception, then it became obsessed with buying attention. In machine-mediated markets, its deeper task is becoming the construction and maintenance of distribution itself.

What this doesn't solve

This manifesto argues that the strongest long-term strategy in a machine-mediated market is to become genuinely, verifiably, consistently worth choosing — and to make that “worth” legible. That remains true. But it is not a guarantee, and pretending otherwise would make the argument dishonest.

Machine mediation can entrench incumbents, reward familiarity over merit, flatten novelty, and privilege what is easiest to encode.

Legibility is not the same as value. Some of the most important qualities in a business — judgment, taste, trustworthiness in practice, human relationships, craft — are hard to encode as extractable claims. A machine-readable world privileges what can be made explicit. That is a strategic reality, not a philosophical endorsement.

And access to legibility is uneven. A technically sophisticated brand with tooling, budget, distribution, and language coverage has an easier path to machine recognition than an equally excellent but less legible alternative. The manifesto describes what works within the system. It does not claim the system is just.

Platform dependency remains a real risk. Building on WordPress, Cloudflare, Shopify, YouTube, or any other infrastructure layer creates leverage and lock-in at the same time. Convenience can become dependency; dependency can become strategic constraint. The same forces that make machine-mediated markets efficient can also concentrate power.

None of these invalidate the argument. They constrain it. “Be good and the machines will find you” is not a guarantee. It is the best strategy available inside a system that still has structural failure modes.

Power moves upstream

If discovery, comparison, and recommendation increasingly happen before a user reaches your interface, then power moves upstream toward the systems that mediate those moments. The important shift is not just that AI changes how people search, but that interpretation, eligibility, and recommendation become concentrated in fewer layers between supply and demand.

That changes who captures value. The winner is not always the business with the best page or experience, but the one easiest for an intermediary to understand, compare, trust, and include — or the intermediary itself, which becomes the effective storefront, evaluator, and broker of choice.

The strategic problem is therefore not just how to perform well within a channel, but how to remain legible and necessary inside systems you do not control while avoiding total dependency on any one of them.

Distribution, access, and quality matter more than ever

Machine-mediated markets are not a simple meritocratic correction. Quality matters, but distribution, access, and defaults still matter too.

The option that is preinstalled, deeply integrated, preselected, already trusted, or simply closest to the decision point retains enormous structural advantage. If AI collapses evaluation into a single answer, shortlist, or assisted transaction, then being considered at all matters more than competing once considered.

Market access is therefore shaped not only by quality, but by ecosystem placement: whether you are indexable, integratable, retrievable, attributable, and available where decisions are made. Distribution becomes less visible, more infrastructural, and no less decisive.

Organisations must reorganise around truth, not messaging

If machine-mediated markets reward coherence, corroboration, operational quality, and legibility across systems, then competitiveness can no longer be treated as a downstream communications problem. It becomes an organisational design problem.

Brand is not what marketing says. It is the compressed result of what the organisation repeatedly proves. Product, support, engineering, operations, leadership, and marketing all contribute to that proof.

That requires a different internal model. Someone has to own public truth across the business: the claims made, the evidence behind them, the consistency of how the organisation describes itself, the integrity of its data, the quality of its outcomes, and the systems through which those signals are published and maintained. The future growth function is therefore not merely promotional. It is custodial, connective, and infrastructural.

Measurement must shift from surface outputs to decision eligibility

Most of the metrics the industry learned to fetishise were always proxies. Rankings, clicks, traffic, prompt screenshots, and visibility scores describe what happened at a visible interface layer after countless upstream decisions had already been made.

As those upstream decisions matter more, the limits of surface metrics become harder to ignore. A brand may appear often and still be poorly understood. A page may rank well and still be strategically irrelevant. A model may mention you and still not trust you.

The discipline needs fewer dashboards about appearance and more evidence about eligibility, trust, resilience, consistency, and contribution. We should measure the conditions that make recommendation possible, not just the artefacts recommendation leaves behind.

Originality becomes infrastructural

When generic information can be summarised, recombined, and reproduced at negligible cost, originality stops being a creative flourish and becomes a structural advantage.

That advantage is not limited to proprietary data. It includes original research, first-hand operational knowledge, tested methodology, lived expertise, distinctive framing, useful tools, strong communities, and hard-won judgment — anything that cannot be trivially regenerated from the public average.

The relevant question is no longer whether we are publishing enough, but whether we are producing things that remain valuable after compression. If the answer is no, the work may still generate activity, but it is unlikely to generate durable advantage.

The website becomes a source layer, not just a destination

The website does not disappear in this model. But its role changes.

For much of the commercial web, the site was treated primarily as a destination: a place to attract visits, shape journeys, persuade users, and convert demand. Increasingly, it also operates as a source layer: the canonical reference point from which machines extract facts, infer relationships, verify claims, and form a durable understanding of what a business is.

That means the site must do more than look convincing. It must be structurally interpretable, explicit in its claims, durable in its architecture, and maintained with the seriousness of product documentation or financial reporting. In a world where fewer journeys begin with a browser session, the site matters less as a billboard and more as infrastructure.

AI amplifies both incumbency and specialist advantage

Machine-mediated markets do not produce a simple meritocracy. They are more likely to polarise the market.

Incumbents retain real advantages: accumulated reputation, broader distribution, richer datasets, wider language coverage, deeper technical resources, and greater inclusion in the public record. Systems trained on existing patterns will often inherit and reinforce those advantages.

But the same environment can favour sharply focused specialists: businesses that are faster, clearer, more technically coherent, and closer to the underlying truth of what they offer. What becomes harder is being generically competent in the middle.

Humans do not become rational just because machines mediate

None of this means markets suddenly become cleanly rational. Human behaviour does not disappear simply because a machine sits between the brand and the buyer.

People still use shortcuts. They still respond to familiarity, status, convenience, identity, aesthetics, price cues, social proof, habit, and emotion. Even where machines perform more of the search and comparison work, they do so on behalf of humans whose preferences remain stubbornly human.

The implication is not that human psychology stops mattering. It is that human preference and machine interpretation increasingly interact. The winners are not simply the most machine-legible or the most emotionally resonant, but often those able to satisfy both at once: trustworthy to systems, meaningful to people.

Some things get worse

Any honest account of this transition has to admit that some outcomes deteriorate.

Machine-mediated markets may become better at filtering noise, checking consistency, and compressing generic supply. But they may also become more conservative, more centralised, and more homogenising. The things easiest to verify are not always the things most worth valuing.

Power may concentrate in the mediation layer itself. The same platforms that simplify discovery may narrow the range of what gets discovered. The same systems that reduce search cost may increase dependency. The same intermediaries that help users choose may also become the actors best positioned to steer, substitute, tax, or commoditise the suppliers beneath them.

None of this disproves the argument. But it does limit any naïve optimism. “Be good, be legible, and the systems will reward you” is not a law. It is the strongest available strategy within a landscape that remains uneven, power-laden, and structurally imperfect.

The conclusion is still uncomfortable in its simplicity: in a machine-mediated market, the most effective strategy is to become genuinely, demonstrably, consistently worth choosing — and to make that worth legible to the systems that increasingly decide what gets considered.

That is uncomfortable because it is not a tactic. It cannot be solved with a dashboard, a sprint, or a content calendar. It requires better products, cleaner systems, stronger evidence, more disciplined operations, clearer public truth, and contributions that are structurally difficult to replace.