
Positions & Evidence

The detailed positions, arguments, and source references that underpin the manifesto. Each position contains the full argument and linked sources.

0. Foundational premise

Search is how markets function

AI makes it more important, not less.

Search, broadly defined, is the mechanism through which people enter categories, gather information, evaluate options, and make purchase decisions. This has always been true — from asking friends, to browsing shops, to querying Google, to prompting ChatGPT. AI does not replace search; it extends it. LLMs perform the same function — discover, evaluate, decide — but through different mechanics: training data (pre-crawled market knowledge), grounding (real-time retrieval), and fan-out queries (multi-step research). The capacity is superhuman, but the function is identical.

The rise of AI-assisted and ambient search makes search more important, not less. As discovery, evaluation, and decision-making get offloaded from humans to machines — comparison shopping via agents, automated research, AI-mediated recommendations — the volume and significance of search activity increases dramatically, even as traditional query counts might decline. Every AI agent decision is a search. Every grounding call is a search. Every recommendation is the output of a search. Being findable and evaluable by machines is therefore not a marketing channel — it is a structural property of the business.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026
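
A minimal sketch of that machine-side loop, in TypeScript and purely for illustration: decomposeIntent, retrieve, and score below are invented stand-ins for model behaviour (query fan-out, grounded retrieval, evidence scoring), not real APIs.

```typescript
// Illustrative only. decomposeIntent, retrieve, and score are trivial
// stand-ins for model behaviour (query fan-out, grounded retrieval,
// evidence scoring); they are not real APIs.

interface Evidence { candidate: string; claim: string; source: string }

const decomposeIntent = (intent: string): string[] =>
  [`${intent} options`, `${intent} reviews`, `${intent} pricing`];

const retrieve = async (query: string): Promise<Evidence[]> =>
  []; // a real system grounds this against live and pre-crawled sources

const score = (e: Evidence): number =>
  e.claim.length > 0 ? 1 : 0; // placeholder for corroboration and consistency checks

async function recommend(intent: string): Promise<string[]> {
  // Discover: fan the intent out into multiple research queries.
  const evidence: Evidence[] = [];
  for (const q of decomposeIntent(intent)) {
    evidence.push(...(await retrieve(q))); // every grounding call is a search
  }

  // Evaluate: aggregate evidence per candidate.
  const totals = new Map<string, number>();
  for (const e of evidence) {
    totals.set(e.candidate, (totals.get(e.candidate) ?? 0) + score(e));
  }

  // Decide: the shortlist is the output of a search, even with no query box in sight.
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 3)
    .map(([candidate]) => candidate);
}
```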

AI-mediated trust will replace human trust hierarchies

Not because humans are lazy, but because humans can no longer tell what's real.

Signal hierarchies are collapsing: AI can generate polish, imperfection, and authenticity at zero marginal cost, making every traditional trust marker reproducible and therefore unreliable. Fake reviews, weaponised social proof, and astroturfed discussions flood every reputation channel. Humans cannot evaluate this landscape — they lack the capacity to cross-reference transaction data, behavioural signals, review velocity, and linguistic patterns across millions of data points. Well-trained AI systems can. This creates a trust inversion: for the first time, the intermediary between consumer and brand may be more trustworthy than either party. The rational human response is to delegate more decisions to AI — not just trivial ones but consequential ones. Each good recommendation builds trust in the system, which drives more delegation, which gives the system more data, which improves its filtering. This delegation cascade is the engine behind the rise of ambient search. It also means that being genuinely good becomes the only viable long-term strategy, because the machines will eventually determine if you are not.

Source: The Hotmail effect — Jono Alderson, 25 Oct 2025

Search is how markets function

Search is not a channel sitting beside the market; it is the mechanism through which markets are explored, options are filtered, and decisions are made. People search when they are confused, curious, comparing, validating, or ready to act. Recommendation engines, marketplaces, maps, internal site search, review platforms, AI assistants, and traditional search engines all perform versions of the same underlying function: reducing uncertainty between intent and choice. This is why SEO was always more important than the industry understood — not because Google mattered, but because search-shaped decision-making mattered. As more of that decision work gets delegated to software, the volume and significance of search increases even when query boxes disappear.

Source: Clicks don’t count (and they never did) — Jono Alderson, 12 Mar 2026

AI changes the evaluator

The decisive shift is not from blue links to chat interfaces. It is from human-first evaluation to machine-mediated evaluation. The systems increasingly sitting between brands and buyers do not browse the web the way humans do. They aggregate evidence, compare options, retrieve supporting context, and compute recommendations before a person encounters the shortlist. Each decision is freshly derived from available evidence, even when that evidence includes sticky prior memory, inherited reputation, and compressed summaries from earlier interactions. This changes what counts as persuasive. Machines are not convinced by rhetoric, polish, or theatrical confidence. They are persuaded by clarity, consistency, corroboration, and usable evidence.

Source: Optimising for the surfaceless web — Jono Alderson, 30 Oct 2025

Recommendation depends on legible truth

In a machine-mediated market, success depends less on being visible and more on being eligible for inclusion. Eligibility comes from two things working together: genuine competitiveness in the real world, and legibility to the systems that must recognise that competitiveness. Being good is necessary but not sufficient if your evidence is fragmented, contradictory, hidden behind bad architecture, or missing from the public record. Legibility is not cosmetic optimisation. It is the act of making real quality, real capability, and real trustworthiness machine-comprehensible through clean artefacts, coherent assertions, stable performance, third-party corroboration, and governed public truth. Visibility is the by-product. Inclusion is the prize.

Source: A page is more than just a container for words — Jono Alderson, 3 Feb 2026
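
One small, concrete form of legibility is structured data. The sketch below is a hypothetical example (the organisation and URLs are invented; the schema.org vocabulary is real) of a single coherent, machine-readable assertion that can be corroborated against the rest of the public record.

```typescript
// Hypothetical example: one coherent, machine-readable assertion about an
// organisation, expressed as schema.org JSON-LD. The organisation and URLs
// are invented; the vocabulary ("@context", "Organization", "sameAs") is real.

const publicRecord = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Widgets Ltd",        // one canonical name, used everywhere
  url: "https://www.example.com",
  logo: "https://www.example.com/logo.svg",
  description: "Manufacturer of industrial widgets.",
  sameAs: [                            // third-party surfaces that corroborate the entity
    "https://en.wikipedia.org/wiki/Example_Widgets",
    "https://www.linkedin.com/company/example-widgets",
  ],
};

// Embedded as <script type="application/ld+json"> and kept consistent with every
// other public description, this is what legibility looks like in practice:
// the same claim, stated cleanly, in a form machines can read and cross-check.
console.log(JSON.stringify(publicRecord, null, 2));
```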

1. Business

Competitiveness is not a marketing function

It is an organisational capability.

The structural drivers of brand competitiveness (experience quality, distribution, reputation, distinctiveness, and commercial proof) span product, operations, customer service, and finance. Organisations that silo these under "marketing" or "SEO" will systematically underinvest in the capabilities that increasingly determine visibility in both search and AI systems. This implies a need for cross-functional ownership models where competitiveness metrics are shared KPIs, not departmental vanity metrics.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

Upstream influence is risk management, not philanthropy

Most businesses build on foundations they did not pour — WordPress, Shopify, React, Chromium, Schema.org, Stripe. These systems are not weather; they are built by people, in public, with roadmaps and issue trackers. Sponsoring contributors, participating in roadmap discussions, and shaping standards builds relationships with maintainers, provides early visibility into the direction of travel, and creates the ability to advocate for features that matter commercially. Organisations that only react to platform changes are always slightly behind. Organisations that participate are steering. This does not require engineers — product managers, SEOs, and marketing leaders who can articulate real-world needs change the tone of standards conversations from theoretical to tangible.

Source: Raise the floor — Jono Alderson, 2 Mar 2026

Speed is a capability filter, not a technical metric

Web performance collapses a huge amount of organisational behaviour into a single observable outcome. Architecture choices, tooling decisions, team structures, dependency management, development culture, and operational discipline all pass through one bottleneck: the code that reaches the browser. Fast websites require cross-team coordination, architectural restraint, and platform understanding. Slow websites almost always reveal systemic dysfunction — not engineering incompetence, but an organisation that has lost the ability to reason about its systems end-to-end. When a website is slow, the problem is usually the organisation, not the website.

Source: Speed is the first competency test — Jono Alderson, 6 Mar 2026

Small businesses have structural advantages

…that large organisations cannot replicate.

Small businesses can decide fast (no committees or quarterly cycles), take opinionated positions ("this isn't for everyone"), pivot or rebuild their stack in days, and maintain naturally coherent brand signals because one person controls product, messaging, tech, and customer experience. In a world where AI systems evaluate entities based on coherent, consistent signals across the web, small businesses with clear identities and tight feedback loops may be disproportionately favoured over large brands with fragmented, committee-driven presences. Large organisations spend enormous effort trying to simulate what small businesses get for free. Conversely, large organisations can leverage scale for reliability and distribution — but cannot easily manufacture the focus and specificity that defines the "demonstrably best" pole.

Source: The middle is a graveyard — Jono Alderson, 10 Jan 2026

"Be the market" is a viable business model

But only if you serve the market, rather than compete with it.

Infrastructure providers (Shopify, Stripe, Cloudflare), marketplaces (Etsy, specialist directories), and reputation platforms (G2, Trustpilot) occupy a distinct strategic position: they enable others to compete without competing directly themselves. As AI agents need structured, machine-readable access to evaluate markets, platforms that provide clean data, APIs, and curated information become essential infrastructure for machine-mediated commerce. The business model risk is own-brand temptation: the moment a platform starts competing with its participants (Amazon Basics vs marketplace sellers), it erodes the trust that makes the platform valuable. The organisational discipline required is to remain the map, not a destination on it.

Source: The middle is a graveyard — Jono Alderson, 10 Jan 2026

Machine memory is sticky

Reputational damage is asymmetric.

Machines log, compress, and share every interaction with your brand. Those accumulated memories form an immune system that defends itself against friction, inconsistency, and unreliability. Weaknesses are recorded, compressed into durable summaries, and spread across systems, reducing your chances of rewriting the story. The flywheel is self-reinforcing: once scarred, fewer systems revisit you, which starves you of the fresh positive signals needed to overturn the old diagnosis. Recovery is asymmetric — the effort required to dig out is many times greater than the effort it took to fall in. This is not malice; it is mechanics. In a network optimised for efficiency and trust, bad memories are easier to keep than to re-evaluate. Prevention (building processes, data, and experiences that resist scarring in the first place) is dramatically cheaper than cure.

Source: Marketing against the machine immune system — Jono Alderson, 24 Sep 2025

Someone must own the brand's public record

The brand's "public record" extends far beyond the website: press releases, partner portals, speaker bios, investor materials, Wikipedia entries, industry directories, archived PDFs, LinkedIn profiles, conference listings. Machines will pull from any of it. An ex-employee still listed as "Head of Innovation" on a high-ranking event page, an outdated product name in a supplier portal, a superseded logo on an open-license repository — all of these feed the model's understanding. When reality changes (product launch, rebrand, leadership shift), the narrative update must be treated as a programme across all these touchpoints, not just a press release. This requires governance: someone who owns the coherence of the brand's public record across the entire web, with the authority to coordinate PR, content, SEO, legal, and product teams. Same orchestra, new conductor.

Source: On propaganda, perception, and reputation hacking — Jono Alderson, 14 Aug 2025

Marketing is not advertising

Conflating them defunds the capabilities that matter.

"Performance marketing" is a euphemism for advertising with attribution. Platforms benefit from conflating marketing with paid media because your budget is their revenue. When "doing marketing" becomes synonymous with "spending money on ads," entire capabilities get defunded: organic strategy is deprioritised, brand-building becomes a luxury, long-term vision is replaced by short-term optimisation, and teams chase metrics that are easy to measure rather than outcomes that matter. Marketing is the umbrella — understanding markets, shaping products, crafting positioning, building awareness, nurturing relationships. Advertising is one tactic within it, not the sum of it. Performance is an outcome, not a methodology: defining marketing by what is measurable is a category error. This matters for the manifesto because organisations that define marketing as ad-buying structurally underinvest in the capabilities that actually drive competitiveness in a machine-mediated market — distinctiveness, reputation, experience integrity, content quality. If your "marketing strategy" is an ad budget and a spreadsheet, you are renting attention, not building anything that persists.

Source: "Performance Marketing" is just advertising (with a dashboard) — Jono Alderson, 3 Jul 2025

Ship the obvious

Testing culture is often institutional fear in disguise.

Most organisations default to testing not because it is effective, but because it is defensible. If every improvement needs a test, every test needs sign-off, and every sign-off needs consensus, you do not have a strategy — you have inertia. SEO is not a closed system: you cannot isolate variables, control groups do not hold, and the world changes while you wait for significance. Testing things that are obviously right (improving speed, fixing broken navigation, clarifying structure) is not caution — it is procrastination dressed as rigour. Meanwhile, the things that matter most (quality, credibility, helpfulness, editorial integrity) resist clean measurement entirely. You cannot A/B test authority. So organisations over-invest in the testable and under-invest in the meaningful. The fix is cultural: make quality a non-negotiable default, not a concession to be fought for. Empower teams to ship obvious improvements without political cover. Test only what is genuinely uncertain. If your team needs a test to prove that fixing something broken will not backfire, the issue is not uncertainty — it is fear. The future belongs to the bravest, not the smartest.

Source: Stop testing. Start shipping. — Jono Alderson, 30 Jun 2025

Over-engineered stacks concentrate power in engineering teams

…and lock everyone else out.

The more complex the technical stack, the more it concentrates power in the hands of the people who built it. Marketers cannot update copy without raising tickets. SEOs cannot fix a canonical tag without a deployment. Content editors cannot preview what users will see. Every change routes through engineering — which deepens the dependency, which justifies more abstraction, which requires more engineers. This is not malicious; it is structural. The stack was built by developers for developers, so when something breaks or needs changing, it goes back to the people who understand the machinery. The result is an organisation where web strategy is decided not by the people responsible for outcomes, but by the people responsible for infrastructure. This has direct commercial consequences: campaign launches are delayed, A/B tests are scrapped, SEO improvements sit in backlog for sprints. Complexity is not just a technical burden — it is a financial one. More engineers, slower launches, higher maintenance costs, less agility. Small businesses are at an advantage here: a lean, simple stack with a non-technical founder who can update their own site is more agile than an enterprise with a six-sprint deployment pipeline. The right technical architecture is the one that returns control to the people responsible for outcomes.

Source: JavaScript broke the web (and called it progress) — Jono Alderson, 19 Jun 2025

The hidden cost of unhelpfulness never shows up in a dashboard

Organisations optimise for resilience — reactive problem-solving — instead of proactive helpfulness. The logic is that reactive failures are measurable and quantifiable; proactive investments in helping users before they fail are not. So most brands invest in fixing problems after the fact and fight for the margins, while leaving the upstream helpfulness gap unfilled. The hidden cost of unhelpfulness is immeasurable and therefore invisible: users who are failed do not complain, do not raise tickets, do not leave reviews. They simply stop choosing you. Quietly. Permanently. That silently translates into decreasing reach, increasing acquisition costs, and competitors eating your lunch — none of which is traceable to the outdated page that failed a user at a critical moment. Resilience is what users need when they have been failed, and it leaves a bad taste. In a marketplace where consumers expect brands to help them, a model based on reacting to failures cannot build loyalty or preference. This matters in the AI era because machine systems that evaluate your brand will absorb the ambient signal of user frustration, forum complaints, and negative sentiment — even when no formal complaint was ever made. The hidden cost of unhelpfulness is not just commercial; it is reputational infrastructure damage that compounds invisibly over time.

Source: Airport resilience — Jono Alderson, 1 Dec 2023

Source: Businesses are not designed to be helpful — Jono Alderson, 1 Oct 2023. The fundamental tension — between the desire to sell and the need to help — is where the SEO industry has historically thrived, often on the wrong side. Companies that fail to embed helpfulness into their DNA will lose out to platforms where users seek authentic voices. Being helpful is not a strategy; it is a business imperative.

AI breaks business models built on monetising explanation and discovery

AI systems do not only erode publisher traffic; they also sever the discovery funnels that monetise explanation, documentation, and abstraction. If a business captures demand by helping users understand how to use a tool, compare options, or navigate complexity, then an AI intermediary can ingest that explanatory layer and answer the question directly without sending attention, intent, or revenue back to the source. This is especially dangerous where the monetisation model depends on a documentation or content funnel feeding product sales. Usage can grow while the commercial mechanism collapses.

The deeper lesson is about business-model resilience. Models built on one-off monetisation of accumulated knowledge (for example, lifetime access to abstractions, templates, or educational assets) are especially vulnerable when AI can unbundle explanation from purchase. Sustainable defensibility increasingly lives in operational layers that cannot be trivially extracted: hosting, managed services, workflow integration, proprietary data, community, distribution, or recurring participation in the customer's ongoing work. In an AI-mediated market, owning attention at the moment of explanation is less durable than owning the system the customer has to keep using.

Source: The Tailwind paradox: the high price of “enough” — Joost de Valk, 31 Mar 2026

2. Strategy

Measure competitiveness, not visibility

Visibility — in search results, AI answers, or any other interface — is an output, not an input. It reflects the aggregated strength of underlying competitive signals: experience integrity, mental and physical availability, distinctiveness, reputation, and commercial proof. Strategies that target visibility directly (rankings, prompt mentions, share-of-answer) are optimising a reflection rather than the thing casting it. Competitive strategy should focus on strengthening the structural capabilities that generate visibility as a by-product.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

Distinctiveness drives brand growth

Unique utility drives information survival

These are different claims operating at different levels, and the manifesto holds both. At the brand level, Ehrenberg-Bass research is clear: consumers rarely notice or care about strategic positioning differences. Growth comes from mental availability and physical availability, powered by distinctive brand assets — not USPs. But at the information level, AI synthesis compresses interchangeable content into canonical answers. What you publish must be structurally non-replicable — original data, tools, opinionated frameworks, real case studies — because retrieval systems collapse sameness.

Source: Stop trying to rank for keywords — Jono Alderson, 21 Feb 2026

Competitive advantage comes from what you build

Workarounds accumulate technical debt: cognitive overhead, fragility, slower publishing, scarier upgrades. Upstream contributions — to platforms, standards, browsers, and open source — build compounding equity. When the baseline improves, obvious technical friction disappears, and differences in strategy and execution become more visible. Raising the floor raises the ceiling for organisations that are ready. If your edge depends on rivals being slowed by bad defaults, that is not a competitive advantage — it is a tax you have learned to pay more efficiently.

Source: Raise the floor — Jono Alderson, 2 Mar 2026

Source: Less is more — Jono Alderson, 1 Aug 2024. A useful forcing function: what if every new URL required you to delete an old one? Digital space is infinite but attention is not. The long tail of mediocre content does not just fail to rank — it dilutes the signal quality of everything else. Depth over breadth. Precision over scale. The future of successful websites is defined by what they choose not to do.

Performance is a system property, not a tuning exercise

Speed is determined by the shape of the system, not by individual bottlenecks. Architecture that assumes large client-side bundles, complex runtime logic, and browser-side data fetching will be slow regardless of how many optimisation sprints are applied. Fast systems make different architectural choices from the start — delivering content early, minimising browser workload, treating JavaScript as enhancement rather than foundation. Strategic decisions about platform architecture should treat performance as a first-class constraint, not an afterthought.

Source: Speed is the first competency test — Jono Alderson, 6 Mar 2026
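
As a rough illustration of the architectural difference (the markup and data are invented), the first shape delivers content in the initial HTML; the second ships an empty shell and pushes the work into the browser, and no optimisation sprint removes that built-in cost.

```typescript
// Illustrative only; the product data and markup are invented.

const products = [
  { name: "Widget A", price: "£9" },
  { name: "Widget B", price: "£12" },
];

// Shape 1: the content arrives in the initial HTML. The browser has almost
// nothing left to do before the page is useful.
function renderServerSide(): string {
  const rows = products.map((p) => `<li>${p.name}: ${p.price}</li>`).join("");
  return `<ul id="products">${rows}</ul>`;
}

// Shape 2: the server ships an empty shell. The browser must download a bundle,
// execute it, fetch the data, and only then render. That extra work is a property
// of the architecture, and no amount of tuning removes it.
function renderClientSideShell(): string {
  return `<div id="app"></div><script src="/bundle.js"></script>`;
}

console.log(renderServerSide());
console.log(renderClientSideShell());
```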

The middle is a graveyard

Markets polarise around "safest default" and "demonstrably best".

Human limitations (time-poor, information-constrained) historically rewarded acceptable competence — breadth, adjacency, coverage. AI-mediated decision-making erodes this. Systems that can evaluate the full competitive landscape simultaneously do not use brand familiarity as a proxy for quality, do not infer expertise from adjacency, and re-evaluate every option fresh rather than reusing past decisions. Two positions survive: the most reliable default for a given job (Amazon-style scale and consistency), or demonstrably best for a specific need (specialist depth). The middle — adequate but undifferentiated — introduces risk that omniscient agents are designed to eliminate. This is not a tactical problem solvable with better SEO; it is a strategic positioning question. Growth increasingly requires subtraction: clarity beats coverage, depth beats breadth.

Source: The middle is a graveyard — Jono Alderson, 10 Jan 2026

Source: Differentiate or disappear — Jono Alderson, 7 Mar 2025. You are not competing against other brands — you are competing against indifference and algorithms that decide whether you deserve attention. The diagnostic: what would be lost if your brand disappeared tomorrow? If you cannot answer that, neither can your audience nor the models that decide whether you are worth including.

Source: What the Walking Dead taught me about the future of consumer loyalty — Jono Alderson, 26 Mar 2017. The earliest articulation of the differentiation imperative: finite attention plus infinite choice makes the value exchange the only battleground for loyalty. If the value exchange feels imbalanced, the audience looks elsewhere — and while they are looking, they are more receptive to competitors.

Ecosystem orchestrators may be a conditional third stable position

…but only if they resist competing with their own participants.

Platforms that connect specialists to demand (marketplaces, aggregators, infrastructure providers) can survive the collapsing middle by being market infrastructure rather than a contestant. AI agents evaluating options need structured, curated access to competitive landscapes — orchestrators that provide this become more valuable as AI mediates more decisions. Network effects compound: more specialists listed means a more complete map of the market. However, this position is conditional. First, AI agents may eventually disintermediate aggregators by assembling comparisons directly from the open web — the orchestrator's survival depends on curation and structure being genuinely better than what agents can build independently. Second, the moment an orchestrator competes with its own participants (own-brand products, preferential ranking), it undermines the trust that makes it useful. Third, orchestrators that extract rent without bearing quality risk (comparison sites that take commissions but don't guarantee outcomes) may find agents routing around them to go direct. The position is viable but fragile: it requires discipline.

Source: The middle is a graveyard — Jono Alderson, 10 Jan 2026

Where value is integrated matters more than who owned the old layer

A firm can remain world-class at integrating the old stack and still lose if AI shifts the layer where user value is actually assembled. Competitive advantage is not a permanent property of a company; it is a function of where coordination, interface, and fulfilment happen. If the decisive layer moves, incumbents can become misaligned to the new point of control even while their historical strengths remain intact.

This sharpens the argument about ecosystem orchestrators and interface control. In AI-shaped markets, winning may depend less on owning the best component and more on owning the layer where intent, context, and execution are stitched together. The moat belongs to the layer that integrates demand with fulfilment, not necessarily to the firm that dominated the previous technical or product boundary.

Source: Apple’s 50 Years of Integration — Ben Thompson, 31 Mar 2026

Compete for existence, not attention

In a system where AI models continuously compress, merge, and overwrite their understanding of the world, persistence is an active process. Entities drift, meanings decay, facts are reinterpreted. You do not exist once; you exist continuously, by reasserting your pattern every time the world is rewritten. This is not repetition — it is reinforcement: maintaining coherence so that traces of your presence echo across time, context, and medium. A brand becomes not just a promise to people but a pattern that helps machines resolve ambiguity — a consistent cluster of language, data, and context that supports their confidence. When your presence improves the system's accuracy, you stop being external and become infrastructure. The competition is no longer for attention; it is for whether the machine still knows you when it retrains.

Source: Optimising for the surfaceless web — Jono Alderson, 30 Oct 2025

Define your category or the machine will define it for you

AI systems optimise for coherence as much as correctness. The first widely cited definition of a category, the earliest comprehensive guide to a topic, the clearest framing of a problem — these become gravitational anchors. Every subsequent mention is interpreted through that lens. Competing narratives struggle to break in because the model already has a coherent story. This means category definition is a first-mover advantage: whoever frames the language, exemplars, and boundaries of a category shapes how machines understand every player in it. If you do not help define your category, the machine will do it for you, using whatever scraps it can find. Pick the ideas, phrases, and definitions you want attached to your name for the next five years. Seed them in durable, citable sources — encyclopaedic entries, standards bodies, industry reference material — not transient campaign pages.

Source: On propaganda, perception, and reputation hacking — Jono Alderson, 14 Aug 2025

Standing still is falling behind

Continuous adaptation is the operating rhythm.

Digital performance is relative, not absolute. Rankings, visibility, and attention are zero-sum. The web is a dynamic ecosystem where external forces — link graph redistribution, cultural events, platform policy changes, competitor moves, macro-economic shifts — constantly reshape the landscape whether you act or not. "Nothing changed on our site" is not a defence; it is an admission that the environment moved and you did not. Compounding micro-improvements beat sporadic overhauls. Every page, product, and process is a draft — there is no final version. Treat nothing as finished. Improve 100 small things in 100 small ways continuously, rather than waiting for the big redesign (which will take three times longer than promised). This is the operational expression of persistence: you exist continuously by reasserting your pattern, not by publishing once and hoping.

Source: Standing still is falling behind — Jono Alderson, 11 Aug 2025

Localisation is for the systems, not just the customers

AI systems are trained on vast multilingual datasets and do not respect market boundaries. Content in Polish can feed a model's understanding of a concept it surfaces in English. A Japanese forum mention can boost perceived authority in Germany. A mistranslation in Korean can colour entity understanding globally. This means localisation is no longer about translating for customers in markets you serve — it is about shaping the multilingual data ecosystem that determines how machines understand you everywhere. Absence is not safety; it is vulnerability. If you have no footprint in a language, the model guesses, extrapolates, or imports context from elsewhere — and that guess may be wrong. Strategic presence in markets you will never sell to can shape the ambient data that teaches machines what your brand is. This is perception engineering applied globally.

Source: Shaping visibility in a multilingual internet — Jono Alderson, 8 Aug 2025

The next generation of marketing leaders will orchestrate, not optimise

Brand and SEO are not parallel channels — they are causally linked. Brand investment creates the signals that search converts: recognition drives navigational queries, preference drives click behaviour, trust drives link acquisition, salience drives entity association. Google does not just index pages; it indexes reputation. This means brand spend is upstream signal generation for search performance, not a separate budget line. But the answer is not for SEO to take the brand budget and spend it on technical tactics — that would kill the creative, emotional, unmeasurable work that makes brand generate signals in the first place. Nobody Googles a brand they have never heard of. The answer is orchestration: brand campaigns designed to be searched for, SEO insights shaping brand messaging, shared briefs and editorial calendars, shared goals. The marketing leaders who win will not just optimise — they will integrate. Every ad, article, and answer reinforcing a consistent, cohesive signal across every surface. Silos between brand and performance do not just create inefficiency; they destroy the amplification loop that makes both work.

Source: SEOs want the brand budget. But should they get it? — Jono Alderson, 16 May 2025

Source: From Meta Tags to Meta Discipline — Jono Alderson, 4 Apr 2025. SEO is a meta-discipline — not a function but connective tissue that weaves through the organisation, linking search behaviour, business strategy, and technical execution. The best practitioners do not execute; they facilitate, educate, and influence across the org. Stop firefighting and justifying presence; own the conversations that shape the business.

Source: Fulfilling vs. creating demand — Jono Alderson, 11 Nov 2024. SEO by default is demand capture — optimising for searches that already exist. The bigger lever is demand creation: using brand, PR, and paid channels to get more people searching for terms you already own. A campaign that sparks interest sends behavioural signals that protect existing rankings, often with better ROI than competing for new keyword territory.

Source: SEO is no longer enough — Jono Alderson, 1 Dec 2023. Nobody imposed the constraint that SEO must produce content, chase links, and improve websites — the discipline did that to itself. It can choose to become something better: an industry that improves how helpful businesses are to their audiences. Search does not need to be a zero-sum game.

Be the first click, not the last

Win the journey before it starts.

Everyone fights for bottom-of-funnel: the moment someone is ready to convert, informed, and comparing options. The entire upstream journey — the confusion, the exploration, the problem-framing — is abandoned. This is strategically backwards. If you are the first useful thing someone encounters, before they have formed preferences and before they are in comparison mode, you earn trust that insulates the rest of the journey. They are less likely to wander, less likely to start the search again, less likely to end up in a competitor’s comparison table. Being genuinely helpful early in the journey is not altruism — it is pre-emptive preference formation. The deeper strategic point: stop being customer-centric and become audience-centric. Customers are the small slice who make it through the system. The audience is everyone upstream. If you want to win at the gatekeeper level — to convince Google, AI agents, and every other filter that your content deserves to be surfaced — you must be useful to the whole audience, not just the ready-to-buy slice. This is how you earn links from journalists, get quoted and bookmarked, and build the ambient reputation that AI systems encode. Show up for the audience, consistently and generously, and you earn the right to show up for the customer.

Source: Contentless marketing — Jono Alderson, 6 May 2025

Competitor research should analyse strategic inputs

…not just lagging SEO outputs

Most competitor analysis in SEO looks backward: rankings, keywords, links, content footprints, technical features. Those are useful, but they are lagging signals. They tell you what has happened, not why it happened or what might happen next. A more strategic approach studies the inputs behind competitor behaviour: organisational structure, team seniority, funding model, risk tolerance, agency relationships, executive incentives, operating constraints, and growth expectations. These shape the kinds of strategies a business can sustain, the decisions it will default to, and the directions in which it is likely to move. If strategy is about outmanoeuvring competitors through time, then understanding their internal logic matters more than copying their visible outputs.

Source: Competitor research reimagined — Jono Alderson / SISTRIX, Jul 2024

SEO strategy has ethical and societal consequences

…not just commercial ones

SEO is often treated like a technical game played on abstract systems. In reality, it shapes what information people encounter, what businesses survive, how trust is distributed, and which narratives gain legitimacy. Visibility is not neutral. Amplifying one source suppresses another; shaping what is found also shapes what is believed. As machine-mediated discovery becomes more powerful, the social consequences of optimisation get bigger, not smaller. This does not overturn the manifesto’s strategic focus, but it does add a stewardship obligation: marketers, SEOs, and platform designers are not merely competing inside information systems; they are helping to shape the public knowledge environment those systems inherit.

Source: It’s all just a game, right? — Jono Alderson / SISTRIX, May 2024

Systems reward underlying fitness, not the appearance of improvement

One of the oldest mistakes in optimisation is to obsess over the visible score instead of the underlying conditions the score reflects. People compare small movements, reverse-engineer individual fluctuations, and hunt for tactical shortcuts because the ranking is the only output they can see. But complex systems rarely reward isolated tricks in any durable way. They reward the deeper qualities that the score is trying to proxy: technical fitness, relevance, originality, usefulness, reputation, and integrity. This is why audits so often reveal not a single magic bullet but hundreds of small bugs, compromises, and weaknesses. The path back is usually not a clever hack but the slow work of becoming less broken. Superficial improvements can produce temporary lifts; systems eventually normalise toward what is genuinely fit.

Source: Jono Alderson, conference talk transcript / notes

Technical control creates strategic freedom when organisations are bottlenecked

A recurring constraint in digital strategy is that organisations know what they should improve but cannot act because of backlogs, legacy systems, cross-team dependencies, or misplaced ownership. The talk’s edge examples matter less as Cloudflare-specific tricks than as a general strategic lesson: control over the delivery layer can create freedom to improve experiences, fix machine-facing issues, reduce wasted infrastructure cost, and test better behaviours without waiting for full organisational alignment. This reinforces a broader manifesto theme: competitiveness is often constrained not by lack of insight but by lack of executable leverage. The businesses that can create lightweight intervention points between intention and implementation gain disproportionate advantage.

Source: Going over the edge — Jono Alderson, conference talk transcript / notes
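
As a hypothetical example of such an intervention point, sketched for the Cloudflare Workers runtime (hostnames and paths are invented): a small edge worker rewrites a stale canonical URL while the underlying fix waits in the backlog.

```typescript
// Hypothetical sketch for the Cloudflare Workers runtime: fix a machine-facing
// issue (a stale canonical tag) at the edge, without touching the origin stack.
// Hostnames and paths are invented.

export default {
  async fetch(request: Request): Promise<Response> {
    const response = await fetch(request); // pass the request through to the origin

    const contentType = response.headers.get("content-type") ?? "";
    if (!contentType.includes("text/html")) {
      return response; // only touch HTML documents
    }

    // HTMLRewriter is part of the Workers runtime; it streams the document
    // and lets us patch individual elements without buffering the page.
    return new HTMLRewriter()
      .on('link[rel="canonical"]', {
        element(el) {
          const href = el.getAttribute("href");
          if (href && href.startsWith("http://old.example.com/")) {
            el.setAttribute(
              "href",
              href.replace("http://old.example.com/", "https://www.example.com/")
            );
          }
        },
      })
      .transform(response);
  },
};
```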

Marketing and CRO were built around crossing a threshold that is now disappearing

For most of marketing history, influence depended on getting a person across a threshold into an environment you controlled. In physical retail that meant getting them into the shop; on the web it meant getting the click, the visit, the session, the funnel entry. Marketing’s job was to get them in. CRO’s job was to optimise what happened once they were inside. The whole paradigm assumed that the decisive moments of framing, persuasion, and evaluation happened after arrival. That assumption is breaking. Search engines, feeds, answer systems, and AI assistants increasingly resolve intent upstream, before a user ever reaches the website, and often without them needing to arrive at all. That means the threshold is no longer a reliable leverage point, and disciplines built around optimising either side of it have to move upstream to remain relevant.

Source: Surviving the Surfaceless Web — Jono Alderson, conference talk transcript / notes

“Is this a ranking factor?” is the wrong question

The right question is, “does this change behaviour?”

Ranking-factor thinking narrows strategy to the small set of mechanics we can most easily observe inside search interfaces. But many of the things that meaningfully affect visibility and recommendation do so indirectly, by changing how people feel, behave, talk, buy, review, return, and recommend. Typography, packaging, delivery quality, product truth, above-the-line advertising, customer service, leadership behaviour, office culture, and local reputation may all shape the data trails that systems later interpret as evidence of fit. The more useful strategic question is therefore not “does X directly affect rankings?” but “might this alter human behaviour in ways that change the signals from which systems infer value?” That reframing expands optimisation from page mechanics to market mechanics.

Source: Are puppies a ranking factor? — Jono Alderson, conference talk transcript / notes

The interface that assembles demand becomes the market

AI shifts competitive advantage toward the layer that assembles user intent, comparison, trust, and fulfilment. That layer may be a search interface, shopping agent, protocol, or commerce operating system, but it is increasingly not the brand website itself.

The strategic prize is not just owning the product or the transaction. It is owning the orchestration layer that frames choices, mediates relationships, and determines how markets are navigated.

Agentic commerce does not remove intermediaries. It creates more powerful ones.

Source: AI shopping gets simpler with Universal Commerce Protocol updates — Ashish Gupta, 19 Mar 2026

Source: Millions of merchants can sell in AI chats — Shopify, Mar 2026

Source: MPP — Machine Payments Protocol — Tempo / Stripe, 2026

You do not rank first; you become rankable first

Visibility in retrieval systems increasingly depends on signals earned outside the SERP. Familiarity, engagement, citations, trust, and platform presence help determine who gets considered before ranking mechanics do their work.

That makes SEO less a standalone optimisation discipline and more the downstream expression of being known, reinforced, and legible across the wider information ecosystem.

In that environment, ranking is less often the starting point. It is the result of having already become credible enough to include.

Source: Google Search Console Insights with social channels — Barry Schwartz, Dec 2025

You have to canonicalise yourself

In a fragmented, AI-mediated ecosystem, a brand is represented by the aggregate residue of everything it has published, claimed, or forgotten across the web. Stale bios, outdated logos, old partner pages, abandoned profiles, recruiter databases, slide decks, and press releases all become part of its machine-readable identity.

That shifts brand-building from occasional storytelling toward ongoing maintenance. The task is not only to create a narrative, but to keep it accurate, consistent, and coherent across every touchpoint where people and machines might encounter it.

Reputation is no longer shaped only by flagship assets. It is assembled from fragments. Which means narrative hygiene becomes strategic.

Source: Expanded integrations with Google and OpenAI — Bloomberg, Sep 2025

If you are interchangeable, you are compressible

In AI-mediated markets, sameness is not neutral. It makes businesses easy to summarise, substitute, and ignore. If your products, positioning, claims, and content are interchangeable with the category average, a machine can collapse you into a generic answer without losing much fidelity.

That makes distinctiveness more than a branding advantage. It becomes a defence against compression. Businesses that converge on the mean become training data for their own irrelevance.

The risk is not only that a competitor outperforms you. It is that the machine decides there is no meaningful difference between any of you.

Source: SISTRIX July '25 NewsWatch — Jono Alderson, Jul 2025

3. Marketing

AI systems operationalise the signals that marketing science has studied for decades

Ehrenberg-Bass's framework of mental availability (brand salience in buying situations) and physical availability (presence where decisions happen) describes the same forces that LLMs use when deciding which entities to recommend. Brand growth comes from being easy to notice, easy to recall, and easy to buy — not from persuasion or optimisation. The most meaningful marketing metrics are therefore not extracted from interfaces (rankings, prompt mentions) but observed directly from the market: recall, salience, penetration, and distinctive asset recognition, typically via survey and panel research.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

Sameness is compressed

Unique utility is the minimum viable content strategy.

When AI systems synthesise answers and collapse similar documents into canonical sources, interchangeable content disappears from view. The question is not "how do we rank for X?" but "what can we create that competitors cannot easily replicate?" This applies at every scale: startups close to a problem can document trade-offs openly; commodity e-commerce can differentiate through compatibility guides, real testing, and community Q&A. Specificity beats generic authority. Note: this is information-level differentiation (what you publish), not brand-level differentiation (how you are perceived). Both matter, but they operate through different mechanisms.

Source: Stop trying to rank for keywords — Jono Alderson, 21 Feb 2026

Source: Adrift in a sea of sameness — Jono Alderson, 9 Jul 2025. Sameness is self-reinforcing: when the web is beige, the model learns to serve beige. Your content becomes fuel for your own redundancy.

Marketing shifts from performance to persistence

From visibility, to viability within the model.

The web is losing its surfaces. AI systems compress, distil, and internalise information — what survives is not what is visible, but what is useful, true, and integral to the model's understanding. Useful: content that clarifies, contextualises, or resolves ambiguity improves the model's predictions. True: signals that remain consistent across time, context, and corroboration become stable landmarks — contradictions and rhetorical pivots weaken entity definition until the model stops believing you exist. Integral: ideas that are cited, linked, quoted, and built upon become structural — their removal would create tension in the model's understanding. Old marketing rewarded novelty; machine systems reward consistency. Most campaigns capture a moment; few survive a model update. The goal is not to win visibility but to earn residency — to become something the machine recognises as part of its metabolism.

Source: Optimising for the surfaceless web — Jono Alderson, 30 Oct 2025

Signal hierarchies are collapsing

When everything can be faked, trust markers stop working.

An entire generation of marketers trained businesses to treat trust as a design problem — polish as proof of competence. Now AI can generate polish, imperfection, and authenticity itself at zero marginal cost. Every signal that once conveyed truth (professionalism, vulnerability, spontaneity) can be manufactured and optimised. Platforms amplify what triggers trust; creators imitate what performs; the algorithm learns from the imitation. Authenticity becomes a closed loop, refined until indistinguishable from what it imitates. The result: meaning collapses into noise. This is accelerating with fake social proof at industrial scale — AI-generated reviews, weaponised negative reviews as competitive sabotage, astroturfed community discussions that LLMs then ingest as genuine signal. Humans will increasingly distrust social proof even as they continue to seek it. The brands that survive this will be those whose reputation signals are verifiable, traceable to real transactions and real usage — not just plentiful.

Source: The Hotmail effect — Jono Alderson, 25 Oct 2025

Content is not advertising

Stop measuring it like ads.

Ads are designed to compel immediate action. Content builds trust, salience, and preference indirectly — in the messy middle where people loop through doubt, reassurance, comparison, and procrastination. Making content behave like ads ruins both functions: you strip out the qualities that make it engaging, and you fail to generate the conversions you were chasing. Most brand content is industrialised mediocrity — a process optimised to churn out keyword-targeted filler that looks like content without risking being interesting. It persists because it is measurable (traffic, CTR, assisted conversions), even though the metrics are meaningless proxies for the things content actually does. The most commercially valuable content is often the least "optimised" for conversions: the ungated guide that gets bookmarked, the explainer passed around Slack, the resource cited by journalists. These are signals of salience — harder to track, far more powerful than gated downloads. Authored, opinionated content with a real voice is risky and memorable. Industrialised filler is safe and forgettable. Only one has a future — and in an agentic world where systems effectively make one decision rather than giving you a hundred chances, mediocrity is not just wasteful, it is dangerous.

Source: If you want your blog to sell, stop selling — Jono Alderson, 3 Sep 2025

Treat content like product, not like campaigns

The measurement problem with content is not that it should be unaccountable — it is that it is held accountable to the wrong metrics. Ad metrics (clicks, CTR, cost-per-acquisition, conversion rate) measure immediate action. Content does not produce immediate action; it builds trust, salience, and preference over time. The solution is not "stop measuring" but "measure like a product, not like a campaign." Product metrics translate directly: adoption/penetration (how many in the target audience have encountered this?), retention (do people return, bookmark, reference it again?), referral/NPS (does it get shared in Slack, cited by journalists, linked by peers?), time-to-value (how quickly does it help?), churn/negative signal (how many leave with a worse impression?), and market share of attention (what proportion of category citations does this hold?). This framing makes content defensible: it has a lifecycle (maintained, updated, sunset when it stops working), an investment model (not publish-and-forget), and metrics that matter. It also kills the CTA problem — nobody expects a product to sell another product mid-use. The product IS the value. Content treated as product forces the "what do we uniquely know?" question before anything gets published, connects to the persistence model (products that retain and generate referrals are persisting), and explains why mediocre content should be killed rather than kept alive because "it drives some traffic."

Source: If you want your blog to sell, stop selling — Jono Alderson, 3 Sep 2025

Positive perception compounds. So does negative.

Machine memory is not neutral — it is a self-reinforcing loop in both directions. Positive framing in credible sources generates positive behaviour (branded search, clicks, longer engagement), which creates data patterns that make you look like the best to systems learning from aggregate behaviour, which reinforces the positive framing. The loop is slow to start but self-sustaining once established. The inverse is equally true: negative associations compress, circulate, and attract further negative signal. This is the mirror image of the flywheel of forgetting — and it means that the strategic priority is not just avoiding negative scar tissue, but actively seeding the right narratives in durable, high-gravity sources. Not advertising copy in disguise, but clear, accurate, repeatable language that works for you when stripped of context and paraphrased by a machine. Someone in the organisation must own the brand's "public record" — not just the website, but the wider corpus that describes you across the web.

Source: On propaganda, perception, and reputation hacking — Jono Alderson, 14 Aug 2025

Latent influence has four dimensions

Presence, Positioning, Perception, Permanence.

In a machine-mediated world, direct influence is disappearing. You do not persuade the user — you present your evidence to a system and hope it survives the summarisation. That requires investing in four dimensions of latent influence: (1) Presence — are you cited in places machines consider credible? Not just blogs and landing pages, but docs, forums, schema, transcripts, FAQs, datasets. (2) Positioning — are you described consistently and clearly across those surfaces, or do you show up fragmented, contradictory, or vague? (3) Perception — what is the quality and sentiment of your adjacents? Are you next to trusted voices, or surrounded by noise? (4) Permanence — are your signals stable, persistent, and embedded in surfaces likely to be crawled, trained, and referenced long-term? A competitor with better documentation, cleaner markup, tighter semantic alignment, and more coherent citations may become the default not because they are better, but because they are easier to summarise. This is branding for an audience that reads everything and forgets nothing. You are not trying to win a SERP. You are trying to shape a model’s memory. Inclusion is earned through embeddedness, not engagement.

Source: Everything is now opaque — Jono Alderson, 25 Jun 2025

Build outposts, not funnels

Search is everywhere your audience makes decisions.

Search is not just Google. TikTok, Reddit, YouTube, Amazon, LinkedIn — these are all search engines in their own right, each playing a different role in how audiences research, compare, and decide. People go to Reddit for unfiltered opinions, YouTube for walkthroughs, TikTok for fast tutorials, Amazon for ratings and stock. If your presence is only optimised for Google blue links, you are invisible for most of the actual journey. A surface area strategy means building outposts: native, useful content placements across the platforms where your audience already spends time and makes decisions. This is not diversification for its own sake — it reinforces Google presence too. Google increasingly surfaces Reddit threads, YouTube videos, and TikTok content in its own results. Earning attention on those platforms feeds back into traditional search visibility. The structural shift: the old playbook was build a site and wait for traffic. The new reality is that attention is fragmented, journeys are unpredictable, and search is ambient. Content needs to know where it belongs before it is published, and be native to those spaces. This connects directly to the four dimensions of latent influence: Presence, Positioning, Perception, Permanence — surface area strategy is how you operationalise Presence across the full ecosystem, not just your owned properties.

Source: Contentless marketing — Jono Alderson, 6 May 2025

Earned third-party signals are now reputation infrastructure

…not link acquisition.

The case for earned media has fundamentally changed. The old digital PR argument was about link acquisition: DA scores, anchor text, PageRank manipulation. That critique was correct — it was renting algorithmic attention, not doing meaningful marketing. The new case is different in kind. Journalists, analysts, customers, partners, communities, and other companies are high-provenance signal generators. What they publish about you gets absorbed into model training corpora, appears in the latent web as third-party non-self-generated signals, is harder to fabricate than anything you publish yourself, and shapes the ambient reputation that AI systems use to form opinions about you. The link is a side effect, not the point. What matters is the entry into the citation economy: a journalist's description of who you are, a customer's review on a trusted platform, a partner's public endorsement — these are Permanence and Perception signals in the latent influence framework, delivered by someone other than you. This is existentially important because models weight third-party, non-self-generated signals more heavily than self-published claims. You cannot self-certify your own trustworthiness. You cannot train a model to trust you by publishing more about yourself. Only credible third parties can do that for you. The management of this signal ecosystem is therefore a core marketing function: monitoring what is being said about you across surfaces, in what framing, correcting misattributions, seeding accurate context in credible places, surrounding negative discourse with stronger signals. This is not PR as it was understood. It is reputation operations — active, ongoing, existential.

Source: Jono Alderson, in conversation — Mar 2026. No published article yet.

Brand maintenance is canonicalisation at ecosystem scale

In a fragmented, AI-mediated ecosystem, brand-building is not just about campaigns, storytelling, or salience. It is also about maintenance: keeping the public record coherent across every place a system might encounter you. Old press releases, stale social profiles, outdated partner pages, abandoned listings, old decks, podcast bios, app store entries, and inconsistent descriptions all become part of the evidence layer from which machines construct identity. This makes brand management increasingly similar to canonicalisation: not merely creating the preferred version of your story, but ensuring that the preferred version survives intact across the surfaces, formats, and third-party references that shape retrieval and recommendation.

Source: On brand maintenance — Jono Alderson / SISTRIX, Oct 2025

Platforms are environmental; websites are transactional

A useful strategic distinction is that brand websites and apps are usually transactional environments: people visit them to complete a task, get information, or buy something. Platforms are different. They are environmental: places where audiences spend time, consume media, maintain relationships, and let the system curate what they see next. This creates an asymmetry. Brands try to buy or earn attention on platforms in order to pull users away into a more controlled environment, while platforms are designed to minimise that leakage and satisfy users in situ. As platforms become faster, more personalised, and better at packaging content, the burden on the website rises: if you want people to leave the platform, your destination has to offer something distinctly valuable.

Source: Digital marketing in a post-digital world — Jono Alderson, conference talk transcript / notes

Distributed content trades ownership for discoverability and influence

As audiences spend more of their time inside platforms, brands face a strategic trade-off: keep editorial and educational content on owned properties and accept shrinking attention, or distribute that content into external environments where people already are. Distributed content reduces ownership and can weaken direct attribution, but it also reduces friction, improves discoverability, and allows brand-building to happen inside the user’s preferred context. This matters because much of marketing’s job is not to convert a user immediately, but to shape preference and recall before the moment of need. In that light, sacrificing ownership can be rational if it increases the likelihood that a brand enters later consideration sets.

Source: Digital marketing in a post-digital world — Jono Alderson, conference talk transcript / notes

SEO evolves into relevance engineering

If search, recommendation, and AI systems infer value from a broad mix of behavioural, reputational, cultural, and operational evidence, then the old definition of SEO becomes too small. The work is no longer just content, links, technical hygiene, and on-page tuning. It becomes a broader discipline of relevance engineering: improving the real-world and perceived fit between a proposition and an audience, across every touchpoint that shapes memory, behaviour, and recommendation. That includes product quality, service design, delivery experience, storytelling, reputation, discoverability, and the wider cultural contexts that influence how a brand is interpreted. In that sense, relevance is not something you optimise into pages after the fact; it is something you engineer into the business and its footprint.

Source: Are puppies a ranking factor? — Jono Alderson, conference talk transcript / notes

Marketing becomes the construction and maintenance of distribution

Marketing used to be framed as the discipline of shaping perception, then increasingly as the discipline of buying attention. In machine-mediated markets, that framing becomes insufficient. The more strategic task is to build, grow, and sustain distribution: the assets, signals, interfaces, relationships, and system-level presence that keep a business discoverable, interpretable, retrievable, and recommendable across the environments where decisions are increasingly made. This does not eliminate persuasion, but it demotes it. If your business is not structurally available to the systems that mediate choice, then the quality of the message matters less because the message never meaningfully enters consideration.

Source: Draft synthesis from executive summary + manifesto updates, 31 Mar 2026

4. Technology

The web platform has already solved most performance problems

The industry keeps re-creating them

Modern browsers speculate, prioritise, stream, cache, and optimise in ways that would have seemed implausible a decade ago. Native lazy-loading, streaming rendering, speculative prefetch/prerender, and CSS-driven interactivity handle entire classes of problems that once required JavaScript. The fundamentals of speed are simple: send less data, avoid blocking the browser, cache aggressively, render something useful early. Yet the ecosystem keeps getting slower — because frameworks, build systems, and development patterns designed for a pre-platform era persist through habit, training data, and cultural inertia. The default should be to use the platform as intended and justify deviations, not the reverse.
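
A minimal sketch of two of those fundamentals ("cache aggressively, render something useful early"), assuming a fetch-handler runtime such as a service worker or edge platform; renderSlowBody is a hypothetical placeholder, not a prescribed stack:

```typescript
// Flush the useful part of the page immediately, stream the rest, and let shared
// caches hold it. Uses only web-standard Response / ReadableStream APIs.
async function renderSlowBody(): Promise<string> {
  return "<main>slow, data-dependent content</main>"; // illustrative placeholder
}

export async function handle(_request: Request): Promise<Response> {
  const encoder = new TextEncoder();

  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // Send the head and above-the-fold shell straight away: the browser can
      // start parsing, fetching CSS, and painting before the body exists.
      controller.enqueue(
        encoder.encode("<!doctype html><html><head><title>Page</title></head><body><header>…</header>")
      );
      controller.enqueue(encoder.encode(await renderSlowBody()));
      controller.enqueue(encoder.encode("</body></html>"));
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      "content-type": "text/html; charset=utf-8",
      // Cache aggressively at shared caches; serve stale while revalidating.
      "cache-control": "public, max-age=300, stale-while-revalidate=86400",
    },
  });
}
```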

Source: Speed is the first competency test — Jono Alderson, 6 Mar 2026

Sensible defaults are survival traits in a machine-consumed web

As the web becomes feedstock for AI systems that do not render JavaScript and do not tolerate chaos, clean semantic markup, predictable structure, and well-behaved defaults are no longer aesthetic preferences. They are structural requirements for being parsed, understood, and recommended by machines. Organisations that invest in improving the defaults of the platforms they depend on — cleaner HTML output, better accessibility, more accurate structured data — reduce their own maintenance burden while making their content more consumable by both humans and automated systems. Every workaround is friction; every improved default is compounding leverage.

Source: Raise the floor — Jono Alderson, 2 Mar 2026

Websites are increasingly infrastructure, not just interfaces

The web is no longer consumed only by humans. Search engines, AI agents, automation systems, and retrieval chains navigate it as sequences of tasks — discover, fetch, parse, reason, act. Latency compounds across those chains. A slow website does not just inconvenience a user; it slows every downstream system that depends on it. Fast, structured, predictable systems behave like reliable infrastructure. Slow, cumbersome ones become friction for everything built on top of them. Performance is therefore no longer a front-end UX concern — it is a structural property that determines whether your systems are usable building blocks in an increasingly machine-mediated web.
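
A toy calculation, with every number invented purely for illustration, shows why latency compounds across a chain of dependent steps rather than averaging out:

```typescript
// An agent's task is a chain of dependent steps, so total latency is the sum of
// the path; one slow origin drags every downstream consumer with it.
const chainMs = {
  discover: 120, // find the candidate URL
  fetch: 1800,   // slow origin response (the part the site owner controls)
  parse: 40,
  reason: 600,   // model call that needs the fetched content
  act: 250,
};

const total = Object.values(chainMs).reduce((sum, ms) => sum + ms, 0);
console.log(`end-to-end: ${total}ms`); // 2810ms, repeated for every source the agent consults
```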

Source: Speed is the first competency test — Jono Alderson, 6 Mar 2026

There will not be one web for humans and another for machines

Machine-only representations (markdown mirrors, llms.txt, stripped-down feeds) create a second candidate version of reality. The moment that exists, optimisation follows — caveats soften, commercial messages sharpen, the machine-facing version drifts from what humans see. Consuming systems then face an arbitration problem that is economically unsustainable at scale. The cheaper, richer, more trustworthy option is always the page itself. A page is not just a container for words — hierarchy, emphasis, placement, and framing are signals about meaning, not decoration. Systems that want to approximate human understanding will converge on the human-facing artefact, just as Google evolved from not rendering pages to full rendering. The solution to bad machine readability is not a shadow version — it is a better page: semantic HTML, clear structure, progressive enhancement, content that exists at load time.

Source: A page is more than just a container for words — Jono Alderson, 3 Feb 2026

Technical integrity is your immune system resistance

Every flaw in your technical systems — a slow page, a broken endpoint, misleading schema, a failed checkout — gets logged by machines. Those logs are compressed into durable summaries ("site is unreliable", "data is inconsistent") that spread across systems and shape future behaviour. A single timeout is a blip; a thousand become a reputation. This is not a front-end problem — it is an infrastructure problem. Designing systems that generate the right kinds of traces (fast responses, consistent data, reliable endpoints, accurate structured data) is the technical foundation of machine trust. The cost of prevention is dramatically lower than the cost of recovery, because the flywheel of negative machine memory is self-reinforcing.

Source: Marketing against the machine immune system — Jono Alderson, 24 Sep 2025

Source: Do your Core Web Vitals scores really matter? — Jono Alderson, 1 Jan 2024. To make a site faster, you have to address all of the underlying organisational reasons it was slow in the first place. CWV work surfaces developer knowledge gaps, misaligned technology priorities, unexamined third-party dependencies, and the balance of power between engineering and marketing. Speed improvement as diagnostic audit — the technical fix is the smallest part of the value.

The web is no longer URL-shaped. Optimise assertions, not pages.

Machines break pages into assertions — discrete claims about the world (subject → predicate → object) — that they extract, evaluate, and connect. Search engines store these as symbolic facts in knowledge graphs; LLMs encode them as patterns in vector spaces. In both cases, the unit that matters is not the page but the claims inside it. This means optimisation shifts from documents to the clarity, consistency, and connectivity of assertions. Design every page as a bundle of extractable claims. Semantic HTML matters because it enforces the clarity, hierarchy, and consistency that models learn from. "Write for humans" is incomplete — you must write for humans AND engineer for machines, making claims explicit and structuring relationships in the same artefact.
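
A minimal sketch of the "page as a bundle of assertions" idea; the entities and claims below are invented for illustration, and the point is the shape, not the data:

```typescript
// Machines reduce a page to discrete subject → predicate → object claims.
type Assertion = { subject: string; predicate: string; object: string };

const pageAssertions: Assertion[] = [
  { subject: "ExampleCo", predicate: "manufactures", object: "Widget Pro" },
  { subject: "Widget Pro", predicate: "hasPrice", object: "EUR 49" },
  { subject: "Widget Pro", predicate: "isCompatibleWith", object: "Widget Dock" },
  { subject: "ExampleCo", predicate: "foundedIn", object: "2012" },
];

// Consistency across pages and third-party sources is what lets these claims
// connect into a graph; contradictory assertions weaken each other.
```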

Source: The web isn't URL-shaped anymore — Jono Alderson, 30 Jul 2025

Websites are not apps. Stop building them like apps.

SPAs were a clever solution to a temporary platform limitation: smooth transitions between pages required client-side routing because browsers could not do it natively. That limitation no longer exists. View Transitions API enables native, declarative page transitions with CSS alone — fading between pages, animating shared elements, maintaining persistent headers — all with real URLs, real page loads, and zero JavaScript routing. Speculation Rules enable instant navigation by prerendering pages before the user clicks. bfcache snapshots entire pages for instant back/forward navigation — but only for clean, declarative architecture that SPAs break by design. Most websites are not apps. They do not need shared state, client-side routing, or interactive components on every screen. A homepage with six content blocks does not need hydration, suspense boundaries, and a rendering strategy. Yet "make it feel like an app" — uttered by a stakeholder — locks in architecture designed for real-time collaborative UIs. The result: 1-3MB JavaScript bundles, 3-5 second TTI, simulated transitions, fragile SEO, and unreliable scroll behaviour. A modern MPA with View Transitions and Speculation Rules delivers 0KB JS, ~1s TTI, native transitions, trivial SEO, and perfect browser-default behaviour. This is not anti-framework. It is anti-misapplication. Use React for apps. Use the platform for websites.
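
As a sketch only (the URL pattern and opt-out selector are placeholders), the Speculation Rules part of that stack can be layered onto an existing MPA as a few lines of progressive enhancement; the same rules can equally ship as an inline script element in the HTML with no JavaScript at all, and cross-document View Transitions need only the CSS at-rule noted in the comment:

```typescript
// Progressive enhancement for a plain multi-page site: if the browser supports
// Speculation Rules, ask it to prerender likely next pages. No router, no hydration.
// (Cross-document View Transitions need only CSS: @view-transition { navigation: auto; })
const supportsRules = (HTMLScriptElement as unknown as {
  supports?: (type: string) => boolean;
}).supports?.("speculationrules");

if (supportsRules) {
  const rules = {
    prerender: [
      {
        source: "document",
        // Prerender same-origin links, except those explicitly opted out.
        where: { and: [{ href_matches: "/*" }, { not: { selector_matches: ".no-prerender" } }] },
        eagerness: "moderate",
      },
    ],
  };

  const script = document.createElement("script");
  script.type = "speculationrules";
  script.textContent = JSON.stringify(rules);
  document.head.append(script);
}
```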

Source: It's time for modern CSS to kill the SPA — Jono Alderson, 24 Jul 2025

Semantic HTML is infrastructure, not decoration

Semantic HTML is not a purity concern — it is a performance, accessibility, and machine-readability concern with measurable consequences. Bloated DOMs (div soup) cause layout thrashing, increase style recalculation cost, and prevent GPU compositing optimisations. Autogenerated class names break caching, analytics targeting, and CSS reuse. Deep nesting forces expensive ancestor checks on every interaction. By contrast, semantic elements such as header, nav, main, article, section, and footer keep the DOM flatter and self-describing: browsers can render and composite them more efficiently, assistive technologies can navigate them, and crawlers and AI systems can extract structure without guesswork. Cleaner markup is cheaper for every consumer, human or machine.
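
A before-and-after sketch of the same component, written as template strings purely for illustration:

```typescript
// Div soup: deep nesting, generated class names, no machine-readable structure.
const before = `
  <div class="x-17ab3"><div class="x-99c01">
    <div class="x-42f">Pricing</div>
    <div class="x-42g"><div>From €9/month</div></div>
  </div></div>`;

// Semantic equivalent: flatter, self-describing, styleable and extractable as-is.
const after = `
  <section aria-labelledby="pricing-heading">
    <h2 id="pricing-heading">Pricing</h2>
    <p>From €9/month</p>
  </section>`;
```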

Platform choice sets your differentiation ceiling

Convenience is not a neutral decision.

The platform decision is not tactical, temporary, or easily reversible. It defines what you will be able to differentiate, adapt, and build — not just at launch, but years from now. Convenience-first platforms (Wix, Shopify, Squarespace and their peers) are optimised for onboarding speed and lock-in, not for capability. They abstract complexity, remove depth, and keep you inside defined architectural boundaries. That produces structural monoculture: millions of sites with the same templates, the same schema limitations, the same SEO ceiling. When everyone uses the same platform with the same constraints, generic output is not a side effect — it is the designed outcome. The business case for choosing a more capable platform is not about features today; it is about headroom tomorrow. If your platform cannot implement new structured data types the week they land, cannot reflect your operational complexity in HTML, cannot run real infrastructure experiments, your technical differentiation is capped before you start. Succeeding in search (and in machine-mediated discovery) requires finding and exploiting edges — tailoring structure, content, performance, and signals in ways that reflect your unique business and cannot be easily cloned. A platform that cannot bend to your will cannot help you win. Platform flexibility is a strategic lever, not a launch decision.

Source: The long-term cost of short-term platforms — Jono Alderson, 21 Jun 2025

Most websites are collections of documents, not applications

JavaScript frameworks are the wrong tool for most of the web.

The JavaScript cargo cult optimised for developer experience at the expense of user experience. We replaced server-rendered HTML with megabytes of JavaScript to simulate interactivity nobody asked for, using tools built for full-blown applications to solve problems the web already solved. The result is sites that are slower, harder to maintain, harder to discover, and harder to use — all in the name of modern development. Most websites are not applications. They are brochures, catalogues, articles. They do not need complex state management, hydration strategies, client-side routing, or reactive runtimes. The right stack for most of the web is boring: server-rendered HTML, semantic markup, clean URLs, lightweight templates, edge caching, and targeted JavaScript only where it adds genuine value. SPAs are fine for applications. They are architecturally wrong for websites. This matters for discoverability: content buried behind JavaScript is harder to crawl, harder to index, harder to parse semantically, and harder to include in model training corpora. Every unnecessary abstraction layer is a signal barrier. Build for the web, not for the development pipeline.

Source: JavaScript broke the web (and called it progress) — Jono Alderson, 19 Jun 2025

Source: LLMs aren't playing by Google's rules — Jono Alderson, 7 May 2025. LLMs do not render JavaScript. They fetch raw HTML and move on. Google at least tries to work around bad code; AI agents do not. A site that relies on client-side rendering may be entirely invisible to the next generation of discovery systems.

Structured data is explicit intent, not implicit understanding

Build a graph, not a checklist.

LLMs are increasingly good at inferring what a page is about from content alone. Schema.org serves a different and complementary purpose: it is not about helping machines understand your content, it is about the author declaring explicit intent — what they consider essential, with no ambiguity. Without schema, a model might grasp the topic; schema directs it to the precise entities, relationships, and priorities you intend to highlight. This is the difference between being understood and being correctly understood. But schema's power is not in isolated labels — it is in the connected graph. An author writes an article; a brand publishes it; the article references a product; the product belongs to a category. These relationships build a semantic map that machines can navigate. Repetition and interconnection reinforce certainty: the more consistently an entity and its relationships are defined across markup, the more confidently any system — Google, an LLM, a voice assistant, an AI agent — can reference it. Schema is not a set-and-forget checklist for rich results. It is an ongoing investment in machine-readable legibility across the broadening universe of AI systems that are becoming primary digital gatekeepers. Actionable types (FAQ, Product, Review) deliver immediate SEO returns. Descriptive types and relationship graphs are the long-term foundation for AI-era discoverability. Both matter. Neither is optional.
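
A small, hypothetical @graph illustrating the "connected graph, not isolated labels" point; all names and URLs are placeholders, and the important part is the @id cross-references that tie the entities together:

```typescript
// JSON-LD expressed as a TypeScript object for readability; in practice it ships
// inside a <script type="application/ld+json"> block. Everything here is illustrative.
const graph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      name: "ExampleCo",
    },
    {
      "@type": "Product",
      "@id": "https://example.com/widget-pro/#product",
      name: "Widget Pro",
      brand: { "@id": "https://example.com/#org" }, // product belongs to the brand
    },
    {
      "@type": "Article",
      "@id": "https://example.com/guides/widget-pro/#article",
      headline: "How Widget Pro handles X",
      author: {
        "@type": "Person",
        name: "Jane Doe",
        worksFor: { "@id": "https://example.com/#org" }, // author belongs to the organisation
      },
      publisher: { "@id": "https://example.com/#org" },  // brand publishes the article
      about: { "@id": "https://example.com/widget-pro/#product" }, // article references the product
    },
  ],
};

export default graph;
```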

Source: What if Schema.org is just labels? — Jono Alderson, 3 Nov 2024

For many sites, the CMS was a workaround for human editing constraints

…And not a strategic necessity.

For two decades, "having a website" was conflated with "needing a CMS". That was often a tooling assumption, not a business requirement. Many websites are structurally simple: a handful of pages, a blog, some metadata, maybe search and comments. For these cases, the complexity of a traditional CMS often solves editorial workflow problems while introducing operational costs: plugin maintenance, security surface, rendering conflicts, performance overhead, and brittle dependencies. Static or mostly-static architectures can now reproduce most of the output that mattered — clean HTML, metadata, schema, feeds, social cards, search — with less failure surface and tighter control over what actually ships.

The remaining moat for the CMS on simple sites has largely been the editing interface: non-technical users need a way to update content without touching files or deployment workflows. AI weakens that moat. If conversational systems can reliably edit source content, manage versioning, and deploy changes, then the CMS ceases to be the default answer for simple publishing problems. It remains valuable where complexity is real — multi-user editorial operations, dynamic relational content, application behaviour, personalised experiences — but not where it merely compensates for a lack of technical fluency. The strategic question is no longer "which CMS?" but "what level of system complexity does this site genuinely require?"

Source: Do you need a CMS? — Joost de Valk, 31 Mar 2026

The right architectural base depends on what you’re publishing

Are you publishing an artefact, or operating a system?

There is no contradiction between saying many sites do not need a CMS and saying platforms like WordPress remain strategically valuable. The distinction is architectural role. A simple site — some pages, a blog, a marketing presence — is primarily an artefact to be published. In that context, CMS complexity is often overhead: extra moving parts added to solve editorial convenience rather than real system needs. But when the website behaves more like an application — with users, permissions, workflows, dynamic content models, commerce, personalisation, or complex state — then a durable generalised base becomes an advantage rather than a burden. The generalisation tax buys battle-tested logic, upgrade paths, security maintenance, and shared standards.

AI makes this distinction more important, not less. It lowers the cost of building bespoke surfaces, but not the cost of maintaining bespoke systems over time. A custom AI-generated engine may look lean on day one while quietly accumulating long-term fragility: poor patchability, security drift, undocumented logic, and dependence on individual prompts or models. For artefacts, simplicity wins. For systems, maintainable foundations win. The strategic mistake is not choosing WordPress or avoiding WordPress; it is failing to distinguish between publishing complexity and application complexity, then applying the wrong architecture to the wrong job.

Source: The generalization tax: why WordPress is still the smart architectural base — Joost de Valk, 31 Mar 2026

Design leaves fingerprints that machines learn to reward indirectly

Design has often been excluded from SEO and technical conversations because it is difficult to isolate, benchmark, or reduce to a single ranking mechanic. But users respond to design whether analysts can model it cleanly or not. Thoughtful design changes trust, engagement, sharing behaviour, recall, conversion, and how people describe their experiences. Those human reactions leave traces in the wider corpus: reviews, mentions, screenshots, recommendations, behavioural patterns, and downstream citations. Machines do not need to “understand” design at the pixel level to learn from those traces. If well-designed experiences consistently generate stronger outcomes, then retrieval and recommendation systems will increasingly inherit that pattern. Design is therefore not decorative frosting on top of optimisation; it is part of the evidence trail from which systems learn what quality looks like.

Source: The importance of design — Jono Alderson / SISTRIX, Sep 2025

Edge layers turn infrastructure into a strategic optimisation surface

The layer between request and response is no longer just plumbing. Modern edge infrastructure makes it possible to intercept, rewrite, cache, classify, personalise, and enrich experiences before a request ever reaches origin systems. That matters strategically because it turns infrastructure into an optimisation surface: a place where businesses can improve speed, resilience, localisation, bot handling, resource efficiency, and machine-readability without always needing to replatform the whole stack. In practical terms, the edge becomes a leverage layer between ideal architecture and organisational reality — allowing meaningful improvements even when legacy systems, CMS constraints, or development bottlenecks would otherwise block progress.
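
A compressed sketch of that leverage layer, assuming a Cloudflare-Workers-style fetch handler; the classification logic, env shape, and x-agent-class header are invented for illustration, not a prescribed design:

```typescript
// Edge layer as an optimisation surface: classify, cache, and enrich before origin.
export default {
  async fetch(
    request: Request,
    _env: unknown,
    ctx: { waitUntil(p: Promise<unknown>): void }
  ): Promise<Response> {
    const cache = (caches as unknown as { default: Cache }).default; // Workers-style edge cache

    // 1. Serve cached HTML without touching origin systems at all.
    const cached = await cache.match(request);
    if (cached) return cached;

    // 2. Classify the requester and pass a hint downstream instead of blocking outright.
    const ua = request.headers.get("user-agent") ?? "";
    const agentClass = /bot|crawler|spider/i.test(ua) ? "bot" : "human";
    const headers = new Headers(request.headers);
    headers.set("x-agent-class", agentClass); // illustrative header name

    // 3. Fetch origin, then fix caching policy in flight, without replatforming anything.
    const response = await fetch(new Request(request, { headers }));
    const enriched = new Response(response.body, response);
    enriched.headers.set("cache-control", "public, max-age=300, stale-while-revalidate=86400");

    // 4. Populate the edge cache asynchronously for the next requester.
    if (request.method === "GET") ctx.waitUntil(cache.put(request, enriched.clone()));
    return enriched;
  },
};
```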

Source: Going over the edge — Jono Alderson, conference talk transcript / notes

The website’s new job is to anchor meaning, not to host persuasion

When websites stop being the primary interface for decision-making, they do not become irrelevant; they change role. The website becomes the one environment a brand controls end to end, and therefore the primary place to stabilise meaning for machines. Its job shifts from persuading visitors through crafted journeys to publishing clear, precise, structured, and complete representations of what is true: product details, definitions, processes, relationships, limitations, policies, and identity claims. In that sense, the website becomes less like a showroom and more like a living specification — the reference implementation of the brand’s synthetic identity. If the wider web is noisy, inconsistent, or adversarial, the site is the anchor the model can normalise against.

Source: Surviving the Surfaceless Web — Jono Alderson, conference talk transcript / notes

Websites must become tools, not brochures

In agent-mediated markets, being machine-readable is only the starting point. The more important question is what a machine can actually do with your business.

Can it retrieve live inventory, compare variants, verify trust, check availability, configure an option set, book, buy, reserve, subscribe, or complete a task? A website that only publishes persuasive text is increasingly inert compared with one that exposes useful capabilities.

Websites are shifting from communications surfaces into operational tools. Businesses will compete not just on what they say, but on what their systems let machines accomplish.

Source: AI shopping gets simpler with Universal Commerce Protocol updates — Ashish Gupta, 19 Mar 2026

Source: Millions of merchants can sell in AI chats — Shopify, Mar 2026

Source: MPP — Machine Payments Protocol — Tempo / Stripe, 2026

Your website may become source material, not destination

In AI-mediated search, the platform may no longer treat your website as the thing the user visits. It may treat it as source material from which to construct a synthetic experience tailored to the user and the query.

That is a deeper shift than losing clicks. It means losing control over framing, sequencing, context, interface, and potentially even how your value is interpreted.

The website becomes both destination and supply layer: something intermediaries ingest, reshape, and re-present in contexts you do not control.

Source: AI-generated content page tailored to a specific user — Google LLC, 27 Jan 2026

The machine does not just need to read your site. It needs to use it.

In an agent-mediated web, machine comprehension is only the first threshold. The next competitive frontier is whether an agent can actually invoke the capabilities your business exposes.

If websites expose structured actions such as search, configure, reserve, book, buy, compare, or submit, they become part of the machine’s decision space. If they only describe those actions in prose, they risk becoming explanatory wrappers around more useful competitors.
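
One conservative, already-standard way to expose an invocable capability is Schema.org's potentialAction. The sketch below describes a search endpoint an agent could call rather than merely read about; the URLs are placeholders, and richer action protocols sit beyond this baseline:

```typescript
// Declares a callable capability in machine-readable form. Expressed as a TypeScript
// object for readability; it would ship as JSON-LD on the page. All URLs are illustrative.
const searchCapability = {
  "@context": "https://schema.org",
  "@type": "WebSite",
  "@id": "https://example.com/#website",
  url: "https://example.com/",
  potentialAction: {
    "@type": "SearchAction",
    target: {
      "@type": "EntryPoint",
      // The template tells a machine exactly how to construct a valid call.
      urlTemplate: "https://example.com/search?q={search_term_string}",
    },
    "query-input": "required name=search_term_string",
  },
};

export default searchCapability;
```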

The web is shifting from readable pages toward callable capabilities. The winners may be the sites that can be used, not just understood.

Source: WebMCP – DEJAN — Dan Petrovic, Feb 2026

Your real homepage is wherever people first meet you

In mediated environments, users rarely begin at the brand's chosen front door. They arrive through snippets, product panels, citations, quoted passages, comparison surfaces, and AI summaries that present only a fragment of the whole.

That means the effective homepage is no longer the literal homepage. It is whatever entry point forms the user's first impression when encountered in isolation.

Businesses should optimise the integrity and usefulness of those fragments, rather than designing around the illusion of a tidy, linear journey that begins on the homepage.

Source: The future of media is essentially just YouTube — Adweek, Nov 2025

5. Information Retrieval

Retrieval is shifting from document evaluation to entity evaluation

Early search engines evaluated documents: which page best matches this query? That created a large optimisation surface (keywords, links, structure). AI-augmented retrieval increasingly evaluates entities: which brand, product, or source is credible enough to represent this answer? The decision is often made before the interface renders a result list. This collapses the historic gap between discoverability and desirability — it is no longer sufficient to be well-structured without being genuinely preferred. Prompt tracking and LLM visibility monitoring risk repeating the same mistake as ranking obsession: measuring the volatility of the interface rather than the strength of the underlying signals.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

Source: The recommendation mindset — Jono Alderson, 8 Feb 2025. Search visibility is earned referral, not entitlement. When Google lists your page it is recommending you, not displaying you. Think in terms of a citation economy: the goal is not to rank but to be the most citable source on a topic. Academic citations are currency; the most cited works become foundational. The same logic applies to machine-mediated retrieval.

Source: Everything, and nothing, is a ranking factor — Jono Alderson, 1 Feb 2024. The right question is not "does X affect rankings?" but "is X the kind of thing a site that deserves to rank would do?" You are not pulling levers; you are becoming the kind of entity that systems want to surface. Typography, empathy, customer service — none are ranking factors, but all are properties of sites that users would want to visit, which is exactly what retrieval systems are trying to model.

Being the best and being understood as the best are different problems

Search engines and AI systems are inference engines, not omniscient judges. They infer quality from signals: reviews, mentions, citations, depth of discourse, author credibility, and consistency of expertise across contexts. Declaring superiority on your own domain is insufficient — the wider web must reflect it. In an AI-mediated world, this inference happens at the entity level: who is speaking, what they have demonstrably done, where else they are cited, whether their expertise is consistent. Real authorship, explicit relationships between people and organisations, and traceable footprints are structural requirements for being modelled accurately. Anonymous or corporate-bylined content is averaged out.

Source: Stop trying to rank for keywords — Jono Alderson, 21 Feb 2026

Differentiation matters more to machines than to humans

Humans are cognitive misers: they satisfice, use heuristics, and buy what is salient and available. Sharp's research accurately reflects this — meaningful differentiation often does not register with consumers because they lack the capacity to evaluate the full competitive landscape. But AI systems operate under fundamentally different constraints. They can process the entire competitive field simultaneously, evaluate entity signals across the whole web, and are specifically designed to compare, synthesise, and select. When compressing similar content into a canonical answer, they are literally performing differentiation analysis — deciding which entity is distinct enough to name. At the human layer, distinctiveness and availability drive choice; at the machine layer, genuine differentiation drives selection into consideration sets, recommendations, and synthesised answers. The machine layer is increasingly the gatekeeper to the human layer.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

Retrieval systems will converge on the human-facing artefact

Current LLM limitations (not rendering CSS, not executing JavaScript) are transitional, not permanent. Google followed the same trajectory: it learned to render because modelling relevance as humans experience it requires understanding placement, prominence, and context. AI systems pursuing the same goal will follow the same path. This means the signals embedded in a well-structured page — editorial hierarchy, emphasis, framing — become more important over time, not less. Shortcuts like llms.txt may offer transitional convenience as hints, but strategic dependency on machine-only channels is a bet against the direction of travel.

Source: A page is more than just a container for words — Jono Alderson, 3 Feb 2026

Training cycles exert selection pressure

Only stable, corroborated patterns survive compression.

Every cycle of training, pruning, and retraining redraws what the machine believes to be true. Information is weighed, compared, and either reinforced or allowed to fade. Fragments that align with the model's broader understanding are retained; those that contradict it or contribute nothing new dissolve into statistical noise. In a web of endless redundancy and synthetic repetition, this selection pressure is profound. Duplicate content dissolves; contradictions cancel out; persuasive noise is treated as waste heat. Only the most stable patterns survive ingestion, compression, and re-ingestion. This resembles natural selection: clarity, consistency, and corroboration are fitness traits. Manipulation, self-serving signals, and rhetorical noise lose fidelity with each generation until they are effectively gone.

Source: Optimising for the surfaceless web — Jono Alderson, 30 Oct 2025

The provenance of reputation signals becomes as important as their content

If retrieval and recommendation systems rely on reputation signals from the wider web (reviews, mentions, editorial coverage, community discussion), and those signals are increasingly polluted by synthetic social proof, then systems must develop increasingly sophisticated authenticity detection — an immune system for reputation. Signals traceable to verified transactions, real identities, and observable usage patterns will carry disproportionate weight. Anonymous, unverifiable praise or criticism will be discounted. This favours brands that invest in genuine customer experience and verifiable proof (case studies with real numbers, named testimonials, transaction-linked reviews) over those that manufacture volume. The spam arms race in social proof mirrors the link spam arms race of early SEO — and will likely resolve the same way: systems learn to devalue what can be cheaply manufactured.

Source: The Hotmail effect — Jono Alderson, 25 Oct 2025

Entity understanding crosses linguistic boundaries

LLMs are trained on multilingual corpora and interpolate across languages. If there is limited information about an entity in one language, the model fills in blanks using content from others. Your English-language reputation becomes the proxy for how you are understood in Portuguese; a technical blog post in German might colour how your brand is interpreted in a French answer. The training corpus is uneven — some languages are well-represented, others are fragmented, biased, or dominated by spam. A scraped product description in Romanian, a mistranslation in Korean, or a third-party reference in Turkish that misrepresents what you do can all become part of the model's truth set. This makes multilingual presence a retrieval-level concern, not just a market-entry concern.

Source: Shaping visibility in a multilingual internet — Jono Alderson, 8 Aug 2025

Trust is graph-shaped

Authority emerges from coherence and connectivity.

Assertions gain strength not from volume but from how well they are reinforced and corroborated across the wider web. Search engines evaluate trust through explicit graph relationships. LLMs evaluate it through statistical density. Authority emerges from coherence and connectivity. The assertion network extends beyond your own site — competitors, aggregators, marketplaces, YouTube, Reddit, and scraped feeds all contribute. You must defend against the hostile corpus: competitors, affiliates, and bad actors actively pollute models with manufactured claims, weaponised contradictions, and strategic misinformation.

Source: The web isn't URL-shaped anymore — Jono Alderson, 30 Jul 2025

The feedback loop is severed

There are no useful metrics for machine-mediated influence.

Marketing used to be observable: dashboards, attribution models, funnels. That era is over. When an AI assistant decides what to recommend based on Reddit sentiment, embedded documentation, third-party schema, or the tone of a YouTube review, your analytics stack does not capture it. Clicks and conversions still happen, but you cannot see the causal story. The diagnostic power is gone. AI visibility tools — panel-based trackers, prompt-based probes, blended impression metrics — are comfort metrics: theatre retrofitting old measurement paradigms onto systems never designed to be interrogated. A single snapshot of "what your brand appears as in an LLM" tells you almost nothing, because real interactions are iterative, contextual, and memory-driven. The danger is not that we lack data — it is that we pretend proxies are precise. You will not see a drop in rankings when a model stops including you. You will just stop being mentioned. Not rejected — omitted. If you cannot track what works, you must build what lasts.

Source: Everything is now opaque — Jono Alderson, 25 Jun 2025

Solved query spaces make entire content categories structurally obsolete.

A solved query space is a topic area where Google already knows enough to answer the majority of queries without sending users anywhere. The knowledge is stable, the concepts are settled, the variables are known. Google has ingested enough examples to synthesise answers on demand. This is not just zero-click search — it is the structural obsolescence of entire content categories. Recipes. Definitions. Simple how-tos. Product comparisons. Once Google has modelled these spaces, no amount of optimisation will restore their traffic potential. You cannot outscale a system that has consumed the entire web. You cannot outrank a model that synthesises answers in real time. If your content exists solely to summarise information that already exists, you are not just invisible — you are redundant by design. The appropriate response is not to produce more content in the same category; it is to create content that cannot be synthesised: original research, proprietary data, first-hand expertise, perspectives only your organisation can offer. Inputs, not summaries. If you would not miss it if it disappeared tomorrow, neither will Google.

Source: Contentless marketing — Jono Alderson, 6 May 2025

Source: The death of the category page? — Jono Alderson, 1 Oct 2024. Category pages — the stable middle layer of most e-commerce and service sites — are a concrete example of the solved query space problem. AI synthesises better comparative summaries than any individual category page can provide. Competing at the category level is increasingly futile; the value lies at the specific product or service level, where personalised, context-driven experience is something AI cannot yet replicate.

Source: "Good content" is not enough — Jono Alderson, 1 Apr 2024. Search engines have little incentive to expend resources consuming yet another page on the same topic. The question is not "is this good content" but "why would a search engine crawl, index, or rank this at all?" Content that does not add to the corpus actively costs the system to process.

Quality is now the baseline, not the differentiator

The competition is differentiation.

AI levelled the baseline. Everyone now has access to good-enough copy, ideas, and polished output. The result is a flood of content that reads fine, checks the boxes, and looks professional — but is indistinguishable from everything else. The shift most people are missing: the advantage is no longer in producing quality. Everyone can produce quality now. Quality is the entry ticket, not the prize. The gap between average and great did not disappear — it became more obvious. The competition has moved up a level: from quality to differentiation. Saying "we use AI" is not a value proposition; it signals you are doing what everyone else is already doing. Prompt engineering is not differentiation; it is table stakes. What retrieval systems are now rewarding is not technical correctness but signals of trust, uniqueness, and perspective — content that gets referenced by people, talked about, and actually adds something new to the discussion. If your content could be written by anyone, it will be ignored by everyone. The things that actually differentiate are usually messy, personal, and earned — and none of them come from a prompt. Safe used to feel smart. Now safe is generic, and generic is exactly what AI is best at producing. Playing it safe is the riskiest move you can make.

Source: Good Enough Is Dead — Nick LeRoy, SEO for Lunch, Mar 2026

AI accelerates the classic search spam cycle by collapsing the feedback loop

The structural pattern is familiar from early SEO: cheap distribution plus gameable ranking signals produces exploitation before it produces usefulness. What changes in the AI era is not the underlying incentive but the speed of the cycle. Content generation is effectively free, prompt-level iteration is instantaneous, and every platform response becomes fresh training data for the next round of manipulation. That compresses the whole system: exploitation, detection, filtering, adaptation, and retraining all happen on shorter loops. The result is faster degradation of generic content environments, faster platform hardening, and faster obsolescence of tactics built around loopholes. This makes durable value even more central. When the cycle time shrinks, there is less and less economic room for anything that cannot survive scrutiny on provenance, usefulness, and differentiation.

Source: AI optimization is replaying early SEO, just faster — Joost de Valk, 31 Mar 2026

Retrieval systems reward fitness, originality, reputation, and integrity together

A useful way to think about competitive visibility is that systems are not looking for one thing but for a composite of qualities. First, fitness: the extent to which the underlying experience is technically sound, usable, and free from friction. Second, originality: whether the contribution is relevant, differentiated, and worth consuming rather than merely derivative. Third, reputation: whether other people talk about you, recommend you, and connect you into the wider graph of trust. And finally, integrity: whether those signals appear earned rather than manipulated. This last dimension becomes more important as systems mature. In early-stage ecosystems, weak proxies can be gamed; once manipulation becomes widespread, systems are forced to evolve criteria that distinguish genuine signal from manufactured performance. Integrity is therefore not an ethical afterthought but a structural necessity in any self-defending retrieval environment.

Source: Jono Alderson, conference talk transcript / notes

You are no longer optimising for one Google

Google is no longer a singular crawler, interface, or stable set of rules. Retrieval is fragmenting across user-triggered agents, products, contexts, and surfaces with different purposes and behaviours.

That makes logs, crawl assumptions, and channel-specific optimisation less complete as sources of truth. As retrieval becomes ambient and distributed, resilience, consistency, and machine legibility matter more than tactical wins against one canonical surface.

Source: Google User-Triggered Fetchers — Google for Developers, 2026

Search is bigger than the SERP

Search is not a page of links. It is the broader system by which people discover, evaluate, and trust options across many surfaces.

Platforms like YouTube are not auxiliary channels bolted onto a web strategy. For many journeys they are core discovery and trust environments, and therefore part of market infrastructure.

Winning in search increasingly means showing up wherever attention and trust consolidate, not just wherever rankings are measured.

Source: YouTube Lays Claim to Another Crown: The World’s Largest Media Company — The Hollywood Reporter, Mar 2026

The shared SERP is dying

Search is moving from a public marketplace of relevance toward a private marketplace of context. As systems incorporate memory, history, preferences, purchases, and personal data, two people asking the same question may no longer be participating in the same search environment.

This weakens the idea of a single, shared competitive landscape. Visibility becomes less about winning one universal position and more about being consistently reinforced across many personalised contexts.

The important shift is not that search is personalised. It is that answer construction itself is becoming personalised.

Source: Personal Intelligence in AI Mode in Search: Help that's uniquely yours — Google, Jan 2026

Source: Gemini introduces Personal Intelligence — Google, 14 Jan 2026

6. AI

AI exposed that SEO was always measuring the wrong layer

Rankings, traffic, and authority scores were pragmatic approximations built on the limited visibility that the search interface allowed. They worked because the surface correlated closely enough with reality. AI systems — by aggregating signals about entities, reputation, and preference across the entire web — make the gap between interface metrics and actual competitiveness impossible to ignore. The discipline is not being displaced; it is being forced to reconnect with what retrieval systems were always trying to approximate.

Source: Clicks don't count (and they never did) — Jono Alderson, 12 Mar 2026

AI agents re-evaluate every decision from scratch

Past trust does not carry over without fresh evidence.

Human decision-making compounds past trust — once a brand is known, people reuse decisions without re-evaluating. Agentic AI systems do not. Every decision is fresh, every option re-examined. Familiarity does not soften scrutiny; trust does not carry over unless current evidence supports it. This fundamentally changes market dynamics: doing more no longer extends trust by association, it dilutes it. Every additional category a business operates in is another place where it is probably not the best — another body of evidence suggesting it is spread thin. The shift is not from lists to chat, or blue links to summaries; it is from information-constrained human research to systems that are not constrained in the same way. Note: current AI systems are not truly omniscient (training cutoffs, retrieval biases, hallucination), but the direction of travel is clear. The middle will erode gradually, not collapse overnight.

Source: The middle is a graveyard — Jono Alderson, 10 Jan 2026

You do not persuade a model

You support it, with clarity, consistency, and connection.

Marketing has traditionally been the art of persuasion aimed at humans. AI models are not persuaded; they are supported. When your language, data, and presence align in ways that improve a system's accuracy, you stop being external content and become infrastructure the system depends on. This is symbiosis, not manipulation. The most durable entities will be those that made themselves useful to the synthetic environment — not through volume but through contribution. Human-facing surfaces still matter (people still use interfaces, websites still exist), but the decision about who gets surfaced is increasingly made in the substrate before any interface renders. The surface is no longer where the decision happens; it is where the decision is displayed.

Source: Optimising for the surfaceless web — Jono Alderson, 30 Oct 2025

AI systems will need immune systems for social proof

And this will reshape what counts as evidence.

As synthetic content floods every reputation channel (reviews, testimonials, community discussions, social media), AI systems face a signal-to-noise problem that intensifies with each training cycle. The response will mirror how search engines handled link spam: develop classifiers that distinguish genuine signal from manufactured consensus, and progressively devalue what can be cheaply produced. This creates a new competitive dynamic. Verifiable, transaction-linked, identity-backed social proof becomes a moat. Brands with genuine customer relationships and traceable evidence of quality will be disproportionately favoured by systems that have learned to distrust volume. The paradox: humans will trust social proof less even as machines get better at filtering it — meaning AI-mediated recommendations may eventually be more trustworthy than human-evaluated reviews.

Source: The Hotmail effect — Jono Alderson, 25 Oct 2025

Marketing becomes Agent Relations

If the old discipline was about shaping human perception, the new one is about shaping machine memory. It means understanding how crawlers, recommenders, shopping bots, and language models record, compress, and share their experiences of you. It means designing products, pages, and processes that generate the right kinds of traces — and maintaining the technical integrity that resists being scarred. This sounds closer to operations, QA, or infrastructure than to traditional marketing. But in a landscape where machines are the gatekeepers of discovery and recommendation, this is marketing. The story you tell still matters — but only if it survives contact with the evidence. Cross-system sharing of reputation signals (already happening via phishing lists, fraud fingerprints, spam signatures) means that what one system concludes about you rarely stays contained. Your flaws circulate and become the version of you that other systems inherit.

Source: Marketing against the machine immune system — Jono Alderson, 24 Sep 2025

Synthetic authority is not fakery

Authority is being unbundled from identity.

EEAT as currently practised is largely theatre: ghost-written content, stock author photos, fabricated personas. Brands fudge credibility because they cannot scale genuine human authority — their real experts are not public-facing, are not writers, and pose legal risks. This creates a structural dishonesty at the heart of current content strategy. Synthetic systems can fulfil the same EEAT criteria more reliably and auditably than most human alternatives. Expertise: AI has read more than any human and remembers everything. Experience: most published content is already reconstructed from second-hand data; synthetic systems can simulate and test at scales no human team can match. Authority: consistency, coherence, and reliability at scale. Trust: if a system is accurate, auditable, and transparent, there is no principled reason to distrust it. This is not in tension with the manifesto’s position that fakery gets detected and punished. The distinction is critical: fake human authority (stock photo + ghost-written bio presented as real) is detectable and punishable. Transparent synthetic authority (declared AI system, auditable, consistent process) is a new legitimate credibility model. Authority is being unbundled from identity. Expertise is becoming a property of process, not personality. Brands that build transparent, auditable synthetic knowledge systems will outperform those clinging to the performance of human authorship. Those that fake it will be exposed. Those that declare it will be trusted.

Source: The rise of synthetic authority — Jono Alderson, 17 Jun 2025

⚠️ Editorial note: This position is deliberately speculative and provocative — a thought experiment rather than a settled argument. The direction of travel is plausible, but synthetic authority at the level described is not yet established practice. The current reality is that human authorship, personal experience, and earned credentials still carry significant weight with both audiences and retrieval systems. The interesting question is where the threshold lies and how fast it shifts. Hold this position lightly.

Websites are becoming sources, not destinations

You are not losing to competitors — you are being filtered out before competition starts.

AI agents do not browse. They evaluate, filter, and shortlist. Your site may never be consulted — not because a competitor beat you, but because you did not qualify for consideration. You are not losing a ranking war. You are being eliminated before it starts. This changes what optimisation means fundamentally. The old model: show up, get clicked, persuade. The new model: be comprehensible, be verifiable, be trusted by a system that may never send a human to your page. Websites are shifting from destinations (users visit you) to sources (agents extract from you). The audience is not a person with a browser — it is a model with a mission. Agents care about whether they can understand you, verify your claims, and trust you enough to include you in a shortlist. Design, clever copy, and conversion optimisation are irrelevant to this audience. What matters: structured content, consistent claims, clear entity signals, and eligibility for inclusion in a world where the decision-maker may never be human. Optimisation is no longer about traffic. It is about trust, inclusion, and eligibility.

Source: What happens when your buyer is a bot? — Jono Alderson, 3 Jun 2025

Managing AI agents requires HR skills, not IT skills.

When you delegate meaningful work to AI agents, you face the same problems as managing people: you cannot fully see inside the black box, you can never fully trust reported outcomes, and you manage through observed behaviour and intuition rather than audit trails. Did the agent properly understand the task? Was it completed or quietly fabricated? Was it handled with integrity or self-interest? Was it misled by another agent upstream? These are not technical questions — they are evaluative, interpersonal ones. The skills required are soft: calibrated trust, nose for trouble, knowing when 'yes it's done' actually means it is not. And it compounds: you are not just managing your own agents, you are defending them. Other agents — operated by competitors, bad actors, or simple chaos — will try to manipulate them, steal their attention, or feed them misleading inputs. The competitive advantage in an agentic future is not building better agents. It is knowing how to get good work out of complicated, fallible, semi-autonomous entities. This structurally advantages people and organisations already skilled at managing ambiguity, delegating effectively, and reading incomplete signals — which includes small business operators far more than large enterprises that will attempt to manage agents like IT infrastructure and fail for the same reasons their people management fails.

Source: The future of AI is HR, not IT — Jono Alderson, 29 Apr 2025

Human advantage shifts from output to judgment

AI drastically lowers the cost of producing passable output: essays, code, analysis, plans, copy. This raises the floor and erodes the market value of average execution. But the critical asymmetry remains: models can generate plausible answers without understanding whether those answers are actually right, appropriate, or complete. As a result, the scarce human capability moves upstream from production to evaluation — decomposing claims, spotting where the smooth surface hides broken reasoning, and exercising judgment under uncertainty. In practical terms, the value of a human worker increasingly lies in verification, orchestration, and accountability rather than in raw throughput. The person who can tell when the machine is wrong matters more than the person who can make it speak quickly.

This creates a training problem as well as a labour-market one. Senior judgment is built through doing the junior work repeatedly enough to develop instincts about quality, failure, and edge cases. If AI removes too much of that apprenticeship layer, organisations may accidentally consume the pipeline that produces future experts. The short-term efficiency logic is obvious; the long-term capability cost is not. The strategic response is not to reject AI, but to design work, education, and management around active verification rather than passive acceptance.

Source: Why healthy doubt beats AI confidence theater — Joost de Valk, 31 Mar 2026

Assistant systems choose through availability, suitability, and inferred preference

When interfaces disappear and assistants mediate choice, recommendation systems do not need to replicate the full human journey. They can build and reduce consideration sets on behalf of the user using a smaller set of inferred criteria. A useful model is threefold: availability (is the option valid, open, nearby, in stock, within constraints?), suitability (is it good enough, relevant enough, appropriately priced, low risk?), and implied preference (does it align with what the system understands about the user’s values, habits, affinities, and likely tastes?). This third layer matters because future assistants will not only optimise for explicit requirements; they will also use connected behavioural data and patterns from similar users to infer what a person is likely to prefer. The job of marketing therefore shifts upstream: not merely persuading at point of comparison, but creating the prior data and associations from which systems infer preference later.
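
A deliberately simplified sketch of that threefold reduction; every field, threshold, and weight below is invented for illustration:

```typescript
// Availability filters, suitability thresholds, inferred preference ranks.
type Candidate = {
  name: string;
  inStock: boolean;
  withinBudget: boolean;
  rating: number;        // 0–5, a proxy for "good enough, low risk"
  affinityScore: number; // 0–1, inferred from prior behaviour and similar users
};

function shortlist(candidates: Candidate[], maxResults = 3): Candidate[] {
  return candidates
    .filter((c) => c.inStock && c.withinBudget)        // availability: hard constraints
    .filter((c) => c.rating >= 4.0)                    // suitability: good enough
    .sort((a, b) => b.affinityScore - a.affinityScore) // implied preference
    .slice(0, maxResults);
}
```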

Source: Digital marketing in a post-digital world — Jono Alderson, conference talk transcript / notes

Brand currency accumulates through distributed interactions

…Not just direct conversions

Long before a user explicitly enters a buying journey, brands are already accumulating a residue of meaning through ads, social encounters, content consumption, peer discussion, product experience, and the behaviour of similar audiences. The talk’s label for this is “brand currency”: a latent score of affinity, recall, and propensity to choose. The exact metaphor may be debatable, but the strategic point is sound. Systems that personalise recommendations need some way to infer likely preference from prior interactions, network effects, and lookalike patterns. That means every meaningful interaction can feed future eligibility. Marketing is not only about triggering an immediate sale; it is about seeding the evidence from which future systems decide what feels familiar, appropriate, and trustworthy.

Source: Digital marketing in a post-digital world — Jono Alderson, conference talk transcript / notes

Experimentation must move from the interface layer to the interpretation layer

If machine systems increasingly shape the shortlist, frame the options, and interpret the brand before a page is ever visited, then experimentation has to target that layer rather than just the on-site experience. The new optimisation questions are not only “which page converts better?” but “which claims improve model understanding?”, “which contradictions create drift?”, “which pieces of evidence strengthen trust and inclusion?”, and “how does the system describe us after we change our footprint?” This is still experimentation in the proper sense — hypothesis, intervention, measurement, iteration — but the object of study changes. Instead of optimising buttons, flows, and surface heuristics, the work shifts to coherence, completeness, entity clarity, interpretation, and inclusion in model-mediated journeys.

Source: Surviving the Surfaceless Web — Jono Alderson, conference talk transcript / notes

Traffic was the subsidy. AI removes the subsidy.

The open web ran on an implicit bargain: publishers made content accessible, and platforms returned value through visits, attention, and downstream monetisation. Answer engines and agents weaken that bargain by extracting value without necessarily returning traffic.

That makes attribution, compensation, licensing, and credit structural questions, not implementation details. If platforms increasingly perform the user-facing interaction themselves, then the economics of original work can no longer depend on clicks as the default reward mechanism.

The real issue is not only visibility. It is whether creators, publishers, and experts can still capture value once intermediaries absorb the journey.

Source: Building Toward a Sustainable Content Economy for the Agentic Web — Microsoft Advertising, Feb 2026

Trust becomes the scarce resource

When content can be generated, pitched, published, summarised, and re-circulated at industrial scale, abundance destroys informational value. The scarce asset is no longer content volume, and often not even surface-level quality. It is trustworthiness.

In an automated content economy, synthetic material increasingly feeds other synthetic systems. That makes provenance, verification, and reputation more important than output volume, because the bottleneck shifts from production to trust.

As publication becomes cheap and infinite, the businesses and sources that remain legible as credible will capture disproportionate value.

Source: Automated PR tool is bombarding UK media with AI-generated content — Press Gazette, Nov 2025