Where AI Search Actually Stands Today
Key Stat
When an AI Overview appears on a Google results page, the click-through rate on traditional organic results drops from 15% to 8%. Clicks on AI summary source links happen on just 1% of visits. Source: Pew Research Center, July 2025 (68,879 searches, 900 US adults, March 2025).
Answer Engine Optimisation — AEO — is the practice of structuring your content, data, and technical signals so that AI-powered search interfaces are more likely to cite your site when generating an answer. “Answer engines” include Google's AI Overviews, Google AI Mode, Perplexity, ChatGPT Search, Microsoft Copilot, and the growing cohort of specialist AI search tools. It is sometimes called Generative Engine Optimisation (GEO) or LLM SEO; the terminology is still shifting, but the discipline is the same.
The shift is not theoretical. Pew Research Center analysed 68,879 Google searches from 900 US adults in March 2025 and found that roughly 18% of searches triggered an AI Overview. When an AI Overview appeared, the click-through rate on traditional organic results collapsed from 15% to 8% — close to a 50% drop. Clicks on the source links inside the AI summary itself occurred on just 1% of visits. And 26% of browsing sessions ended after an AI summary appeared, compared with 16% when there was none.
Google disputes the methodology. Publishers do not. Internal traffic data across multiple content categories — how-to, comparison, definitional — shows the same direction of travel: impressions holding up, clicks compressing, session durations shortening. Meanwhile Google AI Overviews expanded to India in August 2024 and to 100+ countries with Hindi, Portuguese, Spanish, Japanese and Indonesian language support by October 2024. Over one billion global users now see AI Overviews every month.
The uncomfortable conclusion for agencies still billing only on organic rank tracking: you are reporting on a metric that increasingly correlates less with business outcomes. The question is not whether to adapt. It is how much of your 2026 effort you allocate to AEO, and how you measure the return on it.
Why Classic SEO Alone Is No Longer Sufficient
The assumption underlying twenty years of SEO practice is that ranking well in the SERP produces traffic. When the SERP itself is a generated answer, that assumption breaks at two points — and most agency workflows still have not adjusted for either.
Ranking and citation are not the same signal. In July 2025, Semrush analysed citation data across ChatGPT, Perplexity and Microsoft Copilot and found that only 12% of URLs cited by these systems rank in Google's top 10. ChatGPT Search, in particular, cited pages at position 21 or lower in approximately 90% of prompts. The corollary: if your AEO strategy is simply “rank higher in Google,” you will lose a meaningful share of LLM citation opportunities to competitors who have optimised specifically for how language models select sources — which is a different and more permissive process than Google's ranking algorithm.
The top-cited domains are not who you would expect. In the same window of analysis, the most-cited domains across ChatGPT responses included Reddit, Wikipedia, Amazon, Forbes and Business Insider — sources that weight heavily toward community discussion, encyclopedic summary, and editorial authority rather than purely commercial content. This tells you two things. First, the signal LLMs are reading is not raw backlinks or ranking position — it is whether the source reads like a trusted, corroborating voice on the topic. Second, volatility is extreme: ChatGPT's Reddit citation rate, per Semrush tracking, moved from close to 60% of prompts in early August 2025 to roughly 10% by mid-September 2025. Citation allocation can shift within weeks as models retrain and as freshness windows change.
Conversion is different too. Semrush data shows visitors arriving from LLM citations convert approximately 4.4× better than organic search visitors. The intuition is correct: a user who has already read a generated summary and then clicks through has done substantially more pre-qualification than someone scanning ten blue links. Fewer visitors, higher intent. This matters because the crude “AI is killing our traffic” framing obscures what is actually happening — traffic volume compresses, traffic quality improves, and the strategic task becomes capturing the high-intent residue rather than mourning the commodity clicks that disappear.
The Five Signals LLMs Weight When Choosing Sources
Large language models do not publish a ranking algorithm, and anyone claiming to have reverse-engineered one completely is overselling. But pattern analysis across ChatGPT, Perplexity, Gemini and Copilot citation logs — combined with what each provider has publicly disclosed about retrieval-augmented generation — converges on five signals that consistently correlate with citation frequency.
- (1) Direct, declarative answers in the opening lines of a section. LLMs process content by extracting the most concise, high-confidence response to a query. Hedged or throat-clearing openings (“there are many factors to consider…”) are systematically deprioritised. A 40–60 word direct answer immediately after a question-formatted heading is the single highest-leverage content-structure change most sites can make.
- (2) Freshness with dated attribution. Models weight content with explicit publication and modification dates higher than undated content on equivalent topics — particularly for queries with temporal intent (“best X in 2026”, “latest Y”). Structured data exposing datePublished and dateModified is non-negotiable. Articles updated annually with substantive changes, not just a bumped date, retain citation share longer than static content.
- (3) Topical corroboration across multiple independent sources. LLMs prefer claims they can verify across sources. If your article is the only place a specific statistic or claim appears, your citation probability drops — counter-intuitively — because the model cannot corroborate it. Linking to primary sources, citing datasets with identifiers, and referencing widely-reported figures increases the likelihood your version of the claim is the one the model surfaces.
- (4) Demonstrated first-hand experience (the “E” in E-E-A-T). Google's E-E-A-T framework — Experience, Expertise, Authoritativeness, Trust — has become more load-bearing in the LLM era, not less. Content that describes specific accounts the author has run, specific incidents observed, specific timelines executed, outperforms content that describes topics in the abstract. First-person practitioner writing, case study data, and original research are consistent citation-getters across every answer engine studied.
- (5) Structured machine-readable formatting. Tables, numbered lists, FAQ blocks with schema, and headings that restate the question as written are all formats LLMs can parse with high confidence. Walls of prose with the answer buried in paragraph four are parseable but lower-probability citations. The less work the model has to do to extract your answer, the more likely your source is the one it cites.
Content Structure — Front-Load Everything
Key Stat
Roughly 40–45% of all LLM citations come from the first 30% of a page's content. If your substantive answer is not in the opening third, you are competing for little more than half of the citation opportunity. Source: multiple independent analyses of ChatGPT citation logs, 2025.
Across multiple independent analyses of ChatGPT citation logs through 2025, a consistent pattern has emerged: roughly 40–45% of all LLM citations come from the first 30% of the content on a page. Not the best 30%, the cleverest 30%, or the most engaging 30% — the first 30%. If your direct, substantive answer is not in the opening third of a piece, you are eliminating yourself from almost half of the citation opportunity regardless of how good the rest of the article is.
This is a significant departure from the “long-form, comprehensive, 3,000-word pillar page” orthodoxy that dominated SEO content strategy through the late 2010s. The pillar page is not dead — comprehensiveness still signals authority — but the opening must now function as a standalone answer. Think of it as a two-audience structure: the first 300 words serve the LLM and the scanner; the next 2,700 words serve the practitioner who has decided to read deeply.
The practical structural template that correlates best with citation frequency is:
- H1 and opening paragraph — state the question and answer it. The H1 mirrors the most common query variant. The opening paragraph (60–80 words) contains the direct, substantive answer. No rhetorical build-up, no stage-setting, no “in today's fast-paced digital landscape.”
- Each H2 section — question as heading, answer in first 50 words. Every major section is a standalone Q&A unit. The H2 is a fully-formed question. The first 40–60 words directly and specifically answer it. The remainder expands with detail, caveats, and supporting evidence for the human reader.
- Inline data and citations. Specific numbers with named sources (“Pew Research, July 2025”, “Semrush analysis, Q3 2025”) are weighted higher than vague claims. Attribute every non-obvious figure. Date your claims.
- Explicit “what this means” synthesis at the end of each section. Models are drawn to sentences that begin “the practical implication is…” or “what this means for your strategy is…” — because those are the sentences that summarise actionable takeaways, which is exactly what a user querying an AI assistant is seeking.
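The template above can be enforced in an editorial pipeline with a simple lint pass over markdown drafts. The sketch below is illustrative, not a citation predictor: the function name, the throat-clearing patterns, and the word-count threshold are all assumptions chosen to mirror the guidance in this section.

```python
import re

def check_front_loading(markdown_text, max_answer_words=60):
    """Flag H2 sections that do not follow the front-loaded Q&A template.

    Heuristic checks only: each '## ' heading should be phrased as a
    question, and the first sentence after it should be short and free
    of stock throat-clearing openers.
    """
    throat_clearing = re.compile(
        r"^(there are many|in today's|it depends|before we)", re.IGNORECASE
    )
    issues = []
    # Split the draft into H2 sections, keeping heading text with its body.
    sections = re.split(r"^## +", markdown_text, flags=re.MULTILINE)[1:]
    for section in sections:
        heading, _, body = section.partition("\n")
        if not heading.rstrip().endswith("?"):
            issues.append((heading, "heading is not question-formatted"))
        first_sentence = body.strip().split(". ")[0]
        if throat_clearing.match(first_sentence):
            issues.append((heading, "opening reads as throat-clearing"))
        elif len(first_sentence.split()) > max_answer_words:
            issues.append((heading, "opening sentence exceeds answer length"))
    return issues
```

Run it against drafts before publication; a clean report means every section opens with a direct, extractable answer.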
Technical Requirements — Schema, llms.txt, and What Does Not Matter
💡 Pro Tip
The single highest-ROI technical change for most sites in 2026 is implementing FAQ schema across every page with a Q&A structure, with every answer front-loaded in 40–60 words directly after the question. This compounds for AI Overviews, voice search and zero-click snippets simultaneously.
AEO is not a new technical discipline. It is classic technical SEO with a heavier emphasis on structured data, plus one emerging proposal that is worth watching but not worth panicking over.
FAQ schema on every Q&A-structured page. FAQPage structured data (schema.org/FAQPage) remains the highest-leverage single technical change for AEO. It tells answer engines explicitly which pairs of text are question-answer pairs, which dramatically improves the probability that your answer is extracted cleanly. Validate with Google's Rich Results Test after every deployment — malformed FAQ schema is worse than no schema because it can prevent the whole page from being indexed for rich results.
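As an illustration, FAQPage JSON-LD can be generated directly from the page's visible question/answer pairs. The helper below is a hypothetical sketch; embed its output in a script tag of type application/ld+json and validate with the Rich Results Test as described above.

```python
import json

def faq_jsonld(pairs):
    """Serialise question/answer pairs as schema.org FAQPage JSON-LD.

    `pairs` is a list of (question, answer) tuples. Answers should be
    the same front-loaded 40-60 word text that appears on the page:
    FAQ schema must mirror visible content, not replace it.
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)
```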
Article schema with complete authorship, dates, and organisation data. Every article needs Article structured data with author (linked to a Person schema with credentials), datePublished, dateModified, publisher (linked to Organization schema), and a mainEntityOfPage. Incomplete Article schema is the most common technical failure we see during AEO audits — sites publish articles with missing authorship or publisher data and wonder why LLMs treat them as lower-authority sources.
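This completeness check is easy to automate in an audit script. The sketch below assumes the field list from the paragraph above; the helper name is hypothetical.

```python
# Fields the paragraph above treats as required on every Article.
REQUIRED_ARTICLE_KEYS = (
    "author", "datePublished", "dateModified", "publisher", "mainEntityOfPage",
)

def audit_article_schema(schema):
    """Return the required Article-schema fields that are missing or empty.

    `schema` is the parsed JSON-LD dict; an empty return list means the
    checklist passes.
    """
    if schema.get("@type") != "Article":
        return ["@type is not 'Article'"]
    return [key for key in REQUIRED_ARTICLE_KEYS if not schema.get(key)]
```

Running this across a site's JSON-LD during an audit surfaces the missing-authorship and missing-date failures described above in minutes.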
HowTo schema on step-by-step content. For any process-driven content — audits, setup guides, tutorials — HowTo schema (schema.org/HowTo) structures the steps explicitly. LLMs read HowTo schema steps sequentially when answering “how do I…” queries, making this format disproportionately effective for practical, instructional content.
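A HowTo block can be generated the same way from an ordered step list. This is an illustrative sketch with a hypothetical helper name; step positions are derived from list order, matching the sequential reading described above.

```python
import json

def howto_jsonld(name, steps):
    """Serialise an ordered step list as schema.org HowTo JSON-LD.

    `steps` is a list of (step_name, step_text) tuples in execution
    order; the position field is filled in from the list order.
    """
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "HowTo",
        "name": name,
        "step": [
            {"@type": "HowToStep", "position": i, "name": n, "text": t}
            for i, (n, t) in enumerate(steps, start=1)
        ],
    }, indent=2)
```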
Product and Organization schema for commercial pages. Any page describing a commercial offering — service, product, pricing — needs Product schema with full data (name, description, provider, offer, aggregateRating if applicable). Organization schema on the homepage with sameAs links to your verified social profiles, LinkedIn company page, and any industry directories establishes the entity relationship that LLMs use to disambiguate your brand from other organisations with similar names.
The llms.txt proposal — useful, not urgent. A proposed standard (llms.txt, at the domain root, analogous to robots.txt) allows publishers to describe their site specifically for LLM crawlers. Adoption is genuine but partial across providers: Anthropic, OpenAI and a handful of specialist AI search tools read it; Google has not committed. Implementing it is low-cost and low-risk — a plain text file describing your site's structure and key pages. Do it. It will not transform your AEO outcomes overnight, but it is a small permanent improvement with no downside.
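A minimal llms.txt can be rendered programmatically. This sketch follows the proposed format (an H1 site name, a blockquote summary, then markdown lists of key pages); the helper name and section title are assumptions.

```python
def llms_txt(site_name, summary, key_pages):
    """Render a minimal llms.txt body in the proposed markdown format.

    `key_pages` is a list of (title, url, description) tuples; the
    output is plain markdown intended to be served at /llms.txt.
    """
    lines = [f"# {site_name}", "", f"> {summary}", "", "## Key pages", ""]
    lines += [f"- [{title}]({url}): {desc}" for title, url, desc in key_pages]
    return "\n".join(lines) + "\n"
```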
What does not matter: the vast majority of “AI-optimised content tools” being sold in 2026. AEO content is content. It needs to be well-structured, well-evidenced, and front-loaded. No tool replaces the work of writing a directly-responsive answer to a real question and backing it with real data. Buying a tool is not a strategy.
How to Audit Your Current AEO Visibility
Audit Action Tiers
- Tier 1: broken tracking, critical SEO errors, active budget waste costing you money today.
- Tier 2: landing page improvements, keyword refinements, content gaps on high-traffic pages.
- Tier 3: new channel launches, content calendar, attribution model migration.
Before committing to an AEO programme, measure your current visibility. An AEO audit runs in three progressively deeper tiers — pick the depth that matches the question you are trying to answer.
Measuring AEO Without a Dedicated Analytics Tool
No standard analytics tool reports “visits from AI citations” as a clean channel. Traffic from ChatGPT, Perplexity, Gemini and Copilot lands in GA4 tagged variously as direct, referral, or organic depending on the interface and browser. The following measurement approach uses available tools to construct a reliable proxy picture — sufficient for quarterly reporting and for demonstrating business impact.
GA4: isolate AI referral sources. In GA4, create an exploration report filtered by session source containing “chatgpt.com”, “perplexity.ai”, “copilot.microsoft.com”, “gemini.google.com”, and “bing.com/search” (since Copilot traffic often attributes through Bing). These are not comprehensive — many LLM-originated visits strip referrer data — but the trend in this subset is a reliable directional indicator. Track the month-over-month growth rate of this cohort as a primary AEO KPI.
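The same source-contains condition can be replicated when post-processing a GA4 export. The host list below mirrors the filter described above; the function name is a hypothetical helper, and as noted, this undercounts because many LLM-originated visits strip referrer data entirely.

```python
# Referrer hosts that indicate an AI-assistant origin. bing.com is
# deliberately excluded here because it mixes Copilot with ordinary
# Bing search traffic and needs separate handling.
AI_REFERRER_HOSTS = (
    "chatgpt.com",
    "perplexity.ai",
    "copilot.microsoft.com",
    "gemini.google.com",
)

def is_ai_referral(session_source):
    """Classify a GA4 session-source string as an AI-assistant referral."""
    source = (session_source or "").lower()
    return any(host in source for host in AI_REFERRER_HOSTS)
```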
Google Search Console: AI Overview query segmentation. GSC does not report AI Overview appearances directly, but the pattern is detectable: query-level impression growth paired with flat or declining CTR is the fingerprint of a query being answered by an AI Overview rather than clicked through. Build a segment of your top 50 informational queries and monitor impression-to-click ratio monthly. A sudden CTR compression without position change is your signal that Google is serving an AI summary for that query and eroding your click share — either you optimise to be cited inside the summary, or you find replacement queries with better click-through economics.
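The fingerprint described above (impressions up, CTR down, position stable) can be flagged automatically across an exported query segment. The thresholds in this sketch are illustrative defaults, not published benchmarks.

```python
def ctr_compression_flags(monthly, min_impression_growth=0.10, min_ctr_drop=0.20):
    """Flag queries showing the AI Overview fingerprint: impressions
    growing while CTR falls at a roughly stable average position.

    `monthly` maps query -> chronological list of
    (impressions, clicks, avg_position) tuples from Search Console.
    """
    flagged = []
    for query, rows in monthly.items():
        if len(rows) < 2:
            continue
        (imp0, clk0, pos0), (imp1, clk1, pos1) = rows[0], rows[-1]
        ctr0 = clk0 / imp0 if imp0 else 0.0
        ctr1 = clk1 / imp1 if imp1 else 0.0
        impressions_up = imp1 >= imp0 * (1 + min_impression_growth)
        ctr_down = ctr0 > 0 and ctr1 <= ctr0 * (1 - min_ctr_drop)
        position_stable = abs(pos1 - pos0) <= 2.0
        if impressions_up and ctr_down and position_stable:
            flagged.append(query)
    return flagged
```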
Direct prompt testing. Once a week, run a defined set of 20–30 prompts against ChatGPT, Perplexity, Gemini and Copilot that are representative of how your ideal customer would describe the problem you solve. Log the citations returned. Track the percentage of prompts that cite your domain and the percentage that cite direct competitors. This is the closest thing to a ranking report that exists for AEO — and because almost no one is doing it yet, the baseline measurement itself is a competitive advantage.
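The weekly log can be summarised into citation-share percentages, the closest AEO equivalent of a rank report. The data shape and helper name below are assumptions for illustration.

```python
from collections import Counter

def citation_share(prompt_logs, our_domain, competitor_domains):
    """Summarise a prompt-test run as citation-share percentages.

    `prompt_logs` maps prompt text -> set of cited domains, as logged
    manually from each answer engine's response. Returns the share of
    prompts citing us (key "us") and each tracked competitor.
    """
    total = len(prompt_logs)
    if total == 0:
        return {}
    hits = Counter()
    for cited in prompt_logs.values():
        if our_domain in cited:
            hits["us"] += 1
        for comp in competitor_domains:
            if comp in cited:
                hits[comp] += 1
    return {name: round(100 * count / total, 1) for name, count in hits.items()}
```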
Specialist AEO monitoring tools. Semrush, Ahrefs, and standalone tools like Otterly, Profound, and Scrunch have all launched LLM visibility modules through 2025. Pricing ranges widely. For a single-domain audit in 2026, a fortnightly manual prompt test is usually adequate. For ongoing programme measurement across a client portfolio, one of the specialist tools becomes cost-justified once you are managing AEO for more than three or four sites.
Tie everything back to business outcomes. AEO traffic converts at roughly 4.4× the rate of standard organic traffic. The headline metric that matters to your finance director is not “citations per week” — it is whether revenue attributed to organic and LLM-referral sources, combined, is growing. Report on the blended number. Internal stakeholders who have been alarmed by “AI is killing our traffic” coverage need to see the combined revenue trend, not just the click count, to make the correct strategic decision about 2026 marketing investment.
Frequently Asked Questions
What is Answer Engine Optimisation (AEO) and how is it different from SEO?
AEO is the practice of structuring your content, data, and technical signals so that AI-powered search interfaces cite your site when generating answers. It builds on classic SEO but optimises for citation rather than rank — a different selection process, given that only 12% of LLM-cited URLs rank in Google's top 10 (Semrush, July 2025).
Is SEO dead in the AI search era?
No. Classic SEO remains the foundation, but it is no longer sufficient on its own. When an AI Overview appears, organic click-through drops from 15% to 8% (Pew Research Center, July 2025), so being cited inside generated answers now matters alongside ranking — and LLM-referred visitors convert roughly 4.4× better than organic visitors.
How do I get my content cited by ChatGPT, Perplexity, or Google AI Overviews?
Front-load direct 40–60 word answers under question-formatted headings, attribute every dated statistic to a named source, demonstrate first-hand experience, and mark pages up with FAQ, Article, and HowTo schema. Roughly 40–45% of LLM citations come from the first 30% of a page, so put the substantive answer first.
Should I publish an llms.txt file for my site?
Yes. It is a low-cost, low-risk plain text file at your domain root describing your site for LLM crawlers. Adoption is partial — Anthropic, OpenAI and some specialist tools read it, while Google has not committed — so treat it as a small permanent improvement, not a transformative one.
How do I measure AEO traffic when GA4 does not track it directly?
Filter GA4 session sources for chatgpt.com, perplexity.ai, copilot.microsoft.com and gemini.google.com as a directional proxy, monitor impression-to-click compression on top informational queries in Search Console, and run a recurring set of representative prompts against each answer engine, logging which domains are cited.
Published by
Digitaso Media
Digital Marketing Agency
Digitaso Media is a full-stack digital marketing agency helping businesses generate predictable leads and sales through data-driven SEO, paid advertising, and conversion strategy.