ChatGPT has 800 million weekly users. Perplexity handles 780 million queries a month. Only eleven percent of domains are cited by both. Here's the platform-by-platform playbook for getting cited in 2026 — and what gets you quietly de-ranked.

Two years ago, getting cited by ChatGPT was a curiosity. A nice-to-have. Something you might brag about on LinkedIn after stumbling into it by accident.

Twelve months ago, it was an emerging discipline. Most agencies were still telling clients to "keep doing SEO" and add a schema audit on top.

That advice is now actively wrong.

AI-referred sessions jumped 527% year-over-year in the first half of 2025. ChatGPT now serves over 800 million weekly active users. Perplexity processes roughly 780 million queries a month. Google's AI Overviews appear in up to 60% of searches. Traditional organic search traffic is projected to decline 25% by the end of 2026.

What's happening — and the part most agencies are missing — is that the high-intent buyer behavior that used to live on Google now lives inside AI answers. The buyers who type "best CRM for mid-market SaaS" into ChatGPT in 2026 are the same buyers who used to type it into Google in 2022. They get one paragraph back. They get three names. If you aren't one of the three names, you don't exist in that conversation.

This is the world Generative Engine Optimization — GEO — was built for. And the rules are different enough that the old SEO playbook will actively cost you citations if you apply it without translation.

SEO gets you clicked. GEO gets you quoted.

That's the cleanest way to think about the distinction. Traditional SEO optimizes for ranking on a search engine results page so a person can click through to your site. GEO optimizes for being one of the sources an AI system pulls from, summarizes, and cites when it writes its answer.

The two disciplines reinforce each other at the foundation. A slow site with thin content and no domain authority isn't going to get cited by an AI engine, no matter how perfectly structured your FAQ schema is. But once you have the SEO foundation in place, the optimization criteria for citations diverge sharply from the optimization criteria for rankings.

527%: year-over-year increase in AI-referred sessions in the first half of 2025
11%: share of domains cited by both ChatGPT and Perplexity for the same query
60%: share of Google searches that now show an AI Overview at the top of the page

The 11% overlap statistic is the one that catches operators off guard. It tells you that ranking on one AI platform does not transfer to another. Each engine has its own selection logic, its own preferred source types, and its own bias toward freshness, structure, and authority signals. If you treat them as a single category, you optimize for none of them.

Platform by platform: what each engine actually wants

ChatGPT

ChatGPT carries roughly 70% of AI search query volume. When it runs a web search, it pulls primarily from Bing's index — not Google's. This single fact rewrites the priority list for any team optimizing for citation share.

The first move is unglamorous and free: submit your sitemap to Bing Webmaster Tools. If you've spent the last decade as a Google-first SEO shop, your Bing index footprint is probably worse than you think. Half the work of getting cited by ChatGPT is making sure Bing knows you exist.
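Beyond the one-time sitemap submission, Bing also supports the IndexNow protocol for pushing new and updated URLs as you publish them. A minimal sketch of the submission payload — the domain, key, and URL are placeholders, and the key must also be served as a plain-text file at `https://<host>/<key>.txt` for verification:

```python
import json

# IndexNow endpoint (Bing participates in the protocol).
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    """Build the JSON body IndexNow expects for a batch URL submission."""
    return {"host": host, "key": key, "urlList": urls}

payload = build_indexnow_payload(
    host="example.com",              # placeholder domain
    key="0123456789abcdef",          # placeholder key
    urls=["https://example.com/blog/geo-playbook"],
)
print(json.dumps(payload, indent=2))
# To submit for real, POST this payload as JSON to INDEXNOW_ENDPOINT
# (e.g. with urllib.request) after hosting the key file at the site root.
```

The payload-building step is separated out so the same batch can be logged and retried; the actual POST is a one-liner once the key file is in place.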

The second move is structural. ChatGPT consistently favors content with clear authority signals: a named author with a real bio, visible publication and update dates, inline references and citations to source material, and a tight thesis stated in the first 60 words of any section. The model is doing comprehension at the section level, not the page level. If your section header doesn't have its answer near the top, it loses.

Perplexity

Perplexity is the easiest of the major engines to model. It is explicitly citation-first — every claim in every answer is attributed to a specific source. It runs real-time web search on every query. It has a strong bias toward recent content. And it weights structured factual content heavily.

What gets cited on Perplexity: short, declarative, fact-dense sections; recent publication dates (last 90 days is a real threshold); content that answers a specific question with a specific answer, not a meandering essay; and sources Perplexity has cited before, which compounds.

What doesn't get cited on Perplexity: thin content, content older than two years that hasn't been refreshed, marketing pages that don't answer questions, and PDFs unless they're well-indexed elsewhere.

Google AI Overviews

Google's AI Overviews are the most "SEO-like" of the AI surfaces because they reuse Google's existing ranking signals as input. If you rank in the top three for a query, you have a meaningfully higher chance of being summarized in the AI Overview for that query — but it's not a guarantee, and Google's logic for what gets quoted is less transparent than Perplexity's.

Structured data matters here in a way it doesn't on ChatGPT or Perplexity. FAQPage schema, HowTo schema, Article schema with proper author markup, and product schema for commerce queries all influence what gets pulled into the synthesis.
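FAQPage markup, for example, is a JSON-LD object embedded in a `<script type="application/ld+json">` tag in the page head. A minimal sketch that generates it from question-and-answer pairs — the Q&A text here is a placeholder:

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Serialize question/answer pairs as FAQPage JSON-LD,
    ready to embed in a <script type="application/ld+json"> tag."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(doc, indent=2)

print(faq_jsonld([
    ("What is GEO?",  # placeholder Q&A
     "Generative Engine Optimization: structuring content so AI engines cite it."),
]))
```

Generating the markup from the same source of truth as the visible FAQ copy keeps the two in sync, which matters because mismatched schema and page text is itself a trust signal problem.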

Claude and Gemini

Both Claude and Gemini are growing faster than most analytics tools are tracking. Their selection logic is closer to ChatGPT's than to Perplexity's — comprehensive, well-sourced, authority-signaled content. The optimization work that wins on ChatGPT will get you most of the way there on both.


The five things that actually move citation share

After running this work for B2B clients across financial services, professional services, and technical industries, the same five interventions show up in every winning playbook.

  1. A 40-to-60-word conclusion at the start of every major section. AI systems do extraction at the section level, not the page level. The model wants a tight summary it can quote. Put one at the top of each H2, even if it feels redundant to a human reader. It's the single highest-leverage change you can make.
  2. Visible author, visible bio, visible dates. No anonymous content. No "Posted by the Team." A real human, with real credentials, with the date this was written and the date it was last updated. ChatGPT and Claude both downweight content that can't be attributed to a person.
  3. Structured data on every page. Article schema, FAQPage schema where applicable, organization schema with proper sameAs links. JSON-LD format. This is the price of admission for Google AI Overviews and a meaningful lift on the other engines.
  4. Inline references to credible sources. Not as a footnote tax — as a working part of the content. If you're citing a statistic, link the source. If you're making a claim, ground it. AI systems use outbound references as a signal that your content is itself reference-grade.
  5. Update cadence. Recent dates win on Perplexity. Refreshed dates win on ChatGPT. A 2023 article with a 2024 update is materially more citeable than a 2024 article with no update history. Building an update review into your editorial calendar is now part of GEO, not part of housekeeping.
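Items 2, 3, and 5 converge in Article schema: the named author, the publication date, and the refresh date are all first-class fields. A sketch of the JSON-LD those interventions imply — the names, URLs, and dates below are placeholders:

```python
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The 2026 GEO playbook",          # placeholder headline
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                        # placeholder author (item 2)
        "url": "https://example.com/about/jane",   # visible bio page
    },
    "datePublished": "2025-09-01",                 # placeholder dates
    "dateModified": "2026-01-15",                  # refresh history (item 5)
    "publisher": {"@type": "Organization", "name": "Example Agency"},
}
print(json.dumps(article, indent=2))
```

Note that `dateModified` only helps if the page content actually changed; the schema field should follow the editorial refresh, not substitute for it.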

What gets you quietly de-ranked

This is the section every agency leaves out, because the practices that get punished are also the practices a lot of agencies are still selling.

AI-generated content with no human judgment. The frontier models can detect their own output with high accuracy, and they actively downweight it. Generation is fine. The issue is publishing what came out of the prompt without an editor who can argue with it. If your content doesn't have a point of view, it won't get cited as a source.

Astroturfing. Fake reviews, paid community mentions, sockpuppet Reddit accounts, fabricated case studies. Detection has gotten very good. The downside — being flagged as untrustworthy by the engines you're trying to be cited by — exceeds the upside in every model we've run.

Thin pages. Two hundred words of marketing copy on a service page won't get cited for any meaningful B2B query. Either commit to the page being a real answer to a real question, or remove it from the index.

Treating GEO as a one-time audit. Citation share decays. Engines refresh their preferred sources. New competitors enter the citation set. A GEO program is a maintenance discipline, not a project.


How to measure what's actually working

Most teams don't measure GEO at all. The ones who do tend to use a mix of direct query monitoring, citation tracking tools, and referral analytics.

Direct query monitoring is manual but reveals ground truth. Pick 15-25 target queries — the questions your buyers actually ask AI engines about your category. Run them through ChatGPT, Claude, Perplexity, and Gemini. Document which sources get cited. Repeat weekly. Watch the share-of-citation number move.
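The bookkeeping for that weekly loop is simple. A sketch of the share-of-citation calculation, assuming you log which domains each engine cited for each query (the engines, queries, and domains below are hypothetical sample data):

```python
from collections import defaultdict

def citation_share(runs: list[dict], domain: str) -> dict[str, float]:
    """Fraction of logged queries, per engine, whose answer cited `domain`.

    Each run is a dict: {"engine": ..., "query": ..., "cited_domains": [...]}.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for run in runs:
        totals[run["engine"]] += 1
        if domain in run["cited_domains"]:
            hits[run["engine"]] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}

# Hypothetical week of logged results:
runs = [
    {"engine": "perplexity", "query": "best crm", "cited_domains": ["ours.com", "g2.com"]},
    {"engine": "perplexity", "query": "crm pricing", "cited_domains": ["g2.com"]},
    {"engine": "chatgpt", "query": "best crm", "cited_domains": ["ours.com"]},
]
print(citation_share(runs, "ours.com"))  # perplexity 0.5, chatgpt 1.0
```

Tracking the per-engine number separately matters because of the 11% overlap problem: a gain on Perplexity tells you nothing about ChatGPT.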

Citation tracking tools automate this process. Goose, Profound, Writer, Otterly, and several emerging platforms now monitor brand mentions and citations across multiple AI engines on a schedule. Expect this category to mature fast through 2026.

Referral analytics catches the click-through traffic. ChatGPT and Perplexity both send measurable referral sessions now, and GA4 can identify them by referrer. The volume is still smaller than organic search — but the conversion rates from AI-referred traffic are running 2-4x higher than traditional organic in most B2B categories we've measured. Pre-informed buyers convert better.

If your buyers are asking AI about your category — and AI isn't naming you — we can fix that.

Our AI SEO & LLM Authority program audits your current citation share, identifies the queries you should be winning, and rebuilds the content infrastructure to be cited by ChatGPT, Perplexity, Claude, and Google AI Overviews. 90-day program. Reportable results.

See how the program works →

The bottom line

The companies that started serious GEO work in 2024 and 2025 are the ones being cited across multiple AI engines today. That position is defensible. It compounds. And it's only getting more valuable as the percentage of high-intent B2B research that happens inside AI keeps climbing.

The companies that wait another twelve months will be trying to displace incumbents who have spent two years building citation share — and who have the most valuable real estate on the internet, which is the answer ChatGPT gives when somebody asks about your category.

Buyers ask AI before they ask Google now. Your business should be the answer.