What Actually Works in GEO: 15 Evidence-Backed Tactics and 7 Speculative Ones

Written by Gabriel Bertolo
April 14, 2026

A single content change, swapping vague language for specific statistics, delivered the biggest single-method gain in AI search visibility yet measured: +37 to +41% on Position-Adjusted Word Count.

That number comes from the only peer-reviewed study on Generative Engine Optimization ever published. Researchers from Princeton, Georgia Tech, IIT Delhi, and the Allen Institute for AI tested nine optimization strategies across 10,000 queries, ran the results through a custom RAG pipeline, validated everything on Perplexity, and published at KDD 2024. A top-tier conference. Real methodology.

Keyword stuffing, the tactic that still shows up in half the “GEO checklists” floating around LinkedIn? It did nothing. Sometimes performed worse than baseline.

That’s the gap between what actually works and what most agencies are charging for.

I spent three months pulling apart every major AI search study I could find. Twelve of them. Covering more than 17 million AI citations across ChatGPT, Perplexity, Google AI Overviews, Gemini, Bing Copilot, and Claude. Ahrefs. Semrush. Seer Interactive. Profound. BrightEdge. SE Ranking. Yext. Surfer SEO. Independent researchers like Kevin Indig and Mike King.

This article is the result. Twenty-two tactics, split into two sections.

Section 1 covers the 15 tactics supported by peer-reviewed research, controlled experiments, large-scale data studies, or credible first-party analysis. The things the evidence actually backs.

Section 2 covers the 7 speculative tactics that get a ton of airtime but don’t have the data to justify the hype. Some are promising. Some are a waste of money. Two are ethically sketchy enough that I’ll tell you outright not to touch them.

At the end, I’ll show you how we applied this research for a national B2B manufacturer who went from invisible to #1 in AI search, and how every result maps directly to the studies below.

Fair warning: this one runs long. I’m not watering the data down.

 

Section 1: Evidence-Backed Tactics

1. Statistics, citations, and quotations remain the highest-impact content modifications

The Aggarwal et al. GEO paper is the foundation for everything in this article. They tested nine content-level modifications and measured two things: Position-Adjusted Word Count (how much of your content AI uses and how prominently it uses it) and Subjective Impression (how meaningfully that content shapes the final response).

 

Three tactics dominated.

 

Statistics Addition

Swapping qualitative claims for quantitative data delivered +37 to +41% improvement on Position-Adjusted Word Count. The effect was strongest for law, government, and opinion queries where verifiable numbers anchor otherwise subjective arguments. SE Ranking’s independent study of 129,000 domains confirmed it at scale: pages with 19 or more statistical data points averaged 5.4 ChatGPT citations versus 2.8 for data-light pages. ZipTie.dev found data-rich pages earn nearly double the AI citations overall.

 

Quotation Addition

Embedding attributed expert quotes produced +28 to +40% visibility improvement. Most effective for history, people, and explanation queries. SE Ranking backed this up: pages with expert quotes averaged 4.1 citations versus 2.4 without. ZipTie.dev reported expert quotes correlate with +71% more AI citations.

 

Cite Sources

Adding inline references to credible publications showed a modest +8% improvement on its own but became the most powerful combinatorial tactic, averaging +31.4% when paired with other methods. Inline citations lower the perceived risk for RAG systems deciding whether to trust and surface your content. Simple as that.

 

The best-performing combination was Fluency Optimization + Statistics Addition, which outperformed any single strategy by more than 5.5%. Write well. Pack it with data. That’s the formula.

These findings replicated on Perplexity, where the best methods improved the baseline by 22% on Position-Adjusted Word Count and 37% on Subjective Impression. Here’s the part that matters most for agencies and challenger brands: lower-ranked sites (position 5+) improved visibility by 115% when optimized, while position-1 sites actually decreased by around 30%. GEO optimization disproportionately helps the underdog, which is exactly why we keep seeing it work for our mid-market clients going up against Fortune 500 incumbents.

AutoGEO (2025) later validated similar patterns on Gemini, GPT-4o-mini, Claude, and DeepSeek. This isn’t a platform-specific phenomenon.

Platforms: Custom GPT-3.5 RAG pipeline, Perplexity, Gemini, GPT-4o-mini, Claude, DeepSeek.

Implementation: Go through your highest-priority pages and replace vague language (“significant growth”) with specific numbers (“37% year-over-year growth”). Add attributed expert quotes with credentials. Drop in inline citations to .edu, .gov, and high-authority sources. And don’t skip readability. Simplifying language alone produced a consistent 15-30% lift independent of everything else.
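If you’re auditing dozens of pages, even a crude script speeds up the triage. Here’s a minimal Python sketch; the hedge-word list and the flagging rule are illustrative assumptions of mine, not something from the study:

```python
import re

# Illustrative hedge words -- extend for your own content (my assumption, not from the GEO paper)
VAGUE = re.compile(r"\b(significant|substantial|many|most|huge|rapid|considerable)\b", re.I)
HAS_NUMBER = re.compile(r"\d")

def flag_vague_paragraphs(text: str) -> list[str]:
    """Return paragraphs that use vague quantifiers but contain no digits."""
    return [
        para.strip()
        for para in text.split("\n\n")
        if VAGUE.search(para) and not HAS_NUMBER.search(para)
    ]

page = """Our platform saw significant growth last year.

Revenue grew 37% year-over-year, driven by enterprise accounts."""

for para in flag_vague_paragraphs(page):
    print("REVIEW:", para)  # candidate for a statistics swap
```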

 

2. Answer-first structure exploits documented position bias in LLMs

Stanford’s “Lost in the Middle” study (Liu et al., TACL 2024) nailed down something important about how LLMs actually read. They exhibit a U-shaped performance curve. They perform best when relevant information shows up at the beginning or end of the input context, with a 30%+ accuracy drop when answers get buried in the middle of a 20-document context.

That architectural bias changes how you should structure every page.

CXL’s analysis of 100 AI Overview citations found 55% of citations originate from the top 30% of a page. Kevin Indig’s study of 1.2 million ChatGPT citations (published in Growth Memo) documented what he calls the “Ski Ramp” effect: ChatGPT pays disproportionate attention to content at the top of the page, and content with direct “X is Y” opening statements gets cited far more often. Cited text was 2x more likely to contain question marks, so Q&A formatting improves extractability. The reported p-value rounds to zero. Statistically, it doesn’t get more clear-cut.

SE Ranking’s study of 216,524 pages locked in the structural piece: 120-180 words between headings is the optimal section length, correlating with 4.6 citations versus 2.7 for sections under 50 words. And here’s the one that should kill a few industry myths: content length itself showed essentially no correlation with citation probability (Ahrefs, r=0.04). Fifty-three percent of cited pages contained fewer than 1,000 words. What matters is structure, not length.

Semrush analyzed 304,805 URLs cited by LLMs versus 921,614 Google-ranking URLs and ranked the top citation predictors: clarity and answer-first summarization (+33%), E-E-A-T signals (+31%), Q&A format (+25%), section structure with heading hierarchy (+23%), and structured data (+22%). The message is consistent across every study: AI systems reward content structured for extraction, not content that’s long or keyword-dense.

Platforms: All of them. The architectural bias exists in every transformer-based model.

Implementation: Lead every section with a 40-60 word “answer capsule” that directly responds to the implicit question in the heading. Use self-contained paragraphs of 120-180 words that make sense even when extracted alone. Headers should be full questions or descriptive phrases, not clever labels. And put your most critical information in the first 30% of the page and in the final paragraph. That’s where AI is looking.
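To make the pattern concrete, here’s a hypothetical answer-first section in HTML. The heading is a full question, and the capsule answers it immediately, using figures already cited in this article:

```html
<!-- Heading phrased as the question; the capsule answers it in roughly 40-60 words -->
<h2>How long should a section be for AI citation?</h2>
<p>
  Sections of 120-180 words between headings earn the most AI citations:
  4.6 on average versus 2.7 for sections under 50 words, per SE Ranking's
  study of 216,524 pages. Write each section to answer its heading directly
  and to make sense when extracted on its own.
</p>
```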

 

3. Brand mentions now correlate more strongly with AI visibility than backlinks

This one should rewrite your PR strategy.

Ahrefs studied 75,000 brands in August 2025 and produced arguably the most paradigm-shifting finding in GEO: brand web mentions correlate at 0.664 with AI Overview visibility, while backlinks correlate at only 0.218. The top three correlating factors were brand web mentions, branded anchors, and branded search volume. Brands in the top 25% for web mentions earned more than 10x the AI Overview citations of the next quartile. And 26% of brands had zero AI Overview mentions. A flat-out visibility cliff.

This aligns with Chen et al.’s September 2025 paper (“Generative Engine Optimization: How to Dominate AI Search,” arXiv 2509.08919), which found AI search exhibits systematic and overwhelming bias toward earned media: third-party authoritative sources over brand-owned and social content. The contrast with traditional Google search, which presents a more balanced source mix, is dramatic.

Airops (October 2025) put a number on the earned media advantage: brands are 6.5x more likely to be cited through third-party sources than their own domains. Stacker’s December 2025 analysis found that distributing content to reputable publications can boost AI citations by up to 325% versus own-site-only publishing. Press releases? Nearly worthless. Syndicated wire content earns just 0.04% of AI citations.

Kevin Indig independently confirmed the strong correlation between brand popularity and AI search appearances. Profound’s analysis of 680 million citations found brand search volume is the single strongest direct predictor of AI citations. And fewer than 30% of brands most mentioned by AI are also among the most cited, which tells you that mention volume and citation quality are separate optimization problems.

Platforms: The brand-mention advantage applies everywhere, but which sources each platform trusts varies a lot. ChatGPT favors Wikipedia and authoritative media (Forbes, Reuters, TechRadar, NerdWallet). Perplexity favors Reddit and community content. Google AI Overviews lean on a mix weighted toward its own organic index.

Implementation: Shift PR strategy from link acquisition to mention acquisition. Secure editorial coverage in publications AI systems actually cite. Track unlinked brand mentions as a primary KPI next to traditional backlink metrics. Build a systematic earned media program targeting the specific domains each AI platform favors.

This is exactly what we did in our GEO case study: distributed over 100,000 content pieces across third-party platforms. Result: 587.6% branded search growth and 283 AI citations across platforms in seven months.

 

4. Content freshness is now a measurable citation advantage across all platforms

Ahrefs’ analysis of 17 million citations (July 2025) found AI-cited content is 25.7% fresher on average than traditionally ranked organic content. The freshness premium varies dramatically by platform.

ChatGPT shows a strong recency signal: 76.4% of its most-cited pages were updated within the last 30 days. Perplexity is even more extreme, with about 50% of citations coming from current-year content alone (Seer Interactive). Google AI Overviews show the least freshness bias, with citation patterns closer to organic ranking age profiles.

Multiple practitioners converge on a consistent finding: content updated within the past three months is about 2x more likely to be cited than content older than 90 days. The effective shelf life for AI citation eligibility is roughly 13 weeks. Fifty percent of AI citations come from content less than 13 weeks old.

Here’s the caveat. Google explicitly identifies artificially inflated modification dates as a spam signal. Changing a publish date without meaningful content updates (at least 20% substantive revision) gets you nothing and risks penalties. The freshness signal has to come from genuine content improvement. New data. Updated examples. Revised analysis. Not a fake date bump.

Platform ranking by freshness obsession: Perplexity (most), then ChatGPT (strong recency bias), then Gemini (balanced), then Google AI Overviews (least, most aligned with organic age profiles).

Implementation: Set up a quarterly update cadence for your high-value pages. Add visible “Last updated” dates and changelogs. Use dateModified schema that reflects real changes. For fast-moving industries (AI, SaaS, fintech), monthly updates are necessary. Prioritize statistics-heavy and comparison pages first. Those compound the freshness advantage with the statistics citation advantage from the GEO paper.
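Here’s a minimal JSON-LD sketch of the schema side (the headline, dates, and author name are placeholders). The rule that matters: dateModified only moves when the content genuinely changes:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "2026 Widget Industry Benchmarks",
  "datePublished": "2025-06-10",
  "dateModified": "2026-03-02",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>
```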

 

5. YouTube is the single strongest signal for AI Overview visibility

This one surprised me.

Ahrefs’ 75,000-brand study found that mentions on YouTube (in video titles, transcripts, and descriptions) are the strongest correlating factor with AI Overview visibility, surpassing even general web mentions. YouTube is the most-cited domain in AI Overviews, accounting for roughly 5.6% of all citations in Ahrefs’ data and growing 34% over six months. Surfer SEO’s analysis of 36 million AI Overviews puts the share far higher, at approximately 23.3% of all citations. The studies disagree on magnitude, not on the ranking: number-one domain.

Now here’s the part that should flat-out change how you think about content strategy. Among pages cited in AI Overviews that don’t rank in Google’s top 100 organic results, 18.2% are YouTube URLs. YouTube content has a citation pathway that’s completely independent of traditional SEO performance.

AI systems don’t watch videos. They read transcripts, descriptions, chapter timestamps, and on-screen text. The YouTube-Commons dataset contains nearly 30 billion words of transcript data used in LLM training. This makes transcript accuracy the single most important optimization lever. A poorly transcribed video is invisible to AI. A clearly transcribed one becomes a trusted citation source.

Platforms: Google AI Overviews (strongest), Gemini (strong, due to Google integration), Perplexity (moderate), ChatGPT (growing).

Implementation: Make a companion YouTube video for every high-value topic on your site. Upload corrected transcripts. Do not rely on auto-generated ones. Structure videos around explicit question-and-answer segments with chapter markers aligned to search intent. Speak brand names, product names, and key data points clearly so transcripts come out accurately. Optimize metadata with descriptive titles and thorough descriptions. Target detailed 10-15 minute formats (tutorials, reviews, explainers) rather than short-form content. Short-form doesn’t get cited.

 

6. Reddit dominates AI citations, but citation share is volatile, and manipulation is risky

Reddit is the number-one most-cited domain when you aggregate across all AI platforms, at 3.11% of total citations (Profound, 4 billion+ citations analyzed). Perplexity is the most Reddit-dependent platform, with Reddit accounting for 6.6% of total citations and up to 46.5% in some category analyses. Google AI Overviews cite Reddit at about 2.2% and rising. Reddit’s AI Overview market share jumped 4.2 percentage points after Google’s March 2025 core update.

The Reddit-Google data licensing deal (February 2024) cemented the structural advantage. Reddit’s data licensing revenue hit $35 million in Q2 2025, with AI partnerships comprising roughly 10% of total revenue. Reddit traffic grew to 1.4 billion monthly visits by April 2025, with a 450% increase in AI citations from March to June 2025.

But ChatGPT’s Reddit citation behavior got volatile fast. Reddit citations collapsed from ~60% to ~10% of responses in September 2025, likely because OpenAI intentionally started reducing single-source over-citation bias. Reddit has since stabilized at around 3% share on ChatGPT, back to pre-spike levels.

Here’s the finding that should change how everyone thinks about Reddit strategy: most cited Reddit posts have fewer than 20 upvotes and 20 comments. AI systems optimize for helpfulness and relevance, not popularity. The average cited post is about a year old. The formats that get cited are Q&A threads (over 50% of cited Reddit content), comparison posts (“X vs Y”), and discussion formats. Those three account for roughly 75% of all cited Reddit content.

And the risks are substantial. The Trap Plan scandal (late 2025) saw a game marketing firm post around 100 fake “organic” comments and face severe public backlash. The University of Zurich bot experiment (April 2025) involved AI bots making 1,700+ fabricated comments on r/changemyview, which moderators labeled “psychological manipulation.” Reddit’s automated systems flag coordinated inauthentic behavior. Lily Ray has warned that spammy tactics could get you excluded from future LLM training data.

Don’t try to game it. Contribute for real, or stay out.

Platforms: Perplexity (#1 source), Google AI Overviews (#2), ChatGPT (volatile but significant), Grok (#2 source).

Implementation: Pick 3-5 subreddits where your ideal customer asks questions that your product addresses. Contribute genuinely for 30+ days before any brand mention. When you do mention your product, answer questions with the specific, data-backed detail that AI systems favor for citation. Never incentivize posts or use aged-account services. Monitor organic brand mentions and engage authentically. Accept that Reddit-based GEO is a long-term reputation play, not a quick win. If you wanted a quick win, this isn’t the channel.

 

7. Entity optimization through Wikipedia, Wikidata, and sameAs schema has a measurable impact

Wikipedia is the number-one or number-two most-cited source by ChatGPT and most LLMs. It accounts for 7.8% of total ChatGPT citations and nearly half (47.9%) of ChatGPT’s top-10 cited sources. Wikipedia content makes up approximately 3% of GPT-3’s training data and appears in virtually every major LLM training dataset. Academic RAG systems using Wikipedia as a retrieval source (REALM, DPR) provably reduce hallucinations.

Schema App ran a controlled case study on sameAs entity linking. After adding sameAs properties to location pages (linking to Wikipedia, Wikidata, and Google Knowledge Graph entities), they got a 46% increase in impressions and 42% increase in clicks for non-branded queries over 85 days. The sameAs property acts as an “entity canonical.” It disambiguates which real-world entity a page references and strengthens entity confidence in RAG retrieval systems.

Wikidata is a lower barrier to entry than Wikipedia. Verifiable facts can be added without meeting Wikipedia’s notability standards. It also powers knowledge panels in both Google and Bing. A London School of Economics experiment integrating theses into Wikidata produced a 47% increase in downloads and doubled traffic from Wikipedia. Google Knowledge Graph contains 800 billion facts about 8 billion entities, and entity confidence is a ranking input that happens before content quality evaluation in RAG pipelines. In other words, if AI doesn’t know who you are, your content never gets evaluated in the first place.

Platforms: All platforms benefit from strong entity signals. ChatGPT relies most heavily on Wikipedia directly. Google AI Overviews use Knowledge Graph data extensively. Every platform benefits from entity disambiguation via Wikidata and sameAs connections.

Implementation: Audit your Wikipedia article for accuracy, neutrality, and citation quality. Do not edit your own page directly. That’s a conflict of interest violation that’ll get you reverted. Make sure your Wikidata entry exists with accurate structured data. Implement Organization schema with @id and at minimum three sameAs links to Wikipedia, Wikidata, LinkedIn, and Crunchbase. Link Person schema for your key authors to their Wikipedia entries, LinkedIn profiles, and institutional pages. If your brand doesn’t have a Wikipedia article yet, build qualifying media coverage first, then bring in a qualified Wikipedia editor to draft one properly.
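A minimal sketch of that Organization markup, with placeholder URLs and a placeholder Wikidata ID; swap in your real profiles:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/#organization",
  "name": "Example Manufacturing Co.",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Manufacturing_Co.",
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/example-manufacturing",
    "https://www.crunchbase.com/organization/example-manufacturing"
  ]
}
</script>
```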

 

8. AI crawlers cannot render JavaScript, making server-side rendering a binary requirement

This one isn’t optional. It’s a threshold.

Vercel and MERJ tracked over 500 million GPTBot fetches and found zero evidence of JavaScript execution. Even when GPTBot downloads JavaScript files (11.5% of the time), it doesn’t run them. Same story for ClaudeBot, PerplexityBot, Meta’s ExternalAgent, and Bytespider. Only Googlebot renders JavaScript using headless Chrome in a two-phase indexing system. Bing has partial JavaScript support. Everyone else sees static HTML only.

Glenn Gabe published a case study showing that client-side rendered content was completely invisible to ChatGPT, Perplexity, and Claude. If your site is built on React, Vue, or Angular with client-side rendering, the AI visibility impact is total. Your content literally doesn’t exist in the AI crawlers’ view of the web.

Separately, 79% of top news sites block at least one AI training bot (BuzzStream/Hostinger), but a growing number are adopting a split strategy. Block training crawlers (GPTBot, ClaudeBot) while allowing search crawlers (OAI-SearchBot, Claude-SearchBot, PerplexityBot). This distinction matters. OpenAI updated its crawler architecture in December 2025 so that OAI-SearchBot and GPTBot now share information to avoid duplicate crawling. Blocking GPTBot only affects future training runs. Anything they’ve already ingested stays in the model.

Platforms: Every AI platform’s crawler lacks JavaScript rendering capability. Universal.

Implementation: Audit your rendering architecture today. If you’re on client-side rendering, migrate critical content to server-side rendering (Next.js, Nuxt.js) or static site generation. Quick verification: disable JavaScript in your browser and view your pages. What you see is what AI crawlers see. If your page is blank, you have a problem. Configure robots.txt to allow AI search crawlers (OAI-SearchBot, Claude-SearchBot, PerplexityBot) while blocking training crawlers if you’d rather not feed model training. Monitor AI crawler activity through server logs, because there’s no Search Console equivalent for AI bots yet.
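A minimal robots.txt sketch of that split strategy. The user-agent tokens below are the ones named in this section; verify them against each vendor’s current documentation before shipping:

```
# Allow AI search crawlers (these power live answers and citations)
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Block AI training crawlers (affects future model training only)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```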

 

9. Schema markup helps, but the evidence is more nuanced than vendors claim

The evidence on structured data for GEO is genuinely mixed, which makes this one of the most important areas to separate signal from noise.

Google’s Search team confirmed in April 2025 that structured data gives an advantage in search results. Microsoft’s Fabrice Canel confirmed in March 2025 that schema markup helps Copilot’s LLMs understand content. Semrush’s 304,805-URL study found a +22% citation lift associated with structured data, the fifth-strongest predictor they measured. All positive.

But Search Atlas’s study of 5.5 million responses across Perplexity, Gemini, and OpenAI found that schema markup does not affect LLM citation frequency. None. The contradiction likely reflects a real distinction between Google’s systems (which clearly use structured data) and standalone LLM platforms (which mostly rely on raw text extraction and semantic similarity).

The most impactful schema types for GEO are Organization (with sameAs, foundingDate, @id), Person/Author (with jobTitle, knowsAbout, worksFor), FAQPage, Article, and HowTo. The sameAs property has the strongest individual evidence behind it (Schema App’s 46% impressions / 42% clicks improvement). FAQPage schema nearly doubles ChatGPT citation chances per SE Ranking’s data. Speakable schema, originally designed for voice assistants, is being floated as a potential AI extraction priority signal. Evidence remains thin.

Platforms: Google AI Overviews and Bing Copilot (confirmed benefit). ChatGPT, Perplexity, Claude, Gemini (no confirmed direct benefit from schema; indirect benefit through better entity disambiguation).

Implementation: Implement Organization, Author/Person, Article, and FAQPage schema as a baseline. Add sameAs properties linking to every authoritative external profile you have. Use the @graph technique to organize multiple related entities. But don’t expect schema alone to drive AI citations on non-Google platforms. Treat it as a foundation for entity clarity, not a citation hack.
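Here’s a compact sketch of the @graph technique: multiple entities in one block, tied together by @id references. All URLs, names, and the Q&A text are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      "name": "Example Manufacturing Co."
    },
    {
      "@type": "FAQPage",
      "@id": "https://www.example.com/faq/#faqpage",
      "publisher": { "@id": "https://www.example.com/#organization" },
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What tolerances can CNC machining hold?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Standard CNC tolerances are around ±0.005 in; precision work can reach ±0.0001 in."
          }
        }
      ]
    }
  ]
}
</script>
```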

 

10. Topical authority predicts AI citation better than domain authority

Wellows’ research found that topical authority (measured as the breadth of keywords a domain ranks for within a topic) correlates with AI citation at r=0.41, making it the strongest individual predictor they measured. Domain Authority, the metric our industry has obsessed over for a decade, explains less than 4% of citation variance (r²=0.032).

Think about that for a second. A decade of DA chasing, and it explains less than four percent of the variance in the outcome we’re now optimizing for.

This aligns with how RAG systems actually work. They break queries into sub-queries (fan-out) and search for the best-matching content across each sub-question. Sites with comprehensive topic coverage give the AI more citation opportunities across those fan-out queries.

Kevin Indig’s research quantified the fan-out dimension: 89.6% of ChatGPT queries generate two or more follow-up searches, and 32.9% of cited pages appeared only in fan-out query SERPs. Meaning a third of the cited content would never have been found through traditional keyword targeting alone. Marie Haynes was among the first to document Google’s query fan-out mechanism, noting in March 2025 that queries have “ultimately turned into conversations” under the new architecture.

Surfer SEO’s study of 10,000 keywords gave us the most direct fan-out evidence: pages ranking for fan-out sub-queries are 161% more likely to be cited in AI Overviews. Fifty-one percent of all AIO citations go to pages ranking for both the main query and at least one fan-out query. Under 20% of citations go to pages ranking only for the main query. This makes comprehensive topic coverage (the hub-and-spoke content cluster model) a structural advantage, not just a content marketing preference.

Platforms: Google AI Overviews and AI Mode (strongest, due to explicit fan-out architecture). ChatGPT (3.5x citation lift for Google #1 pages per Indig, with 32.9% from fan-out only). All platforms reward topic comprehensiveness.

Implementation: Build content clusters with a comprehensive pillar page (2,500-5,000 words) and 15-20 supporting spoke pages covering specific subtopics. Use bidirectional internal linking between hub and spokes. Target the fan-out queries that AI systems generate. Tools like iPullRank’s Qforia (Gemini-powered) can simulate query fan-outs for you. Don’t just optimize for primary keywords. Make sure you have content answering the second, third, and fourth questions a user might ask after the first one.
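If you want a rough in-house approximation of fan-out before reaching for a tool, the sketch below asks a general-purpose model to enumerate sub-queries. The model name and prompt are my assumptions, and real platforms’ fan-out logic is proprietary, so treat the output as brainstorming for spoke pages, not ground truth:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def simulate_fan_out(query: str, n: int = 8) -> list[str]:
    """Ask a chat model for plausible sub-queries a fan-out system might generate."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{
            "role": "user",
            "content": (
                f"A user searched: '{query}'. List {n} distinct follow-up "
                "sub-queries a search engine might decompose this into, one per line."
            ),
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip("-0123456789. ").strip() for line in lines if line.strip()]

for sub in simulate_fan_out("best CNC machining services for aerospace"):
    print(sub)  # each sub-query is a candidate spoke page
```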

 

11. Original research and proprietary data generate outsized citation returns

Yext’s Q4 2025 analysis of 17.2 million AI citations found data-rich websites earn 4.31x more citation occurrences per URL than directory listings. The GEO paper’s finding that statistics addition is the number-one optimization tactic (+41%) gives this pattern its peer-reviewed foundation.

Google’s Information Gain patent describes the extra value a document provides beyond existing coverage. When ten sources say the same thing, AI picks the one that adds something new. Original research naturally contains the three elements AI rewards most: novel statistics, citable methodology, and quotable expert findings.

Yext’s analysis also found that 86% of AI citations come from brand-managed sources (44% first-party websites, 42% listings). This tells you that while earned media drives mentions, first-party content with original data drives the actual URL citations. Exploding Topics is a concrete case: their original research on AI trust gaps was cited three times by ChatGPT in the first three headings of responses about AI Overviews. Despite only 4% direct traffic from AI chatbots, actual AI citations were estimated at 10x higher than their measurable referrals.

You don’t need to run a study of 17 million citations to benefit from this. Benchmark 50 competitors. Analyze 100 of your own customer interactions. Survey 200 people in your industry and publish the results. Any of that creates unique, citation-worthy data that your competitors don’t have.

Platforms: All platforms reward original data. The effect is strongest on ChatGPT and Perplexity, which actively seek diverse sources beyond the obvious top-ranking pages.

Implementation: Publish original research as structured HTML on your domain with clear sections: what was studied, sample size, key findings, methodology. Lead with findings (BLUF: bottom line up front), not methodology. Add schema markup with specific data attributes. Create a dedicated research or data section on your site. Run annual or quarterly industry surveys and publish the results with downloadable datasets.
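A skeleton of what that page structure might look like; every figure in it is invented purely for illustration:

```html
<article>
  <h1>2026 Widget Reliability Survey: Key Findings</h1>
  <!-- BLUF: findings first, methodology last -->
  <h2>Key findings</h2>
  <ul>
    <li>62% of 200 surveyed engineers replaced a widget within 18 months.</li>
    <li>Mean time to failure was 2.3 years across 487 tracked units.</li>
  </ul>
  <h2>What we studied</h2>
  <p>200 practicing engineers across five verticals, surveyed January 2026.</p>
  <h2>Methodology</h2>
  <p>Online survey, 14 questions, anonymized; full dataset downloadable below.</p>
</article>
```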

 

12. E-E-A-T signals are measurable gatekeepers for AI citation eligibility

Semrush’s 304,805-URL study ranked E-E-A-T signals as the second-strongest predictor of AI citation at +31%, behind only clarity and summarization. BrightEdge reports author credentials carry approximately 16% weight in AI citation decisions, up from 8% in 2024. And Google’s Liz Reid, Head of Search, has stated explicitly that AI systems prioritize content showing genuine first-hand experience over surface-level AI-generated material.

The author entity dimension is getting more quantifiable by the month. AI systems use entity resolution to connect professional profiles across platforms. SE Ranking’s data shows domains with strong social proof profiles have 3-4x higher AI citation rates. ZipTie.dev found that adding author credentials alone improved citation rates from 28% to 43% on 15 articles over four weeks. Modest but measurable, from a single variable.

If you’re publishing content without a named, verifiable author, AI has fewer signals to trust you. That’s the whole game right there.

Platforms: Google AI Overviews (strongest. Quality Raters now explicitly evaluate AI Overviews for accuracy). ChatGPT and Perplexity evaluate author signals indirectly through the authority of cited sources. All platforms benefit from clear expertise signals.

Implementation: Build dedicated author pages with credentials, publications, and external validation. Implement Person schema with jobTitle, worksFor, knowsAbout, and sameAs linking to LinkedIn, institutional pages, and any Wikipedia entries. Make sure named, verifiable experts are authoring content in YMYL categories. Include first-person experience markers, proprietary observations, and demonstrate expertise through specific, actionable detail. Not generic overview content.
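A minimal Person schema sketch for an author page (names, titles, and URLs are placeholders); note the worksFor reference pointing back at the Organization @id from tactic 7:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example.com/authors/jane-doe/#person",
  "name": "Jane Doe",
  "jobTitle": "Principal Manufacturing Engineer",
  "worksFor": { "@id": "https://www.example.com/#organization" },
  "knowsAbout": ["CNC machining", "ISO 9001 compliance"],
  "sameAs": [
    "https://www.linkedin.com/in/janedoe",
    "https://scholar.google.com/citations?user=XXXXXXXX"
  ]
}
</script>
```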

 

13. Google AI Overviews now pull 62% of citations from outside the top 10

The relationship between organic Google rankings and AI Overview citations has weakened fast.

Ahrefs’ March 2026 study of 863,000 keyword SERPs and 4 million AI Overview URLs found only 37.9% of cited URLs ranked in the first 10 organic result blocks, down from approximately 76% in July 2025. That’s a massive shift in eight months. The remaining citations are split almost evenly between positions 11-100 (31.2%) and pages beyond position 100 (31.0%).

BrightEdge’s 16-month tracking study tells the same story. Overall citation-organic overlap grew from 32.3% to 54.5%, but only 16.7% of citations come specifically from top-10 results. The “sweet spot” is positions 21-100. Pages with enough authority to be indexed and trusted but that don’t compete for traditional top-10 rankings. This is a fundamental shift from “rank #1 to win” to “be comprehensive and authoritative to win.”

A few things are driving this. Google’s Gemini 3 upgrade (January 2026) replaced approximately 42% of previously cited domains and delivered about 32% more source URLs per AI Overview response. The query fan-out mechanism, splitting one query into 8-16 sub-queries, naturally diversifies sources beyond the top-10 for any single query. AI Overviews now show up on 48% of tracked queries (BrightEdge, February 2026), with YMYL industries at the highest penetration: Healthcare 88%, Education 83%, B2B Technology 82%.

The CTR impact is severe. Seer Interactive’s study (3,119 informational queries across 42 organizations, 25.1 million organic impressions) found organic CTR dropped from 1.76% to 0.61%, a 65% decline, for queries with AI Overviews. Even queries without AI Overviews saw a 41% organic CTR decline (from 2.72% to 1.62%). But here’s the flip side: brands cited within AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than non-cited brands. Being in the AI Overview is now the primary CTR preservation strategy. Not being cited is the real penalty.

Platforms: Google AI Overviews and AI Mode specifically. Google AI Mode is even more extreme: 93% zero-click rate (Semrush), with 60%+ of cited domains and 80% of cited URLs changing between runs.

Implementation: Stop using top-10 rankings as a proxy for AI visibility. Use tools like Surfer SEO’s AI Tracker, Otterly, or Ahrefs Brand Radar to directly measure AI Overview citation rates. Optimize for fan-out sub-queries by building comprehensive topic clusters. Target passage-level optimization: self-contained paragraphs of 134-167 words that make sense when extracted alone. Accept that AI Overview citation is probabilistic, not deterministic. Track trends over 30+ day windows, not individual query results. A single query result tells you almost nothing.

 

14. Platform citation divergence means the multi-platform strategy is non-negotiable

One of the most important findings across all GEO research is just how little overlap there is between platforms. Only 11% of domains are cited by both ChatGPT and Perplexity (2025 analysis). Only 12% of URLs cited by ChatGPT, Perplexity, and Copilot rank in Google’s top 10 (Ahrefs, August 2025). Search Atlas’s study of 5.5 million responses found all model pairs show low domain overlap, with OpenAI and Perplexity showing the lowest at approximately 5-12% median.

Translation: these platforms are not drawing from the same pool of sources.

Each one has a distinct sourcing philosophy:

| Platform | Primary Citation Sources | Freshness Bias | JS Rendering | Key Optimization Lever |
| --- | --- | --- | --- | --- |
| ChatGPT | Wikipedia, Forbes, Reuters, Reddit | Strong (76% from last 30 days) | No | Authority + earned media |
| Perplexity | Reddit, YouTube, Gartner, Yelp | Very strong (50% current year) | No | UGC + community presence |
| Google AI Overviews | YouTube, Wikipedia, Reddit, Quora | Moderate | Yes (Googlebot) | Fan-out queries + video |
| Google AI Mode | Wikipedia, LinkedIn, YouTube | Moderate | Yes (Googlebot) | Topical authority + brand |
| Copilot | Forbes, Gartner, LinkedIn | Unknown | No | Bing SEO + IndexNow |
| Claude | Brave Search index | Unknown | No | General web authority |
| Gemini | YouTube, LinkedIn, Reddit, Gartner | Balanced | Yes (Google infra) | Multi-modal content |

ChatGPT favors encyclopedic, authoritative sources. Wikipedia dominates at 7.8% of citations, followed by established media. Perplexity is UGC-driven. Reddit leads at 6.6%, with YouTube, Gartner, and Yelp following. Google AI Overviews lean on its own organic index heavily, with YouTube, Wikipedia, and Reddit as top external sources. Microsoft Copilot has a strong Forbes preference at 2.1 million citations, way higher than other platforms.

Content format preferences also diverge. Profound’s analysis of 177 million sources found comparative listicles account for 32.5% of all AI citations. Almost a third, dominating across every platform. Blogs and opinion pieces account for 9.91%, commercial and store pages 4.73%, and video content just 0.95% (despite YouTube’s citation dominance, because YouTube transcripts get cited as text, not as video).

Conversion quality diverges just as sharply from traditional organic. AI search visitors convert at 23x the rate of traditional organic visitors (Ahrefs, June 2025). Half a percent of traffic generated 12.1% of signups. Semrush values AI search visitors at 4.4x traditional organic. Adobe found AI-referred visitors show 23% lower bounce rates, 41% longer time on site, and 12% more pages per visit.

Implementation: Track visibility independently across ChatGPT, Google AI Overviews, and Perplexity at a minimum. They barely share citations. Build platform-specific strategies: earned media and Wikipedia presence for ChatGPT, Reddit and community engagement for Perplexity, comprehensive on-site content and YouTube for Google AI Overviews, Bing SEO and IndexNow for Copilot. Measure success by aggregate brand visibility across platforms, not by individual platform performance.

 

15. Page speed directly correlates with ChatGPT citation rates

SE Ranking’s study of 129,000 domains and 216,524 pages produced one of the clearest technical signals out there. Pages with First Contentful Paint under 0.4 seconds averaged 6.7 ChatGPT citations, and citation counts fell off sharply as load times rose: pages with FCP under 0.4s are 3x more likely to be cited than those with FCP above 1.13s.

The mechanism is simple. AI crawlers operate with tighter timeouts than Googlebot. Heavy scripts exhaust the crawl budget fast. When a page takes too long to serve its HTML response, AI crawlers move on to the next candidate. Since AI crawlers don’t render JavaScript anyway, any delay from JS frameworks is pure waste. The content they’d produce never gets seen.

Platforms: ChatGPT (strongest measured signal); likely similar patterns on every platform whose crawlers operate under tight timeouts.

Implementation: Optimize server response times, not just client-side performance. Minimize time-to-first-byte. Serve clean HTML without render-blocking resources. For AI crawler access specifically, make sure critical content is in the initial HTML response. Not loaded asynchronously.
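SE Ranking measured First Contentful Paint, but for crawler-facing checks, time to response headers is the more direct proxy. A rough Python sketch (the URLs are placeholders, and requests’ elapsed time is only an approximation of true TTFB):

```python
import requests

# Placeholder URLs -- swap in your own high-priority pages
urls = ["https://www.example.com/", "https://www.example.com/pricing/"]

for url in urls:
    response = requests.get(url, timeout=10)
    ttfb = response.elapsed.total_seconds()  # time until response headers arrived
    # 0.4s borrows SE Ranking's FCP figure as a rough yardstick, not a hard crawler limit
    verdict = "OK" if ttfb < 0.4 else "slow for timeout-constrained AI crawlers"
    print(f"{url}: {ttfb:.2f}s  ({verdict})")
```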

This is part of why our SEO process starts with technical SEO before anything else. You cannot content-optimize your way around a broken foundation.

 

The traffic impact reality check: AI search is small but disproportionately valuable

Before you restructure your whole marketing budget around GEO, a reality check is in order.

Conductor’s 2026 benchmarks report (3.3 billion sessions across 13,000+ domains) found AI referral traffic averages only 1.08% of total website traffic, with 87.4% of that coming from ChatGPT. Glenn Gabe puts it bluntly: AI search currently drives less than 1% of traffic to most sites. SparkToro’s data shows AI tool usage tripled from 0.24% to 0.64% of US web usage (April 2024 to June 2025). Growing fast, but still tiny relative to Google.

Here’s where it gets interesting. The quality premium is extreme. AI search visitors convert at 23x the rate of traditional organic (Ahrefs), spend 68% more time on site (SE Ranking), and show 23% lower bounce rates (Adobe). Brands cited in AI Overviews see 35% higher organic CTR (Seer Interactive). Do the conversion math, and even at 1% of traffic volume, AI referrals may represent 5-10% of actual business value.

ChatGPT referrals grew 52% year-over-year (September-November 2025). Gemini referrals surged 388%. Adobe tracked a 10x increase in AI-driven referral traffic from July 2024 to February 2025. The trajectory is obvious even if the absolute numbers stay modest.

Let me put real math on this for a B2B scenario. Say your average customer value is $75,000 over three years, and you’re generating 40 qualified leads per month from organic search at an industry-standard 2% conversion to sale. That’s 9-10 new customers per year. If GEO optimization adds even a 5% revenue contribution at current AI search volumes (modest, given the 23x conversion premium), you’re looking at roughly $270,000 to $450,000 in annual revenue from what’s essentially content and technical optimization. At projected AI search growth rates, that becomes $1M+ within 24 months.

Not marketing theater. Just the math of being visible in channels your competitors haven’t figured out yet.

 

Section 2: Speculative and Experimental Tactics

Tactics in this section are getting discussed, hypothesized, or tested, but they lack the large-scale studies or peer review that would put them in Section 1. Some are directionally promising. Some are unproven at scale. A few carry real risk. I’ll be straight with you on which is which.

 

1. The llms.txt file has massive adoption but zero proven impact

The llms.txt standard was proposed by Jeremy Howard, co-founder of Answer.AI, in September 2024. It’s a plain Markdown file hosted at /llms.txt that provides a structured summary of a site’s most important content for LLMs. Kind of a “recommended reading list for AI.” BuiltWith tracking shows 844,000+ websites have implemented it. SE Ranking’s study of 300,000 domains found 10.13% adoption. Notable adopters include Anthropic, Cloudflare, Stripe, and Vercel.

Despite all that adoption, the evidence of impact is basically nonexistent. SE Ranking’s 300,000-domain study found no correlation between AI citations and llms.txt. Both statistical analysis and machine learning showed zero effect. Search Engine Land found 8 out of 9 sites saw no measurable change in traffic after implementation. No major AI platform has officially confirmed using llms.txt. Google has explicitly rejected the standard, comparing it to the discredited keywords meta tag. Google did, however, include llms.txt in its Agent2Agent (A2A) protocol, suggesting some experimental interest in AI-agent-specific contexts.

John Mueller has flat-out said no AI crawlers have claimed they extract information via llms.txt. The one positive data point, Springs Apps reporting a 20% increase in search visibility, is unverified and conflated with other simultaneous changes.

Who advocates: Jeremy Howard (Answer.AI), developer-focused companies (Stripe, Vercel), and some GEO tool vendors pitching it as a “future-proofing” measure.

What makes it speculative: No platform confirms using it. The largest available study shows zero correlation. Implementation is low-risk (1-4 hours, no downside), so there’s no reason not to add one. But calibrate your expectations accordingly. Don’t pay anyone for “llms.txt optimization.”
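If you do add one, the proposed format is plain Markdown: an H1 with the site name, a blockquote summary, then H2 sections of annotated links. A minimal sketch with placeholder URLs:

```markdown
# Example Manufacturing Co.

> Industrial widget manufacturer. Technical specs, compliance data, and
> engineering guides for procurement and engineering teams.

## Documentation

- [Product specifications](https://www.example.com/specs.md): full technical specs
- [Compliance data](https://www.example.com/compliance.md): certifications and test results

## Optional

- [Company history](https://www.example.com/about.md)
```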

 

2. Vector embeddings optimization is theoretically sound but practically unmeasurable

The theory is elegant. RAG systems retrieve content via cosine similarity between query embeddings and document embeddings. Optimize content to be semantically close to target queries, and you should improve retrieval probability. Mike King at iPullRank has championed vector-based content auditing techniques. Lumar’s research found that higher semantic relevance scores correlate with higher search rankings. Content with cosine similarity scores above 0.88 to the query achieves 7.3x higher citation rates (xFunnel analysis).

The practical problem is that content creators cannot directly control how embedding models represent their content. “Optimizing for vector similarity” in practice means writing comprehensive, semantically rich content that thoroughly covers a topic. Which is indistinguishable from good topical authority building. The theoretical framework adds explanatory value for why topical authority works, but it doesn’t produce novel actionable tactics beyond “cover your topic comprehensively and clearly.”
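To see the mechanism (and why it collapses into ordinary relevance), here’s a minimal sketch using an open-source embedding model. No AI search platform discloses its actual embeddings, so this is directional at best, and the passages are invented:

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # open-source stand-in, not any platform's model

query = "best CNC machining tolerances for aerospace parts"
passages = [
    "Typical CNC machining tolerances for aerospace parts range from ±0.005 in to ±0.0001 in.",
    "Our company was founded in 1987 and values craftsmanship above all.",
]

q_emb = model.encode(query, convert_to_tensor=True)
p_embs = model.encode(passages, convert_to_tensor=True)
scores = util.cos_sim(q_emb, p_embs)[0]  # cosine similarity, one score per passage

for passage, score in zip(passages, scores):
    print(f"{float(score):.3f}  {passage[:60]}")
# The specific, on-topic passage scores far higher -- which is just
# topical relevance by another name.
```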

Who advocates: Mike King (iPullRank), Gianluca Fiorelli (semantic SEO practitioner), Lumar’s technical team.

What makes it speculative: No one can measure their content’s embedding similarity to arbitrary queries in real-time across different models. Each LLM uses different embedding architectures. The practical advice is identical to non-vector-based topical authority advice.

 

3. Adversarial content injection targets RAG vulnerabilities with serious risks

Nestaas et al. (2024) studied adversarial methods (Hijack Attacks and Poisoning Attacks) that inject natural-language instructions into web content to influence RAG system outputs. They found RAG systems are susceptible to “context stuffing”: a malicious actor doesn’t need to provide high-quality evidence to sway a model. They just need to dominate the context window. The “Late-Dropper” pattern documented in Reddit analysis involves technically detailed comments posted long after a thread’s active readership has moved on. Invisible to humans but indexed by AI systems.

These techniques sit on the boundary between optimization and manipulation. Deliberately crafting content to exploit RAG retrieval mechanics (like embedding invisible or semi-visible instructions targeting AI systems) carries real risks: content penalties, reputational damage, ethical concerns. As AI platforms improve their adversarial detection (SourceCheckup found 50-90% of citations are already not fully supported by sources), the window for exploitation-based tactics is going to narrow fast.

Who is experimenting: Academic researchers, some growth hackers in private communities. No reputable SEO practitioners advocate this publicly.

Risks: Permanent brand damage, content penalties, potential legal exposure under emerging AI content manipulation laws.

Don’t do this. I’m including it only because it exists, and I don’t want anyone reading this article to think the “hidden prompt injection” tricks floating around LinkedIn are a legitimate strategy. They’re not.

 

4. Speakable schema as an AI extraction priority signal

Google’s Speakable schema was originally designed for Google Assistant voice text-to-speech playback. Some practitioners are now pitching it as an AI extraction priority signal. The logic: marking specific content sections as “speakable” tells AI systems which content is most suitable for extraction and citation. Google Search Central documentation confirms support (in beta). One practitioner reported a 127% increase in voice search referrals after implementation on the top 20 pages, though this is an individual, unverified claim.

Who advocates: Some technical SEO practitioners, GEO tool vendors.

What makes it speculative: No AI platform has confirmed using the Speakable schema for citation prioritization. The original purpose was voice search. The one positive data point is unverified. Worth testing as a micro-optimization, not worth significant investment.

 

5. AI “shadow websites” and edge-delivered AI-optimized content

Scrunch AI’s Agent Experience Platform (AXP) creates an AI-optimized “shadow website” that only LLM crawlers can see, deployed at the CDN edge via Cloudflare or Vercel integration. The concept: since AI crawlers can’t render JavaScript and have different content parsing needs, serve them a purpose-built version of your content. Adobe’s LLM Optimizer takes a similar approach.

Who advocates: Scrunch AI, Adobe.

What makes it speculative: Scrunch’s AXP has been on a waitlist with no public timeline for general availability. Serving different content to bots than to users is classic cloaking. That’s a well-established spam violation in traditional SEO. Whether AI platforms will penalize this approach is still unknown. The concept addresses a real problem (AI crawlers can’t process modern web content), but the implementation risks are substantial. Google has historically penalized sites that serve different content to Googlebot than to users.

My take: the server-side rendering solution from Section 1 solves the same problem without the cloaking risk. I’d avoid the shadow website approach until Google clarifies its position.

 

6. Paid placement in AI search is real, but still early and separated from organic

ChatGPT ads launched on February 9, 2026. They crossed $100 million annualized revenue within six weeks with 600+ advertisers. Ads appear as clearly labeled “Sponsored” units below response text, available to Free and Go plan users (about 85% of the base). OpenAI is emphasizing “Answer Independence.” Ads do not influence organic responses. CPM rates are approximately $60, comparable to premium streaming and NFL broadcasts. Perplexity offers sponsored follow-up questions at CPMs over $50. Google AI Overview ads now appear in 25.5% of AI Overview SERPs (up from 3% in January 2025).

The critical unknown is whether paid placement will eventually influence organic AI citations. All platforms currently maintain a firewall between ads and organic answers. But as competitive pressure for AI search monetization intensifies, that separation may erode. For now, paid AI search is a legitimate advertising channel, not an optimization tactic for organic AI visibility.

Who’s involved: OpenAI (ChatGPT Ads), Perplexity (Sponsored follow-up questions, Publisher Program), Google (AI Overview ads), Microsoft (Copilot advertising, early stage).

What makes it speculative for organic optimization: No evidence that paid placement influences organic AI answers. The channel is too new for established ROI benchmarks. Platform monetization strategies are evolving rapidly.

 

7. Direct LLM training data inclusion as a long-term visibility strategy

Some practitioners hypothesize that getting content included in LLM training datasets (via Common Crawl, partnerships, or by being a frequently crawled authoritative source) creates “parametric memory” that persists even when RAG doesn’t retrieve your content. The theory: if an LLM learned about your brand during pre-training, it’s more likely to mention you even without real-time search. Dan Petrovic distinguishes between “model memory” (what LLMs know from training) and “grounded search results” (real-time retrieval), arguing that primary bias from training data influences which brands LLMs default to recommending.

Who advocates: Dan Petrovic (DejanSEO), some enterprise GEO strategists.

What makes it speculative: Training data composition is opaque. No one can verify inclusion. Training runs happen infrequently (months to years between updates). The influence of training data versus real-time RAG retrieval is impossible to isolate in practice. Blocking GPTBot and other training crawlers only affects future training runs. Previously ingested content persists. The tactic is fundamentally unfalsifiable, which means you can’t measure progress against it with any certainty.

 

The GEO measurement landscape remains immature and unreliable

This deserves its own section because it explains why you should be skeptical of any “AI visibility score” a vendor quotes to you.

Rand Fishkin’s 2026 experiment (600 volunteers running identical prompts across Claude, ChatGPT, and Google AI) concluded that AI brand visibility tracking is “inherently unreliable” at the individual query level because of non-determinism, caching, personalization, and geographic variation. AI responses produce different results with tiny prompt variations. Christopher Penn has gone so far as to use the term “snake oil” for much of the GEO measurement advice floating around.

The tools space has exploded. Profound ($58.5 million in funding from Sequoia and Kleiner Perkins). Otterly (Gartner Cool Vendor 2025). Peec AI ($6.13 million in funding). Goodie, Scrunch, and dozens of others. Traditional SEO platforms (Semrush, Ahrefs, Surfer SEO, Conductor, BrightEdge) have all tacked on GEO tracking features. Conductor positions itself as the “only end-to-end enterprise AEO platform.” Semrush’s AI Visibility Toolkit launched as a $99/month add-on.

The core challenge: 40-60% of cited domains change monthly (citation drift), which makes point-in-time measurements misleading. SparkToro found less than a 1 in 100 chance that ChatGPT or Google AI will produce the same brand list in any two responses for the same query. There’s no equivalent of Google Search Console for AI platforms. Google Search Console includes some AI Mode data as of mid-2025, but doesn’t break it out separately.

Practical guidance: Use GEO monitoring tools to track directional trends over 30+ day windows across large query sets. Don’t optimize based on individual query results. Focus on aggregate metrics: share of voice across hundreds of relevant prompts, citation frequency trends, sentiment patterns. And remember Jeremy Moser’s line, which is the most honest benchmark for the whole GEO services industry: “If a GEO service does not openly tell you that success in AI visibility is 80 percent good fundamental SEO, they are selling you snake oil.”

That one deserves to be printed on a t-shirt.
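One more practical note on what “directional trends over 30+ day windows” looks like in code. The toy sketch below assumes your monitoring tool exports which brands each prompt run cited; the data here is fabricated by construction:

```python
# Toy share-of-voice aggregation; daily_runs stands in for a monitoring tool's export
daily_runs = {
    "2026-03-01": [["acme", "globex"], ["globex"], ["acme", "initech"]],
    "2026-03-02": [["acme"], ["acme", "globex"], ["initech"]],
}

def share_of_voice(runs: list[list[str]], brand: str) -> float:
    """Fraction of prompt runs that mention the brand at all."""
    return sum(brand in cited for cited in runs) / len(runs)

for day, runs in sorted(daily_runs.items()):
    print(day, f"acme share of voice: {share_of_voice(runs, 'acme'):.0%}")
# Chart this across 30+ days and hundreds of prompts; ignore
# run-to-run noise on any individual query.
```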

 

Where the thought leaders actually agree, and where they don’t

The industry debate around GEO has crystallized into recognizable camps.

Mike King (iPullRank) is the most forceful voice arguing that GEO is a genuinely new discipline (“AI search is a completely different surface”) and has built the Relevance Engineering framework and SEO Week conference around that thesis. Eli Schwartz counters that AI Overviews and AI Mode represent “a UI change, not a channel metamorphosis,” and calls GEO jargon terminology that “signals expertise without delivering it.” Rand Fishkin warns against overinvestment. He’s called AI visibility tracking “entirely baloney” at the individual query level and predicted “peak employment” in SEO.

Lily Ray sits in the pragmatic center. Her MozCon 2025 session “GEO, AEO, LLMO: Separating Fact from Fiction” is widely cited as the industry’s definitive treatment. She warns against newcomers selling GEO hype (“There’s a whole lot of people who have entered the space in the last year or so with very little experience, making lofty promises”) while acknowledging that an existing SEO strategy alone isn’t sufficient. Marie Haynes takes a similar middle position, treating GEO and SEO as “different things” with significant overlap.

Dan Petrovic has been the most aggressive skeptic. He called the original GEO academic paper one with “no peer review, credibility or value” (it was subsequently published at KDD, a top-tier venue) and has been exposing companies paying for promotional GEO social posts. A Search Engine Land study of 75 SEO thought leaders found fewer than one-third maintained consistent usage and sentiment about AI search terms throughout 2025, which tells you how much uncertainty exists even among experts.

The areas of strongest consensus are revealing: traditional SEO fundamentals remain essential for AI visibility; E-E-A-T signals matter more than ever; content structure (chunks, lists, tables) helps AI extraction; multi-platform presence matters; brand authority outweighs keyword targeting; the field needs more rigorous measurement and less hype.

The areas of strongest disagreement center on whether GEO merits new budgets and organizational structures, how much to invest in AI visibility tracking, and how quickly AI search will displace traditional organic traffic.

Pick your side carefully. The pragmatists are usually right when an industry is this early.

 

How we applied these findings for a client and what happened

Everything above is theory until it produces results. Here’s what evidence-based GEO actually looks like in practice.

The client was a national B2B manufacturer competing across five verticals against companies with 10+ year head starts and enterprise marketing budgets. DR 21 at kickoff. Invisible in AI search. The kind of situation where conventional wisdom says, “focus on one niche, compete in 3-5 years.” We took a different approach.

You can read the full case study for every detail, but here’s how the research mapped directly to what we did:

Tactics 1, 11, 12 (Statistics, original research, E-E-A-T): We built content around technical specifications, compliance data, and expert engineering perspectives. No marketing fluff. Information density and verifiable claims, structured for extraction. Result: 283 AI citations across platforms in seven months, #1 ranking in Google AI Overview for priority queries.

Tactic 3 (Brand mentions): We distributed over 100,000 content pieces across third-party platforms. The earned media strategy, the research says, is 6.5x more effective than own-site-only content. Result: 587.6% branded search growth.

Tactic 10 (Topical authority): Comprehensive pillar-and-spoke content clusters targeting five distinct stakeholder roles (procurement, engineering, compliance, operations, executive). This mapped directly to the query fan-out architecture Surfer SEO documented. Result: citations for fan-out queries the client would never have ranked for under traditional keyword targeting.

Tactics 8, 15 (Server-side rendering, page speed): Technical SEO foundation resolved before content optimization started. Same approach that drove our Boston SEO case study from algorithm penalty to 45.2% traffic growth.

The lower-ranked site benefit that the Princeton study documented (+115% for position 5+ sites) wasn’t theoretical for this client. Their DR 21 domain outranked FDA.gov (DR 92) in AI search results, ahead of Fortune 500 competitors with 20x the budget. The evidence said it was possible. The results proved it.

End-of-engagement numbers: DR 21 to 35 in seven months, 283 AI citations earned, #1 in Google AI Overview for priority queries, leads up 60%, and higher lead quality. Technical buyers with specific purchase intent. Not window shoppers.

 

The evidence hierarchy: every GEO tactic ranked by research quality

| Tier | Tactic | Evidence strength | Primary source |
| --- | --- | --- | --- |
| 1 – Strong evidence | Statistics, quotations, citations | Peer-reviewed (KDD 2024) + SE Ranking 129K domains | Princeton/GT/IIT/Allen AI |
| 1 – Strong evidence | Answer-first structure | Peer-reviewed + Kevin Indig 1.2M citations | Stanford TACL 2024 |
| 1 – Strong evidence | Brand mentions across third-party sources | Ahrefs 75K brands + AirOps + Stacker | Ahrefs |
| 1 – Strong evidence | Content freshness (substantive updates) | Ahrefs 17M citations + platform-specific data | Ahrefs |
| 1 – Strong evidence | YouTube presence | Surfer SEO 36M AIOs + Ahrefs 75K brands | Surfer SEO |
| 1 – Strong evidence | Topical authority / fan-out coverage | Wellows r=0.41 + Surfer 10K keywords | Wellows |
| 1 – Strong evidence | Original research & proprietary data | Yext 17.2M citations | Yext |
| 1 – Strong evidence | E-E-A-T signals | Semrush 304K URLs + BrightEdge | Semrush |
| 1 – Strong evidence | Server-side rendering | Vercel/MERJ 500M GPTBot fetches | Vercel/MERJ |
| 1 – Strong evidence | Reddit / community presence (genuine only) | Profound 4B+ citations | Profound |
| 1 – Strong evidence | Entity optimization (Wikipedia, sameAs) | Schema App controlled case study | Schema App |
| 1 – Strong evidence | Multi-platform strategy | Search Atlas 5.5M responses | Search Atlas |
| 1 – Strong evidence | AI Overview optimization beyond top-10 | Ahrefs 863K SERPs + BrightEdge 16-month | Ahrefs |
| 1 – Strong evidence | Page speed / FCP | SE Ranking 129K domains | SE Ranking |
| 1 – Strong evidence | Schema markup (Google/Bing only) | Semrush 304K URLs (+22% lift) | Semrush |
| 2 – Speculative | Speakable schema | One unverified +127% claim | Individual practitioner |
| 2 – Speculative | llms.txt files | SE Ranking 300K domains: zero correlation | SE Ranking |
| 2 – Speculative | Vector embeddings optimization | xFunnel 7.3x citation at cosine >0.88 | xFunnel (limited) |
| 2 – Speculative | Direct LLM training data inclusion | Dan Petrovic (unfalsifiable) | DejanSEO |
| 2 – Speculative | AI shadow websites / edge delivery | No public data; cloaking concerns | Scrunch AI, Adobe |
| 2 – Experimental (risky) | Paid AI search placement (for organic) | No organic-influence evidence yet | Platform-reported |
| 2 – Risky (avoid) | Adversarial content injection | Academic only; legal + reputational risk | Nestaas et al. 2024 |
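
One Tier 1 row is worth making concrete: entity optimization via sameAs is just explicit JSON-LD linking your organization to its profiles on Wikipedia, Wikidata, and similar entity hubs, which is the pattern the Schema App case study tested. Here's a minimal sketch of that markup, generated with Python; every name and URL below is a placeholder, not data from the study:

```python
# Minimal sketch: Organization JSON-LD with sameAs entity links.
# All names and URLs are placeholders -- point them at your real profiles.
import json

entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Manufacturing Co.",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Manufacturing_Co.",
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.linkedin.com/company/example-manufacturing/",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag in <head>.
print(json.dumps(entity, indent=2))
```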

If you’re starting a GEO program with a constrained budget, prioritize Tier 1 tactics in this order: server-side rendering (if you’re not already there), statistics/citations/quotations in existing content, brand mention acquisition, YouTube companion content, and topical authority clusters. Skip everything in Tier 2 until Tier 1 is solid. Never touch the risky ones.
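
For the statistics pass specifically, a crude audit tells you which existing pages to retrofit first. Here's a minimal sketch, assuming you already have page copy as plain text; the regex is a rough heuristic of mine, not the counting methodology of any study cited above:

```python
# Minimal sketch: rough count of numeric "data points" in page copy,
# to flag data-light pages for the statistics-addition tactic.
# Heuristic only: catches percentages, multipliers (2x), and plain numbers.
import re

STAT_PATTERN = re.compile(r"\b\d[\d,.]*\s*(?:%|percent|x\b)|\b\d[\d,.]*\b")

def count_data_points(text: str) -> int:
    """Count numeric tokens that plausibly function as statistics."""
    return len(STAT_PATTERN.findall(text))

sample = (
    "Pages with expert quotes averaged 4.1 citations versus 2.4 without, "
    "and data-rich pages earned nearly 2x the AI citations overall."
)
print(count_data_points(sample))  # -> 3 numeric tokens in this sample
```

Run it across your sitemap and sort ascending: the data-light pages at the top are your retrofit queue.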

 

What does this all mean?

The evidence is clear on what works: statistics-heavy, well-structured content with expert quotes and inline citations, published by entities with strong brand mention profiles across authoritative third-party sources, kept fresh, technically accessible to AI crawlers, and distributed across YouTube, Reddit, and earned media.

Not exotic new tactics. Aggressive, well-executed extensions of existing SEO and PR fundamentals applied to a new set of citation systems.

What is genuinely new is the measurement challenge, the platform divergence, the fan-out query dimension, and the technical crawler requirements. The agencies that win are going to be the ones that integrate GEO monitoring into existing workflows without treating it as an entirely separate discipline requiring a separate P&L. AI referral traffic is still under 2% for most sites, but it converts at 5-23x the rate of traditional organic. The opportunity cost of ignoring it is growing. The cost of overinvesting in immature tools and unproven tactics is also real.
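
To make that opportunity-cost claim concrete, here's the back-of-envelope arithmetic with illustrative mid-range inputs (2% of sessions, 10x relative conversion; both are picks from the ranges above, not measurements):

```python
# Back-of-envelope: share of total conversions from AI referrals if they
# are 2% of sessions but convert at 10x the baseline rate.
# Both inputs are illustrative, not measured values.
ai_share_of_sessions = 0.02
relative_conversion = 10.0   # AI referral conversion rate vs. baseline

ai = ai_share_of_sessions * relative_conversion
rest = (1 - ai_share_of_sessions) * 1.0
print(f"{ai / (ai + rest):.0%} of conversions")  # -> 17% from 2% of traffic
```

Two percent of traffic carrying roughly a sixth of conversions is not a rounding error.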

Track the trends. Optimize for the evidence. Ignore the hype.

Every search evolution creates the same window. 1990s: directories gave way to search engines, and early websites ranked just by existing. 2000s: Google SEO got competitive, and companies that built organic authority early owned their categories for a decade. 2010s: mobile-first rewarded companies that adapted quickly. By 2014, it was table stakes. 2024-2026: AI search is reshaping how buyers find information, evaluate options, and choose vendors.

Companies that act on evidence while their competitors debate definitions are going to capture a disproportionate share of AI visibility. We’ve watched it happen. A DR 21 manufacturer beat the FDA and Fortune 500 competitors in AI search. Not because of magic. Because the evidence said what worked, and we did those things while everyone else was still writing thought pieces about whether GEO is “a thing.”

That window is open right now. It’s closing fast.

If you want to know whether this window exists in your industry, we’ll tell you in 20 minutes. No sales pitch. Just data and an honest read on where you actually stand.

Book a GEO Discovery Call

 

 

Sources

Peer-Reviewed Academic Research

  1. Aggarwal, P., Murahari, V., Rajpurohit, T., Kalyan, A., Narasimhan, K., & Deshpande, A. (2024). “GEO: Generative Engine Optimization.” Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2024). Princeton University, Georgia Tech, IIT Delhi, Allen Institute for AI. https://arxiv.org/abs/2311.09735
  2. Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2024). “Lost in the Middle: How Language Models Use Long Contexts.” Transactions of the Association for Computational Linguistics (TACL 2024). Stanford University. https://arxiv.org/abs/2307.03172
  3. Chen et al. (2025). “Generative Engine Optimization: How to Dominate AI Search.” arXiv 2509.08919. https://arxiv.org/abs/2509.08919
  4. Nestaas et al. (2024). Adversarial methods research on RAG vulnerabilities (Hijack Attacks and Poisoning Attacks). arXiv. https://arxiv.org/abs/2406.13447
  5. AutoGEO (2025). Validation study of GEO tactics across Gemini, GPT-4o-mini, Claude, and DeepSeek. arXiv. https://arxiv.org/abs/2502.09469

 

Large-Scale Industry Studies

  1. Ahrefs (July 2025). “AI Assistants Prefer to Cite ‘Fresher’ Content (17 Million Citations Analyzed).” https://ahrefs.com/blog/do-ai-assistants-prefer-to-cite-fresh-content/
  2. Ahrefs (August 2025). 75,000-brand AI Overview correlation study. https://ahrefs.com/blog/ai-overview-citations-vs-rankings/
  3. Ahrefs (March 2026). 863,000 keyword SERPs and 4 million AI Overview URLs analysis. https://ahrefs.com/blog/ai-overview-citations-top-10/
  4. Ahrefs (June 2025). AI search visitor conversion analysis. https://ahrefs.com/blog/ai-traffic-conversion/
  5. Ahrefs. ChatGPT’s most-cited pages study. https://ahrefs.com/blog/chatgpts-most-cited-pages/
  6. Semrush. “How Google’s AI Mode Compares to Traditional Search and Other LLMs.” https://www.semrush.com/blog/ai-mode-comparison-study/
  7. Semrush. 304,805-URL analysis of LLM citation predictors. https://www.semrush.com/blog/semrush-ai-overviews-study/
  8. Seer Interactive. “The AI Search Landscape: Beyond the SEO vs GEO Hype.” https://www.seerinteractive.com/insights/study-the-ai-search-landscape-beyond-the-seo-vs-geo-hype
  9. Seer Interactive. AI Overview CTR impact study (3,119 informational queries, 25.1 million impressions). https://www.seerinteractive.com/insights/aio-impact-on-google-ctr-september-2025-update
  10. BrightEdge. 16-month AI Overview tracking study. https://www.brightedge.com/resources/weekly-ai-search-insights/rank-overlap-after-16-months-of-aio
  11. Profound. “AI Platform Citation Patterns” (680 million+ citations analyzed). https://www.tryprofound.com/blog/ai-platform-citation-patterns
  12. Profound. “The Data on Reddit and AI Search.” https://www.tryprofound.com/blog/the-data-on-reddit-and-ai-search
  13. SE Ranking. Study of 129,000 domains and 216,524 pages on AI citation factors. https://seranking.com/blog/ai-citations-research/
  14. SE Ranking. “LLMs.txt: Why Brands Rely On It and Why It Doesn’t Work.” https://seranking.com/blog/llms-txt/
  15. Search Atlas. Study of 5.5 million LLM responses on schema markup impact. https://searchatlas.com/blog/limits-of-schema-markup-for-ai-search/
  16. Search Atlas. “Authority Metrics in the Age of LLMs: Visibility Correlation Analysis” (21,767 domains). https://searchatlas.com/blog/authority-metrics-in-the-age-of-llms-visibility-correlation-analysis/
  17. Previsible. “What Content Do AI Models Cite? 5,000 Prompt Study.” https://previsible.io/seo-education/content-ai-models-cite/
  18. Previsible. “2025 State of AI Discovery.” https://previsible.io/seo-strategy/ai-seo-study-2025/
  19. Hallam Agency. “Brand mentions are now 3X more important than backlinks for AI Search.” https://hallam.agency/blog/brand-mentions-are-now-3x-more-important-than-backlinks-for-ai-search/
  20. AirOps. “The Silent Pipeline Killer: How Stale Content Costs You AI Citations.” https://www.airops.com/report/the-impact-of-stale-content-on-ai-visibility
  21. AirOps. Earned media and brand mention analysis (October 2025). https://www.airops.com/blog/llm-brand-citation-tracking
  22. Surfer SEO. Analysis of 36 million AI Overviews. https://surferseo.com/blog/ai-overviews-study/
  23. Surfer SEO. 10,000-keyword fan-out evidence study. https://surferseo.com/blog/query-fan-out/
  24. Yext (Q4 2025). Analysis of 17.2 million AI citations. https://www.yext.com/blog/ai-citation-research
  25. Conductor (2026). Benchmarks Report (3.3 billion sessions across 13,000+ domains). https://www.conductor.com/research/
  26. Wellows. Topical authority correlation research. https://wellows.com/blog/llm-citation-trends-for-ai-search/
  27. Onely. “LLM-Friendly Content: 12 Tips to Get Cited in AI Answers.” https://www.onely.com/blog/llm-friendly-content/
  28. CXL. AI Overview citation analysis. https://cxl.com/blog/ai-overviews/
  29. Stacker (December 2025). Content distribution and AI citation analysis. https://stacker.com/insights/
  30. The Digital Bloom. “2025 AI Visibility Report: How LLMs Choose What Sources to Mention.” https://thedigitalbloom.com/learn/2025-ai-citation-llm-visibility-report/
  31. xFunnel. Vector embeddings and citation rate analysis. https://www.xfunnel.ai/research
  32. Lumar. Semantic relevance research. https://www.lumar.io/blog/
  33. Schema App. Controlled case study on sameAs entity linking. https://www.schemaapp.com/case-studies/
  34. ZipTie.dev. Independent AI citation studies. https://ziptie.dev/research/
  35. Vercel & MERJ. 500 million GPTBot fetches analysis on JavaScript rendering. https://vercel.com/blog/the-rise-of-the-ai-crawler
  36. BuzzStream / Hostinger. Top news sites AI bot blocking analysis. https://www.buzzstream.com/blog/ai-bot-blocking-study
  37. Glenn Gabe. Case study on client-side rendered content invisibility to AI crawlers. https://www.gsqi.com/marketing-blog/
  38. Pew Research. AI Overview click behavior study. https://www.pewresearch.org/short-reads/2025/
  39. SparkToro. AI brand visibility tracking experiment (600 volunteers, January 2026). https://sparktoro.com/blog/
  40. SourceCheckup. Citation accuracy analysis (50-90% of citations not fully supported). https://sourcecheckup.ai/

 

Independent Research and Practitioner Studies

  1. Kevin Indig (Growth Memo). “The Science of How AI Picks Its Sources” (1.2 million ChatGPT citations analyzed). https://www.growth-memo.com/p/the-science-of-how-ai-picks-its-sources
  2. Kevin Indig (Growth Memo). “The Science of How AI Pays Attention.” https://www.growth-memo.com/p/the-science-of-how-ai-pays-attention
  3. Kevin Indig (Growth Memo). “How Much Can We Influence AI Responses?” https://www.growth-memo.com/p/how-much-can-we-influence-ai-responses
  4. Mike King (iPullRank). Relevance Engineering framework. https://ipullrank.com/relevance-engineering
  5. Marie Haynes. Query fan-out documentation (March 2025). https://www.mariehaynes.com/blog/
  6. Dan Petrovic (DejanSEO). Model memory vs. grounded search analysis. https://dejanmarketing.com/
  7. Lily Ray (MozCon 2025). “GEO, AEO, LLMO: Separating Fact from Fiction.” https://moz.com/mozcon/agenda/
  8. Eli Schwartz. GEO terminology analysis. https://www.elischwartz.co/blog/
  9. Rand Fishkin. AI visibility tracking critique. https://sparktoro.com/blog/
  10. Christopher Penn. GEO measurement commentary. https://www.christopherspenn.com/
  11. Glenn Gabe. AI search traffic analysis. https://www.gsqi.com/marketing-blog/
  12. Gianluca Fiorelli. Semantic SEO practitioner research. https://www.iloveseo.net/
  13. Jeremy Moser (uSERP). “If a GEO service does not openly tell you that success in AI visibility is 80 percent good fundamental SEO, they are selling you snake oil.” Quoted in Digiday. https://digiday.com/media/geo-hype-busted-experts-call-it-more-seo-than-new-discipline/
  14. Jeremy Howard (Answer.AI). llms.txt standard proposal (September 2024). https://llmstxt.org/

 

Industry Publications and News Sources

  1. Search Engine Land. “44% of ChatGPT citations come from the first third of content: Study.” https://searchengineland.com/chatgpt-citations-content-study-469483
  2. Search Engine Land. “Generative engine optimization (GEO): How to win AI mentions.” https://searchengineland.com/what-is-generative-engine-optimization-geo-444418
  3. Search Engine Land. “GEO myths: This article may contain lies.” https://searchengineland.com/geo-myths-lies-467617
  4. Search Engine Land. “Why your content doesn’t appear in AI Overviews (even if it ranks in the top 10).” https://searchengineland.com/why-content-doesnt-appear-in-ai-overviews-473325
  5. Search Engine Land. “How schema markup fits into AI search, without the hype.” https://searchengineland.com/schema-markup-ai-search-no-hype-472339
  6. Search Engine Land. llms.txt site-by-site impact analysis (8 of 9 sites). https://searchengineland.com/llms-txt-impact-study/
  7. Search Engine Land. Study of 75 SEO thought leaders on AI search term consistency. https://searchengineland.com/
  8. Search Engine Journal. “AI Recommendations Change With Nearly Every Query: Sparktoro.” https://www.searchenginejournal.com/ai-recommendations-change-with-nearly-every-query-sparktoro/566242/
  9. Digiday. “Many GEO tactics are not that different from search optimization.” https://digiday.com/media/geo-hype-busted-experts-call-it-more-seo-than-new-discipline/
  10. TechCrunch. Wikipedia AI writing detection coverage. https://techcrunch.com/2025/11/20/the-best-guide-to-spotting-ai-writing-comes-from-wikipedia/
  11. Stan Ventures. “BrightEdge Report: AI Overviews Align With Rankings.” https://www.stanventures.com/news/brightedge-ai-overviews-organic-rankings-4675/

 

Platform Documentation and Official Sources

  1. Google. “AI Features and Your Website” (Search Central documentation). https://developers.google.com/search/docs/appearance/ai-features
  2. Google Search Central. Speakable schema documentation (beta). https://developers.google.com/search/docs/appearance/structured-data/speakable
  3. John Mueller (Google). Statements on llms.txt and date manipulation. https://www.google.com/search?q=John+Mueller+llms.txt
  4. Liz Reid (Google, Head of Search). Statements on AI prioritization of first-hand experience content. https://blog.google/products/search/
  5. OpenAI. ChatGPT Ads launch and “Answer Independence” framework. https://openai.com/business/chatgpt-ads/
  6. Microsoft / Fabrice Canel (SMX Munich, March 2025). Schema markup and Copilot LLM understanding. https://www.bing.com/webmaster/help/
  7. NVIDIA. “What Is Retrieval-Augmented Generation (RAG).” https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/

 

Tools and Platform Analyses

  1. Profound. AEO/GEO platform analysis. https://www.tryprofound.com/
  2. Otterly. AI search monitoring platform. https://otterly.ai/
  3. Peec AI. AI visibility platform. https://peec.ai/
  4. Scrunch AI. Agent Experience Platform (AXP). https://www.scrunchai.com/
  5. Adobe. LLM Optimizer. https://business.adobe.com/products/llm-optimizer.html
  6. iPullRank. Qforia query fan-out simulation tool. https://ipullrank.com/qforia
  7. Surfer SEO. AI Tracker. https://surferseo.com/ai-tracker/
  8. Ahrefs. Brand Radar. https://ahrefs.com/brand-radar
  9. Conductor. Enterprise AEO platform. https://www.conductor.com/
  10. Semrush. AI Visibility Toolkit. https://www.semrush.com/ai-visibility/
  11. BuiltWith. llms.txt adoption tracking. https://trends.builtwith.com/

 

Datasets and Reference Materials

  1. YouTube-Commons Dataset. Transcript data used in LLM training (~30 billion words). https://huggingface.co/datasets/PleIAs/YouTube-Commons
  2. Common Crawl. Web crawl dataset used in LLM training. https://commoncrawl.org/
  3. Google Knowledge Graph. 800 billion facts about 8 billion entities. https://developers.google.com/knowledge-graph
  4. Wikidata. Structured data knowledge base. https://www.wikidata.org/
  5. REALM and DPR. Academic RAG systems using Wikipedia retrieval. https://arxiv.org/abs/2002.08909
  6. University of Zurich (April 2025). Bot experiment on r/changemyview (1,700+ fabricated comments). https://www.uzh.ch/
  7. London School of Economics. Wikidata thesis integration experiment. https://www.lse.ac.uk/

 

Internal Radiant Elephant Resources

  1. Radiant Elephant. “B2B Manufacturing SEO & GEO Case Study: DR 21 to 35, #1 AI Search Position.” https://www.radiantelephant.com/geo-case-study-national-b2b-manufacturer-dominates-ai-search/
  2. Radiant Elephant. “Boston SEO Case Study: From Algorithm Penalty to 45.2% Traffic Growth.” https://www.radiantelephant.com/boston-seo-case-study-from-algorithm-penalty-to-45-2-traffic-growth/
  3. Radiant Elephant. “Radiant Elephant SEO: Our Process, Philosophy, and What Makes Us Different.” https://www.radiantelephant.com/radiant-elephant-seo-our-process-philosophy-and-what-makes-us-different/
  4. Radiant Elephant. Generative Engine Optimization service page. https://www.radiantelephant.com/generative-engine-optimization/
  5. Radiant Elephant. Contact and GEO Discovery Call. https://www.radiantelephant.com/contact/

Gabriel Bertolo

Gabriel Bertolo is a 3rd generation entrepreneur who founded Radiant Elephant over 13 years ago after working for various advertising and marketing agencies. 

He is also an award-winning Jazz/Funk drummer and composer, as well as a visual artist.

His Web Design, SEO, and Marketing insights have been quoted in Forbes, Business Insider, Hubspot, Entrepreneur, Shopify, MECLABS, and more.

Check out some publications he's been quoted in:

Quoted in HubSpot's AI Search Visibility Article and HubSpot's Article on 6 Best Wix Alternatives

Quoted in DesignRush Dental Marketing Guide 

Quoted in MECLABS 

Quoted in DataBox Website Optimization Article and DataBox Best SEO Blogs

Quoted in Seoptimer

Quoted in Shopify Blog 
