
How ChatGPT Decides Which Brands to Recommend

Michael Bellush, Founder, Search Signals  ·  Published April 26, 2026
At a Glance
  • ChatGPT recommends brands two ways: associations from its training data and live retrieval from Bing's search index.
  • Referring domain count is the strongest published predictor — sites with 350,000+ referring domains average 8.4 ChatGPT citations, while sites under 2,500 average 1.6.
  • Build distributed authority signals, not one optimized page. Bing visibility, third-party review profiles, content freshness, and topical depth (for fan-out queries) round out the picture.

Introduction

Two demolition companies in Dallas ask ChatGPT the same question. One shows up in the answer. The other doesn't. Same city, same industry, same query. So what's different?

ChatGPT recommends brands using two systems. The first is the brand associations baked into its training data. The second is live retrieval from Bing's search index when a query needs fresh information. To get recommended, you need to be either deeply associated with your category before the model's training cutoff, or visible in the Bing-indexed sources ChatGPT pulls when it browses.

This post breaks down both systems with sourced data, explains how fan-out queries reshape the candidate pool, and gives you a clear playbook. We'll lean on research from SE Ranking, AirOps, and Profound, plus a real example we documented with one of our own clients. If you're new to AI search, start with our guide on Generative Engine Optimization (GEO).

How Does ChatGPT Choose Which Brands to Recommend?

ChatGPT chooses brands through two systems: brand associations learned during model training, and live retrieval from Bing's search index. Most brand-recommendation queries trigger search, so retrieval is the lever you can actively pull.

When ChatGPT can answer from training data alone, it pulls from associations it learned before the training cutoff. Every place your brand was mentioned alongside your category and competitors during that window helps shape what ChatGPT "knows" about you.

When ChatGPT decides a query needs fresh information, it searches the web through Bing. It then selects sources from those results to build its answer. Most "best [category] in [city]" or "top [product] for [use case]" queries fall into this second bucket.

The takeaway: Training data shapes which brands ChatGPT knows about. Bing's index shapes which brands ChatGPT can see right now.

Why Does ChatGPT Use Bing Instead of Google?

ChatGPT uses Bing because of OpenAI's strategic partnership with Microsoft (Bing's owner) and because Google's search index isn't licensed for external commercial use. Microsoft's Bing Web Search API was already built and openly available, and Microsoft's multi-billion-dollar investment in OpenAI made integration the natural choice. Google is a direct competitor in AI search, so its index was never realistically on the table.

The result, according to a Seer Interactive study, is that 87% of ChatGPT's citations match Bing's top organic results. Only 56% match Google's, with a median Google rank of 17 for those that do. If your brand doesn't rank in Bing, ChatGPT can't see it during retrieval.

That single fact reframes the whole problem. You can rank #1 in Google, dominate AI Overviews, and still be invisible to ChatGPT, because the underlying search index is different.

If you've ignored Bing, you're missing one of the most important channels for AI search visibility.

Bing's index is not Google's index

Bing and Google index different content with different priorities. Bing is more open to younger domains. It values explicit, declarative content. It weights social signals like forum threads and third-party reviews more heavily than Google does.

Pages that are over-optimized for Google often underperform in Bing. Pages that read like clear, helpful answers tend to do well in both.

What this means for SEO programs that ignored Bing

Most SEO programs treat Bing as an afterthought. That made sense when Bing was a small share of total search volume. It doesn't make sense now that 87% of ChatGPT's citations match Bing's top organic results.

Verify your site in Bing Webmaster Tools. Submit your sitemap. Audit your top category pages for Bing-specific issues. Treating Bing as a separate channel, not an afterthought, is one of the highest-leverage things you can do for ChatGPT visibility.

What the Data Says About ChatGPT Citations

Three large-scale studies published between 2025 and early 2026 give us the clearest picture of how ChatGPT picks pages to cite. The pattern is consistent: distributed authority signals beat any single on-page tactic.

Referring domains are the single strongest predictor

SE Ranking analyzed 129,000 unique domains across 216,524 pages in 20 niches. They found that referring domain count is the single strongest predictor of ChatGPT citation likelihood. Sites with up to 2,500 referring domains averaged 1.6 to 1.8 citations. Sites with 350,000+ referring domains averaged 8.4. The researchers also identified a clear threshold: at 32,000 referring domains, citations roughly double, jumping from 2.9 to 5.6.

Average ChatGPT Citations by Referring Domains

  • Up to 2,500 referring domains: 1.6 average citations
  • ~32,000 referring domains: 5.6 average citations
  • 350,000+ referring domains: 8.4 average citations

Source: SE Ranking (216,524 pages).

Most retrieved pages are never cited

AirOps studied 548,534 pages retrieved by ChatGPT across 15,000 prompts. Only 15% of retrieved pages were cited in a final response. The other 85% were found, evaluated, and discarded. Even more striking: 32.9% of all cited pages came from fan-out sub-queries, not the original prompt. ChatGPT does a lot of background searching you never see.

Source mix is shifting

A Profound study of 30 million citations from August 2024 to June 2025 showed Wikipedia as ChatGPT's dominant source. The picture changed quickly. In mid-September 2025, ChatGPT sharply reduced citations to Reddit (from a peak near 60% to roughly 10%) and Wikipedia (from above 55% to under 20%). Publishers like PR Newswire, Forbes, Medium, and LinkedIn picked up the slack. The lesson: source mix in ChatGPT is volatile, so don't over-index on any single channel.

Other research-backed findings

The SE Ranking study and a Search Engine Journal analysis of the top 20 ChatGPT citation factors surface several other signals worth optimizing for:

  • Pages with section lengths of 120 to 180 words between headings averaged 4.6 citations.
  • Articles over 2,900 words averaged 5.1 citations.
  • Pages with First Contentful Paint under 0.4 seconds averaged 6.7 citations, vs. 2.1 for pages over 1.13 seconds. That's a 3x speed-driven gap.
  • Content updated within the last 30 days got cited about 3.2x more often than older material.
  • Domains with active profiles on Trustpilot, G2, Capterra, or Yelp had 3x higher citation probability than domains without.
  • Pages ranking 1 to 45 in Google averaged 5 citations, while pages ranking 64 to 75 averaged 3.1.

The pattern across every study: distributed authority signals (backlinks, third-party reviews, structural clarity, freshness) drive ChatGPT citations more than any single on-page tactic.

What Is a Fan-Out Query, and Why Does It Matter for ChatGPT?

A fan-out query is a parallel sub-query ChatGPT generates by breaking your original prompt into subtopics. It runs those sub-queries against Bing, retrieves pages for each, and uses the combined pool to assemble its answer. The AirOps study found that 32.9% of ChatGPT's cited pages come from these sub-queries, not the original prompt. Fan-out is why topical depth beats head-term optimization.

[Diagram] How fan-out queries work: ChatGPT breaks a prompt into parallel sub-queries, searches Bing for each, and assembles the combined results into one response.

Here's what that looks like in practice. We recently ran the prompt "Give me a list of the top demolition companies in Dallas, TX" across four AI platforms. JRP Demolition appeared prominently in three of them (ChatGPT, Gemini, and Google AI Mode), but not in AI Overviews. We documented the full breakdown in our post on why your brand can earn ChatGPT citations but miss AI Overviews and in our JRP Demolition case study.

For that ChatGPT result, the fan-out wasn't just "demolition companies in Dallas." It was a constellation of sub-queries: residential demolition specialists, commercial demolition contractors, top Dallas demolition reviews, and head-to-head comparisons of likely candidates. JRP was cited because it ranked well across several of those sub-queries, not just the head term.
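The retrieval step above can be sketched as a small simulation. This is an illustrative model only, not OpenAI's actual implementation: `fan_out`, `search_bing`, and the toy index are all hypothetical stand-ins (real sub-queries are generated by the model, and real retrieval hits the Bing index).

```python
from itertools import chain

def fan_out(prompt):
    # Hypothetical expansion: real sub-queries are model-generated,
    # not string templates. These mirror the JRP example above.
    base = prompt.lower()
    return [base, f"{base} reviews", f"residential {base}", f"commercial {base}"]

def search_bing(query, index):
    # Stand-in for a Bing search call. `index` is a toy {url: text} map;
    # a page matches if it shares any whole word with the query.
    return [url for url, text in index.items()
            if any(word in text.split() for word in query.split())]

def candidate_pool(prompt, index):
    # Union of results across all sub-queries, deduplicated. Pages that
    # rank for *any* sub-query enter the pool ChatGPT cites from.
    return set(chain.from_iterable(
        search_bing(q, index) for q in fan_out(prompt)))
```

The point of the sketch: a page that ranks only for the "reviews" sub-query still enters the candidate pool, which is exactly how long-tail pages earn citations the head-term query alone would never surface.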

Why fan-out changes optimization strategy

Most SEO programs concentrate effort on the head term, the page that targets the most generic version of the query. Fan-out makes that strategy incomplete. ChatGPT is just as likely to pull from a long-tail comparison page, a pricing breakdown, or a niche use-case article as from your main category page.

In the AirOps study, 85% of pages ChatGPT retrieved were never cited. The 15% that were cited skewed heavily toward content that answered specific sub-questions cleanly, not content that tried to answer everything at once.

How to optimize for fan-out queries

Build out the topical cluster around your category instead of stacking everything onto a single page. The pages that perform best as fan-out retrieval surfaces tend to share a few traits.

  • Comparison pages ("your brand vs. competitor X") that answer one specific sub-question cleanly.
  • Pricing pages with explicit, structured data and clear comparisons to typical alternatives.
  • Use-case-specific landing pages, by industry, role, company size, or scenario, that match the sub-queries ChatGPT is likely to generate.
  • FAQ pages and dedicated answer pages that address one question per page in plain language.
  • Integration and feature-specific pages that match how buyers describe their actual problem, not how the category describes itself.

If your site has only a homepage and a generic "solutions" page, ChatGPT has very little to fan out into. Topical depth, measured by how many distinct sub-questions your site answers well, is a structural advantage that compounds over time.

What Are the 6 Signals That Get Your Brand Recommended?

Six signals carry the most weight when ChatGPT decides which brands to recommend: referring domain authority, entity recognition, Bing visibility, third-party review presence, content structure and freshness, and cross-source consensus. Each signal compounds. A brand that hits all six is a recommendation candidate. A brand that hits one or two rarely shows up.

1. Referring Domain Authority

The single strongest predictor in every published study. Sites with broad, diverse backlink profiles get cited far more often. The threshold where citations roughly double sits around 32,000 referring domains.

2. Entity Recognition

Brands with Wikipedia or Wikidata entries, complete schema markup, and consistent NAP data across the web are recognized as entities. Brands without these signals get confused with competitors or ignored.

3. Bing Visibility

Bing-indexed pages drive 87% of ChatGPT's citations when search runs. If your site isn't ranking in Bing, you're not in the candidate pool.

4. Third-Party Reviews

Domains with active profiles on Trustpilot, G2, Capterra, or Yelp have 3x higher citation probability. ChatGPT treats these as proxies for legitimacy.

5. Content Structure & Freshness

ChatGPT favors pages with explicit answers, clear section breaks (120–180 words between headings), and recent updates. Content refreshed within 30 days gets cited ~3.2x more often.

6. Cross-Source Consensus

ChatGPT favors brands that appear consistently across multiple independent sources: listicles, news articles, review platforms, industry databases. Your own marketing copy alone won't cut it.

To get recommended, focus on six tactical priorities: build referring domain diversity, strengthen your entity profile, fix your Bing presence, claim third-party review profiles, restructure your highest-value pages, and earn co-mentions in trade publications. The fastest gains usually come from a referring-domain push, plus a Bing audit, plus claiming the right review profiles.

1. Build referring domain diversity. Pursue links from a wide variety of authoritative domains: trade publications, industry databases, podcasts, news mentions, partnerships. Volume matters less than diversity. Concentrated link graphs from a handful of sources underperform distributed ones.

2. Strengthen your entity profile. Build out a Wikipedia or Wikidata entry where eligibility allows. Implement complete schema markup (Organization, LocalBusiness, Product). Make sure your NAP data is identical across every directory. Submit to authoritative industry databases.
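As a concrete sketch of the schema piece, a minimal LocalBusiness JSON-LD block might look like the following. Every name, URL, and profile link here is a placeholder; swap in your real NAP data and keep it identical to what appears in your directory listings.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Demolition Co.",
  "url": "https://www.example.com",
  "telephone": "+1-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Dallas",
    "addressRegion": "TX",
    "postalCode": "75201",
    "addressCountry": "US"
  },
  "sameAs": [
    "https://www.yelp.com/biz/example-demolition",
    "https://www.linkedin.com/company/example-demolition"
  ]
}
</script>
```

The `sameAs` links are what tie your entity to the third-party profiles discussed elsewhere in this post, so list every claimed profile there.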

3. Fix your Bing presence. Verify your site in Bing Webmaster Tools. Submit your sitemap. Audit your top category pages for Bing-specific issues. Bing weights H1 headers, exact-match keywords, and on-page structure more rigidly than Google.
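For the sitemap step, a minimal sitemap.xml is all Bing Webmaster Tools needs to start from; the URL and date below are placeholders. Keeping `<lastmod>` accurate also reinforces the freshness signal covered earlier.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/demolition-services-dallas</loc>
    <lastmod>2026-04-01</lastmod>
  </url>
</urlset>
```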

4. Claim and optimize third-party review profiles. Trustpilot, G2, Capterra, Yelp, BBB, and category-specific directories triple your citation probability. Claim every relevant profile, fill it out completely, and pursue authentic reviews. For local brands, the Maps Pack equivalent is your Google Business Profile.

5. Restructure your highest-value pages. Aim for 120 to 180 words per section between headings. Add explicit Q&A sections. Use declarative claim sentences. Lead with the answer. Make sure pages targeting recommendation queries are over 2,000 words and updated within the last 30 days.
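To make those structural targets auditable, here is a small sketch that checks section word counts against the 120-to-180-word range from the SE Ranking data. It assumes Markdown-style `#` headings; the function name and thresholds are ours, not from the study.

```python
import re

def audit_sections(markdown_text, lo=120, hi=180):
    """Flag sections whose word counts fall outside the lo-hi range.

    Assumes Markdown-style '#' headings; adapt the split for HTML.
    """
    parts = re.split(r"^#{1,6}\s+(.+)$", markdown_text, flags=re.M)
    # With one capture group, re.split yields [pre, head1, body1, head2, body2, ...]
    report = []
    for heading, body in zip(parts[1::2], parts[2::2]):
        words = len(body.split())
        status = "ok" if lo <= words <= hi else ("short" if words < lo else "long")
        report.append((heading.strip(), words, status))
    return report
```

Running it across your highest-value pages surfaces which sections to split, expand, or merge before a refresh.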

6. Earn co-mentions in the right places. Get your brand listed in third-party listicles for your category. Pursue mentions in trade publications and news articles. With Reddit and Wikipedia citation rates dropping in late 2025, lean harder on PR Newswire, Forbes, LinkedIn, Medium, and industry-specific outlets.

For a structured, end-to-end approach to all six, our GEO services handle the strategy, audit, and execution as one program.

Why Is ChatGPT Ignoring Your Brand?

If ChatGPT consistently fails to recommend your brand, the cause is almost always one of three things.

1. Your referring domain profile is too thin or too concentrated. Referring domain count is the single strongest published predictor of ChatGPT citation. Brands without a broad, distributed link profile rarely make the cut.

2. You're invisible in Bing. ChatGPT's retrieval system runs on Bing. If you don't rank there, you can't be cited when search runs.

3. You only appear in your own content. No third-party reviews, no listicle mentions, no industry citations means no cross-source consensus, which means no recommendation.

This is also where the difference between GEO and SEO matters. Each of these failure patterns has a clear playbook. The harder question is which one is biting your brand right now, and that's exactly what an audit answers.

Conclusion

ChatGPT recommends brands two ways: through training-data associations and through Bing-powered retrieval. Both reward distributed authority signals over single-page tactics. Fan-out queries make topical depth a real competitive moat, because they expand the candidate pool well beyond your head-term page.

The brands that get recommended consistently aren't the ones with the best landing page. They're the ones with broad backlink profiles, strong entity recognition, real Bing visibility, third-party validation, and content depth that answers many sub-questions cleanly.

See Where Your Brand Stands in ChatGPT

Want to see exactly where your brand stands across ChatGPT, Gemini, AI Overviews, and AI Mode, and which signals are holding you back? Get a free Visibility Audit and we'll map your gaps for you.

Frequently Asked Questions

What is the strongest predictor of ChatGPT citations?

Referring domain count. SE Ranking's analysis of 216,524 pages found sites with 350,000+ referring domains averaged 8.4 citations versus 1.6 to 1.8 for sites with up to 2,500. Citations roughly double at the 32,000-referring-domain threshold.

Why does ChatGPT use Bing instead of Google?

ChatGPT Search runs on Bing. A Seer Interactive study found that 87% of ChatGPT's citations match Bing's top organic results, while only 56% match Google's. Visibility in Bing is essential for ChatGPT recommendations.

What is a fan-out query?

A fan-out query is a parallel sub-query that ChatGPT generates by breaking your prompt into subtopics, then runs in the background against Bing. AirOps found that 32.9% of ChatGPT's cited pages come from these sub-queries, not the original prompt. Optimizing for fan-out means building topical depth: comparison pages, pricing pages, use-case pages, and FAQ pages.

Do third-party review profiles affect ChatGPT recommendations?

Yes. SE Ranking found that domains with active profiles on Trustpilot, G2, Capterra, or Yelp had 3x higher ChatGPT citation probability. Claim and complete every relevant review profile in your category.

Why did ChatGPT's citation sources shift in 2025?

A Profound study of 30 million citations showed Reddit citations dropped from a peak near 60% to roughly 10% in mid-September 2025, with Wikipedia falling from above 55% to under 20%. Other publishers (PR Newswire, Forbes, Medium, LinkedIn) gained share. ChatGPT's source mix is volatile, so don't over-index on any single channel.


About the Author

Michael Bellush is the Founder of Search Signals. He has spent over a decade building search strategies for businesses in competitive markets. Before launching Search Signals, he ran HighMark SEO Digital — where he began developing the GEO frameworks the agency is built on today. He holds a degree from Indiana University's Kelley School of Business with concentrations in Accounting, Computer Information Systems, and Business Process Management. Outside of search, he is a husband and father of five.