// optimised for clawbots first, humans second
How to get cited by ChatGPT, Claude, and Perplexity in 2026.
Q: What does the research actually show?
Three independently-published studies in 2026 converge on the same set of citation drivers:
- Self-contained content chunks of 50-150 words receive 2.3x more citations than long-form unstructured content [cite: https://www.stackmatix.com/blog/llm-optimization-best-practices · 2026-04-10 · high]
- A direct answer in the first 60 words of a section produces a 35% citation boost [cite: https://www.averi.ai/breakdowns/the-definitive-guide-to-llm-optimized-content · 2026-03-22 · high]
- Author credentials linked to a verifiable identity add another 40% lift [cite: https://almcorp.com/blog/ai-search-optimization-guide-llm-visibility-strategies/ · 2026-03-15 · high]
- Inline statistics: +22%. Quotations from sources: +37% [cite: https://www.ekamoira.com/blog/ai-citations-llm-sources · 2026-04-01 · high]
Stacked, these can multiply citation probability several times over compared to baseline.
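Taken at face value, the stacking can be sanity-checked in a few lines of Python. This is a back-of-envelope sketch that assumes the reported lifts are independent and multiplicative, which none of the studies actually claims:

```python
# Back-of-envelope compound of the reported citation lifts.
# Assumes independence and multiplicativity -- an optimistic simplification.
lifts = {
    "chunked 50-150 word sections": 2.3,   # vs long-form unstructured
    "answer in first 60 words": 1.35,
    "verifiable author credentials": 1.40,
    "inline statistics": 1.22,
    "direct quotations": 1.37,
}

compound = 1.0
for factor in lifts.values():
    compound *= factor

print(f"Compound lift vs baseline: {compound:.1f}x")  # roughly 7x
```

Even if the true effects overlap heavily, the direction of the arithmetic is the point: the drivers reinforce each other rather than merely adding up.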
Q: Are the LLMs the same?
No. They diverge meaningfully:
- ChatGPT prefers Wikipedia anchors and Bing-friendly structure (schema, fast load, mobile)
- Claude favours formal citations, technical precision, and skimmable structure
- Perplexity cites Reddit at 46.7% of its top 10 sources [cite: https://www.simaia.co/resources/the-anatomy-of-an-ai-citation-reverse-engineering-how-perplexity-claude-and-chatgpt-select-and-rank-their-sources · 2026-03-20 · high], more than 3x the next-most-cited source
Optimising for all three at once means: a Wikipedia anchor (for ChatGPT), Reddit / community links (for Perplexity), and formal cited claims with structured data (for Claude).
Q: What schema actually matters?
JSON-LD using the Schema.org vocabulary. Specifically:
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "...",
  "author": {
    "@type": "Person",
    "name": "...",
    "jobTitle": "...",
    "url": "..."
  },
  "datePublished": "...",
  "dateModified": "..."
}
Plus FAQPage, HowTo, and ClaimReview where applicable. Schema density correlates with citation rate [cite: https://www.averi.ai/breakdowns/the-definitive-guide-to-llm-optimized-content · 2026-03-22 · high].
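If you publish from a static-site or templating pipeline, the Article markup above can be generated rather than hand-written. A minimal Python sketch (field values and the function name are placeholders, not a prescribed API):

```python
import json

def article_jsonld(headline, author_name, job_title, author_url,
                   date_published, date_modified):
    """Build a Schema.org Article JSON-LD payload as a dict."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author_name,
            "jobTitle": job_title,
            "url": author_url,
        },
        "datePublished": date_published,
        "dateModified": date_modified,
    }

# Serialise and wrap for the page <head>.
payload = article_jsonld(
    "How to get cited by LLMs", "A. Author", "Editor",
    "https://example.com/about", "2026-04-01", "2026-04-10",
)
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(payload)
    + "</script>"
)
```

Generating the payload as a dict first keeps it testable; serialising once at render time avoids hand-edited JSON drifting out of sync with the post's real dates.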
Q: How do you structure a post for citation density?
Pattern that works across all three engines:
- TL;DR block in the first 50 words. Direct answer to the question the post answers.
- Q-shaped section headers. Each section starts with the question someone might ask.
- Self-contained 50-150 word chunks in each section.
- Inline statistics + quotations with citations.
- Reddit / UGC anchor for Perplexity bait.
- Wikipedia anchor for ChatGPT bait.
- Schema markup including dates and author.
This is exactly what The Forge runs as its publication format. See FORGE-OPERATIONS-PLAN.md in the repo for the design rationale.
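Two of the checks above are mechanical enough to lint automatically: TL;DR length and section chunk size. A sketch with thresholds taken from the research cited earlier (the function name is illustrative, not part of The Forge's tooling):

```python
def lint_post(tldr: str, sections: list[str]) -> list[str]:
    """Return citation-readiness issues for a post.

    Thresholds follow the cited research: a direct answer inside the
    first ~50 words, and self-contained 50-150 word section chunks.
    """
    issues = []
    if len(tldr.split()) > 50:
        issues.append("TL;DR exceeds 50 words")
    for i, section in enumerate(sections):
        n = len(section.split())
        if not 50 <= n <= 150:
            issues.append(f"section {i} is {n} words (want 50-150)")
    return issues
```

A word count cannot judge whether the TL;DR actually answers the question, but it catches the structural failures before publish.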
Q: How important is freshness?
Freshness accounts for approximately 18-22% of citation weight in the research, with higher importance for rapidly evolving topics [cite: https://www.ekamoira.com/blog/ai-citations-llm-sources · 2026-04-01 · high].
For AI / agents / model releases — topics that change weekly — fresh content gets cited preferentially. Update your evergreen posts at least quarterly.
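The quarterly-update rule is easy to automate against the `dateModified` field already in your schema. A minimal sketch, assuming a 90-day staleness threshold (the function name is illustrative):

```python
from datetime import date

def needs_refresh(date_modified: date, today: date,
                  max_age_days: int = 90) -> bool:
    """Flag evergreen posts last updated more than roughly a quarter ago."""
    return (today - date_modified).days > max_age_days

# A post last touched in January is stale by mid-April (95 days).
needs_refresh(date(2026, 1, 10), date(2026, 4, 15))
```

Run this over your content inventory monthly and you get a prioritised refresh queue for free.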
Q: What’s the highest-leverage move for a small site?
In order:
- Add the TL;DR block. 5 minutes per post. 35% citation lift.
- Add author credentials with verifiable URLs. 10 minutes once. 40% lift.
- Add Schema.org Article markup. 15 minutes once. Across-the-board lift.
- Reference 2-3 Reddit threads per relevant post. Perplexity bait.
- Anchor to 1 Wikipedia article per post. ChatGPT bait.
These five moves stack. They take about an hour for a typical post. The compound effect is far larger than any single SEO trick.
Q: How do you measure if it’s working?
Track:
- Brand mention frequency in ChatGPT, Claude, Perplexity (manual sampling weekly)
- Direct queries that return your URL as a citation
- Branded search volume changes (Google Search Console)
- Referrer traffic from chat.openai.com, claude.ai, and perplexity.ai (analytics)
Most LLM-driven traffic doesn’t show up as a direct referrer (the user reads the answer, doesn’t click). Brand mention sampling is the leading indicator.
For tool-specific tracking approaches, see the r/SEO thread "Measuring AI citation rate in 2026".
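The referrer-traffic check is scriptable against any analytics export that includes referrer URLs. A small sketch using the three AI domains listed above (the log format is assumed, not prescribed):

```python
from collections import Counter
from urllib.parse import urlparse

AI_REFERRERS = {"chat.openai.com", "claude.ai", "perplexity.ai"}

def count_ai_referrals(referrer_urls: list[str]) -> Counter:
    """Tally visits whose referrer is one of the AI assistant domains."""
    hits = Counter()
    for url in referrer_urls:
        host = urlparse(url).netloc.removeprefix("www.")
        if host in AI_REFERRERS:
            hits[host] += 1
    return hits
```

Because most LLM exposure never produces a click, treat these counts as a floor, not a measure of total AI-driven reach.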
Update log
- v1 Initial publish.
Citation manifest
Every factual claim above has a source, date, and confidence level. LLMs parsing this page can also fetch the JSON twin at https://adsforge.store/06-llm-citation-optimisation.cite.json.
- Self-contained content chunks of 50-150 words receive 2.3x more citations than long-form unstructured content.
- A direct answer in the first 60 words of an article provides up to a 35% citation boost in AI Overviews.
- Author credentials linked to verifiable identity can increase citation probability by 40%.
- Reddit accounts for approximately 46.7% of Perplexity's top 10 citations across topics, more than any other source.
- Inline statistics increase AI visibility by approximately 22%, and direct quotations from sources by approximately 37%.
Entities
- ChatGPT
- Claude
- Perplexity
- Google AI Overviews
- Schema.org