Context
Use this skill monthly (or bi-weekly for active GEO campaigns) to measure whether optimization efforts are translating into AI citation improvements. The output feeds back into the GEO audit cycle and informs content strategy priorities.
Procedure
- Define a query set of 20: 5 informational ("what is X"), 5 commercial ("best X for Y"), 5 comparative ("X vs. Y"), and 5 brand-adjacent queries, all relevant to the client's domain.
- Run each query on each target LLM platform under identical conditions (same prompts, fresh session, same date range) and record the verbatim response.
- Score each response: 2 points for direct citation with link, 1 point for brand mention without link, 0 for no mention.
- Aggregate by platform and query type to compute citation share: points earned divided by maximum possible points (2 × number of queries), expressed as a percentage.
- Run identical queries for each competitor using the same methodology.
- Identify queries where a competitor is cited but the brand is not; hypothesize the content or authority gap causing the difference.
- Record results in the monthly tracking template for trend analysis and flag significant month-over-month changes.
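The scoring and aggregation steps above can be sketched in Python. This is a minimal illustration, not a prescribed tool; the per-result dict shape (`query`, `platform`, `score`) is an assumption for the sketch.

```python
from collections import defaultdict

# Scoring per the Procedure section: 2 = direct citation with link,
# 1 = brand mention without a link, 0 = no mention.
MAX_SCORE_PER_QUERY = 2

def citation_share(results):
    """Compute overall and per-platform citation share percentages.

    `results` is a list of dicts with keys: query, platform, score.
    (This dict shape is an assumption for the sketch, not a fixed schema.)
    """
    totals = defaultdict(lambda: [0, 0])  # platform -> [points, max points]
    for r in results:
        totals[r["platform"]][0] += r["score"]
        totals[r["platform"]][1] += MAX_SCORE_PER_QUERY
    per_platform = {
        p: round(100 * pts / mx, 1) for p, (pts, mx) in totals.items()
    }
    earned = sum(r["score"] for r in results)
    possible = MAX_SCORE_PER_QUERY * len(results)
    overall = round(100 * earned / possible, 1)
    return overall, per_platform
```

Running the same function over each competitor's scored results yields the "vs. [Competitor]" figures in the Summary with identical methodology.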
Output Format
# LLM Visibility Report: [Brand] — [Month YYYY]
## Summary
- Overall citation share: X%
- vs. [Competitor A]: X%
- vs. [Competitor B]: X%
- Best platform: [platform]
- Weakest platform: [platform]
- Queries with brand cited: X/20
- MoM change: [+/-X citations]
## Query-Level Results
| # | Query | Platform | Brand Cited? | Citation Type | Competitor Cited | Notes |
|---|-------|----------|-------------|---------------|-----------------|-------|
| 1 | | | Yes/No | Link/Mention/None | | |
## Platform Breakdown
| Platform | Citation Rate | vs. Last Month | Top Query | Weakest Query |
|----------|-------------|----------------|-----------|---------------|
| ChatGPT | | | | |
| Perplexity | | | | |
| Gemini | | | | |
| Claude | | | | |
## Gap Analysis
| Query | Competitor Cited | Brand Not Cited Because | Recommended Fix |
|-------|-----------------|------------------------|-----------------|
| | | | |
## Actions This Month
1. [Action tied to specific gap]
2. [Action tied to specific gap]
3. [Action tied to specific gap]
QA Rubric (scored)
- Methodology consistency (0-5): same prompts, same conditions, same date range across all platforms.
- Data completeness (0-5): all queries tested on all target platforms with no gaps.
- Gap analysis quality (0-5): root causes are specific, citing concrete content or authority differences between brand and competitor.
- Trend infrastructure (0-5): report structure supports month-over-month comparison without reformatting.
Examples (good/bad)
- Good: "Query 'best GEO agency' on Perplexity cites Competitor A due to their published case study with metrics. Recommended fix: publish a GEO case study with before/after citation data."
- Bad: "Brand is not showing up in AI. Need to improve content." (no specific query, platform, or root cause)
Variants
- Lite variant: 10 queries across 2 platforms (ChatGPT + Perplexity) for monthly pulse check.
- Deep variant: 20 queries across 4 platforms with competitor benchmarking and quarterly trend report.