How to Set Up AI Visibility Tracking in 30 Minutes
You can have AI visibility tracking running in 30 minutes. Not a polished dashboard, not a stakeholder-ready program — just the actual measurement loop, end to end, with real data flowing. This post walks through it step by step, with copy-paste code. The output of the 30 minutes is a spreadsheet showing how often the AI answers to each of your top 50 queries mention your brand across 4 AI engines, and a baseline you can compare against next week.
If you decide it’s worth productionizing afterwards, the path from prototype to production is well-trodden — we covered the build-vs-buy decision in AI search visibility tools: build vs buy. For the framework on what you’re actually measuring, see the AI brand visibility measurement framework.
What you’ll need
- A cloro API key (sign up at cloro.dev) — the free trial covers more than enough credits for the prototype
- Python 3.9+ installed locally (or any HTTP client; we use Python here)
- A Google Sheets or Excel doc to log results
- 30 minutes
That’s it. No infrastructure, no dashboard tool, no scheduler. The goal of this 30 minutes is to validate that AI visibility tracking is worth doing for your brand — not to build the production system.
Minute 0–5: Pick your queries
The single biggest determinant of whether your tracking program produces useful data is the query set. Open a doc and list 50 queries split roughly:
- 15 branded queries that include your company name. Examples: "what does cloro do", "cloro vs serpapi", "is cloro reliable for SERP scraping".
- 15 category queries about your product space without naming any brand. Examples: "best SERP API for AI search", "top AI brand monitoring tools", "cheapest scraping API for Google".
- 20 use-case queries describing the job your product solves. Examples: "how to monitor brand mentions in ChatGPT", "how to track AI overview citations", "how do I scrape Google AI mode results".
Source the queries from your sales team’s first-call notes (what do prospects actually ask?), your existing Google Search Console data (what queries already drive impressions to your site?), and the autocomplete suggestions for your top 5 head terms. Don’t invent the queries from scratch — they need to be the queries your buyers actually type.
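If you keep the list in code rather than a doc, tag each query with its intent bucket now, since the per-bucket pivot in minute 25–30 needs that tag anyway. A minimal sketch; the bucket names and the INTENT_BY_QUERY mapping are our own convention, not something the script below requires:

QUERY_BUCKETS = {
    "branded": [
        "what does cloro do",
        "cloro vs serpapi",
        # ... 13 more
    ],
    "category": [
        "best SERP API for AI search",
        "top AI brand monitoring tools",
        # ... 13 more
    ],
    "use_case": [
        "how to monitor brand mentions in ChatGPT",
        "how to track AI overview citations",
        # ... 18 more
    ],
}

# Flatten into the QUERIES list the script expects, keeping the tag for later.
QUERIES = [q for bucket in QUERY_BUCKETS.values() for q in bucket]
INTENT_BY_QUERY = {q: b for b, qs in QUERY_BUCKETS.items() for q in qs}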
Minute 5–10: Get a cloro API key
If you already have one, skip ahead. If not: go to cloro.dev, sign up, copy the sk_live_... key from your dashboard, and export it as an environment variable so the script below picks it up:
export CLORO_API_KEY="sk_live_your_key_here"
The free trial gives you enough credits to run the prototype several times over without hitting a paywall.
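Optional sanity check before the full run: fire one request at the same endpoint the script below uses and confirm you get a 200 and a JSON body back. The payload mirrors the script; if your plan or account differs, the response shape may too:

curl -s https://api.cloro.dev/v1/monitor/chatgpt \
  -H "Authorization: Bearer $CLORO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "what does cloro do", "country": "US"}'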
Minute 10–25: Run the script
Save the following as track-ai-visibility.py and run it. It hits 4 AI engines (ChatGPT, Perplexity, Gemini, AI Overview) with each of your queries and logs the results to a CSV.
import csv
import os

import requests

API_KEY = os.environ["CLORO_API_KEY"]
BRAND_DOMAIN = "cloro.dev"  # change to your domain
COMPETITOR_DOMAINS = ["competitor1.com", "competitor2.com"]  # change to yours
ENGINES = ["chatgpt", "perplexity", "gemini", "aioverview"]
COUNTRY = "US"

QUERIES = [
    # Paste your 50 queries here, one per line, as strings.
    "best AI brand visibility tracking tools",
    "how to monitor chatgpt mentions",
    # ... etc
]


def check_query(engine: str, query: str) -> dict:
    """Hit the cloro API for one engine + one query, return parsed mention data."""
    response = requests.post(
        f"https://api.cloro.dev/v1/monitor/{engine}",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "prompt": query,
            "country": COUNTRY,
            "include": {"sources": True, "markdown": True, "entities": True},
        },
        timeout=120,
    )
    response.raise_for_status()
    data = response.json()["result"]

    sources = data.get("sources", [])
    text = (data.get("markdown") or "").lower()

    # Mentioned = brand appears in the answer text or in any cited source URL.
    brand_mentioned = BRAND_DOMAIN in text or any(
        BRAND_DOMAIN in (s.get("url") or "") for s in sources
    )
    # Count how many competitor domains show up in the same answer.
    competitor_mentions = sum(
        1 for c in COMPETITOR_DOMAINS
        if c in text or any(c in (s.get("url") or "") for s in sources)
    )
    # Cited = brand appears as an actual source link, not just in prose.
    brand_cited = any(BRAND_DOMAIN in (s.get("url") or "") for s in sources)

    return {
        "engine": engine,
        "query": query,
        "brand_mentioned": brand_mentioned,
        "brand_cited": brand_cited,
        "competitor_mentions": competitor_mentions,
        "total_sources": len(sources),
    }


def main():
    rows = []
    for query in QUERIES:
        for engine in ENGINES:
            try:
                rows.append(check_query(engine, query))
                print(f"  {engine}: {query[:60]}")
            except Exception as e:
                print(f"  ! {engine}: {query[:60]} -> {e}")

    if not rows:
        print("No results; check your API key and query list.")
        return

    with open("ai-visibility-results.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

    # Summary
    total = len(rows)
    mentioned = sum(1 for r in rows if r["brand_mentioned"])
    cited = sum(1 for r in rows if r["brand_cited"])
    print(f"\nMention rate: {mentioned}/{total} = {100*mentioned/total:.1f}%")
    print(f"Citation rate: {cited}/{total} = {100*cited/total:.1f}%")


if __name__ == "__main__":
    main()
Run it:
python3 track-ai-visibility.py
For a 50-query × 4-engine matrix, the script takes 5–15 minutes depending on engine response times. The output is a CSV with one row per query-engine pair plus a summary printed to stdout.
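The loop is pure I/O waiting, so if 5–15 minutes is too slow you can parallelize it with a small thread pool. A sketch, assuming cloro's rate limits tolerate a few concurrent requests (start with a low max_workers and raise it only if nothing errors):

from concurrent.futures import ThreadPoolExecutor

def safe_check(pair):
    """Wrap check_query so one failed call doesn't kill the whole run."""
    engine, query = pair
    try:
        return check_query(engine, query)
    except Exception as e:
        print(f"  ! {engine}: {query[:60]} -> {e}")
        return None

def run_all(queries, engines, max_workers=4):
    pairs = [(engine, query) for query in queries for engine in engines]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return [row for row in pool.map(safe_check, pairs) if row is not None]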
Minute 25–30: Read the baseline
Open ai-visibility-results.csv in Sheets/Excel. The summary numbers at the bottom of the script output are your baseline mention rate and citation rate. Compare:
- Mention rate by engine. A pivot table on engine × brand_mentioned shows whether you're invisible on Perplexity but present on ChatGPT, or any other per-engine pattern.
- Mention rate by query intent bucket. Tag each query as branded/category/use-case and pivot (a pandas version of both pivots follows this list). Branded mention rate near 100% is expected; category mention rate at 0% is your biggest opportunity; use-case mention rate is where buying intent lives.
- Citation rate vs mention rate. The ratio between these two tells you whether AI engines drive traffic to you or just brand awareness without clicks. A low citation rate means you have an attribution problem, not a visibility problem.
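The same pivots in code, for anyone who'd rather skip the spreadsheet step. A sketch using pandas (not otherwise required by this post); INTENT_BY_QUERY is the query-to-bucket mapping from the minute 0–5 sketch:

import pandas as pd

df = pd.read_csv("ai-visibility-results.csv")
df["intent"] = df["query"].map(INTENT_BY_QUERY)  # branded / category / use_case

# Mention rate by engine (percent).
print(df.groupby("engine")["brand_mentioned"].mean().mul(100).round(1))

# Mention rate by intent bucket (percent).
print(df.groupby("intent")["brand_mentioned"].mean().mul(100).round(1))

# Citation rate next to mention rate, per engine.
print(df.groupby("engine")[["brand_mentioned", "brand_cited"]].mean().mul(100).round(1))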
That’s the baseline. Save the CSV with today’s date in the filename. Run the script again next week. Diff the two CSVs. That diff is your AI search tracking program in week 2 — and the same pattern scales to weekly cadence indefinitely.
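The week-2 diff also fits in a few lines. A sketch; the filenames are hypothetical, use whatever dated naming convention you picked when saving the CSVs:

import pandas as pd

old = pd.read_csv("ai-visibility-results-week1.csv")
new = pd.read_csv("ai-visibility-results-week2.csv")

merged = old.merge(new, on=["engine", "query"], suffixes=("_old", "_new"))
changed = merged[merged["brand_mentioned_old"] != merged["brand_mentioned_new"]]

# Every query-engine pair where you gained or lost a mention this week.
print(changed[["engine", "query", "brand_mentioned_old", "brand_mentioned_new"]])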
What productionizing looks like (when you’re ready)
The prototype above answers "is this worth doing?" Within a week or two of running it, the answer is usually yes. The path from prototype to production:
1. Move the CSV to a warehouse. Pipe results into Postgres, BigQuery, or Snowflake instead of CSV. One row per query-engine-week. (A minimal sketch follows this list.)
2. Schedule the script. GitHub Actions, cron, Airflow — whatever you already use. Weekly is the right default cadence.
3. Build a dashboard. Looker/Metabase if you have one, otherwise a simple HTML/Streamlit view. Show: mention rate trend, share of voice trend, citation rate trend, query-level breakdown.
4. Add competitor tracking. Expand the script to compute share of voice automatically against your top 3–5 competitors.
5. Layer in sentiment. Pipe each response text through a sentiment classifier; store the score per row.
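A sketch of step 1 using SQLite as a stand-in, since it runs with zero infrastructure; in production you'd point the same writes at Postgres, BigQuery, or Snowflake. Table and column names are illustrative:

import sqlite3
from datetime import date

conn = sqlite3.connect("ai_visibility.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS visibility (
        week TEXT, engine TEXT, query TEXT,
        brand_mentioned INTEGER, brand_cited INTEGER,
        competitor_mentions INTEGER, total_sources INTEGER,
        PRIMARY KEY (week, engine, query)
    )
""")

week = date.today().isoformat()
for r in rows:  # `rows` as built in the tracking script's main()
    conn.execute(
        "INSERT OR REPLACE INTO visibility VALUES (?, ?, ?, ?, ?, ?, ?)",
        (week, r["engine"], r["query"], int(r["brand_mentioned"]),
         int(r["brand_cited"]), r["competitor_mentions"], r["total_sources"]),
    )
conn.commit()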
Or skip steps 1–5 entirely and use cloro’s AI visibility tracking as the production layer — same API, same engine coverage, but you don’t build the warehouse, scheduler, or dashboard yourself. We covered the build-vs-buy trade-off in AI search visibility tools: build vs buy.
Common gotchas
- Personalization. If you’re testing AI engines manually in a browser, your logged-in account skews the results. The API approach above sidesteps this — every API call is a clean session.
- Rate limits. ChatGPT and Gemini occasionally rate-limit; cloro’s API handles retries transparently, so the script above doesn’t need its own retry logic.
- Country drift. AI engines personalize by IP geo. The country: "US" parameter ensures consistent country targeting; change it for international tracking.
- Query set staleness. The 50 queries that mattered three months ago aren't the same queries that matter today. Refresh quarterly.
Next steps
Once you have a baseline, the next decisions are: which engines to add (we recommend AI Mode, Copilot, and Grok, in that order of priority), what cadence to settle on (weekly is the right default), and which platform tool to layer in for stakeholder dashboards (see LLM visibility tracking tools). The 30-minute prototype is the smallest possible end-to-end loop. Everything from here is iteration.
Frequently asked questions
Do I need an engineer to set up AI visibility tracking?
Not for the 30-minute setup in this post — anyone comfortable running a Python script and pasting output into a spreadsheet can complete it. For ongoing automated tracking with weekly cadence and a real dashboard, you eventually want either a data engineer or a no-code platform like Peec AI/OtterlyAI/Profound. The 30-minute version gets you measuring; productionizing comes later.
What does the 30-minute setup cost?
Negligible. Running 50 queries against 4 engines is 200 API calls at cloro's pay-per-call pricing — under $1 for the initial run. Even at weekly cadence over a month, the API spend stays in the low single digits per month while you're validating the program. Productionizing changes the cost equation; the prototype phase essentially doesn't.
Which engines should I cover in the first run?
ChatGPT, Perplexity, Google AI Overview, and Gemini. Those four cover the majority of buyer-facing AI search behavior in 2026. AI Mode, Copilot, and Grok can be added in week 2 once the pipeline works end-to-end.
How do I pick the queries?
Split a 50-query starter set roughly 30/30/40 across (a) branded queries that include your company name, (b) category queries about your product space without naming any brand, and (c) use-case queries describing the job-to-be-done your product solves. Pull the actual queries from your sales team's first-call notes and your existing GSC data — don't invent them from scratch.
Where does the data live after I run the script?
For the 30-minute prototype, a spreadsheet is fine — Google Sheets or Excel, one row per query-engine pair, columns for mention rate / citation count / sentiment. Productionizing means moving to a data warehouse (BigQuery, Snowflake, Postgres) and a dashboard (Looker, Metabase, or a custom view). The prototype answers 'is this worth automating', and the answer is usually yes within the first week.
Related reading
LLM Visibility Tools: 12 Tested for AI Search
We tested 12 LLM visibility tracking tools on real brand-monitoring workflows across ChatGPT, Perplexity, Gemini, and Google AI Overview. What works, what doesn't.
How to monitor ChatGPT mentions of your brand
Learn proven methods to monitor when ChatGPT mentions your brand, track competitor activity, and improve your AI search presence.
Share of Voice in the AI Era
Google rankings don't matter if ChatGPT doesn't mention you. Learn how to measure and optimize your Share of Model (SoM) in the age of AI search.