AI Visibility Audit — Titusville, FL — April 22–23, 2026

The Translation Gap

What AI gets wrong about your practice. A structured, multi-engine audit of three therapy clinics in Titusville, FL — run across ChatGPT, Perplexity, Claude, Copilot, and Gemini on the same day, against the same query.

70 AI outputs evaluated
5 engines tested
66% invisible rate
0% Copilot citation rate
By Miriam Kraay · Published Apr 23, 2026 · Read the full article on Substack
Key Findings — 70 AI Outputs × 5 Engines
66%
Invisible Rate
Two-thirds of all AI outputs returned no result at all for Twisted Minds.
5
Engines Audited
ChatGPT, Perplexity, Claude, Copilot, Gemini — each with different visibility logic.
0%
Copilot Citation Rate
No clinic in this test was visible on Microsoft Copilot.
79%
Perplexity Peak
Miracle City's Perplexity citation rate — the highest single-engine score in the test.
01 — The Translation Gap

Even When AI Cites You Correctly, It Describes You Wrong

Absolute Victory Counseling — the clinic winning the directory game — specializes in PTSD from domestic violence and Post Abortion Syndrome (PAS). Sylvia Dorsey's Psychology Today profile says so in plain English.

What She Actually Does

"I specialize in Post-Traumatic Stress Disorder (PTSD) as a result of domestic violence and Post Abortion Syndrome (PAS) from having an abortion or being a part of the abortion process in some way."

Source: Psychology Today profile (verbatim)
What AI Says About Her

"Focuses on trauma, anxiety, and depression."

Source: Google AI Overview, April 22, 2026

That description is not wrong. It is also not right. A woman looking for a therapist who understands post-abortion grief will not find Sylvia Dorsey in that sentence.

"The expertise is real. The translation is failing."

— Miriam Kraay, The Translation Gap
[Diagram: specific clinical expertise — Post Abortion Syndrome, PTSD from domestic violence, teen trauma EMDR — passes through an AI summarization funnel and emerges as the generic phrase "trauma, anxiety, and depression."]

The AI summarization funnel: fifteen years of clinical specificity compressed into three generic words. The machine isn't making a judgment about quality — it's making a judgment about legibility.

01. A beautiful website does not guarantee AI visibility.

Having no website does not prevent it. Your visibility depends on where your expertise lives and whether the specific engines your families use can read that location.

02. Being cited is not the same as being described accurately.

The clinic winning this test is still being flattened into "trauma, anxiety, and depression" when her work is specific to two narrow presentations.

03. You cannot audit this yourself by asking AI.

The tool lies, then corrects itself in the opposite direction, and sounds confident both times. You need raw outputs from multiple engines, captured on the same day.

02 — Engine Performance Data

Five Engines, Three Clinics, Wildly Different Results

The idea that "AI visibility" is one score is wrong. It is at least five scores, and your expertise can be invisible on half of them while being perfectly findable on the other half. The data below breaks down Twisted Minds' engine-by-engine performance.

Citation Rate by Engine (%) — 14 prompts × 5 engines = 70 outputs
13% overall citation · 21% displaced · 0% mention rate · 66% invisible rate
Engine       Verdict  Cited  Displaced  Invisible  Citation Rate
ChatGPT      Absent     0       10          4           0%
Perplexity   Strong     9        5          0          64%
Claude       Absent     0        0         14           0%
Copilot      Absent     0        0         14           0%
Gemini       Absent     0        0         14           0%
Key Finding

Twisted Minds was cited 9 of 70 times (13%), but only by Perplexity; it never appeared on ChatGPT, Claude, Copilot, or Gemini.

Citation Rate by Query Cluster — Twisted Minds
Cluster               Cited  Mentioned  Displaced  Invisible  Citation Rate
Branded                 3        0          0         12          20%
Category                2        0          7         16           8%
Differentiator          2        0          4          9          13%
High-Stakes Decision    2        0          4          9          13%
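The percentages above are straightforward tallies over the 70 captured outputs. A minimal sketch of that arithmetic, assuming each output is recorded as an (engine, status) pair using the audit's four buckets; the counts below mirror the Twisted Minds engine table and are illustrative, not the raw transcripts:

```python
from collections import Counter

# One record per prompt-output: (engine, status).
# Statuses: cited, mentioned, displaced, invisible.
outputs = (
    [("ChatGPT", "displaced")] * 10 + [("ChatGPT", "invisible")] * 4
    + [("Perplexity", "cited")] * 9 + [("Perplexity", "displaced")] * 5
    + [("Claude", "invisible")] * 14
    + [("Copilot", "invisible")] * 14
    + [("Gemini", "invisible")] * 14
)

def rates(records):
    """Percentage of each status across all outputs, rounded."""
    tally = Counter(status for _, status in records)
    total = len(records)
    return {status: round(100 * n / total) for status, n in tally.items()}

def citation_rate(records, engine):
    """Citation rate for a single engine's outputs."""
    statuses = [s for e, s in records if e == engine]
    return round(100 * statuses.count("cited") / len(statuses))

print(rates(outputs))                        # invisible 66, displaced 21, cited 13
print(citation_rate(outputs, "Perplexity"))  # 64
```

The same tally, grouped by query cluster instead of engine, reproduces the cluster table.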
04 — The Hallucination Loop

The Tool That Summarizes the Problem Is Part of the Problem

I asked Gemini to help me analyze the comparison — to tell me which therapists appeared on both the Google results and the ChatGPT results. Gemini gave me a confident answer:

"Wisdom Within Counseling (Katie Ziskind) appeared in both for its holistic approach."

— Gemini, first response. This was false.

It hadn't. Wisdom Within was in the Google results. It was not in the ChatGPT list. When I pushed back, Gemini reversed course:

"Common: None. No single provider appeared on both lists in your prompt."

— Gemini, second response. Also false. Jolie Cogan appeared on both.
The Implication for Your Practice

If you ask ChatGPT or Gemini "How do you describe my practice?" the answer will sound authoritative. It may be accurate. It may be a hallucination. The only way to see what is actually being said about you is to run the same query across multiple engines, capture the raw outputs, and compare them side by side.
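One way to make that comparison reliable is to script it: send the identical prompt to every engine on the same day, store the raw text with a timestamp, and diff the snapshots later. A minimal sketch, where the `query_*` functions are hypothetical stand-ins for each engine's real API or interface:

```python
import json
from datetime import date

# Hypothetical stubs standing in for real engine clients.
# Each takes a prompt and returns raw response text; swap in actual API calls.
def query_chatgpt(prompt):    return "stub ChatGPT answer"
def query_perplexity(prompt): return "stub Perplexity answer"

ENGINES = {"ChatGPT": query_chatgpt, "Perplexity": query_perplexity}

def capture(prompt):
    """Run one prompt across every engine and keep the raw outputs together."""
    return {
        "prompt": prompt,
        "date": date.today().isoformat(),  # same-day capture matters
        "outputs": {name: fn(prompt) for name, fn in ENGINES.items()},
    }

snapshot = capture("Best trauma therapist in Titusville, FL")
print(json.dumps(snapshot, indent=2))
```

Saving each snapshot as JSON gives you a dated, side-by-side record to audit, instead of asking one engine to summarize the others from memory.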

[Illustration: an AI hallucination — a glowing speech bubble dissolving into static, representing confident but wrong output]
The Hallucination Loop

1. Gemini shows an overlap that didn't exist.
2. You push back with evidence.
3. Gemini denies any overlap exists.
4. Both responses were wrong: Jolie Cogan appeared on both lists.