When we built the AI Clarity Index, the first thing we did was run it on ourselves.
It seemed like the right call. If we're going to tell brands their AI visibility score matters, we should know ours. And if the score came back embarrassing, that would be useful information too — both for us and for anyone thinking about what a realistic baseline looks like for a new brand.
Our score: 47 out of 100.
Here's what that means, where we fell short, and what we're doing to improve it.
The overall picture
A 47 is a below-average score by ACI standards. It's not catastrophic — we're being cited in some contexts, and AI models do recognize us as a product in the AI visibility space. But there are clear gaps, and for a brand whose entire value proposition is AI search visibility, a 47 is a useful dose of humility.
The score breakdown:
- ECS (Entity Citation Score): 52 — We're getting cited, but inconsistently. Ask ChatGPT about "AI search visibility tools" and we sometimes appear, sometimes don't. Ask Perplexity the same question and we rarely show up. Citation frequency is uneven across platforms.
- SAS (Semantic Authority Score): 41 — One of our weakest metrics. AI models don't yet strongly associate ACI with the AI search visibility category. We're recognized as a brand, but not as a category authority. This takes time and content to build.
- NCS (Narrative Consistency Score): 58 — Moderate. The way AI platforms describe us is roughly consistent, which is partly because we're new and there isn't much conflicting information out there yet. This will need active management as more content gets written about us.
- CMV (Content & Metadata Visibility): 61 — Our strongest score. The site is well-structured, schema markup is clean, and the content is crawlable. This reflects the Eleventy static site architecture — lean HTML with no WordPress overhead.
- CCI (Competitive Citation Index): 38 — We're in an emerging category, which means competitors are still being defined. But on queries where alternatives are discussed, we're underrepresented. Not surprising for a new brand, but something to track.
- DRI (Digital Reference Index): 44 — We have thin third-party coverage. A few mentions in AI-adjacent newsletters, no press coverage yet, limited directory listings. This is the most straightforward metric to improve and one of the highest-leverage places to start.
What we're doing about it
We're not going to pretend a 47 is fine. Here are the three things we're prioritizing to move the score.
1. Building the DRI through deliberate distribution
The Digital Reference Index measures the quality of third-party references — press, directories, authoritative external mentions. For a new brand, this is the most actionable lever because it doesn't require waiting for organic growth.
Our plan: submit to every relevant AI, marketing, and SaaS directory. Pitch a handful of AI-focused newsletters for coverage. Get a few guest bylines in publications that cover AI search. Each of these creates a new external reference point that AI models can draw from.
2. Publishing content that builds topical authority
The Semantic Authority Score improves when AI models see consistent, substantive content from your brand on a specific topic. Our blog is the primary vehicle for this.
We're committing to two posts per week on AI visibility, GEO, and brand strategy in AI search. Not thin content — substantive posts with specific claims, data, and frameworks. The kind of content that gets cited.
This post is part of that effort. Specific numbers attached to real entities (our own score) are exactly the kind of content AI models reference.
3. Seeding the right third-party narratives
Narrative consistency doesn't just happen — it requires that the content other people write about you use consistent language. We're being intentional about the phrases we want AI models to associate with ACI: "Search Console for AI," "six-metric index," "AI visibility scoring."
The more those specific phrases appear in external content — reviews, mentions, coverage — the more consistently AI models will reproduce them.
Why we're publishing this
A few reasons.
First, transparency is part of how we build credibility. If we only published success stories, that would be a red flag. Real measurement produces real numbers, and ours happens to be 47.
Second, this is exactly the kind of post that performs well in AI citations. Specific, numeric, attributed to a named entity. When someone asks an AI model about ACI six months from now, this post is part of what it draws from.
Third, it's useful for any brand thinking about their own baseline. A 47 is not a failure — it's a starting point. Most brands we've audited score between 35 and 65. The value isn't in the absolute number; it's in knowing which metrics to move and in what order.
We'll publish an update when we rescore in 90 days.
Want to know your score? Request a free ACI audit and we'll deliver your full six-metric report within 48 hours.