
Share of Model

Share of Model measures your brand's presence in AI-generated responses as a percentage of total category mentions. It is the ultimate outcome metric for semantic intelligence.

Published January 28, 2026


Direct Answer: Share of Model (SOM) measures your brand's percentage of mentions in AI-generated responses for category queries, the definitive outcome metric for AI visibility.

Overview

Context: This section explains Share of Model and its role as the ultimate measure of semantic intelligence effectiveness.

What It Is

Share of Model quantifies your brand's presence in AI-generated responses as a percentage of total brand mentions for category-relevant queries. If AI systems mention your brand in 15 of 100 category queries, your SOM is 15%. The metric spans all major AI platforms: ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews.

Why It Matters

SOM is the outcome metric that all other semantic metrics ultimately serve. Semantic density, contextual coherence, and retrieval confidence are inputs; Share of Model is the output. With 58.5% of searches ending without a click, AI-generated answers increasingly replace traditional search results. Brands invisible to AI systems lose access to a growing share of audience attention.

How It Relates to DecodeIQ

DecodeIQ optimizes the inputs that determine SOM. The platform measures semantic density (4-6% target), contextual coherence (80+ target), and retrieval confidence (60+ target), each feeding the citation probability that produces Share of Model. Improving DecodeScore (65+ publish threshold, 75+ recommended) directly increases SOM potential.

Key Differentiation

SOM differs from traditional Share of Voice (media impressions) by measuring the channel gaining relevance rather than channels losing it. Organizations tracking only Share of Voice optimize for where audiences were; those tracking Share of Model optimize for where audiences are going.


How Share of Model Works

Context: This section covers the mechanics of Share of Model calculation and the systems that determine it.

Share of Model emerges from the interaction between your content and AI retrieval systems. When a user queries an AI platform with a category-relevant question, the system performs retrieval-augmented generation (RAG): searching indexed content, ranking by relevance, and synthesizing a response that may cite or mention brands.

The Retrieval Pipeline: AI systems follow a three-stage process: (1) Query embedding, converting the user question into a vector representation, (2) Similarity search, finding content vectors closest to the query, (3) Response generation, synthesizing an answer that may include citations. Your content's position in this pipeline determines whether you appear in the response.
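The similarity-search stage of the pipeline can be sketched in a few lines. This is a toy illustration, not any platform's actual retrieval code: the `embed` function is a stand-in for a real embedding model (here just counts over a small fixed vocabulary), and the documents and URLs are invented.

```python
import math

# Toy vocabulary standing in for a learned embedding space.
VOCAB = ["project", "management", "software", "teams", "cake", "recipe", "baking", "best"]

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model: count vocabulary words."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.hypot(*a) * math.hypot(*b)
    return dot / norm if norm else 0.0

# Hypothetical indexed documents (URLs are invented).
docs = {
    "asana.com/guide": "project management software for teams",
    "example.com/recipes": "chocolate cake baking recipe",
}

query_vec = embed("best project management software")  # stage 1: query embedding
ranked = sorted(docs, key=lambda d: cosine(query_vec, embed(docs[d])), reverse=True)  # stage 2
print(ranked[0])  # → asana.com/guide (the top document feeds stage 3, response generation)
```

Real systems use dense vectors with thousands of dimensions and approximate nearest-neighbor search, but the ranking principle is the same: content whose vector sits closest to the query vector wins the citation slot.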

Calculation Method: SOM = (Queries where your brand appears / Total category queries sampled) x 100. A sample of 100 queries about "project management software" might mention Asana 23 times, Monday.com 19 times, Notion 15 times, and your brand 8 times. Your SOM is 8%. The denominator includes all queries, not just those mentioning any brand; some AI responses mention no brands at all.
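The formula translates directly into code. A minimal sketch, with a hypothetical five-query sample (brand names are illustrative):

```python
def share_of_model(responses: list[set[str]], brand: str) -> float:
    """SOM = (queries where the brand appears / total queries sampled) * 100.

    `responses` holds one set of mentioned brands per sampled query;
    an empty set means the AI response named no brands at all.
    """
    if not responses:
        return 0.0
    hits = sum(1 for mentioned in responses if brand in mentioned)
    return 100.0 * hits / len(responses)

# Hypothetical five-query sample for one category
sample = [
    {"Asana", "Notion"},
    {"Monday.com"},
    set(),                       # response mentioned no brands at all
    {"Asana", "YourBrand"},
    {"YourBrand"},
]
print(share_of_model(sample, "YourBrand"))  # → 40.0
```

Note that the empty set still counts in the denominator, matching the rule that brand-free responses are part of the sample.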

Platform Variation: Each AI system produces different SOM results because each relies on different training data cutoffs, retrieval sources, citation policies, and ranking algorithms. Perplexity emphasizes recent web content and cites sources explicitly. ChatGPT balances training knowledge with real-time retrieval. Claude weights authoritative sources with distinct preferences. Google AI Overviews integrate search ranking signals. Comprehensive SOM measurement requires sampling across all major platforms.

Query Segmentation: SOM varies by query type. Informational queries ("what is project management") may produce different results than commercial queries ("best project management software for startups") or comparison queries ("Asana vs Monday.com"). Segment your SOM measurement by query intent to identify where you perform strongest and where gaps exist.
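Segmented measurement is the same calculation grouped by intent. A sketch with an invented query log (intents and brands are illustrative):

```python
from collections import defaultdict

# Hypothetical log: (query intent, brands mentioned in the AI response)
results = [
    ("informational", {"Asana"}),
    ("informational", set()),
    ("commercial", {"YourBrand", "Monday.com"}),
    ("commercial", {"Asana"}),
    ("comparison", {"Asana", "Monday.com"}),
    ("comparison", {"YourBrand", "Asana"}),
]

def som_by_intent(results, brand):
    """Per-intent SOM: mentions of `brand` over all queries in that intent."""
    totals, hits = defaultdict(int), defaultdict(int)
    for intent, mentioned in results:
        totals[intent] += 1
        hits[intent] += brand in mentioned
    return {intent: 100.0 * hits[intent] / totals[intent] for intent in totals}

print(som_by_intent(results, "YourBrand"))
# → {'informational': 0.0, 'commercial': 50.0, 'comparison': 50.0}
```

A split like this one would point to an informational-content gap even though commercial visibility is competitive.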


Share of Model vs Share of Voice

Context: This section clarifies the distinction between legacy visibility metrics and AI-era metrics.

Share of Voice (SoV) measures brand presence across traditional media channels: advertising impressions, PR mentions, social media reach, and search engine rankings. For decades, SoV served as the primary competitive visibility benchmark. Organizations with higher SoV commanded greater audience attention.

The Shift: AI-mediated discovery changes this equation. When users receive answers directly from AI systems, they bypass the channels SoV measures. A user asking ChatGPT "what CRM should I use" never sees your Google ad, never visits your ranking page, never encounters your PR coverage. They receive an AI-generated answer that may or may not mention your brand.

Metric | Share of Voice | Share of Model
Measures | Media impressions, ad reach, PR mentions | AI citations, recommendations, mentions
Channels | Search ads, organic rankings, social, PR | ChatGPT, Claude, Perplexity, Gemini, AI Overviews
User Behavior | Click through to your content | Receive answer without clicking
Trend | Declining relevance (58.5% zero-click) | Increasing relevance
Optimization | Bid strategy, SEO, media relations | Semantic structure, entity authority
Time Horizon | Immediate (ad spend = impressions) | Delayed (content changes take weeks)

The Zero-Click Reality: SparkToro's 2024 analysis found 58.5% of Google searches end without a click to any website. Users find answers in featured snippets, knowledge panels, and AI Overviews. This percentage grows annually. Organizations measuring only SoV track channels that deliver diminishing returns.

Complementary, Not Replacement: SoV remains relevant for brand awareness, consideration, and channels AI does not mediate. But SOM tracks the fastest-growing discovery channel. Organizations need both metrics, with increasing weight on SOM as AI mediation expands.


Measuring and Improving Share of Model

Context: This section covers practical measurement methodology and improvement strategies.

Manual Measurement Protocol: (1) Define 50-100 category-relevant queries spanning informational, commercial, and comparison intents. (2) Query each major AI platform (ChatGPT, Claude, Perplexity, Gemini, Google AI Overviews). (3) Record whether your brand is mentioned, cited, or recommended in each response. (4) Calculate SOM per platform and overall. (5) Repeat weekly to establish trends.
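The protocol can be scripted as a simple audit loop. Everything here is a placeholder: `ask` would be wired to each platform's API or replaced by manual copy-paste, and the brand, queries, and file name are illustrative.

```python
import csv
import datetime

PLATFORMS = ["ChatGPT", "Claude", "Perplexity", "Gemini", "Google AI Overviews"]
QUERIES = [
    "what is project management",        # informational
    "best project management software",  # commercial
]
BRAND = "YourBrand"

def ask(platform: str, query: str) -> str:
    """Placeholder: return the platform's response text for the query."""
    return ""

def run_audit(path: str = "som_audit.csv") -> None:
    """Append one row per (platform, query) with a brand-mention flag."""
    today = datetime.date.today().isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for platform in PLATFORMS:
            for query in QUERIES:
                mentioned = BRAND.lower() in ask(platform, query).lower()
                writer.writerow([today, platform, query, mentioned])

# run_audit()  # one weekly run appends len(PLATFORMS) * len(QUERIES) rows
```

Repeating the run weekly accumulates the date-stamped rows that step 5's trend analysis needs.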

SOM Benchmark Bands:

  • Below 5%: AI invisible, content fails retrieval thresholds
  • 5-15%: Marginal presence, occasionally cited for specific queries
  • 15-30%: Competitive visibility, regular citations but not dominant
  • 30-50%: Strong positioning, frequent citations across query types
  • Above 50%: Category leadership, default mention for category queries
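The bands above are easy to apply programmatically once SOM is measured. One detail is an assumption: the list leaves boundary values (exactly 5%, 15%, 30%, 50%) ambiguous, and this sketch places them in the higher band.

```python
def som_band(som: float) -> str:
    """Classify a SOM percentage into the benchmark bands.

    Boundary values are assigned to the higher band (an assumption;
    the published bands do not specify boundary handling).
    """
    if som < 5:
        return "AI invisible"
    if som < 15:
        return "Marginal presence"
    if som < 30:
        return "Competitive visibility"
    if som < 50:
        return "Strong positioning"
    return "Category leadership"

print(som_band(8))   # → Marginal presence
print(som_band(34))  # → Strong positioning
```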

The Causal Chain: You cannot optimize SOM directly. SOM is an outcome, not an input. The improvement path flows through DecodeIQ's measurable metrics:

  1. Semantic Density (4-6%) → Increases entity richness for retrieval matching
  2. Contextual Coherence (80+) → Improves content quality signals
  3. Retrieval Confidence (60+) → Predicts citation likelihood
  4. DecodeScore (75+) → Composite readiness indicator
  5. AI Citation → Content appears in responses
  6. Share of Model → Cumulative citation share

Improvement Timeline: Content optimizations require 4-6 weeks for re-indexing. Initial citation changes appear at 30-60 days. Significant SOM shifts (5-10 percentage points) require 3-6 months. The compounding effect accelerates results: improved content earns more citations, increasing training data exposure for future model versions, creating a virtuous cycle.

Practical Example: A B2B software company measured 7% SOM for "marketing automation" queries. Audit revealed: semantic density 2.8% (below 4% threshold), contextual coherence 62 (below 80 target), retrieval confidence 38 (below 60 target). After 12 weeks of content restructuring targeting DecodeScore 75+, metrics improved to: semantic density 5.1%, contextual coherence 84, retrieval confidence 67. At week 16, SOM reached 18%. The 11-point SOM gain traced directly to input metric improvements.


Version History

  • v1.0 (2026-01-28): Initial publication. Core concept definition, SOM vs SoV comparison, measurement methodology, improvement framework. 7 FAQs, 5 related concepts, 6 external sources. Validated against DecodeIQ semantic intelligence framework.

Frequently Asked Questions

What is the difference between Share of Model and Share of Voice?

Share of Model (SOM) measures your brand's percentage of mentions in AI-generated responses for category-relevant queries. Share of Voice measures media impressions, ad reach, and PR mentions across traditional channels. The distinction matters because 58.5% of Google searches now end without a click; users increasingly bypass traditional media channels and receive answers directly from AI systems. SOM tracks the channel gaining attention; SoV tracks channels losing it. A brand with high Share of Voice but low Share of Model has visibility where audiences used to be, not where they are going.



Founding Technical Architect, DecodeIQ

M.Sc. (2004), 20+ years semantic systems architecture

Jack Metalle is the Founding Technical Architect of DecodeIQ, a semantic intelligence platform that helps organizations structure knowledge for AI-mediated discovery. His 2004 M.Sc. thesis predicted the shift from keyword-based to semantic retrieval systems.

