Protocol Reference

How the Bersyn protocol measures and verifies your product's representation across AI surfaces.

GEO — Generative Engine Observation

A GEO scan queries four AI providers — ChatGPT, Claude, Perplexity, and Gemini — with the strategic questions your buyers are actually asking. Each response is analyzed for whether your product is mentioned, how it's positioned, and whether the narrative is accurate.

Scans run in two layers: CCI scans target your core buying conversations, while CSI scans cover the broader category landscape. A weekly automated scan runs every Monday at 06:00 UTC for all active projects.

GEO observes. It does not manipulate.

CCI — Core Control Index

CCI measures your mention rate in the buying conversations that matter most — the queries where someone is actively evaluating solutions in your category.

Score: (queries_mentioned / total_queries) × 100

Thresholds

Strong (≥ 80%): AI systems consistently recommend you in core buying queries.
Contested (≥ 50%): You appear but compete with alternatives for recommendation.
Weak (≥ 20%): AI mentions you occasionally but doesn't recommend you.
Invisible (< 20%): AI systems don't associate you with these conversations.
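In code, the score formula and its threshold bands can be sketched like this (function names are illustrative, not part of the protocol):

```python
def cci_score(queries_mentioned: int, total_queries: int) -> float:
    """CCI = (queries mentioned / total queries) x 100."""
    if total_queries == 0:
        return 0.0
    return queries_mentioned / total_queries * 100


def cci_band(score: float) -> str:
    """Map a CCI score to its threshold band, checked top-down."""
    if score >= 80:
        return "Strong"
    if score >= 50:
        return "Contested"
    if score >= 20:
        return "Weak"
    return "Invisible"
```

For example, being mentioned in 13 of 20 core buying queries yields a score of 65.0, which falls in the Contested band.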

CSI — Category Surface Index

CSI measures how broadly your product appears across the wider category — not just your core queries, but the full landscape of conversations where someone might discover you. CSI scans run monthly (first Monday of each month).

Thresholds

Expanding (≥ 40%): Surface presence growing across diverse category conversations.
Forming (≥ 20%): Present in core conversations, beginning to appear in adjacent topics.
Thin (≥ 10%): You appear in a few category conversations, only when directly relevant to your niche.
Invisible (< 10%): AI systems do not associate your product with this category.
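The CSI band mapping follows the same top-down pattern as CCI; a minimal sketch (the function name is illustrative):

```python
def csi_band(score: float) -> str:
    """Map a CSI score (0-100) to its threshold band, checked top-down."""
    if score >= 40:
        return "Expanding"
    if score >= 20:
        return "Forming"
    if score >= 10:
        return "Thin"
    return "Invisible"
```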

Identity Fidelity

Identity Fidelity measures how accurately AI systems describe your product when they do mention it. It evaluates four dimensions:

Identity Match

Does the AI correctly identify what your product is and does?

Primary Recommendation

Is your product recommended as a top choice (not just mentioned)?

Differentiator Coverage

Does the AI highlight what makes you different from alternatives?

Boundary Coverage

Does the AI correctly describe what your product is NOT for?

Each dimension is scored 0–3 (None, Partial, Good, Excellent). The combined score gives you a picture of narrative control — not just whether you're mentioned, but whether you're described correctly.
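A sketch of the combined score, assuming the four dimension scores are simply summed into a 0–12 total — the text above does not specify the combination rule, and the function and parameter names are illustrative:

```python
FIDELITY_LEVELS = {0: "None", 1: "Partial", 2: "Good", 3: "Excellent"}


def fidelity_score(identity_match: int, primary_recommendation: int,
                   differentiator_coverage: int, boundary_coverage: int) -> int:
    """Combine the four 0-3 dimension scores into a 0-12 total (assumed sum)."""
    dims = (identity_match, primary_recommendation,
            differentiator_coverage, boundary_coverage)
    if any(d not in FIDELITY_LEVELS for d in dims):
        raise ValueError("each dimension must be scored 0-3")
    return sum(dims)
```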

PIL — Product Identity Layer

The PIL (also called ICID — Interoperable Canonical Identity Document) is a structured document that defines how your product should be described by AI systems. It includes:

Product identity: Name, tagline, category, target personas, and what you're not for.
Capabilities: What your product does, with specific claims, evidence, and confidence levels.
Differentiators: What makes you different from alternatives, backed by evidence.
Use cases: Concrete persona + problem + outcome scenarios.
Positioning: Alternatives and displacement points (specific advantages over each competitor).
Constraints: Boundary conditions defining what your product explicitly should not be used for.

Each item has a stable ID, aliases, evidence (public URLs and internal references), and provenance tracking. You can generate a PIL automatically from your GitHub repository, then refine it manually.
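As a rough illustration, a single capability entry might look like the following. All field names and values here are hypothetical, not the official PIL schema:

```python
# Hypothetical shape of one PIL capability entry: a stable ID, aliases,
# a claim with evidence and a confidence level, and provenance tracking.
capability = {
    "id": "cap-realtime-sync",               # stable ID
    "name": "Real-time sync",
    "aliases": ["live sync", "instant sync"],
    "claims": [
        {
            "text": "Syncs changes across clients in under a second",
            "confidence": "high",
            "evidence": [
                {"type": "public_url", "ref": "https://example.com/docs/sync"},
                {"type": "internal", "ref": "benchmarks/sync-latency"},
            ],
        }
    ],
    "provenance": {"source": "github_generation", "refined": "manual"},
}
```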

Attestation

Attestation locks a specific version of your PIL as the canonical truth about your product. When you attest a PIL version:

1. A SHA-256 hash is computed over the entire document.
2. All future drift analysis is scored against this attested identity.
3. A public canonical URL is generated: /p/your-product/pil.json

Only one version can be attested at a time. Creating a new attestation supersedes the previous one.
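The hashing step can be sketched as follows. The canonical serialization used here (sorted keys, compact separators) is an assumption; the protocol only states that the hash covers the entire document:

```python
import hashlib
import json


def attest(pil_document: dict) -> str:
    """Compute a SHA-256 hash over a canonical JSON serialization of the PIL.

    Sorting keys makes the hash independent of dict insertion order, so the
    same document always produces the same digest.
    """
    canonical = json.dumps(pil_document, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```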

Reinforcement Phases

As you publish reinforcement artifacts and strengthen your AI positioning, your product moves through phases that describe the strength of your presence in AI conversations:

Emerging (0–3)

Your product is rarely mentioned in relevant conversations. Focus on foundational reinforcement artifacts.

Forming (3–5)

AI systems are beginning to recognize your product. Mentions are inconsistent. Double down on capability-specific artifacts.

Contested (5–7)

You appear regularly but compete with alternatives for the top recommendation. Focus on differentiation and displacement.

Dominant (7+)

AI systems consistently recommend you as the top choice. Maintain through ongoing identity reinforcement.
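Assuming the phase score is a single number and each range is half-open at its upper bound (the published ranges overlap at their endpoints), a classifier might look like:

```python
def reinforcement_phase(score: float) -> str:
    """Map a phase score to its named phase, checked top-down.

    Boundary handling is an assumption: a score of exactly 3, 5, or 7
    is placed in the higher phase.
    """
    if score >= 7:
        return "Dominant"
    if score >= 5:
        return "Contested"
    if score >= 3:
        return "Forming"
    return "Emerging"
```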

Drift Detection

After attesting your PIL, Bersyn continuously monitors how AI systems describe your product and flags identity drift — discrepancies between your attested identity and what AI providers actually say.

Drift findings are categorized by severity (critical, warning, info) and type (e.g., missing capability, incorrect positioning, boundary violation). Each finding links back to the specific PIL item that was misrepresented.
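A hypothetical shape for a single drift finding, with the severity and type fields described above and a link back to the PIL item (field names are illustrative, not the official schema):

```python
from dataclasses import dataclass

SEVERITIES = ("critical", "warning", "info")


@dataclass
class DriftFinding:
    """One discrepancy between the attested PIL and an AI provider's answer."""
    severity: str       # "critical", "warning", or "info"
    finding_type: str   # e.g. "missing_capability", "incorrect_positioning"
    pil_item_id: str    # links back to the misrepresented PIL item
    observed_text: str  # what the AI provider actually said

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")
```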