{"id":7110,"date":"2026-03-24T00:41:18","date_gmt":"2026-03-24T00:41:18","guid":{"rendered":"https:\/\/www.wizbrand.com\/tutorials\/sample-ratio-mismatch\/"},"modified":"2026-03-24T00:41:18","modified_gmt":"2026-03-24T00:41:18","slug":"sample-ratio-mismatch","status":"publish","type":"post","link":"https:\/\/www.wizbrand.com\/tutorials\/sample-ratio-mismatch\/","title":{"rendered":"Sample Ratio Mismatch: What It Is, Key Features, Benefits, Use Cases, and How It Fits in CRO"},"content":{"rendered":"\n<p>Sample Ratio Mismatch (SRM) is one of the most important \u201csanity checks\u201d in experimentation, yet it\u2019s frequently misunderstood. In <strong>Conversion &amp; Measurement<\/strong>, SRM is the signal that your experiment\u2019s observed traffic split doesn\u2019t match the split you intended\u2014often enough that random chance is an unlikely explanation. In <strong>CRO<\/strong>, that matters because you can\u2019t trust uplift, winners, or learnings if the people who saw each variant weren\u2019t assigned fairly.<\/p>\n\n\n\n<p>Modern marketing stacks make this harder, not easier. Multiple devices, consent choices, server-side rendering, edge redirects, personalization rules, ad blockers, and caching can all interfere with how users are bucketed into variants. A strong <strong>Conversion &amp; Measurement<\/strong> strategy treats Sample Ratio Mismatch as a first-class monitoring requirement, not an afterthought when results look \u201cweird.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What Is Sample Ratio Mismatch?<\/h2>\n\n\n\n<p><strong>Sample Ratio Mismatch (SRM)<\/strong> happens when an A\/B test (or any controlled experiment) receives a materially different distribution of users across variants than the planned allocation. 
If you set a 50\/50 split but observe 57\/43, that may be normal noise at small sample sizes\u2014but at scale, it can be statistically implausible, which is exactly what SRM detection is designed to flag.<\/p>\n\n\n\n<p>The core concept is simple: <strong>experiments require random assignment<\/strong>. When that assignment is compromised, the groups may differ in ways unrelated to the change you\u2019re testing (device mix, geography, logged-in status, traffic source, etc.). Business-wise, Sample Ratio Mismatch is less about \u201cmath purity\u201d and more about risk control: it\u2019s a warning that decisions based on the test could be wrong.<\/p>\n\n\n\n<p>Within <strong>Conversion &amp; Measurement<\/strong>, SRM sits at the intersection of data quality, experiment delivery, and analytics integrity. Within <strong>CRO<\/strong>, it\u2019s a gatekeeper\u2014if SRM is present, you typically pause interpretation until you understand and fix the cause.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Why Sample Ratio Mismatch Matters in Conversion &amp; Measurement<\/h2>\n\n\n\n<p>Sample Ratio Mismatch matters because it\u2019s often a symptom of deeper problems that can distort multiple metrics\u2014not just conversion rate. In <strong>Conversion &amp; Measurement<\/strong>, a trustworthy experiment depends on consistent exposure tracking and unbiased assignment. SRM tells you that one of those may have broken.<\/p>\n\n\n\n<p>Strategically, catching SRM early protects you from shipping the wrong experience. A \u201cwinning\u201d variant might look better because it received a different mix of users, not because it performed better. 
For <strong>CRO<\/strong> programs, this can lead to repeated false positives, wasted engineering cycles, and stakeholder skepticism about experimentation.<\/p>\n\n\n\n<p>From a marketing outcomes perspective, SRM can affect:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Budget decisions tied to landing page performance<\/li>\n<li>Funnel optimizations based on misleading step-level conversion<\/li>\n<li>Personalization rules trained on biased exposure data<\/li>\n<li>Brand and customer experience if users see inconsistent variants<\/li>\n<\/ul>\n\n\n\n<p>Teams that routinely monitor Sample Ratio Mismatch gain competitive advantage by making faster, safer decisions with fewer reversals and less rework\u2014exactly what strong <strong>Conversion &amp; Measurement<\/strong> aims to enable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Sample Ratio Mismatch Works<\/h2>\n\n\n\n<p>In practice, Sample Ratio Mismatch is detected by comparing <strong>expected allocation<\/strong> to <strong>observed counts<\/strong>, then checking whether the difference is plausibly due to chance.<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>\n<p><strong>Input \/ trigger: define the expected split<\/strong><br\/>\n   You launch an experiment with an intended allocation (for example, 50\/50; 90\/10; or 33\/33\/33). You also define <em>what unit is assigned<\/em>: user, session, device, or account.<\/p>\n<\/li>\n<li>\n<p><strong>Analysis \/ processing: measure observed exposure and run an SRM test<\/strong><br\/>\n   As traffic accumulates, you count how many units were exposed to each variant. 
Then you apply a statistical check (commonly a chi-square goodness-of-fit test or an equivalent proportion test) to evaluate whether the observed distribution deviates \u201ctoo much\u201d from expectation.<\/p>\n<\/li>\n<li>\n<p><strong>Execution \/ application: diagnose and isolate causes<\/strong><br\/>\n   If Sample Ratio Mismatch is detected, you investigate where the skew enters: bucketing logic, caching, redirects, bot filtering, consent gating, instrumentation differences, or audience targeting rules.<\/p>\n<\/li>\n<li>\n<p><strong>Output \/ outcome: decide whether to trust results<\/strong><br\/>\n   If SRM is real and unexplained, you typically treat the test as compromised. In <strong>CRO<\/strong>, the safest move is to pause, fix the delivery or measurement issue, and restart\u2014rather than \u201csalvage\u201d conclusions from biased assignment.<\/p>\n<\/li>\n<\/ol>\n\n\n\n<h2 class=\"wp-block-heading\">Key Components of Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Sample Ratio Mismatch isn\u2019t a standalone feature; it\u2019s an outcome created by systems and processes. 
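<\/p>\n\n\n\n<p>The detection step above can be sketched directly. A minimal two-variant check, assuming hypothetical counts and the commonly used (but not universal) 0.001 alert threshold:<\/p>

```python
import math

def srm_check(observed, expected_ratio, alpha=0.001):
    """Chi-square goodness-of-fit check for a two-variant split (df = 1)."""
    total = sum(observed)
    expected = [total * r for r in expected_ratio]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # With one degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(stat / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value, p_value < alpha

# Planned 50/50 split, observed 5,300 vs 4,700 over 10,000 exposures:
stat, p, flagged = srm_check([5300, 4700], [0.5, 0.5])
# stat == 36.0 and p is far below 0.001, so this split is flagged as SRM.
```

<p>SciPy users can get the same answer from <code>scipy.stats.chisquare<\/code>; the point is comparing observed exposure counts against the planned split, not the specific library.<\/p>\n\n\n\n<p>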
The most important components are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Experiment allocation plan<\/strong>: the intended ratio and the rationale (e.g., 50\/50 for speed, 90\/10 for risk mitigation).<\/li>\n<li><strong>Randomization and bucketing method<\/strong>: how users are assigned (client-side vs server-side, deterministic hashing vs random assignment, cookie vs user ID).<\/li>\n<li><strong>Exposure definition and logging<\/strong>: what counts as \u201cin the experiment\u201d (page view, render event, feature flag evaluation, or a confirmed impression).<\/li>\n<li><strong>Data pipeline integrity<\/strong>: consistent event collection, deduplication rules, and joins between exposure and conversion events in your <strong>Conversion &amp; Measurement<\/strong> stack.<\/li>\n<li><strong>Traffic quality controls<\/strong>: bot filtering, internal traffic exclusion, QA\/test user handling, and anomaly detection.<\/li>\n<li><strong>Governance and ownership<\/strong>: clear responsibility across product, engineering, analytics, and marketing for diagnosing SRM and enforcing experiment guardrails\u2014critical for scalable <strong>CRO<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Types of Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Sample Ratio Mismatch doesn\u2019t have universally standardized \u201ctypes,\u201d but in real <strong>Conversion &amp; Measurement<\/strong> work, SRM tends to appear in a few recurring contexts:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">1) Allocation SRM (true split deviation)<\/h3>\n\n\n\n<p>The platform is genuinely assigning too many users to one variant due to a bug, misconfigured traffic allocation, caching behavior, or inconsistent hashing inputs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) Tracking SRM (measurement-based deviation)<\/h3>\n\n\n\n<p>Assignment might be correct, but exposure tracking is missing or duplicated more in one variant than another\u2014often due to tag firing differences, 
blocked scripts, or conditional rendering.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) Eligibility SRM (who becomes eligible differs by variant)<\/h3>\n\n\n\n<p>Users only become \u201ccounted\u201d after an eligibility step (consent prompt, login, feature availability, page route). If eligibility is influenced by the variant experience, the measured sample can skew even when assignment was correct.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) Segment-limited SRM (skew within key dimensions)<\/h3>\n\n\n\n<p>The overall split might look fine, but SRM appears within segments (mobile vs desktop, specific geographies, paid vs organic). This is especially relevant in <strong>CRO<\/strong>, where segment performance often drives decisions.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Real-World Examples of Sample Ratio Mismatch<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Example 1: Landing page A\/B test with uneven traffic from redirects<\/h3>\n\n\n\n<p>A team runs a 50\/50 test on a paid landing page. After two days, they observe 62\/38 and an SRM alert triggers. Investigation reveals that one variant\u2019s URL path triggers an extra redirect for certain UTM combinations, and the experiment script loads after the redirect\u2014so some users never get counted as exposed. In <strong>Conversion &amp; Measurement<\/strong>, fixing the redirect and moving exposure logging earlier resolves the Sample Ratio Mismatch and prevents misleading conversion rate comparisons.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example 2: Consent banner impacts variant eligibility<\/h3>\n\n\n\n<p>An ecommerce site tests a new hero section. Variant B loads a heavier asset and delays the consent banner interaction on mobile. The experiment counts only users who accept analytics cookies (an eligibility rule). More users in Variant A accept before bouncing, so Variant A ends up with more measurable exposures, creating Sample Ratio Mismatch. 
For <strong>CRO<\/strong>, the key lesson is to define exposure and eligibility independently of the UX change\u2014or at least validate that measurement isn\u2019t variant-dependent.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Example 3: Feature flag experiment with logged-in vs logged-out hashing<\/h3>\n\n\n\n<p>A SaaS product runs a server-side experiment. Logged-in users are bucketed by user ID, while logged-out users are bucketed by a cookie that sometimes resets in certain browsers. Variant counts drift over time and SRM appears, concentrated in a few browsers. In <strong>Conversion &amp; Measurement<\/strong>, aligning identity rules (or treating logged-out traffic separately) reduces SRM and improves the validity of downstream activation metrics.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Benefits of Using Sample Ratio Mismatch<\/h2>\n\n\n\n<p>You don\u2019t \u201cuse\u201d Sample Ratio Mismatch as a tactic; you use <strong>SRM detection and response<\/strong> as a control system. 
Done well, it delivers clear benefits to <strong>Conversion &amp; Measurement<\/strong> and <strong>CRO<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Higher decision accuracy<\/strong>: you avoid calling false winners caused by biased assignment.<\/li>\n<li><strong>Lower experimentation waste<\/strong>: SRM alerts help you stop broken tests earlier, saving time and traffic.<\/li>\n<li><strong>Faster root-cause discovery<\/strong>: repeated SRM patterns often reveal systemic issues (redirect rules, inconsistent tagging, identity stitching problems).<\/li>\n<li><strong>Improved stakeholder trust<\/strong>: reliable guardrails strengthen confidence in the CRO program and reduce \u201cwe don\u2019t believe the tests\u201d pushback.<\/li>\n<li><strong>Better customer experience<\/strong>: diagnosing SRM often uncovers performance or routing issues affecting real users, not just analytics.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Challenges of Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Sample Ratio Mismatch is straightforward to define but can be difficult to debug. Common challenges include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Multiple potential causes<\/strong>: allocation bugs, tracking gaps, caching, bot traffic, consent logic, and segment targeting can all produce SRM-like symptoms.<\/li>\n<li><strong>False alarms at low sample sizes<\/strong>: early in a test, natural variance can resemble SRM. 
Good <strong>Conversion &amp; Measurement<\/strong> practice sets sensible thresholds and monitoring windows.<\/li>\n<li><strong>Identity complexity<\/strong>: cross-device behavior and mixed identifiers (cookie vs account) can skew counts and make SRM appear intermittent.<\/li>\n<li><strong>Instrumentation differences between variants<\/strong>: if variant code changes event firing, you can create tracking SRM that looks like allocation SRM.<\/li>\n<li><strong>Operational pressure<\/strong>: teams may be tempted to \u201cignore SRM\u201d when results look favorable. In <strong>CRO<\/strong>, this is a reliability trap that compounds over time.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Best Practices for Sample Ratio Mismatch<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Monitor SRM early and continuously<\/h3>\n\n\n\n<p>Check for Sample Ratio Mismatch shortly after launch and then periodically, especially after deployments. SRM that appears mid-test often signals a release-related change.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Define exposure consistently<\/h3>\n\n\n\n<p>Log exposure at a consistent point across variants (and ideally as close as possible to assignment). Avoid defining exposure in a way that can be influenced by the variant\u2019s UX.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Use guardrails and stop conditions<\/h3>\n\n\n\n<p>Treat SRM as a validity gate. 
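<\/p>\n\n\n\n<p>One way to make the gate concrete is to recompute the SRM p-value at each monitoring interval and alert only when the deviation persists. A minimal sketch for a two-variant test; the alpha, the three-check window, and the daily counts are illustrative assumptions, not a standard:<\/p>

```python
import math
from collections import deque

def srm_p_value(observed, expected_ratio):
    """Chi-square goodness-of-fit p-value for a two-variant split (df = 1)."""
    total = sum(observed)
    stat = sum((o - total * r) ** 2 / (total * r)
               for o, r in zip(observed, expected_ratio))
    return math.erfc(math.sqrt(stat / 2))

class SrmGate:
    """Flag SRM only when it stays significant over consecutive checks."""

    def __init__(self, alpha=0.001, consecutive=3):
        self.alpha = alpha
        self.recent = deque(maxlen=consecutive)

    def update(self, observed, expected_ratio):
        self.recent.append(srm_p_value(observed, expected_ratio) < self.alpha)
        # True only once the window is full and every check was significant.
        return len(self.recent) == self.recent.maxlen and all(self.recent)

gate = SrmGate(alpha=0.001, consecutive=3)
daily_counts = [[480, 520], [5300, 4700], [10700, 9300], [16100, 13900]]
for counts in daily_counts:
    flagged = gate.update(counts, [0.5, 0.5])
# flagged becomes True only on the last check, after three consecutive
# significant deviations from the planned 50/50 split.
```

\n\n\n\n<p>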
A practical rule in <strong>CRO<\/strong>: if SRM is statistically significant and persists after initial ramp-up, pause interpretation and investigate.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Segment the SRM diagnosis<\/h3>\n\n\n\n<p>When you detect Sample Ratio Mismatch, break it down by:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>device type<\/li>\n<li>browser<\/li>\n<li>geography<\/li>\n<li>traffic source<\/li>\n<li>logged-in status<\/li>\n<li>entry page \/ route<\/li>\n<\/ul>\n\n\n\n<p>This often reveals the mechanism behind the skew.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Align identity and bucketing rules<\/h3>\n\n\n\n<p>If some users are bucketed by cookie and others by user ID, document it and test for drift. In <strong>Conversion &amp; Measurement<\/strong>, consistency in unit-of-assignment reduces avoidable SRM.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">QA the full funnel, not just the variant render<\/h3>\n\n\n\n<p>Validate that both variants:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>fire exposure events<\/li>\n<li>record conversions<\/li>\n<li>pass through the same redirects<\/li>\n<li>load required tags<\/li>\n<li>respect the same eligibility rules<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Tools Used for Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Sample Ratio Mismatch is typically managed through a combination of experimentation, analytics, and monitoring systems:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Experimentation platforms \/ feature flag systems<\/strong>: control allocation, bucketing, and exposure logging; often provide SRM alerts or raw counts needed to compute them.<\/li>\n<li><strong>Analytics tools<\/strong>: help validate variant counts, segment distributions, and funnel performance within <strong>Conversion &amp; Measurement<\/strong> workflows.<\/li>\n<li><strong>Tag management systems<\/strong>: useful for auditing whether exposure and conversion tags fire consistently across variants.<\/li>\n<li><strong>Data warehouses and BI dashboards<\/strong>: enable repeatable SRM checks, historical baselines, and 
automated alerts for CRO teams operating at scale.<\/li>\n<li><strong>Observability and performance monitoring<\/strong>: helps uncover variant-specific latency, errors, or redirect loops that can produce tracking SRM.<\/li>\n<li><strong>CRM systems<\/strong> (when experiments tie to lifecycle): validate whether user identity and lifecycle events are being attributed consistently across variants.<\/li>\n<\/ul>\n\n\n\n<p>The most important \u201ctool\u201d is a standardized SRM checklist embedded into your release and experiment process.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Metrics Related to Sample Ratio Mismatch<\/h2>\n\n\n\n<p>To make Sample Ratio Mismatch actionable in <strong>Conversion &amp; Measurement<\/strong>, track metrics that reveal both the skew and its likely cause:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Expected vs observed allocation ratio<\/strong> (by variant): the basic SRM signal.<\/li>\n<li><strong>SRM p-value or significance indicator<\/strong>: whether the deviation is unlikely under random assignment.<\/li>\n<li><strong>Exposure count over time<\/strong>: helps detect when SRM began (often correlates with a deployment or campaign change).<\/li>\n<li><strong>Eligibility rate<\/strong>: percent of assigned users who become \u201cmeasurable\u201d (e.g., consented, logged in, reached the test page).<\/li>\n<li><strong>Event loss rate \/ tag firing rate<\/strong>: discrepancies in exposure or conversion event collection across variants.<\/li>\n<li><strong>Traffic quality metrics<\/strong>: bot rate, internal traffic rate, and unusually high bounce or error rates that differ by variant.<\/li>\n<li><strong>Segment distribution parity<\/strong>: compare device, geo, source, and returning\/new proportions by variant\u2014highly relevant to <strong>CRO<\/strong> interpretation.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Future Trends of Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Several industry shifts are changing how Sample 
Ratio Mismatch appears and how teams manage it within <strong>Conversion &amp; Measurement<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>More server-side experimentation<\/strong>: improves control and performance but increases dependence on consistent identity and backend logging\u2014new places for SRM to emerge.<\/li>\n<li><strong>Privacy and consent constraints<\/strong>: measurement eligibility rules (consent, limited cookies) can unintentionally create SRM-like distortions if not designed carefully.<\/li>\n<li><strong>Automation and AI-driven monitoring<\/strong>: anomaly detection can flag SRM faster and correlate it with releases, segments, or traffic sources, improving CRO operations.<\/li>\n<li><strong>Personalization and bandit approaches<\/strong>: adaptive allocation changes expected ratios over time. SRM checks must account for dynamic targets rather than fixed splits.<\/li>\n<li><strong>Increased emphasis on measurement resilience<\/strong>: teams are building redundant validation (client + server logs, multiple counters) to reduce blind spots that lead to Sample Ratio Mismatch.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Sample Ratio Mismatch vs Related Terms<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Sample Ratio Mismatch vs Selection Bias<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Sample Ratio Mismatch<\/strong> is about <em>allocation\/exposure counts not matching expectations<\/em>.<\/li>\n<li><strong>Selection bias<\/strong> is broader: it means the groups differ systematically due to how participants enter the sample. 
SRM can be a symptom of selection bias, but selection bias can exist even when the overall split looks correct.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Sample Ratio Mismatch vs Instrumentation\/Tracking Errors<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Tracking errors are any mistakes in event collection (missing, duplicated, misattributed).<\/li>\n<li>Sample Ratio Mismatch specifically concerns whether those errors (or allocation issues) produce an improbable variant split. You can have tracking errors without SRM, and SRM without obvious tracking errors\u2014both matter in <strong>Conversion &amp; Measurement<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Sample Ratio Mismatch vs Statistical Significance (for results)<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Statistical significance for outcomes asks: \u201cIs the conversion difference real?\u201d<\/li>\n<li>SRM significance asks: \u201cIs the traffic split plausible under random assignment?\u201d\nIn <strong>CRO<\/strong>, you should clear SRM concerns before trusting outcome significance.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Who Should Learn Sample Ratio Mismatch<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Marketers<\/strong> benefit because SRM protects campaign landing page tests and prevents budget decisions based on flawed experiments.<\/li>\n<li><strong>Analysts<\/strong> need SRM to validate experiment integrity and defend conclusions with confidence in <strong>Conversion &amp; Measurement<\/strong> reviews.<\/li>\n<li><strong>Agencies<\/strong> use SRM checks to reduce client risk and standardize CRO delivery across varied tech stacks.<\/li>\n<li><strong>Business owners and founders<\/strong> should understand SRM as a governance concept: it prevents costly \u201cwe shipped the wrong winner\u201d mistakes.<\/li>\n<li><strong>Developers and product engineers<\/strong> benefit because SRM often points directly to implementation details 
(bucketing, caching, redirects, event timing) that only engineering can fix.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Summary of Sample Ratio Mismatch<\/h2>\n\n\n\n<p>Sample Ratio Mismatch (SRM) is the condition where an experiment\u2019s observed variant split meaningfully deviates from the planned allocation beyond what chance would explain. It matters because it signals potential bias in assignment or measurement, which can invalidate conclusions. In <strong>Conversion &amp; Measurement<\/strong>, SRM is a core data quality and experiment integrity check. In <strong>CRO<\/strong>, it functions as a guardrail: address Sample Ratio Mismatch before you declare winners, scale changes, or operationalize learnings.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Frequently Asked Questions (FAQ)<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1) What is Sample Ratio Mismatch (SRM) in simple terms?<\/h3>\n\n\n\n<p>Sample Ratio Mismatch is when your A\/B test variants don\u2019t receive the traffic split you intended (like seeing 60\/40 when you set 50\/50) and the difference is too large to reasonably be explained by randomness.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2) Does Sample Ratio Mismatch invalidate my test results?<\/h3>\n\n\n\n<p>Often, yes\u2014at least until you understand the cause. If SRM reflects biased assignment or unequal measurement, conversion comparisons may be misleading. In <strong>Conversion &amp; Measurement<\/strong>, treat SRM as a \u201cstop and investigate\u201d signal.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3) How do I check for SRM?<\/h3>\n\n\n\n<p>Compare expected vs observed variant counts and run a goodness-of-fit test (commonly chi-square). 
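<\/p>\n\n\n\n<p>As a quick worked example with hypothetical counts, here is the same check for a three-variant test; with two degrees of freedom the chi-square survival function has a simple closed form:<\/p>

```python
import math

# Hypothetical three-variant test planned at an even 1/3 split.
observed = [3300, 3300, 3400]
expected = [sum(observed) / 3] * 3

stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# With k - 1 = 2 degrees of freedom, the chi-square survival
# function simplifies to exp(-stat / 2).
p_value = math.exp(-stat / 2)
# stat is about 2.0 and p_value about 0.37: a 33/33/34 split over
# 10,000 users is entirely plausible under random assignment.
```

\n\n\n\n<p>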
Many teams also monitor SRM over time and by segment (device, browser, source) to pinpoint the cause.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4) What are the most common causes of Sample Ratio Mismatch?<\/h3>\n\n\n\n<p>Common causes include bucketing bugs, caching or CDN behavior, redirects, bot traffic, consent\/eligibility rules, identity inconsistencies (cookie vs user ID), and variant-specific tracking differences.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">5) How does Sample Ratio Mismatch affect CRO decisions?<\/h3>\n\n\n\n<p>In <strong>CRO<\/strong>, SRM can produce false winners or hide real improvements because the variants may be exposed to different kinds of users. Resolving SRM improves decision quality and protects the credibility of your experimentation program.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">6) Can SRM happen even if the experiment platform is \u201ccorrect\u201d?<\/h3>\n\n\n\n<p>Yes. Even with correct assignment, you can get tracking SRM if exposure logging fails more in one variant, or eligibility rules exclude users unevenly. That\u2019s why SRM is both a delivery and <strong>Conversion &amp; Measurement<\/strong> concern.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">7) What should I do when I detect SRM?<\/h3>\n\n\n\n<p>First, pause interpretation of performance results. Then diagnose systematically: verify allocation settings, validate exposure logging, check redirects and performance, segment by device\/browser\/source, and confirm identity\/bucketing logic. After fixing, restart or rerun the test to restore validity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sample Ratio Mismatch (SRM) is one of the most important \u201csanity checks\u201d in experimentation, yet it\u2019s frequently misunderstood. In <strong>Conversion &#038; Measurement<\/strong>, SRM is the signal that your experiment\u2019s observed traffic split doesn\u2019t match the split you intended\u2014often enough that random chance is an unlikely explanation. 
In <strong>CRO<\/strong>, that matters because you can\u2019t trust uplift, winners, or learnings if the people who saw each variant weren\u2019t assigned fairly.<\/p>\n","protected":false},"author":10235,"featured_media":0,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1889],"tags":[],"class_list":["post-7110","post","type-post","status-publish","format-standard","hentry","category-cro"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/posts\/7110","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/users\/10235"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/comments?post=7110"}],"version-history":[{"count":0,"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/posts\/7110\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/media?parent=7110"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/categories?post=7110"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.wizbrand.com\/tutorials\/wp-json\/wp\/v2\/tags?post=7110"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}