A/B Testing Regional Discounts: Finding the Perfect Price Point
Tutorials · 16 min read

A data-driven approach to optimizing your regional pricing: learn how to A/B test discount levels and maximize revenue per market.

Mantas Karmaza

Founder · October 1, 2023

A/B Testing Regional Discounts: The Complete Guide

Guessing at discount levels leaves money on the table. In this comprehensive guide, we'll show you exactly how to test your way to optimal regional pricing using data-driven A/B testing.

[Image: Analytics Dashboard]

Why A/B Test Regional Pricing?

Different markets have vastly different price sensitivities. What works in Brazil won't work in Indonesia. A 40% discount might be:

  • **Too low for Indonesia** — You could be at 50-60% and still be profitable
  • **Too high for Brazil** — Maybe 25-30% would convert just as well
  • **Perfect for India** — But only because you got lucky

Without testing, you're essentially gambling with your revenue. Here's the reality:

"We increased our India revenue by 340% simply by testing 45% vs 55% discounts. The 55% discount converted 2.3x better, more than making up for the lower price point."
— DevTools SaaS founder

Ready to increase your international revenue?

Start your free trial and see results in days, not months.

Start Free Trial

The Economics of Regional Pricing Tests

Before diving into implementation, let's understand why testing matters so much financially.

The Revenue Equation

Revenue per 1000 Visitors = Conversion Rate × Price × 1000

At 40% discount ($59):
- 2.0% conversion = $1,180 revenue

At 50% discount ($49):
- 2.8% conversion = $1,372 revenue (+16%)

At 60% discount ($39):
- 3.2% conversion = $1,248 revenue (+6%)

The 50% discount wins, even though the 60% discount has the highest conversion rate. This is why you must optimize for revenue, not conversions.
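
As a quick sketch, here's the same comparison in code, using the assumed prices and conversion rates from the example above:

// Assumed figures from the example: discounted price and conversion rate per variant
const options = [
  { discount: 40, price: 59, conversionRate: 0.020 },
  { discount: 50, price: 49, conversionRate: 0.028 },
  { discount: 60, price: 39, conversionRate: 0.032 },
]

// Revenue per 1,000 visitors = conversion rate × price × 1,000
const scored = options.map(o => ({
  ...o,
  revenuePer1000: Math.round(o.conversionRate * o.price * 1000),
}))

// Optimize for revenue, not conversion rate
const best = scored.reduce((a, b) => (b.revenuePer1000 > a.revenuePer1000 ? b : a))

console.log(best) // { discount: 50, price: 49, conversionRate: 0.028, revenuePer1000: 1372 }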

[Image: Revenue Chart]

Setting Up Your A/B Test

Step 1: Choose Your Test Market

Not all markets are suitable for testing. Pick a market with:

| Criteria | Minimum | Ideal |
| --- | --- | --- |
| Weekly visitors | 100+ | 500+ |
| Current conversion | <1% | <0.5% |
| Strategic importance | Medium | High |
| Traffic consistency | Stable | Growing |

Pro tip: Start with your highest-traffic underperforming market. You'll get results faster and the impact will be larger.
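
If your analytics can export weekly visitors and conversion rate per country, a quick filter surfaces the candidates. A minimal sketch with illustrative numbers (the data shape is an assumption, not tied to any particular analytics tool):

// Illustrative export: weekly visitors and conversion rate per country
const markets = [
  { country: 'IN', weeklyVisitors: 2400, conversionRate: 0.004 },
  { country: 'BR', weeklyVisitors: 1100, conversionRate: 0.012 },
  { country: 'ID', weeklyVisitors: 300,  conversionRate: 0.006 },
]

// Candidates: enough traffic to reach significance, conversion low enough to have headroom
const candidates = markets
  .filter(m => m.weeklyVisitors >= 100 && m.conversionRate < 0.01)
  .sort((a, b) => b.weeklyVisitors - a.weeklyVisitors) // highest traffic first

console.log(candidates[0]?.country) // 'IN'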

Step 2: Define Your Variants

Test 2-3 discount levels maximum. More variants = longer test duration.

Example for India (currently at 40% discount):

| Variant | Discount | Price | Hypothesis |
| --- | --- | --- | --- |
| Control | 40% | $59.40 | Current baseline |
| Test A | 50% | $49.50 | Sweet spot for middle-class |
| Test B | 60% | $39.60 | Maximum reach |
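
It also helps to keep the variants in a small config object that both the assignment code (Step 4) and your analytics read from. A sketch using the prices above (the $99 base price is implied by the table; the field names are illustrative):

// Test configuration for the India pricing experiment (field names are illustrative)
const indiaPricingTest = {
  testId: 'india_pricing_2024',
  country: 'IN',
  basePrice: 99,
  variants: [
    { name: 'control', discount: 40 }, // $59.40 — current baseline
    { name: 'testA',   discount: 50 }, // $49.50 — hypothesized sweet spot
    { name: 'testB',   discount: 60 }, // $39.60 — maximum reach
  ],
}

// Discounted price for a variant
const priceFor = (test, variant) =>
  +(test.basePrice * (1 - variant.discount / 100)).toFixed(2)

// priceFor(indiaPricingTest, indiaPricingTest.variants[1]) → 49.5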

Step 3: Calculate Required Sample Size

For statistically significant results (95% confidence, 80% power), use this formula:

// Sample size per variant
function calculateSampleSize(baselineConversion, minimumDetectableEffect) {
  const p1 = baselineConversion
  const p2 = baselineConversion * (1 + minimumDetectableEffect)
  const pooledP = (p1 + p2) / 2

  // Z-scores for 95% confidence, 80% power
  const zAlpha = 1.96
  const zBeta = 0.84

  const n = (2 * pooledP * (1 - pooledP) * Math.pow(zAlpha + zBeta, 2)) /
            Math.pow(p2 - p1, 2)

  return Math.ceil(n)
}

// Example: 2% baseline, detect 30% improvement
calculateSampleSize(0.02, 0.30) // Returns ~9,800 visitors per variant

Quick reference (per-variant sample sizes, computed with the formula above):

| Baseline Conversion | Detect 20% lift | Detect 30% lift | Detect 50% lift |
| --- | --- | --- | --- |
| 1% | ~42,600 | ~19,800 | ~7,700 |
| 2% | ~21,100 | ~9,800 | ~3,800 |
| 5% | ~8,100 | ~3,800 | ~1,500 |
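
Sample size also tells you how long the test will run: multiply by the number of variants and divide by the weekly traffic you send into the test. A minimal sketch:

// Weeks needed to finish a test, given weekly traffic into the test
function estimateDurationWeeks(samplePerVariant, numVariants, weeklyVisitors) {
  return Math.ceil((samplePerVariant * numVariants) / weeklyVisitors)
}

// Example: ~9,800 per variant, 3 variants, 2,500 weekly visitors → 12 weeks
estimateDurationWeeks(9800, 3, 2500) // 12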

Step 4: Implement the Test

Here's production-ready code for variant assignment:

import crypto from 'crypto'

function assignVariant(visitorId, country, testId) {
  // Create deterministic hash for consistent assignment
  const hash = crypto.createHash('sha256')
    .update(`${visitorId}-${country}-${testId}`)
    .digest('hex')

  // Convert first 8 chars to number (0-4294967295)
  const bucket = parseInt(hash.slice(0, 8), 16)

  // Normalize to 0-99
  const normalized = bucket % 100

  // Assign to variants (33/33/34 split)
  if (normalized < 33) return { variant: 'control', discount: 40 }
  if (normalized < 66) return { variant: 'testA', discount: 50 }
  return { variant: 'testB', discount: 60 }
}

// Usage
const { variant, discount } = assignVariant(
  'user_abc123',
  'IN',
  'india_pricing_2024'
)

Important: Use a stable identifier (user ID, device fingerprint) so returning visitors see the same variant.
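
One way to keep that identifier stable on the server is a first-party cookie set on the visitor's first request. A sketch assuming an Express app with cookie-parser (the route and cookie name are illustrative, not part of SmartBanner):

import express from 'express'
import cookieParser from 'cookie-parser'
import { randomUUID } from 'crypto'

const app = express()
app.use(cookieParser())

app.get('/api/pricing', (req, res) => {
  // Reuse the cookie if present so returning visitors hash to the same variant
  let visitorId = req.cookies.visitor_id
  if (!visitorId) {
    visitorId = randomUUID()
    res.cookie('visitor_id', visitorId, { maxAge: 365 * 24 * 60 * 60 * 1000, httpOnly: true })
  }

  // 'IN' stands in for the country your geo lookup returns
  const { variant, discount } = assignVariant(visitorId, 'IN', 'india_pricing_2024')
  res.json({ variant, discount })
})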

[Image: Code Implementation]

Measuring Results Correctly

Primary Metric: Revenue per Visitor (RPV)

This is the only metric that matters for pricing tests. Here's why:

Scenario: India test results after 2 weeks

Control (40% = $59.40):
- 10,000 visitors
- 180 conversions (1.8%)
- Revenue: $10,692
- RPV: $1.07

Test A (50% = $49.50):
- 10,000 visitors
- 252 conversions (2.52%)
- Revenue: $12,474
- RPV: $1.25 ✓ WINNER (+17%)

Test B (60% = $39.60):
- 10,000 visitors
- 310 conversions (3.1%)
- Revenue: $12,276
- RPV: $1.23 (+15%)

Test A wins despite having fewer conversions than Test B. Higher conversion rate ≠ higher revenue.

Secondary Metrics to Track

| Metric | Why It Matters | Action if Concerning |
| --- | --- | --- |
| Refund rate | Low-quality customers | Raise discount threshold |
| LTV by variant | Long-term value | Factor into RPV calculation |
| Support tickets | Hidden costs | Add to cost analysis |
| NPS by variant | Customer quality | Monitor closely |
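
Refunds in particular can be folded straight into the primary metric. A minimal sketch of a refund-adjusted (net) RPV, reusing the Test A and Test B revenues from the scenario above with hypothetical refund amounts:

// Net RPV: subtract refunded revenue before comparing variants
function netRevenuePerVisitor({ visitors, revenue, refundedRevenue }) {
  return (revenue - refundedRevenue) / visitors
}

// A deeper discount that wins on gross RPV can lose once refunds are counted
netRevenuePerVisitor({ visitors: 10000, revenue: 12276, refundedRevenue: 1200 }) // ≈ 1.11
netRevenuePerVisitor({ visitors: 10000, revenue: 12474, refundedRevenue: 500 })  // ≈ 1.20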

Statistical Significance Calculator

Don't end tests early. Use this to check significance:

// Standard normal CDF (Zelen–Severo approximation), needed for the p-value below
function normalCDF(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x))
  const d = 0.3989423 * Math.exp(-x * x / 2)
  const p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))))
  return x > 0 ? 1 - p : p
}

// Two-proportion z-test: is the variant's conversion rate significantly different?
function calculateSignificance(control, variant) {
  const { visitors: n1, conversions: c1 } = control
  const { visitors: n2, conversions: c2 } = variant

  const p1 = c1 / n1
  const p2 = c2 / n2
  const pPooled = (c1 + c2) / (n1 + n2)

  const se = Math.sqrt(pPooled * (1 - pPooled) * (1/n1 + 1/n2))
  const z = (p2 - p1) / se

  // Two-tailed p-value
  const pValue = 2 * (1 - normalCDF(Math.abs(z)))

  return {
    significant: pValue < 0.05,
    confidence: (1 - pValue) * 100,
    lift: ((p2 - p1) / p1) * 100
  }
}
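
For example, checking Control against Test A from the India scenario above:

const result = calculateSignificance(
  { visitors: 10000, conversions: 180 },   // Control
  { visitors: 10000, conversions: 252 }    // Test A
)
// result.significant → true (p ≈ 0.0005)
// result.lift → 40 (Test A's conversion rate is 40% higher than Control's)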

Common A/B Testing Mistakes

Mistake 1: Ending Tests Too Early (Peeking)

The problem: You see promising results after 3 days and declare a winner.

Why it's wrong: Early results often reverse. Statistical noise can look like real differences.

The fix: Pre-commit to a test duration or sample size. Never peek and stop early.

Mistake 2: Testing Too Many Variants

The problem: You test 5 discount levels simultaneously.

Why it's wrong: You need 5x more traffic to reach significance. A 2-week test becomes 10 weeks.

The fix: Maximum 3 variants per test. Run sequential tests if needed.

Mistake 3: Ignoring Seasonality

The problem: You run a test during Black Friday and apply results year-round.

Why it's wrong: Holiday behavior doesn't reflect normal purchasing patterns.

The fix: Run tests during normal periods. Note any external factors.

Mistake 4: Not Segmenting Results

The problem: You look at overall results only.

Why it's wrong: Different segments may respond very differently.

The fix: Analyze results by:

  • Device type (mobile converts differently)
  • Traffic source (organic vs paid)
  • Customer type (new vs returning)
  • Plan type (monthly vs annual)
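
A simple group-by over your raw events gets you there. A sketch computing RPV per segment and variant (the event shape is illustrative):

// RPV by segment from raw visit/conversion records (illustrative event shape)
function rpvBySegment(events, segmentKey) {
  const groups = {}
  for (const e of events) {
    const key = `${e[segmentKey]}:${e.variant}`
    groups[key] ??= { visitors: 0, revenue: 0 }
    groups[key].visitors++
    groups[key].revenue += e.revenue ?? 0   // 0 for non-converting visits
  }
  return Object.fromEntries(
    Object.entries(groups).map(([key, g]) => [key, g.revenue / g.visitors])
  )
}

// e.g. rpvBySegment(events, 'device') → { 'mobile:control': 0.84, 'mobile:testA': 1.31, ... }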

[Image: Data Analysis]

Advanced Testing Strategies

Multi-Armed Bandit

Instead of fixed traffic splits, dynamically allocate more traffic to winning variants:

class EpsilonGreedyBandit {
  constructor(variants, epsilon = 0.1) {
    this.variants = variants
    this.epsilon = epsilon
    this.stats = variants.reduce((acc, v) => ({
      ...acc,
      [v]: { impressions: 0, conversions: 0, revenue: 0 }
    }), {})
  }

  selectVariant() {
    // Explore: random variant (epsilon % of time)
    if (Math.random() < this.epsilon) {
      return this.variants[Math.floor(Math.random() * this.variants.length)]
    }

    // Exploit: best performing variant
    return this.variants.reduce((best, v) => {
      const rpv = this.calculateRPV(v)
      const bestRpv = this.calculateRPV(best)
      return rpv > bestRpv ? v : best
    })
  }

  calculateRPV(variant) {
    const { impressions, revenue } = this.stats[variant]
    return impressions > 0 ? revenue / impressions : 0
  }

  recordResult(variant, converted, revenue) {
    this.stats[variant].impressions++
    if (converted) {
      this.stats[variant].conversions++
      this.stats[variant].revenue += revenue
    }
  }
}
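
A minimal wire-up sketch, continuing the India example (in practice recordResult is called later, once the purchase outcome for that visitor is known):

const bandit = new EpsilonGreedyBandit(['control', 'testA', 'testB'])

// Per visitor: pick a variant and show the matching discount
const variant = bandit.selectVariant()

// Feed the outcome back so allocation shifts toward the best-RPV variant
bandit.recordResult(variant, true, 49.50)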

Pros: Minimizes regret during testing

Cons: Harder to reach statistical significance

Sequential Testing

Run tests in sequence to find the optimal price:

Phase 1: Test 30% vs 40% → Winner: 40%
Phase 2: Test 40% vs 50% → Winner: 50%
Phase 3: Test 50% vs 55% → Winner: 50%
Phase 4: Test 45% vs 50% → Winner: 50%

Final optimal discount: 50%

This narrowing approach, similar to a binary search, converges on the optimal price with fewer total tests than testing every discount level at once.
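
One way to sketch that narrowing logic (runTest is a placeholder for a full A/B test that resolves to whichever discount had the higher RPV; it is not a real API):

// Sequential narrowing sketch: each runTest(a, b) is a complete A/B test
// that resolves to the discount with the higher revenue per visitor.
async function findOptimalDiscount(runTest, start = 30, step = 10, minStep = 5) {
  let best = start
  while (step >= minStep) {
    const up = await runTest(best, best + step)       // e.g. 40% vs 50%
    if (up !== best) { best = up; continue }          // deeper discount won — keep moving
    const down = await runTest(best - step, best)     // e.g. 45% vs 50%
    if (down !== best) { best = down; continue }
    step = Math.floor(step / 2)                       // neither neighbor won — narrow the step
  }
  return best
}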

SmartBanner A/B Testing

SmartBanner includes built-in A/B testing that handles all of this automatically:

Features

  • **Visual test builder** — Create variants in the dashboard, no code
  • **Automatic traffic allocation** — Even split or multi-armed bandit
  • **Real-time results** — Watch conversions and revenue live
  • **Statistical significance alerts** — Know when you have a winner
  • **Auto-deploy winners** — Automatically roll out winning variants
  • **Segment analysis** — Break down results by device, source, and more

How It Works

  • Go to **Countries → Select a tier → A/B Test**
  • Create your test variants with different discount levels
  • Set your traffic allocation and success metrics
  • Launch the test and monitor results
  • SmartBanner notifies you when results are significant
  • One-click deploy the winner

[Image: SmartBanner Dashboard]

Real-World Case Studies

Case Study 1: SaaS Analytics Tool

Challenge: India traffic converting at 0.3% (vs 2.5% US)

Test: 40% vs 50% vs 60% discount

Results after 4 weeks:

  • 40%: 0.4% conversion, $1.12 RPV
  • 50%: 0.9% conversion, $1.68 RPV ✓
  • 60%: 1.1% conversion, $1.56 RPV

Impact: 50% improvement in India revenue

Case Study 2: Design Templates Marketplace

Challenge: Brazil underperforming expectations

Test: Current 35% vs 45% discount

Results after 3 weeks:

  • 35%: 1.2% conversion, $1.95 RPV
  • 45%: 1.8% conversion, $2.16 RPV ✓

Impact: 11% revenue increase from Brazil

Conclusion: Test, Don't Guess

The "right" discount for any market is whatever maximizes revenue per visitor. You can't know this without testing.

Your action items:

  • Identify your highest-traffic underperforming market
  • Set up a test with 2-3 discount levels
  • Run for at least 2 weeks (or until significant)
  • Deploy the winner and move to the next market

With SmartBanner, you can run these tests in minutes instead of building infrastructure. Start your free trial and optimize your global pricing today.

SmartBanner includes everything you need

Stop building regional pricing from scratch. Get started in 2 minutes.

  • Location-based pricing for 195+ countries
  • VPN/proxy fraud protection
  • 50+ automated holiday campaigns
  • A/B testing for discount optimization
  • One-line JavaScript integration
Try SmartBanner Free

Stop leaving money on the table

Join 2,847+ SaaS founders who use SmartBanner to unlock international revenue. Setup takes 2 minutes. See results in days.

No credit card required. 14-day free trial on all paid plans.