How to Run Pricing Experiments Without Destroying Trust

February 2, 2026 · ScaledByDesign · pricing · experimentation · strategy · growth

The Most Dangerous Experiment

A 10% improvement in pricing has more impact on profit than a 10% improvement in conversion rate or a 10% improvement in traffic, because the extra revenue drops straight to the bottom line with no additional cost to serve.
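A stylized example with assumed numbers (an 80% cost base that scales with volume but not with price) shows why:

Baseline:      $100k revenue, $80k costs, $20k profit
+10% price:    $110k revenue, $80k costs, $30k profit (+50%)
+10% volume:   $110k revenue, $88k costs, $22k profit (+10%)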

But pricing experiments go wrong in ways that A/B tests on button colors never do. Show two customers different prices for the same product, let them find out, and you've got a PR crisis and a potential legal problem.

Here's how to do it right.

What You Can and Can't Test

✅ Legal and Ethical

  • Different products or tiers at different prices (everyone sees the same options)
  • Different landing pages with different offers (different value propositions)
  • Geographic pricing (different markets, different prices — disclosed)
  • Cohort-based pricing (new customers get new pricing, existing keep theirs)
  • Promotional testing (different discount amounts to different segments)
  • Bundling experiments (same products, different package configurations)

❌ Risky or Illegal

  • Same product, different prices to different users at the same time (price discrimination)
  • Dynamic pricing based on user data without disclosure (browsing history, device type)
  • Hiding the lower price from users who would qualify
  • Bait-and-switch (showing one price, charging another)

The Gray Area

  • Willingness-to-pay surveys before setting prices (ethical, useful)
  • Price anchoring tests (showing a higher "original" price — legal if the original price was real)
  • Free trial length experiments (different trial periods for different cohorts)

The Safe Pricing Experiment Framework

Method 1: Sequential Testing

Test prices over time, not simultaneously:

Week 1-2: Price A ($49/month) — measure conversion + retention
Week 3-4: Price B ($59/month) — measure conversion + retention
Week 5-6: Price C ($39/month) — measure conversion + retention

Compare:
  - Conversion rate at each price point
  - Revenue per visitor
  - 30-day retention rate
  - Customer satisfaction scores

Pros: No two customers see different prices at the same time.
Cons: Seasonal effects can skew results, and the test takes longer.
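As a sketch of how the comparison might be tallied, assuming you export per-period aggregates (the PeriodResult shape and the numbers below are illustrative, not from this article):

// Sequential pricing test: compare each period on conversion, revenue per visitor, and retention.
interface PeriodResult {
  label: string;        // e.g. "Price A ($49/mo)"
  price: number;        // monthly price tested during the period
  visitors: number;     // unique pricing-page visitors in the period
  signups: number;      // paid conversions in the period
  retained30d: number;  // of those signups, how many are still active after 30 days
}

function summarize(periods: PeriodResult[]) {
  return periods.map((p) => ({
    label: p.label,
    conversionRate: p.signups / p.visitors,
    revenuePerVisitor: (p.signups * p.price) / p.visitors,
    retention30d: p.retained30d / p.signups,
  }));
}

// Illustrative numbers only:
console.table(summarize([
  { label: "Price A ($49)", price: 49, visitors: 4000, signups: 200, retained30d: 164 },
  { label: "Price B ($59)", price: 59, visitors: 4100, signups: 172, retained30d: 148 },
  { label: "Price C ($39)", price: 39, visitors: 3900, signups: 234, retained30d: 180 },
]));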

Method 2: Geographic Testing

Different prices in different markets:

interface User { market: string }
interface PricingTier { price: number; currency: string }

// Illustrative currency mapping (assumed, not from the original example).
const currencies: Record<string, string> = { "US": "USD", "UK": "GBP", "EU": "EUR", "APAC": "USD", "LATAM": "USD" };

function getCurrency(market: string): string {
  return currencies[market] ?? "USD";
}

function getPricing(user: User): PricingTier {
  // Different markets, different price points
  const marketPricing: Record<string, number> = {
    "US": 59,
    "UK": 49,
    "EU": 54,
    "APAC": 39,
    "LATAM": 29,
  };

  return {
    price: marketPricing[user.market] || 49,
    currency: getCurrency(user.market),
    // Always show the price for their market — no deception
  };
}

Pros: Natural market segmentation, no ethical issues.
Cons: Markets differ in ways beyond price sensitivity.

Method 3: Tier and Packaging Tests

Same base product, different packaging:

Test A: Single tier at $49/month
  - All features included

Test B: Two tiers
  - Basic: $29/month (core features)
  - Pro: $59/month (all features)

Test C: Three tiers
  - Starter: $19/month (limited)
  - Growth: $49/month (core)
  - Scale: $99/month (everything + support)

Measure: Revenue per visitor, not just conversion rate

This is the safest and most common pricing experiment. You're not charging different prices for the same thing — you're testing different product configurations.
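A minimal sketch of the evaluation, assuming each signup is logged with the price of the tier it chose (the names and counts below are placeholders):

// Tier test: judge arms by revenue per visitor, not raw conversion rate.
interface TierTestArm {
  name: string;
  visitors: number;
  signupsByTier: { price: number; count: number }[];
}

function revenuePerVisitor(arm: TierTestArm): number {
  const revenue = arm.signupsByTier.reduce((sum, s) => sum + s.price * s.count, 0);
  return revenue / arm.visitors;
}

const singleTier: TierTestArm = {
  name: "Test A: single $49 tier",
  visitors: 5000,
  signupsByTier: [{ price: 49, count: 250 }],
};

const threeTiers: TierTestArm = {
  name: "Test C: three tiers",
  visitors: 5000,
  signupsByTier: [
    { price: 19, count: 70 },
    { price: 49, count: 120 },
    { price: 99, count: 60 },
  ],
};

console.log(revenuePerVisitor(singleTier).toFixed(2)); // "2.45"
console.log(revenuePerVisitor(threeTiers).toFixed(2)); // "2.63": same 5% conversion, more revenue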

Method 4: Willingness-to-Pay Research

Before you test, ask:

Survey questions (Van Westendorp Price Sensitivity):

1. "At what price would this be so cheap you'd question
    the quality?" (Too cheap)

2. "At what price would this be a great deal?" (Cheap)

3. "At what price would this start to feel expensive but
    you'd still consider it?" (Expensive)

4. "At what price would this be too expensive to consider?"
    (Too expensive)

Plot the responses:
  - Intersection of "too cheap" and "expensive" = lower bound
  - Intersection of "cheap" and "too expensive" = upper bound
  - Optimal price point = where the "too cheap" and "too expensive" curves cross

This gives you a price range to test within — dramatically reducing the number of experiments needed.
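A rough sketch of the analysis, assuming each respondent gives all four price answers and you evaluate the curves over a sorted grid of candidate prices (a simplified reading of Van Westendorp, not a full implementation):

// Van Westendorp Price Sensitivity Meter, simplified.
interface Response {
  tooCheap: number;      // "so cheap you'd question the quality"
  cheap: number;         // "a great deal"
  expensive: number;     // "starts to feel expensive"
  tooExpensive: number;  // "too expensive to consider"
}

// Share of respondents for whom `price` falls into each bucket.
const shareTooCheap = (rs: Response[], price: number) =>
  rs.filter((r) => price <= r.tooCheap).length / rs.length;
const shareCheap = (rs: Response[], price: number) =>
  rs.filter((r) => price <= r.cheap).length / rs.length;
const shareExpensive = (rs: Response[], price: number) =>
  rs.filter((r) => price >= r.expensive).length / rs.length;
const shareTooExpensive = (rs: Response[], price: number) =>
  rs.filter((r) => price >= r.tooExpensive).length / rs.length;

// First price (ascending) where a falling curve meets a rising one: an approximate crossing.
function crossing(
  rs: Response[],
  falling: (rs: Response[], p: number) => number,
  rising: (rs: Response[], p: number) => number,
  pricesAscending: number[],
): number | undefined {
  return pricesAscending.find((p) => falling(rs, p) <= rising(rs, p));
}

function priceRange(rs: Response[], pricesAscending: number[]) {
  return {
    lowerBound: crossing(rs, shareTooCheap, shareExpensive, pricesAscending),
    upperBound: crossing(rs, shareCheap, shareTooExpensive, pricesAscending),
    optimal: crossing(rs, shareTooCheap, shareTooExpensive, pricesAscending),
  };
}

// Usage: priceRange(surveyResponses, [19, 29, 39, 49, 59, 69, 79, 99]);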

Measuring Pricing Experiments

The Metrics That Matter

Don't optimize for:
  ❌ Conversion rate alone (lower price = higher conversion, lower revenue)
  ❌ Revenue alone (higher price = more revenue per customer, fewer customers)

Optimize for:
  ✅ Revenue per visitor (conversion × price)
  ✅ Lifetime value (does the price affect retention?)
  ✅ Payback period (how fast do you recoup CAC?)
  ✅ Expansion revenue (do lower-tier customers upgrade?)

The LTV Trap

Price A: $29/month
  Conversion: 5%
  Monthly churn: 8%
  Average lifetime: 12.5 months
  LTV: $362

Price B: $49/month
  Conversion: 3.5%
  Monthly churn: 5%
  Average lifetime: 20 months
  LTV: $980

Price B converts worse but produces 2.7x more LTV. You need at least 90 days of retention data before declaring a pricing experiment winner.
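A quick check of the arithmetic behind those numbers, using the simple model LTV = monthly price / monthly churn (it ignores discounting, expansion, and gross margin):

// Simple subscription LTV: average customer lifetime = 1 / monthly churn.
function ltv(monthlyPrice: number, monthlyChurn: number): number {
  return monthlyPrice / monthlyChurn;
}

console.log(ltv(29, 0.08)); // ≈ 362 (Price A)
console.log(ltv(49, 0.05)); // ≈ 980 (Price B)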

The Dashboard

Pricing Experiment: Tier Structure Test
Period: Feb 1 - Mar 15, 2026

                    Control (Single)  Variant (Three Tiers)
Visitors:           10,240            10,180
Signups:            512 (5.0%)        468 (4.6%)
Revenue/visitor:    $2.50             $3.12 (+25%)
Avg deal size:      $49               $67
Mix:                100% @ $49        15% @ $19, 40% @ $49, 45% @ $99
30-day retention:   82%               86%
Projected LTV:      $362              $485

Winner: Three tiers (+25% revenue/visitor, +34% projected LTV)

The Pricing Experiment Playbook

Phase 1: Research (Week 1-2)

  • Run willingness-to-pay survey (100+ responses)
  • Analyze competitor pricing
  • Calculate your unit economics at different price points
  • Define the price range to test

Phase 2: Design (Week 3)

  • Choose experiment method (sequential, geographic, or tier-based)
  • Define primary metric (revenue per visitor)
  • Define guardrail metrics (retention, satisfaction, support tickets)
  • Calculate required sample size and duration (see the sizing sketch below)
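A back-of-the-envelope sizing sketch, assuming the gating comparison is conversion rate between two variants and using the standard two-proportion normal approximation (the alpha, power, and example numbers are placeholders):

// Rough per-variant sample size for detecting a lift in conversion rate.
// Two-proportion normal approximation: two-sided alpha = 0.05, power = 0.80.
function sampleSizePerVariant(
  baselineRate: number,   // e.g. 0.05 for a 5% baseline conversion rate
  relativeLift: number,   // minimum detectable relative lift, e.g. 0.15 for +15%
): number {
  const zAlpha = 1.96;  // two-sided 95% confidence
  const zBeta = 0.84;   // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    (zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
      zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2;
  return Math.ceil(numerator / (p2 - p1) ** 2);
}

// e.g. 5% baseline conversion, want to reliably detect a 15% relative lift:
console.log(sampleSizePerVariant(0.05, 0.15)); // ≈ 14,000 visitors per variant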

Phase 3: Execute (Week 4-8)

  • Run the experiment for full duration
  • Monitor guardrail metrics weekly
  • Do NOT peek at results and stop early
  • Collect qualitative feedback alongside quantitative data

Phase 4: Analyze (Week 9)

  • Compare revenue per visitor across variants
  • Check retention data (minimum 30 days post-signup)
  • Segment by customer type (SMB vs enterprise, new vs existing)
  • Make the decision based on LTV, not just conversion

The One Rule

Every pricing experiment should follow this principle: any customer who discovers the experiment should feel it was fair. If you can't explain your pricing test to a customer without them feeling deceived, redesign the experiment.

Pricing is the most powerful lever in your business. Test it rigorously, test it ethically, and test it with enough patience to see the retention impact. The companies that get pricing right don't just grow faster — they grow more profitably.
