Redis Caching Patterns That Actually Work in Production

April 1, 2026 · ScaledByDesign
redis · caching · performance · database · infrastructure

The Stale Price Incident

A client's e-commerce site showed the wrong price to 3,000 customers for 45 minutes. The product team had updated a price in the admin panel, but the cached version kept serving the old price. Orders came in at the old (lower) price. The business had to honor them. Total cost: $47K.

Caching is the easiest performance win and the hardest thing to get right. Here are the patterns that prevent incidents like this.

Pattern 1: Cache-Aside (The Default)

The application manages the cache explicitly:

async function getProduct(id: string): Promise<Product> {
  // 1. Check cache first
  const cached = await redis.get(`product:${id}`);
  if (cached) return JSON.parse(cached);
 
  // 2. Cache miss → fetch from database
  const product = await db.products.findUnique({ where: { id } });
  if (!product) throw new NotFoundError();
 
  // 3. Populate cache with TTL
  await redis.set(`product:${id}`, JSON.stringify(product), "EX", 300); // 5 min
 
  return product;
}
 
// On update: invalidate the cache
async function updateProduct(id: string, data: UpdateProductInput) {
  const product = await db.products.update({ where: { id }, data });
  await redis.del(`product:${id}`);  // Delete cached version
  return product;
}

When to use: Most read-heavy workloads. Simple, predictable, easy to reason about.

The trap: The window between database update and cache invalidation. If invalidation fails (network issue, Redis down), stale data persists until TTL expires.
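One pragmatic safeguard (a common pattern, not something from the incident writeup above) is to retry the delete with backoff and treat the TTL as the backstop: if every retry fails, stale data can live at most until the TTL expires. A minimal sketch, using an in-memory stand-in for the Redis client so the retry logic is self-contained:

```typescript
// Stand-in for a Redis client: only the one call the retry logic needs.
interface CacheClient {
  del(key: string): Promise<number>;
}

// Retry cache invalidation with exponential backoff. If every attempt
// fails, return false so the caller can log/alert — the short TTL set on
// write then bounds how long the stale value can survive.
async function invalidateWithRetry(
  cache: CacheClient,
  key: string,
  attempts = 3,
  baseDelayMs = 100
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    try {
      await cache.del(key);
      return true; // invalidated
    } catch {
      // Backoff: 100ms, 200ms, 400ms…
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  return false; // TTL is now the only safety net
}
```

This doesn't eliminate the window, it bounds it: pick the TTL based on how long you can tolerate a stale price, not on how often the data changes.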

Pattern 2: Write-Through Cache

Write to cache and database simultaneously:

async function updateProduct(id: string, data: UpdateProductInput) {
  const product = await db.products.update({ where: { id }, data });
  
  // Write to cache immediately (not delete — write the new value)
  await redis.set(`product:${id}`, JSON.stringify(product), "EX", 300);
  
  return product;
}

Advantage over cache-aside: no reliance on a separate invalidation step. The new value lands in the cache as part of the write path, so reads see fresh data as soon as the write completes.

Disadvantage: Slightly slower writes (extra Redis call). Cache may contain data that's never read.

Pattern 3: Preventing the Thundering Herd

When a popular cache key expires, hundreds of requests simultaneously hit the database:

// The thundering herd problem
// 1000 requests/second for product "best-seller-123"
// Cache expires → 1000 requests all miss → 1000 database queries simultaneously
// Database buckles under load
 
// Solution: Mutex lock (only one request fetches, others wait)
async function getProductWithLock(id: string): Promise<Product> {
  const cacheKey = `product:${id}`;
  const lockKey = `lock:product:${id}`;
 
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);
 
  // Try to acquire lock (EX = auto-expire so a crashed holder can't wedge
  // everyone, NX = only set if the key doesn't already exist)
  const locked = await redis.set(lockKey, "1", "EX", 5, "NX");
 
  if (locked) {
    try {
      // We got the lock — fetch from DB and populate cache
      const product = await db.products.findUnique({ where: { id } });
      if (!product) throw new NotFoundError();
      await redis.set(cacheKey, JSON.stringify(product), "EX", 300);
      return product;
    } finally {
      await redis.del(lockKey); // Always release, even if the fetch throws
    }
  }
 
  // Another request has the lock — wait and retry
  await sleep(50); // 50ms
  return getProductWithLock(id); // Retry (should hit cache this time)
}
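Within a single process you can get the same effect without a distributed lock by deduplicating concurrent fetches on a shared promise (often called "singleflight"). This is a complementary sketch, not part of the pattern above; it coalesces callers inside one Node process, while the Redis lock coordinates across processes:

```typescript
// In-flight request map: concurrent callers for the same key share one promise.
const inflight = new Map<string, Promise<unknown>>();

async function singleflight<T>(key: string, load: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>; // join the fetch already in progress

  const p = load().finally(() => inflight.delete(key)); // clear once settled
  inflight.set(key, p);
  return p;
}
```

With this in place, a thousand concurrent misses for the same key inside one process trigger exactly one database query; the other 999 callers await the same promise.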

Pattern 4: Stale-While-Revalidate

Serve stale data immediately while refreshing in the background:

async function getProductSWR(id: string): Promise<Product> {
  const cacheKey = `product:${id}`;
  const staleKey = `product:${id}:stale_after`;
 
  const cached = await redis.get(cacheKey);
  if (!cached) {
    // Full cache miss — must fetch synchronously
    return fetchAndCache(id);
  }
 
  const product = JSON.parse(cached);
  const isStale = !(await redis.exists(staleKey));
 
  if (isStale) {
    // Data is stale — serve it now, refresh in background
    refreshInBackground(id); // Fire and forget
  }
 
  return product; // Always returns immediately
}
 
function refreshInBackground(id: string) {
  // Fire and forget — a failed refresh must never crash the request path
  fetchAndCache(id).catch((err) => console.warn("SWR refresh failed", id, err));
}
 
async function fetchAndCache(id: string) {
  const product = await db.products.findUnique({ where: { id } });
  if (!product) throw new NotFoundError();
  await redis.set(`product:${id}`, JSON.stringify(product), "EX", 3600); // 1h hard TTL
  await redis.set(`product:${id}:stale_after`, "1", "EX", 300); // 5m soft TTL
  return product;
}

The result: users always get a fast response. Data is refreshed roughly every 5 minutes, and a synchronous cache miss only happens for keys that nobody has read in an hour (the hard TTL). This is the same idea behind the CDN stale-while-revalidate Cache-Control extension.

Cache Key Design

Bad cache keys cause subtle bugs. Use a consistent naming convention:

// Cache key conventions
const keys = {
  // Entity cache: type:id
  product: (id: string) => `product:${id}`,
  user: (id: string) => `user:${id}`,
  
  // List cache: type:list:params (sorted, deterministic)
  productList: (filters: Record<string, string | number>) => {
    const sorted = Object.keys(filters).sort()
      .map(k => `${k}=${filters[k]}`).join(":");
    return `product:list:${sorted}`;
  },
  
  // Computed cache: type:computed:id
  orderTotal: (id: string) => `order:total:${id}`,
  
  // Add version prefix when schema changes
  // v2:product:123 (bump when Product shape changes)
};
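The version-prefix comment can be taken one step further: keep a collection-wide version number (e.g. incremented in Redis on every product write) and bake it into every list key. Bumping the version orphans all cached lists at once, so you never have to enumerate every filter combination to invalidate them; the old keys simply age out via TTL. A sketch with illustrative names, not code from the article:

```typescript
// Build a deterministic, versioned list-cache key. Sorting the filter keys
// guarantees the same filters always map to the same key regardless of
// object property order; bumping `version` invalidates every list at once.
function listCacheKey(
  version: number,
  filters: Record<string, string | number>
): string {
  const sorted = Object.keys(filters)
    .sort()
    .map((k) => `${k}=${filters[k]}`)
    .join("&"); // "&" between pairs avoids colliding with the ":" namespace separator
  return `v${version}:product:list:${sorted}`;
}
```

For example, `listCacheKey(2, { page: 1, category: "shoes" })` yields `v2:product:list:category=shoes&page=1`, and the same filters in any order yield the same key.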

When NOT to Cache

Caching isn't always the answer:

Don't cache:
  → Data that changes every request (real-time inventory at checkout)
  → Data that must be consistent (account balances, payment status)
  → Data that's cheap to compute (simple DB lookups with indexes)
  → User-specific data with low reuse (each user sees it once)

Do cache:
  → Expensive computations (aggregations, ML predictions)
  → Shared data with high read frequency (product catalog, config)
  → External API responses (rate-limited third-party data)
  → Session data (authentication, user preferences)

The best caching strategy is the one you don't need. Optimize your database queries first. Add indexes. Use connection pooling. If the database is still the bottleneck after that, then cache — with the pattern that fits how the data is actually read and written.
