The Incident Retro That Actually Prevents the Next Incident

January 17, 2026 · ScaledByDesign
incidents · post-mortem · reliability · engineering

The Retro That Changes Nothing

The site went down for 45 minutes. Customers complained. Leadership panicked. You held a post-mortem meeting. Someone wrote a document. Action items were assigned. And then... nothing changed. The same class of incident happened again two months later.

The problem isn't that you didn't do a retro. It's that your retro format is designed to assign blame, not prevent recurrence.

Why Post-Mortems Fail

They Focus on "Who" Instead of "What"

Bad retro:
  "The incident was caused by Sarah deploying untested code."

Good retro:
  "The deployment pipeline allowed untested code to reach
   production because our staging environment was out of
   sync with production config."

When you focus on who, people hide mistakes. When you focus on what, people share information that prevents the next incident.

They Produce Action Items Nobody Tracks

Action items from the last 5 retros:
  [x] Add monitoring for payment service (done, 2 weeks late)
  [ ] Fix staging environment parity (assigned, never started)
  [ ] Add integration tests for checkout (assigned, deprioritized)
  [ ] Update runbook for database failover (not assigned)
  [ ] Review rate limiting configuration (not assigned)

4 of 5 action items never happened.
That's a 20% completion rate. Typical.

They're Too Long

A 2-hour post-mortem meeting with 15 people produces a 10-page document that nobody reads. The signal-to-noise ratio is terrible.

The Format That Works

Part 1: The Timeline (10 minutes)

## Timeline
 
| Time (UTC) | Event |
|------------|-------|
| 14:23 | Deploy #4521 pushed to production |
| 14:25 | Error rate spikes from 0.1% to 15% |
| 14:27 | PagerDuty alerts on-call engineer (Alex) |
| 14:31 | Alex begins investigation |
| 14:38 | Root cause identified: new payment endpoint returning 500s |
| 14:42 | Decision: rollback deploy #4521 |
| 14:45 | Rollback initiated |
| 14:48 | Rollback complete, error rate returning to normal |
| 14:55 | Confirmed: all systems nominal |
 
Duration: 32 minutes (detect: 4 min, investigate: 11 min, resolve: 17 min)

Facts only. No opinions. No blame. Just what happened and when.
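
A small bonus of writing the timeline as timestamps: the detect/investigate/resolve breakdown falls out of a few lines of arithmetic instead of mental math. A minimal Python sketch (the calendar date and variable names are made up; only the clock times come from the table above):

from datetime import datetime

# Key timestamps from the timeline above (UTC). The calendar date is invented;
# in practice you'd pull these from your pager or deploy tooling.
deployed  = datetime(2026, 1, 15, 14, 23)  # deploy #4521 pushed to production
alerted   = datetime(2026, 1, 15, 14, 27)  # PagerDuty pages the on-call engineer
diagnosed = datetime(2026, 1, 15, 14, 38)  # root cause identified
resolved  = datetime(2026, 1, 15, 14, 55)  # confirmed all systems nominal

def minutes(start: datetime, end: datetime) -> int:
    return int((end - start).total_seconds() // 60)

detect      = minutes(deployed, alerted)    # 4 min
investigate = minutes(alerted, diagnosed)   # 11 min
resolve     = minutes(diagnosed, resolved)  # 17 min

print(f"Duration: {detect + investigate + resolve} minutes "
      f"(detect: {detect} min, investigate: {investigate} min, resolve: {resolve} min)")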

Part 2: The Five Whys (15 minutes)

Why did the site go down?
  → Payment endpoint returned 500 errors

Why did the payment endpoint fail?
  → New code assumed a field that doesn't exist in production data

Why wasn't this caught in testing?
  → Staging database has different data shape than production

Why is staging data different from production?
  → Staging was seeded 8 months ago and never refreshed

Why was staging never refreshed?
  → No automated process exists; it's a manual task nobody owns

ROOT CAUSE: No automated staging data refresh process

Keep going until you hit a systemic cause — something about processes, tools, or systems, not people.

Part 3: What Went Well (5 minutes)

✅ PagerDuty alert fired within 2 minutes
✅ On-call engineer responded within 4 minutes
✅ Rollback was clean and fast (3 minutes)
✅ Customer communication went out within 15 minutes

This matters. If you only focus on what went wrong, you'll accidentally break the things that went right.

Part 4: Action Items (15 minutes)

Maximum 3 action items. Every action item must have an owner, a due date, a tracking ticket, and the specific failure it prevents (a validation sketch follows the example):

Action items:

1. Automate staging data refresh (weekly sync from anonymized production)
   Owner: Jordan
   Due: Feb 14, 2026
   Tracks in: JIRA-4521
   Prevents: Data shape mismatches between staging and production

2. Add production data shape validation to CI pipeline
   Owner: Sarah
   Due: Feb 7, 2026
   Tracks in: JIRA-4522
   Prevents: Code that assumes fields exist without validating them

3. Add pre-deploy smoke test that hits payment endpoints
   Owner: Alex
   Due: Feb 10, 2026
   Tracks in: JIRA-4523
   Prevents: Payment failures reaching customers
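
If you want to enforce that structure rather than just recommend it, a few lines of code can reject incomplete items before they land in the retro doc. A minimal Python sketch (the ActionItem shape and validate helper are illustrative, not tied to any tracker):

from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    description: str
    owner: str      # a named person, not a team
    due: date
    ticket: str     # wherever your team tracks work, e.g. "JIRA-4521"
    prevents: str   # the failure mode this item eliminates

def validate(items: list[ActionItem]) -> None:
    # The two rules from this section: at most 3 items, no empty fields.
    assert len(items) <= 3, "A retro gets at most 3 action items"
    for item in items:
        for field_name, value in vars(item).items():
            assert value, f"{item.description!r} is missing {field_name}"

validate([
    ActionItem(
        description="Automate staging data refresh",
        owner="Jordan",
        due=date(2026, 2, 14),
        ticket="JIRA-4521",
        prevents="Data shape mismatches between staging and production",
    ),
])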

Why Maximum 3

5+ action items from a retro: 30% completion rate
3 action items from a retro: 80% completion rate

Fewer items, higher completion, more prevention. The goal isn't to fix everything. It's to fix the most impactful things and actually follow through.

The Blameless Culture

What Blameless Means (And Doesn't Mean)

Blameless means:
  ✓ "The system allowed this failure" not "Sarah caused this failure"
  ✓ Asking "how do we prevent this?" not "whose fault is this?"
  ✓ Rewarding people who report incidents quickly
  ✓ Sharing incidents openly (not hiding them from leadership)

Blameless does NOT mean:
  ✗ Nobody is accountable for action items
  ✗ Repeated negligence is ignored
  ✗ "It's nobody's fault" (it's the system's fault, and we fix systems)

The Severity Framework

SEV1: Customer-facing outage > 30 minutes
  → Retro within 48 hours, VP-level visibility
  → 3 action items, tracked weekly until complete

SEV2: Customer-facing degradation or internal outage
  → Retro within 1 week
  → 2-3 action items, tracked biweekly

SEV3: Near-miss or minor degradation
  → Written analysis (no meeting needed)
  → 1-2 action items

SEV4: Anomaly detected, no customer impact
  → Slack thread discussion, optional action items
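
Encoding the framework keeps the decision mechanical when people are tired after an incident. A sketch in Python, with the deadlines and item counts taken from the table above (anything the table doesn't specify is left as None):

# The severity framework above as data, so the response is mechanical.
RETRO_POLICY = {
    "SEV1": {"retro_within_hours": 48, "format": "meeting, VP-level visibility",
             "max_action_items": 3, "tracking": "weekly until complete"},
    "SEV2": {"retro_within_hours": 7 * 24, "format": "meeting",
             "max_action_items": 3, "tracking": "biweekly"},
    "SEV3": {"retro_within_hours": None, "format": "written analysis, no meeting",
             "max_action_items": 2, "tracking": None},
    "SEV4": {"retro_within_hours": None, "format": "Slack thread discussion",
             "max_action_items": None, "tracking": "optional"},
}

def retro_requirements(severity: str) -> dict:
    """e.g. retro_requirements("sev1") -> deadline, format, item cap, tracking."""
    return RETRO_POLICY[severity.upper()]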

Tracking Action Item Completion

Monthly Incident Review:

Incidents this month: 3 (SEV2: 1, SEV3: 2)
Action items created: 7
Action items completed: 6 (86%)
Action items overdue: 1 (reassigned, new date set)

Repeat incidents (same root cause as previous): 0 ✅

Trend (last 6 months):
  Sep: 8 incidents
  Oct: 6 incidents
  Nov: 5 incidents
  Dec: 4 incidents
  Jan: 3 incidents
  Feb: 3 incidents (on pace)

If repeat incidents are happening, your retro process is broken. Either the action items aren't completing, or they're not addressing the real root cause.
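
Both numbers in that review are trivial to compute if incidents and action items live anywhere structured. A minimal Python sketch (the record format is invented for illustration; swap in however your team actually stores this):

# Made-up records for one month; adapt the shape to wherever your data lives.
incidents = [
    {"id": "INC-101", "severity": "SEV2", "root_cause": "staging data drift"},
    {"id": "INC-102", "severity": "SEV3", "root_cause": "missing rate limit"},
    {"id": "INC-103", "severity": "SEV3", "root_cause": "expired TLS cert"},
]
action_items = [
    {"ticket": "JIRA-4521", "done": True},
    {"ticket": "JIRA-4522", "done": True},
    {"ticket": "JIRA-4523", "done": False},  # overdue: reassign and set a new date
]
root_causes_of_past_incidents = {"checkout timeout", "misrouted page"}

completed = sum(item["done"] for item in action_items)
repeats = [i for i in incidents if i["root_cause"] in root_causes_of_past_incidents]

print(f"Action items completed: {completed}/{len(action_items)} "
      f"({completed / len(action_items):.0%})")
print(f"Repeat incidents (same root cause as previous): {len(repeats)}")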

Start This Week

For your next incident (or the most recent one), do the following (a copy-paste skeleton follows the list):

  1. Write the timeline — facts and timestamps only
  2. Run the Five Whys until you hit a systemic cause
  3. Pick exactly 3 action items with owners and due dates
  4. Track completion weekly until all 3 are done
  5. Share the retro document with the entire engineering team
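
Here is a skeleton retro document that follows the four parts above, if you want something to paste into your docs tool (the headings mirror this article; fill in the angle-bracket placeholders):

# Incident Retro: <short title> (SEV_)
Date: <date>   Duration: <minutes>   Facilitator: <name>

## Timeline
| Time (UTC) | Event |
|------------|-------|
| ...        | ...   |
Duration: __ minutes (detect: __, investigate: __, resolve: __)

## Five Whys
Why ...? → ...
(repeat until you reach a systemic cause)
ROOT CAUSE: <a process, tool, or system, never a person>

## What Went Well
- ...

## Action Items (maximum 3)
1. <action> | Owner: <name> | Due: <date> | Tracks in: <ticket> | Prevents: <failure mode>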

The goal of incident response isn't to never have incidents — it's to never have the same incident twice. A good retro process makes that possible.
