Counterfactual analysis: What would've happened

Mon Jun 23 2025

Ever wondered if that feature you shipped last quarter actually made a difference? Or whether your latest marketing campaign was worth the budget? We all make decisions based on gut feelings and surface-level metrics, but there's a better way to understand what truly drives results.

That's where counterfactual analysis comes in - a fancy term for a simple idea: figuring out what would have happened if you'd made a different choice. It's the difference between knowing your numbers went up and understanding why they went up. And once you get the hang of it, you'll start seeing opportunities to make smarter decisions everywhere.

The essence of counterfactual analysis

At its core, counterfactual analysis is about asking "what if?" But not in a daydreaming way - in a rigorous, data-driven way that actually tells you something useful. You're essentially creating parallel universes where different decisions were made, then comparing them to reality.

Think about it like this: your product team launches a new onboarding flow and signups increase by 20%. Great news, right? But here's the thing - maybe signups were already trending up because of seasonal patterns. Or maybe a competitor just had a major outage. Without counterfactual analysis, you're just guessing at causation.

The Reddit community actually has some interesting debates about the distinction between counterfactuals and regular hypotheticals. Hypotheticals are forward-looking ("What if we add this feature?"), while counterfactuals look backward ("What if we hadn't added that feature?"). It sounds pedantic, but this distinction matters when you're trying to prove causality.

Data scientists like Rudrendu Paul have written extensively about how counterfactual modeling helps evaluate interventions. The basic idea is simple: compare what actually happened to what would have happened in an alternate timeline. The hard part is constructing that alternate timeline in a believable way.

This isn't just academic navel-gazing. Companies use counterfactual analysis to make million-dollar decisions about product launches, marketing spend, and strategic pivots. The philosophy of science community points out that this kind of reasoning is fundamental to how we understand cause and effect in any domain.

Applying counterfactual analysis in product development

Let's get practical. Say you're running growth at a SaaS company and you just revamped your pricing page. Conversions jumped 15%. Time to celebrate and roll it out globally?

Not so fast. With counterfactual analysis, you'd ask: what would conversions have looked like if we hadn't changed the pricing page? Maybe you'd discover that conversions always spike this time of year. Or that a recent product update was the real driver. As Paul explains in his guide to counterfactual modeling, incorporating this thinking into A/B tests helps you move beyond correlation to actual causation.
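
To make that concrete, here's a minimal sketch of one simple way to build a counterfactual baseline: project the pre-change trend forward and compare it to what actually happened. The numbers are made up, and in practice you'd want a proper forecasting model or a holdout group, but it shows the shape of the reasoning:

```python
import numpy as np

# Hypothetical weekly conversion rates (%) for the eight weeks before the
# pricing-page change and the four weeks after it.
pre_change = np.array([3.1, 3.2, 3.4, 3.3, 3.5, 3.6, 3.7, 3.8])
post_change = np.array([4.3, 4.4, 4.5, 4.6])

# Naive counterfactual: extend the pre-change linear trend forward, i.e. what
# conversions might have done if we'd shipped nothing at all.
weeks_pre = np.arange(len(pre_change))
slope, intercept = np.polyfit(weeks_pre, pre_change, deg=1)
weeks_post = np.arange(len(pre_change), len(pre_change) + len(post_change))
counterfactual = intercept + slope * weeks_post

# The estimated lift is the gap between reality and the counterfactual,
# not the raw before/after difference.
print(f"naive before/after lift: {post_change.mean() - pre_change.mean():.2f} pts")
print(f"trend-adjusted lift:     {(post_change - counterfactual).mean():.2f} pts")
```

With these toy numbers, most of the apparent jump was already baked into the trend - exactly the kind of thing a naive before/after comparison hides.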

Here's a real-world scenario: you're building an e-commerce recommendation engine. Version A shows bestsellers, Version B uses collaborative filtering. Version B wins by 8% in your A/B test. But counterfactual analysis might reveal something interesting - like Version B only wins for users who've made 3+ previous purchases. For new users, it actually performs worse. Without that counterfactual lens, you'd have made a blanket change that hurts part of your user base.
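
Here's a quick sketch of what that segment-level readout might look like. The data is simulated and the "3+ previous purchases" cutoff is hypothetical - the point is just to show how an overall winner can hide an interaction:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical experiment log: which variant each user saw, how many prior
# purchases they had, and whether they converted. Conversion is simulated so
# that variant B helps repeat buyers but slightly hurts new users.
variant = rng.choice(["A", "B"], size=n)
prior_purchases = rng.poisson(2, size=n)
segment = np.where(prior_purchases >= 3, "3+ purchases", "0-2 purchases")
base_rate = np.where(segment == "3+ purchases", 0.12, 0.08)
effect = np.where(
    variant == "B",
    np.where(segment == "3+ purchases", 0.03, -0.01),
    0.0,
)
converted = rng.random(n) < base_rate + effect

df = pd.DataFrame({"variant": variant, "segment": segment, "converted": converted})

# The overall comparison hides the interaction; the segment breakdown reveals it.
print(df.groupby("variant")["converted"].mean())
print(df.groupby(["segment", "variant"])["converted"].mean().unstack())
```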

The team at Inference.vc highlights how counterfactual thinking reveals these complex interactions between features. You shipped a new onboarding flow and a revamped dashboard in the same quarter. Retention improved. Which change deserves credit? Counterfactual analysis lets you isolate each effect by asking:

  • What if we'd only changed the onboarding?

  • What if we'd only changed the dashboard?

  • What if we'd changed neither?

This is exactly the kind of analysis that platforms like Statsig enable - running experiments that don't just tell you what happened, but why it happened and what would have happened under different conditions.
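
If you ran the changes as a 2x2 experiment - some users get each combination of old and new onboarding and dashboard - the readout for those three questions can be as simple as the sketch below. The retention numbers are made up:

```python
import pandas as pd

# Hypothetical quarterly retention (%) for each cell of a 2x2 experiment.
results = pd.DataFrame(
    {
        "onboarding": ["old", "old", "new", "new"],
        "dashboard":  ["old", "new", "old", "new"],
        "retention":  [40.0, 41.0, 45.0, 47.5],
    }
).set_index(["onboarding", "dashboard"])["retention"]

neither = results.loc[("old", "old")]          # what if we'd changed neither?
only_onboarding = results.loc[("new", "old")]  # what if we'd only changed onboarding?
only_dashboard = results.loc[("old", "new")]   # what if we'd only changed the dashboard?
both = results.loc[("new", "new")]             # what actually shipped

onboarding_lift = only_onboarding - neither
dashboard_lift = only_dashboard - neither
interaction = (both - neither) - (onboarding_lift + dashboard_lift)

print(f"onboarding alone: {onboarding_lift:+.1f} pts")
print(f"dashboard alone:  {dashboard_lift:+.1f} pts")
print(f"interaction:      {interaction:+.1f} pts")
```

The interaction term is the part of the combined lift that neither change explains on its own - often the most interesting number in the readout.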

Methodologies and challenges in counterfactual modeling

Now for the reality check: counterfactual analysis isn't magic. It's hard work that requires the right tools and a healthy dose of skepticism about your assumptions.

The most common approaches include propensity score matching and inverse probability weighting - basically fancy ways of finding comparable groups to simulate your alternate timeline. Teams also use structural equation models when they need to model complex relationships between variables.
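
Here's a minimal sketch of the inverse probability weighting idea, using simulated data and scikit-learn - both assumptions on my part; your stack and your confounders will differ. The point is that weighting each user by the propensity to be "treated" reconstructs a fairer comparison than the raw split:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical observational data: heavy users are more likely to adopt the
# new feature AND more likely to retain, so a raw comparison is confounded.
usage = rng.normal(0, 1, n)                          # confounder
adopted = rng.random(n) < 1 / (1 + np.exp(-usage))   # adoption depends on usage
retained = rng.random(n) < 0.3 + 0.1 * adopted + 0.15 * (usage > 0)

df = pd.DataFrame({"usage": usage, "adopted": adopted, "retained": retained})

# 1. Model each user's propensity to adopt from observed covariates.
propensity = (
    LogisticRegression()
    .fit(df[["usage"]], df["adopted"])
    .predict_proba(df[["usage"]])[:, 1]
)

# 2. Weight each user by the inverse probability of the group they ended up in,
#    which simulates the alternate timeline where adoption is unconfounded.
weights = np.where(df["adopted"], 1 / propensity, 1 / (1 - propensity))

naive = df[df["adopted"]]["retained"].mean() - df[~df["adopted"]]["retained"].mean()
weighted_treated = np.average(df["retained"], weights=weights * df["adopted"])
weighted_control = np.average(df["retained"], weights=weights * ~df["adopted"])

print(f"naive difference: {naive:.3f}")
print(f"IPW estimate:     {weighted_treated - weighted_control:.3f}")
```

In this toy setup the naive difference overstates the effect because adopters were already heavier users; the weighted estimate lands much closer to the true simulated lift.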

But here's what usually goes wrong:

  • Not enough data: You need substantial historical data to build credible counterfactuals

  • Hidden variables: That uptick might be due to something you're not even tracking

  • Overfitting the narrative: It's tempting to construct counterfactuals that support your preconceived notions

The key is to be ruthlessly honest about your assumptions. Start simple: define your question clearly, identify the relevant variables, and test whether your counterfactual scenarios actually make sense. One product manager I know always asks their team: "If we showed this analysis to someone who disagreed with our decision, would they find it convincing?"

Smart teams also build in sanity checks. If your counterfactual model says that removing a minor UI element would have tanked revenue by 50%, something's probably wrong with your model, not your UI.

Counterfactual thinking in strategic decision-making

Counterfactual analysis really shines when the stakes are high and controlled experiments aren't possible. How do you evaluate a nationwide policy change? How do you assess disaster preparedness when (hopefully) the disaster never happens?

Economists have been using these techniques for decades. As detailed in various economic studies, they simulate alternate economic scenarios to understand the true impact of policies. Did that stimulus package actually prevent a recession? What would unemployment look like without it?
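
One classic tool in that econometric toolbox is difference-in-differences: use a comparable region that didn't get the policy as the stand-in for the alternate timeline. Here's a toy sketch with made-up unemployment numbers (not drawn from any of the studies above):

```python
import pandas as pd

# Hypothetical unemployment rates (%) for a region that adopted a policy and a
# comparable region that didn't, before and after the policy took effect.
data = pd.DataFrame(
    {
        "region": ["policy", "policy", "control", "control"],
        "period": ["before", "after", "before", "after"],
        "unemployment": [7.0, 6.2, 6.8, 6.9],
    }
).set_index(["region", "period"])["unemployment"]

# The control region's change stands in for the counterfactual trend the
# policy region would have followed anyway.
policy_change = data.loc[("policy", "after")] - data.loc[("policy", "before")]
control_change = data.loc[("control", "after")] - data.loc[("control", "before")]

print(f"policy region change:  {policy_change:+.1f} pts")
print(f"control region change: {control_change:+.1f} pts")
print(f"estimated effect:      {policy_change - control_change:+.1f} pts")
```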

The disaster management community takes this even further. According to prevention experts, counterfactual analysis helps identify "near misses" - situations where disasters were narrowly avoided. By studying what almost went wrong, you can improve systems before actual failures occur.

Even historians are getting in on the action. The philosophy of science community discusses how counterfactual thinking illuminates historical causality. Sure, some of it devolves into "what if Napoleon had won at Waterloo" speculation, but serious historical analysis uses counterfactuals to test theories about why events unfolded as they did.

For product teams, the lesson is clear: counterfactual thinking isn't just for post-mortems. Use it proactively to stress-test your strategy. Before that big product pivot, ask: what would happen if we didn't pivot? What if we pivoted differently? What if our main competitor pivots first?

Closing thoughts

Counterfactual analysis boils down to a simple but powerful idea: to really understand impact, you need to know what would have happened otherwise. It's the difference between correlation and causation, between lucky guesses and informed decisions.

The good news is you don't need a PhD in statistics to start thinking counterfactually. Start small - next time you're reviewing experiment results, ask yourself what would have happened in the control scenario. Challenge assumptions. Question narratives. Build the muscle of thinking in alternate timelines.

Want to dive deeper? Check out:

  • Judea Pearl's work on causal inference for the theoretical foundations

  • Statsig's experimentation platform for practical tools to run counterfactual analyses

  • Your own historical data - sometimes the best teacher is your own past experiments

Remember: every decision you make is essentially a bet on one timeline versus all possible alternatives. Counterfactual analysis just helps you make better bets. Hope you find this useful!
