Risk vs odds ratios: Choosing metrics

Mon Jun 23 2025

Ever wonder why your A/B test results sometimes feel off? Like when you're told users are "3x more likely" to click something, but the actual numbers don't quite add up? You're probably looking at an odds ratio when you think you're seeing a risk ratio.

This mix-up happens all the time in product analytics, and it can lead to some pretty bad decisions. Let's clear up the confusion once and for all.

The importance of risk and odds in statistical analysis

Here's the thing: risk and odds sound like they should be the same, but they're not. Risk is just the probability of something happening - dead simple. If 20 out of 100 users click your button, that's a 20% risk (or probability) of clicking.

Odds are weirder. They compare how likely something is to happen versus not happen. So those same 20 clickers? The odds are 20:80, or 0.25. See how that's different from 0.20?
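If it helps to see it in code, here's a quick sketch of both calculations for that same example:

```python
# Risk vs. odds for the same 20-clicks-out-of-100 example.
clicks, users = 20, 100

risk = clicks / users              # probability of clicking: 20/100
odds = clicks / (users - clicks)   # clickers vs. non-clickers: 20/80

print(f"risk = {risk:.2f}")   # 0.20
print(f"odds = {odds:.2f}")   # 0.25
```

Same 20 clickers, two different numbers.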

When probabilities are tiny - say, 2 in 100 - risk and odds are practically twins. But once you get above 10-20% probability, they start to diverge like crazy. I learned this the hard way when analyzing conversion rates for a checkout flow redesign. We thought we'd improved conversions by 50% based on the odds ratio, but the actual risk increase was only about 30%. Big difference when you're forecasting revenue.

The NIH has a great example with a medical trial that really drives this home. They compared two treatments for esophageal bleeding (stay with me here). The death risk was 21% for one treatment and 15% for another. Simple enough. But the odds? 0.27 versus 0.18. Suddenly that "small" difference looks bigger.
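Those NIH numbers are just the risk-to-odds conversion applied twice. A one-liner does it:

```python
def odds_from_risk(p: float) -> float:
    """Convert a probability (risk) to odds: p / (1 - p)."""
    return p / (1 - p)

# The two treatments from the esophageal-bleeding trial:
print(round(odds_from_risk(0.21), 2))  # 0.27
print(round(odds_from_risk(0.15), 2))  # 0.18
```

Notice how a 6-percentage-point gap in risk becomes a 0.09 gap in odds.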

This distinction matters because we instinctively think in terms of risk, but many statistical tools give us odds. Logistic regression? Odds. Case-control studies? Odds. That fancy ML model predicting user churn? Probably odds.

Risk ratios vs odds ratios: Understanding the difference

Alright, so we've got risk and odds sorted out. Now let's talk about what happens when you compare them between groups - because that's where things get really interesting.

A risk ratio (or relative risk) is beautifully straightforward. If 40% of power users complete onboarding versus 20% of regular users, your risk ratio is 2.0. Power users are literally twice as likely to complete onboarding. Easy.

Odds ratios? Not so intuitive. Using the same example: power users have 40:60 odds (0.67) while regular users have 20:80 odds (0.25). The odds ratio is (0.40/0.60) / (0.20/0.80) ≈ 2.67. So the odds are 2.67 times higher, but the actual risk is only 2 times higher.
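You can verify the arithmetic with two small helpers (names are mine, not from any particular library):

```python
def risk_ratio(p_treat: float, p_ctrl: float) -> float:
    """Ratio of two probabilities (relative risk)."""
    return p_treat / p_ctrl

def odds_ratio(p_treat: float, p_ctrl: float) -> float:
    """Ratio of the two groups' odds."""
    return (p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl))

p_power, p_regular = 0.40, 0.20
print(round(risk_ratio(p_power, p_regular), 2))   # 2.0
print(round(odds_ratio(p_power, p_regular), 2))   # 2.67
```

Same data, and the odds ratio overstates the risk ratio by a third.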

This gap gets worse as your base rates increase. The team at Our World in Data has a fantastic visualization showing how a 3x odds ratio can mean anything from a 2.7x risk ratio (when events are rare) to just a 1.5x risk ratio (when events are common).

I've seen this cause major confusion in product teams. A PM once came to me excited about a "3x improvement" in feature adoption from an experiment. Turns out that was the odds ratio from their logistic regression. The actual risk ratio? 1.8x. Still good, but not quite the slam dunk they thought.

The statistics subreddit actually has some great discussions on this if you want to dive deeper. Real practitioners sharing war stories about misinterpreted results.

Choosing the right metric for your analysis

So when should you use each one? The answer depends on what question you're trying to answer and what data you have.

Risk ratios are your friend when:

  • You're running a prospective study (like an A/B test)

  • You know the actual probabilities in each group

  • You want results that are easy to explain to stakeholders

  • The outcome is relatively rare (under 10%)

Go with odds ratios when:

  • You're doing a case-control study (working backwards from outcomes)

  • You're using logistic regression

  • You need to control for confounding variables

  • You're okay with some interpretation complexity

Here's my practical advice: if you can calculate a risk ratio, do it. It's almost always easier to understand and explain. "Users are 1.5x more likely to subscribe" beats "the odds of subscribing are 2.3x higher" every time.

But sometimes you're stuck with odds ratios. Maybe you're using a tool that only outputs ORs, or you're working with historical data where you can't reconstruct the full population. That's fine - just be crystal clear about what you're reporting.

The Reddit statistics community has tons of examples where people mixed these up. One poor soul spent weeks building a model predicting "2x higher risk" of churn, only to realize they meant 2x higher odds. The actual risk increase was about 40%. Ouch.

Applying risk and odds ratios in product experiments

Let's get practical. How do you actually use this stuff when running experiments?

First, pick your metric before you start. I've seen too many teams run an experiment, get confusing results, then try to figure out which metric makes their results look best. That's not how this works.

For most A/B tests, risk ratios are your best bet. You're comparing conversion rates, click rates, or retention rates between variants. These are all probabilities, so risk ratios make perfect sense. Plus, they're easy to explain: "The new design increased purchases by 15%" is a risk-based statement everyone understands.

But watch out for these common traps:

  • Using the wrong baseline: Always divide treatment by control, not the other way around

  • Ignoring confidence intervals: A risk ratio of 1.2 with a CI of [0.8, 1.6] isn't actually significant - the interval includes 1, so you can't rule out no effect

  • Mixing metrics: Comparing an odds ratio from last month's test to a risk ratio from this month's test

  • Forgetting about base rates: A 2x improvement sounds amazing until you realize you went from 0.1% to 0.2%
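For the confidence-interval trap in particular, here's a sketch of the standard log-normal approximation for a risk ratio CI (assuming raw counts from both variants; the function name is mine):

```python
import math

def risk_ratio_ci(x_treat: int, n_treat: int,
                  x_ctrl: int, n_ctrl: int, z: float = 1.96):
    """Risk ratio with an approximate 95% CI (log-normal method).
    x_* = conversions, n_* = users in each group."""
    rr = (x_treat / n_treat) / (x_ctrl / n_ctrl)
    # Standard error of log(RR) for a 2x2 table:
    se = math.sqrt(1/x_treat - 1/n_treat + 1/x_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(120, 1000, 100, 1000)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Here a seemingly healthy 1.2x lift comes with an interval that crosses 1 - exactly the situation the trap above warns about.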

The team at Statsig actually handles this really well in their experimentation platform - they default to showing percentage changes (which are risk-based) but let you dig into odds ratios when you need them for more complex analyses.

Here's what I do: calculate both metrics, but lead with the risk ratio. Then mention the odds ratio if it adds useful context. Something like: "Conversion increased by 25% (from 4% to 5%), with an odds ratio of 1.26." This gives the full picture without confusion.
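A tiny helper makes this reporting habit easy to stick to (a sketch; `summarize` is a name I made up):

```python
def summarize(p_ctrl: float, p_treat: float) -> str:
    """Report the risk-based lift first, odds ratio as context."""
    rr = p_treat / p_ctrl
    oratio = (p_treat / (1 - p_treat)) / (p_ctrl / (1 - p_ctrl))
    lift = (rr - 1) * 100
    return (f"Conversion changed by {lift:+.0f}% "
            f"(from {p_ctrl:.0%} to {p_treat:.0%}), "
            f"odds ratio {oratio:.2f}.")

print(summarize(0.04, 0.05))
```

One function call, and both numbers land in the writeup in the right order.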

Remember, the goal isn't to be a statistics purist. It's to make good decisions based on data. Sometimes that means simplifying things for your audience. Sometimes it means diving deep into the nuances. The key is knowing when to do which.

Closing thoughts

Understanding the difference between risk and odds (and their ratios) isn't just academic - it's the difference between making good decisions and accidentally misleading yourself or your team. Next time someone throws around phrases like "3x more likely," ask yourself: is that risk or odds?

The good news is that once you get this distinction, it sticks. You'll start catching these mix-ups everywhere, from research papers to quarterly business reviews. And you'll be able to make better sense of your own experimental results.

Hope you find this useful! And remember - when in doubt, just calculate both and be clear about which one you're using. Your future self will thank you.
