NNT: How many users need to try a feature?

Mon Jun 23 2025

You know that sinking feeling when you realize the feature you spent months building has barely any users? It happens more often than we'd like to admit - in fact, about 80% of features end up collecting digital dust.

Here's the thing: there's actually a way to predict which features will flop before you waste all that time and energy. It's called the "Number Needed to Use" framework, borrowed from healthcare's NNT concept, and it's changing how smart product teams prioritize their roadmaps.

Applying NNT to feature development

In healthcare, doctors use NNT (Number Needed to Treat) to figure out if a treatment is worth it. If you need to treat 100 patients for one to benefit, maybe that's not the best use of resources. The same logic applies to features.

Think of it as "Number Needed to Use" (NNU). Let's say you're building a collaboration feature. If 100 users need to try it before one actually adopts it into their workflow, that's an NNU of 100. Compare that to a feature where, for every 10 users who try it, one becomes a daily active user - that's an NNU of 10. Which one deserves your engineering resources?
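At its core, NNU is just the inverse of your trial-to-adoption rate. Here's a minimal sketch in Python - the function and the numbers are illustrative, not from any real dashboard:

```python
def nnu(users_tried: int, users_adopted: int) -> float:
    """Number Needed to Use: how many users must try a feature
    for one to adopt it into their workflow."""
    if users_adopted == 0:
        return float("inf")  # nobody adopted, so NNU is effectively infinite
    return users_tried / users_adopted

# The two hypothetical features from above
print(nnu(users_tried=100, users_adopted=1))  # 100.0 -> probably not worth it
print(nnu(users_tried=10, users_adopted=1))   # 10.0  -> a much better bet
```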

The Reddit community constantly debates this. One thread pointed out that 80% of features are rarely or never used, which honestly tracks with what most of us see in our analytics dashboards. Another discussion revealed that most people only use 10% of design features - sobering when you think about all those late nights perfecting that advanced settings panel.

This is why calculating NNU before you build can save you from feature bloat. Picture a team building a project management tool. They have to decide: do we build 50 features because competitors have them, or focus on the 5 that actually drive adoption?

The answer becomes obvious when you run the numbers. Lenny's Newsletter makes a great case for accelerating growth by focusing on existing features rather than constantly shipping new ones. If an existing feature is underperforming - lots of trials, few adopters, a high NNU - fixing it will likely have 10x more impact than building something new from scratch.

The hidden costs of unused features

Let's talk about what those unused features are actually costing you. It's not just the initial build time - though that stings enough when you realize 80% of features are rarely or never used.

Every feature you ship becomes a permanent resident in your codebase. It needs maintenance, testing, and updates every time you make changes elsewhere. Your engineers know this pain: "We can't update the authentication system because it'll break that export feature three people use." Sound familiar?

But here's what really hurts: feature clutter kills user experience. New users open your app and get hit with 47 different options when they really just need 3 things to get started. They bounce, and you're left wondering why your activation rates tank despite having "everything users asked for."

The ARIA framework that Lenny talks about offers a way out. Instead of building new features, you can:

  • Analyze what users actually do (spoiler: it's not what they say they want)

  • Reduce friction in your core features

  • Introduce features contextually when users need them

  • Assist users in discovering value they're already paying for

This approach helped companies like Statsig focus on making their existing experimentation features more accessible rather than building every analytics feature under the sun. The result? Higher engagement without the complexity tax.

Leveraging data to prioritize feature development

Analyzing feature usage

Here's where things get interesting. Most teams have the data - they just don't look at it the right way.

Start with three metrics that actually matter:

  • Adoption rate: What percentage of users even try the feature?

  • Activation rate: Of those who try it, who uses it successfully?

  • Retention rate: Who's still using it after 30 days?

If your shiny new feature has 5% adoption, 20% activation, and 10% retention, multiply those through (0.05 × 0.20 × 0.10) and you're looking at 0.1% of users actually getting value. That's an NNU of 1,000. Ouch.
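If you'd rather see that math as code than mental arithmetic, here's a quick sketch - the rates are the made-up ones from the example above:

```python
def funnel_nnu(adoption: float, activation: float, retention: float) -> float:
    """NNU implied by a three-stage funnel: the share of users who get
    durable value is the product of the three rates."""
    value_rate = adoption * activation * retention
    return float("inf") if value_rate == 0 else 1 / value_rate

# 5% adoption, 20% activation, 10% retention
print(funnel_nnu(0.05, 0.20, 0.10))  # 1000.0 -> an NNU of 1,000
```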

The 80% unused features stat makes more sense when you break it down this way. Features fail at different stages, and knowing where helps you fix them - or kill them gracefully.

Enhancing existing features

Remember the ARIA framework? Let's see it in action.

Netflix discovered that users weren't finding shows they'd love, despite having thousands of titles. Instead of adding more content filters (the PM's first instinct), they reduced friction by improving their recommendation algorithm. Same features, dramatically better outcomes.

The team at Lenny's Newsletter found that focusing on existing features often beats building new ones. They share examples where companies doubled key metrics just by:

  • Making features more discoverable

  • Simplifying complex workflows

  • Adding contextual help at the right moments

The medical analogy holds up here too. Doctors debate the NNT above which an intervention stops being worth it. For life-saving treatments, an NNT of 100 might be acceptable. For mild improvements? Maybe 10 is your limit. Same with features - a high NNU might be fine for your core differentiator, but not for that nice-to-have dashboard widget.
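One way to make that judgment call explicit is to give each feature tier an NNU budget. The tiers and limits below are assumptions to riff on, not industry standards:

```python
# Hypothetical NNU budgets by feature tier - tune these to your own product
NNU_BUDGET = {
    "core_differentiator": 100,  # like a life-saving treatment: a high NNU can still pay off
    "workflow_feature": 25,
    "nice_to_have": 10,          # that dashboard widget has to earn its keep
}

def worth_keeping(tier: str, observed_nnu: float) -> bool:
    """Is this feature performing within its tier's NNU budget?"""
    return observed_nnu <= NNU_BUDGET[tier]

print(worth_keeping("nice_to_have", observed_nnu=40))  # False -> fix it or kill it
```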

Optimizing features through experimentation and testing

A/B testing and feature gates

You can't improve what you don't measure, and you definitely can't measure what you release to everyone at once.

A/B testing isn't just for marketing teams anymore. Smart product teams use it to validate every significant feature before full rollout. The folks building that project management tool could test whether users actually want Gantt charts by giving them to 10% of users first.

Feature gates take this further; there's a minimal sketch of the mechanics right after this list. They let you:

  • Roll out gradually: Start with 1% of users, then 5%, then 10%

  • Target specific segments: Give power users early access

  • Protect with kill switches: Turn off broken features instantly

  • Measure real impact: Compare users with and without the feature
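As promised, here's that sketch. A basic percentage gate is just stable bucketing - this isn't any particular vendor's SDK, and the feature and user names are hypothetical:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into a gradual rollout.
    Hashing user + feature keeps assignments stable across sessions."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # a stable number in 0-99
    return bucket < percent

# Dial the same gate from 1% to 5% to 10% without re-bucketing anyone;
# setting percent to 0 doubles as the kill switch.
if in_rollout("user_42", "gantt_charts", percent=10):
    ...  # render the Gantt chart view
```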

Statsig's approach to feature experimentation shows how this works at scale. Instead of guessing which features matter, they help teams run controlled experiments that prove impact before committing resources.
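At its simplest, "proving impact" means checking whether gated users convert at a meaningfully different rate than the control. This isn't Statsig's actual API - just a bare-bones two-proportion z-test using only the standard library, with invented numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test: is the gap between two conversion rates
    bigger than chance alone would explain?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical rollout: 12% conversion with the feature vs 10% without
print(two_proportion_p_value(conv_a=120, n_a=1000, conv_b=900, n_b=9000))  # ~0.048
```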

Engaging power users

Your power users are goldmines of insight, but most teams waste this resource.

These aren't just your loudest users - they're the ones who've built their workflows around your product. They'll tell you exactly why that feature you think is clever is actually breaking their process.

Here's how to leverage them effectively:

  1. Identify them through usage data, not just who complains most (sketched in code after this list)

  2. Give them early access to features in development

  3. Actually listen when they say something's not working

  4. Watch them use your product - it's painful but enlightening
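Step 1 is the most mechanical part, so here's the sketch. The event shape is an assumption - swap in whatever your analytics pipeline actually emits:

```python
from collections import Counter

def power_users(events: list[dict], top_percent: float = 0.05) -> list[str]:
    """Rank users by distinct active days, then keep the top slice.
    Events are assumed to look like {"user_id": "u1", "day": "2025-06-23"}."""
    user_days = {(e["user_id"], e["day"]) for e in events}  # dedupe to user-days
    days_per_user = Counter(user for user, _ in user_days)
    ranked = [user for user, _ in days_per_user.most_common()]
    cutoff = max(1, round(len(ranked) * top_percent))
    return ranked[:cutoff]

print(power_users([
    {"user_id": "u1", "day": "2025-06-20"},
    {"user_id": "u1", "day": "2025-06-21"},
    {"user_id": "u2", "day": "2025-06-21"},
]))  # ['u1'] - u1 has the most distinct active days
```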

Power users helped Notion realize their database feature needed templates, not more functionality. They showed Figma that collaboration features mattered more than additional design tools. They'll show you where your NNU calculations are off because they actually use the features you're measuring.

The key? Don't just collect their feedback in a spreadsheet somewhere. Build it into your development cycle. When they say a feature has too much friction, that's your cue to apply the ARIA framework and fix what exists rather than build something new.

Closing thoughts

The NNU framework isn't about killing innovation or being stingy with features. It's about being honest about what actually moves the needle for your users and your business.

Next time someone suggests adding "just one more feature," ask them: what's the NNU? How many users need to try this before one gets lasting value? If the answer makes you uncomfortable, maybe it's time to focus on making your existing features 10x better instead.

Want to dig deeper? Check out:

  • Lenny's Newsletter for more on the ARIA framework and growth tactics

  • Statsig's guides on feature experimentation and rollout strategies

  • Your own analytics dashboard (seriously, when's the last time you looked at feature adoption rates?)

Hope you find this useful! And hey, if you do end up calculating NNU for your features, I'd love to hear what surprises you most about the results.


