
A/B testing can be one of the highest-ROI tools in growth.
It's a major unlock in optimizing a business.
I have personally launched hundreds of tests. When I was a PM at Uber, every single change we made was A/B tested.
Everything. Layout updates, infrastructure improvements, button positioning, everything.
This made sense for us because:
We had the traffic/users to measure improvements down to 0.01%.
We had the tooling set up and some of the best data scientists in the world.
Even a 0.01% lift in our core metrics was worth tens of millions of dollars.
Seeing this process permanently changed my perspective on product management.
However, most of the tests that I see run are a giant waste of time.
How A/B Tests Waste Time
Even with all of my experience, I've still launched many stupid A/B tests that I regret.
I have learned this painful lesson multiple times. At Codecademy, we once tested three different pricing page layouts.
Three variants meant 3x the build time. It also meant we needed way more traffic to reach statistical significance.
The more variants you test, the more you split your traffic, the longer it takes to get a clear answer.
The test ran for months. It came back inconclusive.
We basically wasted an entire quarter—engineering time, PM time, design time—and learned nothing actionable.
Additionally, and maybe more painfully, we blocked any other work in that area, as changing anything near the pricing page would have polluted the results.
The core mistake wasn't the idea of the test itself.
It was not calculating whether we could actually measure a meaningful difference before we started building.
This is where Minimum Detectable Effect comes in.
Why MDE is So Critical
Minimum Detectable Effect, or “MDE,” is the smallest change you can detect with statistical confidence.
You can think of it as the “floor” below which you can’t measure reliably.
If your MDE is 10%, you could have a 9% improvement and never see it. The change happened.
Your instrument just isn't precise enough to detect it.
You could run a test, see no statistical significance, and conclude "this didn't work" when in reality it worked—you just couldn't see it.
Your MDE is determined by three things:
Your sample size (i.e., traffic)
Your baseline conversion rate
How long you're willing to run the test
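To make that concrete, here's a minimal sketch of how those three inputs combine, assuming a 50/50 two-arm test evaluated with a two-sided, two-proportion z-test at 95% confidence and 80% power (common defaults, not the only valid choices). The traffic and baseline numbers are made up for illustration.

```python
# A minimal sketch of how sample size, baseline rate, and duration set your MDE.
# Assumes a 50/50 two-arm test evaluated with a two-sided two-proportion z-test
# at 95% confidence and 80% power. All inputs below are made-up examples.
from math import sqrt
from statistics import NormalDist

def mde_absolute(baseline_rate, visitors_per_week, weeks, alpha=0.05, power=0.80):
    """Approximate the smallest absolute lift a 50/50 test could detect."""
    n_per_arm = visitors_per_week * weeks / 2        # traffic is split across two arms
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)    # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    # Normal-approximation MDE, using the baseline variance for both arms
    return (z_alpha + z_power) * sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_arm)

# Example: 4% baseline conversion, 5,000 visitors/week, 8-week test
lift = mde_absolute(0.04, 5_000, 8)
print(f"Smallest detectable lift: {lift:.4f} absolute ({lift / 0.04:.0%} relative)")
```

With those example inputs (40,000 total visitors on a 4% baseline), anything under roughly a 14% relative lift is effectively invisible to the test.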
Personally, I have never found an online calculator that I liked, so I built this one for everyone.
It's free and not behind an email gate. This allows you to:
Select the metric you’re trying to improve
Plug in the numbers relevant to that metric
See what level of impact you could measure in what time
Before you run any test, you need to know: can I reach a reasonable MDE in a reasonable timeframe?
If the answer is no, don't run the test. You're just guessing with extra steps.
When to Actually A/B Test

Not everything needs a test. Most things don't.
Here's the decision tree I use:
Is there real risk or genuine uncertainty here?
If you're implementing a well-known best practice—like adding a pause option to your cancellation flow—you probably don't need to test it.
The industry has already tested it for you. Just ship it.
If you're making a change where you genuinely don't know what will happen, or where being wrong would hurt the business, that's when testing starts to make sense.
If no real risk or uncertainty, just ship it.
Are your metrics stable enough to observe the change directly?
If your conversion rate holds steady at roughly the same level week over week, you might not need a formal test at all. Ship the change and watch the line move.
Look at the graphs below. If your numbers look like the ones on the left, you'll probably spot a 10% jump just by watching the line, and you don't have to test anything.

However, if your numbers look more like the ones on the right, you won't be able to visually pick out changes.
If your metrics are stable, just ship it and watch.
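If you want something a bit more rigorous than eyeballing the chart, a quick sanity check is to compare the lift you're hoping for against the normal week-to-week wobble in the metric. Here's a minimal sketch; the weekly numbers and the "three times the noise" threshold are my own illustrative assumptions, not a formal rule.

```python
# A rough sketch of the "is my metric stable enough to eyeball?" check.
# The weekly conversion rates and the "3x the noise" threshold are
# illustrative assumptions, not a formal rule.
from statistics import mean, stdev

weekly_conversion = [0.041, 0.043, 0.040, 0.042, 0.041, 0.043]  # last 6 weeks (made up)
expected_lift = 0.10 * mean(weekly_conversion)   # hoping for a 10% relative jump

noise = stdev(weekly_conversion)                 # typical week-to-week wobble
if expected_lift > 3 * noise:                    # lift would dwarf normal variation
    print("Stable enough: ship it and watch the line move.")
else:
    print("Too noisy to eyeball: the lift would hide inside normal variation.")
```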
Is this in a high-traffic area of your product?
Low-traffic areas can't support A/B tests. You simply don't have the volume to reach statistical significance in a reasonable timeframe.
If you're testing something in a part of your product that only gets a few hundred visitors per month, you're not going to learn anything useful.
The test will run forever or come back inconclusive.
If not enough traffic, just ship it.
Can you reach a meaningful MDE in a reasonable timeframe?
This is the critical calculation. Plug in your numbers: your traffic, your baseline rate, your test duration.
What's the smallest effect you'll be able to detect?
If your MDE is 15% and you're hoping to see a 5% improvement, don't bother. You won't see it even if it's there.
If your MDE requires 4+ months to reach, don't bother. You'll block development for too long.
If you can't hit a reasonable MDE in 6-8 weeks, just ship it.
If you answered “yes” to all of these, run the A/B test.
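For that last check, here's a rough sketch of the same math run in reverse: given your traffic, baseline, and the lift you're hoping for, how many weeks would the test need? As before, this assumes a 50/50 split evaluated with a two-proportion z-test at 95% confidence and 80% power, and the inputs are made-up examples.

```python
# The same math run in reverse: given traffic, baseline, and the lift you hope
# to see, roughly how many weeks would a 50/50 test need? Same assumptions as
# before (two-proportion z-test, 95% confidence, 80% power); inputs are made up.
from math import ceil
from statistics import NormalDist

def weeks_to_detect(baseline_rate, relative_lift, visitors_per_week, alpha=0.05, power=0.80):
    """Approximate weeks needed for a 50/50 test to detect the given relative lift."""
    delta = baseline_rate * relative_lift
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    n_per_arm = 2 * baseline_rate * (1 - baseline_rate) * (z / delta) ** 2
    return ceil(2 * n_per_arm / visitors_per_week)

# Example: hoping for a 5% relative lift on a 4% baseline with 5,000 visitors/week
print(weeks_to_detect(0.04, 0.05, 5_000), "weeks")   # ~61 weeks, far past a 6-8 week budget
```

In that example, a 5% relative lift on a 4% baseline with 5,000 visitors a week needs over a year of runtime, which is exactly the kind of test the decision tree says to skip.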
One caveat worth mentioning: even at 95% confidence, 1 in 20 tests gives you a false answer. I've seen tests flip after looking conclusive for weeks. Statistics reduce risk. They don't eliminate it. Don't bet the company on a single test result.
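To see that 1-in-20 risk concretely, here's a small simulation sketch: thousands of A/A tests where nothing actually changed, checked against the usual 95% significance bar. The parameters are illustrative, and it assumes numpy is available.

```python
# A small simulation of the "1 in 20" caveat (illustrative parameters, assumes numpy):
# run many A/A tests where both arms share the identical true conversion rate and
# count how often they still look "significant" at 95% confidence.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
p, n, trials = 0.04, 20_000, 10_000           # same true rate in both arms
z_crit = NormalDist().inv_cdf(0.975)          # ~1.96

# Simulated conversion rates for each arm of each A/A test
a = rng.binomial(n, p, trials) / n
b = rng.binomial(n, p, trials) / n

pooled = (a + b) / 2
se = np.sqrt(2 * pooled * (1 - pooled) / n)   # standard error of the difference
false_positive_rate = np.mean(np.abs(a - b) / se > z_crit)

print(f"{false_positive_rate:.1%} of A/A tests looked significant")   # ~5%
```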
The Real Cost of Bad A/B Testing
Here's the math most teams don't do.
A feature takes 4 weeks to build. If you A/B test it, that 4-week project becomes 12+ weeks:
4 weeks to build the variants
7-8 weeks to run the test
1 week to interpret results and decide what to do
That's 3x the timeline. And you still have to explain to executives what "inconclusive at 95% confidence" actually means.
Testing also blocks adjacent work.
If you're running a test on your pricing page, you can't materially change your homepage without contaminating the results. Same goes for the checkout flow, the onboarding sequence, or anything else that feeds into the thing you're testing.
The blast radius is bigger than people realize.
You're not just blocking one feature—you're blocking an entire area of your product for weeks or months.
I'd be skeptical of any A/B test that needs more than 6-8 weeks to reach significance.
At that point, you're dramatically slowing down your velocity. The opportunity cost is enormous.
So What Do You Do With This Information?
Go to this link and bookmark the calculator.
Before your next test, plug in your numbers. See what MDE you can actually reach. See how long it will take.
If the math doesn't work, don't build the variants. Save yourself a quarter.
Good luck out there,
Dan

About Me
Dan has helped drive $100M+ of business growth across his years as a product manager.
He ran the growth team at Codecademy, growing it from $10M ARR to $50M ARR; the company was acquired for $525M in 2022. After that, he was a product manager at Uber.
Now he advises and consults with startups & companies who are looking to increase subscription revenue.





