Bayesian vs. Frequentist. False Positive vs. False Negative. Truth vs. Uncertainty. It’s the world of A/B testing! In this bonus mini-episode, Moe sat down with Chad Sanderson from Subway to discuss some of the pitfalls of A/B testing — the nuances that may seem subtle, but are anything but trivial when it comes to planning and running a test. And a shout-out to https://www.analytics-toolkit.com/.
Thanks for the shout-out, Chad! Kudos for tackling these difficult topics.
I’d add that to understand type I errors and what “avoiding them” means, one first needs to grasp the difference between nominal significance and actual significance.

That distinction makes it clear that “avoiding” type I error does not make sense: the error is always present, and what a proper significance calculation (or confidence interval calculation) does is measure it, quantify it. Most issues come from improper application of the statistical tools, which makes the nominal significance diverge from the actual one. A tool might report “99% significance” when the actual significance is only 90%, often because the user misapplied the tool.
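To make the nominal vs. actual distinction concrete, here is a minimal simulation sketch (the function names and the batch/check parameters are illustrative, not from the episode): in an A/A test with no true difference, an experimenter who “peeks” after every batch of visitors and stops the moment p < 0.05 will declare a winner far more often than the nominal 5% suggests. The fraction of runs that ever reach “significance” is the actual type I error rate.

```python
# Nominal vs. actual significance under "peeking" (illustrative sketch).
# Assumes an A/A test, nominal alpha = 0.05, and interim looks after
# every batch of visitors with early stopping on p < alpha.
import random
import math

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference of two proportions."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_experiment(base_rate=0.05, batch=500, checks=20, alpha=0.05):
    """Return True if any interim look declares 'significance'."""
    conv_a = conv_b = n_a = n_b = 0
    for _ in range(checks):
        for _ in range(batch):
            n_a += 1
            conv_a += random.random() < base_rate
            n_b += 1
            conv_b += random.random() < base_rate  # same rate: A/A test
        if z_test_p_value(conv_a, n_a, conv_b, n_b) < alpha:
            return True  # experimenter stops and ships the "winner"
    return False

random.seed(42)
runs = 2000
false_positives = sum(run_experiment() for _ in range(runs))
print("Nominal significance: 95% (alpha = 0.05)")
print(f"Actual false positive rate with peeking: {false_positives / runs:.1%}")
```

With 20 interim looks the actual false positive rate typically lands in the 20–30% range rather than 5%, which is exactly the kind of gap between reported and real significance described above.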