(Bonus) 1:1 with Chad Sanderson: The Pitfalls of A/B Testing

Bayesian vs. Frequentist. False Positive vs. False Negative. Truth vs. Uncertainty. It’s the world of A/B testing! In this bonus mini-episode, Moe sat down with Chad Sanderson from Subway to discuss some of the pitfalls of A/B testing — the nuances that may seem subtle but are anything but trivial when it comes to planning and running a test. And a shout-out to https://www.analytics-toolkit.com/.

3 Responses

  1. Thanks for the shout-out, Chad! Kudos for tackling these difficult topics.

I’d add that to understand type I errors and what “avoiding them” means, one first needs to grasp the difference between nominal significance and actual significance.

This leads to the realization that avoiding type I errors does not make sense: the error risk is always present, and what a proper significance calculation (or confidence interval calculation) does is measure it, quantify it. Most issues come from improper application of the statistical tools, which causes the nominal significance to differ from the actual one. So a tool might report “99% significance” when the actual significance is only 90%, simply because the user misapplied the tool.
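
To make the nominal-versus-actual gap concrete, here is a minimal simulation sketch (an editorial illustration, not something from the episode or from analytics-toolkit.com; every parameter below is an assumption) of one common misapplication: repeatedly “peeking” at a running test and stopping the first time it looks significant. The tool’s nominal level is 5%, but the procedure’s actual false-positive rate comes out far higher.

```python
# Simulate A/A tests (no true difference between arms) with repeated
# interim looks. All parameters are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_sims = 2000        # number of simulated A/A tests
n_per_arm = 10_000   # total visitors per variant
peeks = 20           # interim looks at the accumulating data
alpha = 0.05         # nominal significance level ("95% significance")

false_positives = 0
for _ in range(n_sims):
    a = rng.binomial(1, 0.10, n_per_arm)  # control: 10% conversion rate
    b = rng.binomial(1, 0.10, n_per_arm)  # variant: identical 10% rate
    for k in range(1, peeks + 1):
        n = k * n_per_arm // peeks        # sample size at this look
        # Two-proportion z-test on the data seen so far
        p_pool = (a[:n].sum() + b[:n].sum()) / (2 * n)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
        if se == 0:
            continue
        z = (b[:n].mean() - a[:n].mean()) / se
        p_value = 2 * (1 - stats.norm.cdf(abs(z)))
        if p_value < alpha:               # looks "significant" -> stop
            false_positives += 1
            break

print(f"Nominal type I error: {alpha:.0%}")
print(f"Actual type I error with {peeks} peeks: {false_positives / n_sims:.0%}")
```

With these settings the actual false-positive rate comes out several times the nominal 5%: the tool can honestly report “95% significance” at the moment of stopping, while the stopping procedure as a whole is nowhere near 95% reliable. That is exactly the nominal-versus-actual distinction described above.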
