#232: The Reality of Uncertainty Meets the Imperative of Actionability with Michael Kaminsky

It’s been said that, in this world, nothing is certain except death and taxes, so why is it so hard to communicate uncertainty to stakeholders when delivering an analysis? Many stakeholders think an analysis is intended to deliver an absolute truth: that if they just have enough data, a smart analyst, and some fancy techniques, the decision they should make will emerge! In this episode, Tim, Moe, and Val sat down with Michael Kaminsky, co-founder of Recast, to discuss strategies such as scenario planning and triangulation to help navigate these tricky conversations. Get comfortable with communicating the strengths and drawbacks of your different methodological approaches to empower decision making by your stakeholders!

People and Resources Mentioned in the Show

Photo by Andrew Seaman on Unsplash

Episode Transcript


0:00:05.8 Announcer: Welcome to the Analytics Power Hour, analytics topics covered conversationally and sometimes with explicit language.

0:00:14.3 Tim Wilson: Hi, everyone, and welcome. This is the Analytics Power Hour episode number 232. My name is Tim Wilson and I’m excited to announce that we’ve crunched all the data, run it through the latest AI, and come up with the most perfectly optimized episode ever. Moe Kiss, Director of Marketing Data at Canva is one of my co-hosts for this show. Moe, are you excited to be part of this episode that the data has told us will be absolutely perfect?

0:00:41.2 Moe Kiss: Oh, I can’t wait to talk about single source of truth and absolute certainty in all of our answers. Ugh.

0:00:51.1 TW: Perfect.


0:00:52.7 TW: And Val Kroll is an Optimization Director at Search Discovery. She’s my other co-host for this episode. Val, being an optimization maven, I take it you’re pretty excited that we’ll be achieving guaranteed perfection with our decision to do this fully, fully optimized episode.

0:01:07.3 Val Kroll: I am honored to be a part of the most perfect Power Hour episode that’s ever existed.


0:00:52.7 TW: Great. We will proceed with that certainty. Obviously, it’s silly to think that crunching any amount of data guarantees something like a perfect show or a perfect decision. We actually do have a process for selecting show topics and guests, and that process aims to have more hits than misses. But that’s really not unlike the world our stakeholders and business partners face. They’re trying to make the best decision they can in a noisy world and they want data and analytics and experimentation and research to help them make better decisions. But the data is never gonna be perfect. The results of any analysis or experiment rarely point to a guaranteed truth. So how much uncertainty is acceptable? When should an organization just keep on crunching that data to try to increase the signal to noise ratio versus combining that noisy signal with human judgment to just make a damn decision and move forward?

0:02:08.6 TW: We’re reasonably certain that a discussion on that topic will be useful, but we wanted to get a guest to help us out. Michael Kaminsky is the co-founder of Recast, a modern media mix modeling company that would be an MMMM company for those keeping score at home because four M’s are better than three. He’s held past roles at Harry’s Grooming, Case Commons and Analysis Group, and he is in my mind, one of the most thoughtful producers of mixed modeling content out there on the interwebs. And mixed modeling is a broad and deep and messy space that is really all about helping marketers reduce uncertainty as they try to make decisions about where to invest their marketing spend. I’m absolutely certain that he will be a great guest for our discussion today. Welcome to the show, Michael.

0:02:53.6 Michael Kaminsky: Thank you so much. I am so happy to be here. Thank you Tim, Val and Moe for having me.

0:02:58.3 TW: Fantastic. So let’s start off with the easy stuff. We’re not expecting this episode to be focused solely on mixed modeling, but to me that’s one category of analytics work where there is an actual quantification of uncertainty that’s baked into the results. So I wonder how often have you had companies come to you expecting that mixed model is going to be that silver bullet, it’s gonna give them a precise truth that really just isn’t realistic. Have you had to have some uncomfortable educational discussions with clients or prospects about that?

0:03:36.0 Michael K.: Constantly. Everyone wants the silver bullet that gives them the answers and tells them exactly what to do. But I think in the context of media mix modeling and marketing measurement, which is where I spend a lot of my time, as with almost every other application of data analytics or data science, there’s actually a lot of uncertainty. And I think one of the hardest roles for analysts and data scientists is communicating that uncertainty back to stakeholders or business users in a way that is true, that helps them understand what’s actually happening, but also helps them make decisions. Because what I see happen a lot is you can really emphasize the uncertainty and the business user says, “Well, this doesn’t help me because there’s uncertainty. I need an answer.”


0:04:24.2 Michael K.: Or you can hide the uncertainty, but that also doesn’t help them because then they can’t make a fully informed decision. You tell them, “Yes, variant A won and it was statistically significant. Don’t worry about the uncertainty.” That’s not great either because it doesn’t help them weigh all of the trade-offs. And so I think a big part of what analysts need to do in this modern world is think about how we communicate that uncertainty effectively, help our business users understand the limitations of a given analysis, but still also help them make whatever business decision they’re trying to make in order to propel the business or the organization forward.

0:04:56.6 TW: So where do you start with that? Do you start with trying to nail down what the decision is they’re trying to make and back into it from that or how do you do that?

0:05:10.4 Michael K.: So I think starting from… Understanding what the decision is and what the relevant trade-offs are, I think, is a great place to start. This requires work that I think analysts are sometimes uncomfortable with because it requires really learning how the business actually works and what the downside cost of making a mistake here is. And those are very much business questions, things MBAs spend a lot of time thinking about. And so some analysts, especially earlier in their career, need to learn that skill of actually understanding the business in order to be able to help that decision maker make a really good decision.

0:05:41.2 Michael K.: The other thing that I’ve found is really helpful is trying to make things more concrete for those stakeholders. A good way to do this is scenario analysis. There’s a high scenario, a medium scenario, a low scenario and really working through what the implications of that are as opposed to just giving a point estimate and a p-value and saying it’s significant. Really help them understand, “Look, here are the range of possible estimates and here’s what the implication of that range is on this thing that we actually care about.”

0:06:09.7 Michael K.: So one of the things that I advocate for a lot is moving away from point estimates and towards something more like a credible interval or a confidence interval, depending on what framework you’re operating in, and then really working through what the implications of that are and whether that’s in the spreadsheet just saying, “Here’s what the results would be at the midpoint and at the low point and at the high point” to other more sophisticated things. I think that can really help those people who don’t have statistics training begin to start to concretize what the implications of this analysis actually are and start to get them familiar with this idea of uncertainty. For me, uncertainty is just a fundamental part of existing in the world. And so it’s really important to start to get business users who are gonna be the recipients of more complex analysis that we’re doing, comfortable with the idea of uncertainty and then starting to make decisions even in the face of that uncertainty.
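Michael’s low/medium/high framing can be sketched in a few lines. The snippet below is a minimal illustration of translating an ROI credible interval into the business quantity stakeholders actually care about; the ROI numbers, interval, and spend figure are all invented for the example, not results from the episode.

```python
# A minimal sketch of the low / medium / high scenario framing: translate a
# credible interval on ROI into revenue implications instead of reporting a
# single point estimate. All numbers are invented for illustration.

def scenario_table(roi_low, roi_mid, roi_high, proposed_spend):
    """Return the incremental revenue implied by each point of the interval."""
    scenarios = {
        "low": roi_low,    # pessimistic bound of the credible interval
        "mid": roi_mid,    # point estimate
        "high": roi_high,  # optimistic bound
    }
    return {name: roi * proposed_spend for name, roi in scenarios.items()}

# Suppose the model estimates ROI at 2.4x with a 90% credible interval of
# [1.1, 3.8], and the team is weighing a $100,000 spend increase.
table = scenario_table(1.1, 2.4, 3.8, 100_000)
for name, revenue in table.items():
    print(f"{name}: ${revenue:,.0f}")
```

Even this trivial spreadsheet-style exercise makes “the range of possible estimates” concrete for stakeholders without any statistics vocabulary.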

0:06:58.0 MK: It’s actually so funny and I feel like I’m gonna spend all of this episode talking about my team and the exact situation that we’re in right now.


0:07:06.0 MK: Because we are in this muddy place of, I guess, different areas of the business and different regions have different levels of both market maturity and measurement maturity. So we have created this situation where the complexity of what we’re doing has significantly increased, but likewise, I guess, the confidence around the recommendation we’re making has improved substantially, which is why we’ve gone into this muddy water. But it is really tricky on stakeholders. And one of the techniques that we did try actually, that you just said, that worked really effectively the last time was that, “We’re gonna make a recommendation. This is what low, medium, and high looks like. And just letting you know, if you go for the aggressive case, our confidence decreases in terms of us being able to determine what we think the likely results will be.” You were absolutely spot on because that was really well received. And I felt it was a good slow step into some of these concepts.

0:08:14.6 TW: That’s amazing. That sounds really successful.

0:08:15.9 MK: Surprisingly.


0:08:16.8 VK: I actually had the reverse happen to me with a client where when we were giving results of a test exactly how you described, Michael, doing some of the confidence intervals, they weren’t happy with the fact that we were giving that range. “Well, the last agency we worked with just gave us the yes and no.”


0:08:33.8 VK: “I don’t know what tools you’re using.”


0:08:37.9 VK: “Well, actually, it was always there. It just might not have been described to you in this way.” But it does take a more mature open conversation, like you said, to make them comfortable because at the end of the day, the goal is for them to be able to take an action and make a decision, not be, “Oh, this is that one number. [laughter] And I can take it to the bank.”


0:09:00.0 Announcer: All right, it’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:09:04.0 TW: Well, Piwik PRO is easy to implement, easy to use and reminiscent of Google’s universal analytics in a lot of ways.

0:09:10.7 TW: I love that it’s got basic data views for less technical users, but it keeps advanced features like segmentation, custom reporting and calculated metrics for power users.

0:09:20.1 TW: We’re running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:09:26.8 TW: That’s right. So head over to piwik.pro and check them out for yourself. Get started with their free plan. That’s piwik.pro. All right, let’s get back to the show.

0:09:37.9 TW: Just to clarify, when we’re saying low, medium, high, is that a, “This is a conservative prediction, the median prediction and the best case prediction,” or… ’cause, Moe, when you were talking about it, it almost sounded like you were saying… I might have been mishearing. That it was…

0:09:56.1 MK: When Michael was talking about it, I actually had the same thought. I was, “I think he’s talking about confidence levels,” whereas in my case we were actually talking about low, medium and high spend recommendations. But likewise, in that particular situation, the midpoint, we have the highest confidence in. And then when you go to the two extremes, that’s where we have less confidence, essentially. But yeah, I actually picked up on that as well. I think that Michael and I were talking about slightly different things, but it all pulls together, Tim.

0:10:24.4 Michael K.: Yeah. I think it pulls together. It’s this idea that there is uncertainty. And I think any amount of communicating that back to business stakeholders is really important. And depending on what level they are at today, you might choose how deep or how far down that path you wanna go. I think a lot about credible intervals because I’m a Bayesian statistician and so that’s the way that I frame a lot of the world, but then that can plug into other decisions that we have high, medium, or low confidence in based on where those credible intervals lie. And so there’s subsequent analyses that you can do that you might frame slightly differently than just a pure confidence interval on an AB test, for example, like I think what Val was describing.

0:11:04.7 TW: I don’t wanna belabor this, but I want to go with an extreme which is drawing from my past, where… And it is a media mix example. But if an organization, a brand, says, “We’re hearing all about connected TV. We’ve done zero connected TV, but we think connected TV is gonna be amazing,” to me there feels like there’s a logical explanation of saying, “This is an entirely new channel and you’re saying you wanna put anywhere from 5% of your spend into it to 50% of your spend.” And if you put 5% of your spend into it, I’d feel like, for your overall results, I’d be more confident ’cause you’re putting limited spend…

0:11:50.4 MK: Doesn’t that depend though on the risk appetite of the business? ‘Cause if we were doing something like that, we’d be, “Cool, we’re gonna spend this amount,” but we’re treating it as an experiment. We’re not treating it as a silver bullet. It’s we’re testing the waters here. But if they’re gonna put 50%, that’s pretty scary.

0:12:06.9 TW: Well, right, but I guess my point is that if somebody comes in and says, “We’re gonna do this new channel, do the analysis and tell me what result we can expect.”

0:12:15.8 MK: Got it.

0:12:16.2 TW: And you’re, “There’s a discussion to be had. Where the fuck do you think I’m gonna get that?”


0:12:23.2 TW: I don’t really have data to work from, so I can’t. And so, Moe, as you were talking about doing more stuff, whether that’s new channels or different creatives or going into influencers or podcast advertising or whatever, is there some discussion to be had that’s saying… That’s a fantastic way to reframe it. Actually, in the one case I’m thinking of, that’s how they actually came to it, Moe, saying, “We want to try this and let’s treat it as an experiment.” But that actually seems like one pivot if they’re asking for something that it’s, “There’s gonna be incredibly high uncertainty. So let’s not treat this as me making a prediction with the level of uncertainty. Can we instead reframe that as a cost of data and an experiment so that we can be more… So we’ve reduced the uncertainty for future decisions?” Does that make sense? I’m just talking myself in circles.

0:13:18.7 MK: Yes. I agree with you. That’s how I would tackle it. But there is, definitely, a hard case where you get to and yeah, things are really complex at the moment for us and there is definitely… We’re gonna push into new channels and new tactics and that stuff. And yeah, an MMM is not good at handling that. So, Michael, I’d love to hear from you a little bit about how you guys manage that.

0:13:46.2 Michael K.: No, I think everyone is right.


0:13:49.2 Michael K.: If you’re going into something new and you have no data, you’re… And again, in a Bayesian framework what we say is, “We’re gonna fall back to our priors.” We’re gonna have some prior belief about how bad or good this channel could be. And before we actually see any data, we’re just gonna assume that it could take any value in that range or within some distribution of that. And so in that example, prior to launching on connected TV, I might run the numbers and say, “Look, here’s what the impact is on the bottom line of the business.” If it’s really good, if it operates like an average channel or if it turns out bad. And then we can at least make a decision. How disastrous could this be? What’s the downside that we care about here? And maybe that pushes us away from the 50% of the budget into that channel before we’re able to test it a little bit more. But overall, that’s how I think about it.
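The “fall back to our priors” idea Michael describes can be sketched numerically. This is a hedged illustration only: the uniform prior range, total budget, and percentile choice below are hypothetical placeholders, not how Recast’s actual model works.

```python
import random

# A sketch of "falling back to priors" for a brand-new channel: draw ROI from
# a deliberately wide prior and look at a bad-case (10th percentile) outcome
# for different budget shares. The prior range and budget are hypothetical.

def downside_revenue(budget_share, total_budget, prior_roi_draws, pct=0.10):
    """Revenue at the given percentile of the prior, for this budget share."""
    spend = budget_share * total_budget
    outcomes = sorted(roi * spend for roi in prior_roi_draws)
    return outcomes[int(len(outcomes) * pct)]

random.seed(0)
# Before any data: ROI "could be anything" between 0x and 3x.
draws = [random.uniform(0.0, 3.0) for _ in range(10_000)]

small_bet = downside_revenue(0.05, 1_000_000, draws)  # 5% of budget
big_bet = downside_revenue(0.50, 1_000_000, draws)    # 50% of budget
```

Comparing `small_bet` and `big_bet` makes the “how disastrous could this be?” question concrete before any spend goes out the door, which is often what pushes a team away from the 50%-of-budget version of the bet.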

0:14:36.1 Michael K.: And again, I like to educate the business on, “Look, today we don’t know anything and we’re gonna be able to get more certain as we experiment more with this channel, either via structured experiments or just by starting to spend into it and then observing what happens in an MMM type context.” And then again that idea of going from a huge amount of uncertainty and then narrowing that down and thinking about what’s the value of that additional information we have, I think those are all great levers to start educating the rest of the business on this way that we might think about the value of uncertainty reduction.

0:15:09.7 TW: So, Moe, back to you, does that get used as a… ‘Cause, Michael, what you said, I think, was, that’s another twist of, “Yeah, the uncertainty may be high, but let’s think about this not just as where within that range of uncertainty this would be an unwise decision, but how can we also make sure that as we’re making that decision we are reducing our uncertainty?” I guess, Moe, in that…

0:15:41.7 MK: But isn’t that literally what you’re doing with an experiment?

0:15:44.5 TW: But is it? If you just introduced a channel based on… You don’t have to, but wouldn’t you wanna push to say, “Can we do this in a methodical way?” Whether it’s a truly randomized controlled trial or if it’s instead something that’s even, longitudinally, “We’re gonna look at it that way.” I guess do you do that now as the decisions are being made? Are you both trying to give them the information to make the decision in the moment but also saying, “Let’s look ahead and say what do we think we will have learned in two months?” by you taking a flyer on something new?

0:16:25.4 MK: So is the question that if we were going into a new channel, would we, basically, in two months try and look back and be, “Is this working?” I don’t know if I’m oversimplifying.


0:16:37.4 TW: If you’re in one of these areas where things are getting more and more complex and therefore you’re struggling with the uncertainty, are you also looking at that increase in complexity and saying, “What can we do so that we’re also setting ourselves up in the future to not just have an ever widening set of uncertainty to deal with?”

0:16:57.6 MK: Yeah, so I guess the thing… Part of the reason that things are complex… I don’t know, maybe Michael can tell me if this is the standard thing that businesses do, but our approach is basically we have our MMM. Where things look funny or essentially we have uncertainty, we use experiments to calibrate it, whether it’s that we don’t have a lot of data about this channel, or something feels off about it, or whatever the case is, or it’s a new thing that we haven’t done before. We’re gonna set up experiments and we have a few different types of experiments we run and then we calibrate our MMM with that. Where we have, I guess, a lot of complexity is we don’t have an MMM in every market. And I don’t think it’s feasible to expect that you’re gonna have an MMM in every country.

0:17:42.2 MK: So the complexity actually comes from what do you do in the other markets? ‘Cause I actually feel like in the markets where we’re fairly mature, it is about reducing uncertainty. But in the other markets that’s where things are a bit more difficult ’cause then you’re, “I’m gonna have to try and use either publisher data and their experimentation if we’re not set up to run experiments there.” And then so you’ve got this suite of tools, but the truth of the matter is… And we do have a lot of new senior marketers who’ve come on board and for the people that have been around for ages, they get it. We’ve taken them on this whole journey about why we’re doing what we’re doing. But if you are brand new and you’ve just started in the business, you’re, “I don’t get it. I’m used to one report and it tells me what the answer is.”


0:18:28.5 MK: “And it tells me how my channel performed last week.” And it’s like we are not doing that anymore. So the complexity comes from you’re gonna have different levels of maturity for different markets essentially. I haven’t answered your question at all and I also don’t know if I’ve explained that well.

0:18:48.0 Michael K.: I’ll jump in and add some comments that might resolve some of these differences. Tim, it sounded you were asking this question of how do you think structurally about reducing uncertainty? And I think this is a thing that we think a lot about at Recast and that we’d recommend to people, which is, “Look, if you’re going to pull back on spend anyway, do it in a structured way that maximizes learnings,” which maybe… Instead of pulling all of your channels back at the same time by the same amount, which means that you won’t be able to do a pre and post analysis to differentiate, stagger them in some way. Same thing when you’re launching. Try to think a little bit about what’s going to give us signal so that we can analyze it in the future, such that you can maximize the lessons learned from whatever changes you happen to be making.
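The “stagger them in some way” suggestion amounts to a simple scheduling exercise. Here is a minimal sketch; the channel names, start date, and two-week gap are invented for illustration, and a real plan would also weigh the business constraints Michael mentions.

```python
from datetime import date, timedelta

# A sketch of the "pull back in a structured way" idea: give each channel its
# own cut-over date so a later pre/post analysis can separate their effects.
# Channel names, the start date, and the gap are all hypothetical.

def staggered_schedule(channels, start, gap_days=14):
    """Assign each channel its own change date, gap_days apart."""
    return {ch: start + timedelta(days=i * gap_days) for i, ch in enumerate(channels)}

plan = staggered_schedule(["search", "social", "ctv"], date(2024, 3, 1))
for channel, cutover in plan.items():
    print(channel, cutover.isoformat())
```

Cutting every channel on the same day by the same amount leaves the effects perfectly confounded; spacing the changes out is what puts identifiable variation into the data.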

0:19:29.8 Michael K.: And I think that that is… Again, it’s a good… You can’t always do it right. There are other business considerations outside of learning new things, which we tend to care about on the analytics team, but I think it’s a good habit to build and to try to explain to people, “Look, if we add more variation into the data, and we can do that in a structured way, we will have more to analyze on the other side.” And I think, Moe, the thing that you were talking about is another thing that I spend a lot of time thinking about, but it’s a slightly different question, which is how do we do triangulation as a business? We have different data points that might be coming from the marketing platforms, and it might be you feel like, “Well, it worked in Australia, maybe it’s the same thing in New Zealand, or maybe not, or maybe it’s different.” And we have this one test that we ran, but we ran it nine months ago and we changed creative since then. How do you pull that information together in a structured way?

0:20:17.9 Michael K.: And I think that is a really… Another really important skill that analytics teams and data science teams can really start to help with, which is how do we… We have all of these disparate points of information. We all recognize that there is no single source of truth silver bullet to solving this. But I think what we can do is help our business stakeholders start to think structurally about what are the different pieces of information that we have? What would happen if this was true and this thing was informing that? And then help them work through that logic of triangulation from those different data sources, which are often generally measuring different things. None of them are perfect or right. They’re measuring different things. How do we bring that information together to give us the best perspective that we have today on this very complicated problem we have or difficult decision we need to make?

0:21:06.6 MK: Okay. So how do we do that?

0:21:08.8 VK: Well, that was actually my question too, this whole putting triangulation into practice. ‘Cause I love that concept, but even the way you just talked about it, Michael, was the logic of triangulation. We have a hard enough time sometimes getting someone comfortable with the pros and cons and the strengths and drawbacks of an individual methodology, let alone multiple and bringing them together. And sometimes it feels like a leap. But I would love to hear how you approach that. What was that? How do we do that?

0:21:41.6 MK: So Val, I can tell you by the end of this week I will have a document of my attempt to do that. And then I want to hear if it’s going to be totally off the mark or if I need Michael to check my homework.

0:21:53.2 VK: We’ll link it in the show notes.
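One simplified, formal version of the triangulation Michael describes is inverse-variance weighting: if several noisy sources estimate roughly the same quantity, weight each by how precise it is. The sources and numbers below are hypothetical, and, as Michael notes, real sources (platform attribution, MMM, experiments) measure slightly different things, so treat this as an idealized sketch rather than the full answer.

```python
# Idealized triangulation: combine several noisy estimates of the same
# quantity with inverse-variance weighting. Values and standard errors
# below are invented for illustration.

def triangulate(estimates):
    """estimates: list of (value, std_error) pairs for the same quantity."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    total = sum(weights)
    combined = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    combined_se = (1.0 / total) ** 0.5
    return combined, combined_se

# Hypothetical ROI reads: platform attribution, an MMM, and a geo experiment.
sources = [(3.1, 0.9), (1.8, 0.5), (2.2, 0.4)]
value, se = triangulate(sources)
```

The combined estimate lands between the individual reads, pulled toward the more precise ones, and its standard error is smaller than any single source’s, which is the whole point of bringing the disparate data points together.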


0:22:01.8 TW: There’s two things. There’s the triangulation through different measurement and experimental techniques, which is one framing, but I was also hearing that there’s a structure around the different classes of decisions: changing a creative, shifting the channels that are being used, which I rarely see actual clear documentation of. Take the shittiest examples of putting notes, annotations of “we changed these things,” into a Google Analytics or Adobe Analytics interface: zero structure. It’s just notation. Part of me was hearing, “Well, that would be amazing to sit back and say, let’s capture changing the creative. That’s a decision we can make. Fundamentally changing the channel mix, or maybe it’s more structured than that: ramping up on connected TV, ramping down on display.” Capturing the range of decisions at the business partners’ disposal seems like an area… And maybe I’m putting words in your mouth, Michael, but that’s where my brain started to head: actually helping them figure out what decisions are within their practical control to make.

0:23:22.7 Michael K.: I think helping them think through those things is definitely the role of analytics teams. And I think really good analysts spend a lot of time with their stakeholders and learn. What are the levers you have and how do they work? And what do you expect is going to happen when you make this change? And I think as analytics teams, we always want to collect as much data as possible. I think it’s important to be judicious about what data are we trying to collect and how much work are we potentially putting on someone when it comes to collecting that. We want to be smart. Otherwise, people will get burned out and they’ll be… They’ll stop doing it because they’re not seeing value from it. But I definitely think that as analysts, it’s very helpful for us to understand really what those decisions are, and then be thinking about what can we learn from these different decisions that are being made? And how can we use that to drive the business forward?

0:24:13.9 Michael K.: And so all of those things, I think, come together in really high-functioning analytics teams, because they’re always trying to think about, “How can we drive the business forward? What do we need to know in order to be able to do that? And then what are the levers that we have in order to be able to generate that knowledge through experimentation?” And whether that’s a formal randomized controlled trial, or just a before and after, we changed this thing and it seemed like things got better. So that’s some data. We can use that. And that work, I think, is a big part of being a high-functioning analytics team, where we have to operate under uncertainty. And we have to figure out what’s the best that we can do with the limited information, time, and energy that we have. I don’t know if that answered your question.

0:24:55.4 MK: But when you have 350 to 500 marketers, in different time zones, with different expectations, with different understandings of data, it almost sounds…

0:25:06.2 TW: That’s your problem.

0:25:08.5 MK: Yeah, but it almost sounds like to have…

0:25:10.1 TW: That’s a lot of marketers.

0:25:15.8 MK: It almost sounds like to have good triangulation you basically need to always be in the room. And that’s not an option. There has to be a framework you can give them for how to make the decision, or what is going to guide you when I can’t be there. Because yes, I totally agree. In a perfect world you have this super close embedded relationship. But the reality is when you have a really big company and a really big team… And I appreciate, dear listeners, that is not everyone, and I am complaining about my own problem right now. But you can’t always be there. So what are the frameworks that you can use to help them manage that uncertainty and also the conflicting information they get from different data sources?

0:26:00.0 Michael K.: This is such a good question. I think this is a really hard question. So I’ll talk a little bit about it, but I don’t think there are any silver bullets. I don’t think that there’s a fixed answer to this. So the thing that I would think about for frameworks is what data do you use to make which decision? And so if you’re working with 350 to 500 marketers, it’s, “Okay, you’re going to need to use platform data, what’s going on with Facebook clicks or whatever, to make some amount of decisions.” What are the decisions that you want to make when you’re looking at that data? Day to day, what’s going on? Maybe you’re using that information to switch out different campaigns or creatives or whatever. It’s going to vary business by business. And then beyond that, what tools do you want to use to make which decision?

0:26:37.6 Michael K.: Okay. We’ve got experimental results. And you need to put together training materials for these marketers because they are the ones who have their hands on the controls effectively. But I think you can put together frameworks that are going to operate at different levels. And which different types of data do you want them thinking about when they’re making those different decisions? I don’t think there’s getting around that. One of the things that some people have been talking about, Eric Seufert, in particular, has talked about this idea of the marketing econometrician effectively. It’s like that’s the new CMO role, or not new CMO role, but it’s becoming a really important role on marketing teams. And I think every marketer needs to be thinking about this a little bit. How do we analyze data? How do we use that to make decisions? So hopefully your marketers are somewhat data literate.

0:27:20.8 Michael K.: And then beyond that, you can build them tools or build them frameworks about which data to use to make which decisions and how to start pulling that together. And then even beyond that, what’s the superstructure of reporting that we’re going to provide to them? Ideally, again, along with training about how these numbers are calculated and what the pros and cons or strengths and weaknesses of those different metrics are. And then I think that that’s the best that you can do in addition to just providing support on additional deep dives where they need it. But that’s how I think about at least getting started. Moe, I’m curious what y’all are actually doing.
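The “which data for which decision” framework Michael sketches can be encoded as something as simple as a lookup that marketers self-serve. The mapping below is an invented example of the idea, not a recommended standard, and a real version would layer in the KPI dimension Moe mentions next.

```python
# A sketch of a "which data for which decision" framework as a simple lookup
# marketers can self-serve. The mapping is an invented example.

DECISION_FRAMEWORK = {
    "daily campaign or creative swaps": "platform data (clicks, CPMs, CTRs)",
    "monthly budget shifts between channels": "MMM output, calibrated by experiments",
    "entering a new channel or market": "structured experiment (geo holdout or RCT)",
}

def recommended_source(decision):
    """Point a marketer at the right data source, or escalate."""
    return DECISION_FRAMEWORK.get(decision, "escalate to the analytics team")

print(recommended_source("daily campaign or creative swaps"))
```

The default branch matters as much as the mapping: it tells a brand-new marketer what to do when their decision isn’t covered, instead of leaving them to pick a data source at random.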

0:27:52.2 MK: Well, the good news is that I think my homework this week is going to get an A++ because you’ve pretty much literally described what I’m working on at the moment with the team of what is the right data source to use for which decision at what level. And we’ve taken it almost a step further: this framework is also dependent on your KPI. Because if you’re in a market where you’re building awareness, that’s very different than if you’re in a market where you’re trying to drive growth. What’s available, I guess, from a measurement perspective is also dependent on the goal or the objective that you’re after in that particular market. So yeah, I don’t think I’m going to fail, hopefully. I’ll let you know how it goes.

0:28:37.1 TW: So I’m going to try to pivot to a slightly different take on this, because I think there is a premise with all of this. That was great. But what’s in the back of my head is how many marketers have I worked with? And I have very little patience for analysts who disparage their business partners. However, I may start to head down that path. As human beings, there is inertia. There is a premise that the marketer, take marketer X, is, one, empowered to make a meaningful decision, and, two, has enough interest and motivation to not just maintain the status quo. There’s always a status quo: “We’re just going to do the same thing we did last year.” That’s the classic.

0:29:39.0 TW: I feel like there are times where the… "Well, the analyst isn't giving me good enough data to make a good decision." I wonder if that is sometimes a, "Unless I have absolute certainty that I should do something different…" It's pretty easy to surrender to the siren song of just keep doing what I've been doing, which probably isn't going to move me backwards, but may also not move me forward. But I can hope. I think of, if you've got 350 marketers, how many of them are actually trying to make real decisions? How many of them are going to say, "Well, no, no, no, the agency's optimizing my media. I just manage the funding to the media agency, and I trust them to make the decisions"? I guess, does that have any resonance?

0:30:31.6 Michael K.: So this is just a business management problem. I would say that there are a lot of analysts out there who aren't going to try to push the business forward either. And if you have that, it's just a business problem. Yeah, you could have a team of hundreds of engineers who don't contribute much code and don't push the product forward. I think if you're in this world where you have 300 to 500 marketers who aren't doing anything for the business, that's your problem. We don't even need to talk about data at all. You just need to get better at managing your marketing program.

0:30:58.6 TW: Well, I’m more saying there may be 50, who are both throwing the analytics team under the bus for not being able to provide more precision. But really, they’re not… They’re just kicking the… I’m not trying to…

0:31:09.5 MK: I would say my experience is less throwing under the bus. It’s more the perspective of, “Well, I can’t make a decision because I don’t have X data. I don’t have Y dashboard. If I just have this thing, then I will know what decision to make,” which is this completely false narrative. And I think we all know that.

0:31:31.1 TW: I think that is throw…

0:31:32.0 MK: Yeah, but it’s not malicious. I don’t think it’s malicious. I think it’s their experience is I had this data source before, and if I have it again, then I’ll know what decision to make.

0:31:43.8 TW: I disagree that that’s what they have.

0:31:43.9 VK: Michael, it’s at this point in the show where we should tell you this is where we process all of our trauma related to data.

0:31:53.3 Michael K.: We're all in it together here. All right. We've all seen some… We all have some ugly battle wounds, it sounds like.

0:31:57.7 TW: Well, but I guess… But I think, Moe, I was right there with you to the, “Well, if I just had this thing,” but to me that often seems grounded in a personal historical fictional narrative, or I always assumed that if I just had this thing… I never had this thing, but I’ve been able to walk around for five years saying, “I just need this thing.” Like you said, it’s a false narrative, but I guess, where I was reacting to was the, “Oh, yeah, yeah. I was getting perfect information on some dashboard before.”

0:32:35.2 MK: But it wasn't perfect; their perception was that it was perfect. That's the problem as you go to more complex measurement solutions. You're trying to say to them, "Hey, this isn't right," or, "The uncertainty is higher here." But I think sometimes, and I was actually going to ask Michael about this, particularly with underpowered experiments, what happens is they don't trust the model then. It's, "Well, if uncertainty is higher, it's because the model's less good," not because we're introducing you to more complexity. Does that make sense? I don't know if I'm explaining this at all.

0:33:16.2 Michael K.: I think that does make sense. And I think it’s a real problem. And this is just where undergraduate statistics education has really failed us, where people get so fixated on is the p-value less than 0.05, or is the r-squared greater than 90? Unfortunately, I think it is on us as analytics practitioners and experts to really do the education with these stakeholders about how to start to think statistically and really understand uncertainty. And I think, Moe, exactly what you described, which is when there is more uncertainty, there is… Or when there’s more complexity, there’s more uncertainty. And that’s just a thing that comes with having a growing business. And it’s a thing that everyone is going to have to get used to, and then start to think about, “Well, what do we do, assuming that that’s true?”

0:34:03.1 Michael K.: And I think, again, it’s just this idea that we need to do a better job of educating these stakeholders on what’s really important and how to start to reason through this, knowing that they’ve probably gone through the same undergraduate statistics education that we did, that just put the emphasis on the wrong stuff. It put the emphasis on getting to a p-value of less than 0.05. And I think we all know that that is not a good way to make decisions. But unfortunately, for a lot of our stakeholders, that’s all they know. And so we’re going to have to do some re-education with them to get them to understand what are the true trade-offs here, and how should we think about making decisions in the face of uncertainty, when we don’t have a number that’s just a p-value that will tell us whether or not something’s true or not.
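Moe's point about underpowered experiments, and Michael's point about p-value fixation, can be sketched with a quick simulation. All of the rates, lifts, and sample sizes below are invented for illustration; none come from the episode:

```python
# A quick simulation of an underpowered A/B test. All numbers here are
# invented for illustration: a 5% baseline conversion rate and a real 10%
# relative lift. The point: with a small sample, a genuine effect usually
# fails to reach p < 0.05 -- that's the sample size, not the tooling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def detection_rate(n_per_arm, base_rate=0.05, rel_lift=0.10,
                   n_sims=2000, alpha=0.05):
    """Fraction of simulated tests that reach p < alpha despite a real lift."""
    hits = 0
    for _ in range(n_sims):
        control = rng.binomial(n_per_arm, base_rate)
        treated = rng.binomial(n_per_arm, base_rate * (1 + rel_lift))
        # Two-proportion z-test on the conversion counts
        p_pool = (control + treated) / (2 * n_per_arm)
        se = np.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        diff = (treated - control) / n_per_arm
        p_value = 2 * (1 - stats.norm.cdf(abs(diff) / se))
        hits += p_value < alpha
    return hits / n_sims

small = detection_rate(n_per_arm=2_000)    # underpowered
large = detection_rate(n_per_arm=50_000)   # adequately powered
print(f"power with  2,000/arm: {small:.0%}")
print(f"power with 50,000/arm: {large:.0%}")
```

With these made-up numbers, the small test reaches significance only a small fraction of the time even though the lift is real, while the large one almost always does: an inconclusive read from the small test reflects its sample size, not a broken model.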

0:34:45.2 VK: This is making me think a lot about what my clients have just gone through their 2024 strategic planning, or some of them are wrapping them up now. And a lot of times we think about and talk about our marketers being our end users of what we’re doing, our stakeholders. But they really have a lot of stakeholders themselves to answer to, whether it’s up their chain, or even once you’re talking about someone who’s more senior, the person who’s making the ask for those budgets with conversations with the CFO. And so if you’re delivering this burden of uncertainty in a way that your stakeholder, the person you’re meeting with, is having to grapple with, then they have to go and explain it themselves. And I think that that’s part of the pit in the stomach that you give them a little bit.

0:35:30.4 TW: Well, but I wonder… So Moe, back on that front, is part of your moving towards more sophistication trying to get them the concepts of actual outcomes and incrementality? I'm still caught up in the, well, historically, it was very simple: I was just trying to maximize or minimize my cost per acquisition, and I could punch that into a tool, and it would just drive it, and everything worked well. So there is increasing uncertainty, but it's uncertainty focused on a business outcome, incrementality, and not a delusional look at a simple and easy metric.

0:36:18.2 MK: Okay. So again, this is my homework this week, but I can tell you what I have written in my head as the way to explain this: when it comes to why we're making this move, I guess, why we're relying less on attribution, but what role attribution still does play. But before I do that, I want to hear what Michael thinks. How would you answer that?

0:36:41.9 Michael K.: Wait, so repeat the question exactly.

0:36:43.6 TW: So I was thinking that some of that movement is when you're focused on a tactical metric, minimize the CPA, minimize CPC, minimize the CPM, and you can turn the machine on and that… Whereas when you move to something like MMM, as one example, but anything that's saying we're focused more on an incremental lift in a business outcome, more uncertainty may come along with it, even though it's also focusing on a much more meaningful dependent variable.

0:37:20.9 Michael K.: I think you laid it out. I’ll run through how I explain this to people. So my first step is the thing that we actually care about as marketers is incrementality. What is the causal relationship between our marketing activity and the business outcome that we care about? That’s the thing that we care about. If we know what that relationship is, we can optimize the business. When we were first starting out as a business, we were only operating in one or two marketing channels and we were able to log into the Facebook reporting platform and see a CPA number there.

0:37:48.5 Michael K.: That number was never measured in a way to truly get at incrementality, but when we were only in one or two marketing channels, it was pretty close to incrementality ’cause we were a much smaller business. We had good faith that for every dollar that we were putting in here, the last touch attribution or whatever tracking they’re doing was getting us pretty close to incrementality. Now that we’re a bigger, more complex business and we’re operating in a lot more different channels where we don’t have faith that we’re actually able to track everyone across all of their different touch points, those CPA estimates, or those impression estimates are now much further away from incrementality, and because of that, we need to introduce different measurement methodologies.

0:38:27.0 Michael K.: The true incrementality is not knowable. There’s no data source that we can go and look at in order to get that number. And so we have to approximate it with these different methodologies. And one of them could be experimentation, and one of them could be MMM and other things. But that’s really the goal of a business. That’s always been the goal. It’s just that our circumstances have changed to get more complex. So now we need different methodologies, which have more uncertainty baked into them, in order to approximate that true number that we’ve always really cared about. That’s the story that I tell to try to bring people along that journey of why we’re introducing more complexity. ‘Cause it’s always been about incrementality. It’s just how we measure. It has to get more complex as our environment gets more complex.

0:39:09.7 MK: Well, fuck. My biggest problem is that I need the transcript from the episode. Because I actually think what you said is better than what I wrote. And so now I just need to change that and be like, "Credit to Michael, but this is why we're doing…" I think the add that you made was the point about CPA being closer to incrementality when you're a smaller business, but as you get bigger, it gets further away. I think that's a really good addition. I was going with, well, the world of attribution gives you directionality, but not always good directionality, and as you add more channels, it gets less useful. But I think the way you've framed that is… Like I said, I'm probably gonna have to paraphrase some of that.

0:39:58.0 Michael K.: Please do. And tell me how it goes.

0:40:00.5 MK: Thanks for making my job easier. Like I said, this episode was very timely for me.

0:40:06.0 VK: So one of the other questions that we wanted to ask you, or I wanted to ask you about, I should say, is what do you think are some of the big things that get in the way of actionability? And I have a feeling that one of the things you’re gonna say is vanity metrics or a focus on the wrong things. But I would love to hear, in your words, Michael, what some of the biggest things that get in the way of actionability are.

0:40:29.7 Michael K.: Yeah. So actionability is definitely a thing that I think a lot of analytics people talk about, which is we want the insights to be actionable, but we don't think a lot about what that means. When I think about actionability, I think what we really want is to be able to drive actual changes that some business operator is making in the real world that are going to drive the business forward. I think a lot of times what gets in the way of actionability is, one, uncertainty, which we've spent a long time talking about on this podcast, so I won't re-cover that. But another thing is a lack of institutional willingness to experiment. And this is a thing that I'm… I run a startup. So it's a thing that I spend a lot of time thinking about.

0:41:11.0 Michael K.: But I would encourage most businesses to think about how to experiment more. And how can we encourage more experimentation? Because it's a real problem if you have an indication that something is working, but no one in the business feels empowered to go chase that down and learn whether it could be a huge opportunity for us or not. If you have to go through a million different phases of approval to make any type of change, or if you're locked into the plan for the year and you can't deviate from it, then you lose that ability to really test and learn. I think what's really important for making an insight actually actionable is for the organization to have a willingness to experiment and to recognize that a lot of those experiments are going to turn out not great. You're probably not going to get the positive outcome in most of those experiments.

0:42:04.7 Michael K.: But as a business, you need to be able to make those changes and do those sorts of experiments that could end up pushing the business forward. And if you’re not willing to do that as an organization or as a team or whatever, then you’re gonna have a real problem with actually taking any insight and bringing it to life in a way that’s going to drive the business forward. I don’t know if that was the question you were asking, but that’s how I think about what it actually takes to make something actionable.

0:42:29.1 MK: Can we loop back a little bit? Because I… Yeah, I feel like we’re quite aligned on this experiment mindset or in the startup land, a growth mindset, whatever you want to call it. But I dropped in a little bit earlier about when we have an underpowered experiment and the blowback is sometimes, well, then your tooling is wrong. It’s not actually about the fact that we have no result or an inconclusive result or there’s too much noise to detect a signal. It becomes this perception that maybe there’s a problem with the tooling. And I’m interested to hear your perspective on that.

0:43:09.9 Michael K.: This is a really great question, and one that I spend a lot of time thinking about. Again, it comes back to, I think, a lot of people have been miseducated about statistics and they only care about the p less than 0.05. But there's a lot of experimental results where we get information back where maybe the impact of this experiment was that it's not statistically significantly different from zero, which is fine. But it might be statistically significantly different from some other value, which might actually be important to us as an organization. Maybe based on the results of this experiment, we know that the effect can't be greater than 3X ROI on whatever the intervention is.

0:43:46.5 Michael K.: Maybe that's helpful. Maybe it's not. But I really always wanna push people to think about what the results of this test are. In general, there's a maximum effect read and a minimum effect read. And we can actually use that to make decisions, even if it's not "statistically significant". And so when it comes to, again, building up our team to really think hard about what an experiment is and how to think about those results, those are some of the things that I start to encourage people to think about, which is, what does power mean? But really, how can we use the results of this test to make decisions using thresholds that aren't just, is it greater or less than zero, but actually have other values that are relevant to our business? And where we can take these underpowered tests, or tests that don't have statistically significant results, and still use them to infer truths about the world that might be important to us.

0:44:36.2 Michael K.: And so, again, I think it’s up to us as analytics practitioners to do a better job of educating our stakeholders in terms of how to think this way. But actually, I think there’s a lot of power there. And again, I think even people who haven’t been trained in statistics, when they’re explained this in the right way, they do have an ability to grasp that and really understand what this means. And so I would love for analytics teams to make a more concerted effort to do that education within their organization to help people understand, “Okay, what’s actually going on here in terms of the things that we care about learning, and how can we use that to make real business decisions?”
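Michael's maximum/minimum effect read boils down to comparing the ends of a confidence interval against business-relevant thresholds rather than only against zero. A minimal sketch, with an estimate and standard error that are entirely invented for illustration:

```python
# Reading a "not statistically significant" result against business
# thresholds instead of only zero. The estimate and standard error are
# invented for illustration.
def ci_for_effect(estimate, std_error, z=1.96):
    """95% confidence interval for an estimated effect."""
    return estimate - z * std_error, estimate + z * std_error

# Example: estimated incremental ROI of 0.8x with a standard error of 0.5
low, high = ci_for_effect(0.8, 0.5)

significant_vs_zero = low > 0    # the usual "p < 0.05 vs zero" question
rules_out_3x = high < 3.0        # a business-relevant threshold instead

print(f"95% CI for ROI: [{low:.2f}, {high:.2f}]")
print(f"different from zero? {significant_vs_zero}")
print(f"rules out a 3x ROI?  {rules_out_3x}")
```

Here the interval spans zero, so the test is "not significant" in the usual sense, yet its upper end still rules out a 3x ROI, which may be exactly the decision-relevant fact.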

0:45:11.9 TW: They shouldn’t just keep segmenting and slicing their experimental results until they find something that pops up? ’cause…

[overlapping conversation]

0:45:18.1 Michael K.: This is a place where that xkcd comic about the jelly beans comes in. Send that around to the team at least once a year. You got to make sure that everyone is aware that if you do enough tests, you will get statistical significance.
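The jelly-bean comic's point is just the arithmetic of repeated looks: under a 5% false-positive rate, the chance that at least one of n independent tests on pure noise comes up "significant" grows quickly with n.

```python
# The xkcd jelly-bean problem in one formula: run enough independent
# tests on pure noise and a "significant" result becomes likely.
def chance_of_false_positive(n_tests, alpha=0.05):
    """P(at least one of n independent null tests hits p < alpha)."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 20, 100):
    print(f"{n:>3} tests: {chance_of_false_positive(n):.0%} chance of a spurious 'win'")
```

Twenty independent slices of noise give roughly a two-in-three chance of at least one spurious "win", which is the comic's green jelly bean.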

0:45:30.9 MK: One last question.

0:45:32.2 TW: I’ll allow one more, but it better be lending itself to a brief answer.

0:45:37.3 MK: Great. So let’s say you do take the team on this journey and you educate them and you do a year and a half of really bringing them along for the ride but then you have a whole bunch of new people. Do you just keep starting from scratch every time you get new stake… Is that literally…

0:45:58.1 TW: Now you have 800 marketers.

0:46:00.4 MK: No, but I’m just… I don’t know.

0:46:04.1 Michael K.: Look, I’m describing an ideal world that I’m not sure if it really exists. But I think, hopefully, if you have a whole team that’s been trained on this way of thinking, they can educate the newcomers. And they can say, “Hey, we use this tool to think this way. And this is how we make decisions. And we don’t just make a decision on p less than or equal to 0.05, because here’s the slide that everyone on the team has seen that describes what a confidence interval or a credible interval is and how we use that to make decisions.” And I think, hopefully, again, you’re not gonna be starting from scratch with every single person. If you can build a core nucleus, I think, of people who really get it, they can be evangelists for the rest of the team and can bring people on to the process.

0:46:44.5 Michael K.: Again, this is an idealized version of the future that I’m imagining, and I hope that we can all one day achieve. But that’s what I think actually can happen in practice. Again, these are smart people that are trying to do their job as best as they can. I think once they are educated in the right way, they aren’t just gonna be, “Do whatever you want.” They’re gonna try to bring people along with them, especially if it’s yielding positive results for the business.

0:47:08.9 MK: True. I like it.

0:47:09.8 TW: And you have the ability to give them 10-15% more love than the ones who maybe are laggards. It feels like there's a little bit of an ability to nudge in that direction. But okay, well, we could clearly spend another two hours on this. Wow, this was a fun discussion, but we need to head to a wrap, or my certainty of this being the perfect episode is not actually gonna pan out. So before we go, we like to do a last call: we go around the bar, and everyone shares a thought, a blog post, a podcast episode, something they've read, something that they think our audience might find interesting, whether they're absolutely certain about it or not. These can be shared with a high degree of uncertainty. And I have wildly over-belabored that particular little trope. So with that, save me once again. We'll start with you, Michael. Do you have a last call to share?

0:48:16.7 Michael K.: Yeah, I sure do. Last week I went back and reread Paul Graham’s essay on How to Do Great Work, which I personally find very inspiring as a startup founder. But also, I highly recommend to everyone who wants to think about how to have a big impact on the world. It also reminded me to go on more walks, which is very good for my mental and physical health. So I very much enjoyed rereading that last week.

0:48:37.9 TW: I need to read it. I’m at 18,000 steps today, so maybe I’ve done something right.

0:48:42.6 VK: Humble brag.

0:48:45.6 TW: Yeah. I’ve not read that, though, so I want to go check it out.

0:48:48.3 Michael K.: It’s a long one, so carve out a good 45 minutes for it.

0:48:53.3 TW: Okay, I can do that. Val, what’s your last call?

0:48:56.6 VK: Yeah. So like I alluded to earlier, I'm very much in 2024 strategic planning mode. And so some fodder that we got ahead of a meeting, actually through Search Discovery, was an HBR video that features Roger Martin, who I wasn't familiar with before, but I have gotten into a lot of his content since. He is a former dean of the Rotman School of Management at the University of Toronto, and he has a video, which we'll obviously link, called A Plan Is Not a Strategy. It's really interesting the way that he breaks it down, talking about strategy as a theory: a set of choices that put you on a playing field in a way that ensures you win. And I really like the way that he describes why you fall into the trap of planning, why it's so comfortable, and why strategy is so hard, along with some frameworks that can really help you think about what strategy truly is and how it can be different. So I liked it. It was a good way to think.

0:49:47.6 TW: That’s wow. Interesting. And Moe, what’s your last call?

0:49:53.6 MK: It’s a weird one. I’m still ruminating on it. So it was an article in Time Magazine called The Case for Mediocrity. Sorry, I always butcher that word. And it was… It goes through American ambition and wanting to be the best you can possibly be. It goes through the Great Resignation and quiet quitting and all that stuff. But then it also goes through a lot of the research about people working shorter days, part-time hours, four days a week. And the link to happiness or being more satisfied with life. And basically comes up with this concept of, maybe part of the problem is that we’re striving for greatness when we should be happy with good.

0:50:37.4 MK: It's a really interesting article because the author also goes into how can being good be okay when we have so many systemic issues that we want to tackle as a society, like racism and sexism and all that stuff. But, also, how do you personally, as an individual, possibly benefit? It was just… I don't know if I agree with it, but it just makes you reflect a little bit on your life and the hours you do and the tech burnout space that many of us float around in, and the agency burnout space that is true for some of you. Yeah, I just thought it was good to get you thinking about your priorities. What about you, Tim?

0:51:23.9 TW: So, dear…

0:51:24.2 MK: Hmm?

0:51:24.3 TW: Well, I just said so, dear listeners, if you start to notice a tapering off of the quality of this show…


0:51:28.9 TW: Just know we had a group reading assignment we all decided to start. Just 30% mailing it in from here on out.

0:51:36.4 MK: Ah, shit.

0:51:40.7 TW: No, that's intriguing. So, my last call: I have definitely had a last call in the past recommending Katy Milkman's podcast, Choiceology, overall. I think I've probably recommended specific episodes, but I'm gonna do it again. She had an episode called Jumping to Conclusions, which I think winds up tying a little bit into this discussion we just had, because it talks about our tendency to basically under-respond to strong signals and over-respond to weak signals, and how there's been a lot of research done in that area, which I think maybe sparked some of my… what's really driving whether we want to make decisions.

0:52:25.0 TW: But I just love the format of her podcast. It kind of tells a story from history, and then goes to, usually, an academic who's done a lot of research in that area. Sometimes it's not the strongest of strong links, but… I'll even throw a hat tip to Charles Schwab. They sponsor that podcast, and there's just the lightest of possible plugs for Charles Schwab at the end. The content is just really, really good. So with that, this has been a fascinating discussion. I've been looking forward to it, and I think it exceeded my expectations. I'm certain it did. Stop. Stop it. Stop using the word certainty. But, Michael, thanks so much for coming on. We kicked around different ideas. We had a few little bumps along the way of making this happen, but thanks so much for coming on the show.

0:53:22.5 Michael K.: This was a blast. Thanks for having me. I would love to do another two or three hours on this, or eight to 12, depending on what y'all are up for, whenever y'all want.

0:53:30.5 TW: Ditto.

0:53:32.0 TW: And I will make a recommendation to go find Michael on LinkedIn and follow his stuff. As I said at the top of the show, he does produce a lot of content and has lots of things worth checking out. You're also on the X, the Twitter, I believe.

0:53:50.4 Michael K.: I still call it Twitter.

0:53:52.2 TW: Okay. Is that @Mike_Kaminsky?

0:53:55.1 Michael K.: That’s me.

0:53:56.5 TW: People just have to figure out how to spell Kaminsky. So…

0:54:00.4 Michael K.: Exercise left for the reader, as we like to say in the academic world.

0:54:05.5 TW: That’s pretty nice. I’m going to throw in… Since I’m actually moderating this episode, I’m gonna throw in one of our rare plugs to leave a rating and review of the show. If you’ve enjoyed this discussion or past ones, we would love to get your rating and review on Spotify or Apple or wherever you listen. So I’ll put that plug in. No show would be complete without thanking our erstwhile, is that the right use of the word? Our wonderful producer, Josh Crowhurst, who will make this come together and not be mediocre. And we would love to hear from you. We’re on LinkedIn. We’re all findable as is our page for the show on LinkedIn, on Twitter or X, on the Measure Slack.

0:54:56.7 TW: The Measure Slack has been doing a lot of work to get through the backlog of requests to join. So if you have wanted to explore it, or have signed up and wondered what happened, you can go to join.measure.chat and fill out the form to get added to the Measure Slack. That continues to be a large, vibrant, and growing community. So with that, I think I've hit all the final housekeeping. So regardless of whether this episode has made you more confident or less confident about how to work with your business partners, I know for myself, for Moe, and for Val, we can say we're absolutely determined and positive that you should keep analyzing.


0:55:37.6 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group and the Measured Chat Slack group. Music for the podcast by Josh Crowhurst.

0:55:55.6 Charles Barkley: So smart guys want to fit in. So they made up a term called analytics. Analytics don’t work.

0:56:02.5 Kamala Harris: I love Venn diagrams. It’s just something about those three circles and the analysis about where there is the intersection.

0:56:12.4 TW: Okay. And finally, if you’re comfortable, we’d love to get your mailing address. We have a little small token of our appreciation for having you come on the show.

0:56:24.3 Michael K.: The pleasure is all mine. I’ll type it in. It takes years for things to get here. So don’t send anything live.

[overlapping conversation]

0:56:31.7 TW: I’ve sent things to Australia. I’ve sent things to New Zealand.

0:56:34.5 MK: We should start sending live stuff. That’s way more exciting.

0:56:37.6 TW: Yeah. When we sent it, it was a hatchling. How large was… How many people would the chicken serve by the time that it arrived?

0:56:50.5 Michael K.: I used to be in New York. I actually now live in Mexico.

0:56:53.5 MK: That is way more fun.

0:56:54.0 TW: Okay, then.

0:56:55.1 Michael K.: It is way more fun. That’s why I live here. It’s wonderful.

0:56:58.0 MK: And do they actually have good tacos?

0:57:00.9 Michael K.: Incredible tacos everywhere.

0:57:04.1 MK: Great.

0:57:04.6 Michael K.: There’s a stand right on the corner. I can go any day. Del Taco. They’re amazing.

0:57:09.1 MK: And do they do the good corn tacos, not that flour crap?

0:57:12.8 Michael K.: Well, it depends on what… You can get flour tacos as well. Flour tacos are in the north of Mexico. Mexico City is really corn taco land.

0:57:21.5 MK: Tim’s actually going to murder me. He’s going to jump through the computer to kill me, aren’t you?

0:57:27.1 TW: What? No, you’re fine.

0:57:28.7 MK: Oh, okay.

0:57:29.7 TW: Well, yeah, 'cause I was raised in the south of Texas, so I was raised on flour tortillas. So rock flag. And I just… I don't know.

