Hey there, mister. That’s a mighty nice multi-touch attribution model you’re using there. It would be a shame to see it get mixed up with a media mix model. Or… would it? What happens if you think about media mix models as a tool that can be combined with experimentation to responsibly measure the incrementality of your marketing (while also still finding a crust of bread in the corner for so-called “click attribution”)? According to a 2019 paper published by ThirdLove (which happens to have been Michael’s last call on our last episode), that’s a pretty nice way to go, and we thought it would be fun to see if we could raise Tim’s blood pressure by giving him something to vigorously agree with for once. It was.
0:00:05.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally, and sometimes with explicit language. Here are your hosts, Moe, Michael and Tim.
0:00:58.6 MH: Okay, it’s not that dramatic, but for the better part of 15 years, the digital analytics industry has had a fever, and the only answer was more attribution, or something like that, and it’s sort of become a space that’s changing. And recently, actually in last episode’s last call, we ran across a white paper authored by Erica Mason when she was at ThirdLove, about how they used and set up media mix modeling. And we thought, “You know what?” as we were talking about it amongst ourselves, this is the perfect kind of conversation for what this podcast is, which is a conversation between analytics nerds that might happen at a bar or at a conference, and so here I am, beverage in hand, Michael Helbling. I’ve avoided all things media analytics for so long, and it’s gonna be fun to return.
0:01:50.9 MH: Moe, you lead marketing analytics at Canva, so this is your area. I’m excited to talk to you about this.
0:01:57.8 Moe Kiss: So you think I’d be totally, totally qualified to discuss this, right?
0:02:01.9 MH: So what’s the answer? Go. Well, I think you bring more credibility than I do, so we’re stepping it up. And then Tim Wilson, you actually help companies do this in your role as Senior Director of Analytics at Search Discovery, so I think you’re…
0:02:16.3 Tim Wilson: I feel like I more battle companies with trying to not think about it in the simpleton way that the digital world wants to think about it, but not particularly successfully.
0:02:28.6 MH: Well, let me just say, I’m excited to talk to you both about it, and… So let’s get started. So I think maybe, Tim, we should kick it off with you, maybe just giving a little overview of what the white paper is, and I think that’s probably the best place to start, and then we can kind of dig into different parts of it. And I think we’ll go a little beyond the white paper too, but it was just a good conversation that we’ve been having, so I figured we keep having it.
0:02:54.1 TW: Sure, and I will say there’s a big risk of confirmation bias, ’cause Michael, you had shared this with us right as I was ramping up to do a presentation, and I’d already done another version of it for the Search Discovery Education community, so there may be some confirmation bias going on in this, because I feel like Erica, a couple of years earlier, the author of this paper, with much more kind of precision, hit a lot of what I was starting to kind of come to, so what the… To me, the highlights are that she basically breaks it down and says, “Look, you’ve got… ” But I can’t remember what she calls it. Traditional multi-touch attribution, that’s what I wind up calling it. She calls it click attribution.
0:03:37.0 MH: Click attribution.
0:03:37.5 TW: And then she actually says it includes view-throughs. So there’s click attribution, that’s what I think most people, when we say multi-touch attribution, with people who are thinking about digital, they’re thinking, “Yes, that’s my pixel firing,” or, “That’s Google or Adobe telling me, last-touch, first-touch, algorithmic,” all of that stuff is click attribution. Then she says there’s experimentation. We’ve done episodes on randomized controlled trials, field experiments, media experiments. And then there’s media mix modeling, and I even love that she described it as linear regression, which is how I sort of describe it as well. And it’s funny because what she does is a little bit of a takedown of click attribution, because when you take something like a CPA, Cost Per Acquisition, she says, “Yes, there’s cost per acquisition,” we hear CPA, but then there’s also…
0:04:32.1 TW: Really, what you want is the cost per incremental acquisition, which is kind of… That was the aha to me a couple of years ago, which I’d been kind of struggling to figure out how to articulate, which is, if you turned off all of your marketing, as an extreme example, would your revenue go to zero? And it wouldn’t. If you turned off paid search, would all of the orders that you attributed as a CPA through paid search have not happened at all? And that is this delusion that the digital marketing world has kind of been labouring under: that if I had a touch… If I had a click or had an impression, and that person went on to purchase within a given amount of time, they deserve some credit. And that is fallacious thinking. There will be people who would have purchased anyway, you just happened to intercept them along the way. The way that Joe Sutherland explained it, or I’ve now ripped it off and use it as well, it’s like standing at the grocery store right at the checkout, you see somebody coming along with a big basket full of potato chips, and you run up to them and say, “You should buy potato chips,” and then they buy potato chips, and you say, “Aha, I marketed to them. How much credit do I get for that?”
0:05:51.0 TW: So those three groups, I think, are great, ’cause they are all trying to answer that question, like, what is the value that’s delivered? And then what she points out is that click attribution does tend to focus on some flavor of the CPA, or what is the total value we’re attributing? And so the paper actually goes to… Well, really, if you combine mix modeling with experimentation. Mix modeling is saying, “Don’t worry about tracking… ” That’s the other issue with click attribution, is that you’re trying to track each touch of a user, and that’s getting harder and harder and harder, whereas mix modeling and experimentation don’t rely on that. So mix modeling’s saying, “Take all your historical data. When your spend on paid search went up, how much did your revenue go up?” The challenge, which she calls out, is you tend to ramp up spend on multiple channels at once, and ramp down spend, so you’re… I think she says something along the lines of, your model is only as good as the data you feed into it. I tend to throw that out as, like, yeah, in a perfect world I’d like to get a bunch of monkeys drunk and let them just randomly control my marketing spend for six months, and then I’d have this nice bit of randomized data going in and I’d build a great model, but that’s a terrible idea.
0:07:21.6 MK: But that’s not reality.
0:07:23.8 MH: Right. Well, it depends on what agency you’re working with.
0:07:26.9 MH: Oh.
0:07:26.9 TW: Yeah.
0:07:29.5 MK: But I mean, that is one of the problems, right? Is like you need good randomized data and you need historical data.
0:07:36.8 TW: Right, and that’s like… She says, and that’s typical of mix models, like, “Let me look at… ” And they go back, they pre-date digital, and they say, “Let me take the last two years of your data, let me have a couple of cycles of your annual seasonality.” So she points to that, she says these are not… They’re not perfect. They’re not gonna be super timely, but they do actually try to get at incrementality. If you think of a regression model, they’re saying, “What is each one of these channels… Holding everything else constant, an increase in this channel would give me what increase in revenue?” And so she kind of talks about this blending, of saying, “Where does my mix model tell me I have the most uncertainty? Or where am I spending the most money? What are the biggest risks?” And kind of be rolling out experiments, randomized controlled trials. She actually gets a little more nuanced and says, “You can take… ” People tend to think mix modeling, “Oh, let me just take the data, plug it into the model, it spits out the result,” which is like people who think multi-touch… “Oh, if I do algorithmic, it’s totally objective.” Like, no, you’re making decisions and tuning things.
0:08:48.3 MH: Yes.
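The regression Tim is describing boils down to ordinary least squares on weekly data: revenue against spend by channel, where each coefficient estimates the incremental revenue per extra dollar in that channel, holding the others constant. A toy sketch, with invented channel names and numbers (not ThirdLove’s actual model):

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved with Gaussian elimination. X includes an intercept column."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Weekly observations: [intercept, paid_search_spend, social_spend]
X = [[1, 10, 5], [1, 12, 4], [1, 8, 6], [1, 15, 5], [1, 11, 7], [1, 9, 3]]
# Revenue manufactured as 50 baseline + 3 * search + 1 * social,
# so the fit should recover those numbers exactly
y = [50 + 3 * search + 1 * social for _, search, social in X]

baseline, per_search_dollar, per_social_dollar = ols(X, y)
```

The “baseline” intercept is the revenue the model says you’d keep if spend went to zero, which is exactly the “would your revenue go to zero?” point above. Real MMMs layer on adstock, saturation curves, and controls, but the core idea is this regression.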
0:09:27.7 MH: So before we move on though, hold on, I wanna ask one question just to clarify something I think I connected while you were talking, Tim, which is, okay, I need historical data or data to get… I’m just making sure everybody understands what I…
0:09:43.1 TW: Either one, depends on which hemisphere you’re in, yeah.
0:09:45.2 MH: To be able to do the modeling. If I don’t have that, can I start with experimentation then first, as I move towards having enough historical data? Is that a fair approach? So like, there’s a lot of companies out there who are like, “Well, we don’t have three years of data or two years of data to start with.” Could you start them by doing experimentation to learn things about different things while you’re building up the data necessary to get there? Is that logical?
0:10:16.7 TW: I think… ‘Cause you can even have… You could say, we’ve got three years of data, but oh my God, we’re on a growth path that is insane.
0:10:23.1 MH: Sure.
0:10:23.6 TW: You know, it’s not…
0:10:24.5 MK: Gee, I wonder who might have this exact problem.
0:10:27.8 MH: Yeah, exactly.
0:10:28.8 TW: I think that’s a challenge with mix modeling, the market, the brand. Yeah, so those are challenging. I don’t know that you can just start with… You kind of have to start with guessing, to get way out over my skis and say it’s like a Bayesian approach, you gotta start with your… Like, do what you think makes sense, do research, where are your customers, where can you target them with your message? You gotta start doing something. And then, to me, the experimentation is, where am I taking the biggest bets, where do I have… Doing direct response, paid search, non-branded terms, yeah, you’re probably… Depending on what you’re selling, probably pretty solid. But if you’re saying, “I wanna try podcast advertising” or “I’m gonna roll some crazy bit of TV,” well, okay…
0:11:19.4 MH: Podcast advertising works, Tim, just FYI.
0:11:23.9 MK: Absolutely. [chuckle]
0:11:24.1 MH: Yeah, there you go. We’ll just put that out there. No supporting data at this time, but… Oh, sorry, go ahead, Moe.
0:11:34.2 MK: So I’m kind of, like, in my head, we can go one of two ways. And we probably will get to all of the ways over the course of the episode, but we can go into the technical component and some of the complexities, but the thing that I’m struggling with, because we are dealing with exactly this at my work: We built multi-touch attribution. We somehow got people to use it. We’ve now decided we’re going with a version of MMM. We’re calling it MMM Lite, because we don’t have enough historic data for all of our markets and all of the tactics and things that we’re doing next year. The issue to me is not the methodology, the issue is the stakeholders. What we’re talking about is very complex, and the idea, Tim, that you go into a market that you’ve never spent a dollar in, and you go, “Oh, you just gotta make some Bayesian guesses,” like, how do you say that to a marketer that needs to optimize in channel, with a weekly read-out, to know whether they should be moving money based on tactics or campaigns? This is complicated.
0:12:47.8 TW: Yeah. But I think there’s a couple of parts. And one, I guess to be fair, the ThirdLove paper doesn’t completely dismiss click attribution. It says, “Look, if you need to do very, very short timeframe movements, that may be the best you’ve got.” To me, what’s infuriating is that… Stepping way back, marketers think if they have all the data, the answers will just magically emerge. It gets back to the fundamental level of, marketing is messy, and data doesn’t give answers, data gives directional information with a ton of uncertainty. And we are not, as a… Marketers in general are not accepting of the level of uncertainty that we can put in. That’s part of the reason they like mix models. Or the way I think about it, if you think, again, of a simple multiple linear regression mix model, if you’ve looked at the results of a linear regression, you’ve got the little stars next to each component that are saying, “Do I have statistical confidence around each one of these?” Because I could throw complete garbage noise into a mix modeling tool and it will generate a mix model, and it will come out saying, “This model is crap. It is predicting 2% of anything.”
0:14:19.3 TW: It’s garbage. But you know what? I’ve got coefficients. And so, marketers will completely disregard any sort of communication that says, “This is a model,” but that goes beyond mix models, it goes beyond anything. “But I put a million rows of data into it and you gave me a model, so it’s… That’s like golden truth,” right?
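Tim’s point, that a fitted model always hands back coefficients even when the fit is garbage, is easy to demonstrate with a quick R-squared check. A hedged sketch with invented numbers:

```python
from math import sqrt

def r_squared(x, y):
    """Squared Pearson correlation: the share of variance in y that a
    straight-line fit on x would explain."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

spend = [10, 12, 8, 15, 11, 9, 14, 7]
revenue_signal = [50 + 3 * s for s in spend]      # spend really drives revenue
revenue_noise = [61, 88, 73, 70, 95, 66, 74, 90]  # unrelated to spend

high = r_squared(spend, revenue_signal)  # near 1: coefficients worth reading
low = r_squared(spend, revenue_noise)    # near 0: "this model is crap"
```

Both fits would happily report a slope and intercept; only the fit diagnostics (R-squared here, the significance stars in a regression summary) tell you whether the coefficients mean anything.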
0:14:38.5 MK: I don’t think it’s disregarding though, I think it’s potentially that they don’t understand it. And I think the expectation that they understand it is maybe not unfair, but is it necessarily their job to understand how an MMM works? I feel like they should be educated about it, but I don’t know.
0:15:07.0 TW: I’m not… I don’t know, this is where I’m like, I’m on a kick where I’ve now presented multiple times. To me, the example of my little… The potato chip checkout. Joe used to do it with the case of Coke at the checkout. If analysts aren’t prepared to communicate with the marketers and to help them… Marketers do need to be receptive, that… Don’t say, “Don’t bother me with those details, analyst. If you can’t come and give me the answer, then just don’t talk to me.” Well, no, that’s the marketer’s fault. That’s bad marketing.
0:15:41.7 MK: That is, but I have not had a lot of marketers like that, to be fair.
0:15:46.4 TW: I’ve had plenty of marketers who… I’ve had plenty of clients who have said, “We’re gonna do… I need you to take the data and tell me which one of these… ” They define the box: “Tell me which one of these heuristic models I should be using. Should I be using last-touch, first-touch, time decay, J-curve?” And it’s like, “Wait a minute. The answer is no.” The answer is, “Let’s back up and let’s understand that there are some understandable shortcomings of that.” Now, if you tell me, ’cause there’s been a dismissive… Actually, I’m thinking of one very specific person that Michael and I had as a client back in the day, who was like, “Don’t tell me… What are you analysts coming to me, telling me this is a problem? I hired you to bring me answers.” It’s like, but you’re demanding that the answer fall in a first-touch, last-touch, inverse J-curve, algorithmic world, and I need to help you understand that there is… That is, you’re working in a CPA world. Can we understand that you need to be in a CPIA world?
0:16:49.0 TW: It’s where marketers are saying, “Oh, I get it, there’s a problem, it’s third-party cookies. I need to… I’m reading this stuff, you’re right, we need to go solve that to fix my attribution.” It’s like, “Wait a minute. If you’re really trying to do attribution well, is that the best place for you to spend your dollar?” So it’s why I’m a frustrated individual in my career, perpetually, is that I do want to educate marketers. And I don’t wanna educate them into… I do wanna have them understand a little bit how a regression works. It’s not that hard to say, “It’s like the formula for a line,” and let’s move on from that. Okay, now let’s talk about where a mix model has promise. And let me explain to you. Just logically, can we understand, if we locked… All of our spend was totally constant for a year, and we watched our revenue fluctuate, can we just intuitively understand that we can’t build a model out of that? Can we talk about external factors? I think getting marketers one level deeper on recognizing… And complexity is not the right word, ’cause that makes it feel like, “Oh, if I understood the complexity, then I could have the answer,” so I don’t know. I’m not sure if I’m… It’s tough, I do not have the answer.
0:18:13.7 MH: It’s sort of… It sort of starts really simple, which, in reading through the white paper, Erica starts to lay out, “Here’s the data we’re gonna need.” It’s like, you need time series data for the outcome variable, whatever the conversion is. You need spend by channel, and then the last one is the one, Tim, you’ve just spent this amount of time talking about: any other variables outside of marketing spend we might need to control for. And for any company or business, that can really get woolly. And I think that’s… A lot of times, what holds people up is they… That part is just really hard to build any kind of comfort level within, because it’s sort of like half an answer would be better than no answer, and yet people want a perfect answer so much of the time that it stops them from even starting. I don’t know the right way to say that.
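Those three inputs, the outcome time series, spend by channel, and external controls, map onto a very plain data layout. A sketch under made-up assumptions (the channel names, the holiday and promo flags, and every number are invented for illustration):

```python
# One row per week: the outcome, spend by channel, and the "everything
# else" controls (seasonality, promos, etc.) to hold constant in the model.
weeks = [
    {"week": "2021-W01", "revenue": 120.0,
     "spend": {"paid_search": 10.0, "social": 5.0, "tv": 0.0},
     "controls": {"holiday": 0, "promo": 1}},
    {"week": "2021-W02", "revenue": 95.0,
     "spend": {"paid_search": 8.0, "social": 6.0, "tv": 0.0},
     "controls": {"holiday": 0, "promo": 0}},
]

CHANNELS = ["paid_search", "social", "tv"]
CONTROLS = ["holiday", "promo"]

def design_row(w):
    """Flatten one week into [intercept, spends..., controls...] so it
    can feed a regression."""
    return ([1.0]
            + [w["spend"][c] for c in CHANNELS]
            + [float(w["controls"][c]) for c in CONTROLS])

X = [design_row(w) for w in weeks]
y = [w["revenue"] for w in weeks]
```

The hard part Michael is pointing at is not this structure; it’s deciding which control columns your business actually needs and accepting that the list will never be perfect.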
0:19:10.0 MH: Okay. Let’s step aside for a brief word about our sponsor, ObservePoint.
0:19:14.7 TW: Well, Michael, are those some new ObservePoint socks you’re sporting?
0:19:19.0 MH: Indeed, they are. You and I got to actually see some ObservePoint folks, IRL, as the kids say, when we were at the DAA OneConference last month.
0:19:29.7 MK: Oh man, I’m jealous. But I’m guessing that they demoed the product and talked about how their platform automates the auditing of data collection across entire websites to ensure that tags are firing properly. And I’m guessing they also showed you how you’re able to test and monitor important pages and user paths for functionality and make sure that you’re collecting accurate data.
0:19:50.8 TW: Of course they did. And they showed how their alerts can be configured to notify you immediately if something goes wrong, while also tracking your data quality and QA progress over time. But they also showed their privacy compliance capabilities, which are pretty slick when it comes to auditing all the tech that’s collecting data on your site, so you can ensure compliance with digital standards and government regulations for customer data.
0:20:13.7 MH: Yeah, that was…
0:20:14.1 TW: It is.
0:20:14.7 MH: Yeah, that was really cool. So if you’d like to learn more about that or any other of ObservePoint’s many data governance capabilities, go request a demo over at observepoint.com/analyticspowerhour. Alright. Let’s get back to the show.
0:20:34.9 MK: The bit I’m struggling with as we grapple with this is that I don’t know how… Like, let’s say you’re gonna use MMM plus experimentation as your gold standard. That’s how you’re gonna inform your decisions at a… Like, a very broad level, right? How do you help the tactical person? Like, how do you help the person that’s doing a particular type of campaign in Facebook week on week? Like, that I don’t know how to answer.
0:21:12.7 MH: Well, you give them the metric to optimize against, which is some combination of the cost per acquisition and the cost per incremental acquisition. Ideally, you move towards incremental and use that as a standard probably.
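The difference between the two metrics Michael names is just a different denominator. A hedged toy calculation (the baseline would come from a holdout test; every number here is invented):

```python
def cpa(spend, attributed_conversions):
    """Cost per acquisition, as click attribution reports it."""
    return spend / attributed_conversions

def cpia(spend, exposed_conversions, baseline_conversions):
    """Cost per *incremental* acquisition: only conversions beyond what
    a holdout (control) group says would have happened anyway count."""
    return spend / (exposed_conversions - baseline_conversions)

SPEND = 10_000
looks_like = cpa(SPEND, 500)       # what the channel report shows
really_is = cpia(SPEND, 500, 400)  # only 100 conversions were incremental
```

With these numbers the channel reports a $20 CPA while the true cost per incremental acquisition is $100, which is exactly the potato-chips-at-the-checkout gap.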
0:21:27.6 TW: Well, but let me challenge the premise a little bit. If I look in the hierarchy of an org chart, and we’d go from the CMO down to… Say it winds up being 40 people who are all optimizing these little tactics, if we start by saying, how do we help those 40 people who are optimizing those little tactics, and we never stopped even to ask the question, is Facebook worth a shit? Like, really, I want you to go and do whatever you feel is best and is most logical within Facebook, from a timing, from the type of ads, from the messaging. Just do whatever you think is best maybe for two years and then we’ll feed it into our MMM. Or we say, okay, do that. We’ve committed a certain amount of spend, how much are we spending on Facebook?
0:22:18.5 TW: I want the CMO to be saying, “I’m spending 20% of my marketing budget on Facebook.” Let me just assume that it may not be perfectly optimized, but I’m assuming I don’t have a complete idiot like advertising dog food when I’m trying to sell shampoo. So let’s assume that it may be suboptimal, but it’s logically the marketer based on experience, hearsay, articles, whatever is doing, or the agency is doing a reasonable job, but I’m spending 20% of my marketing on this. Let me actually figure out whether… What the incremental… The CPIA for Facebook is. And those questions don’t get asked. We get sucked down into the, “How do I tune for my keyword, for my creative, for my messaging?” And I think that is a… It’s flippable, but it’s, we got to get to director level and above saying, answer the big questions. You’ve already… You’ve gone away. You’re not using display anymore because it didn’t have a ROAS that was acceptable because you were calculating using click attribution and it just didn’t show up because it’s top of the funnel. But you know what? You’re a senior person who thinks there may be value here. Okay, well, let’s do an experiment to introduce those new big channels.
0:23:40.9 TW: So that, I mean, we get sucked into the… But it’s like the developer who says, “I’m gonna make the tagging… You don’t need to do a big tagging plan, I’m just gonna tag everything on the website, in super minutiae, and then you can roll it up.”
0:23:54.0 MK: Yeah, I know you’re right, Tim. Soon as you start talking, I know you’re right.
0:24:00.5 TW: But it’s not easy to elevate that discussion because you’ve got 35 or 40 people who they’re saying, but what do I do? It’s like, you know what, keep doing what you’ve been doing.
0:24:08.3 MK: Do you know why it’s actually hard to elevate the discussion is because then you also have the analysts that have put their blood, sweat and tears into building an MTA, into building dashboards, into like making sure that it’s a well-oiled machine, and then suddenly, you’re saying actually also to your senior stakeholders, we’ve put blood, sweat, and tears into this for two years and now we’re saying it sucks and you shouldn’t use it. Like…
0:24:37.1 TW: You know, we have… I believe we call that the sunk cost fallacy.
0:24:40.9 MK: I am aware of what it is, but it doesn’t make it any easier to handle it politically.
0:24:46.4 MH: And the only encouragement there is that the entire industry…
0:24:51.8 MK: I know. I know.
0:24:52.4 MH: Has wasted their time on this for years and years and years. And only the three of us really have known this entire time. It was a huge waste of time. I’m just kidding.
0:25:01.0 TW: There are amazing Freakonomics Radio episodes. But it is because, even for the CMO, if they do it and say, holy crap, let’s say Twitter. Twitter’s actually… We can’t detect… It is, we are burning money on Twitter, but we’ve been burning it for two years, and then they’re gonna go turn it off. And the positive message is, we found a way to better invest, but there’s gonna be that human-nature fear of, oh my God, the CFO’s gonna say, “Why the hell did you wait for two years and not find this? What are you doing?” So that is a… It is the tragedy of, if I could go back 20 years, I would wanna be screaming this at people then, but we have…
0:25:50.1 MK: Question…
0:25:50.6 TW: It is so entrenched. Yes.
0:25:53.0 MK: Sorry, I…
0:25:54.5 TW: You set me off on another one.
0:25:55.7 MK: Oh, again. Actually, this one’s really gonna set you off. So I…
0:25:58.2 MH: This one’s probably for me, Tim. This one’s…
0:26:00.6 TW: Okay.
0:26:02.2 MH: Probably just for me. No, I’m just kidding. [laughter]
0:26:02.5 MK: Well, actually, this is not about stakeholder stuff. This is something that has been turning in my head that I feel like people have been asking and I’ve kind of avoided answering because… Anyway, I wanna hear the team answer. Why couldn’t you use RCTs and then use those results to better tweak an MTA? I know there’s all these problems with… You’re taking results from an RCT and then you’re applying them broadly, yadda, yadda, yadda, but like… Go.
0:26:38.5 TW: No. I think I just… To me, it’s theoretical. Maybe this is probably why I got so excited about the paper in particular, because ThirdLove was actually doing it. I think the challenge gets to, if you can run through and do some experimentation and say, “Okay, with some RCTs, it appears that when I use this specific heuristic model, it’s ballpark.” I still want the marketers to understand that it’s like, this is based on history, and it’s directional. The heuristic model or the algorithmic model that’s the closest, but I guess it didn’t map perfectly. It is not a… You know what? Evidence says, if you’re gonna have to pick something day in and day out and you’re trying to equate it to clicks… I think it would be fine if an organization ran and said, “That’s what we wanna do, is we wanna do enough experimentation to try to figure that out.” But I guess there’s the other big… The weird piece. When you’re doing experimentation, to me, it draws you to the bigger questions, which are a little more recognizing the reality of marketing. With click attribution, like, how many things do we buy, outside of super low consideration, where we do one search, we click through on it, we make the purchase?
0:28:14.1 TW: We know that an MTA is supposed to be multi-touch attribution, it’s supposed to combine these different touchpoints, but then we wanna put a value against the contribution of a single touchpoint, and that’s always gonna wind up missing the incrementality. I think it’s gonna always kind of over-credit it. I’m not opposed to say, I get the… And even in that paper, she says that you’re probably still gonna have to use the click attribution. To me, my extension of that is, you’re gonna have to use it for the small, narrow stuff, but the big fallacy of, “Oh, if we really do that more and better, we have all this detail, we just pivot it and roll it up, and it gives us the answers to the big questions.” And it doesn’t. This is like, I haven’t figured out the perfect analogy for that, other than trying to explain the problems with the click attribution. But I also think the problem… Because we obsess about the problem of third-party cookies and privacy, and harder and harder to track, and ad blockers. And that leads the industry down the path of saying, ah, so our priority needs to be to work around those, because that will then get us to better data.
0:29:44.6 TW: And it’s like, is that really the best way for an organization to spend their analytical investment time, or would they be better served trying to answer some… Figuring out which… No organization is advertising in every channel, which makes me feel like there are channels that have never been considered or tried because, “Oh, we’re not gonna advertise in that offline channel because we can’t get click attribution, therefore we can’t measure it. We’ll never know the value.” Those sorts of things come up.
0:30:18.7 MK: You know what? If someone said to me, “We can’t measure it, so we’re not gonna do it,” I actually would applaud them. I get the opposite, which is like, ah, we can’t measure it. Let’s just do it anyway. So…
0:30:32.4 MH: Well, that’s the realm of brand advertising.
0:30:35.5 TW: But if they believe that the measuring… If they’re saying, “We can’t measure it because we can’t track an impression at a user level and can’t track a click… ” Right, and that’s problematic. That’s a whole other…
0:30:50.2 MK: I’m being facetious.
0:30:51.4 TW: Rant I have about the… I think people who have… If you have a really smart idea… Well, you should… Let’s not make… Maybe that’s it. When they say, “Well, we can’t measure it,” it’s like, time out. Maybe you can’t measure it in Google Analytics, but that doesn’t mean you can’t measure it. Even awareness stuff, that one’s driven me berserk for years. “Well, that’s awareness, and we can’t measure awareness, so we just count impressions.” Well, impressions isn’t… You can measure awareness.
0:31:23.5 MH: You can measure awareness.
0:31:24.4 TW: You can measure lift. A lift study is a measure of the incrementality in awareness of an investment. Probably not at an individual banner ad or creative level, it’s gonna be at a campaign level. And if you…
0:31:38.2 MH: Market share.
0:31:38.9 TW: Wanna measure it, you can. Oh man, I get wound up.
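The lift-study arithmetic Tim is pointing at is simple: hold out a random control group, survey both groups, and the difference in awareness rates is the incremental lift of the campaign. The survey numbers below are invented for illustration:

```python
def awareness_lift(exposed_aware, exposed_n, control_aware, control_n):
    """Absolute lift: exposed awareness rate minus control awareness rate."""
    return exposed_aware / exposed_n - control_aware / control_n

# 42% aided awareness among people who could see the campaign,
# 35% in the randomized holdout that couldn't
lift = awareness_lift(exposed_aware=420, exposed_n=1000,
                      control_aware=350, control_n=1000)
```

A real study would also put a confidence interval around that 7-point lift, but the core measurement is just this subtraction, no click or impression tracking required.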
0:31:43.7 MK: I feel like this is like your real soapbox topic, like ride or die.
0:31:50.8 MH: So I think it’s because you’re usually pretty right about this stuff, Tim, which is why we like you. I think there’s another piece to this, and actually, it reminded me of, I think, how I found the original thread that led me to this white paper in the first place. But this is the other thing: the thread I found was written by a guy named Michael Taylor, and I could probably do a last call about a tool he’s working on, but he was talking about it from the perspective of Facebook, and Facebook is going in on media mix modeling as the thing they’re gonna be doing, and Google is modeling conversions now too. And this is something where I…
0:32:29.5 MH: I’m priming the pump, I’m priming the pump. And this is what I think, it’s Dr. Elea Feit, who was a guest on our show. I think that’s how I found the original thread. She basically asked the questions. She’s like, “Whoa, whoa, whoa,” basically like, “Should this be measured by these people? The people who sell you ads, should they be the ones putting together the mix model?” And I was like, “Yeah, it seems like a lot of companies are just gonna take some of these ‘models’ that come from Facebook and Google and just run with them, but it seems like the smarter marketers are probably gonna need to do some work of their own to build their own versions of models and do their own work on this.” I’d like both of your perspectives on this, ’cause I’m curious what the future… The next few years might look like in our industry.
0:33:17.4 MK: Do you want me to go first?
0:33:19.1 TW: You want to start, Moe?
0:33:19.2 MK: Yeah.
0:33:22.2 MH: Yeah, you better go first, Moe, you might not get a word in.
0:33:23.9 MK: Yeah. That’s also true. I suppose I also am the only person that’s in-house at a company that’s trying to figure out all this shit while also having the crazy hyper-growth that we’re having, and we do have a tendency to build everything in-house. We’re actually… For… We’re calling it MMM Lite, I think I mentioned that. We have decided to purchase a tool, but I sure as shit wouldn’t be purchasing something from Facebook or Google. We’re using an independent third party where we have a lot of detail about their methodology and can make a lot of decisions, and, I don’t know, maybe one day we will decide to bring it in-house. It wouldn’t surprise me.
0:34:06.6 TW: Oh, we’re just gonna let it go that…
0:34:10.2 MK: I feel like you need a [0:34:10.5] ____ white top.
0:34:12.3 TW: Well, no, and partly, it’s more that the door opened behind you and Jay’s arm came in, ’cause he… Again… So the fact that Michael and I caught that he snuck in, and the person actually in the room did not, does not bode well if an axe murderer came to your house.
0:34:29.0 MK: I’ve got headphones on.
0:34:30.5 MH: She was mid-answer of a really important question, so…
0:34:34.8 TW: Well, so it’s interesting, ’cause Google specifically has, a month and a half ago now at this point, said, “Hey, good news,” and it’s this one paragraph, and they’re like, “We’re gonna do data-driven attribution by default,” but that’s still click attribution. But then they sprinkle in language and they say two things, I don’t have it right in front of me, but they’re basically like, “We used to have a minimum for how much data you had to have in order to do data-driven or algorithmic modeling. We’ve removed that limit.” Well, you can’t just wish away the need for data to feed into a model, so what the hell they mean by that, I don’t know. And then they say, “And we’re introducing experimentation, and we’re gonna factor in some of that as well.” Again, with no detail. And so I’m like, okay, if somebody happened to, because they’re delusional, listen to what I am saying, and they’re like, “Well, this is great. You said, Tim, they should be bringing in experimentation.” It’s like, but you can’t just say that you’re bringing it in. How does that even work?
0:35:40.8 TW: So that’s kind of part of the ramp, but the others, to your point of… I don’t know that it’s… I don’t necessarily… Oh, I totally distrust Facebook. I don’t necessarily distrust Google, and Google for years has done for clients that are large enough, and depending on the accounts, they will do kind of some of the experimentation, but it’s… The people I’ve talked to, the handful who have been in-house who have done that have said, “You know, surprisingly or not surprisingly, the result of Google was you should spend more on AdWords,” which I don’t think is Google being nefarious, I just think that those platforms always are gonna have a bias to seeing the entire universe as being their universe.
0:36:27.4 MK: Yes.
0:36:27.7 TW: You know, what are we involved with? I want marketers to think a little bit more broadly: what are competitors doing, what’s demand in the marketplace, what’s my offline stuff that, you know what, I don’t have integrated into the grand Google world. So the way they define their ecosystems is a little challenging.
0:36:50.2 MH: Yeah. In my mind, I just sort of envision the timeline of, “Hey, Google’s gonna just sort of pretend that they know exactly how to model out these conversions and everything should be fine.” And then five, six, seven years go by and then somebody leaks all the documents to the Wall Street Journal or CNBC or whoever, and we all find out basically they’ve been putting their thumb on the algorithm this entire time to promote their own channels because they needed the revenue, just like Facebook has done. This is the stuff that these companies… Nobody starts out intending to do that.
0:37:24.0 MH: But that sort of seems like that’s the path.
0:37:26.0 TW: But I think it’s more that what Google is doing is they would say, given the data we have access to and our need to be able to do this at scale, we can do this at scale for you, and we check the boxes of machine learning and data science. And from all the media agencies, and a bunch of analysts out there saying, “Look, I just checked this box and it’s gonna do this for me automatically,” and you’re saying that what I really need to do is go find incredibly scarce talent to work through some really pretty involved stuff, and you’re telling me I’m gonna try to answer bigger questions. And I will maintain that if you actually ask some of those really big questions, you will probably be surprised by the results. Unpleasantly surprised, “Oh my God, maybe this isn’t delivering the value I thought.” Or pleasantly surprised, “Oh look, I freed up budget to go try new and different and interesting stuff in a more diligent way.” But that’s the other thing I haven’t explicitly said: this idea of, “We’re just gonna go execute, and then because we’re executing, we’re gonna collect data that will give us a causal link to incremental value,” as opposed to, “I’m gonna go execute. How should I execute so that I get a causal link to value?” Like, those are two different things and getting marketers to…
0:38:58.0 TW: That goes back to that question of the stakeholder saying… And I had one late last year that… She literally said, “We’re going to invest in this new channel. I don’t wanna show up four months later and say, ‘We invested in it, please tell me if it was valuable or not.’ What do I need to do now so I can quantify the value?” And even then, quantifying the value means saying, “Here’s the incremental lift, plus or minus this.” It’s welcome to the world of uncertainty.
0:39:29.4 MH: And I think there’s also another parameter to this which is the marketers themselves and the constraints they live under which is their media plan and what numbers they have to “hit” to be able to say they did their jobs and get the… So, I think…
0:39:47.9 TW: Well, they will get told by their media agency, “Oh, you can’t change that because we already… You already committed to buying this inventory, so… ”
0:39:55.8 MH: Yeah. Even Google holds people to that at certain levels of spend. It’s sort of like, “Well, you told us you’re gonna spend this many millions, so you still gotta spend it.” And that sort of confounds all of this a little bit too. So for a little while, I think you almost have to run this as a side project to how you’re actually running “the business” while you get it up and running, and then start to shift over to it over time. Because there’s a lot this stuff gets hung up on in terms of media commitments, spend commitments, planned commitments, and I don’t know that you can unwind all of that in a media mix model hackathon or something.
0:40:33.7 TW: If I could reference episode number 159, “Is Digital Advertising A Bubble Ready To Burst,” with Tim Hwang…
0:40:42.0 MH: Nice. Wow. You just had that ready to go. It was loaded in the gun.
0:40:45.9 TW: I just rapidly picked it. It is a… There’s a massive amount of inertia. His book, the “Subprime Attention Crisis,” starts to say there are… If you look, you can see the cracks. And so to me, there is an opportunity for marketers to say, “I see the cracks, I understand the cracks. What do I do to be… When this all crumbles, what’s gonna have… ” I think there’s actually gonna be somewhat sustainable competitive advantage for marketers who say, “I am going to put away some of the ways that I’ve been taught to think and think about this challenge differently.” And if you get that machine of… And no, I do not have any clients who have the mix modeling plus experimentation synergy, magically clicking along, but I think organizations that get there are just gonna start to leave their competitors in the dust, ’cause they’re gonna be… Unless they’re crazy established brands. It’s been the…
0:41:53.9 MK: Cool. Can you send an email to my founder and my CMO with pretty much like, “They’re doing the right shit?” That would be sweet.
0:42:04.9 MH: “Hello, you don’t know me. My name is Tim Wilson. [laughter] I’ve been called the quintessential analyst, and I just want you to know your analytics team is doing a great job.”
0:42:13.5 TW: I do have a podcast. Whatever you want though.
0:42:20.0 MH: That’s right.
0:42:20.9 TW: I will be equally powerless to that as I am with the entire industry.
0:42:24.9 MH: Email sent.
0:42:27.4 MK: No, I think it comes back to the problem that we as an industry, I guess, evolve every few years, and I think part of our job is constantly saying like, “Is this the right way to be doing this? Or is there a better way?” But it gets really hard because every time you, I guess, go back and look and go, “There’s a better way to do it,” you are in this tough situation of being like, “The way we were doing it before was wrong, but I still want you to trust me that the way we’re doing it moving forward is better.”
0:43:01.1 TW: Oh, my God.
0:43:01.9 MK: What?
0:43:02.6 TW: I just… I don’t know. This out-on-a-limb thought just occurred to me: this is the late Clayton Christensen, “The Innovator’s Dilemma.” The Innovator’s Dilemma has this idea that when you start doing something different, it’s worse at first… But it puts you on a different trajectory. I’ve never actually tried to think about this problem through that lens. But there are high startup costs to going an experimentation route, just like there are high startup costs to standing up an A/B testing program, though arguably probably lower for that. So, I don’t know. I may noodle with framing it in those terms, ’cause certainly, I think a lot of his ideas have borne out to be pretty solid when it comes to products and services, so maybe that’s one way to think about it. It’s gonna be worse, or it’s gonna be…
0:43:58.7 TW: It’s an opportunity cost. You’re gonna need to invest in doing this other thing that you’re going to suck at, and you’re gonna get inconclusive results and it’s gonna be painful, but boy, if you stick with it… And it’s easy for me to say, I’m a consultant, so it’s not my money. But I think that it won’t take that long for organizations that are doing it. And I do think ThirdLove is kind of in the same boat as Canva from being younger. I think it’s gonna come out of more of that space. The challenge they’re gonna have is, “I don’t have this mix modeling stuff.” The reality is, the big bloated enterprises take so long to do their mix models ’cause just getting that time series data for the last two years across all the dimensions they want is a nightmare in and of itself. So yeah. Whew! And I’ve said everything I have to say on that subject.
0:45:00.4 MH: And that’s all I’ve got to say about that.
0:45:04.5 MH: Alright, let’s step aside for the thing you know we’re gonna do, the quizzical query that presents a conundrum to our two co-hosts who go toe to toe on behalf of our listeners. It’s the Conductrics quiz. Who’s Conductrics? Well, they’re that A/B testing vendor that doesn’t promise to make everything super-duper easy, because that’s just not reality. Running an effective experimentation program is difficult, and you need a technology partner that is not only innovative, but forthright about those challenges. And for over a decade, Conductrics has partnered with some of the world’s largest companies and helped them deliver effective customer experiences, along with offering best-in-class technology around A/B testing, contextual bandits and predictive targeting. They always provide honest feedback and go above and beyond expectations to make sure you can hit your testing and experimentation goals. You can check them out at conductrics.com. Alright, so onto the quiz. Moe, would you like to know who you’re representing?
0:46:07.2 MK: Absolutely.
0:46:08.1 MH: Alright, we’ve got a listener named Chris Bingley, maybe Bingley, I don’t know how to pronounce the last name, but Chris. And Tim, you are representing Simon Mack. Alright, let’s get into the quiz. As usual, I am looking for help and in a panic. I panic a lot, usually every time the Conductrics quiz happens. So I’m looking around, it looks like we’re at a conference, and I’m hoping to spot you. Luckily, I hear Tim telling a group of conference attendees in a voice that’s very much pleased with itself, “Well, you know, the key to variance reduction in experimentation is blocking,” followed by what he thinks is Moe muttering under her breath, “Oh, good grief!” As Michael rounds them both up, he says, “Thank goodness I found you. My client is asking me about experimental designs that minimize variance, and I have no idea what that is. Can either of you help me?” Moe responds, “Sure. I actually was just about to mention to Tim the use of optimal designs, invented by Kirstine Smith in her 1918 PhD dissertation.” Tim, of course, rejoins with, “Yes, I always forget about that.”
0:47:21.1 MH: Michael, for optimal designs, it gets a little tricky, because you will need to pick from a set of optimality criteria. That is because the variance has a matrix structure. It isn’t just one value, it is a bunch of values. So there isn’t one single best criterion. It is your call what aspect of the variance structure you want to optimize for. From the following list, what is the one criterion Michael should not pick, since it is not an optimality criterion for optimal design? Is it A-optimality, which minimizes the trace of the inverse of the information matrix; B-optimality, which solves for the minimax solution of the eigenvectors of the information matrix; C-optimality, which minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters?
0:48:23.2 MK: Stop it.
0:48:24.8 MH: I’m just saying words. I know.
0:48:26.2 TW: Jeez Louise.
0:48:27.7 MH: D-optimality, which maximizes the determinant of the information matrix, or is it E-optimality, which maximizes the minimum eigenvalue of the information matrix? This is interesting ’cause I actually know the answer. Because I read the things, yeah.
0:48:45.6 TW: You do because…
0:48:46.4 MH: Yeah, I…
0:48:53.5 MH: Alright.
0:48:54.2 TW: I’m gonna say it’s definitely not C, ’cause I couldn’t even write fast enough to actually capture more than there is an eigenvector one and…
0:48:58.0 MK: Me either.
0:49:00.6 TW: Eigenvalue one. So… And Matt is just pissed off at me about block randomization, so…
0:49:05.4 MH: Block random… Yeah, I thought that one seemed a little personal, very specific.
0:49:10.6 TW: Yeah, nothing like getting a slack from Gershoff saying, “I’m gonna kill you.” So that’s always a good opener. So let me just eliminate C, because why the hell not?
0:49:20.4 MH: Yeah, let’s do it. Let’s eliminate C. So I should definitely use that.
0:49:23.2 TW: What was C? Linear something rather?
0:49:25.3 MH: C-optimality, which minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters.
0:49:33.9 TW: Oh, yeah, pretty sure.
0:49:35.4 MH: I’ve read that twice now and still have no idea what has actually been described. So you two are smarter than me.
0:49:43.2 TW: Nope. I just took a shot in the dark.
0:49:45.6 MK: Yeah, we’re all pretty much in the same boat.
0:49:49.8 MH: What I love is, a lot of times, Matt Gershoff will write these and then say to me like, “Oh, this would be an easy one.”
0:49:56.2 TW: Yeah. [chuckle] And you’re very kind to not pass that on to us too.
0:50:02.6 MH: I don’t tell you ahead of time that he says that, no, I try not to do that.
0:50:06.4 MK: I’m gonna try and eliminate A, which I did also not write down all of.
0:50:13.4 MH: A-optimality, which minimizes the trace of the inverse of the information matrix. I love it. We’ve eliminated C, we’ve eliminated A. We now have B, D and E. And I’m not gonna re-read them, but great work.
0:50:27.0 TW: Okay, I wanna eliminate the one that it wasn’t… I can’t remember if D or E had eigenvalues in it. Was that E?
0:50:35.1 MH: E-optimality, which maximizes the minimum eigenvalue of the information matrix.
0:50:40.9 TW: Okay, so I’m gonna eliminate D, just ’cause I want eigens to… Eigenvectors and eigenvalues to retain.
0:50:48.1 MH: Okay. So D you’re eliminating, so…
0:50:49.5 TW: I’m gonna try to.
0:50:51.8 MH: Yeah, so let me… Okay, so we have two options left, then B, optimality, which solves…
0:50:56.6 TW: We seriously managed to guess correctly so far.
0:50:58.1 MH: For the minimax solution of the eigenvectors of the information matrix, B-optimality. Or E-optimality, which maximizes the minimum eigenvalue of the information matrix. So yeah, it’s literally a 50/50 now between B and E.
0:51:12.4 MK: Okay, I’m gonna again fail and I’m gonna guess B, which means… As in B is the answer, which means that Tim has A as the answer.
0:51:26.7 MH: Okay. The answer is B. It’s totally made-up nonsense. Moe, [laughter] your BS detector is still 100% operational. You are the winner. Great job. I think a really great job. And Chris Bingley, you’re a winner too because Moe got it.
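For listeners who want to poke at the legitimate criteria from the quiz, here is a rough, illustrative sketch (not from the episode, and with a made-up toy design matrix) that computes the A-, D-, and E-optimality values with NumPy:

```python
import numpy as np

# Toy design matrix: 6 runs, an intercept column plus 2 factors.
X = np.array([
    [1, -1, -1],
    [1, -1,  1],
    [1,  1, -1],
    [1,  1,  1],
    [1,  0,  0],
    [1,  0,  0],
], dtype=float)

M = X.T @ X  # information matrix (up to a sigma^2 factor)

a_opt = np.trace(np.linalg.inv(M))   # A-optimality: smaller is better
d_opt = np.linalg.det(M)             # D-optimality: larger is better
e_opt = np.linalg.eigvalsh(M).min()  # E-optimality: larger is better

print(f"A: {a_opt:.4f}  D: {d_opt:.1f}  E: {e_opt:.4f}")
```

The point of Smith-style optimal design is exactly this trade-off: a design that looks best under one criterion need not be best under another, so you have to decide which aspect of the variance structure matters to you.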
0:51:42.8 MK: Yes, for Chris.
0:51:45.1 MH: Yeah, great work. Okay, that was pretty awesome. Alright. As always, we love doing the quiz. Thanks so much to Conductrics for sponsoring it. And Matt, we give you a hard time, but you write a pretty decent quiz question. So check out Conductrics. They make a pretty decent A/B testing tool as well. And let’s get back to the show.
0:52:04.7 MH: Alright, we do have to start to wrap up. And this is exactly the conversation I wanted us to have, though. It’s kind of funny, going back to Shakespeare, it sort of feels sometimes like it’s full of sound and theory, but signifying nothing. But I do think that there’s a big tectonic shift happening in the measurement in this space, and it’s cool to see it. And of course, I’m probably thinking it’s gonna happen a lot quicker than it does, ’cause that’s sort of like, yada, yada, yada, the year of mobile, insert joke there, but here we all are listening to this on our phones. Okay, let’s go around and do some last calls. That’s what we do when we go around and share something we think might be interesting to our listeners. Moe, why don’t you go first?
0:52:58.3 MK: Well, Tim alluded to it at the start, but if you wanna learn more about this topic, I’m not gonna lie, Tim now has become the person I listen to on a Sunday morning when I’m like doing errands around the house, like “What video has Tim got on something?” He’s shaking his head.
0:53:11.4 TW: I’m so sorry.
0:53:13.6 MK: But anyway, so I did watch the session he did a while back for the STI education community on paid media RCTs and MMM, and I just… The chip analogy makes a lot more sense when you see it in your presentation.
0:53:30.2 MK: I just got shared a wrong link by Tim, but Tim can share the proper link to it, but I don’t know, it did actually really help solidify a lot of stuff that was kind of churning around in my head about… I don’t wanna say… There’s a lot of MTA bashing in there, which I get, but it’s also just about why there is this evolution happening in the industry.
0:53:52.2 TW: Well, thank you. And it doesn’t have the experimentation plus MMM idea or link, which I fully acknowledge that’s probably one of the reasons I got pretty excited when Helbs shared this paper. I was like, “Oh.” So yeah, check out the paper as well.
0:54:13.3 MH: Yeah, absolutely.
0:54:14.8 TW: But thank you.
0:54:16.9 MK: It’s good.
0:54:17.0 MH: That was the last call on another episode. Alright, Tim, what about you? What’s your last call parentheses s?
0:54:24.9 TW: I’m gonna do two for it but they’ll be quick. Completely unrelated. One was, it came out a while back, but the Throughline Podcast. You guys listen to Throughline at all? Typically…
0:54:36.3 MH: I have it in my podcast set up, but I rarely get to it.
0:54:41.6 TW: Well, so they did an episode called The Nostalgia Bone, which traces the history of nostalgia. Their premise tends to be going way back hundreds of years into the history of a region or a conflict or a technology or something, and here they actually traced the history of nostalgia and how it started out being diagnosed as a physical malady, and how people tried to define it. And it was interesting, but it got me thinking that in a lot of ways, it was kind of what I’m talking about: the brain, and how we sort of process things, and how much the brain plays tricks. I was starting to feel like it had a lot…
0:55:23.2 TW: It actually tied into analytics when we start thinking about things like confirmation bias, or the way we’ve always done things, or this reminds me of things. So it’s not really analytics, you gotta stretch a little bit for that leap, but I was kind of delightfully surprised by it. The other is gonna be a total log roll. I think Moe last time… Actually last call, the first TLC Test & Learn Community session, we did RCTs, and we’ve done the second one, but part of what we did out of that was built a little simulator for playing around with getting an understanding of a little bit of the… We call it the magic of randomization.
0:56:07.6 TW: But it’s a little Shiny simulator, bit.ly/random-magic, where you can play around with block randomization and what that does, and it simulates test results, and you start to get a sense, or I started to get a sense, of the way that different ways of designing an experiment can actually help you reduce the sampling variability, which gives you a stronger signal. So, it’s just a fun little tool to play around with. You don’t have to listen to me talk if you just wanna play with the simulator, but you can also go to the Test & Learn Community and see round two, which featured my co-worker, Julie Hoyer, much more than me in the actual talking, if you’ve had enough of me, ’cause I certainly have. Michael, what’s your last call?
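The variance-reduction effect Tim describes can be seen in a few lines of code. This is a toy simulation sketched for illustration (it is not the actual simulator he mentions, and all the numbers are made up): it compares the spread of difference-in-means estimates under simple randomization versus randomization blocked on a strong baseline covariate.

```python
import numpy as np

rng = np.random.default_rng(7)
n, true_effect, sims = 100, 0.5, 2000

def one_estimate(blocked: bool) -> float:
    # A baseline covariate drives most of the outcome variation.
    baseline = rng.normal(0, 2, n)
    if blocked:
        # Pair units with similar baselines, then randomize within each pair.
        order = np.argsort(baseline)
        treat = np.zeros(n, dtype=bool)
        for pair in order.reshape(-1, 2):
            treat[rng.permutation(pair)[0]] = True
    else:
        # Simple randomization: any 50 of the 100 units get treatment.
        treat = rng.permutation(n) < n // 2
    y = baseline + true_effect * treat + rng.normal(0, 0.5, n)
    return y[treat].mean() - y[~treat].mean()

sd_simple = np.std([one_estimate(False) for _ in range(sims)])
sd_blocked = np.std([one_estimate(True) for _ in range(sims)])
print(f"sd of estimate, simple: {sd_simple:.3f}, blocked: {sd_blocked:.3f}")
```

With these made-up numbers, the blocked design’s estimates cluster far more tightly around the true effect, which is the stronger signal the simulator is meant to show.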
0:56:54.3 MH: Well, it’s interesting ’cause I also did a Search Discovery… No I’m just kidding. I’m gonna make my last calls exactly the same as yours.
0:57:04.7 TW: Oh, you did. That’s right, you did STEC.
0:57:08.5 MH: No, I’m not gonna do that. Although it was fun. Back in October, I did do a session with the STEC, and it was a lot of fun, about empathy and the analyst. But actually… So no, the person who I originally found this from, Michael Taylor on Twitter, his Twitter thread is where I originally found that ThirdLove white paper. He has been building something called Vexpower, and I’m not exactly sure what the end product is gonna be, but in it, they have a media mix, marketing mix modeling training simulator that you can log in to, and it’s free to use. And so if you go to vexpower.com, you can actually sign up and go through their little training simulator. So it’s interesting, you could do the Search Discovery one and then this one, and you can compare and contrast. Anyway…
0:57:56.6 TW: That one’s probably more involved and better.
0:58:00.6 MK: Oh, jeez.
0:58:00.7 MH: I know it’s so funny, like you say stuff like that…
0:58:03.3 MK: Insert Tim saying something deprecating.
0:58:06.9 MH: I know, exactly. Anyways, hey, the more help out there for marketers and for analysts, the better. I just thought it was really interesting ’cause I hadn’t seen a tool like that until you mentioned that one, and then I was like, “Well, I’m gonna make that a last call ’cause that’s kind of fascinating.” But since we’ve been talking about it, I figured it fit the topic. Okay. I’m sure you’ve been listening and you’re probably like, “Oh, you guys are missing… You people are missing such an important point,” and you wanna tell us about it, and we would love to hear about it as long as you’re nice and constructive. And the way to do that is via Twitter or on The Measure Slack group, or via our LinkedIn group on LinkedIn. And we would love to hear from you. And please, do talk to us because all of those go into our marketing mix model where we figure out where to spend our podcast dollars. [laughter] Alright, no show…
0:59:00.1 TW: Wait, we’re moving away from the Drunken Monkeys?
0:59:04.4 MH: Oh, yeah, the drunk… Actually, that… That’s a good agency name, I gotta be honest. If you’re an agency, someone is about to kick off a new agency, Drunken Monkey’s not a bad one. Not bad at all.
0:59:16.7 TW: Probably just as good as the competition.
0:59:25.7 MH: Oh, well, no show would be complete without thanking our illustrious producer, Josh Crowhurst. He makes the show run behind the scenes and takes care of making us sound great and all those kinds of things, which is not the easiest thing to do. And of course, we are thankful for you, the listener. I know that I speak for both of my co-hosts, Tim and Moe, when I say no matter how complex the model, no matter how random the experiment, keep analyzing.
0:59:58.9 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group and The Measure Chat Slack group. Music for the podcast by Josh Crowhurst.
1:00:16.0 Charles Barkley: So smart guys want to fit in, so they made up a term called analytics. Analytics don’t work.
1:00:23.6 Thom Hammerschmidt: Analytics, oh my God, what the fuck does that even mean?
1:00:32.5 MH: What another… What another… Okay.
1:00:36.9 MK: Guys, why you gotta be assholes? Like literally, this is my last call.
1:00:43.6 TW: Oh. Frankly, it comes to me naturally. To answer the question, literally, it’s just kind of how I roll.
1:00:53.4 MH: Rock flagging. I really don’t have a particularly strong opinion on that.