Multi-touch attribution, media mix modeling, matched market testing. Are these the three Ms of marketing measurement (Egad! The alliteration continues!)? Seriously. What’s with all the Ms here? Has anyone ever used experimentation to build a diminishing return curve for the impact of a media measurement technique based on how far along in the alphabet the letter of that technique is? Is “M” optimal?! Trust us. You will look back on this description after listening to this episode with John Wallace from LiftLab and find it… at least mildly amusing.
0:00:05.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics are covered conversationally and sometimes with explicit language. Here are your hosts, Moe, Michael and Tim.
0:00:22.2 Michael Helbling: Hey everyone. It’s the Analytics Power Hour. This is episode 199. One of the tectonic plates in the analytics industry has been sliding around lately and causing a lot of seismic activity. The change in the ability of companies to use attribution to measure media performance and understand contribution. And maybe it was never a great idea to begin with, but anyway, with my two co-hosts, well, we just can’t stay away from the topic. And so as the summer sizzles here in the Northern Hemisphere, we wanna take another bite of the media measurement apple. Hey, Moe. How are you going?
0:01:00.5 Moe Kiss: Howdy, I’m excited.
0:01:02.6 MH: I know, this is a topic you’re dealing with day in, day out.
0:01:05.8 MK: I mean using this show to answer my personal questions about measurement, that doesn’t sound like me.
0:01:10.0 MH: And frankly, it works for the listeners too. So let’s just keep doing it. Hey, Tim Wilson, how are you doing?
0:01:16.9 Tim Wilson: I have not had a frustrating Twitter DM back and forth with somebody who was not listening to what I was trying to say on this topic in 24 hours, so.
0:01:24.5 MH: See, so that’s perfect. And I’m Michael Helbling. So we’re well prepared, but we still needed a guest, someone who could help us elevate the conversation. John Wallace is the CEO of LiftLab, a software-as-a-service company focused on digital media spend optimization. Before that, he founded DataSong, which he exited when he sold it to Neustar. He has an MBA in Decision Science from George Washington University. And today, he is our guest. Welcome to the show, John.
0:01:53.0 John Wallace: Hey, thanks for having me.
0:01:54.4 MH: It’s awesome to have you. John, you go back a long way in this industry, and a lot of people may not remember DataSong, but you worked with a number of very large companies. Talk us through a little bit of your history in the field, ’cause I think that’ll help for context.
0:02:09.8 JW: Yeah, by all means. We were introduced to the topic of attribution working alongside some strong direct marketers. It was companies like Williams-Sonoma, Nordstrom, Saks Fifth Avenue, Eddie Bauer, Express, Wells Fargo. They all had the common problem of trying to figure out what spend was incremental. And we were using the path to purchase as our input to figure that out. And that became multi-touch attribution.
0:02:35.9 MK: So you’ve moved away from that now. Tell me a little bit about like, I don’t wanna be like attribution sucks, because I feel like we could all just nod our heads and then move on, and that’s the end of the show. But tell me about your career and how, I guess you made those steps towards MMM Media Mix Modeling and more of an experimentation approach.
0:03:04.1 JW: For me, there was always a little bit of concern that no matter how fancy and smart we get the algorithms to become, it’s our input data that we’re really always going to be worried about. And the path to purchase was the input data. It was also the path to not purchase; in a well-constructed MTA model, you need to look at the non-converting media as much as the converting media. And those paths got more and more fragmented by changes in how we collect the data, by changes in third-party cookies, and ultimately, more recently, changes in how we can store IDFAs. What was one path to purchase with six events now might look like six different paths. And so that’s something I didn’t think it was wise to try to continue to piece together or patch up. I didn’t think the band-aids were going to work. And so we had to rethink the problem from scratch.
0:03:54.0 TW: So maybe backing up a little bit: the path to purchase was specifically tracking each individual user and their path to purchase, and that’s become fragmented. Was part of the challenge also that, even if you track the paths to non-purchase… I’ve always felt, as the scales have started to fall from my eyes, I’m increasingly realizing that if somebody took a path to purchase, that doesn’t necessarily tell you anything about the counterfactual: they might have still taken a path to purchase without some specific marketing intervention. Which to me means attribution is assigning value, but you’re not really necessarily measuring incremental value, no matter how much algorithmic machine learning you put behind it. You’ve got that fundamental challenge of working with the data as it occurs in the wild, as opposed to matched market or experimental design, where you’re controlling that data set. Is that the fundamental distinction?
0:05:08.7 JW: Yeah, there are all flavours of algorithms for multi-touch attribution. The one that we had popularized took into account some of the non-marketing events as well. Again, those algorithms are pretty sophisticated; it’s really the input data itself, before you ever estimate anything, that was becoming more and more of a concern. But yeah, there were approaches to multi-touch attribution, like the one at DataSong, that would look for the incremental effect of marketing. It’s just that those paths to purchase got more and more fragmented. There was so much noise in the input data, and then we’re gonna estimate a model on top of that. It became a bit difficult to defend.
0:05:47.6 MK: So now, the thing that I’m finding interesting… I obviously haven’t been in the industry as long as you. I did start at an attribution agency as well. And pretty much when I was moving on from there, everyone was starting to talk about MMM again, Media Mix Modeling, and it does seem to be all the rage as a com… In my head now, the way I think about measurement is not one tool, it’s about a combination of tools, and using the right tool for the specific question you’re trying to answer. Whether that’s wrong or right, I don’t know; maybe I’ll tell you in 20 years. But the thing that I find so fascinating about MMM is it has actually existed for a very long time. I don’t think I necessarily understand why we went through this blip where we kind of ignored it, and now everyone’s back to like, “Oh, it’s all the rage.”
0:06:56.1 JW: I think the history is pretty long there. They’ve been around for a few decades. They were really popularized by consumer packaged goods companies. And I think a lot of digital marketers and DTC brands really saw the appeal of looking at user-level data. So it was like, “Okay, the next generation behind MMM, the next frontier, would be to build these models at the user level.” And I remember our data sets were really impressive. It was millions and millions of observations, if you have a million customers and you’re looking at a 365-day data set and every event that happened, and the decay since the last event.
0:07:35.8 JW: And it was a really cool longitudinal datasets at the user level. And so we thought we were kind of on a frontier out beyond MMM. Again, when those data sets came apart you kinda have to look around. And so our conclusion… Actually, we started by building an experiment platform and the MMM came later. So that was chronologically how we got to where we are today.
0:07:55.6 TW: So if I mount my one soapbox… I feel like it was the digital came along and people were like, “Well, now we have all this greater richer data, therefore, if we just scale that, the… That will… More data, more granular data is better, is more accurate, and we just have to keep chasing that.” And nobody ever stopped and looked up. And we chased it to a point, and now we’re moving backwards, because more and more channels, more and more devices came out. Like I don’t know what peak visibility was or when it occurred.
0:08:32.3 TW: It was probably 10 years ago, and there weren’t as many cookie issues… But by the time we hit the point where we were sliding backwards on the visibility, everybody had just bought into this is the way to go. And the MMM people, they were easy to dismiss. You could say, “Ah, your MMM model, it takes too long. It’s too blunt an instrument. It’s just overall channel. We’re gonna be tweaking at the keyword level.” And I just feel like the entire industry bought into it, and there were no incentives to push back on it. And even now with that going away, it feels like Google and Facebook, the ones who have a huge media footprint and have visibility within their ecosystem, they’re highly incentivised to say, “Just use us, we have that. We still have that user-level visibility, so trust us.” And yet they’re also the ones we’re trying to pay. It just…
0:09:39.7 TW: I just want critical thinking out there from marketers to say, “This cannot be the way we want to do things.” Okay, let me step down. That’s a tall soapbox. Let me be careful here. Let me use my cane. Okay, I’m safely back on the ground.
0:09:55.2 MK: Oh Jesus.
0:10:00.8 TW: Sorry.
0:10:04.4 TW: Thoughts?
0:10:04.5 JW: One second. One second, man. I was like, “There’s a caption at the end.” Our brands are spending a lot of money. Our customers are spending a lot of money on these ad platforms, and we think of them as partners. I agree that they should have measurement teams and make measurement broadly available. The further you go up the level of sophistication and the scale of spend, the more likely you’ll have people that echo the sentiment of, “Ah, okay. Look, we can’t rely on an ad platform to grade their own homework.” So I think it’s great that they’re making tools available. I think there’s not a marketer out there who hasn’t run an online conversion lift test from Facebook. And we were completely aware of all of those approaches, and the fact that they were free, when we entered the market as LiftLab. We realized we’d have to offer something a bar above that, if that’s already the table stakes of the market.
0:11:00.6 MK: So one question I wanna understand. I feel like the data discipline or people who work in data, I don’t know what the right term is. Clearly, I haven’t had my coffee. Are largely bought into this. I feel like we understand where the industry is going. I think we understand that measurement techniques are constantly evolving, things change, and we wanna try and do things better over time. From your experience, John, ’cause you are working with lots of different companies who are going down this path now of like, “Oh look, attribution is not working. We’re gonna have to be using geo-experiments or an MMM, or some other techniques.” Are the marketers bought in?
0:11:42.6 JW: Yeah, they are. They are. It’s helpful. I haven’t met an audience yet that argues and says that experiments are a bad idea.
0:11:51.6 MK: Really?
0:11:51.7 JW: They’re kind of universally accepted as… Almost too far, actually.
0:11:56.2 MH: Is there a potential selection bias though, that by the time you’re talking to them…
0:11:58.7 TW: Yeah, ’cause I was like, “Well, I can introduce you to a few people.”
0:12:05.0 MH: And it’s not that… Like in my experience, it’s not necessarily that the people I’m talking to are opposed, but I will get feedback like, “Well, what could we do if we didn’t wanna do Media Mix Modeling? Since we can’t really do multi-touch attribution anymore, what else could we do instead?” And it’s like, “What do you mean?” Like, let’s do some holdout tests, but eventually we’re gonna need to model this out. So anyways, I do find there are some people. But I am encouraged, I guess, to a certain extent, that you are finding people fairly receptive to this. ‘Cause I think there is sort of a continuum of marketers, if you will.
0:12:42.4 JW: I find that they’re really, typically, warm to the experiment side of things. I don’t know, maybe it goes back to eighth grade science class or something like that. Everyone likes a good experiment. Not the same reception on the modeling side. Sometimes there’s more concern, questions to be answered, understanding the methodology, at least at a high level. What we found is that putting the two together makes them extremely complementary and extremely comprehensive. So we like to say that the modeling approach on your historical data gives you instant coverage and it’s always on, and the experiments go hand in hand; they give you the precision.
0:13:18.8 JW: So if anybody has ever looked at the results of a marketing mix model, or if you’ve ever downloaded Robyn and run your data through a package like that, what you should expect to see is that the model is not perfect. It’s a model. I think someone had a great expression: “All models are wrong. Some models are useful.” It was a statistician.
0:13:36.9 MH: George Box.
0:13:37.7 JW: Yeah, Mr. Box.
0:13:37.8 MH: Yeah, George Box, yeah.
0:13:39.4 JW: And I’m a big believer in that. And so we should expect that in a first cut at a mix model, there’re some question marks that come along with it, and often it’s not really the model that we should be blaming. Don’t shoot the messenger; it might just be the data. If you have really low-signal data, let’s say you’re modeling 15 channels, and you look at the model and it looks pretty good for 10 or 11 out of 15, what are you gonna do about the other four? You have one choice, which is to torture the data as much as you can, but if there’s no signal in the data, there’s just no signal in the data, and our answer is, “Let’s go kick off an experiment. Let’s go gather better data.” So we have these two measurement approaches tied at the hip. Where one has precision, it’s still gonna be a point-in-time exercise, and that’s complemented by the modeling side that has a comprehensive view of the data and is always on. When you fuse these two together, you have really nice, very complementary signals.
0:14:36.8 MK: Okay, I wanna get to the crux of the whole reason I asked you to be on the show, and the actual question that I really wanna ask.
0:14:45.5 MK: Okay, so here we go. The output that we get from an MMM or an experiment (and I’m very happy to be corrected; this is me very genuinely wanting to learn and understand how the hell to do this well) is very different to the output that we get from MTA reporting, right? In my mind, the output looks different. And the thing that I’m trying to reconcile in my head is: yes, we have better data; yes, we’re hopefully gonna make better decisions with that data; but there is still this expectation from the business of what a report looks like, or what the output should look like. And so I’m thinking of a very traditional… and we have one as well, which is kind of like a… I don’t know, I’m gonna call it a dashboard, but maybe it’s not a dashboard, a scorecard, whatever, of like, “These are the channels, this is how many conversions they drove, this was the cost per convert… ” And I look at that daily, or weekly, and in my head, I can’t reconcile the output that we’re getting from an MMM and an experiment with those reporting expectations from finance or marketers or whatever.
0:16:07.2 JW: I think we’ve addressed their concerns.
0:16:09.0 MK: What do you think?
0:16:10.0 JW: Yeah, I think we’ve addressed their concerns. If you think about what people are looking at today: most of the people listening, their marketing teams, the data they’re looking at, their input data, are quite often in-platform reported performance, so self-attributed revenue as the ad platforms would report it, or they’re looking at last-click coming out of Google Analytics or Adobe, and their analytic technique is typically pre-post analysis. So that’s what we’re walking into. And that dashboard that you’re talking about, I don’t know if it’s one that’s put together with last-click data out of Google Analytics, or one that’s done with platform reporting. But we’re replacing those. We’ll put them side by side with the numbers in our platform, but it has all the same levels of granularity: it goes channel, down to tactic, down to campaign. So we’re building from the bottom, and that’s what they’re used to seeing in their last-click reporting or their in-platform reporting. It’s just that we’ve now put in the incremental number, as opposed to the one that we all know is somewhat biased.
0:17:10.0 TW: And that incremental number, that is the fundamental shift. Because an experiment or MMM has incrementality completely baked into the language. In an MMM there’s kind of an intercept: this is revenue you would have gotten anyway. With an experiment, I’m just looking at the change, the effect, the incremental effect. Whereas attribution, the traditional MTA, is gonna say, “We’re taking the full pot and we’re gonna spread it out across everything.” Which seems like it’s almost always got a bigger pot to play with, ‘cause it kind of is biased against incrementality, right?
0:17:54.3 JW: You’re right. I mean, the notion of incrementality… But I like its big brother, actually: diminishing returns.
0:18:00.4 MK: I was about to talk about diminishing returns, so I’m excited.
0:18:05.3 JW: So I think it only goes in one direction: if you can decipher diminishing returns, incrementality is along for the ride. With a diminishing return curve, we know the incrementality at every spend level; you can’t say that in the other direction. If you run an online conversion lift test in Facebook and you’re spending $1,000 a day, you’re gonna get the incrementality for that spend level. If you triple your spend, that incrementality is no longer valid; you need to re-test it. And so that’s fundamentally what the appeal was for us: how could we set up an experiment approach that was designed around diminishing returns, and then complement that with MMM, which has got diminishing returns built into it? And that’s how these two fit together so well.
0:18:41.0 JW: And so we don’t ignore incrementality, but like I like to say, it’s along for the ride.
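John's point, that a fitted curve gives you incrementality at every spend level while a single lift test only gives it at one, can be made concrete with a toy example. This is a minimal sketch assuming a Hill-type saturating response curve with invented numbers; it is not LiftLab's actual model:

```python
# Hypothetical diminishing-returns (Hill-type) response curve:
# incremental revenue as a saturating function of daily spend.
# ALPHA = saturation ceiling, K = half-saturation spend (both invented).
ALPHA, K = 9000.0, 2000.0

def incremental_revenue(spend):
    """Total incremental revenue the curve predicts at this spend level."""
    return ALPHA * spend / (spend + K)

def marginal_roas(spend):
    """Derivative of the curve: revenue returned by the *next* dollar."""
    return ALPHA * K / (spend + K) ** 2

# A single lift test at $1,000/day only reveals the average
# incrementality at that one spend level...
avg_roas_at_1k = incremental_revenue(1000) / 1000   # 3.0

# ...but the fitted curve gives incrementality at *every* level:
for spend in (1000, 3000, 9000):
    print(f"spend ${spend}: marginal ROAS = {marginal_roas(spend):.2f}")
```

With these made-up parameters, the one-point test reports an average ROAS of 3.0 at $1,000/day, while the curve shows the marginal ROAS there is already down to 2.0 and keeps falling as spend rises; past the point where it drops below break-even, extra spend destroys profit. That marginal view is exactly what a single lift test cannot give you.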
0:18:46.2 TW: But that means in your experiments, you’re varying the level of spend so that you’re getting a curve?
0:18:51.4 JW: Yeah, the traditional A/B go-dark test, we found, is not sufficient for most marketing analytics. And I wanna pose this to you: has anyone here run a test and actually seen negative lift? Has that ever happened to you?
0:19:03.6 TW: On Media?
0:19:04.9 JW: Yeah. On media test.
0:19:05.9 TW: Yes.
0:19:06.6 MK: I think so.
0:19:08.9 JW: Yeah. So let’s ask ourselves, what happened? It could be something mechanically wrong, but most likely you’re just up against the natural variation of the data. You were the unlucky test, where the control group performed better than the treatment group. And what we typically do is sweep those under the rug. But now, if you think about the other end of the distribution, what if you landed as the lucky test, and you’re now over-reporting the ROAS, the Return On Ad Spend? So what we found is that the traditional A/B test that we all love… we all love them, we run them all the time… we think they’re better suited for things like creative testing or subject line testing. And if you really wanna understand the economics of an ad platform, you need to lean more on diminishing returns, and your experiment designs need to take you towards understanding diminishing returns.
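The "lucky test / unlucky test" point is easy to demonstrate by simulation. A sketch with invented effect sizes and noise levels, just to show how often pure sampling variation flips the sign of a measured lift:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate many A/B media tests where the TRUE lift is small but
# positive: treatment mean 101 vs control mean 100, noisy outcomes,
# 200 observations per arm (all numbers invented for illustration).
true_control, true_treatment, sigma, n = 100.0, 101.0, 20.0, 200
n_tests = 10_000

control = rng.normal(true_control, sigma, size=(n_tests, n))
treatment = rng.normal(true_treatment, sigma, size=(n_tests, n))
measured_lift = treatment.mean(axis=1) - control.mean(axis=1)

# Even with a genuinely positive effect, noise alone makes a big
# chunk of tests come out negative ("unlucky"), and a matching chunk
# over-report the lift ("lucky").
share_negative = (measured_lift < 0).mean()
print(f"share of tests showing negative lift: {share_negative:.1%}")
```

With these numbers the standard error of the lift estimate (about 2.0) is twice the true lift (1.0), so roughly three in ten tests report a negative lift despite a real positive effect; symmetrically, three in ten report more than double the true lift. Sweeping the unlucky ones under the rug while acting on the lucky ones is exactly the bias John describes.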
0:19:58.0 MK: So now we’re gonna get really into my head and the weirdness that’s going on. So, okay, diminishing return curves. I am like, “I’m on board,” like I’m gonna refer to Sam, he’s the measurement leader at Canva, and him and I have spent a lot of time talking about this, and by us talking, I mean, him trying to educate me. I understand diminishing return curves, and I understand that the goal over time is to change the shape, right? So that it becomes more efficient. The bit that I’m struggling with is like, I understand it and Sam understands it. How do you… I think a marketer understands a diminishing return curve, I’m trying to understand how do you get them to understand that you wanna change the shape and how do you actually report that to the business? Do you know what I mean? Like you can’t be like… I mean, I guess you could be like, “This is what it used to look like, and now here’s what the curve looks like.” But I can’t find a way in my head that that will actually work.
0:20:57.4 JW: Right. So, back to the dashboard question: the two use cases that we’re showing our marketing customers are, one, we turn the curves into incrementality and talk about what the incrementality of each spend in each of your channels was last week; and the other is the more forward-looking optimization. So we don’t have to show the curve itself, or ask people to put a bunch of curves on the screen and interpret them. We’re able to show them the type of metrics they’re already familiar with, but powered by the advances in understanding a curve.
0:21:29.1 MK: But do you think we should be educating them?
0:21:31.5 JW: It depends on the level of appetite. So those curves are there and they could be viewed and we have some power users that wanna see them, but if we were to lead an onboarding with, “This is how you understand a curve.” We probably would lose a lot of the audience that way.
0:21:48.1 TW: So I guess back to the… And I feel like we’re peppering you with questions about the resistance that I feel like I run into, and one we haven’t touched on is the multi-touch attribution data. Not only is it this user-level data, it’s more data and it feels free. We run Google Analytics or the ad platform, we run the conversion pixel, it just happens. We do whatever we think is best, and there’s minimal incremental cost to report on it, ‘cause the tool just spits it out. Whereas to run an experiment, and presumably even the more sophisticated experiment, certainly having technology and expertise behind it can drive efficiencies.
0:22:41.1 TW: That says effectively, you need to take some of the dollars that you would have spent on Google Ads and instead invest in actually reliable measurement. Which, in my experience, in the abstract is not that hard of a story to tell, but when it comes down to saying, “Yes, you should cut your budget by 50 grand so you can learn,” all of a sudden they’re like, “Whoo,” yeah.
0:23:11.9 MK: Tim, we do the opposite. We have a measurement budget that we agree with finance, it sits separate from the marketers, that is our measurement and experimentation budget. And then it’s not a, “Hey, marketer, do you wanna give me some money to do this?” That pot of money is set aside at the start of the year, which… Like, I don’t know.
0:23:29.9 TW: Which is… I think that’s great, I’m 100% behind that, but I work with clients that have money for bodies and maybe for tooling; typically that’s not the setup. That may be the way to go at it.
0:23:45.8 MK: Can you like hide it in your tooling budget as like, just throw it in there and just hope that…
0:23:52.5 TW: ‘Cause it’s hard, ’cause it’s the tooling in, it’s the agencies that are managing the budget that have to be coordinated with, it gets pretty murky. I mean I love that as an idea, yeah.
0:24:03.9 JW: Part of the spirit of what we put together at LiftLab is that we are most frequently measuring campaigns that are already in production, and they’re actually baked into the marketer’s forecast, and they can’t really have a risk of an experiment impacting the performance. So these are special designs. You do need to have money to cover the tooling, but none of our customers have, per se, an experimentation budget where we’re gonna just take these dollars and run media. It’s typically testing on media that’s already live, that’s already production, and it’s the job of the experiment design to minimize any opportunity cost. So we’re not forcing people to buy PSA ads, for example.
0:24:43.4 TW: I guess that’s where I’d position it that way: this is spending the same money, but spending it in a smarter way, so that you’re injecting a degree of variability into the data, you’re controlling the data you’re collecting, and you get the variability so that you can construct a curve. And maybe it is the tooling. Who do you wind up needing to coordinate with? Are you running the media through the LiftLab technology, or not?
0:25:20.3 JW: The interventions on the campaigns show up in different forms depending on the experiment design, but what you typically see are some form of targeting exclusions or inclusions, and some changes in spend, which the agency would refer to as pacing. And for the major ad platforms, yes, that’s happening through the APIs into the major ad platforms. If it’s a smaller ad platform that maybe doesn’t have the right API, then we’ll be coordinating with the agency for them to traffic the experiment. Yeah.
0:25:49.2 TW: So then I’d ask… So if it’s through pacing, if there’s a degree of coordinating with the agency, how often do they flub that?
0:26:00.4 MK: Fuck it up.
0:26:00.9 TW: Like that.
0:26:01.8 MK: Interesting question.
0:26:05.5 TW: I mean, to me it’s like 100% of the time, and probably ‘cause it is introducing like a different… And they’re like, “Yeah, yeah, but no, we flipped the switch to do the auto-optimizing.” They’re so conditioned to the magic of the machine, they struggle… And I honestly think there’s some conscious or subconscious fear that they might be exposed, so I don’t think they’re super incentivized to make it run correctly. Do you run into that?
0:26:35.8 JW: I’ve never run into that, I’m not sure what you’re talking about.
0:26:38.8 TW: There you go. Just like you said, “Every market wants to do experimentation.”
0:26:42.8 MH: You’re under no obligation to incriminate yourself in this conversation.
0:26:47.1 MH: Tim’s flying free here.
0:26:50.6 JW: But what’s been actually useful is that quite often you can make lemonade out of lemons. If someone by accident overdoes it and cuts the budget too far, or overshoots in the other direction, that actually just gives us more data points filling in a distribution of spend. So when we give instruction to the agency, we say, “Directionally, this is what we’d like you to do, but if you go over or under, it’s not gonna matter.” We’ve made a sample design that embraces that, as opposed to… What we don’t want is to hang a sign on the door that says, “Quiet please, testing in progress,” because that’s just not gonna work; there are going to be interruptions. In the middle of an experiment, you could have the CFO say, “Look, we need to spend more, we’re behind on our revenue numbers,” and you can’t interrupt that test or say, “We can’t spend that money because there’s a test in progress.” So we see those as opportunities; they’re just gonna give us more confidence in the results. It could go in the other direction too: it could be that they’re running hot and they needed to cut some budget out, and that gives us yet more points along the curve. So we actually just embrace it.
0:27:53.9 TW: But I guess that’s critical that… So you’d wind up, you may have the experiment design, but really what you’re working with is the experiment as is, like you’ve gotta have reliable data on what was spent where and how and the targeting, right. I guess maybe this is… I’ve been through it more in a kind of manual process where, here’s the design, the design doesn’t get… There’s not full compliance, so then we take in what the actual was and that’s how then the analysis of the experiment happens, but we’ve kind of sometimes struggled to get a straight answer on exactly where the spend actually did happen. I don’t know.
0:28:37.9 MK: Another great question for you, John.
0:28:41.9 JW: Well, I’ll turn it into a question. [chuckle] You have all of the conditions that could happen in the wild: you could have a go-dark group where there was no spend, you could have a condition where you were spending down, and then you sometimes have what’s called leakage, where you intended to go dark but there was still some spending happening. And the designs, and the analytics on the data that’s collected, have to be robust to all of those conditions. That’s why we don’t rely on just a straight two-cell A/B test, or expanding that out to multiple cells. The unlock here, Tim, is that we’ve embraced how to marry experimental design with time series analysis. That’s a differentiator, and it gives us a lot of robustness when things don’t go as planned.
0:29:26.9 MK: John, one thing you’ve mentioned experimental design quite a few times and I’d like to dig into that a little bit. This shit is hard and really complicated, and we’ve already talked about kind of like one situation that you’ve managed to, I guess, account for, which is like you have an experiment design and the spend levels don’t go exactly as planned. I guess what I’m trying to understand is for the analyst sitting there who’s like, “Oh, I actually really, really wanna start with this. Like how?” In my head, I’m thinking about complexities of location data, and all that sort of stuff. Tell me a little bit more about that process? Is this one of those things that if you’re gonna start down this journey, you just need to have in your head the expectation and you need to set this expectation with the business, we’re gonna do this because we’re gonna make hopefully better decisions from it, but it is going to be a hard thing to execute well, or what’s the path like once you start down there?
0:30:30.7 JW: I can answer that from… If I put myself in the shoes of an analyst getting started on this: it was a hard path for us. We were just lucky to have early adopter customers that wanted to see that level of innovation, and see the scientists coming out of the lab trying to figure out what went wrong. And they did, and they were patient, and they reaped the benefit. And that’ll happen for your firm as well if you’re endeavoring on something like this; the payoffs can be quite handsome. So there’s the difficulty part, but this is a road that’s been traveled. You will end up in a better place because of it.
0:31:07.1 JW: In our case, the customers coming onto the platform are benefiting from a lot of those hard-fought lessons learned in the trenches, and we’ve now made it highly repeatable. You’ve almost asked me, and we’ve danced around this a little bit on the call: what do you actually get out of one of these? Is it better reporting? I’m not motivated by good-looking reports; I’m always motivated by how we increase profit. And what I’ve really enjoyed about popularizing these approaches is that our answer will only ever be, “We should cut spend because we’ll make more profit,” or, “We should increase spend because we’ll make more profit.” Those are the only two answers. And occasionally you’re already at the apex of a profit curve and you can stay put. As for the order of magnitude, we’ve seen experiments surface anywhere from under $1 million to over $40 million of opportunity in a single experiment. So it’s the whole gamut, and that’s what keeps us excited about it.
0:32:07.9 TW: Is there… I guess maybe, since you're speaking channel-by-channel… is there a case where we can spend the same amount but rebalance how we spend it and make more profit, presumably?
0:32:17.1 JW: Absolutely, if you have two curves. We love the double whammy, where you cut from one channel and increase profit, and move it into another channel that was under-spent and increase profit again. That's everyone's favourite outcome. That usually happens when either you've run a bunch of experiments or you have a model that's also looking at all the diminishing return curves for you.
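The "double whammy" John describes can be illustrated with a toy pair of diminishing-return curves. This is only a sketch of the idea, not LiftLab's method: the log-saturation curve shape and every parameter value below are invented for illustration.

```python
import numpy as np

# Hypothetical diminishing-return curves: revenue(spend) = a * log1p(spend / b).
# Parameters a and b are made-up illustrative values, not from any real model.
def revenue(spend, a, b):
    return a * np.log1p(spend / b)

def marginal(spend, a, b):
    # d(revenue)/d(spend): the return on the *next* dollar spent.
    return a / (b + spend)

chan_a = dict(a=50_000.0, b=20_000.0)   # over-spent channel
chan_b = dict(a=80_000.0, b=40_000.0)   # under-spent channel
spend_a, spend_b = 120_000.0, 10_000.0

ma = marginal(spend_a, **chan_a)
mb = marginal(spend_b, **chan_b)

# If the last dollar in A returns less than $1 while the next dollar in B
# returns more than $1, shifting budget from A to B is the double whammy:
shift = 30_000.0
before = revenue(spend_a, **chan_a) + revenue(spend_b, **chan_b)
after = revenue(spend_a - shift, **chan_a) + revenue(spend_b + shift, **chan_b)
print(f"marginal A={ma:.2f}, marginal B={mb:.2f}, gain={after - before:,.0f}")
```

Because total spend is unchanged, any revenue gain from the shift is pure profit gain, which is the intuition behind comparing marginal returns across channels.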
0:32:36.4 TW: So when you're doing experiments, and I guess this is… Are you doing them on… You said kind of at the campaign, tactic, channel level. Are you running basically multi-factorial experiments? There's the curve, but are you hitting multiple channels and multiple campaigns in one experiment?
0:32:55.1 JW: Occasionally. So the way it works is, our outcome variable isn't any of the attributed revenue. It's nothing to do with what you would see in the platform, nothing to do with the last click; it's total sales. And so we're not analyzing maybe one campaign at a time; these are typically at the channel level or at the tactic level, where we can measure against a pretty big outcome. But on the modeling side of LiftLab, and we've been talking experiments for a few minutes, there we're modeling along a hierarchy, where we're modeling the multi-channel approach that we traditionally call MMM, and we go two steps further down that hierarchy into tactics and into campaigns. So that's how these are, again, complementary. We might learn something in an experiment that shifts the model at the tactic level, but that'll flow down to what we're learning at the campaign level.
0:33:38.3 MK: Can you explain a little bit more about that? Technically, how do you use the experiment results to recalibrate the MMM output?
0:33:48.5 JW: Yeah, and I'll speak to it at a high level. You first need to have experiment designs that think in terms of diminishing returns, because that's what your model is doing. And when you have that unlocked, you now have an ability to calibrate and say, "Okay, I need to move this curve up or down. I have better signal from the experiment." But you wanna have your cake and eat it too, 'cause the experiment is a point in time. So if you can feed that into the model, but the model can keep learning from that point as it sees new data, now you actually have the best of both worlds. And that's part of what we got excited about when we were standing up the methodology.
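One simple way to picture "moving the curve up or down" is to rescale the model's response curve so it reproduces the lift the experiment measured between two tested spend levels. This is a hedged sketch of the general idea, not LiftLab's actual calibration; the curve form and all the numbers are invented.

```python
import numpy as np

# Assumed log-saturation response curve: revenue(spend) = beta * log1p(spend / k).
def curve(spend, beta, k):
    return beta * np.log1p(spend / k)

beta_model, k = 40_000.0, 25_000.0   # curve the MMM currently believes (invented)
lo, hi = 50_000.0, 80_000.0          # the two spend conditions the experiment ran
measured_lift = 9_000.0              # incremental revenue the experiment observed

# Incremental revenue the model predicts between the two tested spend levels:
model_lift = curve(hi, beta_model, k) - curve(lo, beta_model, k)

# Rescale the curve so it reproduces the experiment's lift over that range;
# the model then keeps learning from new data starting from this calibrated curve.
beta_cal = beta_model * (measured_lift / model_lift)
print(f"model lift={model_lift:,.0f}, calibrated beta={beta_cal:,.0f}")
```

Real calibration would typically happen inside a Bayesian model (the experiment tightening a prior) rather than via a hard rescale, but the rescale shows what "true them up" means mechanically.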
0:34:21.4 TW: Does the mix modeling… I think of experimental design as, "We're gonna design an experiment, start the experiment, and the data we're ultimately processing is from the start to the end of the experiment," whereas my understanding of mix modeling is that it's taking your historical data, incorporating time series and other factors, and modeling that. When you talk about the modeling side, are you needing to feed in some pile of historical data? A year, two years, three months, two weeks? Is that the right way to think about it? And then you're building more data, I guess, as you're running the experiment to feed into it.
0:35:07.5 JW: Yeah. Think of it as a census of your spend going into the model. So that's typically, in our case, a year of daily data for every channel, as well as all the outcomes that you're studying, and that data's collected along that hierarchy, so channel, tactic and campaign. Those are the input ingredients for the modeling side. And then anywhere you run an experiment, there might be an overlap period. So let's say part of your year of modeled data has a recent month in it, and that might be a month where we had an experiment. That's where you get a chance to do calibration: where's the overlap? What's the model saying for January, and what did the experiment we ran in January say? That's the ability to true them up.
0:35:49.0 TW: For that modeling side, do you run into channels like out-of-home, where there's a billboard that's up for six months or a year and daily data feels weird? Or do you run into the offline channels where there isn't daily data… digital has daily, but non-digital doesn't?
0:36:08.5 MK: Also, like COVID, hasn’t COVID just fucked everyone’s data?
0:36:14.8 JW: It totally did. Anyone who was relying on a mix model kinda just had a bit of their heart drop to their stomach when they realized what a change in the dynamic had just happened. We had a lot of customers that had experiments live when COVID hit, and so they literally had feelers in the ad platforms for what just changed. Think about an experiment where you're intentionally running three different spend conditions simultaneously, and they all shift down, or all shift up, or the spend-up shifts by more than the spend-down. So that's instrumentation, I would call it, in the campaigns. And so yes, anyone who's working with weekly data when COVID enters is going to have to do some pretty big surgery to those models to reflect that. In our case, anywhere we had live experiments, we didn't have to think about it too much. We had really hard, factual data on how the model should adjust to this new reality.
0:37:07.0 MH: All right. It's time, and you know what time it is. It's time for the Conductrics Quiz. Yes, the quizzical query where we pitch a question to my two co-hosts, Moe and Tim, as they compete for one of our listeners. As you know, the Analytics Power Hour and the Conductrics Quiz are sponsored by Conductrics. They build industry-leading experimentation software for A/B testing, adaptive optimization and predictive targeting. For more information on how Conductrics can help you with those types of programs, visit conductrics.com. All right. Let's get to the quiz. Moe, would you like to know who you are competing on behalf of?
0:37:50.8 MK: Sure thing.
0:37:51.9 MH: It is listener Ben Woodard. He's actually a great friend of the show. And Tim, would you like to know who you're competing for?
0:38:03.9 TW: I’ll wait till after to find out.
0:38:06.1 MH: Okay.
0:38:06.7 TW: No, no, you can go ahead.
0:38:08.9 MH: Andrew Davies is who you're competing for. Very excited. Here we go with the question. All right. While data science insiders today argue over the benefits of open source software such as R and Python, it wasn't always that way. Back before anyone used the term data science, folks who had the same type of gig would likely use proprietary software like SAS for their statistical work. SAS was originally created for the mainframe. What mainframe language was SAS originally written in? Hint: the SAS data step has a similar syntax to this language. Is it: A. PL/I; B. JCL; C. COBOL; D. Rexx; or E. APL?
0:39:01.4 TW: Oh, boy.
0:39:02.9 MK: Well, I mean, the thing is, I am not old enough to know the answer to this, so I'm glad we have the jump-on-the-bus option available to me.
0:39:15.6 MH: I don’t know if even Tim is old enough to know the answer, honestly.
0:39:21.5 TW: I have one confident elimination, which is COBOL. It does not feel like the right kind of language for this work. So I'm gonna go with eliminating COBOL.
0:39:32.3 MH: We can go back to Y2K and say, yes, that is correct. We can eliminate COBOL. So we now have A. PL/I, B. JCL, D. Rexx or E. APL.
0:39:45.6 MK: I’m gonna eliminate one purely because I don’t like how it’s spelt, which is D. Rexx. I don’t understand why it has two Xs. Maybe I’ll Google it afterwards.
0:39:57.4 MH: Rexx. Alright. Yeah. Me neither. And what a great intuition, Moe, that is correct. We can eliminate Rexx because you don’t like how it’s spelled.
0:40:09.7 MH: Which, incidentally, is how a lot of computer decisions got made in the early days. So, I mean, really, you're right there with your… Ahead of your time, before your time, you have an old soul? I don't know what the right way to say it is. Anyways, now we're down to three.
0:40:24.7 TW: Oh, I feel like PL/I, JCL and APL all feel like they are legit languages. The L probably stands for language in all of them… I'm gonna go on a flyer and say that I think it is APL. And that's a total guess. So now, I think Moe can…
0:40:51.8 MH: Either choose different or bandwagon.
0:40:54.2 MK: I’m not gonna bandwagon because the odds are in my favor here.
0:40:58.3 MH: Okay. So do you have a choice or do you want me to just…
0:41:00.7 MK: No, I don’t have to choose.
0:41:02.2 TW: No, you just think he’s wrong.
0:41:03.4 MH: And your choice was APL, Tim?
0:41:05.7 TW: Yes, it was.
0:41:05.8 MH: This is exciting. This is very tense. Tim, you are not correct. It is not APL. So, Moe, that means you're the winner. Ben Woodard, you're also the winner, which is sort of connected, because Ben knows a lot of R. Anyways, the answer is PL/I. One nicety of this was that if the customer database was on a mainframe, you could write the code that would encode your statistical targeting model right in the data step. Then, with minimal modification, you could send it to the mainframe to apply the model and use it to cut the mail files for targeting, which would be in the form of separate reels of nine-track magnetic tape for each targeting segment and control, for use in direct mail campaigns. This is amazing, like this is some real Hidden Figures type stuff right here.
0:41:54.1 MK: That was my thought.
0:41:56.0 MH: Yeah, that's what it reminds me of. That movie is awesome, by the way. Okay, that's not about the Conductrics Quiz, but Ben Woodard, you're a winner. Moe, great job, you're a winner just by having enough skepticism; when something doesn't smell right, you might be right. Okay. Thank you to Conductrics for sponsoring the Conductrics Quiz. And let's get back to the show. You know, I have a couple of questions. So first question: how much does diplomacy factor into what you do? And second question: a lot of people listening are certainly analytics folks who are interested in going deeper on this topic but don't have a depth of experience. How would you advise someone on a journey towards a deeper understanding of both experimentation and maybe media mix modeling, as they're trying to progress in working better with their marketing teams and things like that?
0:42:50.4 JW: Well, I mean, we have to have diplomacy. I mean, this is an analytics power hour.
0:42:56.4 MH: That’s right.
0:42:58.9 MH: No marketers listen to this show. So you’re totally safe.
0:43:03.3 TW: Or agencies, or media agencies for that matter.
0:43:06.7 JW: So, yeah, you know, this is an analytics power hour. And part of the diplomacy we have to have is establishing our credibility with the analytics teams that we're partnering with. They may have already built a mix model once and maybe not quite gotten it all the way to the finish; that's been reported to us before. So our diplomacy is: can we be good stewards of your data? Can we be very transparent? Here's the model, here's the good news of the model, here's the bad news, here's the dirty laundry of your model. And when we're running experiments, those are running in the customers' ad accounts, so everything's completely in front of them. Can we show you how we analyze the data that came out of the experiment? And then, to the question Moe asked earlier, can we show you how the calibration from the experiment got pushed into the model?
0:43:50.8 JW: All of this is happening in a transparent way with customers that have the appetite to ask the questions. So there's a lot of diplomacy, and we like to think of it as, "I've never met an analytics team that was underworked." So if this is an area [chuckle] where we've specialized and we can collaborate with the analytics team but bring in some repeatability, then it's a win-win. That frees up some time to go do things that are closer to the line of business. You know, maybe it's some super secret analytics about how to make the product better or something like that, that you could never buy from a vendor. So, I mean, when you said diplomacy, that's what popped into my head. And we do that every day. We just collaborate with a lot of smart people.
0:44:31.1 TW: So I guess on the diplomacy front, as you said, we could spend less and make the same. Like, presumably you have experiments that run and say the response curve is flat: we went high, we went low, and we're not detecting a difference. Which is the sort of thing that, I think, could rock the world of the marketer? Like, kind of upsetting. Do you wind up having to say after the fact, "Hey, we ran it, it was valid, it was well designed, we'll go into all the detail you want, but we can't tell that this is doing anything; you should pull out of this channel"? Like, does that happen?
0:45:12.1 JW: I mean, I like to think of it both ways, that all channels are profitable. I've just never seen any marketing that makes revenue go down. And so it really becomes answering the economic question of: did the last dollar that I spent cover its cost? And to your point, Tim, there have been a lot of examples where people have looked at misleading numbers and made some really inaccurate investment decisions in that media. I know we have one example where they had done their homework. They did an incrementality test, and that's the traditional one that says, hey, you know, 70% of the number coming out of the ad platform is believable. And then they proceeded to increase their spend considerably. And they made the assumption that that was gonna scale linearly, that they would keep getting 70% incrementality. Now, Moe might know where this story's going. I mean, at that scale…
0:46:00.9 JW: It's likely that you're gonna start buying different types of inventory, you'll be spending deeper into audiences, and that's exactly what happened to them. They added some different placements and things like that, but they kept that assumption of 70%. When they ran the first LiftLab experiment, we kinda had to triple-check the numbers. We hadn't quite seen something that far underwater. And what was cool here is that the brand was like, "Okay, we see what you did with the go-dark, we see what you did with the pacing, we see where you got this. We're just gonna take the rest of the country down 50% now," 50% overnight. And guess what happened to sales?
0:46:35.9 MK: Nothing.
0:46:36.6 TW: Stayed flat.
0:46:37.2 JW: Nothing.
0:46:37.9 TW: Nothing?
0:46:39.0 JW: Nothing. And they took it down another 20% was what we recommended, and that freed up an awful lot of budget. Now if you… There’s a…
0:46:48.7 TW: And they put it all into affiliate marketing?
0:46:53.5 JW: It’s really your chance as a marketer to go and do some proper branding campaigns that you don’t expect to measure right away or something like that, it becomes a fund of money where you can… Or you might have performance channels where there’s still a lot of profit left in the curve, if you can go move the money over there.
0:47:10.1 TW: Try new channels, right?
0:47:12.2 JW: Absolutely. Absolutely.
0:47:12.8 TW: There are new channels emerging, whether it's gonna be TikTok or Connected TV or something that a lot of brands aren't spending in yet. It seems like that could be the strategy: take the budget that you're freeing up and now do this in an experimental way.
0:47:25.8 MH: Just hold on for maybe two more years on the metaverse, people, come on. Just this… [laughter] Just hold off. We don't even know what this freaking thing is yet.
0:47:33.7 JW: I think that the 3D ad is gonna do really well actually.
0:47:36.5 MH: Oh, I'm sure it probably will. I'm only saying that because you know you're gonna be in a meeting in a week where they're like, "We wanna put some budget behind the metaverse."
0:47:48.3 JW: Tim, here's an unlikely anecdote for you. We have a B2B company that said they were gonna run a campaign on TikTok, and I thought this was probably the biggest waste of money we'd have ever seen. I was just skeptical: how are you gonna find B2B buyers on TikTok? But they listened and ran it through. Instead of trying to do a pre-post analysis, they ran it through a randomized controlled trial, and the numbers don't lie, and I was wrong. My intuition was that those dollars would be underwater, but all the money that they trafficked was in the money, and they actually had evidence from the diminishing return curve that they could continue to spend up. That they could scale spend.
0:48:22.5 JW: Now, part of that… What's in the diminishing return curve? It's price, so it's cheaper media at this point in time, at least on TikTok, and it's response. They actually had a great agency who did really good TikTok-native creative that looked really nice. They got the likes and they got the responses out of it. And so they can spend up comfortably on a new channel. And someone asked the question, what about new channels? I like to say that instead of a go-dark test, it's a roll-out test. So let's go spend in 20% or 30% of the country and measure it without doing a…
0:48:53.8 TW: Try that, try podcasts, try podcast advertising, by golly.
0:48:58.3 MH: Yeah. For sure, the more specialized and niche focused the better.
0:49:05.0 MK: John, are you seeing… 'Cause you're working with all these different clients, and I'm not gonna lie, the experiment results… aww! I just feel like all the results you get to see must be the funnest thing ever, and your team must be sitting around on Fridays having drinks, like, "Oh my God, did you expect this?" I mean, that's what it's like in my head. Are you now banking up, I guess, this experience around "this is what we often see with this particular type of test," or are you finding that the results really do vary across, yeah, B2B and B2C and country and so many other things?
0:49:45.2 JW: Yeah, it's actually both. We're starting to see some patterns emerge, but we go into every test with a "the data will do the talking" kind of attitude. And what we're learning is that you could take… One of my favorites is the third rail of marketing: branded search. Too many times, the debate becomes too polarizing, like you should never do branded search, or you should maximize your branded search. We've done a number of those tests, and we've gotten both answers back. Quite frankly, sometimes people have under-spent on it, and sometimes people have dramatically overspent on it. It really depends on where they are, and that's gonna vary from brand to brand.
0:50:24.4 JW: We just put it under the exact same lens as every other channel, which is: let's find the point to which we could spend where our last dollar is covering its own cost. It's no different. The economics are different, right? With impression-based media, the amount that you're spending doesn't have as much of an effect on the price. With click-based media, the more you spend, and really the deeper you spend into the auction, the more your unit price goes up. And these are all the dynamics that go into what we called, earlier in the call, diminishing return curves.
0:50:56.8 MH: Alright, well, we do have to start to wrap up, but this has been excellent, this has been an excellent conversation, John, thank you so much for coming on the podcast. I’ve been taking mad notes as we’ve been talking and really benefiting from some of your experiences and learnings as you’ve kind of done this for many, many years. I’m sure listeners are as well. One thing we like to do is do a last call. Anything of interest to our listeners that you might wanna share? John, you’re our guest, do you have a last call you’d like to share?
0:51:27.3 JW: What I can't get enough of right now are trust-region constrained ("trust-constr") algorithms. They're out there, widely available, freely available, for doing non-linear optimization, and if you've got a lot of diminishing return curves, they sure do come in handy.
0:51:40.4 TW: Trust-region constrained… can you define that? Like, two more sentences. [laughter]
0:51:52.0 JW: They literally are algorithms for solving large non-linear solution spaces. And what you could think of us using them for is to say: if I had another $1,000, where would I spend it? Or if I wanted to keep the same budget but re-allocate it, how would I do that? They lend themselves very well to it.
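As a concrete example of the kind of question John describes, SciPy's freely available trust-region constrained solver ("trust-constr") can allocate a budget across a set of diminishing-return curves. This is a minimal sketch, not LiftLab's implementation; the curve shapes and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# Three hypothetical channels with log-saturation revenue curves:
# revenue_i(x) = a_i * log1p(x / b_i). All parameters are invented.
a = np.array([50_000.0, 80_000.0, 30_000.0])
b = np.array([20_000.0, 40_000.0, 15_000.0])
budget = 150_000.0

def neg_profit(x):
    # Profit = total modeled revenue minus total spend; negated for a minimizer.
    return -(np.sum(a * np.log1p(x / b)) - np.sum(x))

# Spend must be non-negative per channel and total at most the budget.
total_spend = LinearConstraint(np.ones(3), 0.0, budget)
res = minimize(neg_profit, x0=np.full(3, budget / 3),
               method="trust-constr", constraints=[total_spend],
               bounds=[(0.0, budget)] * 3)

# For this curve family, the profit apex of each channel is at x_i = a_i - b_i
# (where the marginal return hits $1), so the solver may recommend spending
# *less* than the full budget, echoing "cut spend because we'll make more profit."
print(np.round(res.x))
```

Answering "if I had another $1,000, where would I spend it?" is then just a matter of re-running the solver with the budget bumped and comparing allocations.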
0:52:13.6 MH: Very cool. Nice. Well, I want to find a link for that.
0:52:19.6 TW: Trust… trust-region constrained optimization.
0:52:21.6 MH: There you go. Alright, Moe, what about you? What’s your last call?
0:52:26.3 MK: So this comes via the new data PM working at Canva… Well, she's not new, she's been at Canva forever. But her name's Jay Zee, and she sent it to me. It's called "An Overachiever's Guide to Rest" by Elaine Chow. And look, I'm not gonna lie, when I read it, I was like, "Oh, I'm really not an overachiever, like none of these attributes match up to me at all." But I did find the comments on rest incredibly helpful. I recommend reading it, 'cause she's got a series of different recommendations.
0:52:57.6 MK: But the one that probably stuck with me the most, which is something that I've really been thinking a lot about, is to see rest as an investment in your future self. I think it's something that we actually don't prioritize, particularly rest and time away from work. I often feel guilty about it, but actually, sometimes when I create space from work, I weirdly do my best thinking about work, or I have an epiphany about something. So yeah, I just feel like after the last few years, everyone is feeling a little bit more burnt out, and anything you can do to help yourself reset and get into a better mental health space is a good thing.
0:53:37.3 TW: Nonsense. Poppycock, I say.
0:53:39.6 MK: I think Tim needs to read it most of all.
0:53:42.6 MH: I think if you apply a trust constraint algorithm to that, Tim, you might find something different anyways.
0:53:49.1 MH: Anyway. Tim, what about you? What’s your last call?
0:53:53.6 TW: My last call, I would have thought I had… This is something along these lines was the last call in the past, and I can’t find a record that I did. But Michael Lewis’s podcast, Against the Rules, which has had…
0:54:05.4 MK: You definitely mentioned it.
0:54:06.7 TW: I feel like I have, but…
0:54:07.7 MK: You definitely have.
0:54:08.6 TW: Okay.
0:54:09.0 MK: But it’s a good reminder, because I forgot to listen to it and now I can listen to it.
0:54:12.3 TW: But this is gonna be a specific one. He wrapped up season three a couple of months ago, back in May. And season three, episode four is called "Respect the Polygon," and it goes into uncertainty and probabilistic thinking. I actually think season three has been my favorite season of his. There are only, I don't know, five or six episodes per season. He actually has Nate Silver on, and he has a weatherman on, who talks about respecting the polygon. Nate Silver actually is not horribly annoying. And I guess the link for me is that I remember reading in Nate Silver's book, The Signal and the Noise, about weather prediction. But it is a very good discussion of how we try to take in and interpret uncertainty, and it's done in the really fun, self-deprecating, inquisitive Michael Lewis way. I think the whole season, I think all of them, are good.
0:55:07.8 MK: Wait, wait wait.
0:55:08.7 TW: Yes, Moe.
0:55:09.6 MK: Can I have a twofer today, please, sir?
0:55:12.6 TW: Wait. Wait, wait, don’t tell me, also an excellent podcast.
0:55:13.8 MH: Yes.
0:55:14.5 TW: I know, that probably wasn’t what it was gonna be.
0:55:15.8 MH: Hold on, let’s go to the judges. Okay. Yep, I totally allowed Moe.
0:55:20.9 MK: Okay, great. Mark Edmondson has a new book that's coming out, Learning Google Analytics: Creating Business Impact and Driving Insights. And I did actually message him, 'cause I was like, "Look, I just wanna understand, is this a real intro-level book?" And he's like, "No, it's definitely kind of like…
0:55:38.5 TW: No. [laughter]
0:55:39.6 MK: Definitely… What is Learning Google Analytics. I was like, "Well, this doesn't sound like Mark. Mark is one of the most technical people I know." So my understanding is that it's more of an intermediate-level book, and it talks a little bit about Google Cloud and that sort of stuff. But I am very pumped about this and also incredibly impressed, 'cause how that man has time to do all the shit that he does is still beyond me. But anyway, that was my twofer.
0:56:01.7 TW: Awesome. Excellent. That’s a worthwhile good one. Michael, do you have a last call? Or did you save yours to Moe?
0:56:09.2 MH: Well, no… half of it. So, with our last remaining last call: I've been accused of having highly applicable last calls, and I've been looking to change all that. So this one is…
0:56:23.1 MH: Very non-applicable to your job. But it's something I really enjoy messing around with from time to time, which is this little application that uses, I think, some natural language processing and machine learning behind the scenes, like GPT-3, to produce art. You put in a word or a phrase, and then you pick a style, and it generates a picture. It's at app.wombo.art. And it's super fun to mess with. It has nothing to do with analytics, except for maybe some of the crazy math that's happening behind the scenes, but I enjoy it.
0:56:53.7 TW: Jim Sterne will attest to the fact that I actually did do that as a last call, and he went and tried it out.
0:57:00.4 MH: Did you really?
0:57:00.4 TW: That’s awesome. I did. [laughter]
0:57:01.6 MH: Oh, wow. Okay, well, I know that I did not hear about it from you, but…
0:57:05.4 TW: I'm used to being the late arriver. It took me like five years to get an Oculus after you touted it, and now you've moved on 'cause it's…
0:57:13.1 MH: Well, yeah, ’cause I deleted my Facebook account, so now I know I can’t use my Oculus Go.
0:57:19.2 MH: Anyway, that has nothing to do with this topic. Anyway, John, what a pleasure having you on the show, thank you so much for coming on, first and foremost.
0:57:28.6 JW: Thanks so much for having me.
0:57:30.0 MH: Yeah. Of course, no show would be complete without a huge thank you to Josh Crowhurst, our producer, who does so much behind the scenes to make sure this show is the shining example of analytics excellence you've come to expect. I say that fairly tongue-in-cheek, but it seems to work out okay, and you've put up with us all these years, so we'll keep doing it. And of course, I know I speak for both of my co-hosts, Moe and Tim: no matter what kind of experimentation or media mix modeling you're running, the most important thing you can do is keep analyzing it.
0:58:05.9 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group and the measure chat Slack group. Music for the podcast by Josh Crowhurst.
0:58:23.9 Charles Barkley: So smart guys want to fit in, so they made up a term called analytics. Analytics don’t work.
0:58:30.6 Tom Hammerschmidt: Analytics. Oh my God, what the fuck does that even mean?
0:58:38.6 TW: Rock flag and diminishing return curve.