#155: Attribution without Cookies with Dr. Joe Sutherland

Cookies are getting aggressively expired or blocked outright. Referring site information is getting stripped. Adoption of Brave as a browser is on the rise! Yet, marketers still need to quantify the impact of their investments. What is an analyst to do? Does the answer lie in server-side technical solutions? Well, it’s not a bad idea to consider that. But, it’s almost certainly not “the answer” to the multi-touch attribution question(s). Arguably, a better solution is one proposed by Jan Baptist van Helmont in 1648: randomized controlled trials. On this episode, data scientist Dr. Joe Sutherland returns to the show to talk about the ins and outs of problem formulation, experimental design, the cost of data, and, ultimately, causal inference. This is one of those rare shows where there actually IS a solution to a problem that vexes analysts and their stakeholders. The trick is really just getting the industry to understand and apply the approach!

Concepts, Books, and Podcasts Mentioned in the Show


Episode Transcript

[music]

0:00:04 Announcer: Welcome to the Digital Analytics Power Hour. Michael, Moe, Tim, and the occasional guest discussing analytics issues of the day, and periodically using explicit language while doing so. Find them on the web at analyticshour.io, and on Twitter @AnalyticsHour. And now, the Digital Analytics Power Hour.

0:00:27 Michael Helbling: Hi, everyone, welcome to the Digital Analytics Power Hour. This is episode 155. To paraphrase the classic John Wanamaker quote, I know that 75% of the conversations about marketing channel attribution are wasted… Yeah, that’s the whole thing. And as much as we have used these airwaves over the years to scoff somewhat at most of our industry’s approaches to attribution in its various forms, we do care. We care about connecting with customers and using smart and data-informed choices to do it. In fact, Tim Wilson is pretty good at helping companies evaluate and choose the best possible set of strategies when it comes to attribution. Isn’t that right, Tim?

0:01:13 Tim Wilson: Yeah. Well, sure.

0:01:17 MH: Yeah, I think so. And Moe Kiss leads marketing analytics at her company, Canva, so this work touches her day-to-day. So…

0:01:24 Moe Kiss: It’s my worst nightmare. That is the summary.

0:01:27 TW: Complete perfect attribution of television advertising. [chuckle]

0:01:32 MH: But it’s part of what you do. And I’m Michael Helbling. Yeah, that’s it. Okay, so… [chuckle] The world is crazier by the minute in analytics, but we need help, and we wanna keep helping marketers with their problems of getting the data they need to make marketing choices and things like that. So from our perch high atop the industry landscape, we sometimes glimpse alternatives worth talking about, and we have a guest that will help us do so. You might remember our guest from episode 137 about Natural Language Processing. That’s right, Dr. Joe Sutherland is rejoining us to talk about attribution without cookies. You’ll recall that Dr. Sutherland leads Data Science at Search Discovery, and he’s worked in research appointments at Johns Hopkins, Columbia, and Princeton, and he used to work for the President of the United States. And not the last one, but the one before that. But today, he is joining us as our guest. Welcome back, Dr. Joe.

0:02:39 Joe Sutherland: When we start to step backwards into the past, just ’cause you don’t know which way it’s gonna go, it can go either way.

0:02:47 MH: Right. [chuckle] Welcome back to the show.

0:02:51 JS: Thank you. It’s really, really nice to be back.

0:02:55 MH: Alright, so I’m just gonna pass it off to Tim for the first very long and hard to understand question, ’cause I’m pretty sure that’s how this goes. No, I’m just kidding. No, but okay, so this should pique a lot of interest, because cookies obviously are under attack from the browsers and those kinds of things, and one of the challenges of attribution is understanding who a customer is over a longer period of time, and it sounds like that’s what we’re gonna be talking about today, so let’s get into it.

0:03:26 TW: There’s kind of multi-touch attribution as a label, and then there is kind of what problem are we really trying to solve? And I think there’s a degree of conflation of those.

0:03:32 MH: Yeah.

0:03:34 TW: And I would claim that the way that whole companies and marketers totally bought into the idea that multi-touch attribution was gonna be solved through digital, because everything was digital and we track everything and then we just run fancy models, was always flawed and problematic. And the reason… And this is… Yes, this goes to Search Discovery, but Joe and I have had a lot of discussions as I’ve slowly been, the light dawning on me, as he came in from a non-marketing background a few years ago and was talking about this thing called a randomized controlled trial, which if you’ve… I even remember hearing Jim Novo years ago at a conference basically saying the same thing: if you’re trying to do attribution, you should be doing experimentation. So that’s… That is, I don’t know if we’re doing clickbait with the title of the show, but increasingly, with the war on cookies and the requirements of privacy, first-party cookies are not gonna get you back to what was already shitty data in the first place. So that’s where I’ve now watched Joe do this, this like, “Yeah, you can actually do experimentation.” So maybe that would be a good place to start would be, Joe, you came in…

0:05:03 MK: Which place?

0:05:04 TW: Well. [chuckle] You know, Michael promised a long-winded wind up…

0:05:11 MH: Yeah.

0:05:11 TW: But I did not want to disappoint.

0:05:12 MH: That, folks, is predictive analytics. [laughter]

0:05:17 TW: The funny thing, I remember telling Joe that before he and I even worked together, we had a webinar last Wednesday that Katie Sasso came to, and she evaluated attribution, and it was funny because she kind of landed in the same spot as Joe, completely independently, like, “What the hell are you doing with this whole first-touch, last-touch, linear? What? That is absolute nonsense, even if you had the data.” So I feel like we should start by saying: Joe, can you kind of talk through how, from a social science perspective, you would approach the attribution problem?

0:06:00 JS: Sure, and actually, before we even get to that… Just coming into marketing as somebody with more of a layman’s understanding of it, I sort of heard varying definitions of attribution. Like, one I actually heard was that attribution itself is synonymous with one-to-one tracking, and not necessarily the problem that we are discussing right now, which is: when you spend a buck here, do you get this back over here? And… So could you actually go a little bit deeper in defining what we’re talking about? Attribution.

0:06:39 MH: We’ve actually spent multiple podcast episodes on that definition, Joe. I don’t know if we’re gonna be very much help. But actually, that’s probably one of the challenges, is that as an industry, we don’t all define some of this the same ways, and there’s the definition you just gave, which most people should reflect on as, “That’s not right,” but I bet you, if you ask five more people, you get five different answers about what it is too, so I’ll hit it off to Tim now to give the definitive podcast answer.

[laughter]

0:07:12 TW: Well, unless Moe, you wanna jump in?

0:07:14 MK: I definitely do not want to jump in.

0:07:16 TW: Well, I think there’s a degree of scale. I think generally, or fairly consistently, marketing attribution and MMM, media mix modeling or marketing mix modeling, do start to get conflated, and there have been times when that’s kind of, “Oh, one’s a bottom-up and one’s a top-down.” Doesn’t really matter. I think it’s a matter of degrees. It is fundamentally: what was the value delivered by different marketing… fill in the blank. That could be: what was the value delivered by my marketing channels? How much value did display deliver? How much value did paid search deliver? It then gets treated as: no, no, no, how much value did this specific keyword deliver in paid search? How much of this hyper, high-fidelity precision? That is kind of where I think most marketers who have bought into multi-touch attribution end up.

0:08:17 TW: And when people rail about it, they’re like, “This is why you shouldn’t be doing the last click, because there are all these clicks that happen beforehand. Oh, by the way, there are the impressions that happened beforehand,” and so it quickly marches to a super, super detailed level when… And the assumption is, if you can answer that, then of course you can answer how much value came from email, from paid search, from these others, when none of those questions are getting answered well at all, especially when you start talking about the so-called higher-funnel activities, like maybe social media or maybe display. So I don’t know, to me that’s where ultimately they’re trying to get to.

0:09:00 MK: Can I just add, though? And maybe I’ve been thinking about this wrong my entire data analytics career, but to me, it’s not like how much value, it’s like the best estimate or best understanding of how much value a particular marketing action is driving. So yes, last click has its downfalls or whatever, and you should work to try and build a more accurate model, but I don’t expect it to be perfect, and I don’t know, maybe I’m living on some other planet that no one else is living on, but I’ve never expected it to be a silver bullet that’s gonna tell us the perfect answer.

0:09:34 TW: Well, but it’s: how much credit am I going to assign? And actually, I think the way you just kinda headed down is the way that, a lot of the time, the logical progression happens, and it quickly gets you into, to me, kind of a hot mess. As soon as you’re saying which one of these models is better… Like, it is easy to say, “This is why there are shortcomings of just looking at a last click model,” and then you can say, “Well, these are the shortcomings of a first click model.” Well, now you’ve already… And this is from Joe: defined the problem, framed the problem. You’ve already kind of marched into a corner where you’re saying, “Which heuristic model or which algorithmic model is the best one to solve the problem?” without having actually articulated the problem. The big question that I think most marketers should be answering first is: how much value am I getting… with a degree of uncertainty, which is the other thing with heuristic models. There’s no uncertainty baked in, it’s just, “Nope, this is the number, this is the value we assign to it.” Whereas… In not defining the problem is: don’t you wanna know how much value you got from this channel? Which is somewhat where mixed models happen as well. Okay, I’m gonna head off on…

0:10:51 MK: I feel like poor Joe asked one simple… Well, he thought it was a simple question, maybe not.

0:10:57 JS: I sort of asked it almost to point out one of the issues, which is, I think there is a lot of disagreement and uncertainty around some of the terms that we use. I think there’s a lot of disagreement and uncertainty around the approaches that we use, and so I entirely intended for this to become a little bit more of an argument. So yeah, I appreciate you looking out for me, Moe. But I think it’s important to address.

0:11:25 TW: Well, so let me throw in that there’s the mixed model as well, that says, “Oh no, no, no, we use our mixed models to say how much value we got from each channel. We just take and count…” And mixed models are just ridiculous in how they typically get done. There is no upfront rigour in the “how we’re gonna execute.” It’s purely, “These are the dollars we spent.” And at a super high level, it’s “Given what we spent and given what we got, let’s do a big multiple regression and just kind of assign values,” which is also horribly flawed, and it’s all operating under this assumption that if you just track enough stuff, you can actually assign value.

0:12:10 MK: So, Tim, just hypothetically, of course, ’cause I only ever speak in hypotheticals, I’ve never used real work examples: if you were baking incremental lift tests into your attribution model, would that therefore be mitigating some of those heuristic risks, in your view?

0:12:30 TW: Absolutely, but I feel like that’s where the hands get thrown up, because for the agencies that are executing, it scares them. It’s harder for them to execute, and there is the very real risk that it will turn out that maybe they’re not delivering as much value as they should, because baking in any sort of experimentation means that you’re recognizing that you’re not just executing what you think is optimal and then checking the data later. You’re saying, “I’m gonna execute on what I think is close to optimal, but I’m gonna execute in a way where I will take a lesser return, just like with an A/B test, to collect data so that I can then actually do better on the next execution.” And I feel like when it comes to digital media, or any media, that just doesn’t happen. They’re like, “Well, we can’t just do a split test like we can do on a landing page, it’s not that easy.” It’s like, “Well, it’s not that easy. But it’s totally doable.”

0:13:29 MK: It’s totally doable.

0:13:32 JS: It is, yeah. So, man, there’s a lot of different places to jump in here. On the most recent point that you said, which is it’s totally doable: yeah, I think people are scared moving outside of digital, right? In a lot of ways, I feel like we in marketing have been just spoiled forever, ever since the advent of the cookie, or ever since the advent of just our ability to digitally track anything, right? ’Cause all of a sudden, you have sort of the 360 degree heads-up display, you can see who’s going where, what they’re doing, what they’re Google searching for, etcetera. And there’s this sort of false confidence that you get from that amount of information, ’cause you say, “If I needed to know something, let me just drill down, double click on Steve-one-two-three-four, address dot Atlanta, Georgia, whatever it is.” It gives you a false sense of confidence that you could just look at what he’s doing, or anybody’s doing, and then derive some sort of inductive story about why they’re doing that.

0:14:34 JS: And it’s almost like a… Think about presenting to executives, right? If you come to an executive and you pull up this extremely specific anecdotal example for an executive and you say, “Look, I know this is true, ’cause Steve one, two, three, four did it, like look at his pathway, here’s my theory for why that happened.” The executive’s like “I got 15 minutes left in this meeting. You certainly did a lot of work to pull up this data, I really appreciate that. Okay, whatever you’re doing sounds good.” It doesn’t really take you to that next level, which is, “Well, Okay, wait a second. That was anecdotal evidence. How can we abstract this into a larger story and start thinking, Okay, is this true for everybody? Every single time we see this person put this thing in their car or whatever it might be, again, that’s kind of a smaller, more abstract example, but every time we see that thing, does that mean that this thing is gonna happen too?”

0:15:23 TW: Well, and it also, I think you’ve hit on another one, that when people… It is so easy to knock a single, like a last click, like, here are the problems with it. But what happens is the story gets told in the absence of that, and saying, “What about the loyal customer who comes to you this way, and then comes to you that way, and then comes to you this way, and oh, wait a minute.” But that means we need to recognize them across devices, and so then we get excited about trying to solve for cross-device tracking, and we’re back to kind of missing…

0:15:58 JS: Yeah.

0:16:00 TW: This idea that, with digital, we could track everything. And there is, we talked about The Social Dilemma, there is this idea that we can track everything, but I’m more of the mind: where does past experience with a brand fall into that? How do you know? I grew up using Colgate toothpaste for… And that’s just kind of what I always had. And then I married somebody who had grown up using Crest toothpaste. Well, no matter what the marketing is, it’s gonna be our past experiences, or I had a positive experience or a negative experience with the brand. There’s all sorts of stuff that, to me, feeds this big myth. It’s like, “Well, we’re digital, we can see everything.” It’s why a 360 degree view of the customer is total crap.

0:16:41 MK: But do people really think that? When I try and explain attribution internally through training, I always talk about Tiffany’s and the fact that I have this affinity to the brand Tiffany’s, and I go through all the different things I’ve been exposed to, and then I go back to the fact that my grandmother, rightly or wrongly, gave me a copy of “Breakfast at Tiffany’s” when I was 10 years old. I don’t know if it’s the best movie for a small child, and I now have an original “Breakfast at Tiffany’s” poster in my house because it makes me think of my grandma. That’s why I have an affinity to Tiffany’s. None of the marketing stuff matters, and I use that story to illustrate that it’s the best estimate, it’s not… I think the idea that we can track every…

0:17:23 TW: But that’s the point. Why are you telling that story? Because people don’t think that way.

0:17:28 MK: Because it’s the best estimate, like…

0:17:31 TW: No, but you’re telling the story because the people you’re talking to, you’re having to explain to them that this is an estimate, right? That, yeah, I’m not… That’s what I’m saying is, no, I don’t think people… I actually… I don’t know, 100%. How many fucking times do we have to listen to “a 360 degree view of the customer,” right? That’s everywhere. It’s being promised. It’s in all the CDP bullshit about… I mean, CDP, great, yes, do the best you can, but then you slap the label of, “Well, with first party data, we’re…” It’s like, no, you’re not. And you don’t have to go down that path. I guess that’s the thing… That’s not the… The question people are asking is not, “What is the specific…” It does sometimes get asked, I’ve heard it: “What is the best sequence of activities to get somebody to convert?” And it’s like, well, that has also been bullshit from 15 years ago, and it’s “What’s the most common path through my website?” All of that, you’re not defining the problem right, which is: for what I invest, what am I getting out of it? And how can I account for kind of interaction effects between them? And all of that does go…

0:18:46 TW: That, basically, in digital marketer parlance, is a multivariate test, just getting off of testing the on-site or in-app experience and saying: test the media. And that… I don’t see… I’ve watched many, many more “Yes, let’s do the testing, an incremental lift test” or “let’s do a test on media” efforts be planned and designed and then not happen, because all of a sudden you have to coordinate multiple parties at multiple… The client has to understand it, the media strategists have to understand it, they have to execute it well, you have to have somebody who can design it well, you have to have somebody who can analyse it well. And I have a limited view with a limited set of clients over a less limited length of a career, but a lot of stuff has to come together. That is a more nuanced sell than “run an A/B test on your website.”

0:19:46 MK: But I feel like attribution is hard too… Like, running a lift test is hard, but attribution… Well, I’m saying having a particular model, getting everyone to buy into it, like, calculating the weights, like, productionizing it, having it available to…

0:20:01 TW: But it’s all a waste of time. That’s what I’m saying, we shouldn’t be doing that. Because it’s like, why do that at all? That’s the…

[overlapping conversation]

0:20:11 TW: It’s like saying, “I wanna go dig a hole in the front yard.” And they’re like, “Great, here’s a fork.” And I’m like, “Well, I said, I wanted to… ” And I’m like, “Well, that’s not right.” It goes back to, it’s like, “Well, yeah, you said you needed something that you could move stuff around with, here’s a fork.” And I’m like… And then people are gonna run around and argue about the best fork like, “Oh, here’s a longer fork, here’s a bigger fork… ” It’s like, “Why don’t you just go get a shovel?” [chuckle] Having, no vetting of that analogy happened before 30 seconds ago, so.

0:20:44 JS: Well, look, I think that, in general, Moe’s point is well taken. Right, it’s already a lift to get people to buy into the idea of attribution, and then I think Tim’s point is: well, if you’re already investing this much, why not just take it that little bit extra, inject some sort of experimental design into it, and then you get the causal inference piece that you’re looking for. You’ve already paid all the different transaction costs that people have to go through, all the collective action costs that you have to pay up front to get it done, so why don’t you just go that little extra mile and get it done?

0:21:22 JS: It’s really interesting, especially with the class that Tim and I have both worked with on this, how… It’s just… It’s almost like I think a mindset shift needs to happen, it’s less about… Yeah, I guess to your point, Moe, if you’re already paying this much, people are doing attribution is not like they’re not buying into attribution, I feel like there just need to be a mindset shift, which is, people can’t make that leap… At least people are, but it’s just not really broadly happening, right. People have trouble making that leap from, let’s just look back on what we’ve done in the past, to let’s do some sort of experimental manipulation to actually in a forward looking way, pay ourselves back in the future.

0:22:04 MK: I think the thing that gets tough about experimentation with marketing, and this is where I think it gets really hard with the stakeholders, is you always have to have some type of holdout, so you’re always testing a particular campaign or a particular market or… It always is somehow a little bit broken, and then it also means that whatever decision you make is not something you can then replicate for every single other campaign, or every single other region, or extrapolate out. Well, you can, but then you start to wonder about how correct the data is. And then you end up in a whole other really complicated discussion, and then there’s still the whole, “Okay, well, I need to know, last week, did my campaigns perform well, did my keywords perform well?” It gets to that bit where, one, it’s really tough to do really well, but then the second bit is the actionability: how much more complex does it make a marketer’s day-to-day decision making? And how do you build that into their work process? And so it is a hard thing to do.

0:23:16 JS: Oh, sorry to jump in on that. I mean, exactly right. You can almost imagine using the existing attribution architecture you have set up in real time, right? The only difference is that you’re being smart about the design. There’s a few things I wanna double click on with what you just said, Moe. I mean, the first is the assumption that there’s always a holdout group that’s never gonna get anything. So some marketers might say, “Well, look, I’m trying to give everybody some marketing, ’cause we’re leaving money on the table if we don’t treat everybody. What’s the purpose of this control group? Are we just paying for the data?” Maybe you are? Maybe. However, there’s other designs where you can treat every single person in the sample, but just at different times, right?

0:23:56 JS: A stepped wedge design, for example, is sort of like a waterfall. You’re treating this one chunk first, everybody else is in control, then you’re treating the second chunk, everybody else is in control, and eventually everybody gets treated… We use this design a lot, like, as a society, for people that need to get precious medicines when they have really terrible diseases. You don’t wanna deprive certain people of that medicine just because you need to be able to publish a paper, especially when you know that that medicine is gonna help them. So how do you kinda get the best of both worlds? That’s one design you can use to overcome that.
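(A stepped wedge schedule is easy to picture in code. This is a minimal illustrative sketch, with made-up market names and wave/period counts, of the “waterfall” Joe describes: every unit eventually gets treated, just at staggered times:)

```python
import random

def stepped_wedge_schedule(units, n_waves, periods, seed=42):
    """Assign units to waves; wave k crosses over from control to
    treatment at period k+1, so every unit is treated by the end."""
    rng = random.Random(seed)
    shuffled = rng.sample(units, len(units))              # random wave assignment
    waves = [shuffled[i::n_waves] for i in range(n_waves)]
    schedule = {}
    for wave_idx, wave_units in enumerate(waves):
        crossover = wave_idx + 1                          # period when this wave starts treatment
        for unit in wave_units:
            schedule[unit] = ["treated" if p >= crossover else "control"
                              for p in range(periods)]
    return schedule

# Hypothetical markets, three waves, four periods: period 0 is all-control,
# and by the final period every market is treated.
dmas = ["Atlanta", "Boston", "Chicago", "Denver", "Seattle", "Phoenix"]
for dma, plan in stepped_wedge_schedule(dmas, n_waves=3, periods=4).items():
    print(dma, plan)
```

In each period you still have contemporaneous treated and control units to compare, which is where the causal estimate comes from, without anyone being permanently held out.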

0:24:32 JS: Back to the earlier point, though, which is: you already have all these great attribution models running in real time. The only coordination component that needs to come up is the one at the beginning, where you say, “Okay, at this time, in this place, we’re going to spend this much on this channel,” or whatever it might be. And if you can get people to buy into that idea, the rest of the architecture is the exact same: same analytics, you can get the same real-time inferences. When people say, “Oh, I need a data scientist to do that,” which is actually a funny thing in itself, we can talk about it another time, but no, you don’t actually need an additional person.

0:25:05 JS: Netflix, for example, and Microsoft, they do this at scale all the time, and they have an analytics platform that’s reading out these things in real time based on the experiments they’re running. Google is doing it all the time: whenever you run a Google search, you’re subject to probably 20 to 40 experiments that Google’s running on you in that one search. So I guess that’s just a long way of saying, I think it’s more about a mindset shift, and I’d love to talk a little bit more about that.

0:25:34 TW: Let’s talk a little bit about the mindset shift that has to happen.

[laughter]

0:25:42 TW: Well, one, let’s just call out that when we list off Facebook, Amazon, Google, the ones that have super, super scale, there are many, many, many companies that don’t have that volume of activity.

0:25:53 S1: Yeah.

0:25:54 TW: But Moe… Am I not right in thinking that when we hit the “but I need to optimize in real time, the marketer needs to come in and be tweaking and tuning” objection: okay, do whatever you’ve been doing, keep doing that. That added layer of saying… There is no real causal inference in, “Well, I tweaked and tuned these things, and at the end of the day, what I got was X. I was optimising to last touch,” which in many cases is fine to optimize to. But if I’ve done… It’s the concept that, to me, the mindset shift is understanding the cost of data, and what does the cost of data mean… I believe that if you go to 1,000 marketers and ask them what the cost of data is, they would say, “Well, that is the cost for me to implement my web analytics platform and put in my Floodlight tags.” It is literally the cost of collecting the data, just as I decide to implement, as opposed to recognizing this much higher, truer cost of data, which is actually executing in a way that is inherently sub-optimal.

0:27:07 TW: But it’s the exact same argument when people push back against A/B tests, I think that this idea is best, and you’re telling me I need to have some small part of the group get something else. Well, that means definition-ally it is sub-optimal, it’s sub-optimal in…

0:27:24 TW: For the period of the test, but over the life of the… Because you’re paying… That’s a cost, and you’re paying that cost so you can actually get causal information that you can then apply going forward. And it just is not that… I think it was Gershoff who was the first one who sort of had the light bulb go on for me on the cost of data. That is that mindset shift. If I can do that, if I can get some experiments in place, I think I can figure out which classic multi-touch attribution model best approximates the results that that test is giving me. Not gonna be as good, but now I’ve got something that I can monitor and use day in, day out, week in, week out, while also continuing to say: when I’m rolling out new things, I’m running them in a randomized controlled trial world, so I’m getting much more reliable, robust, causal information.

0:28:27 JS: There’s another piece of this that we’re not… I don’t think we’ve talked about yet, which is that the ability to do attribution in the world that we presently live in is at risk, right. So I think you may have heard of ITP, intelligent tracking prevention and some of these other technologies that have from a privacy perspective, prompted us to want to eliminate all of these trackers that we’ve been using, and the trackers that many of these attribution technologies are based on. And so I think that there’s like… Tim’s big point is, there’s a better way to do it. I think that that’s the big point there, and there’s a secondary piece of it, which is, well, sooner or later, we’re gonna have to find another way to do it anyway. And so that’s I think what sort of prompts us to think, what are the other ways that we can think about doing this, there’s technical solutions where people are not, okay, let’s actually just reassemble previous attribution technologies using different ways of tracking or different ways of linking these cookies back to people through probabilistic fingerprinting and things like that, and that’s one set of solutions. But in particular, if there’s…

0:29:46 TW: But I’ll say that crumbles so quickly, like the third party, an impression that doesn’t work for that… Right, ’cause that’s off-site, there’s… So something that was already bad, even the alternatives aren’t getting you back to where you started, they’re certainly not an improvement, they’re a significant erosion on what was already pretty bad, so…

0:30:06 JS: And so this is, like, I guess, just to kind of roll it back up: people obsess over all these different ways of doing attribution, but the problem is that, in essence, each one of them is essentially a marketer being like, “Dude, have you seen Karen? Karen looks so good these days. She totally, like, she’s been taking care of herself, she’s been eating vegetables. We all need to eat more vegetables so we can look like Karen.” That’s kind of what’s happening, when in reality, there could be something else that’s happening. And regardless of if you’re using last click, or if you’re using first click, or you’re using multi-touch, or whatever it is, it’s still this problem of: okay, there’s the potential for confounders out there that are leading us to draw conclusions that are bad, like, potentially bad, right? And that’s been the problem that we’ve known about for a very, very long time.

0:31:00 JS: I think one of my… Actually, here, I’ll read you a funny quote, which I actually really, really enjoy. This is by a guy named Jan Baptist van Helmont, who in 1648 wrote a manuscript called Ortus Medicinae, the origin of medicine. And in it, he challenges proponents of blood-letting, okay, which I guess was a popular technique for getting rid of disease at the time.

0:31:21 TW: Wait, are we not doing that anymore?

0:31:26 JS: I think we still are.

0:31:26 TW: Okay.

0:31:26 JS: In certain places and for certain people, and I guess if you like your blood let, I don’t judge. But here’s the quote, and it is kind of snarky in 1648 language: “Let us take out of the hospitals 200 or 500 poor people…” Yes, he would say that. “…that have fevers and pleurisy. Let us divide them into halves, casting lots, that one half of them may fall to my share and the other to yours. I will cure them without blood-letting and sensible evacuation, but you do as you know. We shall see how many funerals both of us shall have.” [chuckle]

0:32:08 MK: That is like a real A/B test gone… That’s like next level.

0:32:11 JS: The first recorded A/B test proposal, in 1648, with a level of snark I have never heard before… These are problems that people have been thinking about for a very long time. In some ways, you could almost think of blood-letting as the technology that we have at our disposal today, and there needs to be some sort of alternative evaluation running to be able to come up with better inferences.

0:32:42 MK: Okay, someone’s gonna have to explain blood-letting to me.

0:32:46 MH: So they used to… If you had an illness like a fever or something like that, they would put a leech on you, or cut you, and let blood out, ’cause that’s… Obviously, the build-up of the humors or whatever, the ether, I don’t remember the exact thing, but basically that was sort of like a modern medical treatment back in the day.

0:33:06 MK: And people didn’t die from blood loss? Or they did?

0:33:09 TW: Yeah. Yeah, absolutely.

0:33:11 JS: If you knew what we know today, you’d probably see that taking liquids out of someone who’s got a fever is probably headed in the wrong direction, but outcomes were not so closely matched to procedure, I guess, back then. Hence van Helmont’s challenge at the time.

0:33:32 TW: But I think in that example, it was, ethics aside, kind of a split-the-population, cast lots, and do it. I think that’s this other piece, is that when we think about doing this sort of thing in the wild, we say, “Oh, we can’t control, or we can’t do a split test on such and such, because it’s broadcast. It’s out there everywhere.” And yet, this is where Joe is like, there are a million cases where, in the real world, we have to do that all the time, to hell with you digital marketers.

0:34:07 JS: So okay, okay, on the point of… On the point of “it’s too complicated, it’s too murky, we can’t even run an analysis if it’s not in the digital space”: we got news. People have been doing it since 1648, right? The first study of scurvy, in 1747, they ran as a field study…

0:34:30 MK: I don’t think it’s too murky, I think it’s more that a lot of people are gonna fuck it up.

0:34:36 JS: And now that’s one of the risks of this, and you don’t wanna have a design that’s so complicated that it’s unimplementable. I’ve been a part of studies in the past where we had call centers, right, where I think one of the call centers just straight up decided that the script was too complicated and did their own thing, and all of a sudden your whole experiments broken because there was no compliance by the people actually implementing it, but the beauty of these types of experiments are that they really can be very simple if implemented correctly. If you can go to your media agency and you can say, “Hey, here’s a list of the DMAs that you need to treat at this time with this, with whatever this is, with this spend level, with this messaging, etcetera. It’s pretty easy to do that instead of doing it all in one big batch purchase or one big sort of, I guess one day you would parcel it out over time, and I think it provides great training for the analysts who are implementing it. I think it provides great training for the people who have to analyze it on the back end.

0:35:34 JS: You’re not necessarily spending any more money if you have a fixed budget or you’re not necessarily hurting anything with the wrong message or not enough spend to begin with, ’cause you had to parcel out your budget, it’s really just doing it in a smarter way so that you can then come back and reap the benefits.

0:35:53 MK: But in a cookie-less world… Right, ’cause to be honest, that’s literally the pain of my life right now, measuring stuff DMA versus DMA. There is conceivably a point where we won’t know someone’s location, and we won’t know what DMA they were a part of. So it’s all fine when you’re spending, but when you’re trying to actually then look at the uplift in traffic or active users or whatever your KPI is, it could get really… Well, impossible.

0:36:23 JS: But… Not impossible.

0:36:24 MK: Okay, then cool. Give me some answers. We know this is the show where we talk about how to help Moe with her work problems.

0:36:30 MH: Well, this is where, when Joe’s like, “You don’t need a data scientist,” I’m like, “You kind of need a data scientist.” [chuckle]

0:36:35 TW: Well, it sounds like you definitely need some kind of way of really understanding your experiment design and the possibilities or structure of experiment design. So while we’ve been talking, I’ve been madly Googling, like, “What’s the best book on experiment design?” because it seems like that’s what you wanna be boning up on if you wanna start to try to take this new approach.

0:37:00 JS: I couldn’t agree more. I think that there’s this call to action, which is, we need to train up, analysts and to me it’s not only amongst our client base, but amongst, I think the professionals in the digital marketing space in a way that lets them sort of speak in this causal inferential language, right? Like it’s… Causal inference is not a tool, you need to use at every single turn, and it’s not always possible. A client might be a lot more comfortable with, let’s just do a historical analysis of the relationship between how much people bought of this stuff and how much we spent in the past. That’s gonna be interesting, it’s gonna get you some results, the problem is that, let’s say you then get a negative relationship between the amount you were spending on your display and your purchases, then the inference is like, “Oh my God, I need to spend less money on display.”

0:37:52 JS: And setting aside the problem that we’ve never really been able to do great display attribution to begin with, which I think is another strength of the design that we’re discussing right now, what was probably happening there was: when things were going poorly, people started spending more on display. So you get a sort of bi-directionality in the causality there. It lights up as a negative result, but there’s really a story behind it, and you probably shouldn’t be spending less on display, or something like that. So I think that there is a time and place for those types of observational analysis. However, I do think that in the analyst toolkit, which I think is currently lacking in a lot of places, people should also be well-versed in the language of causal inference, right? How do we design an analysis, whether it be through quasi-experimental design or using some sort of random variation or instrumentation from the past, to say something about the future in causally, inferentially valid language?
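(Joe’s display example is easy to reproduce in a toy simulation, with entirely made-up numbers: spend genuinely helps sales, but the budget reacts to weak demand, so the naive correlation comes out negative:)

```python
import random

random.seed(1)
weeks = []
for _ in range(104):
    demand = random.gauss(100, 15)                       # unobserved demand driver
    # Reactive budgeting: spend MORE on display when demand looks weak.
    spend = max(0.0, 60 - 0.4 * demand + random.gauss(0, 3))
    sales = demand + 0.2 * spend + random.gauss(0, 5)    # display truly helps a little
    weeks.append((spend, sales))

n = len(weeks)
mean_spend = sum(s for s, _ in weeks) / n
mean_sales = sum(y for _, y in weeks) / n
cov = sum((s - mean_spend) * (y - mean_sales) for s, y in weeks) / n
var_s = sum((s - mean_spend) ** 2 for s, _ in weeks) / n
var_y = sum((y - mean_sales) ** 2 for _, y in weeks) / n
print(f"naive correlation(spend, sales) = {cov / (var_s * var_y) ** 0.5:.2f}")
# Prints a negative correlation even though spend's true effect is positive.
```

No amount of fancier regression on this observational data fixes the problem; only the random variation Joe describes next does.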

0:38:56 JS: Those are great toolkits that I think people need to be able to tap, and the fundamental principles behind designing a field experiment, which is kind of what we’re talking about right now, these RCTs, these large-scale media RCTs, those principles also underpin all of those other types of historical analysis that you can do. We were talking about diff-in-diff in the lead-up to this show. A difference-in-differences analysis is not an algorithm. It’s not a linear regression that you point at a set of data and then you get an answer from. It’s a design that you implement so that, based on certain assumptions that you’ve made, you can get an answer about the effect of X on Y. And regardless of the algorithm you use, I think this is probably the big point: it doesn’t really matter unless there’s some sort of randomly assigned, or approximately random, variation that you can use as part of your analysis to produce an answer. And that’s the beauty of these RCTs: we’re actually manipulating these groups experimentally and inducing that random variation.
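(The arithmetic of difference-in-differences really is that small; the work is in the design and in justifying the comparison. A minimal sketch with hypothetical numbers:)

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: the treated group's pre/post change,
    net of the change the control group saw over the same window.
    Only valid if both groups would have trended in parallel absent
    the treatment (the parallel-trends assumption)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical weekly conversions, before vs. after a campaign launch:
effect = diff_in_diff(treat_pre=1000, treat_post=1300, ctrl_pre=950, ctrl_post=1050)
print(f"estimated campaign effect: {effect} conversions per week")
# (1300 - 1000) - (1050 - 950) = 200
```

The subtraction is the easy part. Whether the control markets are a credible counterfactual for the treated ones is the design question, and randomizing which markets get treated is what makes the answer defensible.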

0:40:00 TW: So two quick points. I think that, to me, the mixed models, one of the reasons that they can struggle is along those lines of needing to have some degree of variation in order to… When you’re doing a quasi-experiment, you’re introducing variation so that you can actually try to detect an effect. So my go-to example on that is being asked, years ago, to tell a brand when the best time for them to post on Facebook was to get the most engagement, and to analyze their data. And they had posted on Facebook on Tuesdays and Thursdays at 10:00 AM for a year, and they wanted me to analyze it. I was like, well, we’re gonna have to… And this was gonna be super crude, but I’m like, well, you’re gonna have to post at other times. And then, kinda, Moe, to your point, a lot of things come in: well, what if they post shitty posts at 10 o’clock at night? So to me, that all goes back to…

0:40:56 TW: Defining the problem, framing the problem. Outlining all of those things: what are the things that I think are kind of… What can I control for? What can’t I control for? And then doing a design, writ large. And that is not how marketers think, and many analysts aren’t thinking that way either, because they’re pleasers. And the marketer wants them to analyse their historical data. And it’s like, “Well, look, if you spent exactly the same level in every channel, every week, for the last quarter, then how am I gonna figure out what gave you what? Could we mix it up a little bit?” The other reason, which we haven’t really hit on, as to why I think it is hard to get the media agencies to execute, and I have an admittedly strong feeling about media agencies that is partially based on experience…

0:41:52 TW: Do they really wanna know? They count impressions and say, “This was the CPM we got, and you’re running your programmatic media, and we’re optimising to this.” And yeah, they’re relying on pixels and cookies that are slowly getting broken and eroded. That, I think, is actually a bigger challenge, because we’re kind of asking them to execute in a different way so an unbiased third party can actually measure the value of what they’re delivering. Which is understandable that the executive teams of those agencies are not like, “Yeah, I’m on board,” when they’ve said, “Well, wait a minute, for the last 10 years, we’ve been selling them more media and telling them how many impressions they’re getting, and we’ve been counting this way.” So there is the… Even if you get an analyst at the agency that’s like, “Oh, this is so cool. We’ll finally be able to really quantify the value of our affiliate advertising,” it’s gonna go up two levels in the chain and be like, “No.” There is a very good chance that it is… And I think they generally believe that all this stuff works. But a marketer like P&G, they keep cutting back, saying, “We cut our media spend in half again, and again, it didn’t have any impact on us.” So I think the media agencies don’t… They’re not really incentivized to know.

0:43:15 MK: But I also don’t feel like it should be their choice.

0:43:17 TW: It shouldn’t, but that means…

0:43:20 MK: You should be setting your KPIs ahead of time. And from my perspective, I mean we don’t work with very many agencies, we work with one, and that’s only very, very new. It’s not up to the agency to tell us how well something’s doing. That’s our job. We’re gonna tell them what methodology we’re using to measure success, and they agree or they don’t agree.

0:43:41 TW: But if you’re relying on them to execute in a different way, there can be a lot of resistance and friction like “Oh. Oops, I’m sorry. Did we not execute?” I get the “Oops. Sorry, I guess your measurement, which grading…

0:43:53 MH: Yeah. And the other thing is that the agency is not really incentivized to think critically about the on-ramp to this design of experiments as well. “Oh, well, this worked on Twitter, so we’re gonna try it on Pinterest.” The fastest route to getting the dollars is probably not using the billable hours to figure out what’s different between those audiences, or how we should design an experiment differently, or to think through any of those kinds of things. Or maybe the types of buyers, and where those buyers are, or what they’re doing when they’re interacting with this content or media, and all those things. It just blows my mind how good Instagram is at advertising versus how bad Twitter is. It’s certainly not a lack of data, but there’s a huge difference in engagement with advertising on those two different platforms. And it’s just sort of like, huh, there’s something to that. And probably when you go and design experiments around how you understand those audiences, and the attribution of those markets, and that causal inference of what’s happening, those are all things that, unless you’re particularly motivated… Moe, you kind of represent the motivated sample.

[laughter]

0:45:05 MH: You’re trying to actually do a good job. Then that’s just gonna go right past you, and you’re gonna end up confounding it. So it seems like there’s many roles to play. It’s almost like if I’m a leader and I’m having someone pitch a test or an idea to me, I wanna start asking questions of like, “How did you differentiate this approach versus other approaches? Why is this the same or different than other ones that you might have done? And see how well people can answer that question. And then it seems like all of us need to start boning up on causal inference and…

0:45:40 MH: Good that we’re talking about it. We did a podcast episode about it, which we’ll put in the show notes. So go back and listen to that one, everybody. But it sounds like… Yeah, we’ve got some work to do. But at the same time, I wanna say this is giving me hope. So that’s good. There are solutions. That always feels nice, because, honestly, attribution for me has felt fairly intractable and annoying for many years, so this is very good, for me anyways. I don’t know how to do it yet, but at least I know there’s something you can do.

[laughter]

0:46:14 MH: Take 200 patients from any hospital, that wasn’t the…

0:46:18 TW: Can we throw in… The other perverse incentive is that, and I think, Moe, you’ve been at a couple of the younger companies that have a better infrastructure, better mindset. When you take the organizations where there are marketers in-house who have grown up with old, staid TV and print, there’s that other perverse incentive that really the… How do you advance? How big is your budget? I’m controlling a $5 million media budget; next year, it’s gonna be a $6 million media budget. So even in-house, the marketers aren’t necessarily as incentivized as they should be to really get the value.

0:47:02 TW: Really, really quantify the value in kind of a causal way. So I think there are a lot of barriers that are organizational barriers, and again, I think there is hope. I think the more it’s starting to click in, the more companies that hire a data scientist and then say, “Can you go look at our time decay attribution model?” and the data scientist looks at it like, “What the hell are you doing? What problem are you trying to solve? Have you thought about doing some quasi-experiment?” I think that’ll help here.

0:47:34 MK: But that’s the whole point of the analytics function, right, is like it’s your job to go in, get all the marketers to like you, then you tell them you’re gonna help make their channel more efficient, and then you put in some robust practices to measure stuff better, and hopefully by then they like you enough that when you tell them something sucks, they listen. No one tell any of my stakeholders to listen to this episode. [chuckle]

0:47:58 MH: And then you hope that some major vendor doesn’t come in and blow it all out of the water with a bunch of sales-side BS that the executives are eating up, because it’s, quote, “major vendor.”

0:48:11 MK: Yeah. But if they’re your friends, they’ll forward those emails to you and be like, “Is this any good?” And you’d be like, “These are the 10 reasons why it sucks”.

0:48:17 MH: Yeah. Yeah. Yeah, and that’s the trust part of it, and building that. Man, it sounds like what we’ve stumbled across is that one of the major hurdles to doing attribution well is actually process and design problems.

0:48:36 TW: I thought we were about to take it to emotional intelligence, and I was gonna be like, “No. No. No. I did not need to go there.”

0:48:42 MH: Well, to solve process problems intelligently, you need emotional intelligence and organizational trust. [chuckle] And so… Yeah, it all comes back to Brené Brown, Tim. Learn to love it. [chuckle]

0:48:54 JS: There’s a piece of this though, where it’s… Capability enablement is important.

0:49:03 S1: Yeah.

0:49:03 JS: This isn’t the type of thing where you can necessarily go and buy an off-the-shelf piece of software. You can’t really pay a vendor for these types of thinking processes. You have to really have a human brain behind designing: okay, what is the experimental manipulation that we are trying to perform? What are the outcomes of interest that we really value? And how closely do those outcomes of interest, and the context that we’re dealing with, map back to the actual world that we care about, right? You can do a letter-writing campaign to get one individual to buy your product, but in the real world, nobody actually does that. So there’s, I think, a lot of different pieces that are about the analyst capability set, and I think the first big thing I’ll say is that sort of causal inference skill set is missing, so there’s a capability enablement piece. I think it’s a big piece of the process. I agree with you as well, I think process is a big gaping hole. It’s hard as a digital marketing analyst, because especially when you’re quintessential… The amount of…

0:50:09 MH: Everybody there…

0:50:11 JS: And the soft side of things is, like, monumental. And part of me likes to think that the more types of… The more pieces of content that we can put out there trying to educate, just, like, “Okay, what the heck is all this stuff? And okay, where does it make sense to apply it? And when is this actually gonna really make a difference for my bottom line?” Pieces of content like that are gonna start opening up the conversation in ways that I think Silicon Valley has done a great job of so far: “Oh, causal inference, what is that? How do I do that? I want that, that’s the cutting edge.” And once you get there, I think it starts to become a little bit more incumbent on the analyst to be able to execute it, and to be able to execute, you have to have the capability, so…

0:50:55 MH: Alright. Well, this is phenomenal stuff, as always, and I already see a couple of things: the Dr. Joe Sutherland podcast, brainstorming experiments for the modern era, so that’s part of what you should spin up. And… Or I could totally see a Tim and Joe, like, a group where you just have people call in and ask questions, like, “Here’s what I wanna do, how should I design this?” And you can answer questions, and they can pay you money, of course, ’cause that would be something you’d charge for. Anyways, I would join that group.

0:51:28 TW: I think we all know that there’s now one user on Slack who was gonna say that this was the sales pitch podcast.

0:51:36 MH: Well, you know, sure. If it’s me pitching it, though, then technically, like, fuck ’em. So anyways, we do have to start to wrap up, though, and one of the things we like to do, because we’ve tested it extensively in many different designs and the causal inference shows that we… Our audience loves it. Is we go around the horn and do a last call. Actually, no, it’s just one letter we got from Moses Jerkins out of Louisville, Kentucky. No, I’m just kidding. [chuckle] People like anecdotes. Alright, let’s do last calls. Joe, you’re our guest, would you like to share a last call with our audience?

0:52:18 JS: The first last call, to reiterate, would be to watch The Social Dilemma on Netflix. I think it’s so, so important for people to understand what they and their children are being exposed to through the digital ecosystem that we’ve built today, and if you haven’t seen it, I highly recommend it. It’s like a knowledge-is-power type thing. It’ll really change the way that you look at social media and a lot of these other digital experiences that we have today.

0:52:43 TW: So just a clarification for our listeners: when we’re recording this episode, Joe does not know that an episode that hasn’t been released yet, our last episode, was the Netflix movie club review of The Social Dilemma.

0:52:57 MH: Yeah.

0:52:58 TW: So we wholeheartedly endorse that, but if you’re thinking, “What, does Joe not listen to every single episode?” He just doesn’t know.

0:53:06 MH: Yeah.

0:53:06 TW: And we’re not gonna answer that question definitively. But that’s a great, great endorsement.

0:53:12 MH: That’s a really good one. Alright. So I kind of inferred from how you talk about it. You might have another last call to share.

0:53:15 JS: It is… And I’ll give you a few more if you’re interested in more, just generally, experimentation and field experimentation. I think Field Experiments is a really good book to pick up, by Don Green and Alan Gerber; I think it’s probably one of the Bibles. If you’re really into the technical stuff, Mostly Harmless Econometrics by Angrist and Pischke would be another really, really good one. And if you like to follow some great content, just about how it all fits together, and maybe there’s a little bit of fun sprinkled in there too, Andy Gelman at Columbia, I think, has a fantastic blog about causal inference. I know him, he was one of my advisors during the PhD. He’s got a really, really great perspective on all this stuff. But that would be my last call.

0:54:04 MK: I feel like from now on, all last calls should just be Joe last calls.

0:54:09 TW: Yeah.

0:54:09 MH: Yeah. He’s super good.

0:54:12 TW: We’re gonna… We’re gonna clip those up into multiple ones, and then we’re just gonna kind of insert one at a time.

0:54:17 MH: Yeah. We’ll just sort of chuck ’em in and be like, Michael, what’s your last call? And all of a sudden Joe’s voice starts talking, it’s like…

0:54:28 MH: Okay. Moe, what about you? What’s a last call that you have?

0:54:32 MK: So I have a bit of a weird one, but it was actually really fun to play around with yesterday. My colleague Jayzii sent a link to a new Google research project.

0:54:41 MH: Hold… Hold on.

0:54:42 TW: Your co… Wait.

0:54:43 MK: Yeah. Her official name is Jay… I was like why is everyone doing weird…

0:54:47 MH: You work with Jay Z?

0:54:49 TW: Jay Z.

0:54:49 MK: It’s spelt J-A-Y-Z-I-I.

0:54:55 TW: Okay. You know who… You know why we’re doing the double take.

0:55:01 MK: I’m pretty sure.

0:55:01 TW: Okay.

0:55:01 MK: Yeah. Anyway, her name is actually Jayzii, yeah.

0:55:02 MH: Okay.

0:55:02 MK: And sorry I was like why is everyone giving me these dramatic faces. [chuckle]

0:55:06 MH: Well, it just so happens that someone kind of popular here in the US has the same name.

0:55:13 TW: Even I’ve heard of the Jay Z that was giving us all those reactions.

0:55:16 MK: Anyway, she sent a link to our data group of a new Google research project called Tone Transfer, and basically it’s using ML models to have instruments replicate sounds from real life. I don’t know, it was just really fun to have a listen and play, and they have some really interesting musicians on there, ’cause they’re working with a bunch of musicians to replicate different sounds, and just how enthusiastic these musicians get about ML models replicating sounds actually really surprised me. So it was a bit of a fun one. Yeah.

0:55:53 MH: Very nice. What about you, Tim?

0:55:58 TW: Okay. I’m gonna do it because it’s so topical, but while I have ordered it, I have not yet received it or read it, but it is the book out called Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet by Tim Hwang, which is supposedly I read an interview with him by Wal Hickey is basically saying part of what I’ve been railing on, which is maybe a bunch of this advertising actually doesn’t work, so it seemed so topical, I promise that I will have at least started reading it by the next episode.

0:56:36 MH: That’s nice.

0:56:38 TW: What about you, Michael?

0:56:39 MH: Well, this one is, I’m gonna call it, aspirational. So it’s kind of personal, but I wanna highlight it because… So we had Nancy Duarte on the show a while back, and her company does a lot of different things with sort of communication and storytelling, and they have these virtual speaker coaching services that they provide, and I’ve kind of had this goal of getting that for myself someday. So I talk at conferences from time to time, and really the only feedback I get is from Tim telling me, like, “Here’s what you did wrong.” Which is a little de-motivating. Honestly… No, I’m just kidding.

0:57:20 MK: I gave you feedback at SUPERWEEK!!

0:57:24 MH: Moe…

0:57:24 MK: Okay. Sorry.

0:57:25 MH: It’s the humor.

0:57:26 MK: Sorry, the humor. Don’t let real emotions get in the way of humor, darn it.

[laughter]

0:57:32 JS: If you ever want a pick-me-up, you just call me. I’ll have…

0:57:35 MH: That’s right. So, but one of the things is is like… And I… That’s awesome, but what if you can have like a professional work with you for the month before your presentation, how amazing could you present? And so like in my head, I kind of am like, Man, someday, it’s a little much to afford as an individual, but if your company or you are ever gonna speak like, go check those out, ’cause I think that you could potentially really elevate your speaking game through those services, so I mostly for myself, but I’ll share it with our audience too, so I will commit to the podcast if and when I could ever buy one of these packages and use it, I will tell the podcast audience ahead of time and give you my assessment of the results, so you can tell how much better I was as a speaker, and I’m no slouch, I’ve scored higher than Tim Wilson one time. [chuckle] At a speaker evaluation. [chuckle] I love my life. Okay, thank you so much everybody. Joe what a pleasure to have you on the show. Thank you so much for coming back on.

0:58:46 JS: Thanks for having me.

0:58:46 MH: There is no shortage of things to appreciate, and of particular joy to me is how you’re able to so seamlessly pull in these relevant examples from long ago. It just sort of makes the past, present, and future kind of all seem accessible. So appreciate that very much. Speaking of things I really appreciate, I also appreciate our producer, Josh Crowhurst. He wades through all these rough, unedited episodes and produces the thing you’re listening to now, which is a beautiful piece of podcast content honed to perfection. And so…

0:59:18 TW: What you just listened to was after he shortened my content, we had four hours that the poor guy had…

0:59:26 MH: Yeah. It’s right, this was… Yeah. That’s right, it’s sort of like those other podcasts that just go and go for like three or four hours, that’s what our shows are, but then Josh really brings it back in, and all. Anyways, and so we’re very thankful for that. Remember, we do actually have a merchandise shop. So if you wanted to buy a t-shirt or a mug or something like that, we would love to encourage you to do that. You can just go to the shop and then if you wouldn’t mind doing us a favor is just take a picture of yourself with that merchandise that you bought from four different angles, so we can get a 360 degree view of you, the customer. [chuckle] That would really help us out. Anyway. [chuckle]

1:00:09 MH: That would definitely go up on our Twitter page. If you were willing to do that, we would be thankful for that joke. Anyway, we really appreciate all of you as listeners, and I know I speak for my two co-hosts, both Moe and Tim, when I say to you: there’s a lot of uncertainty out there, but if you really work at designing your experiments, you know you can keep analyzing.

[music]

1:00:34 Announcer: Thanks for listening. And don’t forget to join the conversation on Twitter or in the Measure Slack. We welcome your comments and questions. Visit us on the web at analyticshour.io or on Twitter at Analytics Hour.

1:00:48 Charles Barkley: So smart guys want to fit in, so they’ve made up a term called “analytic”. Analytics don’t work.

1:00:55 Thom Hammerschmidt: Analytics. Oh my God, what the fuck does that even mean?

1:01:05 TW: Huh? You did what? Wait, you set your hair on fire? With what? What were you burning that led you to set your hair on fire? You were lighting a candle?

1:01:16 MK: No, my hair straightener set it on fire.

1:01:19 MH: Oh.

1:01:20 TW: Oh.

1:01:21 JS: Tim, you should get a hair straightener.

1:01:23 TW: No, ’cause I would definitely set my hair on fire.

1:01:27 MK: What with the amount of products that’s in it?

[laughter]

1:01:34 TW: Moe, don’t do this to me.

1:01:35 MH: One tiny dab of product in it.

1:01:41 JS: Well, it’s depends on who you’re rooting for, but actually, so.

1:01:45 MK: You do realize, Tim, that rooting in Australia means a very different thing.

1:01:50 TW: Yeah. So do thongs in Australia. So, you know. [laughter]

1:01:56 MH: And ants.

1:01:57 MK: True.

1:02:03 JS: Who are you cheering for?

1:02:05 MH: When you’re ready to record, I am ready to go.

1:02:10 MK: No. But actually, one that I ask: “If we asked your best friend what your most annoying or worst habit is, what would they say?” And every time you ask, these people come out with the stupidest shit, because if you say, “What’s your worst habit?” people are like, “Oh, I over-communicate,” or some other bullshit.

1:02:34 MH: “Sometimes I’m too committed to the team.”

1:02:36 MK: Yeah. But if you say, “What would your best friend say?” Like one guy, recently, on an interview was like, “Oh, well, you know, they think I’m a little bit self-absorbed.” Another guy was like, “Oh, they think I don’t have a lot of empathy.” And you’re like, Are you… You are still in an interview. Are you hearing yourself?

1:02:54 JS: But Moe, how do you know what that friend is like? They may be awful. This whole thing about having a best friend, I feel like does it disqualify if you’re male or?

1:03:03 MH: It’s this whole best friend thing.

1:03:05 JS: I would’ve been on “Who Wants to Be a Millionaire?” but I realized I didn’t have one of my lifelines, there was no one.

1:03:14 MH: Rock flag and causal inference…
