We’ve got the technology. We’ve got the behavioral data. We’ve got the content (or at least we tell ourselves we do). We’re all set to develop personalized experiences that knock consumers’ socks off and leave them begging us to take their money. Is it really that simple? If it is, why aren’t more companies realizing the dream of 1-to-1 marketing? Matt Gershoff joins us to discuss how the pieces of the personalization puzzle often don’t quite fall into place like we wish they would. Matt’s also written a post that overlaps with our discussion: http://conductrics.com/complexity.
The following is a straight-up machine transcription. It has not been human-reviewed or human-corrected. However, we did replace the original transcription, produced in 2017, with an updated one produced using OpenAI’s WhisperX in 2025, which, trust us, is much, much better than the original. Still, we apologize on behalf of the machines for any text that winds up being incorrect, nonsensical, or offensive. We have asked the machine to do better, but it simply responds with, “I’m sorry, Dave. I’m afraid I can’t do that.”
00:00:04.00 [Michael Helbling]: Hi there, folks. There are a couple of upcoming events that we wanted to make our listeners aware of. The first is Un-Summit. This is an event put on by David McBride, who’s a good friend and a former guest on our show. For many years, it’s been the event that the who’s who of the analytics community attends during the week of Adobe Summit. It’s a place where people can connect, make new connections, and share what they’ve been working on via a series of short presentations. This year, it will be held at UNLV on Monday, March 21st, and the cost is $50 for professionals and $15 for students, which is just to make sure we can cover the cost of the event. For more information and to sign up, go to un-summit.com (so “unsummit,” but with a dash in there). You can also contact David McBride directly on the Measure Slack or via LinkedIn. If you’re going to Adobe Summit, I highly recommend making time in your schedule for Un-Summit. The second thing is that in April, the Digital Analytics Power Hour will be live from eMetrics San Francisco. The conference runs from April 3rd to the 6th at the San Francisco Marriott Marquis, and we’re trying out our first-ever live episode there on April 5th. You can get more information about eMetrics San Francisco at the eMetrics site, emetrics.org.
00:01:41.29 [Announcer]: Welcome to the Digital Analytics Power Hour. Three analytics pros and the occasional guest discussing digital analytics issues of the day. Find them on Facebook at facebook.com forward slash analytics hour. And now, the Digital Analytics Power Hour.
00:01:58.27 [Michael Helbling]: Hi everyone, welcome to the Digital Analytics Power Hour. This is episode 31. There’s one thing that every digital marketer desires above all else: a knowledge of the customer so keen that every digital touchpoint is personal and relevant. Personalization. At its core, it’s a rational desire, but tonight we head down to the crossroads in an attempt to understand the mystery of personalization. Is it a stairway to digital marketing heaven or a descent into madness and perdition? We were scared to go alone, so we’re bringing a guide. Our guest tonight is Matt Gershoff. He’s one of the co-founders of Conductrics. He’s been in the analytics industry since the mid-90s. He started his career in database marketing at Wunderman (you might have heard of them), and he studied artificial intelligence in Scotland, where he discovered a deep and abiding love for Scotch. And despite that, we don’t find many people who talk about personalization like he does, and that would be with a lot of intelligence. Welcome, Matt.
00:03:06.01 [Matt Gershoff]: Thanks for having me.
00:03:07.01 [Michael Helbling]: And of course, rounding out this crew are my other two hosts: the senior partner from Analytics Demystified, Tim Wilson (hidey-ho), and the double CEO of Napkyn and Babbage Systems, Jim Cain, making it a double. And of course, I am Michael Helbling, the analytics practice leader at Search Discovery.
00:03:29.63 [Tim Wilson]: It’s like the of course, like really? You think you’re at that point where people are like, oh yeah, yeah, yeah.
00:03:34.36 [Michael Helbling]: People know who I am. They know who I am. This topic is one that intersects with analytics in a way that is sometimes not always easy for sort of your regular digital analysts to consume. And I think it’s a really good math you’ve deigned to join us because I think you can be of a lot of assistance to our listeners and maybe to us too on this journey. So I don’t know, like I think it’d be great if you gave us a little bit of a rundown of sort of how you got into this area, maybe a little more about conduct tricks. If you go to your website, it’s not, the podcast isn’t about the product, but it might help people kind of understand where you and personalization first met up and how you became friends.
00:04:17.54 [Tim Wilson]: Conductors is a one-to-one marketing engine, correct?
00:04:24.44 [Matt Gershoff]: It’s the Panacea. Yeah, sure. Wait, what do you want me to start with? You guys want to hear a little bit about… So for me, I really got interested in this. I started, as you mentioned, in database marketing at a place called WarnerMan, which is originally called WarnerMan Kato Johnson. And they were one of the early direct marketing companies. In fact, the guy Lester Munt, WarnerMan, who started the company, Coin the term direct marketing and it was really all about this idea of one to one marketing so from the very very early on we were involved with trying to target individual customers with experiences that were most appropriate for them now what was different back then was rather than using the digital channel which we mostly use now. the channel that we used back then was predominantly male. So not particularly glamorous, but still was a channel in which the client could communicate and interact with the customer. And I had a direct response mechanism. So just like today where we take, have experience, digital experiences for customers, and in the hopes that our customers take certain actions, some sort of call to action, was essentially the same type of framework except you know, the channel was a lot slower, was male and was more of a discrete process as it is today than it is today. And I was always really interested in this idea of where we used to take, I mean, what the job was in database marketing was we would take our client’s data and their marketing databases and we would try to build models, analyze the data and try to figure out who is most likely to respond to a particular offer and maybe what the best offer should be. And what was interesting to me is that I always wanted a system that embedded that type of logic directly so it didn’t have to go through a human being like as the analyst I was building the model what I thought was the I want to say the panacea, but I thought was going to be the future was this the capacity for the marketing process to embed the intelligence to make the decision automatically, right? So to have a differentiated customer experience for different types of users. And so that’s really after I went back to graduate school for artificial intelligence is what we started trying to work on. for conductrics. And so conductrics really is just sort of a thin decision layer that you can put anywhere within your marketing process. And so a lot of the technology that’s out there now, which is usually under the framework of AB testing and maybe personalization or targeting, most of that is built around the browser. So it’s usually about website optimization and tools around manipulating the web page, the presentation layer on a website, as the experience for the user. So when the customer comes in or the user comes in, we’re usually trying to figure out, well, how do we make the website better? Or what’s the offer that we display? Or what’s the button? Or what have you? And what we wanted to do, which is a little bit different, is that we wanted our platform to be able to be embedded anywhere and in any type of marketing application. So our system is architected as a web service, which means that you can use us both on the web, but also in a mobile application or in a call center or really in any transactional marketing application. And what’s useful for our clients, which tend to be larger clients, and also for our partners. So we have ecosystem partners that consume our platform directly. 
So they kind of embed our technology into their system. So we have partners who are, I guess I can’t really say exactly what they were doing, but we have ecosystem partners that use our logic and our machine learning technology directly within their application.
00:08:16.61 [Jim Cain]: matt can either confirm nor deny that the alice uses conductrics for first-person a v test it’s more conductors uses the alice So to give some of my background on this one, was it two years ago or so, Matt, you saw me talk at a e-metrics event of all things, and I talked a little bit about the opposite kind of personalization than you kind of empower. It was kind of that hypothesis-driven, business-rule-driven, very manual approach to personalization. And then you and I chatted after, and then I even came to New York to kind of get the full walkthrough on your approach to it. And you’re literally sitting on the other side. Personalization is, to a large part, a very business user Again, testing and a hypothesis driven like Tim’s whole routine on, you know, someone says, I think this and if that I’ll do this. And that’s how most, most people, including me approach personalization and you’re coming at it from literally the other side of the fence. Is that a fair statement?
00:09:11.61 [Matt Gershoff]: That’s interesting. I don’t, I’m not sure. I think of personalization, which is really just the process of assigning users to experiences. Really. I mean, that’s the real objective is that we want to assign experience to a user that is in some sense best. And what best is going to be is going to be a function of your organization, that’s sort of your KPIs, or it could be what’s best for your end user or whatnot. That’s really something that’s going to be organizationally driven. I’m not sure that the whole idea of using data or not are using editorial expert opinion is in a way a separate issue. It’s the main idea that I tend to think of this or I try to think of the problem as is an assignment problem. So personalization is the assignment problem of assigning a customer to an experience. And now how that’s done is kind of secondary, I think. And so there’s a lot of different methods. One method is to use business logic, like business rules, right? And so it may very well be that the VIP customer who comes in needs to get a certain experience, or it may very well be that you have business policies in which you need to assign certain types of users a certain type of experience. So the point is that in order to implement this type of thing, we really have to start thinking about our marketing program as like a larger process. And we have to start thinking about sort of the inputs and the outputs and that logic and how we get that logic is, as I said, is either through business experience or it’s through predictive analytics or machine learning or what have you. I’m not sure if I answered the question you had, but I just think there’s multiple ways to come up with that logic. It’s almost like we’re trying to come up with a program, a targeting program, and we can either write that ourselves through our business logic or our expert opinion, or we can have the machine write it by using machine learning and predictive modeling.
00:11:15.60 [Jim Cain]: So what you’re saying is that we’re violently agreeing, it’s just that theoretically my approach could take six years and your approach could take six months if you kind of apply the machine learning appropriately.
00:11:26.45 [Matt Gershoff]: I think the approach could take a week. It’s really about stepping back and the scale of the decision problem. So if we think of it as we’re trying to make decisions, we don’t think about it as a data problem, let’s say, but it’s really a decision problem. How is our marketing process responding to our user? And is it a low level? type of decision, is it a transactional decision, which is what does the web page show to the user, or what email do we send to the user, or how does the call center respond? What’s the script? That’s kind of a low-level, high-transaction type of decision, and that’s the type of thing that is most amenable to being embedded into the marketing application and having our machine learning figure out what’s best. more continuous, low-risk, high-volume types of decisions. If you’re talking about a more discrete, high-value type of decision, then that might be something that really is something where you’d want your analysts to be taking a look at, and it’s the type of thing that might live in a longer time horizon. So you have different types of decisions that live in different time scopes and different Different frames so I know that’s not particularly clear But you know it’s just like that there’s a from AI There’s this notion of like that the taxi the taxi problem and there’s lots of subtasks that the taxi problem has one It’s to like come to you and deliver you to where you need to go But there’s also subtasks that it has which is you know trying to navigate the streets and what have you and It can operate on two different levels. It’s one, it’s trying to not get into a car crash, but at the higher level, it’s got this planning problem, which is what’s the optimal way to pick up the passenger and deliver them to their destination. You have these problems that are scoped differently, that have a different time framework.
00:13:23.41 [Tim Wilson]: But when you say an assignment problem, that is a user to the experience. And so at one end of the spectrum, you only have one experience, which means the assignment problem is very simple because you only have one option. At the other end of the spectrum, you have 1,000 experiences that you’ve Set up and then you have the ability to shuffle things around and say, I’m going to try assigning this type of user to this type of experience. And it feels like machine learning is coming into that. I feel like we run into the case when it comes to personalization in my one to one personalization crack. that we kind of ignore that you have to have multiple experiences and that organizations struggle to have even two experiences. If you have two experiences and an assignment problem, does it become, is it easier, I would think, if you tell a machine you have two experiences to choose from, that is much easier than having the machine have 10 experiences to choose from. or am I framing that completely incorrectly to say when you talk about an assignment problem of a person to an experience that the number of experiences is the thing that is often overlooked as you have to create those experiences because it feels like there’s in the world of machine there’s this idea that oh no no it’s going to dynamically generate the experience and that’s really really hard to do.
00:14:50.15 [Matt Gershoff]: Yeah, so yeah, I totally agree. And it’s, I don’t think, I think at a certain level, you don’t even want to focus too much even on the learning, the machine learning part of it. It’s just if you have multiple experiences, it means that your application, so currently your application or most applications, so just think of your marketing website, let’s say, there’s only one website conceptually. And so whenever a given user is on a particular page, it’s the same Experience it’s the same sort of state so it’s it’s as it’s as simple as possible but then if you start saying well depending upon the user so it’s going to be conditioned on some attribute of the user so it’s no longer condition just on what page there on it’s some it has to do with the user and let’s say there’s just two types of experiences. then you kind of are doubling the complexity like the complexity of your system goes up and so the more experiences you have the greater the complexity of your overall marketing system and that’s that’s even if you already know even if you don’t need to even learn what customer gets what experiences like that’s a that’s just a business rule so you’ve got repeat visitors get one thing and new visitors get something else, you still have increased the complexity of your system. And so one of the things that I kind of wanted to talk about was just there’s this inherent trade-off between the expected returns or the expected efficacy of targeting because you’re delivering hopefully a better experience for your user, so you’re going to get a greater return. There’s always this cost of complexity. You’re always going to be making your system more complex. It’s going to be more complex because now you need to manage all of these different experiences. It’s also going to be more complex because you need to take into account the input, so you need to have data about your users at decision time. And that data needs to be managed right because it needs to be accurate it needs to be timely and so you need to have. A whole system which is managing your data and just that side of it is difficult you need to have greater complexity or greater management costs. for your content or your experiences, right? And it doesn’t need to be just presentation layer stuff. It could be some backend business rule or different pricing. It could be different sort algorithms, whatever. But these different experiences, those also need to be managed. And now you have this new object, which also needs to be managed, which is the decision logic. which you didn’t have before, you didn’t need it, but now you do, right? You can almost think of it, a good way to think of it is just as a decision tree. So you’ve got this structure, this data structure now, almost like a program. And in fact, it is a program. And so you have this new program, which is this mapper, which takes its inputs, data about your users, and its outputs are the results of the decision, so the experiences, so blue button, green button. And in fact, I think the reason why people get kind of excited about AV testing is it’s like the prototype to this, or it’s the very beginning, it’s the first step into having your marketing application have multiple experiences. Even if it’s not conditioned on the user, your site now can be in more than one state in a way. You have this A experience and the B experience. It’s almost like you’re in the multiverse, right? 
And you kind of like, You’re living in one world and then an experiment lets you live in multiple worlds and then you can see which world is doing the best and then you can kind of collapse all those other worlds and then just live in the best world in a way. And with targeting it’s different. You kind of keep open lots of these other universes, these other worlds and they’re always there because some users are going to do better in those experiences and so you have this greater complexity that you need to manage. And I think that’s the reason why, even though people have been going on and on for over 20 years, longer, about one-to-one marketing, many of the people listening and many of the companies, they don’t employ it because there’s this complexity, there’s this cost. And in fact, I think it’s really the reason why people in analytics in general are often frustrated with, hey, how come no one is using the information I’m giving them? It’s because the organizations that they work in and the systems that are used are not designed to provide multiple experiences, right? That’s really the thing. You really have the static system and you’re just sort of, analytics is really just about collecting sensor data. It’s a, even though it’s difficult and it’s useful, it’s passive, right? You’re just kind of putting in sensors and you’re collecting data. The hard part is what’s known as the control problem, which is this, trying to figure out what’s the best experience, but that’s where the meat is or the tofurkey.
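Matt’s “mapper” that the machine writes can be pictured literally as a learned decision tree. Here is a hedged sketch using scikit-learn: it fits a tiny tree on made-up logged data (user attributes in, best-performing experience out) and prints the resulting targeting program. None of the data or feature names are real.

```python
# Letting the machine "write" the targeting program: learn a decision tree
# that maps user attributes to experiences. All data here is simulated.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),   # is_repeat_visitor
    rng.integers(0, 2, n),   # is_mobile
])
# Pretend repeat visitors responded best to experience "B", everyone else to "A".
y = np.where(X[:, 0] == 1, "B", "A")

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["is_repeat_visitor", "is_mobile"]))
```

The printed tree is exactly the kind of new object Matt says now has to be managed: a small program, with inputs (user data) and outputs (experiences), that did not exist before.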
00:19:26.46 [Michael Helbling]: Yeah, and there’s very few companies that kind of turn that corner effectively, right? So there’s like a Netflix or Amazon who have websites that are completely tailored to the experience of that user or Netflix’s case, me and my kids. Yeah, Google.
00:19:45.48 [Matt Gershoff]: Yeah, and if you think about it, and there’s a reason for that, actually, that’s actually a great insight, is that companies whose product is data, essentially, they tend to be more in that winner-take-all, not exactly, but it’s more of a winner-take-all problem, where if you’re marginally better, then you get a very large share of the market, and it’s like the Google, or use Netflix or Amazon or what have you. But for most companies that are selling a physical product or service which is less of a digital product, it’s really more of a game at the margins. It’s a marginal improvement to go and start using targeting. It’s gonna improve things, but you’re not probably in this winner-take-all environment. And implicit in most of the pitches for personalization is this notion that you are facing a winner-take-all problem when you’re trying to start off with personalization. And I would argue that most scenarios are not. Now, I may be wrong about that, but from my experiences, my experience, it’s really most problems are I’m gonna get a certain baseline return by just doing the all for one, like everyone gets the same experience. And I’m gonna improve my situation a bit if I try out two different scenarios, right? I have an A and a B. But at best, I’m just gonna double my return because I could just play both, right? I could just randomly assign people the A experience and the B experience. I’m gonna get half the return no matter what right so in the face of uncertainty when you don’t know anything about a problem actually playing random isn’t that it’s it’s a uniform head it’s just like you’re buying a portfolio anyway that’s what we’re doing we’re buying a portfolio. of experiences and I might want to try to reallocate my portfolio so that I’m only playing B sometimes and A other times. But in the worst case, just playing random, I’ll get half of the returns because I’ll get it right half of the time in most cases.
00:21:51.29 [Jim Cain]: I was hoping you were going to jump in there and do that any given Sunday speech. The length is a game of inches. What have been perfect
00:21:59.53 [Matt Gershoff]: It’s not too late. Yeah, but I don’t think it is a game of it. That’s the thing. I think if you’re a Google, it is a game of inches. But I think for most companies, if you’re playing this marginal game, inches doesn’t mean anything really. You’re just gonna get inches out of it, and it may not be worth anything to you. But if it’s a football game, the inches could be winning or not. Only one team wins when they walk off the field. And that’s not how most companies, I think, should look at this.
00:22:26.82 [Michael Helbling]: So here’s I want to throw a question out to all of us because I will kind of grab in and we can talk about lots of different ways, but product recommendations, that’s a pretty common use case sort of for the physical product world. So and to your point on kind of kind of winning in the margins, that’s kind of what we try to do there is cross sell or upsell at the point of making a decision about a product. So let’s say you get Lyft from doing that, which is, I think that’s a pretty reasonable assumption that usually happens in my experience. So let’s, okay, we’ll assume that you do for this scenario, would you expect that Lyft to be continuous or is there sort of a concept of a regression to the mean even in sort of this new semi-personalized experience based on your viewing patterns and habits and people like you who’ve also bought this.
00:23:17.21 [Matt Gershoff]: I don’t know if it’s a regression to the mean. I think the world we live in, though, isn’t IID, right? There’s drift. So we’re in a world in which things change over time. So our solutions are perishable in many cases. And I think that’s another key thing to be thinking about is the perishability of the problem. And it’s the type of thing that you want to ask before you start trying to solve a particular problem. How perishable is it? If it isn’t very perishable, then that’s the type of situation where in A-B tests, type of thing is going to be, approach is going to be effective.
00:23:56.16 [Tim Wilson]: Why would a product recommendations be perishable?
00:23:59.04 [Matt Gershoff]: Well, it could be that people have seen, let’s say it’s the movies and people start seeing lots of the films or there’s new films that are coming out. You’re competing against, let’s say your Netflix and Amazon. is also running its recommendation or that has their own products and so there’s a lot of things that are happening outside of your control in the world now it may very well be that or your user start to look different over time so just in general you have the you know kind of the world changes out from underneath you and so part of the management we’re talking about before where we need to to manage both our data and our experiences as well as the logic, that little program, which is the thing that maps users to experiences, that also needs to be kind of continuously updated and refreshed. And it really depends upon like the nature of the problem. It may very well be that in certain domains, whatever recommendation you have, that’s kind of fixed and that lives forever. I don’t really know. It really depends upon the context. My guess is that in general, especially in the online space, most things have a certain shelf life and over time they begin to degrade and so it needs to be refreshed. Now one of the things you might want to think about is I have a recommendation system and you say that it’s working well but how would you know and one of the things you might want to do this is to loop back to Jim’s approach and Tim with hypothesis testing is that in a way now what I could do is I could have two programs right I could have recommendation A and recommendation B and I could have control which is random assignment of products or objects. And then I might want to run an experiment. So that’s a place where we’ll be doing both. We both have some sort of algorithm, or we might want to learn the algorithm for making the assignment, but we might want to run a couple of them. And then in market, we might want to run an experiment and see which one has the greatest return. That’s really the only way to be sure that our recommendation engine is working well. We can create one offline based upon the data we’ve collected and we can build some sort of model that does well with certain metrics, certain loss functions and what have you. But we don’t really know until we push it live on how well it’s actually working. marginal efficacy is like what’s the value of this little program but that’s something where we can start using a b testing and i think that’s that’s kind of me and i think that that would be a great way to have people to start thinking beyond just presentation layer stuff and just running experiments for. You know what page people should see and in the future we’ll be expanding the reach of testing to cover our personalization and our targeting. It’s like which targeting is best.
00:26:43.10 [Michael Helbling]: Yeah, that’s a great A B test. We actually I’ve done that one. It was recommendations against what the emergence thought were the best. And the recommendation engine was very, very a lot better.
00:26:56.78 [Tim Wilson]: But are you?
00:26:57.66 [Michael Helbling]: Yeah.
00:26:58.30 [Tim Wilson]: So, I mean, take, take Adobe. I mean, Adobe was testing target and they became Adobe target. And I remember the light bulb going off for me where it was, Oh, no, when you do, when target being the assignment engine, right? I mean, that’s what. targeting is is an assignment engine and they for a while and it seemed like it was just for a couple of years there was the story of yes when you are changing your assignment engine you then should test it and I’m back to kind of the the simple math of the assignment problem that you have the variables on how much do you know about the person and So take your marketing website and there’s a crap ton of people that all you know is where they came from. And a big chunk of those were, don’t know, they appear to be direct. And then you have the variety of experiences that you can offer to them. And it just seems like the math of that problem of the number of combinations goes berserk really, really quickly. Right, when you say we are going to do a recommendation engine, what are we recommending on? Well, what are the products you viewed? Well, that gets complicated in a hurry. What else do we know about you? In many cases, not a whole lot. I mean, oh great, you’re using Firefox versus Chrome. Is that going to change the type of socks we want to recommend to you? Well, I don’t know. It might. So we need to consider it.
00:28:27.76 [Matt Gershoff]: I think, I mean, you have to be smart about thinking about the problem before you go and try to solve it. And if you think about where recommendation might, so recommendation is like a high cardinality problem. and by high cardinality I mean there are a lot of potential options that we can select from like it’s a big space that we need to be picking from and if you think about where that might be most useful it’s probably going to be useful in these very you know frequent low risk types of decisions right because the recommendation engine is just a small little bit of a bump to get someone to take some sort of action. So I imagine it’s the type of thing like listening to music, watching videos. It’s like things that are very low risk if you don’t like the thing and it’s the type of thing where you might have a lot of experience per user. If it’s, if you’re, Tim, if you’re, I would tell you right now, if you wanted to do a recommendation engine where you’ve like you’re thinking about trying to display all possible products and if you have like say hundreds of products and the only data that you have is you know IP based right user agent IP based it’s almost assuredly not going to be effective right you got to think about it like the recommendation engine is going to make sense for something where someone is signed in so well you could show coats to the Canadians Right. You could do high level classification and whatnot. But it’s limited to your point. Yeah. And that’d be like a marginal thing. And then that might just be a one of. So we have algorithms that do, it picks the best out of a set, but then we also have algorithms that are like the top five or 10 out of 50 or 100. It’s like, what’s the top list? And so we have a client, we have a European lottery that’s one of our clients. and they have us all on the server side and you have to be logged in, you actually have to be a citizen of this country to use their lottery and everyone is logged in and so they have a fair amount of data about the user and past behavior and they use us to help predict both what the minimum bid should be so should we suggest buying two tickets or ten tickets but also like what the cross sell game should be so They try to use us to try to predict for different types of users what the next three games they should suggest out of, say, 60 games. And that’s like a top three out of 60. And so depending upon how much data they have, they have a lot of data. They do over 100 million transactions a month. And so there’s a lot of data there, and so you can start to find the structure because you have a lot of transactions. You don’t have a lot of transactions. You’re going to be limited in how complex that little program can be. You’re going to need to have a simpler program. And that’s OK. But you just need to think about it beforehand. You can’t just be throwing stuff at the problem and just thinking that it’s going to be going to work magically. The analyst doesn’t advocate the responsibility of thinking about how to solve the problem.
00:31:32.14 [Tim Wilson]: just because you say the word machine learning, but it’s not magic that happens though right i mean that just to be i mean i’m in violent agreement with you that there’s there’s a little bit of wishful thinking that you do you throw machine learning at it and it’s going to it’s gonna magically do stuff that actually requires thought not just from the analyst but from the business as well. You know, we’re gonna personalize stuff. I’m like, okay, well, what are you, how many experiences you got? One?
00:31:58.44 [Matt Gershoff]: Well, that’s also, yeah, but that’s magical thinking from analytics, which totally fetishizes data collection. Like how many people are out there like collecting, the narrative is just there’s gold in the data. And no, there isn’t necessarily. The data is just an artifact. It’s a shadow. of your process and it may or may not be informative. And if your process is only doing one thing, the data that you collect probably is not going to be particularly informative about how you should improve the system, how you should alter it. That’s why people start doing testing is so that you can have another view on your process. And it doesn’t mean that it’s necessarily not useful But I think there’s a lot of magical thinking. And I think that’s partly why there’s frustration, because we’ve been told all we need to do is tag our systems and just collect as much data as possible without asking what’s the marginal value of this particular bit of data. And that’s something that we’ve been working on really hard for our new release, which is really about trying to ask when the system is building its models, we have a new algorithm which basically asks, what do I think is the value of having this bit of data in the model? Do I want to include it? Or do I just want to not look at it? Because in a way, you want to have it as parsimonious as possible. I want to have that program as short as possible. I want to have it as simple as possible.
00:33:23.06 [Jim Cain]: One of the things I’m digging about this particular podcast is that we were at one end of the spectrum last year with big data, and we were like, what do you do with lots of data? What should you collect? But it was kind of like techno weenie, like we weren’t sure where to take it. And then we had a testing discussion, which was very, very business rule, best practice, marketer focused. And here we’re kind of in the middle somewhere and i like how it’s it’s not we’re not having a big data conversation but we’re definitely talking about the network effective collecting the proper things properly to empower analysts and it’s not just like pulling the levers yeah well i agree i think that the big data is is your observational data that’s your
00:33:59.93 [Matt Gershoff]: your correlational data. That’s your sensor data. That’s the data that you passively collect about a process. And it does not inform you about the efficacy of a new marketing intervention. It’s not a causal data, but the The testing side is really about collecting. I always think of testing as data collection, but I’m collecting data about marketing interventions, causal data. So data where I can make a statement about this thing affects the marketing process. It’ll improve the marketing process or it’ll harm the marketing process. And it’s a separate type of data than what we collect, either big data or web analytics, whatever you want to call it, which is the sensor or our observational data. The assignment, the targeting logic, is the link. It’s the connection between those two types of data. And so that’s the whole kernel of this. I collect data, I collect the big data or the observational data, as I do now. I also collect my intervention data or my causal data. which is what we normally do when we talk about experimentation. And in the middle of that, I connect the two and I learn a model. That’s all the machine learning is really doing. And that’s where people start, you know, how I collect it might be randomly, like as I do with A.B. testing, or it might be like the Bandit. Like when people talk about multi-arm Bandits, that’s just another way of collecting data about our marketing interventions in a way that’s adaptive so that we start automatically applying the results that The interventions are the different marketing experiences that have a better return. It’s not none of its magic. It’s just different approaches of data collection and analysis and then using that analysis.
00:35:44.54 [Jim Cain]: I had to ask you this one. By the way, it was in my Ask Matt notes. I’ve been hearing for the last year about, I’m not gonna throw you under the bus, but who do you think’s gonna win the US election? I’m not gonna throw you under the bus, but.
00:35:57.05 [Matt Gershoff]: Who do I think is gonna win the, you gotta go, who do I want to win or who do I think is gonna win?
00:36:03.04 [Jim Cain]: Well, I was kidding, I care less, I’m in the right country. But my point was actually around.
00:36:07.42 [Matt Gershoff]: Oh, he says now that is true. Now, but like six months ago, come on.
00:36:14.37 [Jim Cain]: I was living in fear, but now we have a handsome man.
00:36:16.53 [Matt Gershoff]: We have a handsome man as well. There’s handsome men everywhere, but I think our next president will not be a man. How about that?
00:36:33.25 [Jim Cain]: Personalization based on weather and I’ve heard two or three of our customers talking about it as something that they heard from a consultant is a really good idea and I think it sounds conceptually badass and I could see in practice it being a very expensive. shit that didn’t work out real well, kind of initiative.
00:36:50.58 [Tim Wilson]: Have you ever done personalization based on- It goes to the business framing, right? I mean, what you were saying was if you’re selling outdoor equipment, if you’re selling bicycles, tying it to weather makes sense. If you’re selling- It depends.
00:37:06.42 [Matt Gershoff]: I mean, think about it. It makes sense in theory. And what’s the problem? So I will tell you a case where it did work, and then I won’t tell you a case where it didn’t work. But I think most of the times, so here’s a case where it worked. And this was, this is before the internet. That wasn’t before the internet, but it was right around the cusp of- Before how it ended the internet? It was just at the tail end of Gopher. So this was probably 96. Anyway, so this was when I was at Warner Men, going back old school. And they, the client was a big retailer. And at the time, they had up to 100 million US households, like everyone Head shop there at one time or another and this is a company that is out as a Midwest company And so they they had a ton of data the mainframe data and whatnot And so we built a model for them and they wanted to sell HVAC units right and it was like a two or three thousand dollar product. This was back in the mid 90s, so that was, I don’t know, worth a lot. And we were just going to do a look-alike model and target their customers. And one of the data points that they had was weather, was like the average weather temperature in the winter and the summer and whatnot. And so I took a look at that in the model, and that was a nightmare because that was back when data was super, memory was really expensive. And so this guy had put each variable into a nibble. So the first four bits of a byte were one variable, a range between 0 and 15. So two variables would sit in one byte of data. And so you had to go in and you had to use a bit string to take a look at the first four bytes and then map that back into a numeric 0 to 15. And then you had to look at the next four bytes and then look at that and that that was another variable. And what was even more nightmare is that since it was mainframe, it was all like Epsidic, it was PacZone, and whatever. And then you had to convert that into ASCII in the SAS. So even back then, we were using data in kind of a fairly advanced way. And yeah, it made a difference. Those bits of data about the weather patterns for targeting people for HVAC made a difference. if it was predictive. And then, you know, we also, what we would do is we would take a look at, you know, we would build the mail files. We would target some folks from the model. We would take a random selection and target them as well, just randomly to take, so we could measure the, the marginal effect of the, of the model. And then we would have a holdout group where they weren’t mailed at all. And then we would just look at their, their return. So we were always taking a look at to see whether or not our marketing activities had any return. And so you’d also you know and that’s one of the things that we were talking about before with a be testing your your model that’s what we would do back then and that’s a case where it did work but the weather we have a direct fee in our system you can like get local weather right there’s an API call API call out you can you can use the weather. It’s compelling in the sales because people are like, oh, that makes perfect sense because people can tell a story about, oh, it’s cold and whatever, and isn’t that exciting that we know about it? But what’s a real problem where it’s so dependent, it’s like a now thing? I think it would be more interesting to know. It’s going to be hard for me to sell surfboards to Jim in Ottawa, probably, as opposed to LA.
00:40:36.38 [Tim Wilson]: But your example was kind of average. And I drank once a week with a guy who repairs HVAC systems. And so we pretty much know that if we’ve had a ridiculous cold spell or a ridiculous hot spell, that when he rolls in on Thursday night, he is going to have had a rougher go of it. So if I am in the HVAC repair business, now that’s like the most, it is how do I offset the weather? And it goes back to your stating a business problem that yes, if I have marketing budget and know there are gonna be hot, hot spells and cold spells, then I should be prepared to heavy up my paid search spend in those, those periods. Now is that, I don’t know if that’s, if that falls in the testing. It’s not, which I think is to kind of Jim’s point, that’s a very specific. If I’m selling jackets, I don’t know that people run out in mass and say, oh, it got a little chilly, now I’m gonna buy my winter coat. If I’m selling Tupperware, you know, what the hell, who cares what the weather is like? But there is kind of that, it’s a spectrum. It goes back to your, can I articulate an idea of why weather would matter in an absolute term, Whether overall, generally you’re going to sell fewer surfboards to people in Ottawa than you are to people in LA. So great, don’t make a stupid business decision. How much do you need to test that? Then there’s the micro, hey, when the weather changes or when the stock market shifts heavily or when, what are these other things that move and prompt people to think about things differently? It goes back to saying, have I defined a problem where I have different content or different messaging or the ability to pull a different lever that is super timely that seems like it would actually relate to this external factor. And let me map that out and then let me try it. Let me test it and see if it actually works. Right. I mean, you could talk. Yeah.
00:42:37.89 [Matt Gershoff]: For sure. But the question was just a priori. What’s your thoughts? This is one of these how I interpret it. So, Tim, yeah, you’re right. That’s the way you would approach any of this. And but the question I think was is, hey, Matt, what’s what’s your take on just in general has like the weather API feed? Is that really? Have you seen that being super predictive of behavior? And my sense is that that is a narrative that you see often from testing providers or targeting providers because it sounds cool and it’s something where people can visualize it and tell themselves a story. I don’t know, maybe have a situation where that works.
00:43:16.93 [Tim Wilson]: But that goes for lots of things, right? I mean, there’s the, what’s the, you nailed it when you said earlier, you said what’s the, it’s great for a demo. It’s great for a story.
00:43:26.08 [Matt Gershoff]: Yeah, it’s a great demo. We’ve used it as a demo. And then it was like after a while, it’s like, I mean, I don’t know. You know, it could work. I think I would always be cautious about just think about it, you know, and you can try it, but if you’re going to have to spend a lot on like that one thing that someone’s selling, you know, I don’t know.
00:43:44.60 [Michael Helbling]: Well, like any sensor data, you have to figure out what it means to your system. Exactly. Well, this is outstanding. And I think a lot of food for thought here. And like most every episode of the show, I feel like we’re barely scratching at the little scratch on the surface of this whole thing.
00:44:04.48 [Tim Wilson]: But we have to save time for helping. We do have to wrap up. To gush towards the guests for another 12 minutes before we wrap.
00:44:11.75 [Michael Helbling]: Whoa, steady on. I was only gonna say mean things to our guests, thank you very much. So yeah, let’s wrap up and go around maybe one thing you thought was interesting or closing thoughts.
00:44:27.54 [Tim Wilson]: I’ll go with, I think there are a couple of sound bites in this. I love the framing it as an assignment problem, I think is a simple idea, but I don’t know that when we talk about testing or personalization that we think about it as we’re trying to map inputs to outputs. And I think the other sort of the back part of this, which we were just kind of starting to dig into that having a really clear business idea, which maybe it does go back to the drum Tim beats all the time of having a hypothesis. Have I mapped out some causal model? Do I have a good idea that I can now test with data as opposed to saying, let me just grab all the damn data and it’s going to give me ideas because it’s not going to give me ideas. And I think, Those, I think, were kind of reinforced, but maybe I’ve got some more vocabulary in my bucket.
00:45:21.84 [Jim Cain]: Tim, you called yourself Tim, by the way, earlier.
00:45:24.52 [Tim Wilson]: I did. I morphed into the third person.
00:45:29.04 [Jim Cain]: Tim’s comfortable with you calling yourself Tim. Tim is fine with that.
00:45:32.95 [Michael Helbling]: Bob Dole likes personalization.
00:45:36.05 [Tim Wilson]: I am done. Good night.
00:45:39.33 [Jim Cain]: So I’ll do my two cents. So the point I was trying to make with weather and it was a perfect follow-up is that personalization is really powerful and it lends itself. I get into this selling the business rule-driven side of personalization. When Matt you totally played it straight, is weather personalization cool? It’s conceptually cool. It’s friggin expensive. It’s probably not going to do a good job. It pitches like a dream. Personalization to some extent has been around in the marketplace for Almost as long as there’s been analytics and there are a lot of companies that overfunded it and didn’t win or thought it was way too hard and they couldn’t get into it and they never tried. And I think that something that really came out today were some some clear not even so much best practices but like. medium-sized shops can pick up a personalization practice and begin to do it in the same way that they can begin a testing practice, and it’s complementary. It doesn’t have to be major league ball. I think a couple of good first steps came through today. Don’t do weather unless you sell HVAC systems.
00:46:42.26 [Matt Gershoff]: What about you, Matt? I feel like I was droning on and doing most of the talking, so I thought I sounded awesome, though. I mean, that’s really what my take on it is. No, I just think the main thing is just to, I mean, even though we sell this type of thing, I know it’s hard to believe. Given what I just you know ramble on about that. Actually. This is like what we sell But the podcast you will not not point potential investors to is that no I’ll point I mean for me though It’s really important that people you know I started off as an analyst and I just think it’s important to to not be Lulled into thinking that something is gonna do it for you and these are just tools and to help you sort of instantiate smarter marketing systems and it’s not it’s not like you’re gonna lose your job or it’s not like it’s gonna do something magic for you it’s just a way of you implementing the systems that can make your. your marketing efforts work better. But it comes at a cost and you’re not going to be successful unless you think through the cost versus the benefits. Jim, I can’t agree with you. It’s great.
00:47:57.71 [Michael Helbling]: Yeah, I think for me there was a couple things that I really kind of took to heart in this. One is I really like the way that you broke down sort of the concept of the winner take all aspect versus at the margins. And I think it’s a distinction that very few companies and organizations kind of think through before kind of diving into this space. And a lot of times we get fooled into thinking we can be the winner take all model versus the win at the margins model, depending on what we do. The other thing that I thought was important and kind of ties into what you just said and also what Tim said is that, yeah, the role of the analyst is still paramount here in terms of having smart intuition and thoughtfulness about what has to happen and then bringing the data to bear into that appropriate system to create that personalized capability that will actually make a difference. So that’s my two. Well, I’m sure that as you have listened, maybe you have come up with a question or two, and we would love to hear from you. The easiest ways to do that are on our Facebook page or on Twitter. or on the measure slack. If you’re not on there, you should be on there. You can find out how to join that at the top of our Facebook page. Once again, Matt Gershov, thanks so much for taking the time out of your schedule to join us. I think this is a really good starting point. I’d love to see, as we all go forward, this conversation I think will continue to kind of evolve. And I’d love to I look forward to the opportunity to continue this either in this forum or in another lobby bar somewhere someday soon. So for my co-hosts, Tim Wilson and Jim Cain, keep analyzing.
00:49:42.84 [Announcer]: Thanks for listening and don’t forget to join the conversation on Facebook or Twitter. We welcome your comments and questions. Facebook.com forward slash analytics hour or at analytics hour on Twitter. But your LinkedIn profile is frankly a barren wasteland of lack of self-promotion.
00:50:16.33 [Michael Helbling]: That, yeah, it was a Glenn Fittich tasting.
00:50:18.91 [Matt Gershoff]: I believe it’s pronounced Glenn Fittich, but you know.
00:50:22.90 [Tim Wilson]: Really? You just serving up stuff for the outtakes that early on? Because hell yeah, we are. In answer to Jim Cain’s question that he has not asked yet, which is, are we recording?
00:50:35.14 [Jim Cain]: You’re not recording it, are you?
00:50:36.82 [Matt Gershoff]: Not what I will I do wait. Are we recording?
00:50:45.73 [Tim Wilson]: There’s a third one on the distillers adorable when it when someone a guy tries to try to pretend that he is the authoritative one
00:50:54.00 [Jim Cain]: I knew I’d like you, Matt Kershaw.
00:50:56.04 [Tim Wilson]: Who wanted this guy on the podcast? Who’s the fucking idealist?
00:51:00.89 [Jim Cain]: I’m wearing two sweaters and a two, like, and a scarf. I’m fucking freezing.
00:51:07.88 [Matt Gershoff]: Jim doesn’t listen to it either. No, none of us do. No one listens to this.
00:51:14.47 [Michael Helbling]: I’m the only one here who’s heard any of the episodes. It’s a classic, and I just love sounding self-important and special.
00:51:24.07 [Tim Wilson]: The guy who was explained, baby.
00:51:25.27 [Matt Gershoff]: Well, actually, I did, wait.
00:51:27.05 [Jim Cain]: Wait, are we recording? You’re my first… That’s Peter Shoff, born October 31st, known by his stage name, Vanilla Ice, an American rapper, actor, and television host. See? Cooler, maybe, with Peter Gabe. cooler maybe with Peter Gabriel? Well, the gloves have come off. I like Phil Collins, but I don’t think that’s unquestionable. He’s fine.
00:51:49.14 [Michael Helbling]: Alright, well this is the show right here. Alright. Who are your early influences?
00:51:55.73 [Tim Wilson]: Did not hit record fast enough!
00:51:59.39 [Michael Helbling]: I think so.
00:52:00.23 [Matt Gershoff]: This isn’t going to go well. I can feel it now.
00:52:04.18 [Michael Helbling]: No, no. You know what? We’ve done much worse shows than this already.
00:52:10.88 [Tim Wilson]: Rock flag and personalization.