#031: Personalization: The OTHER Faustian Bargain

We’ve got the technology. We’ve got the behavioral data. We’ve got the content (or at least we tell ourselves we do). We’re all set to develop personalized experiences that knock consumers’ socks off and leave them begging us to take their money. Is it really that simple? If it is, why aren’t more companies realizing the dream of 1-to-1 marketing? Matt Gershoff joins us to discuss how the pieces of the personalization puzzle often don’t quite fall into place like we wish they would. Matt’s also written a post that overlaps with our discussion: http://conductrics.com/complexity.

 

Episode Transcript

The following is a straight-up machine transcription. It has not been human-reviewed or human-corrected. We apologize on behalf of the machines for any text that winds up being incorrect, nonsensical, or offensive. We have asked the machine to do better, but it simply responds with, “I’m sorry, Dave. I’m afraid I can’t do that.”

[00:02:02] Hi everyone. Welcome to the Digital Analytics Power Hour.

[00:02:06] This is Episode 31. There is one thing that every digital marketer desires above all else: a knowledge of the customer so keen that every digital touchpoint is personal and relevant. Personalization. At its core, it’s a rational desire, but tonight we head down to the crossroads in an attempt to understand the mystery of personalization. Is it a stairway to digital marketing heaven, or a descent into madness and perdition? We were scared to go alone, so we’re bringing a guide. Our guest tonight is Matt Gershoff. He’s one of the co-founders of Conductrics. He’s been in the analytics industry since the mid-’90s. He started his career in database marketing at Wunderman — you might have heard of them — and he studied artificial intelligence in Scotland, where he discovered a deep and abiding love for Scotch. And despite that, we don’t find many people who talk about personalization like he does — and that would be with a lot of intelligence. Welcome, Matt. Thanks for having me. And of course, rounding out this crew are my other two hosts: the senior partner from Analytics Demystified, Tim Wilson, and the founder and CEO of Napkyn, Jim Cain.

[00:03:23] And of course, I am Michael Helbling, the analytics practice leader at Search Discovery. Of course — like, really, you think people are like, “oh yeah, I know who he is”? They know who I am.

[00:03:37] This topic is one that intersects with analytics in a way that is sometimes not easy for your regular digital analyst to consume, and I think it’s really good that you’ve deigned to join us, because I think you can be of a lot of assistance to our listeners — and maybe to us — on this journey. So I think it’d be great if you gave us a little bit of a rundown of how you got into this area, and maybe a little more about Conductrics. This podcast isn’t about the product, but it might help people understand where you and personalization first met up and how you became friends. Conductrics is a one-to-one marketing engine, correct?

[00:04:27] Yeah, sure. What would you want me to start with? What do you want to hear a little bit about?

[00:04:31] So for me I really got interested in this.

[00:04:33] I started, as you mentioned, in database marketing at a place called Wunderman, which was originally called Wunderman Cato Johnson. They were one of the early direct marketing companies — in fact, Lester Wunderman, who started the company, coined the term “direct marketing” — and it was really all about this idea of one-to-one marketing. So from very early on, we were involved with trying to target individual customers with the experiences that were most appropriate for them. Now, what was different back then was that rather than using the digital channel, which we mostly use now, the channel we used was predominantly mail. Not particularly glamorous, but it was still a channel in which the client could communicate and interact with the customer, and it had a direct response mechanism. So just like today, where we build digital experiences for our customers in the hopes that they take certain actions — some sort of call to action — it was essentially the same type of framework, except the channel was a lot slower (it was mail) and it was more of a discrete process than it is today.

[00:05:43] And I was always really interested in this idea. The job in database marketing was that we would take our clients’ data — their marketing databases — build models, analyze the data, and try to figure out who was most likely to respond to a particular offer, and maybe what the best offer should be. What was interesting to me is that I always wanted a system that embedded that type of logic directly, so it didn’t have to go through a human being — like me, the analyst building the model.

[00:06:17] What I thought was the panacea — what I thought was going to be the future — was the capacity for the marketing process to embed the intelligence to make the decision automatically, so that it could have a differentiated customer experience for different types of users.

[00:06:35] And so that’s really why I went back to graduate school for artificial intelligence, and it’s what we started trying to work on with Conductrics. Conductrics really is just sort of a thin decision layer that you can put anywhere within your marketing process.

[00:06:50] A lot of the technology that’s out there now — usually under the framework of A/B testing, and maybe personalization or targeting — is built around the browser. It’s usually about website optimization: tools for manipulating the web page, the presentation layer, as the experience for the user. So when the customer comes in, we’re usually trying to figure out, well, how do we make the website better, or what’s the offer that we display, or what’s the button, or what have you. What we wanted to do, which is a little bit different, is for our platform to be able to be embedded anywhere, in any type of marketing application. So our system is architected as a web service, which means you can use us on the web, but also in a mobile application, or in a call center, or really in any transactional marketing application. That’s useful for our clients, which tend to be large clients, and also for our partners — we have ecosystem partners that consume our platform directly, so they embed our technology into their systems. I can’t really say exactly what they’re doing, but we have ecosystem partners that use our logic and our machine learning technology directly within their applications. We can neither confirm nor deny that Cialis uses Conductrics for first-person A/B testing — it’s more that Conductrics uses Cialis.
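Since Matt describes Conductrics as a web service rather than a browser tool, here’s a rough sketch of what calling a server-side decision layer can look like from any channel — a web server, a mobile backend, a call-center app. The endpoint, parameters, and response shape are entirely hypothetical; this is not Conductrics’ actual API, just the general pattern of “ask for a decision, then report the outcome.”

```python
import json
import urllib.request

# Hypothetical decision service -- NOT Conductrics' actual API.
BASE = "https://decisions.example.com"

def decide(user_id: str, decision_point: str) -> str:
    """Ask the decision layer which experience to serve."""
    url = f"{BASE}/decide?user={user_id}&point={decision_point}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["experience"]

def reward(user_id: str, goal: str) -> None:
    """Report a conversion so the decision layer can learn."""
    urllib.request.urlopen(f"{BASE}/reward?user={user_id}&goal={goal}")

# The same two calls work from a web page, a mobile app, or a
# call-center script -- which is the point of the architecture.
```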

[00:08:27] So, to give some of my background in this: was it two years ago or so that you saw me talk at an eMetrics event, of all things? I talked a little bit about the opposite kind of personalization from what you empower — that hypothesis-driven, business-rule-driven, very manual approach to personalization.

[00:08:45] And then you and I chatted after, and I even came to New York to get the full walkthrough on your approach to it, and you’re literally sitting on the other side. Personalization is, to a large part, a very business-user, hypothesis-driven thing — like Tim’s whole routine of “I think this, and if they do that, I’ll do this” — and that’s how most people, including me, approach personalization. And you’re coming at it from literally the other side of the fence. Is that a fair statement?

[00:09:11] That’s interesting. I’m not sure. I think of personalization as really just the process of assigning users to experiences. Really — the real objective is that we want to assign an experience to a user that is in some sense best, and what “best” is going to be is a function of your organization. That’s sort of your KPIs, or it could be what’s best for your end user, or whatnot — that’s really something that’s going to be organizationally driven. But the whole idea of using data or not, or using editorial — you know, expert opinion — is in a way a separate issue. The main idea, the way I try to think of the problem, is as an assignment problem. So personalization is the assignment problem of assigning a customer to an experience. Now, how that’s done is kind of secondary, I think. And so there are a lot of different methods.

[00:10:10] One method is to use business logic — business rules. Right? And so it may very well be that the VIP customer who comes in needs to get a certain experience, or it may very well be that you have business policies in which you need to assign certain types of users a certain type of experience. So the point is that in order to implement this type of thing, we really have to start thinking about our marketing program as a larger process, and we have to start thinking about the inputs and the outputs. And how we get that logic is, as I said, either through business experience, or through predictive analytics or machine learning or what have you. I’m not sure I’ve answered your question, but I just think there are multiple ways to come up with that logic. It’s almost like we’re trying to come up with a targeting program, and we can either write that ourselves, through our business logic or our expert opinion, or we can have the machine write it, by using machine learning and predictive modeling.
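To make the “assignment problem” framing concrete, here’s a minimal sketch: personalization as a function from user attributes to an experience. The attribute names and rules are invented for illustration; the same interface could be backed by hand-written business rules (as below) or by a learned model.

```python
# Personalization as an assignment problem: a mapper from user
# attributes to an experience. Rules and attributes are invented.

def assign_experience(user: dict) -> str:
    if user.get("vip"):
        return "concierge_offer"     # business policy: VIPs get this
    if user.get("repeat_visitor"):
        return "loyalty_banner"
    return "default_homepage"        # the all-for-one fallback

print(assign_experience({"vip": False, "repeat_visitor": True}))
# -> loyalty_banner
```

Whether the body of `assign_experience` is written by a person or learned by a machine is, as Matt says, secondary — the shape of the problem is the same.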

[00:11:15] So what you’re saying is that we’re violently agreeing — it’s just that, theoretically, my approach could take six years and your approach could take six months, if you apply the machine learning appropriately.

[00:11:26] I think the approach could take a week.

[00:11:29] It’s really about stepping back and thinking about the scale of the decision problem. We shouldn’t think about it as, you know, a data problem, let’s say, but really a decision problem: how is our marketing process responding to our user? Is it a low-level type of decision — a transactional decision, like what does the web page show to the user, or what email do we send, or how does the call center respond, what’s the script? That’s a low-level, high-transaction type of decision, and it’s the type of thing most amenable to being embedded into the marketing application and having machine learning figure out what’s best — the more continuous, low-risk, high-volume types of decisions. If you’re talking about a more discrete, high-value type of decision, then that might be something where you’d want your analyst to be taking a look, and it’s the type of thing that might live on a longer time horizon. So you have different types of decisions that live in different time scopes, in different frames. I know that’s not particularly clear, but from AI there’s this notion of the taxi problem, and the taxi problem has lots of subtasks. The main one is to come to you and deliver you to where you need to go.

[00:12:58] But there are also subtasks, like trying to navigate the streets and what have you, and it can operate on two different levels. One is trying to not get into a car crash; at the higher level, it’s got this planning problem, which is: what’s the optimal way to pick up the passenger and deliver them to their destination?

[00:13:18] You have these problems that are scoped differently, that have different time frames.

[00:13:23] When you say it’s an assignment problem — assigning a user to an experience — at one end of the spectrum you only have one experience, which means the assignment problem is very simple, because you only have one option. At the other end of the spectrum, you have thousands of experiences that you’ve set up, and you have the ability to shuffle things around and say, “I’m going to try assigning this type of user to this type of experience,” and that feels like where machine learning comes into it. I feel like we run into the case, when it comes to personalization, that we can’t ignore that you have to have multiple experiences, and that organizations struggle to have even two experiences. If you have two experiences and an assignment problem, is it easier? I would think that if you tell the machine it has two experiences to choose from, that’s much easier than having the machine choose from ten experiences. Or am I framing that completely incorrectly? When you talk about an assignment problem of a person to an experience, the number of experiences is the thing that is often overlooked, because you have to create those experiences. It feels like, in the world of the machine, there’s this idea that “oh, no, it’s going to dynamically generate the experience,” and that’s really, really hard to do.

[00:14:50] Yes so yeah I totally agree.

[00:14:52] I don’t think, at a certain level, you even want to focus too much on the machine learning part of it. If you have multiple experiences — well, think of your marketing website, let’s say. Currently, for most applications, there is only one website, conceptually, and so whenever a given user is on a particular page, it’s the same experience, the same sort of state. So it’s as simple as possible. But then you start saying, well, it depends upon the user — so it’s going to be conditioned on some attribute of the user.

[00:15:27] So it’s no longer conditioned just on what page they’re on; it has to do with the user. Let’s say there are just two types of experiences. Then you are kind of doubling the complexity — the complexity of your system goes up. And so the more experiences you have, the greater the complexity of your overall marketing system. And that’s even if you don’t need to learn which customer gets which experience — let’s say that’s just a business rule, so repeat visitors get one thing and new visitors get something else. You still have increased the complexity of your system.
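A back-of-the-envelope way to see this complexity cost, with purely illustrative numbers: every user attribute you condition on multiplies the number of distinct states your site can be in, and every one of those states has to be created, QA’d, and managed.

```python
# Illustrative only: each binary attribute you condition on
# doubles the number of experiences per page.
pages = 20
binary_attributes = 3          # e.g., repeat visitor, VIP, mobile
experiences_per_page = 2 ** binary_attributes

print(pages * 1)                       # 20 states: one experience
print(pages * experiences_per_page)    # 160 states to manage
```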

[00:16:02] And so one of the things I wanted to talk about was this inherent tradeoff between the expected returns — the expected efficacy of targeting, because if you’re delivering a better experience for your users, you’re hopefully going to get a greater return — and the cost of complexity. You’re always going to be making your system more complex. It’s going to be more complex because now you need to manage all of these different experiences.

[00:16:32] It’s also going to be more complex because you need to take into account the inputs: you need to have data about your users at decision time, and that data needs to be managed, right? It needs to be accurate, it needs to be timely, and so you need to have a whole system managing your data — and just that side of it is difficult. You have greater complexity, greater management costs, for your content, your experiences. And it doesn’t need to be just presentation-layer stuff; it could be some backend business rule, or different pricing, or different sort algorithms, whatever — those different experiences also need to be managed. And now you have this new object which also needs to be managed, which is the decision logic, which you didn’t need before but now you do. You can almost think of it as a decision tree. You’ve got this structure, this data structure — almost like a program, and in fact it is a program. So you have this new program, this mapper, which takes as inputs data about your users, and its outputs are the results of the decision — the experiences: blue button, green button. And in fact, I think the reason people get excited about A/B testing is that it’s like the prototype of this. It’s the very beginning, the first step toward having your marketing application offer multiple experiences, even if they’re not conditioned on the user.

[00:17:55] Your site now can be in more than one state. In a way, you have this experience and that experience — it’s almost like you’re in the multiverse. You’re living in one world, and then an experiment lets you live in multiple worlds; you can see which world is doing the best, and then you can collapse all those other worlds and just live in the best one. With targeting, it’s different: you keep lots of these other universes, these other worlds, open, and you stay in them, because some users are going to do better in those experiences.

[00:18:27] And so you have this greater complexity that you need to manage.

[00:18:31] And I think that’s the reason why, even though people have been going on about one-to-one marketing for over 20 years, many of the people listening — and many of the companies they work for — don’t employ it: there’s this complexity, there’s this cost. In fact, I think it’s really the reason why people in analytics in general are often frustrated — “hey, how come no one is using the information I’m giving them?” It’s because the organizations they work in, and the systems that are used, are not designed to provide multiple experiences. That’s really the thing: you have a static system, and analytics is really just about collecting sensor data. Even though it’s difficult, and it’s useful, it’s passive — you’re just putting in sensors and collecting data. The hard part is what’s known as the control problem, which is trying to figure out what’s the best experience.

[00:19:23] But that’s where the meat is — or the Tofurky. Yeah, there are very few companies that have turned that corner effectively, right? There’s, like, a Netflix or an Amazon, who have websites that are completely tailored to the experience of that user — or, in Netflix’s case, me and my kids. Yeah. Or Google.

[00:19:45] Yeah, and if you think about it, there’s a reason for that — that’s actually a great insight. Companies whose product is essentially data tend to be more in a winner-take-all situation. Not exactly, but it’s more of a winner-take-all problem, where if you’re marginally better, you get a very large share of the market. That’s the Googles, or, you know, Netflix or Amazon or what have you. But for most companies selling a physical product or service — something that’s less of a digital product — it’s really more of a game at the margins. It’s a marginal improvement to go and start using targeting. It kind of improves things, but it’s probably not a winner-take-all environment. And implicit in most of the pitches for personalization is this notion that you’re facing a winner-take-all problem when you start off with personalization. I would argue that most scenarios are not. Now, I may be wrong about that, but in my experience, most problems are: I’m going to get a certain baseline return by just doing the all-for-one.

[00:21:02] Like, everyone gets the same experience, and I’m going to improve my situation a bit if I try out two different scenarios — I have an A and a B. But at best I’m just going to double my return, because I could just play both, right? I could just randomly assign people to experience A and experience B, and I’m going to get half the return no matter what. So in the face of uncertainty, when you don’t know anything about a problem, playing at random actually isn’t bad — it’s a uniform hedge, just like buying a portfolio. And in a way, that’s what we’re doing: we’re buying a portfolio of experiences, and I might want to reallocate my portfolio so that I’m playing B only some of the time. But in the worst case, playing at random, I’ll get half of the returns, because I’ll get it right half the time, in most cases.
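A quick worked version of the “uniform hedge” point, with invented conversion rates: assigning users to A and B at random earns the average of the two returns, which can never be less than half of what the best experience alone would have earned.

```python
# Invented conversion rates for two experiences.
r_a, r_b = 0.02, 0.04                    # B happens to be better

uniform_random = 0.5 * r_a + 0.5 * r_b   # the hedge: 0.03
best_possible = max(r_a, r_b)            # always-play-B: 0.04

print(uniform_random)                      # 0.03
print(uniform_random >= best_possible / 2) # True -- the worst case
```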

[00:21:51] I was hoping you could jump in there and do the Any Given Sunday speech — “it’s a game of inches” would have been perfect.

[00:21:59] It’s not too late. Yeah, but I don’t think it is a game of inches — that’s the thing. I think if you’re a Google, it is a game of inches.

[00:22:06] But I think for most companies, if you play this marginal game, inches don’t mean anything, really — you’re just going to get inches out of it, and it may not be worth anything to you. In a football game, the inches could mean winning or not.

[00:22:19] Only one team wins when they walk off the field, and that’s not how most companies, I think, should look at this.

[00:22:26] So here’s a question I’d throw to all of us, because we can all grab it and talk about it lots of different ways: product recommendations. That’s a pretty common use case for the physical-product world. And to your point on winning at the margins, that’s kind of what we try to do there — cross-sell or upsell at the point of making a decision about a product. So let’s say you get a lift from doing that, which I think is a pretty reasonable assumption; that usually happens, in my experience. OK, we’ll assume that you do, for this scenario. Would you expect that lift to be continuous, or is there a concept of a regression to the mean, even in this new semi-personalized experience based on your viewing patterns and habits and “people like you who’ve also bought this”?

[00:23:17] I don’t know if it’s a regression to the mean. I think the world we live in, though, isn’t IID, right? There’s drift. We’re in a world in which things change over time, so our solutions are perishable in many cases. And I think that’s another key thing to be thinking about — the perishability of the problem. It’s the type of thing you want to ask before you start trying to solve a particular problem: how perishable is it? If it isn’t very perishable, then that’s the type of situation where an A/B-test type of approach is going to be effective.

[00:23:56] Why would product recommendations be perishable?

[00:23:59] Well, it could be that people have seen it — let’s say it’s movies, and people start seeing lots of the films, or new films are coming out. You’re competing — let’s say you’re Netflix, and Amazon is also running its recommendations, or has its own products. So there are a lot of things happening outside of your control in the world. It may very well be that your users start to look different over time. Just in general, the world changes out from underneath you.

[00:24:32] And so, part of the management we were talking about before — where we need to manage both our data and our experiences — the logic, that little program that maps users to experiences, also needs to be continuously updated and refreshed. It really depends upon the nature of the problem. It may very well be that in certain domains, whatever recommendation you have is fixed and lives forever — I don’t really know; it depends upon the context. My guess is that, in general, especially in the online space, most things have a certain shelf life, and over time they begin to degrade and need to be refreshed. And one of the things you might want to think about is: I have a recommendation system, and you say it’s working well — how would you know? One of the things you might want to do — to loop back to Jim’s and Tim’s approach with hypothesis testing — is that now I can have multiple programs. I could have recommendation A, recommendation B, and I could have a control, which is random assignment of products or objects, and then I might want to run experiments. That’s a place where we’d be doing both: we might want to learn the algorithm for making the assignment, but we might want to run a couple of them, and then, in market, run an experiment and see which one has the greatest return.

[00:25:52] And that’s really the only way to be sure that our recommendation engine is working well. We can create one offline, based upon the data we’ve collected, and build some sort of model that does well on certain metrics, certain loss functions, and what have you. But we don’t really know how well it’s actually working until we push it live — what its marginal efficacy is, what’s the value of this little program. That’s somewhere we can start using A/B testing, and I think that’s kind of neat. I think that would be a great way to have people start thinking beyond just presentation-layer stuff and running experiments for what page people should see. In the future, we’ll be expanding the reach of testing to cover our personalization and our targeting — it’s “which targeting is best?”
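Here’s a sketch of the experiment Matt describes: treat each whole assignment program — recommender A, recommender B, and a random-assignment control — as an arm, randomize users across them, and compare in-market returns. The policies and the simulated outcomes are stand-ins.

```python
import random
from collections import defaultdict

# Stand-in policies; real ones would score a catalog per user.
def recommender_a(user): return "most_popular_item"
def recommender_b(user): return "collaborative_pick"
def random_control(user): return random.choice(["x", "y", "z"])

ARMS = [recommender_a, recommender_b, random_control]
tally = defaultdict(lambda: [0, 0])   # arm -> [conversions, trials]

def serve(user, converted):
    """Randomize users across whole policies, then record outcomes."""
    arm = random.choice(ARMS)
    arm(user)                          # deliver the experience
    tally[arm.__name__][0] += int(converted)
    tally[arm.__name__][1] += 1

# 'converted' would come from real user behavior; faked here.
for i in range(3000):
    serve(f"user_{i}", converted=random.random() < 0.05)

for name, (conv, n) in tally.items():
    print(name, round(conv / n, 3))    # compare each arm's return
```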

[00:26:43] Yeah, that’s a great A/B test. Actually, I’ve done that one. It was recommendations against what the merchants thought were the best, and the recommendation engine was a lot better.

[00:26:58] So, I mean, take Adobe — Adobe had Test&Target, and it became Adobe Target. And the light bulb going off for me was: oh, Target being the assignment engine — right, that’s what targeting is, the assignment engine. And for a while — it seemed like it was just for a couple of years — that was the story, yes.

[00:27:21] When you are changing your assignment engine, you then should test it. And I’m back to the simple math of the assignment problem: one of the variables is how much you know about the person. So take your marketing website — there’s a crap-ton of people where all you know is where they came from, and a big chunk of those you don’t know at all —

[00:27:47] they appear to be direct. And then you have the variety of experiences that you can offer to them.

[00:27:55] It just seems like the math of that problem — the number of combinations — goes berserk really, really quickly. When you say, “we’re going to do a recommendation,” well, what are we recommending on? What other products you viewed? Well, that gets complicated in a hurry. What else do we know about you? In many cases, not a whole lot. I mean, OK, great, you use Firefox versus Chrome — is that going to change the type of socks we want to recommend to you? Well, I don’t know, it might, so we need to consider it.

[00:28:27] No I don’t.

[00:28:28] I think you have to be smart about thinking about the problem before you go and try to solve it. A recommendation is a high-cardinality problem — and by high cardinality I mean there are a lot of potential options to select from; it’s a big space we need to be picking from. If you think about where that might be most useful, it’s probably going to be in these very frequent, low-risk types of decisions, right? Because the recommendation engine is just a small little bump to get someone to take some sort of action. So I imagine it’s the type of thing like listening to music or watching videos — things that are very low-risk if you don’t like the thing, and where you might have a lot of experiences per user. Tim, I would tell you right now: if you wanted to do a recommendation engine where you’re thinking about trying to display all possible products, and you have, say, hundreds of products, and the only data you have is, you know, IP-based, it’s almost assuredly not going to be effective. You’ve got to think about it — a recommendation engine is going to make sense for something where someone has signed in.

[00:29:48] So well you could show coats to the Canadians.

[00:29:52] Right. Right.

[00:29:53] Right — a high-level classification. Yeah, but it’s limited. Maybe it’ll be a marginal thing, and it might just be between coats — it might just be one thing, really. We have algorithms that, you know, pick the best out of a set, but we also have algorithms that pick the top five or ten out of 50 or 100 — it’s “what’s the top N?” So we have a client — a European lottery is one of our clients — and they have us all on the server side, and you have to be logged in; you actually have to be a citizen of the country to use their lottery. So everyone is logged in, and they have a fair amount of data about the user and past behavior. They use us to help predict both what the minimum bid suggestion should be — do we suggest buying two tickets or ten tickets — and also what the cross-sell games should be. They use us to try to predict, for different types of users, the next three games to suggest. They have, say, 60 games, so that’s a top 3 out of 60. And they have a lot of data — they do, you know, over 100 million transactions a month — so there’s a lot of data there, and you can start to find the structure, because you have a lot of transactions. If you don’t have a lot of transactions, you’re going to be limited in how complex that little program can be.
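A toy version of that top-N assignment — score all 60 games for a user and return the best three. The scoring function is a placeholder for whatever model the lottery’s transaction history would actually train.

```python
# Toy top-3-of-60 recommender; the scorer is a placeholder for a
# model learned from the lottery's transaction data.
GAMES = [f"game_{i}" for i in range(60)]

def score(user: dict, game: str) -> float:
    # Placeholder scoring; a real system uses a learned model.
    return hash((tuple(sorted(user.items())), game)) % 1000

def top_n(user: dict, n: int = 3) -> list:
    return sorted(GAMES, key=lambda g: score(user, g), reverse=True)[:n]

print(top_n({"past_games": 12, "avg_ticket": 4}))
```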

[00:31:14] You’re going to need to have a simpler program, and that’s OK. But you just need to think about it beforehand. You can’t just throw stuff at a problem and think it’s going to work magically. The analyst doesn’t get to abdicate the responsibility of thinking about how to solve the problem just because you say the words “machine learning.” But it’s not magic that just happens, though, right?

[00:31:36] I mean, I’m in violent agreement with you that there’s a little bit of wishful thinking — that you throw machine learning at it and it’s going to magically do stuff. It actually requires thought, not just from the analyst but from the business as well. You know: “let’s personalize stuff.” I’m like, OK, well — how many experiences have you got? One.

[00:31:58] Well, that’s also magical thinking from analytics, which totally fetishizes data collection. Like, how many people are out there just collecting? The narrative is that there’s gold in the data, and no, there isn’t necessarily. The data is just an artifact — it’s a shadow of your process — and it may or may not be informative. If your process is only doing one thing, the data that you collect probably is not going to be particularly informative about how you should improve the system, how you should alter it. That’s why people start doing testing: so you can have another view on your process. It doesn’t mean it’s necessarily not useful, but I think there’s a lot of magical thinking, and I think that’s partly why there’s frustration — because we’ve been told all we need to do is tag our systems and collect as much data as possible, without asking, “what’s the marginal value of this particular bit of data?” And that’s something we’ve been working on really hard for our new release: when the system is building its models, we have a new algorithm which basically asks, “what do I think is the value of having this bit of data in the model? Do I want to include it, or do I just want to not look at it?” Because in a way, you want it to be as parsimonious as possible. I want to have that program as short as possible, as simple as possible.
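Matt’s feature-selection algorithm is Conductrics’ own, but the general idea — ask whether a bit of data earns its place in the model — can be sketched generically: compare cross-validated performance with and without the candidate feature, and keep the model parsimonious if it doesn’t help.

```python
# Generic sketch of "what's the marginal value of this feature?"
# (not Conductrics' algorithm): compare held-out performance
# with and without the candidate input.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
signal = rng.normal(size=n)            # genuinely predictive input
candidate = rng.normal(size=n)         # candidate: pure noise here
y = (signal + rng.normal(size=n) > 0).astype(int)

for label, X in [("without", signal.reshape(-1, 1)),
                 ("with", np.column_stack([signal, candidate]))]:
    acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
    print(label, round(acc, 3))
# If "with" is no better, leave the feature out: parsimony.
```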

[00:33:23] One of the things I’m digging about this particular podcast is that we were at one end of the spectrum last year with big data — we were like, “what do you do with lots of data, what should you collect?” — but it was kind of techno-weenie; we weren’t sure where to take it. And then we had a testing discussion, which was very business-rule, best-practice, marketer-focused. And here we’re kind of in the middle somewhere. I like how we’re not having a big data conversation, but we’re definitely talking about the network effect of collecting the proper things properly, to empower analysts — not just pulling the levers.

[00:33:53] Yeah, well, I agree. I think the big data is your observational data — your correlational data, your sensor data. That’s the data you passively collect about a process, and it does not inform you about the efficacy of a new marketing intervention. It’s not causal data. The A/B testing side is really about collecting — I always think of testing as data collection, but I’m collecting data about marketing interventions: causal data, data where I can make a statement that this thing affects the marketing process, that it will improve it or harm it. And that’s a separate type of data from what we collect as big data or web analytics or whatever you want to call it, which is the sensor, observational data. The assignment — the targeting logic — is the link; it’s the connection between those two types of data. And that’s the whole kernel of this: I collect the big data, the observational data, as I do now; I also collect my intervention data, my causal data, which is what we normally talk about as experimentation; and in the middle, I connect the two and I learn a model. That’s what the machine learning is really doing.

[00:35:10] And how I collect it might be randomly, as I do with A/B testing, or it might be like the bandit. When people talk about multi-armed bandits, that’s just another way of collecting data about our marketing interventions, in a way that’s adaptive — so we automatically start applying more of the interventions, the marketing experiences, that have a better return. None of it’s magic; it’s just different approaches to data collection and analysis.
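For readers who haven’t met bandits, here’s a compact epsilon-greedy sketch of the adaptive data collection Matt describes: mostly serve the experience that looks best so far, but keep a slice of random exploration. The conversion rates are invented.

```python
import random

# Epsilon-greedy multi-armed bandit with invented ground truth.
TRUE_RATES = {"A": 0.02, "B": 0.04}
counts = {arm: 0 for arm in TRUE_RATES}
wins = {arm: 0 for arm in TRUE_RATES}
EPSILON = 0.1                    # fraction of traffic that explores

def choose() -> str:
    if random.random() < EPSILON or 0 in counts.values():
        return random.choice(list(TRUE_RATES))                 # explore
    return max(TRUE_RATES, key=lambda a: wins[a] / counts[a])  # exploit

for _ in range(10_000):
    arm = choose()
    counts[arm] += 1
    wins[arm] += random.random() < TRUE_RATES[arm]

print(counts)   # traffic drifts toward B, the better-returning arm
```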

[00:35:43] Let me ask you this one, by the way — it was in my “ask Matt” notes. I’ve been hearing about it for the last year — I’m not going down that path, but: who do you think is going to win the U.S. election?

[00:35:58] Do you mean who I want to win, or who I think is going to win?

[00:36:03] I was kidding — I could care less; I’m in the wrong country. But my point was — I’m actually on a roll, he says.

[00:36:07] He says now that is true wow.

[00:36:10] Now — like, six months ago I was living in fear, but now we’ve got a man behind the man. Well, we have a handsome man as well. So, handsome men everywhere. I think our next president will not be a man. That said — Carly.

[00:36:31] Exactly. Personalization based on weather — I’ve heard two or three of our customers talking about it as something a consultant told them is a really good idea. I think it sounds conceptually badass, and I could see it in practice being a very expensive “that didn’t work out real well” kind of initiative. Have you ever done it?

[00:36:51] It goes to the business framing, right? I mean, what you were saying was: if you’re selling outdoor equipment, if you’re selling bicycles, tying it to weather makes sense. If you’re selling something else, maybe not.

[00:37:06] I mean, think about it — in theory, what’s the problem? So I will tell you a case where it did work, and then I won’t tell you about a case where it didn’t work. But I think most of the time… So here’s a case where it worked. And this was — it wasn’t before the internet, but it was right around the cusp of the internet.

[00:37:28] It was just at the tail end of Gopher, so this is probably ’96. Anyway, this is when I was at Wunderman — going back old-school — and the client was a big retailer. At the time they had data on up to 100 million U.S. households; like, everyone had shopped there at one time or another. And it’s a Midwest company.

[00:37:52] And so they had a ton of data — mainframe data and whatnot. We built a model for them; they wanted to sell HVAC units.

[00:38:01] It was like a two-or-three-thousand-dollar product — and this is back in the mid-’90s, so that was, I don’t know, a boatload — and we were just going to do a lookalike model and target their customers. One of the data points they had was weather — like the average temperature in the winter, in the summer, and whatnot. So I took a look at that in the model, and that was a nightmare, because that was back when memory was really expensive. This guy had put each variable into a nibble: the first four bits of a byte were one variable, with a range between zero and 15.

[00:38:40] So two variables would sit in one byte of data.

[00:38:43] And so you had to go in and use a bit string to take a look at the first four bits and map that back into a numeric 0 to 15, and then look at the next four bits — that was another variable. What was even more of a nightmare is that, since it was mainframe, it was all EBCDIC — packed and zoned decimal and whatever — and you had to convert that into ASCII as well.
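For the curious, unpacking two nibble-packed variables from a single byte looks like this in modern terms — a reconstruction of the scheme Matt describes, with a made-up byte value.

```python
# Two 0-15 variables packed into one byte, as in the mainframe
# scheme described above. The byte value is made up.
packed = 0b1011_0011              # one byte, two variables

first_var = (packed >> 4) & 0x0F  # high nibble -> 11
second_var = packed & 0x0F        # low nibble  -> 3

print(first_var, second_var)      # 11 3
```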

[00:39:10] So even back then, we were using data in a fairly advanced way. And yeah, it made a difference: those bits of data about the weather patterns made a difference for targeting people for HVAC. It was predictive.

[00:39:26] And then, you know, what we would also do is take a look at how we built the mail files. We would target some folks from the model; we would take a random selection and target them as well, just randomly, so we could measure the marginal effect of the model. And then we would have a holdout group who weren’t mailed at all, and we would just look at their return. So we were always taking a look to see whether or not our marketing activities had any return. And that’s one of the things we were talking about before — A/B testing your model. That’s what we would do back then, and that’s a case where it did work.
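That mail-file design can be sketched as a simple three-cell split — model-selected targets, a randomly targeted cell to measure the model’s marginal lift, and a no-mail holdout to measure whether mailing helps at all. The proportions are illustrative.

```python
import random

def assign_cell(household_id: str) -> str:
    """Three-cell design from the HVAC story; splits are illustrative."""
    r = random.random()
    if r < 0.80:
        return "model_targeted"    # mailed; the model picks them
    if r < 0.90:
        return "random_mailed"     # mailed at random: model's lift
    return "holdout_no_mail"       # never mailed: value of mailing

print(assign_cell("hh_0001"))
# Comparing response rates across cells separates the model's
# effect from the effect of mailing at all.
```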

[00:40:03] But the weather — we have a direct feed in our system; you can get local weather, right? There’s an API call out, and you can use it. Weather is compelling in the sales process, because people go, “oh, that makes perfect sense” — they can tell a story about “it’s cold” and whatever, and isn’t it exciting that we know about it. But what’s a real problem where it’s so dependent — where it’s a “right now” thing? I think it’s more interesting to know that it’s going to be hard for me to sell surfboards to Jim in Ottawa, compared to, like, L.A.

[00:40:36] But your example was averages. And — I drink once a week with a guy who repairs HVAC systems, so we pretty much know that if there’s been a ridiculous cold spell or a ridiculous hot spell when he rolls in on Thursday night, he’s going to have had a rougher go of it. So if I’m in the HVAC repair business, that’s like the most obvious tie to the weather. And it goes back to framing a business problem: yes, if I have marketing budget, and I know there are going to be hot spells and cold spells, then I should be prepared to heavy up my paid search spend in those periods. Now, does that fall into testing? Not exactly — to get to Jim’s point, that’s a very specific thing. If I’m selling jackets, I don’t know that people will run out en masse and say, “oh, it got a little chilly, now I need my winter coat.” If I’m selling Tupperware, you know, what the hell — who cares what the weather’s like? But there is kind of a spectrum, and it goes back to: can I articulate an idea of why weather would matter? In absolute terms — weather overall — generally you’re going to sell fewer surfboards to people in Ottawa than to people in L.A., so great, don’t make a stupid business decision; how much do you need to test that? Then there’s the micro:

[00:42:05] hey, when the weather changes, or when the stock market shifts heavily — what are these other things that move and prompt people to think about things differently? It goes back to finding a problem where I have different content, or different messaging, or the ability to pull a different lever that is super timely, that seems like it would actually relate to this external factor. Let me map that out, and then let me try it — let me test it and see if it actually works. Right?

[00:42:36] I mean, yeah, sure. But the question was just: a priori, what are your thoughts? That’s how I interpreted it. So, Tim, yeah, you’re right — that’s the way you’d approach any of this. And the question, I think, was, “Hey, Matt, what’s your take in general — like, the weather API feed, have you seen that being super predictive of behavior?” And my sense is that that is a narrative you see often from testing-provider partners, because it sounds cool and it’s something where people can visualize it and tell themselves a story. I don’t know.

[00:43:15] Maybe you have a situation where that works, but that goes for lots of things, right? I mean, you nailed it when you said earlier: it’s great for a demo.

[00:43:24] It’s great — we could have used it in a demo. And then, after a while, it was like… I mean, I don’t know. You know, it could —

[00:43:33] It could work.

[00:43:34] I would always be cautious — just think about it. You know, you can try. You can try. But don’t go spend a lot on, like, that one thing that someone’s selling… you know, I don’t know.

[00:43:44] Well, like any sensor data, you have to figure out what it means to your system. Exactly.

[00:43:51] You know, this is outstanding — a lot of food for thought here. And like most every episode of the show, I feel like we’re barely making a little scratch on the surface of this whole thing.

[00:44:04] But we do have to save time for the wrap-up. Do we have to gush at the guest for another 12 minutes before we run? Oh, steady on — I was only going to say mean things to our guest, thank you very much.

[00:44:21] So, yeah, let’s wrap up with a go-around: maybe one thing you thought was interesting, or closing thoughts.

[00:44:28] I’ll go. I think there are a couple of sound bites in this. I love the framing that it is an assignment problem. It’s a simple idea, but I don’t know that, when we talk about testing or personalization, we think about it as trying to map inputs to outputs. And the other part of this, which we were just starting to dig into, is having a really clear business idea — which maybe goes back to the drum I beat all the time about having a hypothesis: I’ve mapped out some causal model, I have a good idea that I can now test with data, as opposed to saying, “let me just grab all the damn data and it’s going to give me ideas” — because it’s not going to give me ideas. I think those got reinforced, but maybe now I’ve got some more vocabulary in my bucket.

[00:45:21] Oh, OK. Tim — you called yourself Tim, by the way, earlier, as you spoke.

[00:45:25] I did. I am.

[00:45:26] I’ve morphed into the third person. If you’re comfortable calling yourself Tim, that’s fine. Bob Dole — Bob Dole likes personalization. I’m done. Good night.

[00:45:39] So, I’ll do my two cents. The point I was trying to make with weather — and it was a perfect follow-up — is that personalization is really powerful, and it lends itself to this. Like, I get into selling the business-rule-driven side of personalization, you know, and Matt totally played it straight: weather personalization is cool — it’s conceptually cool — it’s friggin’ expensive, and it may not do a good job, but it pitches like a dream. And personalization, to some extent, has been around in the marketplace for almost as long as there has been analytics, and there are a lot of companies that overfunded it and didn’t win, or thought it was way too hard and never tried. I think something that really came out today were some clear — not even so much best practices, but — ways that medium-sized shops can pick up a personalization practice and begin to do it, in the same way they can begin a testing practice. It’s complementary; it doesn’t have to be major-league ball. And, you know, I think a couple of good first steps came through today. And don’t do weather unless you sell HVAC systems.

[00:46:42] What about you, Matt? I feel like I was droning on and doing most of the talking — I thought I sounded awesome, though.

[00:46:53] I mean — do you really want my take on it?

[00:46:56] No, I just think the main thing is — I mean, even though we sell this type of thing, and I know it’s hard to believe given what I just said — you know, I am not about that. Actually, this is literally what we sell, but I guess we will not point potential investors to this, no.

[00:47:12] I mean, for me, though, it’s really important that people know I started off as an analyst, and I just think it’s important to not be lulled into thinking that something is going to do it for you.

[00:47:26] These are just tools to help you instantiate smarter marketing systems. It’s not like you’re going to lose your job, and it’s not like it’s going to do something magic for you. It’s just a way of implementing systems that can make your marketing efforts work better. But it comes at a cost, and you’re not going to be successful unless you think through the costs versus the benefits. Jim Cain agrees with you. It’s great.

[00:47:57] Yeah, I think for me there are a couple things that I really took to heart. One is that I really like the way you broke down the concept of winner-take-all versus winning at the margins. I think it’s a distinction that very few companies and organizations think through before diving into this space, and a lot of times we get fooled into thinking we’re in the winner-take-all model versus the win-at-the-margins model, depending on what we do. The other thing that I thought was important — and it ties into what you just said, and also what Tim said — is that the role of the analyst is still paramount here, in terms of having smart intuition and thoughtfulness about what has to happen, and then bringing the data to bear in the appropriate system to create that personalized capability that will actually make a difference. That’s my two cents. I’m sure that as you’ve listened, maybe you’ve come up with a question or two, and we would love to hear from you. The easiest ways to do that are on our Facebook page, or on Twitter, or on the Measure Slack — if you’re not on there, you should be; you can find out how to join at the top of our Facebook page. Once again, Matt Gershoff, thanks so much for taking the time out of your schedule to join us. I think this is a really good starting point, and as we all go forward, I think this conversation will continue to evolve.

[00:49:26] I look forward to the opportunity to continue this, either in this forum or in another lobby bar somewhere, someday soon. So, for my co-hosts Tim Wilson and Jim Cain: keep analyzing.

[00:49:42] Thanks for listening, and don’t forget to join the conversation on Facebook and Twitter. We welcome your comments and questions — facebook.com/analyticshour, or on Twitter.

[00:49:57] In. We’ve made up a little. Bit of. The. Word.

[00:50:05] Touchscreen. Happening right now in the future.

[00:50:10] Your LinkedIn profile is, frankly, a barren wasteland of lack of self-promotion. That was a Glenfiddich thing — I believe it’s pronounced “Kwam.” But, you know, we just serve up some of the outtakes early on, because we have an answer to a question that I haven’t asked yet. We recorded — you’re not recording, are you?

[00:50:37] Are not. I will be recording there’s.

[00:50:45] A third one on the distiller’s is horrible when it wins when a guy tries to pretend he is the authoritative one. I knew I’d like to have Kershaw wanted this guy on the line. I’m wearing the letters and if you like a scarf talking breathing. Foley. Jim doesn’t listen either. No not. Listen to this. I’m the only one that is a classic and I just loved sounding self-important and spiteful. Guy who was the way I did. According to.

[00:51:29] My firstborn — born October 31st, known by his stage name Vanilla Ice, an American rapper, actor, and television host. Stay cool — or maybe, with your game, cooler. Maybe with Peter Gabriel. Well, if the gloves come off, like Phil Collins — I don’t think that’s questionable. Fun writing this show.

[00:51:52] Who are your early influences. I think so. We’ll get to that God. Isn’t that going to go on. I can I can feel good. No no you know what we’ve done much worse shows from this already. Pop.

Rock, flag, and personalization!

 
