It’s a podcast episode. That’s WHAT it is. But… WHY should you listen to it? Exactly. Or, perhaps, that’s exactly WHY! Are you confused? You won’t be after checking out our discussion with Jenni Bruckman about the vast and varied world of qualitative research and how it is the perfect partner to quantitative data. Give it a listen, and then let us know WHY you did and WHAT you thought of it!
0:00:05.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language. Here are your hosts: Moe, Michael and Tim.
0:00:21.8 Michael Helbling: Hey there, it’s the Analytics Power Hour, and this is Episode 181. Just picture it. There I was, standing in front of a room full of usability professionals on World Usability Day. I was trying to explain why we should be friends, and that was more than 10 years ago. And actually, the synergy between professionals who leverage qualitative data in the fields it supports, and quantitative data, is still really strong. Sometimes I think it could feel a little daunting to dive into the world of qualitative data, especially for us more quantitative data types. There’s a lot of similarities, but it also feels different. I don’t know. Moe, do you have any surveys in flight right now?
0:01:06.9 Moe Kiss: I am always running surveys. I actually sent a survey two days ago, I just forgot about…
0:01:10.1 MH: Really?
0:01:10.9 MK: I just remembered.
0:01:13.1 MH: Oh my goodness.
0:01:14.4 MK: Big survey fan.
0:01:16.5 MH: Awesome. What about you, Tim? You ever sit there and facilitate some usability sessions?
0:01:24.3 Tim Wilson: I too like to field surveys, and then I just collect the data, do nothing with it, and forget that I have done so.
0:01:30.7 MH: Oh. Well, that’s good, ’cause actually that’s… We can talk about that. I feel like a lot of companies do exactly that. Well, we needed someone who could bring it together for us, an expert, somebody with a little more time in the field and experience on the ground. So, we found Jenni Bruckman, she’s the Vice President of Customer Success & Strategic Partnerships at WEVO. Prior to that, she spent time in optimization leadership roles at Accenture, Clearhead, Blue Acorn, and Brooks Bell, and today she is our guest. Welcome to the show, Jenni.
0:02:05.6 Jenni Bruckman: Thank you, thanks for having me.
0:02:07.9 MH: Yeah, it’s great. Well, why don’t you do something to help us all kick it off? ‘Cause I don’t think many of us know what WEVO is, and also give us a little background about you, ’cause you’ve obviously been in this space for quite some time and done a lot of really cool work, but it’d be good to hear a little more about your background.
0:02:25.2 JB: Yeah. I feel like all of our paths were inevitably going to cross at some point. As I told you off-air, this is my career peak right here, I’ve reached the pinnacle.
0:02:33.3 MK: Aww. [chuckle]
0:02:34.7 JB: So, yeah, it’s such a small world, I think, in digital optimization especially, and so I’ve worked in that agency optimization, the A/B testing side, for the past decade plus. If I start to get more specific than that, I feel old, so I don’t. And really just focusing on helping different orgs, different brands, B2B, B2C, stand up and really expand and enhance their A/B testing and optimization programs, sometimes dabbling in personalization, but mostly on the testing front. Got out of it for a little while and went more toward the implementation side, and quickly realized I missed measurement, I missed being able to prove that things worked or didn’t, and so jumped back in with Clearhead and Accenture a few years ago, and in the past couple of years, stumbled upon WEVO as a new tool in the kit, and did something I’d never done.
0:03:24.6 JB: I loved it so much from both the qualitative and quantitative standpoint, it felt like a breath of fresh air that I’d been waiting to see in the industry for so long. And so I knocked on their door and I said, “Hey, I love this team, I love this product, I’d love to come build out a partner program and help other agencies do what we’ve done together and really start to expand that effort and that benefit.” That’s the fun job I get to do today is just have conversations like this and figure out how we layer all those really cool things together.
0:03:54.3 MH: Awesome.
0:03:54.4 TW: The same way that Oracle grows their team, people just love the product and think…
0:03:58.2 MK: Oh geez.
0:04:03.7 JB: They want your Thanksgiving dinner invitation thing, right? That’s…
0:04:12.7 MH: Alright. So, let’s jump into it. And so I think maybe it might be a good idea to start just around maybe defining what we’re talking about in terms of qualitative data, and maybe layering in, or taking away some of the confusion around it.
0:04:29.9 JB: Yeah, for sure. So I came heavy from the quantitative side, I worked with people who were way smarter than me to do all the number crunching and brilliant data science-y things that you all do, and that your listeners do on a regular basis, and so I came from that idea of a cross-functional pod, primarily being there to drive forward experience optimization. You’ve got your UX design, your development team, your project management, your strategic direction, and then you’ve got your analytics. And what was always this missing piece on that wheel was being able to answer the why. We all would hypothesize about the why, we would all really drill into, “Here are all the different roadmap problems that we’ve identified, let’s build it based on what we think we need to pull this lever first or second.” And the challenge became, we would always have to pull in these different varied qualitative and quantitative combinations to really figure out if our why was correct, and prove that hypothesis.
0:05:26.7 JB: And so, I really saw this big opportunity cost of a lot of time, the iteration to figure out if we were pulling the right levers and solving for the right problems, or if we needed to go in a different direction. And the benefit of qualitative to me was answering that why, and so the challenge that I saw over and over again was that qualitative is typically very small sample size, and to do it well, it’s typically very labor and expertise-intensive, and that’s very hard to scale. So I could extrapolate on that for ages, of how I see the similar trend line that I think A/B testing saw, we’re gonna see that demand for UX research. We’re seeing it now, and we’ll see it in the near future. But really being able to scale that in a way that makes insights more accessible across those cross-functional teams is really where I think the magic happens.
0:06:13.8 MK: I don’t wanna even tell you how many job ads there are for qual researchers at the moment at Canva, for example, like…
0:06:22.1 MK: But it is one of those things, a company reaches a certain point of scale in its size and data evolution, and then suddenly something’s missing. And I have to be honest, when I think of qual research, I typically do explain it as like, This is gonna help us answer the why. And as you were saying that, I’m like, Does it always, or is… Or does it only help us understand the why if it’s done really well?
0:06:52.1 JB: So, 1000% agree. And that’s what we always struggled with is the scalability of answering the why, to do it well, to do it responsibly was the challenge. And frankly, that’s where I fell in love with WEVO as a platform, is figuring out, How do we… There has to be a way to standardize this, there has to be a way to say, “We’ve iterated and proven this ability to ask the right questions in the right way to the right audiences,” but then do that at a much greater quantitative scale. It shouldn’t be about six people’s decision. Sometimes there are use cases where that’s the better fit. You really need to observe the error that they’re hitting in a password reset flow or something like that, and it’s very specific to an individual user. But for the most part, the why needs to be reliably viewed across a greater volume of people. And so to your point, it’s exactly what I agree with, is it has to be responsibly collected and not just anecdotal in order to really be a reliable driver for the impact of the roadmap.
0:07:52.4 TW: How consistently defined is qualitative? I feel like the qualitative gets used… And sometimes there’s a, “Oh, behavioral is quantitative, attitudinal is qualitative.” Or if it’s objective, it’s quantitative, if it’s subjective, it’s qualitative. And I actually get… I’m not clear if there… Is there a hard black line that defines whether something is… If you ask somebody… If it’s a Likert scale on a survey, they’re providing a hard number that we can perform math on, but it’s an attitudinal, subjective measure. Is there a definition or is it kind of a squishy threshold as to whether something is quantitative or qualitative, or is it… Depends on what you do with it, you can have the same piece of data, it could be both?
0:08:42.9 JB: Yeah. I think it really comes down to sample size. So you take a lot of that squishy-ness out when you collect that attitudinal data in a quantifiable way, when you collect it at a great enough scale that you can see, Here are the real trends of truth, and here’s the noise. And I think there are a variety of different methodologies, and they all kind of have a strength for different use cases. Card sorting and tree testing are enormously valuable in early prototyping and kind of idea generation of, How are we architecting this experience, and are we building it the right way? Moderated video-based testing is gonna be more powerful if, like I said, you really need to observe that specific point of friction. But then that’s the minority of the volume of feedback that you need. The majority of the volume of feedback that you need is… My story was, I kept running A/B tests, and how often do we all run into a test idea that’s like, “That should have won.”
0:09:40.6 JB: And you have the best data scientists and the best analysts and everybody. There’s truth in the quants. Everybody knows and trusts the data. You got stat sig, you know that it didn’t win, or it was flat and it shouldn’t have been. And you don’t have the luxury of running it forever, so then you want to understand why, and you can run a big enough test of qual to get that truth versus noise, that real signal versus noise. And then you suddenly understand, and it’s like that lightbulb moment goes off. So I think it’s about sample size, to your question, Tim.
0:10:10.1 TW: So, taking it to an extreme, though, if I had 100,000 open-ended responses to a survey, I guess I would have enough text data I could… That would still be… Because I have a large enough sample. Is that…
0:10:25.4 JB: Yeah. There are ways to organize it in a thoughtful way. You start to identify some themes, and then you start to see which themes of data are statistically significant. All of that is the really heavy lifting that takes teams so long. And I think the teams that are doing qual and quant really well are those that have those super experts, like a UX researcher embedded in a center of excellence, the same way that we saw that trend in optimization with data science and analytics early on. But then they have a toolset that makes insights more democratize-able across a team that you know and trust that the way the insight… The way the qual is being collected is a reliable method, and it’s not just left to the luxury of whoever’s launching that test to kinda butcher, “Well, it should be a Likert. Actually, it should have been a multiple choice and have an open-ended qual to follow it.” It’s creating it the first time really well, and then scaling that in a responsible way to get that mass of data of understanding that why.
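[Editor's note: a minimal sketch of the mechanics Jenni describes here — tagging open-ended responses into themes, counting them, and checking whether a theme's prevalence differs significantly between two groups. The keyword-to-theme map and function names are illustrative assumptions, not WEVO's method; a real study would use a proper codebook or an NLP model rather than keyword matching.]

```python
import math
from collections import Counter

# Hypothetical keyword -> theme codebook for illustration only.
THEMES = {
    "price": "pricing", "expensive": "pricing",
    "slow": "performance", "loading": "performance",
    "confusing": "navigation", "menu": "navigation",
}

def tag_themes(response: str) -> set:
    """Return the set of themes whose keywords appear in one open-ended response."""
    words = response.lower().split()
    return {theme for kw, theme in THEMES.items() if kw in words}

def theme_counts(responses) -> Counter:
    """Count how many responses mention each theme."""
    counts = Counter()
    for r in responses:
        for t in tag_themes(r):
            counts[t] += 1
    return counts

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int):
    """Two-sided z-test: does a theme's prevalence differ between two segments?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, if a "pricing" theme shows up in 30 of 100 responses from one segment but only 10 of 100 from another, `two_proportion_z(30, 100, 10, 100)` yields a z around 3.5 — signal rather than noise, in Jenni's terms.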
0:11:23.5 MK: I am suddenly having this experience of feeling like I am so out of my depth. You know that feeling of like, I know qual research? This is something that I’m always advocating for, I’m a big believer in, and suddenly I’m like, “Oh shit.” I really need to know a lot more than I know. You even mentioned a couple terms that… There was something about trees and cards, and I was like, What is she talking about?
0:12:00.0 JB: And to be fair, I know your listeners will probably call bullshit on a lot of the depth if we kept talking about it. That’s where you know… I know, as a responsible practitioner, to go to the subject matter expert who can go 30 miles deep in the right directions and say, “This is the right methodology for the right thing we’re trying to uncover or understand.” But I think the missing piece is, that doesn’t scale. And that’s the demand problem that the market has seen and will continue to see. And I don’t think we got ahead of the understanding that we were gonna need that. 10 years ago, people weren’t majoring in UX research. It wasn’t as popular and critical as it is now. And I think that scale is just gonna take off with a trajectory that we can’t keep up with, and so there have to be solutions where every team member can access insights that the experts trust, and then it really takes off and you can answer that why in a much more powerful way.
0:12:55.0 MH: Yeah. And Moe, you’re not alone, ’cause I do the same exact thing where I sort of… I’ve read Don’t Make Me Think by Steve Krug, both versions, and so you sort of go, “Hey, I sort of know what I’m talk… ” And then you get it, you’re like, “Okay, this needs someone to take it very seriously. We need an expert for this.” Yeah. So, not alone at all.
0:13:15.5 TW: But is it… When we had Els Aerts on… ’cause I kinda think of… Going back, we had Monica and Anthony talking about customer journey mapping, and both of those, I think, fall into… There’s a degree of… It’s heavily, heavily reliant on qualitative, but does it come down to… ’cause I remember in both cases, it was, “You do this poorly, and you can really screw yourself up.” And the extreme example was when you get some mass survey, and the survey has a busted… The survey wasn’t even tested. Surveys will go out that are poorly done… There are ones where you’re like, “This is broken.” There are ones where a survey research expert would say, “This is a totally biased question.” And then you’ve got things like usability testing or card sorting, where it does rely on the skill of the facilitator. Not to diminish the need for skill, but does the skill become more critical as your sample size gets smaller? Is there a relationship there?
0:14:31.9 JB: There could be a relationship there. I think it’s… If you have a poorly developed test, if you’ve got a large sample size, it’s still crap data. So you have to have a framework that you trust. It has to be created right the first time, in the way in which you’re asking the questions and gathering the feedback. You have to ask from multiple lenses. It’s… A really good qual isn’t just about uncovering the way people like the design of the page or the experience, it’s about uncovering, Do they also believe in the brand, and do they also believe in the product? And all these layers are part of one holistic understanding for the customer. And so that should be reflected. And that’s where the bias can come in, is if you irresponsibly and often unintentionally introduce bias in the way you’re gathering that data, that’s the damage of a smaller sample size is that bias and that anecdotal risk is there, if you’re not a UX research expert. But the UX research experts can’t solve for the asks that they’re getting at the volume that they are. They need to be embedded across every team, and answering why questions across so many people. So it’s important that they have a tool that they can empower.
0:15:41.4 MK: And their projects also typically take longer. Sometimes their work… It’s not a type of work that typically is done super fast. And that’s not to discredit UX researchers, I think they are completely phenomenal.
0:16:00.9 JB: It’s a compliment to UX research.
0:16:02.0 MK: Yeah, it’s… Yeah.
0:16:03.8 JB: Yeah. Their job is hard, exactly. And it takes that time.
0:16:07.4 MK: But what do you think the business needs to understand? Because the thing… I feel like I’ve watched this at multiple companies now where you kind of go through this evolution, and random people are just basically trying to do their own qual research, writing their own survey questions, whatever it is, doing their own user research interviews, the whole barrel of mistakes. And then along… Typically, it’s a UX-er who actually is kind of like, “I’ve gotta at least steer you in the right direction, ’cause I have a little bit of experience in this,” and then it kind of evolves to eventually like, “Oh, this is a skillset we need.” What do companies need to understand, I guess, to help them get to that stage quicker, and not have old mate from marketing trying to do this work?
0:17:00.7 JB: Yeah. I think as soon as you shortcut it at all, you lose credibility in the process. And then it sets you back that much further. Then you’re just getting the why, but when you present the why that you think you found, you’re getting pushback. And that’s 10 times worse than just being asked why and not having the resources to responsibly source it. And so I think the best models are those that… It’s just an awareness or top-of-mind thing. Once we became aware that you could test and iterate, that became the path forward. And now, as soon as you understand that you need quant and qual, it’s impossible to separate them. And there are ways to do that at scale across democratized teams, while you have these super… I think of it as the mile wide. You need a lot of resource to be able to go a mile wide. But then the UX researchers need to go a mile or 30 miles deep, and they wanna know that they’re going a mile or 30 miles deep in the right direction. I had a client recently describe it as, “I wanna know that I’m putting the ladder against the right house before I start to paint, ’cause it takes me a long time to paint the house.” And that’s…
0:18:11.3 MK: Oh, that’s a good analogy.
0:18:13.1 JB: Yeah. It’s just really applicable for all of the teams involved. There’s so much time and energy and effort, you wanna know that you’re going in a validated direction before you make that investment.
0:18:25.2 MH: I was gonna say, a lot of times in sort of digital analytics, we’re used to sort of just answering whatever questions the business comes up with, and we grouse a little bit about how they might not be really great quality questions. It seems to me that because of the need to really set up experiments and design them carefully, you have to be really careful about what questions you try to get answers to, because they need to sort of have a life cycle that lasts longer than the research itself. And we do this in analytics all the time. By the time you do the analysis, you come back with the answer, nobody cares anymore, because it wasn’t actually that meaningful. Shout out to all my overlooked analysts out there. But how do you work with businesses and stakeholders to kind of help them think bigger picture, I guess, or ask better questions?
0:19:21.0 JB: I think the key… If people listening walk away with nothing else, I think the best question you can start to instill is to get people to ask and for you to begin to ask more often, What’s the insight and data that drove this priority? And if you can’t answer that, it’s the wrong priority to take out of the pile first. And if you can build an entire roadmap that’s based on validation and insights and data, then you can really start to shape and determine where your effort and time should be spent next.
0:19:53.3 MK: Jenni, I feel like you just said something so simple, but so… An obvious, but I had an epiphany, so.
0:20:03.4 TW: You are just… You’re swinging back and forth from, “I have no clue, I’m lost.”
0:20:08.9 JB: I was gonna say I’ve rebuilt your faith in yourself a little bit, but then I’m knocking you down.
0:20:16.3 MK: No. I actually feel like you’re coaching me up the next hill or something, to the top of the next peak.
0:20:23.7 TW: Five more minutes, you’ll be a certified UX researcher, so.
0:20:27.9 MH: That’s right.
0:20:27.9 JB: Yeah. But it is the simple stuff that we forget the most often. That’s the muscle we have to use daily. That’s the memory we have to just have at the ready is, Wait a minute, what insight and what data brought us here? Why is this our next focus? We don’t have one? Alright. Let’s go fix that. Or, Oh, we do have one? Great. And now that becomes the repeatable pattern.
0:20:51.8 TW: But back to the flip side, I’ve got a… Had a client for a few years, and he is… We kinda refer to him as a hypothesis-generating machine, super-motivated, always kinda thinking of new thoughts and ideas and questions, and really thinks about the business. But, boy, he keeps asking the same question. He was like, “Yeah, we don’t understand why these customers aren’t buying this product.” That’s kind of… I’m obfuscating the actual scenario. But we just don’t understand why they’re not. We know that there should be this many who should buy it. If they’re buying X, they should also buy Y, and we just don’t know why you would buy X and not buy Y too. And he will bring it up and it’s like, “Well, I don’t know. Maybe we should ask them.” And as soon as you say that, he completely dismisses it. He was like, “No value in research. Maybe there’s some value for it somewhere.”
0:21:47.9 TW: And we’ve tried to kinda pick away and try to figure out what his reaction is. In part, I think it’s two things. One, he over-indexes to anecdotal evidence as dangerous. And so in his mind, research is, We went out and asked four people, and one of them said something, and we’re gonna run with that… Or one of them said X, and we’re gonna say, 25% of our customers think X. So it’s like, Okay, well, what you’re basing it on was either poorly done, or it’s a misunderstanding of what it is. But it’s gotten to the point of comical. It is one question that we genuinely don’t know the answer to. And he keeps bringing it up, and then he keeps saying, We just… We need to find it in the data. And it’s like, Well, that is a question where you are looking under a street lamp for your keys, ’cause that’s where the light is, and that is not where you dropped the keys. You dropped your keys over in the alley.
0:22:45.0 MH: I literally used that analogy earlier today. I love that one.
0:22:49.3 MK: I’ve never heard that analogy.
0:22:53.3 TW: But I wonder how much is… If there was irresponsible, to that point of… If you say, “Here’s the why,” but it was poorly done, and so it actually was not correct, if it’s the same thing that if you don’t do this well, the right tool with the right rigor, then you can burn some… For all I know, this guy had a bad experience with some sort of market research 20 years ago, and he has not let it go, and won’t let his teams do it again.
0:23:26.0 MK: Look, can I just say, some people are stupid. I know that’s an awful judgment, but I haven’t had coffee yet, and some people are just stupid.
0:23:36.8 JB: I will say, usually acknowledging… Don’t disagree. I will say, usually acknowledging the pain that you… I’m an empath, so this is just my personal style coming out a little bit. But if in that situation, I would just say like, Hey, I wanna get you involved and bought into the method that we’re gonna use for this so that we can confidently answer your why. Here’s what we propose. I want you to help craft and make sure. And here’s the buy-in to the methodology and everything that we’re using, so you know and trust that. And here’s the volume of people that we’re gonna measure this to, so it’s not just a little flicker of signal over here. It’s actually the majority of the audience agrees and feels this way.
0:24:18.3 JB: And then once he’s emotionally bought into that method to gathering it and saying, Look, I want to answer this question with and for you, and I think it’s a great question, I think the spirit is right, because typically the companies that embrace the why have a much more sustainable long-term positive outcome. It makes their what of the data look a lot better. And to that idea of a searchlight, oftentimes, the qual is that searchlight around the room that illuminates the… I’m gonna mix our metaphors a little bit. I don’t know where he lost his keys. But it’s that searchlight around the room that you suddenly turn it and it’s like, Oh, there. There is where we should be focusing.
0:24:57.6 JB: And so you get that intrinsic value. And frankly, that’s also tied to the emotional value for the stakeholders a lot of times. It becomes more intrinsically valuable to… As a win, and then becomes more quantifiable as a win, because you understand the motivators, the desires, the propensities of the users that are driving the hypotheses that then create your next curiosity and hypothesis to follow.
0:25:24.5 MH: It seems like there may need to be organizational precursors to effective qualitative research and experimentation. What’s your reaction to that?
0:25:38.0 JB: I think the organizations that I’ve seen drive the most value through qual or quant or a combination of both are those that have an executive-level stakeholder who is all about the customer experience. And they are all about learning. And if that is their attitude, then embracing that downward and outward through the team is very easy. If not, you kinda need a maverick to start to push that narrative upwards instead, to the executive stakeholder. And then you find the… Typically, you do best if you can find that really magnetic or charismatic stakeholder at the ground level and at the exec level and say, “We’re in this together. And this is our vision. And we’re really gonna believe in answering both the what and the why. And we’re gonna solve that with data and insights. And we’re gonna drive change that way.”
0:26:29.8 JB: And then that creates that culture of asking, “What data and insights drove this priority?” over and over again. Because it’s happening at multiple levels within the organization. That’s really where the organizational change comes from. And if you can find both, great. [chuckle]
0:26:45.0 MH: Okay, let’s step aside for a brief word about our sponsor, ObservePoint. It’s a platform that automatically audits your entire website data collection for errors and issues. That includes testing more than just individual pages, testing your most important user paths for both functionality and accurate data collection.
0:27:04.0 MK: ObservePoint can alert you immediately if something goes wrong. But it also tracks the results of these audits over time. So you can see if your data quality and QA processes are improving or worsening. But we all know I’m an optimist. So let’s go for improving.
0:27:18.9 TW: Absolutely. We want those quality trends going up and to the right, people. And of course, quality data collection also means respecting users’ privacy. So ObservePoint now has the ability to perform privacy compliance audits to identify all the tech collecting data on your site and ensure adherence to digital standards and government regulations for customer data.
0:27:39.7 MH: All of that, and they’ve got great taste in podcasts. So if you wanna learn more about ObservePoint’s many data and governance capabilities, go request a demo over at observepoint.com/analyticspowerhour. Now, let’s get back to the show.
0:27:56.7 MK: Speaking of organizational change, or just structure, it’s something you’ve kind of referred to a few times: basically the need to be highly cross-functional, embedded. It’s tough to scale. But if you were setting up this type of function, how would you structure it? Just hypothetically.
0:28:16.1 JB: I’m a huge fan of, I don’t even know what it’s called, but I always heard it referred to as that hub and spoke. That was always my favorite, where there’s a center of excellence, a true center-of-excellence team, that are the master-craft resources. And then as you grow and scale all of the individual teams, whether that’s by product or by business unit or by marketing function, you name it, then you can start to embed cross-functional resources across each one of those. But they always have that best-in-class team to go ask questions of. And explore new methodologies. And explore new tools. And say, “Let’s check this out together. Have we thought about this? What do we do about that?”
0:28:53.6 JB: And certainly that center-of-excellence pod needs to be embracing of data and insights, both. They need to want both. They have to have an appetite for quant and qual. Because they have to have the appetite to answer the why. And with the teammates and the team members, you can feel and see that curiosity in interviewing, in asking people the most interesting test they ever ran, or just seeing them light up to describe what it was that got their fire lit a little bit; you see if that curiosity is there or not.
0:29:25.2 MK: So when you talk about… ‘Cause I feel like this is something that I’m constantly badgering people about is research done well is a really good mix of qual and quant. They have to co-exist. And be so deeply interwoven. And the truth is not many teams do it well. You’ve mentioned a few times you’ve seen some teams that have done it well. What does good look like? And how do people get there?
0:29:56.7 JB: Yeah, I think if I… So the best way I can describe that is if I’m designing a testing program from the ground up today. And by testing, I mean quant and qual, by the way. So to me, they can’t be broken apart anymore. And so I think it is about having a team that can not only build and envision the next experiences and hear a hypothesis or a problem. But all of us are gonna participate and begin from validating the problems. And that’s gonna be the foundation of the roadmap. And then if we can build that way, we’re gonna build those cross-functional resources. We’re gonna go forward from there and really establish the experts to say, “Okay, the first time we set the study up, we need to make sure that we’re really thinking about it responsibly.”
0:30:40.0 JB: And then it’s copy-paste. This study looks like that other study we did, so we can start to do it more often. But having the right toolset is a critical part of that. I think there are advantages to everyone being able to responsibly gather insights with a tool that everybody depends on and relies on. And that really is a game changer. It just opens up this scale that you don’t need to hire as many people for. And you can, eventually. But you create that demand that then gives you the headcount.
0:31:08.1 TW: Is there a risk? It feels like the breadth of qualitative research tools, or the types of qualitative research, card sorting, diary studies, journey mapping, persona building, is such that no organization is gonna have the skills and the resources to do all of those. So it seems like you have to sort of pick the subset that you have a need for. Do you run into a risk where you’ve got everybody bought in, but all of a sudden they’re using the wrong, or less optimal, qualitative research method or approach, where they’re using a panel survey and they should be doing, I don’t know, persona building?
0:32:01.1 JB: Yeah. Speaking of bias, I have bias. So I answer this with that bias. There is a difference between generative and formative-level research and evaluative research. And I think with evaluative research, the people who do it can be anyone on any team. And I think more of the formative and generative research, that’s more of the tree testing, card sorting, expert-level UX research stuff, is that piece that you protect for the experts to be able to lead and do. But if you open their bandwidth to not be buried in the evaluative research asks all the time, then they have the ability to go that 10 miles deep. And the rest of the teams don’t have to wait in a queue for them to go get insights at the evaluative research level that they need. So I think that’s a big part of it, is being able to do that at scale.
0:32:53.8 TW: So does the generative research, generally speaking, have less of a value of being linked to quantitative? Or is the evaluative… Is that when you’re saying the quant plus qual kinda link together, does that index more to the evaluative…
0:33:14.2 JB: Evaluative. Yeah, yeah.
0:33:16.1 TW: Yeah.
0:33:16.8 JB: Evaluative is easier to tie to quant because it’s typically later stage, once the experience is prototype level or later. You’re not creating the early stages of something. I think there’s value in doing evaluative research of competitors or existing flows that are out there beyond just your own when you’re creating something that new. So evaluative can exist throughout the process. But typically, generative and formative research falls into that expert level user. And the evaluative happens once you have an experience or a design or an idea that you wanna gather feedback on. And I do think one of the biggest blind spots is not running research on experiences you don’t own. You may have a product that exists on a multitude of marketplaces, and you should be researching it on all of those marketplaces. You can have a product that is similar to a competitive product; you should be researching that competitive product, not just your own. I think some of the biggest light bulb moments come from those.
0:34:16.2 TW: Which I… Yeah, not to talk about Wevo, but that was something that I had not really thought about when I saw you speaking at OneConference. OneConference. I don’t know how to say the… No space. There was no space between one and conference.
0:34:32.6 MH: That’s right. Thank you.
0:34:33.0 TW: I didn’t wanna inadvertently think it was one conference. But was that idea of, “I’m selling widgets on my website. My widgets are also available on Amazon. My competitor is selling a directly competitive… They’re selling it both on their website and on Amazon.” And with these… One of these forms of qualitative research is, it’s all out there public. It’s just trying to get scale at getting the user’s perception of or thoughts about and being able to line those up. Which does… I agree, I think that’s very much our tendency, to really focus on us. It’s like, “What’s the exit rate on our website?”
0:35:21.8 TW: “Well, it’s 100%. Congratulations. Nobody is that into your website.” But when you’re… Do you not wind up with a blind spot if you’re doing research where you’re evaluating… ’Cause there’s one of… Like a heuristic evaluation. Let’s just go through and do that comparison. And then there’s… I think what Wevo does is more, “No, let’s get users that are in kind of whatever population we’ve defined to go through and gather research.” But do you… At that point, you don’t have the… You don’t have any of the quantitative behavioral stuff to marry up with that. You don’t know how their site is performing. All you know is what your recruited users say about it. Is that right?
0:36:09.0 JB: Yes. And I’ve never taken an improv class, but there you go.
0:36:12.7 TW: I was gonna say, “Nicely done.”
0:36:13.6 MH: Yes, it’s nice.
0:36:17.5 JB: The idea of gathering all of that feedback at scale is that you can use AI and tools available to organize and theme and analyze what are the consistent messages these panelists or users are telling us about these experiences. And again, that crosses brand and credibility and UX and design and content and messaging and pricing. And all of that is one to the user, to the visitor. And in the live experiences that they go through on a daily basis, when they look at a product, headphones on a direct site, and then they go look at Amazon, and then they’re gonna look at Best Buy, and they’re thinking about which one is closest to where they can go pick them up by the time they need them to record the podcast. It’s not just an online single-channel experience. You need that massive feedback from all of those people across all those channels.
0:37:06.5 JB: So you can stitch together a singular journey like that. And say, “Imagine you’re going through this journey. And these are the different points on that that you go through. And now all of you, hundreds of you per page in that journey, tell us what you think. And then we can use AI and other tools available to say, “Alright, now of all those open-ended responses where we didn’t bias them, we didn’t tell them what to think. We just truly captured their feedback. What bubbled to the top? What themes really popped out of all of that massive data?” And then it becomes qualitative at a quantitative level, where you can sometimes then turn that into your quantitative road map. And say, “Hey, we should be digging a little bit deeper here. I think there’s something going on that we’ve maybe not thought to explore before.” And/or just having that to pair with quantitative is hugely valuable.
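The “what themes bubbled to the top” step Jenni describes can be illustrated at toy scale with nothing more than word counting over open-ended responses. A minimal sketch, where the responses, the stopword list, and the surfaced terms are all made up for illustration (real tools like the ones discussed here use far richer analysis than raw term frequency):

```python
from collections import Counter

# Hypothetical open-ended responses from a page-level study.
responses = [
    "The pricing page was confusing and I could not find shipping costs",
    "Loved the design but the pricing felt hidden",
    "Shipping costs appeared too late in checkout",
    "Checkout was smooth and design is clean",
    "Could not compare pricing plans easily",
]

# Tiny illustrative stopword list; real pipelines use a proper NLP stack.
STOPWORDS = {"the", "was", "and", "i", "could", "not", "but", "is", "a",
             "in", "too", "felt"}

def themes(texts, top_n=3):
    """Count non-stopword terms across all responses and surface the
    most frequent ones, i.e. what 'bubbled to the top'."""
    counts = Counter(
        word
        for text in texts
        for word in text.lower().split()
        if word not in STOPWORDS
    )
    return counts.most_common(top_n)

# 'pricing' surfaces as the dominant theme in this toy data.
print(themes(responses))
```

The point of the sketch is only the shape of the workflow: unbiased open-ended capture first, then aggregation to find consistent messages, then those themes feed the quantitative roadmap.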
0:37:52.0 TW: And will that digging deeper sometimes be, “Oh, we’re gonna actually do research again. But we’re gonna have a different focus?”
0:37:57.2 JB: Absolutely.
0:37:58.6 TW: Okay.
0:38:00.0 JB: So that, then you can go into the 10 miles deep. Once you understand the mile wide, now you can spend more valuable time going into the 10 miles deep. But otherwise, you’re just kind of throwing darts and seeing, “Alright, I guess I’ll go 10 miles deep there,” without any validation. And that’s really what broader evaluative research at scale can deliver: you get that validation along with a ton of discovery light bulbs, and then you know where to go spend that time and investment. I could clearly geek out about this for a really long time. Longest answers of any guest you’ve ever heard. Sorry.
0:38:32.1 MK: I wish our listeners could see all of your notes because I’m like… We obviously do prep for the show and I was like… Anyway, I feel like I’ve learned a lot.
0:38:40.8 TW: We hope that it’s obvious that we prepped for the show.
0:38:42.5 MK: Well…
0:38:44.1 MK: Yes.
0:38:44.1 TW: We are a little fly by night. [chuckle]
0:38:45.7 MK: Okay. But anyway…
0:38:47.4 MK: What I did wanna ask about is communicating qual research in terms of presentation. So, data viz is this massive thing, we’re always talking about how to present quant data, I know there’s always… If I say quant and qual enough times, I always invert them at some point in time, so watch out, folks.
0:39:10.4 JB: You’re not alone. You’re not alone. As long as you say data, Moe, we are good.
0:39:18.7 MK: I guess, is there starting to become industry best practices when it comes to how to present qual research? Because it is one of those things, right, it is very… I don’t wanna say timely, but it does kind of expire, it has a shelf life, and so…
0:39:36.7 JB: It does, yeah.
0:39:37.6 MK: And typically, you see people put together really beautiful presentations, which I’m obviously a big fan of, but how do you then move quickly if that’s the best method of communication?
0:39:52.1 JB: Yeah. I think consistency is key. If you’re inconsistent in your data storytelling, it invites skepticism, and so there’s a fluidity with which you need to present your key findings. There’s that, “This much data within our qual set supported this theme that we’ve elevated, and here are some real user quotes that reflect it.” So it’s that combination of real verbatims with the scale at which that was relevant. And it’s really funny that you say it does expire. One of our clients recently came up with this brilliant idea, I can’t believe none of us thought of it, which is, “I get that I wanna keep testing my own experiences and gathering qual at scale for those, I wanna see what new ideas are working, I wanna test across my journeys on my own site, I get that, but could you just schedule some competitor benchmarking for me on a quarterly basis?”
0:40:47.9 JB: “Their sites are changing and I don’t have time to keep an eye on them. And I know they’re testing and optimizing, and I know the users in the marketplace are testing, and I’m just overwhelmed, so could you just plan every quarter to benchmark my competitors and look at how their experiences are changing and how that measuring stick is looking against them?” And so not only does the market change, and therefore the benchmarks change, but also the experiences in that set are changing. And so it’s really important to be consistent with the frequency, as much as you are with the way that you tell the data story really consistently over and over again.
0:41:23.7 TW: Yeah, for some reason in my head, I think that it is easier to commit data crimes with qualitative data, and maybe it’s just because I’m more used to quantitative data, but I’ve just seen so many poor… You do a survey, and so somebody will take their opinion and pull the open-ended remarks, throw it in the slide and be like, “Well, users are saying this,” and it’s like you just cherry-picked. I don’t know, what’s…
0:41:46.5 MK: I feel like people do that with quant data too.
0:41:49.9 TW: They do, they absolutely do, but data becomes very political in a lot of organizations, but to me it seems like there’s more danger, sometimes.
0:42:00.7 MK: It’s easier to weaponize, almost.
0:42:02.6 MH: Yeah.
0:42:02.6 TW: What’s great is when you have one product page and you say, “This product page has amazing engagement,” and then what you don’t realize is that’s because you’ve been having qualitative research run on it and it hasn’t been getting filtered out because it’s a competitor who’s running it, and so, you can mess with somebody’s data. Moe’s looking at me like, “What?”
0:42:17.7 JB: That’s a whole different kettle of fish.
0:42:19.5 MK: I did not follow that example.
0:42:21.7 MH: I think Tim’s dropping a very specific example there.
0:42:26.0 TW: No. Not on theory, right? When you’re collecting research on other sites, you are generating traffic, it’s not gonna be enough…
0:42:33.9 MH: Oh, oh, too bad…
0:42:35.0 TW: Volume to… If you’re…
0:42:35.7 MK: Oh.
0:42:36.9 JB: From a client perspective, if you’re inflating… Yeah, that’s… Yeah.
0:42:40.8 TW: Yeah.
0:42:41.7 JB: But I think, Michael, to your question, that goes back to the bias. Nine times out of 10, if there is small-volume, anecdotal-only evidence from user research, it’s going to be met with equally anecdotal opposition. It’s my anecdote versus your anecdote, my experience versus your experience, my bias versus the bias of whoever ran the study, and that’s the biggest reason that you need it at scale: you get around that.
0:43:11.1 MH: Well, I’ve seen executives just really upset with how dumb the users in the usability tests are, and why can’t they get it right. And you’re like, “No, they’re showing us what we’ve done wrong. Don’t you see, it’s… ”
0:43:25.9 JB: And that’s the beauty of the qual, and you can be like… And actually, this supports our exit rate, right? Like…
0:43:31.6 MH: Yeah, that’s right.
0:43:32.3 JB: “These are telling us the same story,” and that’s really where that pairing exists is, “We’ve looked at the mile wide, that told us where to go mile deep, and now they’re consistently telling us, “That’s where the problem is and what the problem is, but now we know why it’s a problem, now we know why they’re stuck, why they’re not converting.”
0:43:50.5 MH: Your choice is to fix the experience or you can go and try to educate all of the users to not fit your…
0:43:57.0 TW: That’s right.
0:43:57.8 MH: Your definition of dumb.
0:44:00.8 JB: I will also say there is a big benefit of that, again, that buy-in piece, if they have a hand at all in crafting the study that you’re gonna move forward with, and they’re bought in before you run it. Don’t just surprise them with qualitative data, get the buy-in and say, “We’re gonna go understand the why of this.” “Cool, alright, let’s talk about that later.” Now they’re leaned in.
0:44:24.1 MH: Alright. It’s time for that segment, which is the conundrum du jour, the Conductrics quiz, the quizzical query that sometimes puts us in a quandary. Moe and Tim, are you ready to go?
0:44:40.4 TW: As ready as I ever am.
0:44:42.9 MH: Nice.
0:44:42.9 MK: Exactly.
0:44:44.3 MH: Well, let’s do a quick word about our sponsor, Conductrics. A/B testing vendors sometimes promise a silver bullet to make experimentation easy, but running an effective program is really hard work. You need a technology partner that is innovative and forthright about the challenges of building and maintaining a successful experimentation program. And for over a decade, Conductrics has been that partner for some of the world’s largest companies, helping them discover and deliver effective experiences for their customers. Along with offering best-in-class technology for A/B testing, contextual bandits, and predictive targeting, Conductrics always provides honest feedback and goes above and beyond to help their clients achieve their testing and experimentation goals. They also go above and beyond to do this awesome quiz. You can check them out at conductrics.com.
0:45:35.6 MH: Let’s talk about which listeners you’re representing. Alright, Moe, you are representing Jodie Salvo. So, away you go.
0:45:44.7 MK: Hello, Jodie.
0:45:46.4 MH: Perfect. And Tim, you are representing none other than Simon Pulton. Okay, let’s get into the quiz. Now, let’s just pretend that I am very eager to get the podcast recording finally started. Unfortunately, Tim and Moe are having a bit of a kerfuffle, which is keeping them from getting started. And Michael asks, “What seems to be the problem?” Moe answers, “We’re arguing about p-values again. While we agree that the p-value is the probability of seeing the data, or more extreme data, assuming that the null hypothesis is true, we don’t seem to agree on what distribution the p-value takes when the null really is true.” Michael calmly responds, “The p-value follows a blank distribution when the null is true. Now, can we get going and start recording the podcast?” What distribution did Michael say the p-value follows under the null? Is it A, Gaussian; B, Log-normal; C, Gamma; D, Uniform; or E, Beta? This one might be easy.
0:46:52.4 TW: I actually understand the question. And yet do not have a…
0:46:57.6 MH: I just like that I’m calm in this one. Okay.
0:47:01.1 MK: I did notice that change in the story arc. And I wondered if perhaps you might have had influence over that.
0:47:08.1 MH: I’ve been lobbying pretty heavily for weeks. So this one, yeah, I feel really good about how calm and cool I am at this one.
0:47:17.2 MK: What depresses me is that like 99% of my team, if they ever listened to the show, would be screaming the answer at me right now.
0:47:24.6 MH: I feel like it’s a missed opportunity on your part. You could have been typing this into Slack all this time.
0:47:29.7 MK: I can’t type at the speed at which you read.
0:47:34.0 MH: Well, but I’ve given you all this sort of dead air talking about how calm I am and stuff like that. Yeah. Anyways, that’s fine.
0:47:41.4 TW: Do you want me to start with an elimination?
0:47:45.2 MH: If you want to. And that’s one thing we have done in the past.
0:47:47.5 TW: Or do you wanna…
0:47:48.3 MK: Okay.
0:47:50.5 TW: You wanna go for it, Moe? No. [laughter] So I’ve actually spent a lot of the last couple of days on the log, log-log, and log-odds front. I’m probably as knowledgeable about logs as I’ve ever been in my life, which means they still confuse me a bit. But because the p-value is gonna be between zero and one, it does not feel like it’s a log-normal type thing. So I’m gonna eliminate… I’m gonna try to eliminate B.
0:48:20.8 MH: I love the thinking. And let’s eliminate B. It’s gone. Log normal out the window. Now we have Gaussian, gamma, uniform and beta. A, C, D, E.
0:48:34.5 MK: Look, I’m gonna eliminate beta.
0:48:38.1 MH: You’re gonna eliminate beta. Okay, let’s compute that. Pup pup pup pup pup. Yay, yes. We’ve eliminated beta. Now we have Gaussian, gamma and uniform as our only three options remaining.
0:48:53.9 TW: This may be bad. But I don’t see how it could possibly be a uniform distribution. So I’m gonna aim for nixing that. ‘Cause surely it’s gonna skew towards the smaller. So I’m gonna try to eliminate uniform.
0:49:13.3 MH: Gonna eliminate uniform. Okay, calculating. And you… You know how Minesweeper works?
0:49:22.3 TW: Yeah. Did I just hit the bomb?
0:49:22.9 MH: You just hit the bomb. Uniform is the answer. And Moe wins.
0:49:27.5 MK: I love winning by default. Nice. I love winning by default. What is it? Winning by…
0:49:33.9 MH: It’s just winning. Jodie Salvo is a winner. So, Moe, great job. So here’s what happens. The answer is D, uniform. If the null is true, which in A/B testing usually means that the B version is not better than the control, then the p-value of the test statistic has an equal probability of taking any value between 0 and 1. Which is why 5% of the time, if the B variation is not better than the control, the p-value will be less than or equal to 0.05, which corresponds to the 5% alpha, or 95% confidence level, frequently used for A/B testing. One could argue that E, the beta distribution, is also right, since a Beta(1, 1) is the same as the uniform on the interval [0, 1]. But unless either Tim or Moe explicitly says Beta(1, 1), the answer is not exact.
0:50:25.4 MH: I probably should have stopped reading a while ago. And then there… Oh, there is probably a way to recast the uniform as a gamma distribution. But that’s also not accepted unless Moe or Tim is explicit about its construction. [laughter] I think we got to the right answer. Congratulations, Moe. You’re the winner. Tim, you do a great job week in, week out. And of course, we love doing the quiz for our listeners. And thank you, Conductrics, for sponsoring it. Please check them out at conductrics.com. You can tell they know a lot about A/B testing and p-values. Alright, let’s get back to the show.
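The quiz answer is easy to check empirically: when the null hypothesis is true, simulated p-values spread out evenly over [0, 1], so about 5% land at or below 0.05. A minimal sketch using only the standard library (the two-sided z-test setup here is illustrative, not from the show):

```python
import math
import random

random.seed(0)

def p_value_two_sided(z: float) -> float:
    """Two-sided p-value for a z statistic under a standard normal null."""
    # Standard normal CDF via the error function (stdlib only).
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# Simulate many A/B tests where the null is true (B is no better than A):
# each test statistic is just standard normal noise.
p_values = [p_value_two_sided(random.gauss(0.0, 1.0)) for _ in range(100_000)]

# If p-values are uniform on [0, 1], roughly 5% should be <= 0.05.
frac_significant = sum(p <= 0.05 for p in p_values) / len(p_values)
print(f"share of p <= 0.05 under the null: {frac_significant:.3f}")
```

This is exactly why a fixed alpha of 0.05 produces a 5% false positive rate: under the null, the p-value is equally likely to land anywhere between 0 and 1, so 5% of it falls below the cutoff.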
0:51:05.5 MK: Out of curiosity, are there different methods that you would use if you did have a more pressing timeline?
0:51:14.9 JB: Again, I’m biased. Our results are really fast. That’s frankly where I fell in love with it. I had a six-week research project. Between staffing the people that we needed and the pages that we needed to look at, we had to pre-plan all of the different studies out of the gate. And then by the end of the first study, we wished we’d gone in a very different direction for the second and third. And now there is a way to gather all of those studies in one to two weeks, and you can do it simultaneously. That’s the other thing… Software isn’t everything, and individual researchers can’t run everything. It’s the power of both together, and I think if you have the ability to do that mile wide faster and more upfront, you have a much greater, happier team. They don’t get bored, they don’t get frustrated, they can actually spend their time adding value instead of just saying, “I can’t do that by then.” Which is a very fair and real answer.
0:52:11.0 MH: Well, one qualitative metric we’ve been trying to get back to is to adhere more to our hour-long status, so we do have to start to wrap up. But Jenni, this conversation has been amazing for a number of reasons. A, it’s so easy to love that work, and you explain it so well. So I’m thrilled that our listeners got a chance to just be involved with that. That question is gonna be… I wrote it down with the insight and the data that drove this priority. And I know I’m gonna use it over and over again, so thank you so much for that. And so much more we could totally cover here, but we do have to start to wrap up. Anyway, one thing…
0:52:52.4 JB: I told you longest answers of anybody you’ll ever have.
0:52:53.1 MH: No, no, no.
0:52:53.5 MK: Not true at all.
0:52:53.6 MH: You’re not even scratching the Tim Wilson high-water mark in terms of long-windedness, so you’re fine. These are perfect. Now, one thing we do love to do is go around the horn and share a last call on the show, something we think might be of interest to our listeners. Jenni, you’re our guest, do you have a last call you’d like to share?
0:53:14.2 JB: I do, I am nothing but a reflection of the really smart people I have been lucky to work with. So one of those, Matty Wishnow recently wrote an amazing blog post called The Analytics Hangover. And it’s just beautiful, I cannot recommend it enough. And in it, he talks about that combination of the what and the why, and it just sends light bulbs off all over the place. The Analytics Hangover, Matty Wishnow.
0:53:38.6 MH: And I assume that’s different than sort of the genesis of this podcast [laughter], which was the day after the lobby bar [laughter] at the conference. That’s a different kind of analytics hangover, I’m guessing.
0:53:48.2 JB: Different, different, yeah.
0:53:48.3 MH: That’s awesome, I will check that out.
0:53:50.8 TW: I was gonna say, I can qualitatively say that 60% of anything I do well when I present publicly is due to Matty Wishnow. So everything I learned about presenting or 60% of what I learned about presenting effectively was from Matty.
0:54:04.7 MH: Well, Tim, why don’t we go next to you. What’s your last call?
0:54:09.4 TW: So I’m gonna do two, but really, it’s really Jenni doing two, because this is completely random. But at OneConference, while we were having dinner with Jenni and Matt Policastro, so now all people who’ve been on the podcast, Jenni recommended… I don’t remember why. But I believe she actually had to phone a friend, also known as a spouse, to get the actual name of it, [laughter] but it’s the Barkley Marathons, the race that eats its young. It’s like an hour-and-a-half-long movie on YouTube, and I’m not gonna lie, I finished watching it before I left Chicago. Which meant that while not hanging out in the lobby bar, I was sucked into a five-loop ultramarathon in Tennessee. Just a hilarious, entertaining and fascinating documentary. So there’s that. I had to use that because…
0:54:58.9 JB: Gift that keeps on giving, right there.
0:55:00.0 MK: You do realize now Tim, that people are gonna want to come on the show because they’re gonna realize it’s a ticket to get to have dinner with you. [laughter]
0:55:07.9 MH: Yeah, at conferences.
0:55:08.3 JB: And talk about marathons in rural Tennessee.
0:55:15.0 TW: But somehow I was in the opposite… I don’t think time actually works to apply that and I’m a messy eater [laughter] People don’t wanna do that. [laughter]
0:55:24.3 JB: That’s that. [laughter]
0:55:24.9 TW: I had this cotton candy hanging out of my mouth. It was weird. So the other, a little bit more on analytics: a former guest, Tim Harford, on his Cautionary Tales podcast, had a recent episode called The Mummy’s Curse, which brings together the curse of King Tut’s tomb and that whole thing from 2012 when Target predicted a girl was pregnant and her dad got all upset. It’s got selection bias in it. It’s a little bit of a stem-winder, which maybe is why I like it as well. It goes on a bit, but it ties together multiple mummies’ curses with predictive analytics and Target, and really comes down to selection bias. It’s a good lesson with a very practical analyst application, if you listen to it that way.
0:56:19.8 MH: Nice.
0:56:19.9 TW: And that is all.
0:56:22.1 MH: Okay, well, that’s great. Alright, Moe, what about you? What’s your last call?
0:56:26.1 MK: Mine could not be more different. So I’m spending a lot of time at the moment on team health, and just checking in on how everyone’s doing. We’re all still working from home, where I am in my part of the world. And there’s definitely some fatigue. And I think, regardless of where we’re at, I guess the last couple of years have been pretty tough. And it’s definitely taking its toll as we get to the end of the year. And big shock, Adam Grant has been in my life again. And one of the things that he has suggested doing recently… So obviously I’m still catching up with lots of team members and we have new team members that I haven’t met, and he’s like, instead of asking people what they do ask what they love.
0:57:11.9 MK: And every conversation I’ve had in the last three weeks, I’ve focused on asking people what they love to do, and it has just… I mean, I had the most beautiful conversation with one of the guys in the team yesterday about cooking. I had one the other week about the nature walks he loves doing. And I’m just finding it’s been a really beautiful way to connect with team members. I feel like Tim’s probably gonna be eye-rolling over there in the corner, but…
0:57:44.6 TW: Our HR department asked me to do the same thing. So I just called them up and said, “Are you doing your fucking job? Yes or no?” [laughter] On to the next one, I had the whole thing done in an hour and a half.
0:57:55.8 MK: But anyway, [laughter] it’s been something that has made me reframe, I guess, the conversations I’m having with people, and when you talk about what people love doing, it’s the same as like, What’s the favorite piece of research you’ve done? People’s eyes light up, they just get really excited and it ends up being this really nice part of your day. Oh my God, I had this amazing chat with someone that studied Linguistics. Anyway, the topics go on. So yes, ask people what they love to do instead of what they do.
0:58:27.4 JB: I love that you’ve connected it back to the why unintentionally. It’s all about the humanity in the why. That’s beautiful.
0:58:35.1 TW: My God, I’m getting uncomfortable. [laughter]
0:58:36.0 MH: I think maybe like… Never mind, I’m not gonna say…
0:58:40.1 TW: Please tell me you have an academic study, Michael, to share.
0:58:43.2 MH: Well, it is sort of data-related, and I’m glad you asked. A little while back, I ran across this, and sort of it’s humorous, but sort of it’s interesting. So there’s a guy named Matt Turk, who’s a venture capitalist, and he and some other people have put together a 2021 machine learning, AI, and data landscape. So…
0:59:04.5 TW: Now we’re talking.
0:59:04.8 MH: I am a sucker for these big landscape slides with all of the crazy 8,000 vendors and what have you. So this is now the data and AI one. Hilariously, Adobe apparently does not make it onto there in any category, so I don’t know how thorough it is, but it is interesting and worth taking a look at. And certainly for all of you agency folks like me out there who need a big old landscape slide for your decks, there’s another option out there for you.
0:59:31.8 TW: Did you actually look for Adobe Sensei? ’Cause I think, if you were just looking for Adobe, you weren’t looking for the AI engine, Adobe Sensei.
0:59:40.8 MH: I looked… I’m not saying I exhaustively looked ’cause all the little logos are really small, but I was looking for the Adobe logo and I couldn’t find it. Anyways, it’s interesting. I don’t think it means it’s complete, but there are some other things in there that are kind of nifty and it probably is a good resource and we all need these things, ’cause of course there’s a bajillion and one vendors out there for all these things so there is that. Is that not touchy-feely enough for you, Tim? ‘Cause I was sort of like, “Hey, Jenni and Moe, let’s go start a different podcast just talking about how to lead people and really interact,” and like you know, and I thought that would be fun. [laughter]
1:00:21.8 JB: It’s all a balance. It’s all a balance.
1:00:23.8 MH: That’s right. Okay, well, thank you for sharing the last calls, everyone, and I know if you’ve been listening, you’re probably thinking, “Hey, I’ve got some ideas about this or I’d like to ask some questions,” we would love to hear from you. The easiest way to do that is on the Measure Slack or on Twitter or on our LinkedIn group, and you can also email us at Contact@Analyticshour.io. So we’d love to hear from you. Anyway, Jenni, thank you again for coming on the show. This was great.
1:00:54.2 JB: My pleasure, thanks for having me.
1:00:56.1 MH: Awesome, and of course, no show would be complete without a mention of our illustrious producer, Josh Crowhurst, who does so much to make this show do what it do. So thank you, Josh, and for my two co-hosts, Tim and Moe. No matter what kind of data, qualitative, quantitative, I know I speak for both of them when I say keep analyzing.
1:01:23.2 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group and the Measure Chat Slack Group. Music for the podcast by Josh Crowhurst.
1:01:40.8 Charles Barkley: So, smart guys wanted to fit in, so they made up a term called analytics. Analytics don’t work.
1:01:47.8 Thom Hammerschmidt: Analytics, oh my God, what the fuck does that even mean?
1:01:56.8 TW: Should I be honest and actually admit that I wrote that today, but I was like, I’m gonna pretend that I’ve actually…
1:02:02.2 MH: You don’t have to say…
1:02:02.8 MK: Tim.
1:02:04.3 MH: No, you have all kinds of grace.
1:02:06.8 MK: No, but I love that you had a moment of being a human for once. That’s important ’cause I fuck up all the time.
1:02:16.6 MH: Yeah. Okay, so for… It makes us feel more connected to you as a person, Tim, and that’s a positive.
1:02:38.9 TW: Rock flag and why, why, why? [laughter]
1:02:46.8 MK: I feel like that might be one of your best.
1:02:50.8 TW: I know, it was like reverberating. [laughter]
1:02:54.5 MH: Like, reverberating?
1:02:55.8 TW: Yeah they were like reverberating or reverberating. Well I can’t talk good. Like yeah. [laughter]
1:03:02.8 MH: You got an analytics hangover?
1:03:04.8 TW: Yeah I’ve got an analytics hangover. Had one for years. Different, different kind.