#218: Delivering Value by Listening for Problems with Matty Wishnow

Do you ever feel like the experiments and analyses you’re working on feel a little bit like a trip on a hamster wheel — properly grounded in hypotheses, perhaps, but not necessarily moving the business forward like you’d hoped? On this episode, Matty Wishnow, the author of Listening for Growth: What Startups Need the Most but Hear the Least, joined Moe, Tim, and Val for a discussion about why that may be, and how reframing the work to focus first and foremost on identifying problems (and unmet opportunities) can be useful!

Links to Items Mentioned in the Show

Photo by Emanuel Kionke on Unsplash

Episode Transcript

0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language. Here are your hosts, Moe, Michael and Tim.

0:00:19.3 Tim Wilson: Hi everyone. Welcome to the Analytics Power Hour. This is Episode Number 218. I am Tim Wilson, taking on a couple of Michael Helbling’s duties today. The question is, is that a problem? I think it’s a problem. Tell me about it. Go ahead. I’m listening. Oh no, I can’t hear a thing. I can’t listen for problems. That’s not good, but listening for problems is the topic of today’s episode, so maybe that’s appropriate. One problem we definitely don’t have is needing to record another show without Moe because, yes, Moe Kiss, Marketing Data Lead at Canva, is back. Hey, Moe.

0:01:07.4 Moe Kiss: Hey, how’s it going?

[laughter]

0:01:11.0 TW: I have a question: you’ve got a second kid, number two, and she has popped out. So really the burning question is, how buried are you in your inboxes as you crawl your way back into day-to-day professional life? Five meters? Ten meters?

0:01:25.9 MK: Yeah, I would say I’m drowning, although the thing is, it doesn’t stress me out, I just roll with it.

0:01:33.2 TW: That’s good.

0:01:34.3 MK: It stresses you out though.

0:01:35.5 TW: It does. I’ve already had two emails come in, so I’ve got three emails in my inbox and I’m shaking; I’m not gonna make it through. But we had so much fun with our guest co-hosts while you were out, so we’re gonna continue to mix things up for a couple of episodes. So rather than Michael, the reason I am sitting in his chair on this episode is that I get to welcome Val Kroll, from Search Discovery, back to the co-hosting chair. It’s good to see you, Val.

0:02:06.1 Val Kroll: It’s great to be here, thanks for having me.

0:02:11.4 TW: Alright. I said that the topic of today’s show is listening for problems, and that may sound a little squishy, which… Not my forte. So we’ll see if we can bring it into focus. What really sparked the idea for this discussion was a book called Listening for Growth: What Startups Need the Most but Hear the Least by Matty Wishnow. So the smart thing to do seemed to be to bring Matty on as a guest. Matty is a serial entrepreneur, a co-founder of multiple successful startups, including Insound, for any of you vinyl heads out there like Val’s husband, and most recently Matty started and ran Clearhead, which was acquired by Accenture a few years ago. I am excited to welcome him to the show, as he’s had quite an impact on my career and my own professional growth from the brief time that we worked together back in the day. So welcome to the show, Matty.

0:03:03.0 Matty Wishnow: Thanks, Tim. Thanks, Val and Moe. Honored to be in such wonderful company and congratulations, Moe.

0:03:10.4 MK: Thank you.

0:03:12.2 MW: Moe, have you heard… Do you know the comedian Jim Gaffigan?

0:03:19.0 MK: No. I’m not good with comedians.

0:03:21.2 TW: He’s the Hot Pockets guy.

0:03:23.6 MK: Hot Pockets.

0:03:28.1 TW: Hot Pockets. Yeah, no?

0:03:28.9 MK: Okay, I’m gonna have to Google some stuff.

0:03:29.8 MW: Yeah, I think Jim Gaffigan has five kids, and I know you don’t have five kids, but when Tim was asking you about your inbox, I was thinking about a joke that he told, which is when he had four kids, he said four kids is like you’re drowning. And then somebody throws you a baby. [laughter]

0:03:49.4 MK: Yeah, I feel that deep in my core. [laughter]

0:03:52.4 TW: I like it. That’s awesome. Alright, so to dive into the show, maybe we can kick it off, Matty, by just having you talk a little bit about the motivation for the book, ’cause it would seem like it was… You were kind of getting some things out by writing it after reflecting on your past experiences. So what was it that you were seeing and realizing that made you wanna start documenting your thoughts?

0:04:23.3 MW: Yes, well, contrary, Tim, to what you may have felt, it was not an exorcism, it was… [laughter] I guess two reasons that I can think of. One, I love to write, and for the first time in my life since I was an undergrad, I had the time and space to do it seriously, so I think really the practice and the space was incredible. It was very fulfilling. But number two, to your point, I did have a topic and a thesis that I was interested in thinking through and researching that I thought was both relevant and possibly time-sensitive. So yeah, opportunity and motive, I guess.

0:05:05.9 TW: [chuckle] And the basics of that thesis… I guess there were several parts of it, but one of the things that really struck me was this idea that we jump too quickly to hypotheses. So what’s the elevator pitch on where you were starting to see that as being a challenge?

0:05:24.4 MW: Yeah. Well, here we are in 2023, ostensibly 12 or 13 years removed from the arrival of The Lean Startup, the book, and Lean Thinking and Lean Canvas, and about 11 or 12 years from the tsunami of A/B testing and personalization and adjacent software. And I was reflecting back on my own experience vis-à-vis all of that, and whether the promise of experimentation, data-driven design, data-driven product, data-driven marketing had paid off. And I was thinking of it broadly through the lens of growth: startup growth, product growth, performance marketing growth, all of that. You and I had talked about this previously, Tim, but I was both observing and intuiting a hangover and was trying to figure out what the source of that was. And my thesis was basically that growth, as we talk about it in startup bubbles, is broken, and that we all sort of know this, and we know it in spite of the data and the tricks of the trade, and the hacks and the cheat codes, and the very smart people that we have access to. And I was trying to figure out why I think it’s broken and what an alternative would be wherein the growth was healthy.

0:06:46.2 MK: I know we’re gonna get into this in a lot of depth based off Val and Tim’s thoughts. Before we get there though, I really identified, obviously, with this problem space that you’re talking about, growth hacking in startups, and this, like, need to expand. Now that you kind of feel like you’ve had this realization, do you think there are any companies that are doing this well, or, like, anyone in the market that’s, I guess, walking in step with you on us needing to rethink this space?

0:07:18.7 MW: That’s a fair question, Moe. It’s funny, the person who asked me this question first was my mom, and I was trying to describe to her what I thought was a fair answer to it. I would say the blunt answer is no. I would say the more subtle answer is there are many companies who I think employ evidence-based thinking, and I think that the Lean Startup community probably around 2015 pivoted slightly towards problems and away from hypotheses first; I think Ash Maurya has written and espoused a lot of problem-centric thinking. But I still think that problems, and more importantly what precedes problems, which I believe are goals and desires, tend to get yada-yada-ed. So I think many companies, especially in product and UX design, are increasingly problem-centric, but I don’t really think that’s where the origin of healthy growth is; I think that’s kind of where the constraints of the system are. As to companies that I would say are really mission- and purpose-driven, and think about growth in healthy terms, I’ll say anecdotally, as opposed to betraying confidentiality for companies I’ve worked with, I think that Airbnb does a pretty good job, honestly, of knowing why they exist, what success is, what problems they need to solve.

0:08:46.9 MW: Again, I’m not even suggesting these are companies that I have an inside view into, although, parenthetically, some are, but I think Nike… I think Nike does a good job of it. I think there are obviously parts of Amazon that do a good job of it, there are definitely parts of Google and Apple that do a good job of it, but I think most companies don’t. The list is very short.

0:09:10.5 VK: So when I looked at the book and I read the little excerpt, I was kind of thinking about what I expected to be diving into, and I thought it was something like a modern response, a thoughtful framework, to the fail-fast, Lean Startup approach. And I know that everyone’s gonna have their own interpretation of what it means to them, but a lot of what I heard was making the case for business-level experimentation: that so many decisions a company makes are actually experiments, even if they aren’t articulated as solid hypotheses, or even testable hypotheses. And it’s actually one of my favorite things to try to illuminate with a net-new client: actually, every time your dev team releases to production, that’s a test. Maybe it’s not a controlled experiment, and maybe you can’t measure it in that way, but you’re running lots of tests. I loved that framing that everything is that opportunity. I was wondering if you wanted to share a little bit more about that or how you think about that.

0:10:09.0 MW: Yeah, hear, hear. First of all, thank you. I think I was trying to thread a couple of needles here, which was, I didn’t want to write a book that was exclusively targeting people who are interested in optimization or A/B testing, because my interest was obviously beyond that. And it goes without saying that a startup unto itself is the biggest experiment that most people, especially the founders and the founding team involved, ever engage in, except possibly parenthood, I guess. [laughter] So I was thinking about experimentation, Val, on a number of levels, obviously, and hopefully that came through, and hopefully that’s what you’re pointing to: yes, not only are production-level changes experiments, but functionally, every change that we make where the outcome is uncertain and there’s an underlying supposition, these are all experiments. Thinking back to the previous question, where I was talking about the last 12 years being in the wake of the Lean Startup and A/B testing and optimization, I think the underside of this sort of gold rush towards lean, test-measure-learn behavior is that there’s been a conflation between A/B testing and experimentation.

0:11:25.3 MW: I don’t need to tell you or your listeners this, but it goes without saying that all cogent A/B tests are experiments, but not all experiments are conducted as A/B tests, and I think that that’s been kind of one of the undersides of the last 10 or 12 years. But yeah, having been a lifetime founder, I realized that for every change that we make in the business, HR changes, operational changes, billing changes, if you just scratch right beneath the surface, there is a hypothesis, and ideally there is a mechanism for defining success, measuring it, and making recommendations about outcomes. And I am very passionate about, to use your words, business-level experimentation.

0:12:03.2 TW: But is that… Is that part of the challenge? ‘Cause I feel like everything we do is an inherent experiment, but the thing that I kinda got excited about with the framing of it as a problem is that changes happen because there’s sort of an assumption that, well, people wouldn’t have done that if there wasn’t an underlying problem, and that’s a super dangerous assumption. It’s like, well, this tool has the ability to do a nurturing program; we bought the tool; therefore we should do a nurturing program. And next thing you know, you’re millions of dollars into developing a marketing automation tool, and nobody actually stopped to say, is there a problem that we’re solving, or is there an opportunity? That, to me, has been a frustration: things do happen, they are an experiment, but they’re not grounded in, is this an experiment worth doing, is this a change worth doing, and what are we solving for?

0:13:09.3 MW: Yeah, completely agreed. The most salient example I can think of, from, say, 2013 to 2019, was walking in and talking to someone who was in charge of digital for a business, and them saying, “Our site needs personalization.” I couldn’t count on all my fingers and toes the number of times that happened per year. To which I would say, “No, I don’t think that’s exactly what you mean. For instance, you might mean that returning customers who always buy a certain style of shoes get frustrated that they have to navigate a certain way, that it takes too many steps, and they think that you should know more about them. Do you think that that user needs an experience that recognizes them?” The idea that the problem you’re trying to solve is “your site needs personalization”… that’s not really the problem. I totally agree with you.

0:14:04.7 MW: Yeah, I completely agree with you that calling everything an experiment unto itself is not sufficient. And this gets into, Tim, what you were alluding to before, which is the mania of build-measure-learn, the hypothesis-driven frenzy, where it’s, “I have a hypothesis, I have a hypothesis, let’s experiment, let’s experiment,” and then you have to hire teams of people, you have to buy more software, and you’re buried under the weight of experimentation. The things that I would say to that are, number one, yes, everything’s an experiment; that does not mean that the rigor with which the experiment is conducted is the same. And in some cases, you don’t even conduct a rigorous experiment, but you still identify the parts of the behavior using the same language: there’s still a goal, there’s still a problem, there’s still a hypothesis, there’s still an experiment, and there’s still an outcome, even if it’s just a “just do it.” And then the other sea change, which seems like what attracted you, Tim, to the book on some level, is the shift towards problem-centricity: as opposed to “What do we test? Can we test?”, the change to “What problem are we solving for, and is it a problem most worth solving?” And that’s a big, big, big mind shift.

0:15:24.7 VK: Yeah, the whole “just because you can doesn’t mean you should” is a huge theme and anthem, especially for any problem that we start tech-first. I remember there was a retailer that we were potentially gonna be doing some business with, and they stated that their goal was a thousand personalizations in the next calendar year. I was like, oh boy, we’re gonna have to work on that one for sure. But I recently was at a conference, I won’t say the name, something like [0:15:52.7] ____ recently. [laughter] There were lots of people, it was actually a wonderful event, but there were lots of people that I talked to who thought that their problem was “we’re not making use of all the features that we’re paying for in our SKU,” trying to force problems into, or identify problems from, whatever the technology solved. And hopefully, hopefully I was able to make some positive impact on that, because it’s really painful to think that the problem is not making use of all the tools in your tool belt, rather than starting with some of the things that you were talking about: the problems, the problem statements, that are worth solving.

0:16:27.3 MW: Yeah, yeah. I would say that when I first started forcing both my team and our clients to start, before solution hypotheses, with problems, and before that with goals, and then, as we probably will get to, before that with desires, the first day was just an exhaustion, a barfing out of solutions that people were trying to call problems. That behavior is very, very ingrained, and you have to flush out the system before people are like, “Okay, I’m now of sane mind and can actually describe what a problem is,” because we are so solution-oriented; we so want cognitive closure, we so want to chase our ideas.

0:17:14.5 MK: Okay, I need to talk about the definition of a problem, or the concept of a problem, because this is the bit that really stuck with me. When we say “problem,” and you articulate this very, very well in your book, there are a lot of negative associations that go along with it: you think about complaints, you think about bugs. And that can sometimes be a bit un-sexy, right? People wanna build hot new features, they wanna launch. And when you say to people, “No, let’s take a step back and look at the problem,” you have to overcome first that negative association, because people will wanna get excited about the solution. Talk to me a little bit about how you’ve done that with the clients you work with, because I feel like that could be a real barrier for some people.

0:18:08.0 MW: Yeah, well, the first thing I would say, Moe, is you’ve got to name it to tame it. Which is, like you say, when I’m gonna describe what I mean by problems, I think initially some of you might brace yourselves, you might emotionally clench a little bit, for all the reasons you just described, Moe: shame, embarrassment, not wanting to acknowledge what you’ve swept under the rug. So my short and probably insufficient definition of problems is: obstacles, defects, and unmet opportunities. Those are the three big buckets. Now, “unmet opportunities” does a lot of the heavy lifting there, because unmet opportunities speaks to both opportunity cost and expectations that the end user, whether internal or external, has that you’re not fulfilling. So the first part is just taking a little bit of the air out of the word. Now, the second thing I always say is that problems are the fulcrum of growth; they are what reveal to us what the solution is. Solutions aren’t these magical revelations that come to us, and I use the example, obviously, of science: a virologist or an epidemiologist wouldn’t wake up one day and say, “I’m going to discover the solution, I’m going to invent the vaccine,” for some problem, some disease, that they don’t know about.

0:19:40.0 MW: They’re going to have rigorously studied the virus, rigorously studied the disease. And it’s amazing how intuitive and obvious the solution is once you understand the problem. The best hypotheses, in my experience, having tested literally many thousands of them, are almost always “stop doing the thing that causes the problem” or “do more of the thing that the customer wants.” Almost all of the winning hypotheses are some variation of those two ideas. So I think once you really explain to people that problems aren’t things to be ashamed of, and that the underside of problems is almost always the best solutions, the ears perk up. Then, as I sort of alluded to before, you have to do the hard work of getting people to articulate clear, concise, evidence-based problems, and that’s not necessarily any easier than articulating hypotheses.

0:20:36.4 TW: Is there a need, in the framing of it, to say, “These are problems,” without pointing fingers that somebody’s causing a problem? We’re trying to come together. I am probably gonna be using this example for the rest of my career: we were working with a pharma company, they had vaccines, and there was a vaccine that was kind of elective, and they were kinda relying on the healthcare providers to recommend the vaccine to the group of people it would be recommended for. And it felt like we had hypotheses and we were testing things, but when we’d have discussions, it kept lurking in the shadows that the problem was that there were some healthcare providers who just weren’t seeing the need to recommend the vaccine. We knew that people were coming in who should get this vaccine, and they weren’t, and it was the whole wanting to name it: that’s the problem. And that was one where I’m like, I don’t know what the solution is, but we keep not naming the problem. Finding the evidence around what the cause of the problem was, that was something we could then push for.

0:21:57.8 TW: And those same investment dollars were instead going to tweaking little things on the website or setting up super-intricate, low-volume email nurturing campaigns. To me, the problem language helps to say, that might get us to bigger swings than this “oh, we saw something that we thought we could make better, we saw a solution without ever naming the problem.” So, to me, it seems like naming the problem should help unify people and say, “Ah, we’re all in alignment that this is the problem. Now we can go and have debates and discussions about different possible solutions or hypotheses.” Is that how it works? Or can it work that way?

0:22:50.6 MW: Oh, without question. I would say that problem definition and prioritization is cathartic in a way that hypothesis development and prioritization is not. I think many of your listeners, and all of you, will relate: hypotheses are cheap, everyone has ideas, and they are by definition speculative, they are hypothetical, so prioritization is fraught. Problem definition and prioritization is more evidence-based, because problems are either happening or have happened. They can be quantified, and if they’re not immediately quantifiable, they can be relatively directly quantified through research or historical analytics. And what you alluded to, Tim, with the pharma client was what we call a “problem most worth solving.” Problems unto themselves are not especially valuable. Root problems, the problems that constrain a system of problems, are quite valuable, but the problems most worth solving are like the root of the root systems, and those are the ones that, if they’re unsolvable, what you’re looking at is a product or business that’s going to fail. And I’ve encountered that several times, where a great-looking software product from a very smart team has not acknowledged the fact that what they want to sell their product for is not what their customers are able to pay.

0:24:16.5 MW: That’s a very, very common thing. They wanna sell it for hundreds of thousands of dollars a year, but the wallet their customers have might only be tens of thousands of dollars a year. When you confront a problem like that, you can double down on the margins of everything else and A/B test your way, incrementally, but it’s not going to resolve the fact that you either need to change customers or change pricing.

0:24:40.0 MK: So one of the questions that I do have, and it’s actually funny, it’s been on my mind for a fairly long time, so I feel like your book has come at the perfect time for me because it’s bringing together all these threads. Whether it’s experimentation or analysis, I think to be a good analyst, you need to be listening for problems. That’s the core of the job, if you ask me. How do we get to finding out what the problems are, and then sitting with our stakeholders to really do that comparison and figure out what the problems worth solving are? How do we do the listening bit well, and make sure that we’re doing it regularly and effectively?

0:25:23.3 MW: Yeah, well, the question of where problems most worth solving come from is answered fairly swiftly in the book, which is: you have to listen to your goals, ideally SMART goals, and ideally SMART goals that are balanced on a scorecard. I’m a very big proponent of a balanced scorecard approach, of having an effective EKG for your business, your product, your department, whatever it is. And I describe five dimensions of a balanced scorecard; historically, the balanced scorecard has four, but I thought a lot about it, and I proposed five. And every business, and most departments and most products, would have at least four, and in most cases five, dimensions of a balanced scorecard. Those SMART goals, that balanced scorecard, help orient your ears. They help direct you, they help constrain and say, “Listen in this direction, not in that direction. Listen towards employee retention, listen towards product utility, listen towards unit margin,” whatever it is. That’s where you should be listening for the problems, because otherwise there’s a sea of data and information and potential problems out there. And what I like to say is: hypotheses are democratic; they’re only right if they’re valid. Problems are meritocratic; they are observable.

0:26:54.3 MW: The biggest and best problems are the ones you should focus on. However, goals are autocratic. We don’t all get to just say, “Oh, I think this should be the goal”; goals are defined by leadership. And that is how growth in the system works: leadership defines the constraints, and then the team of people who are tasked with listening for problems have to sort of tune their ears in the direction of those goals. I know I’m mixing a lot of metaphors, but that’s kind of my most visual description of what is an auditory medium that we’re working in today.
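Matty’s “goals orient your ears” idea can be sketched in code. Here is a minimal illustration, with the caveat that the dimension names, goals, and problems below are all invented for this sketch; the episode says the book proposes five scorecard dimensions but doesn’t enumerate them.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    dimension: str   # which scorecard dimension the goal ladders up to (hypothetical names)
    statement: str   # the SMART goal itself
    metric: str      # how progress is measured

@dataclass
class Problem:
    description: str  # clear, concise, evidence-based statement
    dimension: str    # the goal dimension the problem threatens
    evidence: str     # where it was observed (research, analytics, etc.)

def problems_worth_hearing(problems: list[Problem], goals: list[Goal]) -> list[Problem]:
    """Goals are autocratic: only problems that point at a dimension
    leadership has set a goal against stay in earshot; the rest is
    noise, at least until the goals themselves are revisited."""
    goal_dimensions = {g.dimension for g in goals}
    return [p for p in problems if p.dimension in goal_dimensions]

goals = [
    Goal("customer", "Lift repeat purchase rate from 18% to 22% by Q4", "repeat purchase rate"),
]
observed = [
    Problem("Returning buyers can't reorder in under three clicks", "customer", "session recordings"),
    Problem("Office lease renewal is 40% over budget", "operations", "finance review"),
]
print([p.description for p in problems_worth_hearing(observed, goals)])
```

The point of the sketch is the filter: problems get collected broadly, but only the ones that ladder to a dimension leadership has set a goal against get listened to first.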

0:27:30.0 MK: No, I love that. What about, though, when the goals conflict?

0:27:35.2 TW: Yeah, I was like, “Moe, come on, I wanna hear from the inside,” and you’re like, “Oh, but what about.” Yeah.

0:27:41.0 MW: Yeah, well, I would say that’s a balanced scorecard problem. A balanced scorecard doesn’t have conflicting goals. There should be enterprise goals that really explain what the C-suite is accountable for, and departmental goals or product goals beneath that should ladder up. I’m not naive enough to suggest that conflict doesn’t happen, but I think good leadership is about clear balanced scorecards, and I think that very, very rarely happens. Now, Moe, I didn’t answer this, and I know this wasn’t really your question, but the beautiful thing about problems is that we get more feedback from users than ever before. Even in a post-cookie, first-party data age, our customers and our employees are, for better and for worse, giving us more information and feedback than ever before. So I think the question is not what to listen to; it’s more, where do we point our ears to know what not to listen to?

0:28:37.4 TW: So, just to push back, and I’ll admit I have a somewhat visceral reaction to a lot of things like SMART goals; I struggle with things that get thrown around a lot, though I know yours is very thoughtful. But if I’m listening and saying, “Well, I work at a company that isn’t on the balanced scorecard train, and maybe bastardizes the crap out of SMART KPIs, or whatever they use isn’t SMART. What can I do?” I think, Moe, that’s where you were with the framing, even still saying it’s meritocratic. If I come in and say as an analyst, “I have been working with our staff, I have been listening to the questions coming in, and fundamentally, as an analyst, I think there are these seven problems.” That’s what I’m hearing from the business; no one is asking me, no one is coming to me asking me to solve these problems. Is that a valid framing, to be a little more guerrilla-tactics about it? In the pharma example, I could not get the stakeholders to fully engage; it kept cropping up on the fringes, and I’m like, “That’s a problem.” I wanted to come back and say, “These are the problems you have,” and then have the meritocratic discussion without necessarily going totally top-down. In kind of an ideal world, that seems like that’s valid. Is that fair?

0:30:22.5 MW: No, I think what you’re pointing to is the most real, common, practical problem with any ideal methodology, which is that most businesses are already out at sea, sailing, and it’s hard to fix the ship while you’re sailing. While you’re sailing, it’s also hard to be like, “Wait a minute, we are supposed to be heading due east and we’re heading southwest.” It happens with experimentation roadmaps, with product roadmaps, with priorities. And to use your example, Tim, it happens where people at some point recognize that we’re focused on the wrong goals or problems. This happens all the time.

0:31:05.9 MW: Which is why, as a lifelong founder, I can tell you that nothing made me happier than when the team would come to me and say, “Matty, I know these were the goals,” or, “Matty, I know these were what you said, or what we decided were the problems most worth solving, but I’m observing something that I think challenges the system.” Growth only happens when the system is flexible enough to continue to revisit goals, problems, solution hypotheses. So I think that what you’re describing, Tim, happens even in the most ideal circumstances. And not only do I think it’s acceptable, I think it’s incumbent upon people to challenge the system, and I think that’s where the most outsized growth tends to happen: when people are like, “Oh, the door that we should be walking through is straight ahead of us, but we were told to look to the door to the left.”

0:32:02.6 MW: It happens all the time. So, for anyone who’s an individual contributor out there, I would say that you should tune your ears in the direction of the goals that have been set forth, but to the extent that problems come up that are bigger and perhaps not aligned with those goals, or to the extent that you think the goals are not so SMART, or not so balanced, surface that. I can only say from my experience, that’s the beauty of being a chief executive: you get the leverage of the smart people, the sum of their intelligence, because by definition the hive is exponentially smarter than you and your silly goals.

[chuckle]

0:32:45.0 MK: Actually, when I was reading this section, it felt like maybe one of the areas that someone could jump into even if they’re not a leader or an entrepreneur, even if they don’t think of themselves as an entrepreneur: to say, “Okay, I’ve inherited this roadmap or these goals.” How would it play out? In my head, if you’re focused on performance measurement of the goals that you have at hand and coming up with actionable, testable hypotheses, it feels like you’re gonna get to a point where you’re reporting on some of that, where you can show that there’s a disconnect, where you can show that you’re not addressing some of those problems, to start to make space for those new ideas. I think you referred to them, Matty, as these natural points of re-prioritization or resetting. How often can that be true? Or what would you say to encourage someone who is dealing with those constraints, who can kind of see it, but is just wondering how to get there from where they’re at today?

0:33:43.0 MW: Yeah, so I think positive contagion of this sort of thinking is very possible up to a point, and that point will be determined, I think, by the open-mindedness and growth mindset of the next manager or next executive up. I do think that most businesses might have some goals at an enterprise level; oftentimes goals at the department level are not as well-defined, goals at the product level are even less defined, and goals at the campaign level are even less defined than that. But to the extent that, for whatever you’re working on, you have the capacity to define your goals and problems, you can then share it up one level and say, “Okay, does this align with what you think, department head, for our department?”

0:34:30.0 MW: And then the department head will be forced to either be open-minded and acknowledge that maybe they haven’t been as rigorous, or not, and then that department head could say, “Maybe I need to talk to my boss, the C-level executive, about this.” So I do think that positive contagion is a real thing, but I’m also realistic enough to know that it can be hard to innovate and change process within large, well-established businesses where inertia has set in. That does not, I don’t think, absolve anyone from doing their best to say, “Why am I doing this? How will I know if it’s successful? What are the problems most worth solving? And what are my best hypotheses for resolving those?” I think that anyone can do it; it’s just a matter of whether you’re doing it at a very molecular level or at the enterprise level.

0:35:19.2 VK: I think that change management piece, especially trying to overcome the inertia, like you said, in the large organizations, is huge. A lot of times with some of the engagements that I’m working on right now, just getting leadership to understand that some of their initiatives for the year are actually hypotheses, getting that head-nod, is like, “Ooh, okay, level one, we’re there.” And so then it’s making space for it over time, ‘cause it’s hard to just reset all at once. But no, those words are wise.

0:35:51.8 TW: You talk about storytelling… there’s storytelling generally, and there’s data storytelling. Let me play this back to you: if you’re an individual contributor, your problems lend themselves to storytelling, evidence-based problems, hopefully. The more that you can arm the manager, or the manager’s manager, with a clearly articulated problem, it feels like, yeah, it makes sense to name it, and name it effectively, and tell the tale, sprinkle some evidence in and say, “We’re not diving into giving you the answer; we’re not tasked with that. You’ve given us these other goals, you said go through the left door. But we spent the time to say, this is a problem, these are the problems we’ve heard.” It feels like the sort of thing where the tighter that story is, the more the manager is armed with it, and that can sort of drive some of that change in the mentality from the bottom up. I can’t say I’ve totally seen that happen, but that feels like a potential vector for it to spread.

0:37:12.0 MW: Yeah. Well, I’ll try to tie Val’s point and your point together, Tim, which is to say there are many moments to sort of Trojan-horse in this sort of thinking. And for most listeners, who have presumably not read the book, the thinking is honestly quite simple: you develop an evidence-based practice that starts with constraints to the system. Those constraints are desires (parenthetically: mission, vision, values), goals, and problems. And once those are established, you can then accelerate and tune in to hypotheses, experiments, and then, Tim, what you’re describing, which is what I call outcomes. When outcomes are ready to be shared, as opposed to sharing just a report or a table, or just saying, “Here’s what to do,” or “Push this variation to 100,” outcomes are a great opportunity to revisit every step of that practice. It’s not…

0:38:15.0 MW: It’s really not the “plus 2.9% with a 95% confidence interval.” That’s not the thing. The thing is: why did we do this? How did we say we would be successful? What was the problem we were trying to solve? Et cetera, et cetera. To what extent did we sufficiently solve it? So what you described, Tim, I think, is data storytelling, which is how I talk about it in the book. That is a great Trojan horse moment. Val, you didn’t ask this, but you were, I think, imagining other places where people can kind of sneak in this thinking. Think about weekly one-on-ones, employee reviews; these are all moments to say: how will I know if I am successful in what I’m doing? What problems do you want me to solve? What are our best hypotheses? I think the more that meta-language, that vernacular, can spread, the more likely healthy growth can be engendered.
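For anyone who wants the arithmetic behind a line like “plus 2.9% with a 95% confidence interval,” here is a minimal sketch of a two-proportion comparison using the normal approximation, with invented numbers. Matty’s point stands: this number is the easy part, and the recap questions around it are the substance.

```python
from math import sqrt

def lift_with_ci(conv_a: int, n_a: int, conv_b: int, n_b: int, z: float = 1.96):
    """Absolute difference in conversion rate between control (a) and
    variant (b), with a 95% CI (z = 1.96) via the normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_b - p_a
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Invented example: 50,000 visitors per arm.
diff, (lo, hi) = lift_with_ci(conv_a=2000, n_a=50_000, conv_b=2120, n_b=50_000)
print(f"absolute lift: {diff:+.2%}, 95% CI: ({lo:+.2%}, {hi:+.2%})")
```

The data-storytelling recap Matty describes would wrap this number in the goal it served, the problem it targeted, and the hypothesis it tested, rather than reporting the interval on its own.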

0:39:12.1 VK: So I do wanna talk a little about prioritization. You alluded to this earlier, Matty, about how you prioritize, or think about prioritizing, hypotheses versus problems. I have a lot of strong opinions about prioritization. This can be one of my almost Tim Wilson-level soapboxes. I have a major issue with some of the frameworks like PIE or ICE, where they ask you to estimate, layering a lot of assumptions on top of one another. And my problem with that is that I guarantee you it has never, never happened that someone says, “I have this really cool idea, but it’s probably not gonna have a big impact. It’s probably not gonna win, probably not gonna affect a lot of people, but can you just go ahead and test that?” That has never [chuckle] happened. So how do you pull the subjectivity out of it, so that you’re using something that’s really objective to line those ideas up and understand what to go after first? Because I’m assuming you’re going to be identifying more than one problem worth solving. Can you talk a little bit about how you think about prioritizing? [chuckle]

0:40:18.3 MW: Yeah. By the way, Val, hearing you talk, I did have a little bit of PTSD from my Clearhead days.

[laughter]

0:40:26.6 MW: Yeah, I feel you, sister. I would say that hypothesis prioritization is a massive boondoggle. It’s one of my biggest pet peeves about the way the CRO industrial complex has evolved. As you can guess, I’m a big believer in sizing problems: how big is the segment that it impacts, how valuable is the segment, how much evidence do we have that the problem is real, what would be the cost of solving it, what would be the cost of not solving it? And then, when hypotheses are developed, requiring people to effectively map their hypotheses to certain problems, so that there’s already an implicit prioritization based on the problems, as opposed to starting with, “Do I think my hypothesis is going to have some big lift?” Now, inevitably, people are going to claim that their hypothesis is absolutely gonna resolve a problem most worth solving. I will say it doesn’t happen as much as you’d think, but it does, of course, happen. In which case, I think you have to look at things that are truly evidence-based, like the size of the segment it impacts and the proximity to the ostensibly valued event, and things that are maybe a little squishier, like the difference of the variation, the likelihood of difference as opposed to the likelihood of validity.

0:41:52.7 MW: So I will say that, of the four people on this call, I am probably the least qualified to answer this, except to say that I share your cynicism about the PIE method, and I like to use problem quantification as a forcing function. And then, for anything I do around hypotheses, I really like to keep it very fact-based. I also think it compels good hygiene, ‘cause it requires people to say: this is exactly who I think this hypothesis should be tested for, this is the segment, this is where in the experience it should be tested, this is the variation I’m contemplating. Because you can develop a rubric around difference and potential: not potential impact so much as the potential for interfacing with a large and valuable audience. Anyway, it’s been a minute, but that’s me dusting off my old answer.

[laughter]
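As a rough sketch of the problem sizing Matty describes, in place of a PIE- or ICE-style guess at a hypothesis’s lift, you could score each problem on the evidence-based factors he lists: segment size, segment value, strength of evidence, and cost to solve. The weighting and numbers below are invented for illustration, not a framework from the book.

```python
from dataclasses import dataclass

@dataclass
class ProblemSize:
    name: str
    affected_users: int       # how big is the segment the problem impacts
    value_per_user: float     # how valuable is that segment ($/user/year)
    evidence_strength: float  # 0-1: how directly observed/quantified it is
    cost_to_solve: float      # estimated $ to resolve it

    def score(self) -> float:
        # Invented weighting: annual dollars exposed to the problem,
        # discounted by how solid the evidence is, net of cost to solve.
        dollars_exposed = self.affected_users * self.value_per_user
        return self.evidence_strength * dollars_exposed - self.cost_to_solve

problems = [
    ProblemSize("Returning buyers can't find their preferred style", 40_000, 38.0, 0.9, 120_000),
    ProblemSize("Checkout error on older browsers", 6_000, 35.0, 1.0, 20_000),
]
for p in sorted(problems, key=lambda p: p.score(), reverse=True):
    print(f"{p.score():>12,.0f}  {p.name}")
```

Every input here is observable or directly estimable, which is one answer to Val’s objection to PIE and ICE: nothing in the score asks anyone to guess how big a hypothetical lift will be.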

0:42:53.2 TW: Alright. Well, one problem we have is time, and that we have to stop at some point. So I think I can just kind of summarize the whole episode: as long as you identify your problem as being that you don’t have enough display traffic visitors clicking on your global nav, you’re off to the races. So this has been really useful. Obviously, that’s a terrible idea, but…

0:43:18.0 MK: Triggered, feeling triggered.

[laughter]

0:43:22.2 TW: But this has been a great discussion, and we do have to head to a wrap. We always like to wrap up the show by having a little bit of fun and throwing some other topics out by doing a last call: going around and having everybody share a link or an idea or an article or something that they found interesting that others might find interesting as well. And Matty, you are our guest, so would you like to go first with the last call?

0:43:52.8 MW: Sure. You guys are too kind. So Tim, you and I bonded 12 or so years ago over baseball statistics, and I think a book called Flip Flop Fly Ball. Do you remember Flip Flop Fly Ball?

0:44:03.1 TW: I’ve got it. Yup, I can tell you where it is. I can go put my hands on it right now.

0:44:10.2 MW: So Tim knows well that I’m obsessed with advanced baseball statistics, but even if you are not, The Daily podcast from The New York Times, two days ago, released an episode called “The Plan to Save Baseball from Boredom.” As many people know, baseball’s made a lot of radical changes this season, changes that I’m excited about, and this podcast explains the origins of those changes: functionally, the problems that baseball was trying to solve by testing these hypotheses. So I recommend that listen, whether you are a disinterested baseball hater or whether you love baseball. I think it’s very appealing.

0:44:47.2 TW: This year, this season specifically, it’s not stuff that’s kind of crept in. They did everything except the robot umpires: the bigger, pizza-box bases, the timers on things. This feels like a year where they’re like, yep, we’re gonna shove a whole bunch of stuff into production and see what happens.

0:45:07.8 MW: Can’t wait.

0:45:08.7 TW: Awesome. So Val, I guess as our guest co-host, maybe you wanna share your last call?

0:45:16.5 VK: Yeah, sure. So I’m about two-thirds of the way through a book called Marketing Rebellion: The Most Human Company Wins. It’s a book that I actually heard of from the founder of the guitar school that I was a part of when I lived in New York; he wrote this newsletter where he was talking about testing the idea of cutting his advertising budget in favor of some more local community investment, ‘cause this book is all about how traditional marketing is dead, and we need to be thinking about how to be human again, and that small is gonna be big. And I love that the first sentence of this book is, “A few years ago, I formulated an uncomfortable hypothesis.” So it feels right at home with a lot of what we’ve talked about here. It’s really interesting, it has a lot of really good case studies, and it’s a really fast read, so I highly recommend it.

0:46:04.7 TW: And it feels like getting back to the “you need to think and be creative” idea, which has been a burr under my saddle for a while. That sounds very cool. Moe, what about you? No pressure. First last call as a mother of two.

0:46:23.4 MK: Well, it’s very personally relevant, and with Matty’s joke about drowning and someone throwing a baby at you, this is pretty on point. So it’s actually an interview in Time magazine with the CEO and founder of Bumble, Whitney Wolfe Herd, who is the youngest woman ever to take a company public, which she did with her baby Bobby on her hip, which is hysterical ‘cause my baby girl is also called Bobby. It’s basically about some of her experiences as a mother while obviously running this very successful company, and it was really good timing for me to read it at the moment, because she talks a little bit about that feeling, the feeling of drowning and people throwing babies at you, which she describes as like driving 20 different cars all at once. But she also talks about how having children has made her reflect a lot more on how she uses her time. So it’s a really lovely read for anyone who just wants to be a little bit thoughtful, or give some of their brain power to how they balance family and work.

0:47:31.6 TW: That’s good. Now that you’ve mentioned her on the podcast, maybe that gives us a better shot of actually getting her on as our guest.

0:47:41.6 MK: I know, right?

0:47:41.7 TW: ‘Cause we’ve made that run at it. So anybody who has a Whitney Wolfe Herd connection, let us know ’cause we have questions.

0:47:49.1 MK: What about you, Tim?

0:47:53.7 TW: So for mine, I don’t think I realized I was gonna have it and then realized that there actually is a… I would be shocked, Matty, if you’re not a fan of pudding.cool. Are you familiar with it?

0:48:02.1 MW: Mm-hmm. Yeah.

0:48:03.0 TW: And I have used them as a last call before, but this is one that popped up in a couple of channels: not too long ago, they did a map of places in the US with the same name. So it’s very US-centric, but it basically tries to figure out… I can’t remember if we were recording yet or not when Portland came up, but you say Portland, well, in the US, some people are gonna think Portland, Maine, some are gonna think Portland, Oregon, and there’s a Portland, Tennessee, a Portland, Indiana. And so this basically, at a county level, uses Wikipedia and population size to develop a metric so that when you enter a place name, it tells you, depending on which county you’re in, the likelihood of which place people are referring to. So it’s a cool visualization. And this is where, personally… I grew up in Sour Lake, Texas, which does not show up in this, because I think Texas is the only state with a city called Sour Lake.

0:49:04.6 TW: But I grew up within 10 miles of China, Texas, and Nome, Texas, and kids from those three towns all went to my high school. So it’s an interesting way to think about some of the regional differences. It’s also kind of nice that when you pull it up, it just randomly throws a place name in, and it’s hard not to get sucked in, if you’re in the US, entering places from your youth or your present, and realizing how many different places are named Springfield, and how that would be interpreted differently depending on where you are. And like everything on The Pudding, it is very, very well visualized.

0:49:46.1 TW: So with that, we can call it a wrap. Matty, thanks again for coming on and talking. The book, again, is Listening for Growth: What Startups Need the Most, But Hear the Least. Hopefully people got from this conversation that it goes beyond startups; there are some really, really useful, thought-provoking things in it. You are on the Twitters, are you not, as @MattyWish, talking about baseball and music and other interests, and maybe there are problems floating in there a little bit as well?

0:50:26.0 MW: Yeah, mostly baseball and music, but occasionally startup stuff.

[laughter]

0:50:31.9 TW: Well, thanks so much for coming on; it’s been a pleasure. And for you, dear listeners, we always love to hear what you thought of this show. Feel free to let us know how terrible I did relative to Michael on the MC duties; every time I do this, I gain a greater appreciation for how smooth he is. You can find us on Twitter, on LinkedIn, and in the Measure Slack group. I will say a big thank you to Michael Helbling for letting me suffer through running this show, so gratitude to him always, as one of my dear friends and one of the co-founders of this whole little endeavor. And then, as Michael would always say, no show would be complete without thanking Josh Crowhurst, our producer. Josh, I’m sorry, there are more edits this time than normal, and it’s my fault, and you now, too, have an appreciation for Michael. So with that, for all of you out there, as you are thinking about the problems you have, the problems your organization has, your hypotheses, and how you’re prioritizing them, problems are the fulcrum of growth through all of that, and what’s really important is that you don’t make it a problem: keep analyzing.

[music]

0:51:53.9 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

0:52:13.1 Charles Barkley: So smart guys want to fit in, so they made up a term called analytics, analytics don’t work.

0:52:18.6 Kamala Harris: I love Venn diagrams. It’s just something about those three circles and the analysis about where there is the intersection, right?

0:52:30.2 MW: How many episodes have you done?

0:52:32.3 MK: It’s 218.

0:52:33.9 TW: Two hundred and… This is number 218. Yeah.

0:52:36.4 MK: What? What?

0:52:36.5 MW: Oh my goodness, bravo!

0:52:37.9 TW: Been doing this since 2016.

0:52:39.1 MW: Bravo!

0:52:40.5 TW: So slow and steady.

0:52:44.1 MW: Yeah. I was told you guys are like the Marc Maron and Dax Shepard of analytics podcasts.

0:52:51.6 TW: That’s… [laughter] Wow. Alright. So, normally, I am not the MC person, so if I bumble along or have to do a couple of restarts, that’s because this is not my normal role, but I’m gonna…

0:53:06.5 MK: You’re very good at MC-ing, just roll with it.

0:53:09.3 MW: Yeah, I can tell already.

0:53:10.0 TW: As Michael would say…

0:53:10.6 VK: How many times have you done it, Tim?

0:53:11.2 MK: He’s done it a few.

0:53:12.2 TW: Three or four. Yeah. There have been a couple cases where Michael like…

0:53:15.0 MK: He’s better than me. He doesn’t stop and start, being like, fuck, fuck, fuck, fuck, like I do. It’s so funny though. That’s one of my favourite parts of it all from last time: going, going, going, “Fuck my life.”

[laughter]

0:53:35.4 TW: Rock flag and a positive contagion of problems!
