Subscribe: Google Podcasts | RSS
Mistakes happen. In healthy work environments, not only is that fact acknowledged, it’s recognized as an opportunity to learn. That’s something JD Long has been thinking about quite a bit over the past few years, and he joined the show for a chat about psychological safety: what it is, why it’s important, and different techniques for engendering it. Michael trolled Tim almost immediately, which is: 1) ironic, and 2) slated to be addressed in a blameless post-mortem.
Photo by Timothy Dykes on Unsplash
0:00:05.9 Announcer: Welcome to the Analytics Power Hour, analytics topics covered conversationally and sometimes with explicit language. Here are your hosts, Moe, Michael and Tim.
0:00:22.2 Michael Helbling: Hi everyone. Welcome to the Analytics Power Hour. This is episode 184. Happy New Year. It’s the first episode of a brand new year, and when you think about years past, especially maybe the most recent two, nothing is more true than that we’ve all navigated so much change, so much alteration, so much need to adjust our thinking and actions, maybe more than at any other time in our lives. As analysts, we’re definitely close to change in our careers, which is a good thing, but sometimes inhibiting change or the expression of analysis, there’s a lot of risk that comes with being that person that has to speak up and harpoon someone’s important pet project or data story, and it’s sometimes kind of difficult. Tim, you’re the quintessential analyst. You basically remain the same, but you’ve probably observed this change in others I was talking about, right?
0:01:22.4 Tim Wilson: No, I’m usually pretty oblivious to other people’s behaviors and feelings and emotions. I just harpoon ideas.
0:01:29.9 MH: Harpoon ideas. And that is why I love you. Well, anyways, you’re my co-host on the podcast, welcome. And Moe, also our co-host, you’re always engaged in change. Has it been more pronounced for you over the last couple years? I laugh when I say that, ’cause it’s like, what… Maybe?
0:01:47.8 Moe Kiss: Yeah, I guess a bit. I guess a bit. Sure. Yeah.
0:01:51.8 MH: A bit. That’s right. And I’m Michael Helbling and yeah, we’ve gone through some change. We needed a guest, someone who’s done quite a bit of analysis, providing insights, maybe even written a book about it. James JD Long is the Vice President of Risk Management at RenaissanceRe. I think that stands for re-insurance. I don’t know. He’s one of the authors of R Cookbook. You probably know him from Twitter where he is under the name @CMastication, which is cerebral mastication, and he builds models, and I’m pretty sure he builds them better and explains them better than all of us, which is why we’re super excited to welcome him as our guest. Welcome to the show, JD, it’s an honour.
0:02:29.6 JD Long: Thank you guys so much for having me. All my models are wrong, I guarantee you.
0:02:34.8 TW: But are some of them useful? That’s the question.
0:02:36.5 JL: Yes. [laughter]
0:02:37.6 MH: But I love the way you described it… You described it very nicely. It’s like, yes, like George Box says, “All my models are wrong, but I do a really great job figuring out when and where,” and I thought that’s a really great and smart way to put it.
0:02:50.8 TW: That’s what the… Is that what the paper really says, if you read it?
0:02:54.9 JL: What… In my LinkedIn profile at one period of time, it just said, basically, I build wrong models and my skill is figuring out if they’re useful, or something, and I got an amazing number of people being like, “You can’t say that.” [laughter] And I’m like, “Oh, if you don’t understand that’s a reference to George Box, you probably don’t wanna be my friend or whatever we are on LinkedIn anyway.”
0:03:16.7 TW: I don’t wanna interview, yeah.
0:03:20.2 JL: I don’t wanna talk to you if you don’t understand what that is, what that’s a reference to. [chuckle]
0:03:24.3 MH: So I think one quick thing to get out of the way is… Been hearing a lot of rumors about who’s gonna play you in the movie about your career and life, and Leonardo’s been mentioned quite a bit, but also Matt Damon? But who’s the frontrunner in your mind?
0:03:40.5 JL: Oh, that’s a really tough call. It’s hard to say. Matt Damon, I’m a big Bourne fan.
0:03:47.2 MH: Oh, there you go.
0:03:49.1 JL: That’s kind of neat, and I’d like to bring that energy into spreadsheets and R. So maybe that’s who I’m rooting for, but they don’t let me have any say in this, right?
0:03:58.8 TW: He overcomes his Boston accent so much, so you figure, give him a real challenge to get into that Western Kentucky accent.
0:04:05.9 JL: Exactly.
0:04:06.9 MH: That’s right. Coming soon to the Marvel Cinematic Universe, R Cookbook, the movie.
0:04:13.8 JL: Could be some Mark… Mark Wahlberg is sort of the dark horse there, he’s coming in from behind.
0:04:19.9 MH: I could also see maybe Benedict Cumberbatch or something like that…
0:04:23.5 JL: Oh, there we go.
0:04:25.8 MH: Anyway, lots of good options. [laughter] Anyway, the point I’m trying to make is, JD, your career has spanned many years, you’ve done some amazing things, and the work you do is pretty impressive. But I think it’s kind of interesting, ’cause when we started talking a little bit about topics to cover, we kinda landed on a really interesting one, given the quantitative nature, if you will, of all of our work, which is: how do you manage change, how do you provide Psychological Safety for analysts and for analytics within organizations? And I was curious, as maybe we kick things off and get into a discussion about it, what’s propelling that interest for you, or what made that something that was interesting and top of mind for you?
0:05:11.9 JL: One of the things, and I think you’ll see this in other folks who come on podcasts and speak at conferences, often what we’re doing is we’re working out ideas out loud that we’ve been thinking about. In talking about it on Twitter, talking about it in a podcast, or speaking about it at a conference, we then get feedback, we have a conversation at the bar, a conversation later, and it helps polish some of these ideas. Some years ago, and it’s been a number of years now… pre-COVID, right? A whole other era. I gave a conference presentation about empathy, and empathy as a technical skill, and likened it to machine learning: it’s how I train myself to understand your experience. I’m building a model inside of me to help me understand what you’re doing. And I worked on that idea for a while and thought a lot about managing people and relating to technical people.
0:06:08.0 JL: And I loved when you had Hilary Parker on here and she brought empathy into that discussion; I thought that was fantastic. So an extension of the empathy is when you start thinking about teams of people. Empathy is often just a one-on-one concept. When you start thinking about teams, you start trying to think, “How do I make an environment so there’s space for empathy? And what are the factors that are making our team less functional?” And a whole bunch of this is from a guy I work with who heads up the team I’m on. He had brought up this idea of Psychological Safety, and I thought it was really interesting. And part of what was most interesting is he does not consider himself a very empathetic, tuned-into-people’s-emotions kind of person, and because of that, he thinks about it a lot, and as a result, he ends up being better at it than most. And that’s a real irony. He doesn’t feel like it’s his natural strength, so he thinks about it, works on it, and ends up being better than some of the folks who think it’s their natural strength. That’s a development mindset, right?
0:07:08.3 JL: And so we think a lot about this on our team. RenaissanceRe is known within the reinsurance industry… and, by the way, I have to say, I’m not here representing RenRe. I’m here representing me, and you’ll find people at RenRe who will disagree with stuff I’m gonna say. But it is the place that I work, and it’s the place where I think about a lot of these ideas. We’re known for having a bunch of really smart people. And as a matter of fact, I came in to work there with a guy, and he has a PhD, and I just have a master’s degree, and we had a conversation…
0:07:39.4 MK: Just.
0:07:41.1 JL: Just a master’s degree. And basically I said, “Wow, here at RenRe there’s so many smart people,” and he was like, “Here at RenRe? There’s so many smart people,” and his eyes got big, and I’m like, “Yeah, it’s funny. Same words, very different…” Is that an environment that you feel like you’re gonna flourish in, or is that an environment that’s intimidating? And I realized people were having both experiences in the same organization. Some were coming in like, “I love this. I go talk to people and they mention methodologies and tools that I’ve never used, and I’ll ask them to explain it to me, and they tell me… And they wrote their master’s thesis on wavelet transforms and how that signal processing happens.” And to some folks that’s super intimidating. So how do you have an environment where you have people with PhDs, people who are super smart, and you have analysts who are right out of college? How do you take that environment and make it a place where people can grow, as opposed to just a place where people are petrified?
0:08:38.9 MK: Totally.
0:08:39.0 JL: And that’s the context of where I was introduced to the idea of Psychological Safety. My boss said, “We got to figure out how to make the team psychologically safe,” and we started talking a lot about what that means. And I assume that’s what we’re gonna get into, so rather than me delivering a homily here for the next 20 minutes, I’m gonna pause and…
0:08:58.7 TW: I’m enthralled.
0:09:01.5 JL: I don’t mean to be delivering a conference presentation here. I’ll do that in time, but this isn’t the right venue.
0:09:06.7 MH: I just have a really quick question before we dive into other questions. Since you’ve been looking into this, what would you say would be the impact on a team of someone coming in and saying, “Just do your fucking job.” [laughter] Sorry. It’s more like a joke.
0:09:24.1 JL: I totally get it, because at some level… I’ll tell you where you mostly see it, you see this when the business and IT start working together…
0:09:31.9 MH: Sorry Tim.
0:09:33.5 JL: And sometimes the business guys… The IT guys kind of come in and say, “Oh, let’s talk about field definitions and what they mean, and the… ” And the business guys are like, “Isn’t that your job? Can’t you just…
0:09:45.0 MH: Yeah, yeah.
0:09:46.3 JL: Do your frigging job?”
0:09:47.8 MH: It’s a quote that Tim…
0:09:48.9 TW: He was trolling me.
0:09:50.7 MH: Tim might have used many times in moments of frustration. But Tim is exactly like what you described, I will say, because he thinks about it quite a bit, and he only tells people who he knows can handle it: do your fucking job. So if he ever tells you that, it means he thinks really highly of you.
0:10:10.2 JL: It’s a term of endearment, is what you’re telling us.
0:10:11.6 MH: That’s right. That’s what you should take away from that. Anyway, sorry, I couldn’t resist. I was like, that joke’s been living with me for weeks now, so let’s move on to serious topics now.
0:10:23.0 MK: Okay, Helbs, can we stop the trolling?
0:10:25.9 MH: Yes.
0:10:26.0 MK: Because I literally can talk about this topic for days.
0:10:30.6 MH: Me too.
0:10:30.8 MK: The issue, JD, that I’m going through at the moment is that I feel like Psychological Safety is the new tech buzzword that every team is like, “Oh, we’ve gotta have this thing.” It’s not at my current work, but at previous companies I have definitely had the situation where it was like, “We’re gonna start talking about this thing, and everyone feels psychologically safe and we can have radical candor,” and all this crap, and the truth is no one trusted each other, the environment was incredibly toxic, and throwing this on top was like throwing fuel on a dumpster fire. You can’t just… It sounds like the conversation for you was quite organic, like, “How do we start thinking about this concept?” Versus, I think, sometimes the leads in the team being like, “We’re gonna have this thing now, and everyone’s going to rally behind it.” But this person doesn’t like this person, and this person hates working with this person, and this is a really terrifying place to start if you don’t have incredible trust within the team.
0:11:31.1 JL: Well, that trust in the team is the Psychological Safety, right? And that’s why you can’t dump it on top. And by the way, I think it is, possibly, it’s definitely orthogonal to radical transparency, and it may actually be out of phase with radical transparency. ‘Cause I see radical transparency in… And here I’m referencing specifically in the Ray Dalio context of, we leave a meeting and we all work through the punch list of correcting each other for everything they did wrong in the meeting? Now, I have never been in one of Dalio’s outfits. I have seen people who operated like that, and I have never seen it produce an environment that’s healthy. I am not saying his environment isn’t, I’m just saying I’ve never seen it, and I’ve never been in his environment. And I have trouble imagining, given what I know about human psychology, I have trouble imagining that working. I think what ends up happening is you end up self-selecting pools of assholes who all work together.
0:12:32.6 JL: And possibly tough-skinned people, people who can just take abuse. And so you self-select a group of people who can all take abuse, and you lose whole sections of possibly good contributors who just couldn’t put up with that crap. I couldn’t put up with that crap. I’m just not wired that way.
0:12:52.8 MK: My mind is blown right now. This is…
0:12:56.6 JL: So this is like Abraham Wald’s airplane of Psychological Safety. Abraham Wald’s airplane is the airplane with all the red dots, and what we’re basically saying is the airplanes that landed were the ones that were not hit in a critical place, so you never saw the critical places get hit. That’s Abraham Wald’s airplane. Alright, so in radical transparency or whatever, what you basically discover is, “Oh, we got this high-functioning team, and the thing we do is we all critique the shit out of each other after everything we do.” Well, what you’re not seeing is all the people who just couldn’t put up with getting the crap critiqued out of them and everything they did picked apart. You didn’t create the environment that got the most out of your people. You selected the people that could work in your toxic environment.
0:13:46.5 TW: Which seems like it would wind up… Several years ago, we had discussions on balanced teams and diversity. We’ve had a few different discussions that talked about the benefit of having diversity, and I hadn’t really thought about this angle. That’s usually diversity from a cultural, gender, race standpoint, but mentality as well, I would think. Even if you’re making assumptions about what your customer wants… It seems like there would be a special kind of groupthink that would start to occur if the team that has been built is all the people that could make it through that particular gauntlet of abuse, or, some would say, efficiency, [chuckle] efficient communication. [laughter]
0:14:31.1 JL: Yeah exactly. But you’ve basically introduced a filtering crucible that everyone has to go through and it filters. It doesn’t create a certain type of person, it filters. So the people who aren’t… And now it may make certain people more that way, but it just filters out a whole bunch of people. So we said, “Okay, we want a healthy environment. We don’t want a filtering environment. How do we do that?” And in my mind, the biggest area where you can see this is, how do you handle mistakes in a team? What’s the team response when there’s a mistake? ‘Cause basically it’s all bullshit and fluff until somebody fs up, and then you’re gonna figure out what your culture is. ‘Cause you’re gonna have a bunch of rhetoric up until that point, but when somebody makes a real mistake, you’re gonna learn what your corporate culture is or what your team culture is. Is it a culture of blame? Is it a culture of anger? Is it a culture of fear? Or is it a culture of development?
0:15:31.2 MK: But here’s the thing, okay, with mistakes… I’m probably not as far along in the Psychological Safety journey as you, by any stretch of the imagination. From, I guess, a simplistic view, my approach has always been: if a mistake happens in the team, it’s my responsibility because I’m the manager of the team. And I feel like… I don’t know. This is obviously way past blameless post-mortems and the rest of it. But if that’s the example that I wanna lead with, and I share with the wider organisation, “This mistake happened, blah, blah, blah, I take responsibility for it,” and there isn’t a healthy culture around mistakes in the rest of the organization, that can lead us down a very dangerous path, for me personally. But I suppose that’s the stance I’ve always taken. So can you talk to us about how we get to that blameless point?
0:16:27.7 JL: Yeah, I have seen this in organizations I’ve been a part of, where we have maybe very safe corners, and I’m in that corner, and then there’s other teams we interact with, and they do not have the same level of safety. I watched this happen where we did an analysis and said, “Oh, the ops team stopped recording this record that they used to record, because they got really busy and they didn’t realize anybody else was using that field in the database.” Right?
0:16:55.1 MK: Yeah.
0:16:55.5 JL: Totally can see how that would happen. It was a miscommunication. And we wrote that up and documented it, and then somebody from that ops team was basically like, “Why’d you throw us under the bus?” We’re like, “We just thought we were being transparent, ’cause we felt safe.” But we realized, “Oh, their team isn’t safe in the way that our team is safe.” I’d learned a real lesson about how that team feels about their Psychological Safety, right? And I learned, “Okay, I need to work with that team and also help them trust us,” ’cause we just hadn’t worked closely together a lot; we’re over here doing one thing and they’re doing something else. And I realized culture can be unevenly distributed across an organization.
0:17:41.2 MK: Totally.
0:17:42.1 JL: And it can produce these frictions. I think the first thing is: not all mistakes within a given team need to float out of that team. And if it doesn’t have to float out of the team, no problem. There’s gonna be no impedance mismatch between your level of safety and the rest of the organization. At some point, there may be something that has to float out. At that point, the team lead needs to broker the conversation. And I don’t know what that looks like; there are too many moving parts depending on the organization. But the team lead needs to do that. Now, ideally, if you’ve got Psychological Safety at higher levels, you can work that out. I’ll give you an example of one that I was a part of. Basically, at the eleventh hour we changed a SQL query and by accident turned a greater-than-or-equal into a greater-than in an edit, and it messed up a date selection.
0:18:39.0 JL: And a bunch of things that should have been included in a window were dropped out because they were equal to the boundary, and they got dropped out of that selection window, right? We have all seen mistakes like this, and it triggered a cascade of corrections internally, of having to say, “Hey, we produced an exhibit that was discussed at these meetings and it was incorrect. Here is the corrected version.” And we went through how it had happened, and did a blameless post-mortem and discussed it with the juniors in our team. I had made the mistake. Nobody else had touched it; it was mine. Not just, “I’m taking it for the team.” I had made that mistake. And so our juniors got to see me make a mistake, and I bring this up with all our new hires, and I tell this story because they need to understand: I wasn’t drug out in the street and shot, and you won’t be either.
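The boundary slip JD describes is easy to reproduce. Here is a minimal Python sketch of the same off-by-one error; the dates, cutoff, and variable names are invented for illustration:

```python
from datetime import date

# Hypothetical records; the last one falls exactly on the window boundary.
records = [date(2021, 12, 29), date(2021, 12, 30), date(2021, 12, 31)]
cutoff = date(2021, 12, 31)

# Intended selection: include records up to AND including the cutoff.
intended = [d for d in records if d <= cutoff]

# The accidental edit: "<=" became "<" (the SQL ">=" to ">" slip),
# so records equal to the boundary silently drop out of the window.
buggy = [d for d in records if d < cutoff]

print(len(intended))  # 3
print(len(buggy))     # 2 -- the boundary date is lost
```

The dangerous part is that the buggy query still runs and still returns plausible-looking results; nothing fails loudly, which is exactly why a second set of eyes matters.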
0:19:39.3 JL: Now, when we did the blameless post-mortem, we realized what happened: we were making changes at the last minute, and we didn’t have enough time for a second set of eyes to review the changes. We got in a hurry and made ourselves try to move too fast, and we should have known better. And basically, the takeaway wasn’t that JD’s a screw-up, although that may be true. The takeaway was, we’re gonna not do changes within a week of certain report dates, so we can get more eyes on any changes, and we’re gonna do basically a no-changes window and freeze things. And if a C-level executive says they really want it, we’re gonna say no.
0:20:27.0 TW: It’s interesting ’cause… Hilary Parker… We touched very, very lightly on it when we had her on the show, but when she was still at Stitch Fix, she wrote a paper where she talked about… And I had not made that connection until you gave that example. You’re not a bad analyst. You weren’t wilfully trying to make a mistake. You weren’t being sloppy. It was actually a process issue, and the process in place said, “Yeah, scramble and make these changes up till the last minute. You can cut a corner, a code review or something else.” And to me, that was such a… I will carry that point from her for the rest of my career, where she was saying…
0:21:10.0 JL: That’s a very good one.
0:21:11.7 TW: But don’t blame the analyst. What was in place in the process that enabled that… It’s interesting. I guess it’s not that we’ll prevent the mistake. It’s actually minimizing the impact of the error or preventing it from getting to production. It’s fine that the SQL got tweaked, it’s just… If somebody was looking at a diff and said, “That’s odd. Why did you change it from a greater than or equal to a greater than, and just a second set of eyes… ”
0:21:39.5 JL: It would’ve jumped out right?
0:21:40.7 TW: Yeah.
0:21:42.2 JL: It would’ve jumped out. Which by the way, you just made the best argument in the world, why business analysts who are not professional developers should be storing things in version control.
0:21:53.2 JL: That wasn’t what I’m here to proselytize, but it is one of my high-value points. I am in a business team, not an IT team. But one of the things that we have all of our analysts do is use Git, and we use tools that can be stored as plain text, so lots of Python, lots of SQL. In addition, I use a little more R than most, but…
0:22:15.8 TW: That’s just because you’re more enlightened, let’s just be clear.
0:22:21.8 JL: But plain-text diffs are super valuable for making things like this jump out, and it also allows better commenting, right? One of my peers can just look in there and comment and say, “Hey, this doesn’t seem quite right,” and it’s easily shared.
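As a sketch of what a reviewer would actually see, here is a minimal Python example using the standard library’s difflib to diff two versions of a one-line query; the query itself is hypothetical, and a Git diff of committed SQL files would surface the same two lines:

```python
import difflib

# Two hypothetical versions of the same one-line SQL filter.
before = "SELECT * FROM claims WHERE loss_date >= '2021-01-01'"
after = "SELECT * FROM claims WHERE loss_date > '2021-01-01'"

# A unified diff of the plain text shows the one-character change as a
# removed line and an added line, which is what a reviewer scanning a
# version-control diff would see.
for line in difflib.unified_diff([before], [after], lineterm=""):
    print(line)
```

Because the change renders as a paired minus/plus line, the dropped equals sign is visually obvious in a way it never is inside a saved-over spreadsheet or BI tool.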
0:22:37.1 MH: Alright, let’s step aside for a brief word about our sponsor ObservePoint, a platform that helps you measure a very important gap.
0:22:46.3 MK: Hmm. What gap is that, Michael?
0:22:48.4 MH: Well, the gap they help you mind, mind the gap, is between what you think you’re measuring on your site and what you are actually measuring on your site. ObservePoint automatically audits your data collection to give you that visibility on an ongoing basis.
0:23:04.2 TW: That’s right, Michael. Essentially, you set ObservePoint up to check your site on a recurring basis, telling it what to look for and where, and it does that automatically, alerting you immediately if something goes wrong and providing automated reporting so you can track the results over time.
0:23:18.0 MK: And ObservePoint’s privacy compliance capabilities keep tabs on what customer data you’re collecting and help ensure you remain compliant with digital standards and government regulations.
0:23:30.7 MH: All of that, and they have great taste in podcasts as well. So if you wanna learn about ObservePoint’s data governance capabilities, you can request a demonstration at observepoint.com/analyticspowerhour. Now let’s get back to the show.
0:23:48.7 TW: So there was another… One of the things when we were kind of kicking this around, you referenced Black Box Thinking by Matthew Syed, which… I can’t say I’ve read the book, but I wound up watching one of his TEDx Talks. The whole TEDx Talk, which is very good, 15 minutes, is where he talks about fixed mindset versus growth mindset, and I think it started to make the connection that a growth mindset goes hand-in-hand with the psychological safety. And I guess that kinda goes into the, “It’s fine to fail as long as you fail forward, or if you’re not failing, you’re not pushing yourself.” I guess, is that how that ties?
0:24:31.4 JL: The way I think about it is: does an analyst, or a person I’m working with doing analytical work, get their identity from being the smartest asshole in the room, A, and B, do they get it from being perfect? ’Cause if I get a sniff of either of those two, they’re gonna have a bad time. They’re gonna have a bad time ’cause in our organization, you’re not gonna be the smartest person in the room. Sorry, there’s just too many smart people. And two, you’re not gonna be perfect ’cause nobody is. So I’ve already got a problem if I’ve got somebody who’s giving me a vibe that that’s how their self-image is structured, and I kinda try to filter for that when I’m interviewing and hiring. And if I do get somebody that has a whiff of that, I just know I’ve got a development area I need to pay attention to. ’Cause I want somebody that comes in and says, “Being around smart people is like the most fricking amazing thing I’ve ever seen, ’cause I got all these smart people that I can ask stuff of, and they’re gonna make me smarter, and they’re reading stuff that I’m not reading, and I just love that.”
0:25:35.0 JL: I want people that are just rolling around in that like a dog, like this is exciting and fun. And on the flip side, I also want somebody that doesn’t have an identity crisis when someone finds that they made a mistake; let’s just deal with the process or what they didn’t know or whatever. Or if I ask them to reformat their SQL, I don’t want that to be attacking their self-image. Right?
0:26:01.8 TW: Yeah.
0:26:03.4 JL: And so those are like the two… A couple of the things I think about with analytical folks.
0:26:09.1 MH: I’ve found that that person you’re talking about that has to be perfect, or wants to… I know who you’re talking about. But a lot of analytics people also come in from the bottom side of that, which is a lot of impostor syndrome and lack of confidence, and how do you identify that? ’Cause I think that’s a coaching issue the same as the other one is. It’s kind of like what you just said: if they make a mistake, it’ll really make them pull in sometimes. How do you let people know, “Hey, it’s alright”? And you kind of gave one example with the story you shared earlier, where you’re like, “Hey, let me tell you a story about how I basically blew up a production system right in front of everybody…”
0:26:50.5 JL: And we had to correct it. We had to publicly kinda correct it with lots of people in the organization, yeah.
0:26:55.0 MH: “And it was very visible, and I’m not dead.” But I find that’s something that a lot of analytics people need is some encouragement to stay on that level as opposed to like drift below the water line, if you will, of their own confidence.
0:27:11.8 JL: So the impostor syndrome is real. I think especially with new hires and people who are doing analytical work where that was maybe not their academic background. They often constantly feel like, “Oh, I’m still trying to figure this out,” and often they know more than the people who went through master’s programs in data science. Right? But they always kind of feel that somebody else has more insight than they do. A couple of things we do by design in our team: I try to get young analysts early wins. And I think I gave you guys some examples of this in some documents we traded in the pre-read. One example of this is the first code review. The way we do it is with high mentorship. So somebody’s been working and learning on the first thing they’re writing, their first piece of analytics, and it may just be some SQL scripts and a little bit of Python; they’ve been working with whoever hired them to do that. So there’s not gonna be a, “Oh, this is awful,” ’cause they’ve had a partner there with them who was experienced and mentoring them. And then it comes time to do the code review and they’re gonna be real nervous. Well, guess what, it’s incredibly safe ’cause there’s just not gonna be any bad surprises.
0:28:36.0 JL: And so this is exactly what you wanna create: situations that feel risky and scary, but are actually really safe and are gonna end positive. ’Cause then you’ve got an analyst who goes through this thing and discovers, “Hey, it’s all great.” Right? They feel like, “I was shot at to no effect. This is wonderful. I’m euphoric.” And that’s a step toward building trust. And then they discover, “Oh, that process isn’t so scary.” ’Cause I’m not hiring developers; they’ve never been through a code review. And because they aren’t developers and they’re business people, they may be carrying around a lot of “I’m sure I’m not doing this right” feelings, and we get them past that by basically letting them have wins.
0:29:19.2 MK: Yeah, see, I feel like I’m totally fucking this up. I feel like that’s a…
0:29:24.2 TW: You know what, Moe, you are, but it’s okay. You know what? This is… It’s okay, this is safe…
0:29:32.0 MK: ‘Cause this point is so good, right? We do a code review with our newbies, except the difference, I think, is having the mentor work through the code with them first. ’Cause our first code reviews, I feel, can be pretty brutal because there are normally like 100,000 changes. I feel like everyone gets through it, but this would be a much nicer experience: you work with someone and kind of pair program, so it’s in 90%, 95% good-enough territory, and you get some feedback without it being like we’re knocking down the house and starting from scratch again, kind of thing.
0:30:06.8 JL: I don’t wanna demoralize an analyst right out of the gate. I wanna do that first code review kind of fast and with relatively low risk. We start building trust, they see some of the design patterns ’cause I’ve already helped them with that, and then we’re chugging, then we’re getting lots of relatively fast wins. And like my last analyst, when she had her first code review, I gave her a rubber duck from Amazon, like a big oversized rubber duck. Are you guys familiar with rubber duck debugging?
0:30:37.3 MK: Yes, definitely, definitely, definitely. I’ve had people throw them at my desk ’cause they’re like, “Moe, talk to the rubber duck,” and then bother me.
0:30:43.9 JL: Well, let me explain rubber duck debugging, ’cause I’m sure not everyone in the audience is familiar with it. This is one of those ideas I steal from professional developers and bring over into the analytics space. The idea behind rubber duck debugging is, when you have a challenge and you really can’t quite figure out how to make it work, you take your rubber duck and you hold it and you explain your problem to your rubber duck, and it’s amazing the number of times that your rubber duck will solve your analytics problem.
0:31:13.3 MK: ‘Cause you know Tim… So Tim, you know how normally you bother someone in the team and you’re like, “I wanna talk about it out loud,” and as soon as you interrupt someone and start talking about it, you’re like, “Oh my god I figured it out.” So you do the same thing, but with a rubber duck so that you don’t take someone’s time.
0:31:29.4 TW: Well, I’m laughing ’cause I actually… On the front of my kayak, which is like my escape place… I’d found a rubber duck floating in the lake a year or so ago, and I’ve had it strapped on the front of the kayak, and I think… That’s one of my places that I’d get away and let my brain not be looking at a screen. I might have unintentionally… Maybe the universe was telling me that I needed to talk to a rubber duck some more.
0:31:54.3 JL: I think it’s true.
0:31:55.0 MH: That’s amazing. That’s so much better than read the fucking manual.
0:32:02.7 TW: But there is gonna be a whiff of a “Do your fucking job” coming out here, because when someone makes a mistake… ‘Cause I… And I’m thinking through when I have been in kind of a mentoring type role, or in… A lot of times I’m reviewing some deliverables, and when analysts are just starting out, they often suck at deliverables, so I’ve got… Well, like, “Okay, let’s do feedback, let’s improve your… Let’s improve the presentation.” And I see the ones that are like, “Okay, yeah, this is… ” It’s kind of… It’s work ’cause they had it and it’s just the two of us, and they come out and they’re like, “Wow, this is so much better,” and then they’re more comfortable coming to me and they learn and grow, and that’s great. But then there are ones who are like, “Tim, he’s just gonna keep giving… ” Is there the risk if you make it too safe that somebody… If they make… If somebody makes a mistake, I want them to feel a little bit of accountability and be like, “Oh, I feel bad about that. I want them to take… What did I learn? What am I gonna do different? How do I move forward? And is there… ” How do you avoid the misinterpretation of like, mistakes are not a problem? Like, “Oh, yeah, this environment’s cool. We can screw stuff up and nobody gets in trouble.”
0:33:24.0 JL: Yeah, so you’re basically saying by making this too safe, are you lowering the standards? And…
0:33:28.4 TW: Or is there the risk for some people like… Yeah.
0:33:32.2 JL: If you do it wrong, yeah, you can lower…
0:33:35.3 TW: Okay. [laughter]
0:33:36.5 JL: Yeah absolutely, right? I mean, if you’re like, “Oh, I’ll make it safe by being laissez faire and having no standards,” you’ll create an environment where people feel really comfortable producing crap. That’s not the goal. That would be a mistake of leadership, right? What I wanna create is a high-performing environment where high-performing individuals can feel safe iterating and performing at a high level. So I don’t lower my standards. I just do evals faster so that we’re not having the experience that Moe described where you have like 10,000 edits. That’s just demoralizing. And I know this, ’cause when we sent our cookbook second edition off to the copy editor, I can’t remember, I have this note somewhere, the number of edits was so big that Adobe Acrobat would not be stable with that many. They had to break the thing into two documents, so that… ‘Cause it could only handle like 6000… 6258 comments was the most, and we had 12,000, so they had to break it into two PDFs. And I got that document, and I gotta tell you, I was sort of like, “I’m kind of done with this.”
0:34:48.5 TW: Maybe you should have done this one chapter at a time. Is that the…
0:34:52.5 JL: Right? It almost became a filter, right? Now, fortunately, I was working with O’Reilly and one of their editors, so we used their copy editor, and one of their editors said, “I can help you enter this directly into Git,” and I said in my Southern way, “Bless your heart, ’cause I would have just walked away from the damn thing.” And that’s what we don’t want in code reviews, or any review of a deliverable or delivery of product. What we also don’t want is people seeing interacting with Tim as an obstruction and routing around it. ‘Cause people will behave like a distributed network, and if there is pain, they will avoid pain. So we don’t want them putting off having you look at it until the night before, ’cause Tim can only give me so much feedback because we’re delivering this thing tomorrow. So they postpone in order to keep you in a small box. Those are totally not the incentives we want. So we wanna do reviews quick and early. And one of the things I run into, more often with me, is perfectionists, right? I’ve got folks who are used to maybe academia where nobody looks at their stuff until it’s very polished, and they won’t commit and push multiple times a day, and I’m like, “No, no, push your work. It’s not gonna hurt.” [laughter] And plus then you have a backup. But don’t wait until it’s perfect and then put it on GitHub, right? Let’s do lots of iterative stuff.
0:36:17.8 MH: I think you just made a really important point because there’s a lot of analysts who are in environments where they feel pressure or even asked to basically just provide one thunder clap of insight and nothing else, and no iterative process behind it. And it’s sort of intuitive for me, is sort of like, “Oh, our analysis is always way better when we kinda chunk it out and think through it a little bit and peck at it, and then we get to the end and we have a much better… ” ‘Cause I used to work in a large agency and a number of meetings would happen where we went away, did a bunch of analysis for the client, came back to present what we found and the client’s like “Yeah, we already know that.”
0:36:58.7 MK: Yeah, totally.
0:37:00.8 MH: And you’re like, “Yeah, that makes sense, because you’re actually running your business and we’re just fucking around the edges.” I’m swearing a lot in this episode because I’m very passionate. [laughter] But it got me thinking, ’cause I was sort of like, I don’t like that feeling. I hated that. That was terrible. How do we fix it? And I was like, well, you leave agencies first off, and then go do real analytics. But a lot of people are stuck in there. And one of the things I wanna turn towards is that this sort of psychological safety, or creating that environment that you’re describing, happens at a lot of different levels.
0:37:36.0 MH: Everybody has to play a part. And it’s almost like a gate system. It’s like wherever it gets turned on, anyone below that can turn it off from a leader’s perspective, and if it’s not turned on, it can get turned on lower, but it can’t go up, right? So it’s very individually dependent, it seems like. How do you, as a person sitting in a seat, take actions that create it around you or promote it? And then the other piece of that is how do we build more resilience into these kinds of systems so that if this boss who’s amazing leaves, we’re not all of a sudden back to square one with psychological safety again.
0:38:20.9 JL: So a bunch of this works best if you have evolved thinkers leading… I have C-level executives who have thought about this stuff and are leading by example. That works best. And then it’s an issue of keeping it propagating down, so make sure, as each person begins to manage someone, that they’re ensuring their team is psychologically safe, and you can even ask them, as a mentor to a young manager, what they’re doing to foster psychological safety, or are there opportunities for you to share stories with your team about situations that will help them understand the culture in our organization around psychological safety. Or you can pick a different word. Like, to Moe’s point, I’m afraid psychological safety is getting watered down; it’s gonna become this magic pixie dust, kind of like data science, that we just sprinkle on a turd and hope we get something edible out the other side. And I think it’s not that, right? So maybe we should replace it with other terms, like being a decent human.
0:39:27.5 MH: Yeah.
0:39:28.9 TW: That might be the best metaphor, the best visual, that’s ever been described in the history of this podcast though, so…
0:39:36.3 JL: Yeah, well glad I could bring it.
0:39:38.9 MK: But, so can I… I just wanna rewind a little bit, JD, you mentioned…
0:39:44.1 JL: Yeah.
0:39:44.2 MK: I guess there being like a bit of an iterative process when it comes to sort of feedback and that loop. As you were talking about that, the thing that I was trying to work through in my head is like, how much of that is the responsibility of the team? And where is the stakeholder’s involvement? So if you are client side, you obviously work really closely normally with your business stakeholders, and even in agency land, like Helbs was talking about that, you need some buy-in from your stakeholders. Is this a… It gets harder, I guess, to manage once you go outside of your team, but you also presumably do want their input ’cause they’re ultimately gonna be I guess the final reviewers of the work. So from your experience, where is that balance?
0:40:33.3 JL: So part of this is, you’re about to learn some things about my personality. I’m the kind of guy who, when I walk on an airplane and I don’t know who the sky marshal is, I must be the sky marshal, right? Like if something goes wrong, it’s gonna be my responsibility to fix it. I blame this on my agrarian upbringing, where you’re the mechanic, you’re the plumber, you’re the agronomist, you’re the economist. If… Well, somebody asked me once if I was into DIY, and I was like, “I don’t even know what that means. What is that? Explain it.” And they explained it and I said, “You’re just describing doing shit. So yeah, I do shit.” If I don’t do it, I don’t know who’s gonna do it, right? So yeah, I guess I’m into DIY. It just sounds like getting things done to me. So similarly, I always assume I’m the culture bearer in any part of my organization that I’m working in.
0:41:23.9 JL: And so I look around, and one of the things I’ve noticed, for example, is that we have consultants that come and work with us on the IT side, kind of like extra staff, so they’re not just coming in to do a specific thing, they’re kind of like staff developers, but they’re consultants. When they come in, I realize they don’t talk in the group chats and they tend to always ask questions in direct messages to one person. And what does that tell me? They do not feel psychologically safe. So I will say, “Hey, good question. We should ask this in” (we use Teams) “whatever team channel has business people and IT people. We should ask it there ’cause I bet there’s other people who don’t know this either,” and I redirect those conversations into “public”, into a group area, and have those conversations there, and they discover, oh, they weren’t the only person who didn’t know this or had this question.
0:42:20.1 JL: But it’s a conscious effort by me because I’m seeing they’re not safe. Right? And I also have formulated a plan for other ways to help pull them in and feel safe. I could just say, like, don’t do that. I can be rough about it or whatever, but what I wanna do is I want them to ultimately feel safe asking questions of the business people like myself and also their peers in IT, and I’m like, “Yeah, it makes sense to me that a consultant wouldn’t feel safe. They got zero track record.” So we gotta figure out ways to build that. And similarly in the organization, I see sometimes business people don’t partner with the development teams as tightly as I think they should in order to get what they want built out of development teams, and some of that is safety and some of that is ownership. I think every business person who’s the business owner of an IT project should see the success of that project as reflecting on them, but often they kinda have a “them” approach of, “If those IT people would just do their fucking job, then this would all be fine.”
0:43:26.0 JL: But so often they’re missing… The IT people are missing information, and the business folks aren’t coming to the meetings and they aren’t talking about design with the developers. And in that case, I try to nudge my peers in the business side and do a little mentoring. I can’t tell them what to do, but I can sort of catch up with them and say, “Hey, I’m afraid of, on your project, you’re gonna get something you don’t like. You really need to move in with those folks and really get engaged, ’cause you could learn a lot from the process.” So I can nudge, I can’t tell them what to do.
0:44:00.1 MK: Okay, I’ve decided it’s time for a new book because, as I’ve asked you things, I’ve already learned two very, very good things. So one is about the code reviews, the second is about direct Slack messages to people and how you re-route them in a helpful way. I feel like this is your next book, like the practical shit for psychological safety, like here are the actual things that work, not just like spouting about feeling safe.
0:44:28.3 TW: Well is there… I mean I guess there’s another… There is a piece, and it kinda goes a little bit maybe towards the impostor syndrome, but I feel like in the absence of proactive efforts to build psychological safety, there is a tendency for a lot of analysts to make a lot of assumptions that are wrong. I think, put aside the psychological safety piece: I will contend that most environments are much, much more forgiving of mistakes than the analyst assumes.
0:45:01.6 JL: Yes. Yeah.
0:45:01.7 TW: And so…
0:45:01.8 MH: A lot of times.
0:45:03.2 TW: There is this… And even working with marketers that are like, “If I set a goal for a KPI and I miss it,” they behave as though a cardboard box will be delivered to them at the front door by security, when the reality is… And the same thing goes, kind of the iterative like, as Michael, you were talking about, like talking to the stakeholder as you’re working on something, and there’s this idea like “They’re gonna freak out if I keep bothering them. They’re gonna expect me to already know this.” And so I think there’s a degree of just going through life with the assumption that a mistake isn’t the end of your role, and if it does hurt your career, well, good, ’cause you’re probably not at the right company. Right? I mean let’s map this out.
0:45:55.9 TW: Do you wanna work in an environment where that is the case? So I think there is… There’s a little bit of an onus that can go back on the individual contributor to say whether you have somebody actively trying to promote it, which I think they should be promoting psychological safety and showing and demonstrating, there’s also a part to say, you’re probably safer than you think, and the fact is to behave that way, you gather really good data as to whether or not you’re safe, plus you actually are modeling for your peers and people who are more junior. So I feel like I push that a lot to people.
0:46:31.7 JL: I think folks assume they aren’t safe until they have empirical evidence to contradict them, right? And that’s rational risk-taking behavior that kept us from getting eaten by lions in the jungle as we evolved, to just always assume the negative.
0:46:45.5 TW: That saber-toothed tiger probably won’t eat me.
0:46:48.3 JL: Yeah.
0:46:48.8 TW: Yeah, yeah it’s probably how they…
0:46:50.5 JL: There were people who had that view right? And they won’t… They got… Once again, they got filtered out of the gene pool.
0:46:58.3 TW: Yeah.
0:47:00.0 JL: So, we are the ones who are just paranoid as shit. That’s how we got here genetically, is we just assumed everything was gonna screw us over, everything was gonna eat us, and lots of problems were gonna be had. So what’s left is homo sapiens who want to collaborate but also are not very trusting sometimes. ‘Cause we fear danger, and we see it as danger until we’re given lots of evidence to the contrary. So a whole bunch of the stuff I talked about is really about like the code review and the me telling anecdotes about screwing up and all that. What I’m really doing is I’m trying to give empirical evidence to skeptical humans, I’m trying to give them empirical evidence that they’re safe, and they need to experience that viscerally and then they will begin to experience safety. The stories help. They aren’t enough. One experience helps, but it’s not enough. They kinda need a track record of it. And so there’s nothing wrong with manufacturing those experiences, right? You do this with kids, you manufacture experiences and they grow, and we do that with school. Why not set up things like the code review or other…
0:48:07.2 JL: And all these, by the way, are way more useful than the bullshit HR is gonna come up with, ’cause they’re gonna have you do trust falls and a bunch of contrived crap that everybody is gonna smell the BS on, right? But what will really resonate with people is things like actually doing something with them, mentoring them, reviewing their code and saying, “You know, hey, this is really great. I love that you did this. But we always do this part this way, and here’s why.” And they realize, “Oh, I’ll get… I can get redirected and I don’t get fired.” And, “Hey, I looked at the code you committed last night on GitHub. You’re really doing… That was really cool.” Right? “Oh shit, somebody’s noticing that I’m pushing code. I didn’t ask him for a review. And not only that, but he said something good about it.” Now I may have had three thoughts of things like, “Yeah, when we code review, I’m gonna straighten some junk out in there,” is what I wanna say, is if there’s… I mean I’m not gonna… I’m not gonna blow smoke, right? If there’s really something like, “Oh, and my analysts regularly do this. They figured out a way to solve the problem I never would’ve thought of.” I’d tell them that crap every time I see it. Catch them doing right, whether it’s your peer or your subordinate, right?
0:49:14.1 MK: It’s actually the same as dog training.
0:49:17.8 JL: It really is. I wasn’t gonna say that ’cause it may sound condescending to people who don’t love dogs as much as I do.
0:49:23.4 MK: It’s not meant to be. It’s like you reinforce the good and you ignore the bad.
0:49:28.0 TW: Moe, I just wanna know how Harry’s first code review goes and if you could… [laughter]
0:49:34.5 MH: Well, I would second JD that a book along these lines might actually be a very, very positive thing, ’cause there are thousands and thousands of analytics managers and directors and vice presidents now that didn’t exist in the world 15 years ago that are just starting to really try to grapple with a lot of these things and there’s not a lot of guidance out there. And it’s interesting ’cause you also did a presentation on empathy and said that analysts were uniquely suited to take this on. And so I’m like… Part of me is like, “Man, could 15-year-ago me come and work for you? Because I don’t… ” I’m like, I would’ve avoided so much pain and suffering over the years if we could’ve been part of a team like that. But there’s so many good lessons there.
0:50:17.3 TW: It was the R Studio Conference where you did the empathy…
0:50:20.7 JL: Yeah so actually that was the second… The first time I did that was at New York R Conference, and I think it was a couple of folks in R Studio heard it or heard about that and were kind of like, “Hey, this is actually part of what we value in our organization,” and they invited me as an invited presenter to come speak at the R Studio Conference.
0:50:41.2 TW: All that is to say the session is available on YouTube.
0:50:45.9 JL: Twice on YouTube right?
0:50:46.5 TW: Twice on YouTube, nice.
0:50:47.9 JL: Well the R one is on… The R Studio one is on their site. They use a different hosting than YouTube. But you can get it at rstudio.com. They’ve got all their conference presentations up there. So it’s there. It’s also on YouTube for the New York R Conference.
0:51:04.5 MH: Alright, it’s time for the quizzical query that’s usually a conundrum for both of my co-hosts, it’s the Conductrics quiz and joined by our special guest quiz taker, JD Long is sticking around. Thank you, Hilary Parker, for making that seem like a thing that people should do. Let’s talk really quickly about our sponsor, which is Conductrics. The Analytics Power Hour and the Conductrics quiz are sponsored by Conductrics. They build industry-leading experimentation software for AB testing, adaptive optimization and predictive targeting. You can go find out more at conductrics.com to see how they could help you with your experimentation programs. Alright, so would you like to know Tim, who you are competing for?
0:51:53.7 TW: Do tell.
0:51:54.3 MH: It’s actually a very close friend of yours. His name is Adam Greco.
0:52:02.8 TW: Oh man. Oh dear.
0:52:03.5 MH: That’s right, Adam. So Tim, no pressure. And Moe and JD, the listener you are playing for is Joe Eveler, who is a good friend of the show.
0:52:12.6 MK: Good day, Joe.
0:52:14.1 JL: Hey Joe.
0:52:15.1 MH: Alright, so here we go. Many times, it’s not feasible or even possible to run AB tests where we’re able to assign treatments to individuals; however, often we can assign treatments at an aggregate level. For example, experiments that test educational programs may assign treatments to classrooms or to schools, so that everyone within a class or school is exposed to the same treatment. Or for marketing promotions, we may assign treatments at the store or geo-level. These types of AB tests are called cluster RCTs (randomized controlled trials). While cluster RCTs are often less complex and hence less costly to implement, there is no free lunch. Cluster RCTs will tend to have less power than a standard RCT with the same sample size. This is because students in the same class or shoppers from the same store are often, in some sense, much more similar to each other than a random collection of students or shoppers, and hence, each sample provides less marginal information. What is the term used for the measure of similarity within cluster RCTs? Is it A, CCR, cluster covariance rate? Or is it B, ICC, intra-class correlation? Is it C, ISC, intra-store correlation? Or D, KLD, Kullback-Leibler divergence? It might be Leibler, I don’t know. E, ECA, effective cluster assignment? Alright, so there we have it.
0:53:56.8 JL: I want you all to know this was written by Matt Gershoff who’s a buddy of mine and a drinking buddy. This is exactly what it’s like hanging out at the pub with Gershoff.
0:54:08.5 MK: Exactly.
0:54:08.9 JL: Exactly like this.
0:54:10.3 MH: It’s hard to feel smart sometimes with Matt in the same room as you, but at least he’s very nice.
0:54:14.6 TW: It’s fun if the bar’s crowded, then the more he talks, the more space, like the bar…
0:54:19.6 JL: We get the corner to ourselves…
0:54:22.4 MH: That’s right.
0:54:22.5 JL: Five minutes in, there’s nobody within 15 feet.
0:54:24.8 MH: Alright, so JD, you’ve learned one key tip about the quiz which is procrastinate around…
0:54:34.6 JL: Yeah so Tim, it’s your go, go for it.
0:54:38.9 TW: Oh good. I’m just gonna eliminate C then ’cause I am pretty confident it’s not intra-store.
0:54:41.7 MH: Oh okay.
0:54:43.4 TW: So I can eliminate one. But…
0:54:45.1 MH: Intra-store correlation, C. I agree, that one didn’t even… For me, I didn’t even think that one was correct. You are absolutely right. So now we’re down to four options.
0:54:55.0 TW: They can pick the winner or they can eliminate another one.
0:55:00.6 JL: So Moe, the issue here is like correlation or covariance, and the first two had that in the name. I mean I know that’s the underlying principle. I gotta admit, I’ve never heard of any of these terms. So I don’t do AB testing in reinsurance. So…
0:55:20.2 MH: You make a prediction and…
0:55:22.4 JL: Exactly, and then hope my career runs out before we figure out if it was right or not. Do you have any thoughts here Moe, about… We could either choose or eliminate.
0:55:32.1 MK: Well, I definitely… Well, I think we can eliminate E possibly for the same reasons that you were just mentioning. My only complicating factor is I did not get down what D is, so…
0:55:43.9 MH: Oh D, KLD, Kullback-Leibler divergence.
0:55:48.8 MK: Oh.
0:55:50.3 MH: Kullback-Leibler divergence. I might be pronouncing that incorrectly, I apologize to Kullback and Leibler.
0:55:57.4 MK: Yeah I’m leaning towards eliminating E.
0:56:00.6 JL: Let’s do it.
0:56:01.3 MH: Effective cluster assignment, alright, let’s go to the board.
0:56:05.7 MH: Yes, you can eliminate E. Alright, we’re now down to the three options, A, cluster covariance rate, B, ICC, intra-class correlation, C… Oh no, we got rid of C. So now we’re… Which one did we get rid of?
0:56:21.6 MK: We’ve got A, B and D left.
0:56:21.7 TW: Yeah.
0:56:21.8 MH: A, B and D. Sorry, my bad. So KLD, Kullback-Leibler divergence, is still in; our only three left are A, B and D. Thank you, Moe. You should try this side, Moe, it’s really… You’re good at it.
0:56:31.6 TW: Yeah, what was A fully, was what?
0:56:35.4 MH: CCR, cluster covariance rate.
0:56:38.7 JL: As a John Fogerty fan, I’m pretty partial to that…
0:56:42.0 MH: I like it, yeah.
0:56:45.5 TW: Having literally just wrapped up the lyrics to Bad Visualization, the parody of Bad Moon Rising that…
0:56:52.0 MH: Oh nice.
0:56:52.1 TW: Could be we’re coming to them.
0:56:52.9 JL: Fantastic.
0:56:55.2 TW: I’m gonna go ahead and just kinda end this all… ‘Cause it’s not D… And I think I’m gonna say that it is A partly for the CCR. It’s gotta be A or B. I’m right there with JD, but I’m just gonna jump all the way ahead and say I’m gonna go with A.
0:57:09.1 JL: I think it’s B, so let’s find out.
0:57:10.9 MH: Alright, so Tim thinks it’s A. JD and Moe, good with B?
0:57:17.0 MK: Yeah.
0:57:17.3 MH: Alright, let’s say that instead of saying Tim got it wrong, why don’t we say that JD and Moe got it right? It is ICC, intra-class correlation. Congratulations, Joe Eveler, you’re a winner.
0:57:29.2 JL: Yay, Joe.
0:57:31.1 TW: Also a good friend of mine.
0:57:35.4 MH: Yeah, he’s from that area.
0:57:35.5 TW: Yeah he’s right here. I’m actually having a virtual lunch with him.
0:57:39.9 MH: Try to be careful about doxing our listeners on the podcast, so I don’t say where he’s from. But he is on Twitter, so if you look around, you might be able to find him.
0:57:48.2 TW: I just… I wanted to look in Field Experiments, the book, and confirm the chapter that I have not yet gotten to, which doesn’t… That applies to most of the chapters.
0:57:58.6 MH: Alright. Well, like we said before, we’re very thankful to Conductrics for sponsoring the Conductrics quiz. Please check them out at conductrics.com. And now let’s get back to the show.
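A note for readers: the quiz’s answer, intra-class correlation, is easy to make concrete. The sketch below is illustrative only and not from the episode; the `icc` function, the toy “classroom” data, and the cluster size `m` are all invented for this example. It uses the common one-way ANOVA estimator of ICC and then the design effect, which quantifies the power loss Matt’s question describes.

```python
# Illustration only: a one-way ANOVA estimate of the intra-class
# correlation (ICC), plus the design effect it implies for a cluster RCT.
import statistics

def icc(clusters):
    """Estimate ICC as (MSB - MSW) / (MSB + (n - 1) * MSW),
    where n is the average cluster size."""
    n = statistics.mean(len(c) for c in clusters)
    cluster_means = [statistics.mean(c) for c in clusters]
    msb = n * statistics.variance(cluster_means)  # between-cluster mean square
    msw = statistics.mean(statistics.variance(c) for c in clusters)  # pooled within-cluster
    return (msb - msw) / (msb + (n - 1) * msw)

# Two "classrooms" whose members resemble each other far more than
# a random mix of students would:
clusters = [[10, 11, 12, 11], [20, 21, 19, 20]]
rho = icc(clusters)

# Design effect: variance inflation from assigning treatment by cluster.
# With m members per cluster, deff = 1 + (m - 1) * rho.
m = 4
deff = 1 + (m - 1) * rho
print(round(rho, 3), round(deff, 2))  # high ICC -> big effective-sample-size loss
```

With a design effect near 4, each clustered observation carries roughly a quarter of the information an independently sampled one would, which is exactly why cluster RCTs need larger samples for the same power.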
0:58:09.8 MK: JD, we’ve talked a few times during the show about a blameless post-mortem, and we’ve kind of touched on it, and I feel like people that work in tech have a vague idea about it, but can you just explain a little bit about how this works in your team? How do you I guess set up this process?
0:58:26.6 JL: Yes. So first of all, let me tell you, they use this term in agile, and I have no idea if they’re talking about exactly the same thing or not. The version of this that I have in my head is more like… what, is it Syed, the guy we talked about at the beginning of this conversation, who wrote the book Black Box Thinking? What I’m thinking about is more like that. So that first. Second, we don’t call it a blameless post-mortem.
0:58:49.6 MK: What do you call it?
0:58:51.7 JL: That’s what it is. We don’t call it anything. It’s just reviewing. And we just do it all the time. Right? And… Now, it may be helpful in some teams to have a name for the thing. We do it, we just don’t put the flag down and say we’re gonna do this now.
0:59:09.3 MK: Interesting.
0:59:10.5 JL: We’ll review what happened when something goes wrong, or even goes right. When we’re at the end of a project, we’ll do a review, a recap, and we’ll talk about, “Okay, what things do we wish had gone differently? Or what do we think the root cause of… ” Like the one where I screwed up the greater than or equals in some SQL. We talk about, “Okay, what was the root cause of this?” And it was like, “Well, the root cause was we were trying to move too fast and I didn’t get a second set of eyes on my change, ’cause my change was pretty trivial. I changed three lines of code, it just happened one of those lines of code had the greater than or equal in there, and I screwed it up.” So, okay, well, how would we change the process? And we’re like, “Well, we could have given ourselves more time and we should make sure that no matter how trivial the change, we get somebody else to look at it.” And we slowly have changed how we do… We call it a second set of eyes on code. It’s kind of a little lower bar than a code review. It’s just somebody else has looked at this. It doesn’t mean we’ve gone through and discussed it in great detail. Just somebody else has looked at it. And we found… I’m in an analyst team, a business team, not an IT development team. We try to steal the 10% or 20% of things that professional IT developers do that’ll give us like 80% or 90% of the value.
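An aside for readers: the one-character slip JD describes (a greater-than-or-equal that should have been something else) is easy to show with toy data. The rows and threshold below are invented for illustration; this is not his actual SQL.

```python
# Toy illustration of how a single comparison operator silently
# changes an analysis: >= keeps the boundary row, > drops it.
rows = [("2021-Q4", 95), ("2022-Q1", 100), ("2022-Q2", 110)]
threshold = 100

# Intended filter: values at or above the threshold.
at_or_above = [label for label, value in rows if value >= threshold]

# The one-character slip: strictly greater than.
strictly_above = [label for label, value in rows if value > threshold]

print(at_or_above)     # ['2022-Q1', '2022-Q2']
print(strictly_above)  # ['2022-Q2'] -- the boundary row is silently gone
```

A second set of eyes, or even a one-line check on the boundary value, catches this class of mistake cheaply, which is exactly the lightweight process change the team landed on.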
1:00:28.8 JL: So we do a bunch of stuff, like a light version of… We use Git and Github to check all our code in, but we don’t necessarily do branching and every feature on its own branch exactly the same way our IT teams do. We do it in a way that works smoothly with our workflow. And so similarly, when we do the blameless post-mortem, what we’re really looking through is what Tim, I think noted about process. We’re trying to figure out if we didn’t get a good outcome, why? Because if we’re getting second sets of eyes on things, if we have enough time, if our analysts are iterating quickly and getting folks to glance at their work, we should be producing the right stuff, and if we’re not, well, what’s the mistake? What’s the change in the process that we need to make in order to get that outcome? So that’s kind of it. The analyst that works for me, she’s been around for six months, and I bet if you ask her, “What’s it like doing a blameless post-mortem with JD?” she would be like, “I don’t know what you’re talking about.”
1:01:30.7 MH: What are you talking about?
1:01:34.7 JL: Now, she and I have reviewed her work, and some of that has elements of blameless post-mortem, but we’ve never had a big… She never had a big blow-up, right? And maybe she’ll sit in on one if we have something come off the skids at some point. But we get very little of that, which by the way, is a point we probably should have made earlier. Part of the reason you want psychological safety is folks will be more transparent about their mistakes if they are confident they’re not gonna get killed because of a screw-up, and you want an environment where people will reveal early and often when something bad happened. There was an incentive when I screw that up to cover my tracks and make sure nobody noticed and try to just fix it next quarter, right? We’ll just sneak that in next quarter and see, hopefully nobody notices. Now, I’m not saying I would have done that, but I’m saying if somebody hadn’t felt safe, there was an incentive to do that, to try to cover up mistakes. That’s totally gonna screw your team up, right? So you gotta produce enough safety so that people would be transparent saying what’s happened. And the other thing is, it’s also, even if they do fix it, and even if they do acknowledge it, you don’t wanna talk about it.
1:02:47.5 JL: Oh, sweet Jesus. Then everybody’s gonna know right? And so if you don’t have a safe culture, then the rest of your team doesn’t learn from the mistakes. And the learning from mistakes is the difference between a fragile system, an un-fragile system, and an anti-fragile system. This kind of steals from Taleb’s definition of anti-fragile. The idea with an anti-fragile system is that after mistakes or things go bad, the system adjusts so that it is more robust as a result of that process. The way you build anti-fragility in human teams is you gotta have safety. Now, you can build teams that are fragile, that’s easy. You can even build teams that are tough, in that things don’t fail very much, they’re tough. But to get them to be where when they do fail, the team afterwards is actually stronger and produces better work product after a failure, that’s anti-fragile, and that requires psychological safety, which is ironic because Taleb does not strike me as someone who would be on a… Or create a psychologically safe team. And I think it’s ironic that his principle here of anti-fragility sort of is part of this idea of psychological safety.
1:04:08.9 MH: Awesome. Okay, to keep our listeners psychologically safe, we do have to start to wrap up.
1:04:15.3 JL: Let’s do it.
1:04:16.8 MH: But that’s not really why we’re doing it, but we do have to. Anyway, this has been an amazing conversation. JD, thank you so much, this is… I’m serious that you either need to start a podcast or write a book or both. And there are thousands and thousands of people who wanna learn these kinds of things, I would imagine.
1:04:37.7 TW: You’re a middle-aged White dude, so I think you might be the only person who hasn’t started a podcast yet.
1:04:39.6 JL: It’s shocking that I don’t have one already.
1:04:41.2 MH: Yeah, yeah.
1:04:42.2 TW: Yeah.
1:04:44.7 MH: We could give you some pro-tips. [chuckle] We can’t. No we can’t.
1:04:48.7 MH: Alright. But one thing we do like to do is we’d like to go around the horn and share what we call a last call, anything that we think might be of interest to our listeners. So JD, you’re our guest, do you have a last call you’d like to share?
1:05:01.4 JL: I do, and this is straight off of the Twitter machine from today, a little discussion. There was an article written by Paul Lockhart, and I’ll give it to you all. Do you all do show notes? I’ll give it to you and put it in the show notes.
1:05:12.0 MH: Yes, we do.
1:05:12.5 JL: Yeah, great. I’ll send you a link to this. It was written in 2002 by Paul Lockhart, who’s a mathematics teacher, and it’s called A Mathematician’s Lament, and he goes through and examines what it might look like if we taught music the way we teach mathematics. And this comes up periodically. This goes viral every two years, it seems like, for the last 20. And I think it is so interesting. Now, like all analogies, it’s not perfect, but it’s really interesting ’cause it talks about basically all of the heavy lifting and transposing of notes before anyone would be allowed to play a tune if you were teaching music the way we teach math.
1:05:53.4 JL: And I feel like so much math, data science and analytics gets taught best when people have an application for it, when they see a use and a need, and the equivalent would be like learning to play guitar by learning riffs, so learn your favorite guitar riffs. And then off of that, you would maybe be motivated to learn your circle of fifths, ’cause you wanna understand, why does that riff work, or why is this guitar player able to do this solo? How can you even put that together? Well, there’s a whole bunch of music principles that make that work, but if you’re forced to learn those music principles before you want to do your guitar solo, it’s not very motivating and the shit’s kind of boring. And I feel like a bunch of math and analytics and programming and all of that has a similar characteristic, in that we’d be way more motivated if we had something we were interested in and taught it in that context. So Mathematician’s Lament.
1:06:49.6 TW: I like it.
1:06:50.0 JL: The link will be coming at you.
1:06:51.2 TW: We got it already, actually. I saw it on the Twitter.
1:06:54.9 JL: Fantastic.
1:06:58.1 MH: Awesome. Alright. Moe, what about you? What’s your last call?
1:07:01.0 MK: It’s a little bit of a long-winded story to get there, but Canva has a whole bunch of clubs, so there’s a skate club, a water club, a gym club, a bike club, there is also a channel called Investing Club. Now, there are 700-plus people in this channel of people that talk about money and stocks and all sorts of stuff, and I’m not gonna lie, it tends to be your certain age range, White dude, talking about very complicated investment concepts, and a few women at work and I were kind of musing about how… I don’t know, financial literacy, give me a few beverages and a soapbox, financial literacy for women is something I’m incredibly, incredibly interested in and passionate about, and lots of women have this real barrier to investing. So anyway, one of the girls from my group posted, I guess a noob question of like, “Hey, I wanna buy this international stock, I don’t know how to do it,” and the group was actually amazing, lots of people wrote back really supportive answers of, “Hey, here’s how you go about setting up your account, yada, yada, yada.” And so then I kind of posted being like, “Hey, I’m really glad that you asked this kind of noob question because it’s made this channel feel like a place where people can come and learn more and ask for beginner tips.”
1:08:20.9 MK: And some of the people who had been in the channel for a long time are like, “Oh, that’s a really good point. We probably should do more of this sort of stuff.” And so anyway, spin-off from that now, one of the women at Canva who works on brand marketing reached out to this amazing podcast called You’re in Good Company, which is two women who actually did a spin-off of another podcast called Equity Mates because they were listening to these two guys talking about investment and were like, “The barrier of what they’re talking about is just so hard, I can’t follow.” So they did a… I don’t know. I guess like you’re just getting your feet wet in investment kind of podcast spin-off called You’re In Good Company which is these two women, they’re amazing.
1:09:01.8 MK: And so now we’ve got both of the podcasts coming to Canva to talk to us at both levels of like, you’re an introductory investor, you wanna understand how to set up your account, how to look at a different investment opportunity and what its pros and cons are and that sort of thing. And then we have the more advanced guys also coming along to talk about that. And I don’t know. It just felt like both of these podcasts are really phenomenal, but it also is like a nice way to tie into this psychological safety of sometimes if you’re really worried about asking something in a group, if you are that person that has the confidence to do it, you might give someone else the confidence to also ask in that group. So very, very long-winded story.
1:09:44.0 TW: I tracked seven different cases where I thought this was gonna veer off into a mansplaining story. I feel like that was…
1:09:49.7 MK: Oh.
1:09:50.9 TW: And they were probably there all along the way, it was just right along the brink.
1:09:56.4 MH: Wow, impressive.
1:09:56.9 MK: Yeah so two really good podcasts.
1:10:01.8 MH: Speaking of mansplaining, Tim, what’s your last call? No, I’m just kidding.
1:10:06.0 TW: Mine will be short. I’ve been kinda dabbling and playing around. I’m kind of fascinated by generative art. I’ve been doing my little tinkering with not generative art, but image manipulation with… Now I’m filling my Twitter feed with a daily tweet that is R and GitHub actions and photography. But along the course of that, I came across app.wombo.art, which is you literally just enter any old phrase you want, you pick one of six or seven themes, I think you have 100 characters, and then it sits there and cranks away. There’s not a whole lot of explanation of what it’s doing and how it’s doing it, because it takes a little while to generate the art, but it winds up creating different themed art pieces based on the words that you enter, and it can definitely be a time suck. So app.wombo.art.
1:11:03.3 JL: I got a buddy who’s been playing with something similar. I’m not sure if it’s literally the same one, and his TVs in his house show the art. So he makes a folder and puts the ones in that he’s interested in, and then it shows them on the TV. It’s like he’s got an app on his TV that cycles through them and shows them each for 20 minutes or something.
1:11:23.2 TW: ‘Cause there’s deepdreamgenerator.com, which is another one that my son had pointed me to, which kind of is a neural network dreaming of different things, which is also kind of…
1:11:35.2 JL: Very trippy. [laughter] Very trippy.
1:11:39.3 TW: So it’s a fun area. And the stuff I’ve been noodling with is I’m trying to keep it to where I kind of understand what’s going on with some image magic and stuff, but yeah, it’s kind of fun. I probably should not have started a text message with my mother about it because she got very confused but was determined to understand it and that really… I was not feeling psychologically safe [laughter] by the end of that exchange. Michael, what’s your last call?
1:12:08.7 MH: I’m glad you asked. Something that doesn’t make a lot of people feel psychologically safe is the fact that they a lot of times have Google Analytics and Adobe Analytics on their website all at the same time. But recently, Frederik Werner posted a blog post that I thought was pretty cool where he shows step by step how to import Google Analytics data into Adobe Analytics using their data sources tool, which is nice. It’s sort of a technical walkthrough, but if you’re using those tools, you might run across a couple of use cases where that could come in handy. So…
1:12:38.9 MH: Well, I think we’ve probably mentioned Frederik’s work on the podcast many times before, so anyway, great blog post, really well done as always by Frederik, and that’s my last call. Alright, you’ve probably been listening and you’re like, “These are great tips, but I also have something I’d like to share.” Well, we’d love to hear it. And the best way to do that is through either the Measure Slack group, which is a great place that is committed to your psychological safety, and also our LinkedIn group, which is less so, [laughter] and also on Twitter… I mean just ’cause we don’t do a lot with our LinkedIn group. But on Twitter as well, we’d love to hear from you. And as I mentioned at the top of the show, JD is also on Twitter @CMastication, so Cerebral Mastication, and you should follow him for sure because then you will not have to wait till the next time he’s on our podcast to learn about the things he’s thinking about on a daily basis, which I highly recommend.
1:13:38.0 TW: And laugh.
1:13:38.0 MH: And laugh. Yeah, it’s pleasant.
1:13:40.2 JL: It’s a very serious academic feed. What are you talking about?
1:13:43.9 MH: Yeah absolutely. [laughter] Right. Let’s just talk about your pinned tweet for just a second. [laughter]
1:13:51.8 JL: Okay.
1:13:53.2 MH: It’s your boat, “404 Fish Not Found,” which I think is hilarious.
1:13:55.8 JL: My daughter named the boat. Good job, kid.
1:13:56.5 MH: Nice. So anyway, JD, thank you so much for coming on the podcast. It’s been an honor and a privilege, really, really wonderful to have you.
1:14:05.0 JL: Well, it’s been a pleasure. I love what you all are doing with the podcast. I’ve enjoyed listening to the back episodes, so keep up the good work.
1:14:11.6 MH: Well, thank you very much. And honestly, you know the podcast just wouldn’t happen without our illustrious producer, Josh Crowhurst, so no show is complete without a thank you to him and all his hard work behind the scenes to make the show a reality. And I know that I speak for both of my co-hosts, Moe and Tim, when I say, no matter how hard you’re struggling, no matter if your rubber duck isn’t telling you the answers, just remember, keep analyzing.
1:14:43.3 S1: Thanks for listening. Let’s keep the conversation going with your comments, suggestions and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group and the Measure chat Slack group. Music for the podcast by Josh Crowhurst.
1:15:01.2 Charles Barkley: So smart guys want to fit in, so they’ve made up a term called analytics. Analytics don’t work.
1:15:09.5 Thom Hammerschmidt: Analytics, oh my God, what the fuck does that even mean?
1:15:17.3 JL: And my stool’s squeaky, so I’m gonna get off of it and just stay at my desk.
1:15:20.9 MK: Oh wow.
1:15:21.7 MK: Introduce squeaky, butt…
1:15:25.6 TW: See, Moe, see Michael, the guests sometimes, they care about the quality.
1:15:29.9 MK: You guys seem like a fastidious bunch.
1:15:34.4 MH: Oh well, Tim is fastidious enough for all of us. [laughter]
1:15:43.8 JL: I did this oversized rubber duck with my last analyst’s first code review and I guess first pull request, and I’m gonna start doing that. And it was funny, she told me, she said, “Do you know about my history with rubber ducks?” and I’m like, “No, I don’t know about your history… ” And she’s, “Oh at my last job, I used to make rubber ducks… ”
1:16:01.3 MH: A rubber duck killed my family.
1:16:08.6 JL: I have had facial hair of one kind or another, and we had been married a couple years, we didn’t have a kid yet, and I was like, “I may trim my beard,” and she was like, “Oh yeah it’s good. Go ahead and shave your beard.” And so I shaved and I came out of the bathroom, feeling clean-faced, and my wife said, “Oh my God, put it back.” [laughter]
1:16:32.0 TW: You’re like, “Well… ”
1:16:32.3 JL: And she’s like, “Wow, I didn’t know you didn’t have a chin.”
1:16:39.3 MH: “I married you under false pretenses.” Wasn’t there some teaser that your hair was gonna change or what’s the story…
1:16:47.1 MK: Yeah, wasn’t your hair gonna change?
1:16:48.6 TW: I think probably in about a week. I think I’ve got a…
1:16:50.2 MH: Oh okay.
1:16:51.0 TW: I think it’s getting cut.
1:16:52.5 MH: New year, new you.
1:16:54.7 MK: Ooh, I kinda wanna see it just for comparison.
1:16:58.5 MH: What it used to be? You know me.
1:17:01.6 MK: I just wanted to say… I know I knew you then, but I just don’t remember what it looked like. I feel… Yeah because I feel like the rest of your face has aged so much. Now it will be a whole new look.
1:17:13.2 MH: The rest of my face has aged so much. [laughter] That’s…
1:17:15.6 TW: Wow, you’re so old-looking.
1:17:20.7 JL: Rough crowd in here.
1:17:25.0 MH: Yeah. Alright, here we go. I wish my dogs would stop barking. That’s not a euphemism for your feet.
1:17:32.5 TW: Is that your warm-up vocal? I wish my dog would stop barking.
1:17:35.1 MH: Yeah. I wish my dog would stop barking. The quick brown fox… Shut up you fucking dogs.
1:17:43.7 MH: Alright, we’ll go in three, two… Rock flag and blameless post…
1:17:54.9 MK: Sorry Tim, that was all your fault.
1:17:56.6 MH: Hey, no, it’s blameless. [laughter] It’s okay, Tim. It’s okay. Rock flag and blameless post-mortems.