In celebration of International Women’s Day, this episode of Analytics Power Hour features an all-female crew discussing the challenges and opportunities in AI projects. Moe Kiss, Julie Hoyer, and Val Kroll dive into this AI topic with guest expert Kathleen Walch, who co-developed the CPMAI methodology and the seven patterns of AI (super helpful for your AI use cases!). Kathleen has helpful frameworks and colorful examples to illustrate the importance of setting expectations upfront with all stakeholders and clearly defining what problem you are trying to solve. Her stories are born from the painful experiences of AI projects being run like application development projects instead of the data projects that they are! Tune in to hear her advice for getting your organization to adopt a data-centric methodology for running your AI projects—you’ll be happier than a camera spotting wolves in the snow! 🐺❄️🎥
Photo by Caleb Woods on Unsplash
[music]
0:00:05.8 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
0:00:13.8 Moe Kiss: Hi, everyone. Welcome to the Analytics Power Hour. This is episode 266. I’m Moe Kiss from Canva. And I’m sitting here in the host chairs today because we’re continuing our great tradition of recognizing International Women’s Day and all of the amazing women in our industry. So it’s coming up this Saturday, March 8th, and we’re going entirely gents free today. So of course that means I’m joined by the wonderful Julie Hoyer from Further.
0:00:44.1 Julie Hoyer: Hey, everyone.
0:00:45.1 MK: And Val Kroll from Facts and Feelings as my co-hosts. Hey, Val.
0:00:49.1 Val Kroll: Hello. Hello.
0:00:50.0 MK: Are you ladies excited to know that Tim won’t be slipping into some of his quintessential soapboxing?
0:00:56.8 JH: Save some for the rest of us.
0:00:58.3 VK: I don’t think he’d be able to help himself on this one.
0:01:01.5 MK: I know, I know. He’s pretty gutted to miss it. So, as we’re planning the show today, I fired up ChatGPT, which, to be fair, I’m a power user and I asked it to compare our topics from the last 50 shows to the topics data folks are most talking about these days and basically identify the gaps in our content. So, unsurprisingly, the response it came back with was that we should definitely talk more about AI, and it was in caps, so maybe there’s some bias in that model. Who knows? Weird. But it’s got a good point. And we’ve definitely talked about AI on multiple episodes on the show, but we probably haven’t talked about it nearly as much as we could or as much as it’s getting talked about in the industry right now. So it seems like everyone is just so excited about the possibilities. But lots of organizations are also struggling to figure out how to actually identify, scope, and roll out AI projects in a clear and deliberate manner.
0:01:58.9 MK: I think it’s really about that shift from the tactical day to day things to the real transformation that everyone’s seeking. And that’s why for today’s episode, we’re joined by Kathleen Walch. Kathleen is a director of AI Engagement Learning at the Project Management Institute, where she’s been instrumental in developing the CPMAI methodology for AI project management. She is the co-host of the AI Today podcast, which I highly recommend checking out, and she’s also a regular contributor to both Forbes and TechTarget. She’s a highly regarded expert in AI, specializing in helping organizations effectively adopt and implement AI technologies. And today she’s our guest. Welcome to the show, Kathleen. We’re so pumped to have you here.
0:02:43.6 Kathleen Walch: Hi and welcome. I’m so excited to be here today. I obviously love podcasts, so I love being a guest on them as well. It’s a different seat for me today.
0:02:52.1 MK: It is definitely a different seat when you’re a guest. Hopefully a little lighter on the load. So just to kick us off, I think one of the things that’s really interesting about your professional history is that you don’t seem to be one of those people that just stumbled into AI in the last year or so and have gone full fledged on it. It really seems to be an area that you’ve been working in deeply for an incredibly long period of time. Maybe you could talk a little bit about your own experience and the journey you’ve taken to get here.
0:03:23.2 KW: Yeah, I like that you bring that up. I always say that I’ve been in the AI space since before gen AI made it popular. I feel like the past two years or so, everybody feels like they’re an AI expert and everybody is so excited about the possibilities. But it’s important to understand that, as we always say, AI feels like the oldest newest technology, because the term was officially coined in 1956, so it’s 70 plus years old. But we just feel like we’re now getting to understand AI. And there’s a lot of reasons for this, which we talk about quite often. But one big reason is that there’s been two previous AI winters, which were periods of declining investment and declining popularity, when people chose other technologies and other ways of doing things. And a big overarching reason for that is over promising and under delivering on what the technology can do. So it’s really important to understand that AI is a tool, and that there’s use cases for it, and it’s not a one size fits all tool, especially when it comes to generative AI. So my background and what got me here is actually I started off in marketing and then moved…
0:04:29.7 KW: Yeah, I know. And back when I was first coming out of college, my husband’s a software developer, and I feel like the technology world and the marketing or creative world really were very separate. Over the years they’ve merged closer together, to the point now that I think technology is infused in many different roles and not as disparate as it used to be. Then I moved more into a data analytics role. Learned all about the pains of big data, how data is messy and not clean and all of that. And then I moved into more of a technology events role, where my husband and I had a startup. It failed, but we met a lot of great people from that community. I ended up working with my eventual Cognilytica business partner on a company called TechBreakfast, where we did morning demo events throughout the United States. We were in about 12 different cities, from Boston, Massachusetts to the Baltimore DC region, North Carolina, Austin, Texas, really all over, a little bit in Silicon Valley. But that’s a unique space. And around 2016 we started to see a lot of demos around AI, in particular voice assistants, and how we could be incorporating that.
0:05:48.4 KW: That was when all of the big players in voice assistants started to come out. So we had Amazon Alexa and Google Home and Microsoft Cortana, when that was still a thing. So from that we said, there’s something here. And we started an analyst firm actually focused on AI. It was a boutique analyst firm, and we very quickly realized that organizations did not know how to successfully run and manage AI projects. So they said, we want to use this technology, this is great. How do we get started? And we said, okay, well, let’s see if there’s methodologies out there and let’s see if there’s a way, a step by step approach to do things. And what we quickly realized is that there wasn’t. And that’s how CPMAI was developed, which is a step by step approach to running and managing AI projects. And it was important because people would try and run these as application development projects. And then you very quickly realize that they’re data projects and they need data centric methodologies, not software development methodologies.
0:06:47.7 KW: And so these projects would be failing. Or they’d say, we want to do AI and we go, well, what exactly do you want to do? And they go, well, we have all this data, let’s just start with the data and then let’s just build this, pick the algorithm and then move forward because there’s FOMO, fear of missing out. And we say, okay. But in CPMAI we always start with phase one, business understanding, what problem are you trying to solve? And even still today, many organizations rush forward with wanting to have an AI application or just saying, oh, look at this large language model, let’s put it on our website as a chatbot. And far too often many things can go wrong. We always say, AI is not set it and forget it. So far too often we see that these chatbots are providing wrong answers and that maybe we shouldn’t have started so big in our scope and we should have really controlled it and said, drill down into what we’re actually trying to solve. So we always say, figure out what problem you’re trying to solve first and really, really make sure that it’s a problem that AI is best suited for.
0:07:51.6 MK: Oh, my God, this is music to my ears. I am seriously. Yeah, because there is… I feel like I’m coming up so often against people that are just like, let’s use AI. And you’re like, what’s the problem? Have you noticed, though, over the last few years, and I feel like, especially in the last 12 months, do you feel like the industry is maturing here or is it Groundhog Day where you just feel like you’re having the same conversation again and we’re not at that stage yet where people are maturing enough to be like, is AI the right solution here? What are you seeing in the industry?
0:08:23.1 KW: So generative AI has put AI in the hands of many. Maybe we were using AI before when we were pulling up directions with whatever you choose to use, Waze, Google Maps, whatever it is, it’ll help route you. Or if you have predictive text with emails or spam filters, that’s using AI. But it didn’t feel like we were using AI because, yeah, it helped a little, but it didn’t really make my life more efficient. But now with tools like ChatGPT, or at PMI we have Infinity, or Claude, I mean, you literally pick the tool of choice and it can help you do your job better. Or even Canva, right? I love Canva. I’m not a graphic designer by trade, but now with the help of Canva, which is drag and drop but with AI capabilities added on top, I can do things that I couldn’t do before, like automatically remove the background from an image, so I just have my head now with the background removed, which is absolutely incredible and does not require me to learn how to be a graphic designer.
0:09:34.8 KW: Or I can write better copy for marketing campaigns, or I can create images for PowerPoint slides that I no longer have to worry if I have rights to, but because I know I do, because I just created it. So it is helping in that way. But then we also see, you really need to drill down and say, okay, generative AI is just one application of AI. And so a number of years ago, actually back in 2019, because people said, well, I want to do AI. And we said, well, what exactly are you trying to do? And there was a lot of confusion about, is this AI, is this not AI? And we said, why don’t we just drill down one level further and say, what are we trying to do? And that’s where we came up with the seven patterns of AI. So we looked at hundreds, if not thousands of different use cases and they all fall into one or more of these seven patterns. And so we said, why don’t we just talk at that level? Because then it really, it helps you with so much. So the patterns at a very high level.
0:10:28.3 KW: And we made it a wheel because there’s no particular order and one isn’t higher than another. First is hyper personalization. So treating each individual as an individual. We think about this as a marketer’s dream. You’re able to hit the right person at the right time with the right message, but also hyper personalized education, hyper personalized finance, hyper personalized health care. How can we really start treating each person as an individual? And we can do that with the power of AI. Then we have the recognition pattern. So this is making sense of unstructured data. 80 plus percent of the data that we have at an organization is unstructured. Well, how do we make sense of that? So we think about image recognition in this pattern. But you can have gesture recognition, handwriting recognition, there’s a lot of different things. Then we have our conversational pattern. So this is humans and machines talking to each other in the language of humans. This is obviously where large language models fall into play. We think about AI enabled chatbots here. Then we have our predictive analytics and decision support pattern. So this is taking past or current data and helping humans make better predictions.
0:11:29.4 KW: So we’re not removing the human from the loop, but using it as a tool to help make better predictions. Then we have our patterns and anomalies pattern. So this is where we are able to look at large amounts of data and spot patterns in that data or outliers in that data. We have our goal driven systems pattern. So this is really around reinforcement learning and optimization. We think about how you can optimize certain things. We’ve seen this actually with traffic lights. Some cities are adopting this to help with traffic flow, and it can be adaptive over time. And then also the autonomous pattern. The goal of the autonomous pattern is to remove the human from the loop. So this is the hardest pattern to implement.
0:12:11.7 KW: We think about this with autonomous vehicles, but we can also have autonomous business processes. So how do we have something autonomously navigate through our systems internally at our organizations? And so when we say, okay, well, what are we trying to do now? This helps us figure out what data requirements we need. This helps us figure out if we’re going to be building this from scratch, what algorithm do we select? If we’re going to be buying a solution, what’s going to be best suited for this? Large language models aren’t great for everything, and generative AI isn’t great for everything. So if we need a recognition system, then maybe we shouldn’t be looking at a large language model for that. If we want a conversational system, then yeah, then that’s great. And this really helps us to drill down that one level further and say, what problem are we trying to solve?
0:13:00.7 KW: What’s the right solution to this problem? Is AI the right solution? Okay, if it is, which pattern or patterns of AI are we going to be implementing? And then from there we can say, okay, we know what problem we’re solving, AI is the right solution for this, and now we can move forward. And if it’s not the right solution, that’s okay. But you have to be honest with yourself and with the organization. Because sometimes, I always say, don’t try and fit that square peg in a round hole. You don’t want to shoehorn your way in just because you want to use AI, so that you create a problem that AI can solve rather than actually having it solve a real problem.
0:13:36.0 MK: That was actually going to be my question. When you talk to clients, do you end up showing them the seven patterns to start, or is that like showing them the answers and then they want to pick which one sounds coolest or that they had their mind set on and then they shoehorn and create the problem. Do you have to try to keep that blind from them to get the problem first? Or how do you go about using that?
0:14:02.5 KW: So when we go through the methodology, because that’s what we really teach, we follow this step by step approach. So first you have to say, what problem are we trying to solve? And within phase one, the business understanding, we have a series of different steps that you’re supposed to be going through. So one of them is the AI go/no-go. This covers business feasibility, data feasibility and implementation feasibility. So, what is your ROI, the return on investment? You can measure this a number of different ways. I always say that ROI is money, time and resources. AI projects are not going to be free. And you really have to understand that. Sometimes people just go, well, we’re just going to do this.
0:14:41.1 KW: And I’m like, yeah, but it’s not, it costs a lot of money. And you measure that however you want. Time is money. Resources are money. You only have a finite amount of people that you can put on these projects. Some organizations can have more than others, but still you have to be mindful of that, and so make sure that you understand the ROI that you want. We go through a lot of reasons why AI projects fail, and not having sufficient ROI is a failure. So the project may be doing what it’s supposed to, but an example that we give is Walmart decided to have an autonomous bot that roamed the store floors and would check to see if there were items that were out of stock. Well, I just said that the autonomous pattern is a really hard pattern. It’s the hardest pattern. So it’s able to autonomously navigate, and then it had the recognition pattern because it’s scanning the shelves to see if inventory is out of stock or misstocked. Well, what they could have done is, we always say, think big, start small, and iterate often. So don’t try and do everything all at once.
0:15:47.8 KW: Figure out what is that problem you’re trying to solve. Okay, you’re trying to solve a problem with inventory not being on the shelves. Well, maybe start with the aisle that has the most need, not the entire store. And you already have humans that are walking the floor. So maybe put a camera on the shopping cart and say, okay, now, how is this going to solve that actual return on investment? And was this really a problem that we needed AI for? Could we have done it cheaper or quicker or better with humans? Because we still need a human to go and actually restock the shelves. We didn’t have autonomous systems that were able to go and autonomously restock the shelves. So they ended up scrapping that in favor of humans because the return wasn’t worth it. So did whatever they build work? Yes. But was it still a failure because the investment was higher than the return? Yes.
0:16:38.9 MK: I’m sorry, I’ve got to interject. That example is so incredibly interesting because it also sounds like they had this learning after building it. Whereas if someone had done their due diligence of like, what does it cost for a person to walk the store for 20 minutes and check versus like the tech and the infrastructure and the data and all the things we need to build this, you probably could have answered that ROI question before you started the project, but do you feel like most companies have to almost do it to learn it and then they make the mistake and move on? Or is it…
0:17:10.3 VK: Tales of caution?
0:17:12.2 MK: Yeah, like, are people good enough at figuring this out before they build it, or is it only after?
0:17:17.1 KW: So a lot of people aren’t following that step by step approach. And when they’re not, you can tell. So Walmart is incredibly innovative. And they really push boundaries with technology, but it’s not always the right path forward. And so if you go, okay, well, I don’t have the resources of a Walmart. I don’t have the money that I can invest in some of these R&D projects or putting out a pilot project. Another thing that we see, another common reason for these failures is that we get into this proof of concept trap and so we say, never do a proof of concept because it actually proves nothing. You build it in a little sandbox environment. It’s usually the people that are most closely aligned with the project.
0:17:57.8 KW: So they’re going to be using it in the way that the tool was intended to be used, not the way that humans actually are going to use it out in the real world. And then data is messy. Usually in a proof of concept, you have really nice clean data that you’re working with. And then you go out in the real world and you’re like, why didn’t this work this way? Why are these users doing things that I wasn’t planning for? Why are you using it this way? That’s not how it was supposed to be used.
0:18:25.0 KW: And I was like, yeah, but that’s how your users are using it. So we say, get it out in a pilot and have it be in the real world and see how it’s being used. So if they had put this out in a store or two and said, okay, this isn’t working as expected, this isn’t providing the returns that we wanted, maybe we didn’t invest a ton of money, we invested some money and we’re trying it out, but it didn’t work out as we planned and so it’s not worth scaling.
0:18:49.9 MK: So the verbiage of use case is really common. A lot of the clients that we work with, they have their AI use case that they tote around with them. And I feel like that is not…
0:19:01.5 VK: I heard you say use case, but I feel like you’re using it differently. It almost feels like a use case is, we want an autonomous vehicle to go find the open spaces on the shelf, not the problem framing that you’re talking about. So how often is there too much momentum down this path, this inertia of, we have this use case in mind, our OKRs are aligned to completion of this project, and so it’s really hard to turn the Titanic? Or you can just talk about righting the ship, and whether you think that use case language is at odds with the problem-solution framing.
0:19:39.2 KW: Yeah, and that’s a tough question, because sometimes you have an application, an AI application. You have something that you want to do, and maybe a senior manager or someone in leadership is saying that that’s what they want, and you’ve already invested a lot of money, time and resources into it. And so it’s their little pet project. And to pull back from it can be incredibly difficult. People also have those ideas in their mind about what they want and they try and shoehorn it. And so you go, well, I want an autonomous vehicle. So let’s figure out how we can get an autonomous vehicle to the store shelves. And when people talk about use cases, case studies, I feel like those words get thrown around a lot. And it’s like, what exactly do you want with a case study?
0:20:30.2 KW: How is that defined versus your use case versus what it is that you want? So we always say figure out what problems you have. And this requires brainstorming, this requires actually saying, what problems are we trying to solve? And write it down, and bring different groups together and say, what are we trying to solve? And then from there, when we talk about the patterns too, you can look at it one of two ways. You can either look at it as, what’s the ROI that you want, and then figure out which pattern is best for that. Or you say, here’s the pattern, and then you figure out the ROI. So when you say, I want this pattern, and then you figure out the ROI, sometimes that’s shoehorning, because you’re like, oh, well, that’s an okay ROI, sure. But if you go, I want my organization to have 24/7 customer support, well, then you go, okay, what’s going to drive that?
0:21:26.4 KW: And that would probably be a chatbot, for example. So you go, okay, well then that’s what we should be doing. And if Walmart had said, what exactly are we trying to do? We’re trying to stock shelves better. Well, what’s the actual return? Drill down even further. What is the real return from that? Is it because you want more satisfied customers, or because you want better inventory management, or something like that? Then rather than just saying, well, let’s have something roaming the store shelves to say when we’re out of an item, maybe we should be fixing something with the supply chain earlier on.
0:22:02.3 MK: Is that the biggest failure point you find? Is the identify the problem part that we’ve been talking about? Or is it oh, we can help 80% of clients that come to us get past that point and then the biggest failure point of the AI project is actually later on?
0:22:20.4 KW: There’s 10 common reasons that we’ve identified for project failure. Oh yeah. So one of them is running your AI project like a software application project. It’s not, it’s a data project. You need data centric methodologies. You need to have a data first mindset. Then obviously, if data is the heart of AI, we’re going to have data quality and data quantity issues. How much data do you need? A lot of times, especially with analytics, you can train on noise. More data isn’t better. So you have to say, what data do I need? And then, do we have access to that data? Is it internal, is it external? Are we going to be adding more data and then just feeding it more noise? I mean, we have so many failure reasons. There was one, I think it was a forestry agency, maybe the US Forest Service, one of the government agencies, and they were trying to count the number of wolves that were migrating in a national park, which is a great use case.
0:23:29.4 KW: You put a camera out and you can do the recognition pattern so that you’re not having humans out there, which isn’t really great or conducive for however long you’re trying to track these wolves. So, okay, that’s a good use case. Well, what they realized was that it ended up being a snow detector, not a wolf detector, because of what it was being trained on. Some of these deep learning models, for example, are a black box, so we don’t actually know what they’re using to learn. And so they realized, they said, okay, well, that’s not performing as expected. So then that’s another common reason. Like I said, proof of concept versus pilot. You’re not putting it out in the real world until you’ve invested all of this.
0:24:10.0 MK: I love that distinction. So good.
0:24:10.6 KW: Yeah. And I cringe when people always talk about proof of concepts, because I’m like, I don’t think you mean that. You really mean a pilot. And if you don’t, you should be meaning a pilot. And then also a reason I talked about earlier, the number one reason, is over promising and under delivering. That’s what brought us to two previous AI winters, and it will bring us into another if we continue to act as if AI can do more than it actually can.
0:24:37.8 VK: So the ROI part of this seems like it’s very much tied to this expectation setting. I’m really curious about this especially. I just don’t know how you even get a full team on board with this type of thinking. Even if, let’s say Walmart started with MVP of putting the camera on the shopping cart, would they have been able to understand the actual investments it would take to run with the full product versus just the MVP? Or how does that play into the ROI conversation? Because it seems like that’s so tied into the expectations.
0:25:12.2 KW: Yeah. And we don’t do implementation, so I’m not there helping these organizations, and I don’t always get to hear the entire conversation. But these should be short, iterative sprints. And so we say, you really need to be mindful of what it is you’re trying to solve. You want to solve something big. So think big, but then start small, and make sure that it’s actually solving a real problem. Another example that I like to use, that I think provides a really good example of a positive return on investment, is the US Postal Service. It was around the holidays and they were getting a lot of calls to their call center, more than usual because it’s the holiday season. And so you think about, well, what’s the number one question that they get asked? Track my package. So they said, we are not going to have a chatbot that can answer 10,000 questions.
0:26:01.5 KW: We are going to have a chatbot that can answer one question, track my package. So we can say, what is that return going to be? Well, the return on investment is we want to reduce call center volume because our call center agents can’t handle the volume that they’re getting. They said, okay, we’re going to have it answer that one question. We can compare it to data that we’ve previously had. They said, yes, this is a positive return. It is decreasing call center volume and improving customer satisfaction because people can figure out where their package is a lot quicker. From that they said this was a positive use case. Now we can go to maybe the second most asked question and then the third most asked question rather than saying, let me start and answer 10,000 questions all at once, which a lot of people are getting into trouble now because they just throw a chatbot on their website.
0:26:49.8 KW: They’re not testing it, they’re not iterating on it, they’re not making sure that it’s answering those questions correctly. And they’re not thinking big, but starting small. They’re thinking big and then starting big. So they’re saying, I’m going to put a chatbot on my website that can answer a bazillion different questions. And then it starts giving wrong answers and then they get into a lot of trouble. We’ve seen this with Air Canada, we’ve seen this with the city of New York. I mean, we’ve seen this with Chevrolet dealerships that have chatbots on their site. So like, I don’t even need to make stories up. It’s like every day there’s a new story about some failure.
0:27:21.9 MK: But is that also, coming back to your point about, I was trying to conceptualize the over promising point and it seems like that’s intertwined with this huge scope creep that then happens with many projects that it’s like, the scope becomes so wide and there’s also this assumption that AI can handle a big scope, but actually by doing that, you almost burn the house down before you’ve even started building it.
0:27:46.8 KW: Yeah. So over promising can be scope. And it’s also just that we over promise what the technology is capable of doing. So we say it can do all of these things, and we’re like, but it can’t really. Or we’re trying to apply it in ways that it shouldn’t be used. So then it’s not providing the answers that we want or that return that we want. And then people go, well, now I’m frustrated, it’s not delivering on what we said it would. So we’re not going to use it anymore. And we go, yes, because it doesn’t fall into one or more of the seven patterns. So another example, something I did not say was a pattern of AI, is automation. Automation is not intelligence. It’s incredibly useful, but you’re just automating a repetitive task. And so we think about RPA technology, and that’s incredibly useful, but it’s not AI.
0:28:40.0 KW: And so sometimes people want to make things more than they are. Or the technology isn’t there. So an example, back in the first wave of AI, back in the 1950s and the 1960s, we wanted to have voice recognition, and we wanted to have cockpits that were voice enabled so that pilots didn’t have to have all these switches and levers and they could just talk. But that technology wasn’t where it is today, and so it wasn’t ready. We over promised on what we could do and then under delivered because we didn’t have what we needed. And we’re even starting to hit some of that today, because we don’t have machine reasoning. So we can’t ask these systems to do more than they really can. And if we don’t understand those constraints, this is where we run into issues.
0:29:28.8 MK: I am dying to dig into something that you’ve alluded to twice, that a lot of AI is actually a data problem. The reason I want to dig into this specifically is I think there is a perception often in the industry that it’s a technology problem that’s solved with product managers and software engineers and that sort of thing. How have you navigated that? ’Cause we’re three data folks who probably appreciate the difference here, and technologists in general are amazingly smart, curious people. But there are still nuances to data that are not fully appreciated, in the same way that I don’t fully appreciate the complexity of backend systems or front end code or things like that. How do you navigate that in a business?
0:30:13.9 KW: Yeah, we always say it's people, process and technology, this three legged stool. And the easiest thing to do is to fix the technology. Fix, I air quote that. You just add a new technology or you add a new vendor, because it's the easiest, because you can buy it, and it's something that people feel is within their control, but it doesn't actually fix the problem. And then process, that's harder to fix. So we need to say, okay, maybe the way that we're doing it, we can be agile, but we shouldn't follow agile from that software development angle; we need to follow data centric methodologies. And then there's people. And so it's really important to understand that these are data projects, and the issue, which, I don't know, maybe I'm saying something controversial here, is that data isn't sexy. So people don't want to talk about it. People that are in data fields love data, but other people don't necessarily, and they think it's a solved problem. And I'm like, it's not a solved problem, and it will never be a solved problem.
0:31:23.9 MK: Yes. Exactly.
0:31:25.0 KW: Because the more data we create, the more issues we’re going to have. And so people just want to throw technology at it.
0:31:30.4 JH: Oh, Tim’s going to be so sad he was not on this. He’s going to listen to this later and literally be fist pumping in the air and be like, yes, yes.
0:31:37.1 MK: I keep being like, Tim’s smiling somewhere in the world right now at multiple points and he doesn’t know why. He’s just like, oh.
0:31:45.0 VK: This warmth has come over me. Okay, so something that I've been thinking about ever since you talked a little bit about the example, Kathleen, is the postal service example about the chatbot answering that most popular question. So if the ROI proves itself for that single question, are any subsequent use cases solving problems just gravy on top? Because just because it worked for that first one doesn't mean it's going to be appropriate for the second, or maybe not for the third, or perhaps it would have to pull in another pattern, which expands the scope. So is it a freeing place to be after you've come up ROI positive on that first use case? Because then you have a different proof point for a second use case. Because if it doesn't work out, you're like, nope, we're still good. Track my package. We can explore use case number three, but we're going to happily depart from investing further in use case two, as an example. Is that mental model of building on that accurate?
0:32:43.3 VK: I’m curious your thoughts.
0:32:44.8 KW: Yeah, I mean, every use case, every example, every organization is going to be different. And so you have to say, what really is that ROI? Because if the ROI is to reduce call center volume, then maybe it shouldn't just be the second most asked question; it should be the second most asked question that the call center gets. And is AI the right solution for it? I don't know, it depends on what it is. Because maybe if it's, I need locations of different post offices, you can just have it direct to a point on the website. It depends on what exactly those questions are. But yeah, you really drill down. And then when you get to a point that you're like, this is good, we always say AI isn't set it and forget it. So you have to make sure that it continues to perform as expected, and think about what that means for the resources at the end of that iteration. But you don't always need to continue and continue and continue and try and make it more efficient and make it better and have it answer all these different things.
0:33:47.5 KW: Because that's where people do get into trouble, and they start doing things that maybe have a negative ROI where it used to have a positive ROI. Or they could have done a different use case, a different example, a different project. You want to have those quick wins. So we always say, think about what is the smallest thing that you can do that's going to show a positive win. Because obviously you're not going to get investment for further projects if you're showing negative wins, negative returns. So what could continue to be those positive wins? And then at some point you're like, okay, we've done a lot with this, let's move on to our next project. Or how can we add a different pattern into this? Or how can we do something different? But you do want to always be thinking about that. And that's why we always say, come back to this methodology, where it's six steps and it is iterative. So we start with business understanding: what problem are we trying to solve? Then we move to data understanding.
0:34:47.8 KW: We need to understand our data. Do we have access to this data? Is it internal, is it external, what type of data is it? And then from there we go to data cleaning, because again, we know that data is not going to be nice and clean, and we need to do things like dedupe it or normalize it or whatever it is in that next phase. Then from there we can actually build the model, then we test the model, and then we put the model out into the real world, which we call operationalization. So that one question would be one iteration of the chatbot. Then we come back and we say, okay, now let's figure out the next problem that we're trying to solve, and do we have the data for that?
0:35:30.4 JH: I really like the fact that you asked that, Val, because it's giving me a light bulb moment. I have a coworker, Nick, who always says we're not here looking for local maxima. And I feel like that's exactly what you're saying, Kathleen: you prove ROI on that use case, but then you have to pick your head up and say, now what is our highest priority problem? Was that ROI enough that the huge volume coming in asking to track packages is no longer our top business problem? Where should we be putting these people's resources, time, and brain power for AI solutions, instead of keeping them pointed in the same direction? Maybe this is where we pivot to get the most ROI, instead of saying, we started AI here on the chatbot, so we must continue on the chatbot.
0:36:15.4 VK: I'm telling you, there's a company that has this exact work stream, where there's the chatbot AI roadmap, and they are going to run that down versus the reorientation, exactly what you're talking about, Julie and Kathleen, toward the next biggest problem, which might have nothing to do with the chatbot or track my package. Yeah, I like that a lot too.
0:36:35.3 MK: Oh, I love that. [0:36:38.1] ____.
0:36:38.5 JH: Not looking for local maxima, or something like that. I just love the phrase.
0:36:43.2 MK: Oh, see, I just always talk about diminishing returns. I feel like that's equivalent. But sorry, people, we are running out of time and I have so many questions for Kathleen. I am dying to talk about skill set. In your experience, for people that are project managing with AI, is it a different skill set? Is it the same skill set as anyone doing project management? Or even for the team that's involved, what are the things that make the team possibly more successful?
0:37:13.0 KW: That's a great question. So when we talk about AI and project management, we talk about it from two angles. A lot of people are talking about, what are the tools I can use to help me do my job better? And that's where a lot, like 95%, of conversations are. And there's so many tools. People always ask me, well, what's the best tool? And I go, I don't know, what are you trying to do? There are so many different tools; there's no one tool that's best. But then, how do we run and manage AI projects? And that's where CPMAI comes into play. What we found is that when we're looking at running and managing AI projects, we get those traditional project professionals. They're a project manager, maybe a product or program manager. But then we also get project adjacent folks. They're a data scientist or a data engineer, and they've been tasked with running this project.
0:38:02.1 KW: So the skill sets really are unique and varied when it comes to running and managing AI projects; it's not always that traditional project manager skill set. And they're usually a little bit farther along in their career as well. So we found that this complements PMP very nicely, for example. A lot of people that get CPMAI certified are also project management professionals with PMP certification. They're a little bit farther along in their career. It doesn't mean that you can't run and manage AI projects early on in your career, but we do find that they tend to be a little bit more mid to senior in their career.
0:38:40.9 MK: That's interesting. I wonder if that's also because so many of the things that I've heard you talk about, both on your own podcast and today, actually require a really deep understanding of the business and the strategy, and asking the right questions. And I feel like typically those are the skill sets that people get better at with time. I mean, I have some amazing junior people in my team that are naturally just very good at that, but I do find you tend to need a bit of experience under your belt. So I wonder if that's part of the allure, or if it's just that people with more experience are more willing to take some risks.
0:39:15.4 KW: I think it's because they know the industry, they know the real problems, the real pain points, and they're now solving for that. And AI is going to become a part of more and more projects as well, so we may see a shift over time where everybody needs to be an AI project manager, because they're going to be involved in more projects. But what we've seen so far is that it tends to be a little bit later in their career, not super early, because you need to have some of that industry knowledge. I mean, even thinking about ROI: what's the return that you're looking for at that organization? If you're new to the industry, you may not know some of those real pain points.
0:40:00.7 MK: And I know at PMI you’ve talked previously about power skills. Can you tell us a bit more about that?
0:40:05.5 KW: Yeah, sure. So at PMI we call soft skills power skills. And I think that this conversation is incredibly important. So even on AI Today podcast we’ve talked about this and I’ve written articles in Forbes about this. When we think about how we’ve taught in previous years, and what we focus on with school and academics in K-12, it’s been a lot of STEM, so science and technology and engineering and math. Some of those types of skills. And they’re great skills to have. But we also need to be thinking about creative thinking and critical thinking and collaboration and communication. And so now that generative AI has put AI into the hands of everybody, we need to really think hard about what it is that those outputs are, and how we use them. So I always like to think about this as two sides. So how do I use my power skills to be better with large language models and generative AI?
0:41:02.1 KW: How do I become a better prompter because of that? And how do I take the results? How do I use generative AI to help me with my power skills? How do I use it to help me be a better communicator? Maybe it can write emails in tones that I struggle with, or maybe it can help me with translation in ways that I couldn't before. Or how does it help me brainstorm? How does it help me bring teams together and have those collaborative sessions? But then at the same time, how do I take my critical thinking skills and say, was this a correct output? Maybe I shouldn't trust it. What is it, trust but verify? Always think about what it is that's coming out, because we know that these models can hallucinate, which means they can give results where they're confidently wrong. So, okay, let me do a little bit of critical thinking here and drill down one level deeper.
0:41:54.4 KW: Or how can I have better communication skills with it and do a follow up prompt, or write it a little bit differently, or have it help me rewrite and tailor even more finely the results that it's given? And so I think it's really important to use those power skills and not take them for granted. I'm also really interested to see the shift now in learning. Sometimes people have a very negative reaction to AI and go, oh, students are going to be cheating with this or whatever, and so they just have a do-not-use policy. But of course people are going to use it. And even organizations, if they don't really know how to manage this, they'll go, well, you're not allowed to use it internally. Well, guess what? They're all using it on their personal devices, and it's probably way worse, because there's data leakage and there's security issues going on, and the organization can't control that.
0:42:51.2 KW: So we say, don't fight the technology, but really lean into it, and let's all use it in that trustworthy, ethical, responsible way, because it is going to be here. So how do we now teach children these power skills and use the AI technology to help them be better at communication or collaboration or critical thinking or creativity, or whatever that power skill is that you want to think about? I always think about critical thinking. I think that's such an important and usually underrated, under discussed skill.
0:43:30.9 MK: We are all just clicking our fingers in agreement. Do you think critical thinking can be… It’s a very controversial question that I have been wrestling with for my whole career. Do you think critical thinking can be taught or do you think some people naturally are better at critical thinking than others?
0:43:52.1 KW: So I think anything can be taught, but I think that some things come more naturally to people. So you may not be a great communicator, for example; you may struggle to find words, but if you use a large language model, it can help you become a better communicator. Same thing with critical thinking, but it's something that needs to become like a reflex, and so you need to really embrace that. And I think that leaders on teams and colleagues can really help. It's something that everybody needs to be thinking about, and people need to feel safe and empowered to use that critical thinking and say, I understand that's what you said, but what did you mean? Or, I understand that's what you said, but let's drill down one level deeper. And that's how you really get that critical thinking. I've been trying hard to teach it to my children; I have two young kids. And I also think about how I apply this, and this is so incredibly important, because now in the age of AI, there's a lot of misinformation and disinformation. We say you can no longer believe what you see, hear or read.
0:44:54.6 KW: So how do you say, did this come from a source that I can trust, or should I be questioning this? Okay, so an example out there: there's a stat going around that Elon Musk is the richest man in the world, and he has like 44 or 48 billion dollars, and there's 8 billion people in the world, so if he gave each person a billion dollars, he'd still have $40 billion. And I'm like, that math ain't mathing. But people are circulating it like it's the truth. Even one of my friends sent it to me, and I told him, wait a second, this isn't right. And I said to my husband, what is this? And he's like, this is ridiculous. But people aren't thinking, because we're in such a go, go, go world.
0:45:36.1 KW: And you need to understand where this is coming from. People just hear something from the internet, believe it, even though we say, don't believe it, and then they regurgitate it like it's an actual stat. And I'm like, please stop. That's critical thinking: just because you hear something doesn't mean that it's the truth. So maybe do the math and say, okay, that math isn't mathing, or figure out where it came from. And it gets harder because AI is so prevalent. And so that's why critical thinking is really now critical.
0:46:09.0 MK: Okay, I’m going to ask one last question, just because that’s what I like to do. I was looking at some research the other day, and I feel like we are so in the thick of AI from the technology perspective, we’re all living and breathing it. But it does seem that there are these huge sections of society that have such a different experience. And a lot of it is that the wider public can be quite apprehensive about AI and that if you’re trying to market a new feature or product or whatever, potentially you don’t even want to mention that it’s AI. And I was a bit surprised by that. And I was going through San Francisco a couple of months back, and I was blown away because every single ad was talking about AI. And I was like, I don’t get this. Why do all the ads reference AI? And of course, I started chatting to people about it, and they’re like, because it’s San Francisco. It’s because people want to use it to attract talent. And, like, look how shiny we are. We’re doing the cool thing, but that’s not necessarily the same as what the customers want.
0:47:09.9 MK: Is that a tension that you've noticed? Like, I don't know, companies having to package it up and maybe not fully show that this is AI solving your problem?
0:47:22.0 KW: Yeah, I like how you brought that up, because San Francisco is Silicon Valley. They're very tech forward and tech leaning, and a lot of this is coming from there, so of course they're going to be pushing that. And that landscape does look different than other parts of the country or the globe. You also have to think about what industry you're in, and some industries are embracing AI a lot more than others. Probably most of those ads were from companies heavy in tech, and you think about all of the tech companies that are from there. But then there are other industries that are not as forward leaning with AI, even if they're using it, and that's for a number of different reasons. Like healthcare: there are a lot of applications that could be used but aren't always used, or are used as what we call augmented intelligence, where it's not replacing the human but helping them do their job better, for a variety of different reasons.
0:48:21.5 KW: You can't have AI systems diagnose patients. They can suggest a diagnosis, but then the doctor needs to actually provide it; at least in the States, only in very limited use cases can you actually have an AI system diagnose a patient. Construction also is an industry that is not a heavy adopter of AI. Yes, of course there are applications for it, especially when you think about job sites. The recognition pattern is being used to make sure that people are not on the site when they're not supposed to be, keeping that watchful eye over it, or for safety reasons, making sure that they have on protective gear, hard hats. And it can monitor in real time, and then you can fix it in real time so that you can prevent injury. So I think that it depends on the industry. And also there are a lot of fears and concerns when it comes to AI that we don't feel with other technologies.
0:49:14.7 KW: I don't think people fear mobile technology, for example, as much as AI. And this comes from a variety of different places: science fiction, Hollywood. We conjure up all these different ideas of what good and bad AI can do. We think about HAL or the Terminator or Rosie from the Jetsons, and we don't have this when it comes to other technologies. So people have real fears, which are emotional, and concerns, which are more rational, and we need to be addressing that. Messaging plays a part in all of that. And I think that it depends on the industry, it depends on the user and the use case. So we shouldn't necessarily hide that we're using AI, but we don't always need to be so forward leaning if the industry isn't quite ready to embrace it.
0:50:04.5 MK: Thank you so much, Kathleen. That was such an incredible place to end. Yeah, I think we’re all blown away. We’re going to have to do a part two at some point if we can drag you back. But we do like to end the show with something called Last Calls where we go around and share something interesting we’ve read or come across or an event that’s coming up. You’re a guest. Is there something you’d like to share with the audience today?
0:50:23.1 KW: Sure. I mean, obviously the AI Today podcast. I think it's wonderful. It's been going on for eight seasons now, and we're in the middle of a use case series. So if people want to see how AI is being applied in a number of different industries, then definitely check that out. And also one event: I've been an Interactive Awards judge for South by Southwest for a whole decade now. I can't believe it, I know. And I'm going back, so I'm really excited for that. And PMI is going to have a presence there, and I'll be on a panel discussion, so I think that's pretty exciting. I can talk AI all day, every day. So I'll be a judge at the Interactive Awards live; the judging happens on March 8th, and then my panel will be a day or two later.
0:51:08.5 MK: Nice. Thank you.
0:51:09.7 VK: Very cool.
0:51:10.6 MK: Julie, what about you?
0:51:12.3 JH: So I'm pretty proud: this week I finally tried out making a gem in Gemini, and I don't know if any of you have tried it, but I was really proud. It was just one of those things on my to do list. I'm like, I want to play with it, I want to do it. I kept putting it off, I didn't find time. Then I was at work and found a great use case for it, and so I finally took the time to do my pre prompting. And actually, part of what I wanted to call out here was that I finally understood what it was doing. I would hear everyone at work say, I set up a gem, I'm recreating myself, it's the coolest thing ever, it can do so many things for me. And I'm like, whoa, okay, I'm intimidated, but it sounds awesome. So when I sat down to do it with some of my colleagues, they were explaining to me that what it's doing is you're pre prompting Gemini, and you get to save all this information. So, for example, I said the role I'm playing is a consultant in analytics and experimentation.
0:52:05.7 JH: This is my title, here's my LinkedIn, here's what I focus on. Please use the context of these couple of documents I gave it with every answer. And in those documents I was able to give it a lot of slideware and other documents I've created in the past, saying, this is the topic I want you to reference when I'm asking you these types of questions. And so once I really understood that it wasn't magic, that you weren't giving it just a subset of data, you were pre prompting the model, it finally really clicked. And I tried it out today. I said, I'm trying to spin up this specific thought leadership group. I gave it a few sentences of things I had brainstormed. I gave it to my gem, who I named Juniper. And I'm embarrassed to say, I literally went to ChatGPT and was like, what are fun names for gems? Because I was not feeling creative that day.
0:52:57.9 MK: Stop it. No you didn’t. That’s [0:53:00.3] ____.
0:53:01.0 JH: Yeah. Anyway, so I asked Juniper for this, and it gave me like a two page outline for the whole charter of the group. It was a little broad, like, I'll take it and change it. But yeah, I was very impressed by this gem. So, something fun to go try. It was less intimidating than I thought.
0:53:19.2 KW: Very nice.
0:53:19.8 MK: I like that. Over to you, Val.
0:53:22.7 VK: So mine is a twofer, but they're actually related, and it's actually more related to this conversation than I was originally anticipating, which I love. So the first of the two is a Medium article called Thinking in Maximums: Escaping the Tyranny of Incrementalism in Product Building. And it's all about the local versus global maximum. It goes through all these cases of why MVP thinking is actually problematic, and all these stories of companies that actually swung big and why that's so much better than taking it down to the smallest pieces of the product and getting feedback and not really being tied to the full vision. Which, I'll just call out, I'm not sure I agree with all of this, I just find it interesting. And then I was also listening to a podcast from the Product School.
0:54:11.1 VK: They were interviewing the CPO of Instacart, and one of the call out quotes from that was: you won't hear me say or use the word MVP, because I find it to be very reductive, and I think that product is so much bigger than that. So anyways, I've been doing some research around whether this is a theme in the product world and how they're thinking about it. Because obviously, as someone who has an experimentation background, I'm very much a fan of de-risking, and of think big, start small, Kathleen, which I love. So, two interesting reads from very different POVs than where I stand on how to break down the work and de-risk choices as you're moving along the process. Two good ones there.
0:54:53.6 KW: Those are good. I can’t wait to read those.
0:54:56.1 VK: Yeah. And how about you, Moe?
0:54:58.1 MK: Mine have nothing to do with the show; they're just fun. Well, one is Canva Create, which is coming up next month, April 10th, at Hollywood Park in Los Angeles, which I am super excited about. It's just, yeah, a really fun atmosphere, and we always have some incredible speakers, so super pumped about that one. The fun one: I had a session yesterday with my mentee, and she started talking about Gretchen Rubin and the four tendencies and blah, blah, blah. And I was like, this sounds really familiar. And then I realized I'd listened to a podcast on it, but the podcast was applying the four tendencies to children and how you raise your children, and I'd never actually gone back and read Gretchen's full work. So we had a really interesting conversation about it. It's basically about whether you're an upholder, an obliger, a rebel, or a questioner, and it has to do with where your motivation comes from: if it's internal motivation, external, both, etcetera.
0:55:57.2 MK: And the thing that blew me away is that I had listened to it and been like, this is the one that I am. And then as I was talking about it more and more, I was like, oh, I'm a different one. And then I did the quiz and I was like, I'm actually a completely different one to what I thought. So that was a really big eye opener, because, yeah, I've been thinking a lot about my own motivations and how I can get the best out of myself, and life and balance and all of these things. So it was actually also just a really nice way to break up my day. My poor team don't know it yet, but I'm going to ask them all to do the quiz, because I'm so interested to see what everyone is. So, yeah, those are my two last calls. Just to wrap up, I want to say a massive thank you, Kathleen. This was just phenomenal. We have not even touched the sides of all of the possible directions that we could have discussed with you. But a very big thank you for coming on the show today.
0:56:47.5 KW: Yeah, thank you for having me. This was such a wonderful discussion.
0:56:51.4 MK: And we can’t end without saying a big thanks also to our producer, Josh Crowhurst and all of our wonderful listeners out there. If you have a moment, we’d love if you could drop us a review on your favorite podcast platform. And I know I speak for Val, Julie and myself, no matter how many problems you’re solving with AI this year, keep analyzing.
0:57:15.7 S1: Thanks for listening. Let's keep the conversation going with your comments, suggestions and questions on Twitter @analyticshour, on the web at analyticshour.io, our LinkedIn group and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.
0:57:33.5 Speaker 6: So smart guys wanted to fit in. So they made up a term called analytics. Analytics don’t work.
0:57:40.2 Speaker 7: Do the analytics say go for it no matter who’s going for it? So if you and I run the field, the analytics say go for it, it’s the stupidest, laziest, lamest thing I’ve ever heard for reasoning in competition.
0:57:56.4 MK: Quick, before you drop, Kathleen: when you were talking about the communication skills helping with the way you communicate, informing the prompt engineering, and even what you were talking about, Julie? ChatGPT did me dirty. So you know how it shows all… It's essentially showing your search history in that left rail unless you hide it. I went back to the end of 2023, and half of my responses were, give me three more. Give me three more. And I was giving it no more direction or information. I was like, give me five more. Make it funny. Give me five more. That was all I said to it…
0:58:37.4 JH: To be fair, sometimes I say, do better.
0:58:42.9 MK: I was like, no additional information. Just try harder. Rock Flag and AI is a data problem. Ooh, that’s a good one.