We finally did it: devoted an entire episode to AI. And, of course, by devoting an episode entirely to AI, we mean we just had GPT-4o generate a script for the entire show, and we each just read our parts. It’s pretty impressive how the result still sounds so natural and human and spontaneous. It picked up on Tim’s tendency to get hot and bothered, on Moe’s proclivity for dancing right up to the edge of oversharing specific work scenarios, on Michael’s knack for bringing in personality tests, on Val’s patience in getting the whole discussion back on track, and on Julie being a real (or artificial, as the case may be?) Gem. Even though it includes the word “proclivity,” this show overview was entirely generated without the assistance of AI. And yet, it’s got a whopper of a hallucination: the episode wasn’t scripted at all!
Photo by ChatGPT-4o (obviously, right?) as prompted by the Analytics Power Hour’s Senior AI Specialist, Michael Helbling.
[music]
0:00:06.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.
0:00:18.2 Michael Helbling: Hey, everybody, welcome. It’s the Analytics Power Hour, episode 270. Back in 2023, I asked AI to write an intro to the podcast, and in the words of Moe, AI did a pretty shit job. But AI hasn’t gone away, and the possibilities, capabilities, and potential of these LLMs are expanding, seemingly by the minute. So today we’re strapping on our robot helmets and plunging headfirst into the wild, worrying world of artificial intelligence. So what’s AI really for? Is it just a fancy predictive model? Maybe just a massive checkbox on your boss’s latest buzzword bingo card? I don’t know, hype versus help? Automation versus annihilation? And whether or not your next co-worker might be, I don’t know, a chatbot with boundary issues. So grab a drink, mute your Slack notifications, and prepare to find out if your career path is evolving or being quietly replaced by a GPT-powered spreadsheet whisperer. Speaking of spreadsheet whisperers, let me introduce my co-hosts, Julie Hoyer.
0:01:23.2 Julie Hoyer: Hi there.
0:01:28.1 Michael Helbling: I’m very excited that you’re on the show, Julie, because I feel like you’re probably one of the most knowledgeable people about AI in our group, so I’m going to be leaning on you quite a bit.
0:01:36.6 Julie Hoyer: I don’t.
0:01:37.9 Michael Helbling: No, I’ve been observing all of us and pretty sure, yeah, you’re going to be.
0:01:40.6 Julie Hoyer: I’m the sleeper.
0:01:45.9 Michael Helbling: I mean, we’ll see. We’ll see. All right, next up, Val Kroll.
0:01:49.5 Val Kroll: Hello.
0:01:50.5 Michael Helbling: Val. I did love the April Fools stuff that you and Tim put together for Facts and Feelings.
0:02:00.6 Tim Wilson: A month ago.
0:02:01.6 Michael Helbling: So I guess that’s a good use of AI. Yeah.
0:02:02.6 Val Kroll: Yeah. Facts and Furious.
0:02:04.3 Michael Helbling: Yeah. April Fools a month ago. Everyone knows when April Fools is, Tim. Moe Kiss…
0:02:10.2 Val Kroll: He’s just remembering back.
0:02:12.2 Michael Helbling: Yeah, welcome, welcome.
0:02:17.5 Moe Kiss: Thanks. Excited to be here.
0:02:18.3 Michael Helbling: Would it surprise you to learn that good chunks of that intro were written by AI?
0:02:25.0 Moe Kiss: Yes. Yeah, yeah.
0:02:25.8 Michael Helbling: They were.
0:02:26.3 Moe Kiss: Yeah. Sounds legit.
0:02:28.9 Michael Helbling: The models have progressed quite a bit. And speaking of people who haven’t progressed quite a bit, Tim Wilson. Count…
0:02:37.8 Julie Hoyer: Insert cheering sound.
0:02:38.4 Tim Wilson: XLOOKUP.
0:02:41.2 Michael Helbling: XLOOKUP.
0:02:42.8 Tim Wilson: That’s right.
0:02:44.4 Michael Helbling: I’m whispering to the spreadsheet. That is actually the first…
0:02:50.0 Val Kroll: Eye roll.
0:02:51.6 Michael Helbling: No. I literally read Tim’s blog way back in the day with Excel tips and tricks. Like, I learned things from Tim Wilson about Excel. So that is true…
[overlapping conversation]
0:03:02.1 Tim Wilson: 2008.
0:03:04.5 Michael Helbling: Hey, listen, it’s working for you, so don’t give up, all right? I’m Michael Helbling. So, yeah, let’s… What do the kids call it? Vibecast or Vibe podcast? I don’t know. Let’s do this thing. All right. So, Julie, is it going to take our jobs, this AI thing?
0:03:25.6 Julie Hoyer: No, definitely not.
0:03:26.0 Michael Helbling: All right. No, thank you.
0:03:26.1 Julie Hoyer: My experience.
0:03:26.4 Michael Helbling: Great show, everybody.
0:03:29.6 Julie Hoyer: I’m not worried. See you next time. Rock flag.
0:03:41.0 Michael Helbling: See you next time. Okay, but why isn’t it going to take our jobs? We should probably dig into that a little bit. And let’s also maybe dig into what our jobs are a little bit so that we can kind of see where AI helps, where it doesn’t. And I guess other people can also chime in too.
0:03:57.5 Julie Hoyer: I guess. Okay. Most recently, something I’m running into a lot is… And I feel like this is an example we’ve talked about previously on the podcast, multiple times. A lot of people have written blog posts about it. And it’s just funny because now I’m fighting this battle on multiple fronts at work. The same discussion of: I think for analysts, AI is not ready to just replace us. Even for, like, writing queries. There is no, like, talk to your AI and ask it your business questions and have the data insights come from your big data warehouse or anything. People are still so excited about that.
0:04:30.7 Tim Wilson: Wait a minute.
0:04:32.5 Julie Hoyer: From what I have seen, it’s not…
0:04:32.4 Tim Wilson: Debugging versus giving it a… I mean, Julie just said from business question to having it write it.
0:04:37.9 Julie Hoyer: Yeah.
0:04:40.3 Tim Wilson: Which…
0:04:40.9 Val Kroll: Yeah. Fair.
0:04:43.1 Michael Helbling: Yeah, yeah. I think the distinction is important, but let’s let you keep going.
0:04:49.6 Julie Hoyer: Yeah. I think it’s still that there’s a lot of this, like, fantasy of, like, it’s gonna be so much faster for an analyst. Like, go into your analytics tool and just, like, type away your questions that you have to answer and get insights really quick. And I have just had some specific, like, experiences recently where I’m like, see, it’s still not. You guys are saying that that’s, like, the promise, that’s what they want, but it’s not true. So I’m still not seeing it, even in that sense of, for an analyst and reporting, we’re not close to that. And that takes a ton of time as an analyst: to synthesize the data, put it into a coherent answer, and have it be insightful for your business stakeholder.
0:05:26.4 Moe Kiss: Without giving away too much, this is a delicate tightrope to walk. Ah, so what we’ve been trialing… There are some super smart people at Canva. Adam Evans had a really brilliant idea, and then Sam Redfern, who I used to work really closely with, has been exploring kind of productionizing it. It’s been really cool. It’s looking at, like, what are the top queries that are getting asked, like SQL queries, against a table, right? Or like a report table or a model table, and then using AI to help generate the best query possible to get back the data. And what we’ve noticed is if we do that, and then we return the data back, and then ask our business question, it’s doing a better job. And we’re starting to test that out across multiple different business streams. And I’ve played with it a decent amount and I’m pretty comfortable. Like, I think the thing is, we’re definitely not at a point where you don’t need a data person involved at all. Like, you still definitely need to QA data, you definitely need to be looking at the query logic, all that sort of stuff. But it is a lot more promising, probably, than I expected, in a faster time. And I’m almost… I’m going to throw out something controversial, and Tim is, like, sitting on the edge of his seat. I think we might get to a point where we don’t need dashboards. Mic drop.
0:06:57.7 Tim Wilson: Well, yeah, I think I agree with that.
0:07:00.5 Moe Kiss: Oh, maybe it’s not surprising.
0:07:02.1 Tim Wilson: Well, I mean, I think dashboards are generally bullshit.
0:07:04.6 Julie Hoyer: So I was gonna say, more that I think, Moe, hearing your success with this so far, though, the difference for me is I’m not working with clients that are building something homegrown. They want something out of the box that works. I think people don’t realize the training that goes into it. I mean, it’s context-heavy. It takes a lot. There’s a lot to think through, and people aren’t connecting those dots of, like, all the steps in between. But…
[overlapping conversation]
0:07:30.6 Moe Kiss: Sorry, you’re thinking that most people just want to buy a tool and be like, here’s access to our data warehouse. And now…
[overlapping conversation]
0:07:37.0 Tim Wilson: Those fucking tools are hitting me every goddamn day. We have solved it. And they stand up the biggest fucking straw man: the problem is business users can’t get to their data, and imagine if they could just ask what were sales like in the Northwest region last month and it would generate that query. And it is the biggest fucking farce. I was having the exact same reaction. Within an enterprise organization, with experts that have the ability to have captured a lot of queries and captured a lot of expertise and train it, I do feel like it is very, very different from the promise of the BI platforms and all these Johnny-come-lately upstarts that are like, we can solve this. That drives me nuts.
0:08:28.3 Julie Hoyer: Because they’re saying, you come with all your data. We have a really good LLM. Now ask it your questions. And it has the data there so it can answer for you. And it’s not taking into account that it’s not smart enough. It doesn’t know… You haven’t trained it how to actually… It doesn’t have the context around your data. It doesn’t have the context around your business. That takes so much.
0:08:47.2 Moe Kiss: The thing that we’re still having to do is… We have a very unique data warehouse in how we’ve chosen to build it, where we’ve tried to build a lot more, like, small, lean tables to answer specific questions, which means that we have thousands of tables, right? And so the joins become complex, all that sort of stuff. And the thing that we are still very much having to do is helping point it at the right table and provide context on that table. And so I think the thing where my own thinking has developed quite a bit is that previously I probably used to see our data warehouse as being almost a barrier to us using AI, whereas now I’m starting to see it as much more of an advantage. But you still need that SME knowledge of, like, this is the best table to use. And one of the ways that we’ve been solving for that is looking at what are the top dashboards that people are looking at at a company level, because often the report layer table that’s sitting underneath that dashboard is the best possible data source, because it’s all structured and clean and has all the right dimensions. And then we point it at that specific table. So, like, I totally hear what you’re saying. I just have such a different perspective because we do have the SME knowledge.
0:10:04.1 Julie Hoyer: What’s intriguing is if what you’re actually using as your training data is the history of all the queries that have been run. And I mean, that’s kind of the wisdom of the crowds. If your training data is the queries that the experts have written, now we can estimate the best query.
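A minimal sketch of the pattern Moe and Julie are describing: mine the warehouse’s query history for the queries experts most often run against a table, then pass those to an LLM as few-shot context for a new business question. Everything here is hypothetical; the query-log structure and the call_llm() helper are stand-ins for whatever your warehouse and LLM client actually expose.

```python
# Hypothetical sketch: use expert-written query history as LLM context.
from collections import Counter

def top_queries_for_table(query_log: list[str], table: str, n: int = 5) -> list[str]:
    """Return the n most frequently run historical queries that reference `table`."""
    hits = [q.strip() for q in query_log if table in q]
    return [q for q, _ in Counter(hits).most_common(n)]

def build_prompt(question: str, table: str, examples: list[str]) -> str:
    """Assemble the prompt: expert-written example queries first, the new question last."""
    shots = "\n\n".join(f"-- Example query {i + 1}:\n{q}" for i, q in enumerate(examples))
    return (
        f"You write SQL against the table `{table}`.\n"
        f"Here are queries our analysts commonly run against it:\n\n{shots}\n\n"
        f"Write one query that answers: {question}\n"
        "Return only SQL."
    )

# Usage (call_llm is a stand-in for whatever LLM client you use):
# examples = top_queries_for_table(query_log, "reports.sales_daily")
# prompt = build_prompt("What were sales in the Northwest region last month?",
#                       "reports.sales_daily", examples)
# sql = call_llm(prompt)  # a data person still QAs the query logic
```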
0:10:19.7 Michael Helbling: What’s definitely become clear to me is that your source data requires many different other pieces of metadata or parallel data: the queries being run, what questions people are asking internally, when reports are being used, what other things are happening in the business that aren’t stored in that dataset, so that an inference engine like an LLM can actually come up with something that is not just sort of that intern-level, time on site was 42 seconds type of bull crap you get from big agencies. Did I just say that out loud? Sorry.
0:10:59.6 Tim Wilson: I think maybe somewhere you would have named a specific agency.
0:11:02.6 Michael Helbling: I didn’t want to go that far. But it’s also interesting, because of the companies who are out on the forefront of this, trying to build these chat-assisted or AI-assisted data exploration tools, probably the one that I’m most familiar with right now is Zenlytic. They’re very upfront about the fact that you have to build this other layer, which they call a cognitive layer, on top of it so that you can actually leverage their tool. And they don’t claim to provide you insights at this point. They just claim to provide you ad hoc data. So if you need to get a metric, they can do that for you. And I appreciate both the honesty and the progress. Because I am bullish on this. I think there is a future here. But I also think we’re nowhere close to asking a question and getting an answer that includes context and insight that gives us a next action.
[overlapping conversation]
0:12:08.2 Tim Wilson: Picture this. You’re stuck in the data slow lane, wrestling with broken data pipelines and manual fixes. Then, suddenly, streaking across the sky: faster than a streaming table, more powerful than a SQL database, able to move massive data volumes in a single bound. It’s not a bird. It’s not a plane. It’s Fivetran. Need a hero for data integration? Fivetran, with over 700 pre-built, fully managed connectors, seamlessly syncs your data from every source to any major destination. No heroics required on your part. That means no more data pipeline downtime. No more frantic calls to your engineers. No more waiting weeks to access critical insights. And it’s secure. It’s reliable. It’s incredibly easy to deploy. Fivetran is the tool you need for the sensitive and mission-critical data your business depends on. So get ready to fly past those data bottlenecks and go learn more at fivetran.com/APH. Unleash your data superpowers. Again, fivetran.com/APH. Check it out.
0:13:16.1 Julie Hoyer: So, an example of that cognitive layer that we’re running into. We were trying to use Explore Assistant in Looker. And I don’t love it. I don’t understand… Anyways, we won’t go down that. I don’t love it. Here’s an example. Two examples. We don’t have that cognitive layer. I don’t know how we build that in. We’re trying to do a project that’s pretty at scale, and a cognitive layer for every client we might use this for, right? That’s quite a bit of work to spin up. So we were even doing a test use case where we were working with the AI agent in Explore. And we asked it, we said, show us the top 10 performing landing pages, like cost per landing page, right? And then we asked it for the worst performing. And, look, I was working with some engineers, and they’re like, look, we got it to provide the data we were expecting. And then I realized it actually wasn’t understanding best and worst either. Like, even those semantics: me saying best cost per landing page would mean the cheapest ones. It was showing me the most expensive, and vice versa.
0:14:22.4 Julie Hoyer: When I said worst, it was showing me the cheapest. So it’s even little things like that. Or we were trying to ask about a specific metric, but we were just using the layman’s terms, right? Like a business user asking about it. And because the name coming from the data source is nowhere near that, you know what I mean? It was never going to get to that data point for us.
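For what it’s worth, the best/worst confusion Julie hit is exactly the kind of thing a cognitive or semantic layer has to encode, because the direction in which a metric improves depends on the metric. A toy sketch, with made-up metric names:

```python
# Toy illustration: "best" for a cost metric means ascending (cheapest first),
# while "best" for a revenue metric means descending. The metric names and
# this dictionary are invented; a real semantic layer would live in config.
SEMANTIC_LAYER = {
    # metric name: direction in which the metric improves
    "cost_per_landing_page": "asc",   # lower cost is better
    "revenue_per_visit": "desc",      # higher revenue is better
}

def order_by_clause(metric: str, qualifier: str) -> str:
    """Translate 'best'/'worst' into a correct ORDER BY for the given metric."""
    improves = SEMANTIC_LAYER[metric]
    if qualifier == "best":
        direction = improves
    elif qualifier == "worst":
        direction = "desc" if improves == "asc" else "asc"
    else:
        raise ValueError(f"unknown qualifier: {qualifier}")
    return f"ORDER BY {metric} {direction.upper()} LIMIT 10"

print(order_by_clause("cost_per_landing_page", "best"))
# -> ORDER BY cost_per_landing_page ASC LIMIT 10  (cheapest, not priciest)
```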
0:14:42.1 Moe Kiss: Okay. So what is top of mind for me right now is, like, why do I not seem to be having these same challenges? Is it just that we also have an enterprise account and we’re uploading so much more of our own business context, and so then we’re not having these hurdles? Is that, like, a big part of it?
0:15:03.8 Michael Helbling: I think, yeah, because what you can train an LLM on is all about what you get back out of it. So I live in a world, Moe, where I don’t have clients who are taking all of their data and storing it in an LLM, or, how about this, consciously executing a data strategy aligned with the growth of AI usage. I have some clients who are doing quite a bit with it, but what they’re seeing is the exact same thing. They now have people full time whose job it is to ensure that the AI is getting fed the right information, which I think is kind of fascinating. And then the other thing is that there’s such a big expectation gap because of what AI is able to do in other categories. So, like, for instance, I sat down with my son recently and we, quote, vibe coded a video game the other night, and we had a working video game in like five minutes. It kind of blew my mind. And here’s why: because I don’t know how to write code, but this AI took a step forward in capability so big it makes people think, oh, that step forward is available in every context. And it’s simply not. Because, and I’ve thought about this a lot, like, why is it so good at coding already? And I think the reason why is because code lives all in the same place and is logical in its structure. So, like, the code is right there.
0:16:35.9 Moe Kiss: It’s good at some code.
[overlapping conversation]
0:16:37.1 Michael Helbling: No, no, it’s not perfect at coding, but it’s the best… Like, writing code is the most product-ready thing AI can do, I think, besides making cool animated versions of your own photos now. It’s what it’s really amazing at. And it blows my mind how good it is now at it. Like, it’s so impressive. But I also start to realize that, like, oh, yeah, it’s because everything it needs to know is right there. It’s all in the code.
0:17:06.9 Moe Kiss: Yeah, but, okay, can I talk you guys through an example that someone in my team showed me.
0:17:11.5 Tim Wilson: I want to call out that Ethan Mollick did a whole, like, vibe-coding-to-build-a-game piece that’s worth a read. It was kind of… It was speaking things into existence, where he… It was a little bit more involved a game, but kind of where he took steps forward and steps back. So your example just reminded me of that.
0:17:29.5 Michael Helbling: Way to slip in a last call there, Tim. Nice job.
0:17:33.2 Tim Wilson: Nope, wasn’t even on my last call.
0:17:36.4 Michael Helbling: Oh, Moe yeah, go ahead.
0:17:37.1 Moe Kiss: Just showing off altogether. Okay, so someone in my team showed this last week. And to be fair, I have not played with Claude at all. I have been quite monogamous in my AI tooling. And basically what he did is created a new Claude project. He uploaded into it the LookML for an existing… So LookML is the language that sits behind Looker, which is a dashboarding tool, for anyone listening. You have to write LookML code to basically get the data in the right format to build a dashboard. And so he uploaded, basically, the LookML for an existing Look. He then added the underlying data that sits behind it from the data warehouse, as well as the code of how that table is created, then gave it a sample data set. And basically saved these all to his project. And then within, like, a good 15 minutes, Claude… Because he put a lot of thought and effort into the steps and what data and what context he uploaded, it gave him back the LookML to build a dashboard. He turned that around in 15 minutes and built this whole new dashboard for our stakeholders, which, to be honest, we didn’t have the resources or the time to build.
0:18:54.0 Moe Kiss: He definitely talked us through the fact that he had to make tweaks and make changes to this or that, or the wrong visualization was picked here, or he wanted the colors to be this or that sort of thing. But that is another example of: it is so much about what you’re putting in. And I just wonder sometimes if people’s expectations are, here is one very selective bit of data, now answer this really complicated question, which it doesn’t have enough business context to do. And that we need to spend more energy on putting quality in. Oh, I don’t know. I feel like Tim’s rolling his eyes at me.
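A rough sketch of the kind of context bundle Moe’s teammate assembled for Claude, based on her description: existing LookML as a style example, the table’s DDL for grounding, and sample rows. The function and argument names here are invented for illustration, and call_llm() stands in for any LLM client; the takeaway is how much curated input preceded the 15-minute output.

```python
# Hypothetical sketch of assembling the project context Moe describes.
# Nothing here is Canva's actual code.
def build_lookml_prompt(request: str, existing_lookml: str,
                        table_ddl: str, sample_rows_csv: str) -> str:
    """Bundle curated context plus the task into a single prompt."""
    return (
        "You write LookML for Looker dashboards.\n\n"
        f"LookML from an existing, similar Look (match its conventions):\n{existing_lookml}\n\n"
        f"DDL showing how the underlying report table is created:\n{table_ddl}\n\n"
        f"Sample rows from that table (CSV):\n{sample_rows_csv}\n\n"
        f"Task: {request}\n"
        "Return complete LookML, and flag any fields you had to guess at."
    )

# prompt = build_lookml_prompt(
#     "Build a dashboard of weekly active users by region.",
#     existing_lookml, table_ddl, sample_rows_csv)
# lookml = call_llm(prompt)  # a human still reviews visualizations, colors, logic
```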
0:19:31.7 Tim Wilson: Hopefully not to generate a fucking dashboard. But okay, that’s awesome. We found a fashion…
[overlapping conversation]
0:19:36.1 Moe Kiss: It generated a fucking dashboard. Do you know how excited I was? Big shout out to Steve Austin.
0:19:41.3 Julie Hoyer: I’d be excited too. It’d save you all those steps and time.
0:19:43.1 Michael Helbling: Hold on. You just said we’re not going to need dashboards. Why are we generating them? No, I know.
0:19:48.9 Moe Kiss: People still think they need them now. But in a year, I don’t think they will. Because… Ultimately, you look at a dashboard to see, like, are we on track or not? Like, what was our performance? Are we hitting it? Blah, blah, blah. I feel like it’s a crutch that people need. And it’s like, if you can answer that question without having a dashboard, why would you need it?
0:20:10.8 Michael Helbling: Yeah. I look forward to a future where my brain gets stimulated and I smell apples when sales are down in the Northwest.
0:20:21.6 Tim Wilson: I mean, that’s kind of bizarre. I mean, to me, the only place… Not to mount a defense of the dashboard, but the only place a dashboard is really useful is actually showing, in a consistent manner, are we delivering against the business outcomes, against our targets. So I actually would think that would be useful. I don’t want to ask an LLM every time, what is it I care about? What metric is it that I want to look at? I don’t know. That’s maybe a topic for a whole other…
0:20:51.9 Val Kroll: For you to say, where am I underperforming? And have it spit it out.
0:20:55.2 Moe Kiss: Am I on target? Where am I underperforming? And what action should I take?
0:21:04.6 Tim Wilson: Okay. Actions.
0:21:06.8 Julie Hoyer: Couldn’t say that one.
0:21:07.4 Tim Wilson: I think I do need another…
0:21:08.9 Moe Kiss: Tim’s going to need a drink.
0:21:10.2 Michael Helbling: I think Canva has another breakthrough product category here, analytics tools.
0:21:15.6 Tim Wilson: I did a thought experiment where I said the perfect dashboard would be one that only showed where you were underperforming. So you’d have the same structure, but everything would go away if you were actually delivering, if you were meeting your results, and so you’d wind up with something very sparse. But I still think there’s human value in knowing what to look at and where. Because that’s been another thing with so much of the hype around AI, and this even goes back to other products, pre-AI, that were still doing the, oh, our users don’t want to see charts, they want to know what’s going on. And so it basically would barf out text that described the charts. For us as human beings, a visual representation of data is easier to internalize than prose.
0:22:08.2 Moe Kiss: Some tools do the visualization too. Like I didn’t realize how good Claude is at doing that. Like it does visualizations for you and like scorecards and all that sort of stuff. So it’s like, do you need this dashboard to exist in perpetuity? Or is it like, you’re going to do your check-in at whatever cadence it is for whatever meeting, and it just pops it up and there you go.
0:22:28.3 Tim Wilson: But I hope that it would pull up the same thing every time. Like there’s the same… There’s value in consistency.
0:22:33.9 Moe Kiss: That’s a good point.
0:22:34.6 Tim Wilson: Structure,
0:22:35.2 Michael Helbling: Yeah, but I think you could have a prompt that schedules that and runs it the same way every time.
0:22:41.0 Julie Hoyer: But is it more efficient, like, technologically, and the whatever it takes to run AI, to keep asking it the same thing, when you could just create it once and let it sit and go look at it, right? Like, is it really worth the, like, energy…
0:22:54.4 Michael Helbling: Much like computers, I expect the cost to come down over time. So I don’t know. Who cares about that? I mean, the inefficiency.
0:23:07.4 Julie Hoyer: It just feels inefficient. Yes, that’s exactly what I want: let me build it and save it in a dashboard, and I’ll go click on it every Monday. Like, that to me just seems easier.
0:23:16.0 Tim Wilson: But the hurdle that is much easier, and it goes a little bit to that example of, I’m building something, I’m writing some code, I’m writing some SQL, I’m doing just the traditional task where I might hit a snag and read through and put in comments and try to figure out where the hell it’s breaking, and then go and search and read like seven Stack Overflow posts that aren’t quite on point. I mean, the limited work I’ve been doing, when I’m like, I specifically want to take the system time and I want to convert it from this to this and compare it to that. And it’s probably old school now, but I wind up in Perplexity. And I think, Michael, you made a comment offline that the coding part, it is good. And with the interface I was using with Perplexity, I’m like, oh, I can watch it. It’s looking at the Posit community. It’s looking at Stack Overflow. It’s basically doing a bunch of Google searches and consolidating and comparing them to my query. And then it’s returning me code that is very good and reliable.
0:24:21.5 Tim Wilson: But that’s not me asking it a business question. That’s me as an analyst saying, I want to see this. Can you help me write some code to do that? And because I’m asking it about doing stuff in R, I have a decent grounding in R. So what comes back… One, it’s not coding the whole video game where I know nothing. It’s giving me 10 lines of code. And I’m like, oh, I didn’t know that system function existed. That’s pretty cool. I’ve learned more. So in that case, I feel very comfortable that it’s rapidly speeding things up. Instead of me doing 12 searches and winding up on the same unhelpful Stack Overflow post, it’s actually returning the right result. And I’m like, oh, I’ve learned something and moved on. I was like, holy cow, this accelerated my iterations on writing the code. I’m like, that’s pretty cool. And that seems wildly better than it was even six months ago.
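Tim’s snippet would have been in R (the “system function” he mentions is plausibly Sys.time()), but here is a Python analogue of the kind of ten-line, self-contained answer he describes getting back: grab the system time, convert it between zones, and compare it to a cutoff.

```python
# Python analogue of the small task Tim describes; in R it would likely
# involve Sys.time() and base time classes or lubridate.
from datetime import datetime, timezone, timedelta

now_utc = datetime.now(timezone.utc)                            # system time, in UTC
now_sydney = now_utc.astimezone(timezone(timedelta(hours=10)))  # fixed-offset AEST
cutoff = now_utc - timedelta(hours=6)                           # "compare it to that"
event_time = now_utc - timedelta(hours=3)                       # e.g., a log timestamp

print(f"UTC:    {now_utc:%Y-%m-%d %H:%M}")
print(f"Sydney: {now_sydney:%Y-%m-%d %H:%M}")
print("Event within the last six hours?", event_time > cutoff)
```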
0:25:23.8 Val Kroll: So to go back to the original question that launched us into this, which was like, is…
[overlapping conversation]
0:25:33.9 Val Kroll: Yeah, I’ve been holding on to my answer for this whole time.
[overlapping conversation]
0:25:35.1 Michael Helbling: Listen, Val is trying to talk here, people. Come on.
0:25:41.8 Val Kroll: No, I just remember one of my… Because lots of people have written about that topic. Like, that’s definitely an interesting thing that people read. But one of the best articles I had seen on this was, no surprise, from Eric Sandosham. And one of the concepts that he brought up around this was that AI is really good at problem solving, and it’s getting better and better, but it’s not making a lot of progress on the problem-defining part of it. And that’s where that human component always is. And that’s, like, the business context that we’ve been talking about: coming up with the hypothesis, structuring exactly what tasks needed to be done in order to do whatever you were working on. Tim, if you want to reveal your project, I’ll leave that to you. But I think that that’s a really helpful way that my brain kind of organizes and categorizes where there will continue to be improvement, but where there’ll always need to be an assist. And that’s why we can be comfortable.
0:26:35.7 Michael Helbling: And I’ll go a step further than that, Val. I actually really think that as AI comes into its own, it’ll start to really show who can do that really well and who cannot in organizations. Like, AI is going to basically highlight the people who are really shit at understanding the levers that drive the business and drilling down into the causes and effects that actually make things happen. And it’s actually going to make people look bad eventually, because it’ll be like, oh yeah, you’re not getting anything of value out of this tool. That’s strange. Let me just… Oh, no, it’s like that. And then suddenly that person’s going to be shown to be, like, not really of the caliber…
0:27:20.4 Moe Kiss: I don’t know. I feel like maybe it’s just me being crazy optimistic as usual. I see this as really exciting. Like, there are so many boring bits of the data job.
0:27:31.3 Tim Wilson: No one’s saying it’s not.
0:27:34.2 Michael Helbling: I’m bullish. Totally. I want those people out. So I think that’s great.
0:27:39.8 Tim Wilson: But it’s the difference between… And Moe, you shared an example that did not make it to a recording, and we won’t name who did it. It was some business partner saying, hey, can you generate some hypotheses? Like, the prompt literally asked, can you generate hypotheses, took those, threw them over the wall to you and said, hey, can you or your team prioritize and validate these? Compare that to… And I’ve heard… Like, I was talking to John Lovett about how he went about writing his latest book, The New Big Book of KPIs by John Lovett, which now doesn’t have to be my last call. And part of his…
0:28:18.3 Val Kroll: Look at you stuffing this episode with last calls.
0:28:21.5 Tim Wilson: Stuffing it in.
0:28:20.3 Michael Helbling: Jeez.
0:28:24.3 Tim Wilson: But part of his technique, and I’ve heard others talk about it… I mean, this is not totally original, but he said, imagine you are a… He gave specific industry people. He said, you’re responding to me as an ideation assistant. And I feel like a lot of people, and I mean Ethan Mollick, Jim Sterne, John Lovett, lots of people are saying, let the AI be a really smart coworker, use it as a sounding board, still be a human. But instead of saying, hey, Julie, can you hop on a call so we can kick some stuff around, before you’ve done that and gone to find time on Julie’s schedule, it can instead be: hey, you’re an analyst with an applied math master’s degree who’s been working in an agency, whatever. Now, I have a question about this. What sort of prompts would you… What would you ask me? What would you think? What would your ideas be? So that is an ideation companion. And I’ve tinkered with that as well. Not saying, give me this, and I want to take and edit the responses, but much more of a, I want to use you as a nonjudgmental and infinitely patient sounding board.
0:29:33.6 Tim Wilson: And that, I think, from a hypothesis-generation perspective… Because that forces me to actually express: what am I thinking? What do I see? I think it might be this. I think it could be this. Just like I would in more of a human interaction, as opposed to, I want to write the one-sentence prompt and have it just give me the answer. And when you look at some of the people out there who are posting, their prompts are pretty involved. And it is the case, back to the coding, that Cassie Kozyrkov had an article where she said, if you know how to code, it is actually in many cases faster to write the damn code than to write a prompt that describes what you want the code to be. And that’s very different from, Michael, your example of writing the video game with your son.
0:30:19.2 Michael Helbling: Oh yeah, because I can’t write the code.
0:30:21.4 Tim Wilson: Right. So I’m like, so I’ll just describe it and I’ll work in that prose. And I was like, oh, okay, that makes… I don’t know.
0:30:27.4 Val Kroll: On the sounding board front, wouldn’t it be cool if we could make it talk to Julie’s gem?
0:30:34.2 Moe Kiss: Oh yeah.
[overlapping conversation]
0:30:35.3 Julie Hoyer: See, that’s the part of my job that I like. If people were like, oh, I don’t want to bug Julie, I’ll talk to her gem, I’d be like, I’m gonna toss this gem real quick.
0:30:43.8 Michael Helbling: So speaking of sounding board, so I built this in Notebook LM Plus, I just took all the personality assessments and leadership style stuff I’ve ever done, dumped it in there, and I made an AI chat agent that people can interact with about my personality, my style, ask questions about how to conduct meetings with me, and I’ve given that to my team. So that…
0:31:09.3 Val Kroll: Oh my God, gotta go, sorry guys, gotta go.
0:31:11.4 Tim Wilson: I’m out!
0:31:12.0 Val Kroll: I’m busy, all of a sudden.
0:31:13.5 Michael Helbling: But I mean, there’s lots of these amazing little things you can do with tools like that.
0:31:18.2 Julie Hoyer: I love that.
0:31:20.2 Michael Helbling: And it’s not just idea starters. It can also be things we never thought of as tools. Because before, what I’d do is type up sort of a one-pager of, here’s how I work best with people, and people would read it, or throw it away, probably. But now it’s sort of, if you’re curious about something, here’s eight years of leadership personality stuff I had to take tests on; feel free to just ask it anything.
0:31:50.7 Julie Hoyer: And kind of fun if you’re like, ooh, I don’t want to ask Michael this question, but I need to know, I’ll ask his personality, AI agent.
0:31:57.7 Michael Helbling: It’s not me in there, okay, it’s just about me.
0:32:01.4 Val Kroll: It’s like a Black Mirror episode.
0:32:02.9 Michael Helbling: And before anyone asks, I could only share it within my own organization, because that’s how Notebook LM Plus works, so I cannot share it with you, so don’t ask.
0:32:12.9 Moe Kiss: Do you know, okay, I need to have a [0:32:14.0] ____ about something.
0:32:16.1 Michael Helbling: Yeah, do it.
0:32:18.0 Julie Hoyer: We love a gripe.
0:32:19.4 Moe Kiss: This is where I’m seeing AI really just, like, fuck up my life. I’m so sick of reading things that have been written by AI. I am, like, so violently angry about it. Especially, it is getting overused to write work on analysts’ and data scientists’ behalf, like putting together the findings. And it is crap. Because I think there is a way you can make it okay, of, like, you write it and just have it clean up your text, versus… But I am reading so many documents that are written by AI. And the thing that also frustrates me is, if anyone has a half-baked idea, it’s suddenly, like, here’s a doc on it. And you’re like, great. So now I have, like, 5,000 times more docs to read. And it’s a half-baked idea because you didn’t have to spend the day writing it, or a couple of hours writing it. You could basically leave yourself a voice note and then turn it into a doc. And so people are just throwing these docs around. And I’m like, it’s actually so frustrating.
0:33:26.7 Michael Helbling: You should see some of the social media promotions that are like AI generated. They’re the worst.
0:33:35.2 Tim Wilson: Tell me about it. But Moe, I’ve got a solution for you. You take those docs, you chuck them in an AI, you get a one-sentence summary, move on.
0:33:43.6 Moe Kiss: And then, no, but the problem is…
0:33:46.6 Val Kroll: No, I didn’t really like your insert one-sentence idea.
0:33:49.2 Moe Kiss: The issue, though, is that often the directness, or, like, the takeaway, gets so watered down that what you’re reading starts to turn into smush. And you’re like, it loses the crispness of what the idea was.
0:34:09.7 Michael Helbling: And this is… I think this is very, very important. There’s a point about AI that I think is really important about what you’re talking about, which is, the way I say it, AI is right down the middle in terms of an average. And basically, when AI does something, it does it just okay. And sometimes that’s really great. Like, it made me a just-okay video game. And that’s amazing, because I’m at zero on that. But if I’m pretty good as an analyst and it makes me a just-okay analysis, that’s pretty crummy. I can’t work with that. I need better than that. And so one of the things that stood out to me about AI and its usage is that knowledge and expertise actually become a massive and important filter for how AI is actually going to be beneficial or not beneficial. Like, I was talking to my tax accountant, and he’s like, oh, Michael, you wouldn’t believe the crazy things people are getting from AIs about taxes. I’m like, yeah, because they have no idea how they should be doing their taxes. You, as a tax expert, can take one look at that and know if it’s good advice or bad advice. Just the same way as I could take one look at an AI’s output on something I’m an expert in and know if it’s good enough, or not good enough, or, like, 50% of the way there so I can tweak it upward.
0:35:32.2 Michael Helbling: But the point is, without knowledge, I could only possibly hope for average. And that’s what everyone has to understand: when you let AI do something you don’t have expertise in, you’re basically only going to get maybe 50 to 60% good quality. And of course that number’s improving. I’m excited for it to keep improving, but the reality is that’s really what we’re getting out of that. And we’re not getting anything that no one’s ever thought of before. We’re only getting what’s been thought of before, and what’s most standard. Because I tested this with data strategy. I went to the deep research in ChatGPT and I said, really put together research around the top themes and things like that with data strategy. Like, what are people saying about it? And it did a great job. I mean, it pulled 40 different sources and wrote this whole thing about it. And then I said, what’s the missing thing from all of these different things? And it literally fell over. It couldn’t really come up with anything, because it’s not there to do that kind of thinking. Now, I can do that kind of thinking, but there aren’t enough other people in the consensus applying that for it to build a knowledge base around and say, oh, I’ve trained myself on that information, here you go. And so it’s important to think about: okay, my expertise applied to AI gives me a superpower; someone without expertise applying AI only gets brought up to average. And so now you can see, okay, then how should we use it in our businesses? The one thing I do get concerned about, about AI and how we’re going to proceed, because we’re obviously not going to stop using it, is: what do people without expertise do to build expertise now? Because if AI is writing all of our code in the next three years, how do people who are starting out as software developers build the expertise to be able to coach the AI to write amazing code? Or how does the next amazing breakthrough in coding languages, or the replacement for SQL, ever come about if all we’re using is the same things AI knows the most about? Because the people I’ve talked to who are developers say the more esoteric the language, the less the AI is really doing a good job with it. The more popular the language, the more amazing it is, because there’s a bigger corpus of information for it to consume and learn on. So it’s a really interesting challenge to think about.
0:38:01.7 Michael Helbling: As analytics people, and I think about it for us mostly, it’s sort of, okay, how do we take a junior analyst and make them into an amazing senior analyst down the road? And if AI is coming in and doing a bunch of that job… The nice thing is, AI is nowhere close to doing the analyst job. Now, give it two years and my story will change; so much progress is being made, and I’m super excited about that. But that’s the thing I think a lot of us, and especially experienced listeners, should think about: how do we make sure there’s a bridge backwards, so that we don’t lose the connectivity, so that future people can come in and be good at this as well? Because the last thing we all want is everyone getting to average and no further.
0:38:52.1 Tim Wilson: This one I can’t remember the source on, but I do remember seeing someone who said they’d used AI to… They’d given it kind of what they were wanting to get more expert at and said, develop a training plan for me, and these are the criteria; I want to do a half hour a day. Because, kind of along those lines, that’s why I’m terrified that people think this is going to let them skip the steps of hard work and frustration and thinking about the business, about how code works, about architecture, whatever it is. And I don’t think that’s what it’s going to do. Like, people still need to develop expertise, and you develop expertise through practice. And there’s a degree of accelerating, but I don’t… Yeah, Julie.
0:39:47.3 Julie Hoyer: I think it’s crazy that… One, Michael, I love the way you were talking about the averages. I’ve never thought about it that way, and that was definitely a clarity moment for me. Because I feel like people can’t start with a blank slate. Like, to your point, how do you gain the skill, or, Tim, kind of what you’re saying, how do you gain the skill to look at a blank screen and be like, I need to go write code to do this, or I need to get my thoughts out in a coherent way, if you’ve always had the ability to go to AI and get even just a starting point? I don’t know. I just feel like that’s such a core skill in problem solving and problem definition, and just, like, growing in general in your capabilities. Because something I’ve found, too, is, like, sometimes I struggle or push back, maybe drag my feet, on going and using AI, because, to Moe’s point earlier, I don’t like the brain work of going through and slogging through its long, verbose, kind of average answer and tweaking it.
0:40:51.4 Julie Hoyer: Like, sometimes I do better, with my workflow and the way I like to work and the output I get, with a blank screen, where I just brain dump or I just try something. And then, to Tim’s point earlier, maybe I go and use AI to help me. But I don’t know. It’s such a different exercise in my head that I find it exhausting to take an initial AI output and then make it into something good.
0:41:15.1 Moe Kiss: Do you know what’s so funny? I’m the complete opposite. Like I loved it because I’m one of those people that literally needs a rubber duck on my desk because I need to like have something to bounce off and be like, oh, I’m hitting this wall. Like, or, oh, I haven’t thought of this. And like I am the epitome of the rubber duck when I’m… Especially if I’m writing code. And that’s what I essentially am using AI for now is like to go back and forth and then be like, oh, no, you haven’t gotten this right. OK, oh, no, I want to look at this now or like I want to change this wording. And I do… I was thinking about this the other night. I was working on something and part of me was like, oh, I feel like this might have been faster if I just did the whole thing from scratch. But I feel the output ended up being better for my working style because I got that feedback loop, if that makes sense.
0:42:04.7 Tim Wilson: But I don’t see how that’s different. You still initiated it. You brought your expertise, your point of view, your thoughts, and you put it in. I think Julie, if I’m hearing right, is saying, but if I don’t come in with a starting point, if I don’t come up with something to bounce off of it, and I just show up with a prompt, I’m going to write this kind of vanilla thing, and I’m going to get vanilla back. And then I’m going to say, send it to my favorite presentation tool and say, generate a presentation of it. And it’s going to make a vanilla presentation that checks a lot of boxes but doesn’t move anything forward.
0:42:51.1 Julie Hoyer: I don’t know. It’s even like when I’ve asked it to help me summarize a lot of data. Like, I’ve done a sentiment analysis recently. So I was using a sentiment analysis gem in Gemini, and I stripped it of all PII and all that. But I put in some of these responses and I was asking, like, help me take these 700 responses and identify some themes. And at first read, it’s like, oh yeah, that’s great. I could ask for direct quotes that prove each of those themes. But then I’m going through and checking, and I did read through all the responses. And it’s just interesting how much rework… And I’m not saying that’s not a good place to start, but that is exhausting to me: it wrote up this thing, and now I actually have to re-dissect it and take it apart. And it’s just a very different… Yeah, working style. I like when I can come with more of a vision of what I’m trying to get. And I guess for the sentiment analysis, I don’t know a better way, right? Like, how am I supposed to go through all these written things and remember all the quotes and, like, physically put them in categories? Unrealistic. But that exercise made me realize, to Tim’s point, yeah, I like using AI to further something I kind of already have going, rather than it spitting out this initial kind of messy thing and having to rework it. I guess it’s just a preference thing.
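The rework Julie describes, checking whether the “direct quotes” the model attributes to each theme actually appear in the 700 responses, is at least easy to mechanize. A minimal sketch with made-up data; only verbatim (case- and whitespace-insensitive) matches pass:

```python
# Hypothetical sketch: flag model-attributed "direct quotes" that don't
# actually appear verbatim in the source responses.
def verify_quotes(themes: dict[str, list[str]], responses: list[str]) -> dict[str, list[str]]:
    """Return, per theme, the quoted strings NOT found verbatim in any response
    (i.e., candidates for hallucination)."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())  # lowercase, collapse whitespace
    corpus = [normalize(r) for r in responses]
    suspect: dict[str, list[str]] = {}
    for theme, quotes in themes.items():
        misses = [q for q in quotes
                  if not any(normalize(q) in r for r in corpus)]
        if misses:
            suspect[theme] = misses
    return suspect

themes = {"pricing": ["too expensive for what it does"],
          "usability": ["the onboarding was seamless"]}
responses = ["Honestly too expensive for what it does.", "Setup took forever."]
print(verify_quotes(themes, responses))
# -> {'usability': ['the onboarding was seamless']}  (quote not found verbatim)
```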
0:44:04.7 Tim Wilson: So there’s my Cassie Kozyrkov reference, from her piece on what vibe coding is, where she was drawing the distinction between trying to read somebody else’s code versus writing your own code and trying to debug it. And she was like, at least when you write buggy code yourself, you understand the flawed thinking that created it. With vibe coding, you’re playing archaeologist in someone else’s mistakes. So if you went through and did the sentiment analysis yourself, by the time you got to response number 650, you’d be like, oh, I’m doing this differently from how I did it initially, now I need to go back and do it again. But you’d have that kind of baked into what you’re doing. If you just skipped all of that, you don’t know what…
0:44:44.3 Julie Hoyer: Yeah, yeah, that’s the exact example and the perfect way to put it. I don’t trust it. I don’t know all the assumptions it made, and now I’m kind of having to dig in and check it all.
0:44:54.7 Moe Kiss: That’s interesting, what Cassie said, though. Because I found… I’ve used it quite a lot… No, sorry, I haven’t used it to QA someone else’s code, but I have definitely used it to understand someone else’s code. And I found that super helpful. Because it was a business area that I wasn’t as familiar with. I wasn’t familiar with the tables and all that sort of stuff. And I kind of wanted a sense check of, how is this metric being calculated? All of that sort of stuff. And it helped me understand that at a time when the person who wrote the code was asleep. And it was really useful.
0:45:30.4 Tim Wilson: That’s not her point. That’s not her point at all. Her point was, if you ask it to generate the code, then you’ve gotten code that’s like the code from the person who’s asleep.
0:45:38.8 Moe Kiss: Oh, got it.
0:45:40.4 Tim Wilson: And she was saying this is like debug… So absolutely…
0:45:46.5 Julie Hoyer: If you don’t go ask it all the questions of like, why’d you choose this? Did you think of this edge case? What happens at this edge case? It’s not like you just don’t know.
0:45:52.6 Tim Wilson: I mean, I think that’s a great point. If you’re trying to look at somebody else’s… If it’s a spaghetti hot mess, I could even see asking it, like, how good is this? This seems like it’s 4,000 lines. Could this be done better? But again, that’s an assistant: saying, I don’t understand what this is doing, and the person who wrote it’s not here. Help me out. I think that’s…
0:46:13.6 Michael Helbling: That’s a great use case.
0:46:14.6 Moe Kiss: Okay, one of the things that is coming to mind with vibe coding… I’m gonna say something that might also be controversial. I wonder if the reason the LookML example is so good is because LookML is so basic. Most people can write LookML; it’s fairly simple, I would say. Versus, if you’re trying to write code for something that’s very complex, and then trying to debug it… I could see it being very bad at that. So maybe it also has to do with the complexity of the problem, like what the code is trying to do, or the coding language.
0:47:00.4 Tim Wilson: But the complexity front has me thinking: if you look at where people kind of jumped to labeling themselves data scientists after they’d taken a Python boot camp, they didn’t have the… What are the trade-offs in the different models that I could choose to run on this? Asking AI, you’re probably giving it incomplete information: hey, should this be gradient boosting? What should I use? And maybe, Michael, it goes back to what you were saying. For somebody who doesn’t know any better, if the AI says, well, based on what you gave me… It didn’t think to probe for some other factors, it didn’t know some context or nuance, so it could totally send you down a path that wasn’t helpful. Whereas somebody who’s a legit, experienced data scientist probably wouldn’t even need to query it; they’d say, well, given the nature of this, I think we should use X, Y, and Z.
0:48:01.9 Val Kroll: There’s something in what you said there, Tim, because you were saying if they had taken the Python boot camp, they might not know to think about different models, having that knowledge. And then, when you were juxtaposing that, you said someone who has more experience. I think that’s a key part of it: the tripping and falling down and knowing what the watchouts are. That’s a huge part of it, too. There’s nothing to replace the experiences, the scars, that have made us who we are.
0:48:29.3 Julie Hoyer: They stick with you.
0:48:30.0 Michael Helbling: I want all of you to struggle with applying Stephen Few’s principles to data visualizations in random BI tools.
0:48:36.9 Moe Kiss: Okay, so one concept that has been churning around in my mind a lot of late, and this is tangential to this whole AI piece: I keep coming back to, what happens if we give people more ability to self-serve, answer their own questions using AI, whatever it is, and they misinterpret it, or they make mistakes? And recently someone said to me, well, what if they do? So they make a mistake, they misinterpret the data, they’re accountable for that mistake and that misinterpretation, and then they need to fix it, and they don’t make that mistake again. And I feel like it’s this tension that’s been rolling around in me, where I always want to protect people from making the less good decision. I want them to make the best decision possible the first time, and so I’m always like, oh, and we can help you do that, that’s what data science does. The funny thing is, though, as we talk about expertise, so much of your expertise comes from making those mistakes yourself. So, anyway, I’m just thinking out loud about letting people just fuck up themselves and then figure it out, and how there’s value in that.
0:49:42.3 Tim Wilson: Plus, that’s actually kind of part of the human experience.
0:49:47.6 Moe Kiss: I was going to say, yeah.
0:49:48.6 Tim Wilson: Like, there have been lots of thought pieces around, if there wasn’t hardship and frustrating stuff and mistakes made… I mean, that’s getting rather philosophical, but if everything is a smooth path, then what are we… We’ve got to go find aliens to fight or something. That’s where Star Trek…
0:50:10.3 Moe Kiss: But isn’t that the point, though, that all these people that think, hey, I can just throw a CSV into ChatGPT, it’s going to answer all my business questions, I don’t need data scientists, blah, blah, blah, why not let them do it? Be like, sure, you want to upload these CSVs in, answer your questions, get some shitty answers back and make some shitty business decisions? That, my friend, is going to be a great learning opportunity.
0:50:34.0 Val Kroll: And then they’ll make a bad decision, and then it will come back and they’ll be like, oh, it was just really low-quality data, we really just need to clean our data and just we need some more tools, different tools, it was the tool’s fault.
0:50:45.4 Michael Helbling: The tools are always the ones that are messing us up, for sure.
0:50:48.8 Moe Kiss: Oh, Val, that hurts, that hurts.
0:50:51.0 Val Kroll: Well, if someone thinks that that’s a solution, Moe, do you really think they’re going to have, like, the self-reflection to be like, oh, it’s not me.
0:50:59.1 Tim Wilson: That’s the other thing. Separate from the, we’re going to stand up our little Johnny-come-lately, just ask the question and we’ll give you the answer, the other thing is that so many people have jumped on this: well, with AI, you’ve got to feed the beast, so you need to get all of your data in. So that has also stood up an enormous number of companies that are now sowing fear, uncertainty, and doubt that we’ve got to have all the data pumped in. And it’s kind of energized the… I was talking to a long-time friend, she’s a marketer at a European-based company, she’s in North America, and she went on a tear about cookie blocking. And she’s like, we had to fight so we could get the cookie, even if they don’t track the… If they don’t accept consent, they can… If they don’t consent to…
0:51:53.6 Moe Kiss: What’s going on?
0:51:56.2 Julie Hoyer: Val’s checking her blood pressure.
0:51:57.7 Michael Helbling: Val’s checking her blood pressure on Tim’s rant. She’s like, poor Val.
0:52:07.6 Tim Wilson: But that has been fed as well. It does get to where Val was: now, if a bad thing happens, it’s not because I tried to shortcut it. Nobody’s going to accept that the AI is no good; it’s going to be, we must not have had enough data, the data must not have been clean enough. And they throw it to the data team, and that becomes the problem, when it’s just often not. It’s like, no, you need to think harder.
0:52:30.6 Moe Kiss: Thanks, Tim, I’m back to pessimistic. Full swing!
0:52:33.5 Julie Hoyer: Just make sure people can still sniff out the BS. You need enough people that can sniff out the BS, and you need enough people to not get stuck in the echo chamber that maybe AI is making worse in some areas. You know what I mean? That’s where my head goes is the people who can see beyond will still rise to the top. Because I feel like you’re going to get a lot of that echo chamber stuff.
0:52:56.5 Michael Helbling: It’s hard enough to maintain data quality in a single source of data or a single data set. Now map out the four to five data sets you’ll need to maintain in complete alignment with complete accuracy. It’s not a job that’s going to be very easy very fast. That’s the truth. And we have to do that if we want LLMs to be able to house the context for actually doing what we would call analysis.
0:53:25.2 Julie Hoyer: Blood pressure is back.
0:53:26.2 Tim Wilson: Blood pressure is back.
0:53:27.3 Val Kroll: Check mine.
0:53:29.8 Michael Helbling: We’re the one audio podcast with prop comedy. All right. Well hey, we better wrap up this episode. Congratulations each of you. Now go ahead and go put AI expert on your LinkedIn profile. Everyone else is doing it.
0:53:53.6 Tim Wilson: AI strategist.
0:53:55.4 Michael Helbling: Oh AI strategist. Oh I like that. That’s better. Did you use an AI to come up with that?
0:54:00.5 Tim Wilson: No, I might have seen that on the profile of a long-time member of the analytics community. I was like, oh, interesting.
0:54:08.0 Michael Helbling: Very good. And actually, what stood out to me… I loved, Moe, hearing from your experience, because it’s a lot different than what I’m experiencing out there, given the context that you’re operating in. So that was really great. And I love the juxtaposition and just sort of learning from that. So that was amazing. Tim… no, for Tim I got nothing. Tim, of course, name-dropped, like, everybody, Cassie and Ethan Mollick, which I also loved.
0:54:39.4 Moe Kiss: Most well-read individual.
0:54:41.2 Michael Helbling: Yes, exactly. Continuing on in his quintessential analyst ways. Nice job.
0:54:47.2 Val Kroll: He can’t help himself.
0:54:48.7 Michael Helbling: And Julie, way to lead the conversation today. Thank you.
0:54:52.1 Val Kroll: We knew it.
0:54:52.8 Michael Helbling: I said it. Listen, I just typed into Gemini and I said, who’s going to be the best?
[overlapping conversation]
0:55:00.3 Val Kroll: Who’s made the best gem?
0:55:03.7 Michael Helbling: Yeah. And it was like… Gemini’s like, who are my options? Oh, Julie. Julie, my mom. And Val, thank you, too. No, because I think what you did, Val, which was actually super important for the conversation, was you turned it back to who, and what we’re going to do with people around this. I think we were all over the place, and you brought us back to probably the more important central element of this, which is: we’re analytics people. All right. And I went on a few rants, so, yay. All right.
[overlapping conversation]
0:55:39.5 Michael Helbling: I’ll say right now, I bet you’re out there passing this whole episode through an AI filter to bring it down to, like, 30 seconds or something. But if you hear something you’re interested in, we would love to hear from you. So please do reach out. You can reach us on LinkedIn or on the Measure Slack chat or by email, contact@analyticshour.io. And we’d love to hear from you. Please do not send us AI-created emails. Moe does not appreciate that. Or if you do, train the AI to be very succinct.
0:56:15.6 Moe Kiss: And funny. And funny.
0:56:21.1 Michael Helbling: And funny. Yeah. And they’re getting so much better at being humorous now. So it’s good. And then the other thing I’d like to say is, we’ve been around for a long time, and if you’ve never thought to go on your favorite platform and give us a rating or a review, I’d say AI can help you with that too. So we’re not above it. Just go out there and give us five stars and a long-winded, AI-driven… No, don’t do that. But do rate and review the show. It helps AIs consume the show and then tell people the cool things we say.
0:56:55.5 Michael Helbling: And then, last and certainly not least, a big shout out to Josh Crowhurst, our producer, for everything he does to help us get this show off the ground.
0:57:04.1 Tim Wilson: Can we just say that every time you fuck around with AI-generated images of us as a group, Josh always looks amazing.
0:57:13.6 Moe Kiss: He looks amazing. You know what it is? I’ve worked this out. AI knows what to do with images of men with beards. That is, like, the summary I have.
0:57:24.1 Michael Helbling: Oh. Okay. That’s interesting. Okay. That’s probably a whole episode right there, Moe, I don’t know. But anyways, yes, Josh Crowhurst, who looks amazing in Studio Ghibli form, as well as other elements. But yeah, thank you, Josh, for everything you do. And I would just say, and I think I speak for all my co-hosts out there, no matter what part of your job AI is doing, the part it can never do for you and you got to keep doing, is to keep analyzing.
0:58:00.3 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst. So smart guys want to fit in, so they made up a term called analytics. Analytics don’t work. Do the analytics say go for it no matter who’s going for it. So if you and I were on the field, the analytics say go forth. It’s the stupidest, laziest, lamest thing I’ve ever heard for reasoning in competition.
0:58:41.2 Val Kroll: Guys, I’ve got an exciting example to share today. I’m not telling you now.
0:58:45.9 Michael Helbling: Yeah.
0:58:48.1 Julie Hoyer: I haven’t even had coffee. Like this is fucked.
0:58:57.1 Michael Helbling: I am going to need to get another beer.
0:58:56.6 Tim Wilson: See, told you to chug that.
0:59:05.9 Moe Kiss: Well, I don’t know if you need coffee.
0:59:09.6 Julie Hoyer: Yeah, if we push it right up to a 5:30 central ending time, we might get an appearance of Abby Lou.
0:59:16.9 Michael Helbling: Oh.
[overlapping conversation]
0:59:17.0 Val Kroll: Abbie.
0:59:17.8 Tim Wilson: I think that’s perfect, actually.
0:59:21.0 Moe Kiss: That sounds wonderful.
0:59:25.0 Julie Hoyer: I opened up my laptop while we were eating breakfast this morning, just to do something really quick, and she’s like, are you talking to Tim? The way she refers to Tim constantly cracks me up. What was it, she was pretending to be working when she was home sick? Yeah, she was like, hey, Tim. Like she was pretending to talk to Tim.
[overlapping conversation]
0:59:45.3 Moe Kiss: My kids do the same, but they’re like, I’m gonna go do work now. And then they sit at my desk and tap, and I just turn up my keyboard. But they don’t, like, call Tim.
[overlapping conversation]
0:59:52.7 Michael Helbling: They don’t name specific co-workers. I mean, you have a few more co-workers…
0:59:57.7 Julie Hoyer: They probably have a little more variety.
0:59:58.8 Moe Kiss: Well, they do. They do. When they come to the office, they’re like, where’s Auntie Priscilla? Yeah, they do have their favorites.
1:00:10.0 Michael Helbling: I put in Slack my first attempt to make us into Muppets and it invented a random other Muppet and put it in there. I was like, there’s a ghost. That’s Ken Riverside. I don’t know.
1:00:23.4 Moe Kiss: That’s Ken, but that’s, like, old Ken. I definitely thought of him as, like, younger, hipper, more dapper. But I like it.
1:00:34.4 Michael Helbling: Yeah, yeah, no, we’ve already got Ken nailed with AI before.
1:00:34.8 Julie Hoyer: I like how we’re all Muppets and Tim is from the Simpsons.
1:00:39.5 Michael Helbling: Yes.
[overlapping conversation]
1:00:40.6 Moe Kiss: Oh, wait, which one’s that one?
1:00:43.9 Val Kroll: Ted is Flanders’ cousin, and we’re all Muppets.
1:00:52.5 Tim Wilson: So that one didn’t work very well.
1:01:09.2 Michael Helbling: Rock Flag and more dashboards through AI, now.