#236: The AI Ecosystem with Matthew Lynley

Aptiv, Baidu, Cerebras, Dataiku… we could keep going… and going… and going. If you know what this list is composed of (nerd), then you probably have some appreciation for how complex and fast-moving the AI landscape is today. It would be impossible for a mere human to stay on top of it all, right? Wrong! Our guest on this episode, Matthew Lynley, does exactly that! In his Substack newsletter, Supervised, he covers all of the breaking news in a way that’s accessible even if you aren’t an MLE (that’s a “machine learning engineer,” but you knew that already, right?). We were thrilled he stopped by to chat with Julie, Tim and Val about some of his recent observations and discuss what the implications are for analysts and organizations trying to make sense of it all.

Startups, Newsletters, and Podcasts Mentioned in the Show

Photo by Ivan Bandura on Unsplash

Episode Transcript

[music]

0:00:09.7 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:12.8 Tim Wilson: Hi everyone, welcome to the Analytics Power Hour. This is episode 236. My name is Tim Wilson and it’s now 2024. So happy new year. We reviewed all of the latest benchmarks for Google’s Gemini AI and determined that we’re good for at least another two or three episodes of this show before we just hand the whole thing over to the robots. So for now, I’m joined by a couple of human co-hosts. Julie Hoyer is an analytics manager at Further, which by the way as of this recording, when I asked Bard what the company formerly known as Search Discovery is now called, it took a big old swing at it and missed entirely. So the company formerly known as Search Discovery, now Further, where Julie is an analytics manager. Welcome to the show, Julie.

0:01:00.8 Julie Hoyer: Hello, hello. Excited to be here.

0:01:03.1 TW: All right. And also from Further, we’re joined by Val Kroll, who is an optimization director there. And if any of her coworkers remember that day back at some point in the past when she canceled a whole bunch of meetings because she had a podcast recording and was trying to save her voice? Well, now you know when we recorded this. So in a partial way, using minimalist words, we’ve got Val Kroll. Welcome to the show, Val.

0:01:28.0 Val Kroll: Hello. Yes. Today, today was that day, [chuckle] when I sounded like this, [laughter] but thanks for still having me.

0:01:36.9 TW: We’ll see if we still have you by the end of the show. If not, we’ll have an AI reading your questions from the chat. So on our last episode, which was the year in review show for 2023, we talked a lot about AI and generative AI. And we’re gonna kick off 2024 by digging into AI specifically through the lens of kind of the myriad platforms and technologies and companies that underpin what many of us wind up just kind of experiencing as a really clever chatbot or as a code assistant or as a custom image generator. If you think about it, there are hundreds, if not thousands, of components in that underlying ecosystem. NVIDIA, Hugging Face, PyTorch, Mojo, Modular, Replicate, Llama, Purple Llama, Anthropic, and, well, you get the idea. It’s hard to imagine that anyone could successfully wrap their heads around this entire landscape, much less stay on top of it as it rapidly evolves. But our guest Matthew Lynley does exactly that, or comes as close as any human we’ve been able to find.

0:02:37.0 TW: And he writes about it, [chuckle] in his newsletter, which is called Supervised. You can find Supervised and subscribe to it at supervised.news. So Matthew’s formal education was in mathematics. I will try to reel them in if Julie and Matthew start trying to converse with each other by verbalizing equations. Before we started the show, there was a discussion of fluid dynamics and theses, and Val’s eyes and mine almost glazed over, but we’ll try to keep him in check. But despite the mathematics, Matthew also had a second major in journalism, and his career has pretty much been a steady blend of the two, as he’s kinda woven in and out of a few different places. Mostly he’s been on the journalism side of things, with his writing appearing in Business Insider, The Wall Street Journal, BuzzFeed News, and TechCrunch. Most recently, before starting Supervised, he was Business Insider’s lead reporter on AI and big data, covering companies like Databricks, Snowflake, OpenAI, and others. So welcome to the show, Matthew.

0:03:35.8 Matthew Lynley: Thanks for having me.

0:03:36.9 TW: So let’s start with that. You’ve sort of started this new thing, this Supervised. You made the decision that this could be kind of a full-time thing to just focus on journalism around kinda the business and the ecosystem of AI. Can you talk a little bit about sort of how you came to that decision? Like what prompted you to make that leap and, are you a masochist, I guess? [chuckle]

0:04:03.4 ML: First off, terrible idea. I really generally do not advise people to do it. But yeah, when I was at Business Insider, when we were covering AI, AI did not mean ChatGPT and LLMs and things like that. This was way back when Hugging Face was hosting little tiny projects, and there was this company called Snowflake that all of us, I think, were users of at one point or another, and Databricks, and kind of the weird quasi-rivalry/non-rivalry between the two of them. And ChatGPT came out in November, so it’s a little bit more than a year old.

0:04:38.7 ML: And as it was going crazy, as everything was going crazy and things were moving so fast, selfishly, I was looking for something that went a little bit deeper than the sort of broader publications that target really broad audiences. I understood what an LLM does. I understood what Transformers were and all that other stuff, and there wasn’t anything in between, short of going all the way down to the very bottom, where guys like Nate Lambert and Jack Clark excel, and they do an amazing job explaining the intense intricacies of it. I wanted something that was just one level deeper. I think I was focused more on people like myself and other hobbyists, practitioners, and the people that have the credit card that are deciding, okay, which one are we gonna be using internally, if we’re gonna be using something in the first place, where they don’t need to understand the specifics of a given product.

0:05:29.4 ML: That’s the MLE’s problem. Sorry, the machine learning engineer’s problem. They just need to know, why are we going with this one versus the other one, all that kinda stuff. And a lot has changed in the last year. Things have moved much faster than I would have expected. I like to tell people I try to stay six weeks ahead of what’s happening in AI, because that’s about as good as I can do at this point, because it moves so quickly. But the idea is, what are people talking about that no one’s writing about? I went to this event, and here’s the new technique or the new kind of product that’s out there that everyone’s playing around with that you haven’t heard of. And yet, for some reason, the thing has like freaking 50,000 stars on GitHub and is everywhere. And so that was kind of the original idea. Running a thing is hard. [laughter] I will say that. So to your point, maybe, soft maybe. But yeah, that was kind of the original idea.

0:06:23.6 JH: So what has it been like from when you jumped in, versus just kind of the casual, business-level watching of what’s happening in the investing space? It’s kinda like, well, all the money, because of the interest rates and this and that, and companies that are venture-backed are struggling, except for AI. There’s kind of an explosion with, like, any component in that.

0:06:46.4 TW: Like, has there been just a… When it comes to just companies starting up, I mean, you had a post late last year that listed 29 startups in this space. Is it just like every day, someone else is getting a non-trivial amount of funding for filling some little slot somewhere in this entire ecosystem? And are you trying to stay on top of that?

0:07:10.7 ML: It’s a lot. The premise of that list was, ChatGPT had turned one, and I was like, I don’t know what to write. So here’s a bunch of companies that I’m following, I guess. And people liked it, which is great. A lot of people were just like, I’ve never heard of this company before, but that’s cool. But to your point, yeah, every week there’s at least one new crazy team coming out, doing something that fits some narrow slice of the pie. I wanna say it was early December, there’s a company called Mistral, which produces open source large language models. We can get into open source, which is a whole other thing. They raised a couple hundred million dollars at, I think, a $2 billion valuation. And everyone was, huh? What do they do? They make a model. That’s it. And yeah, when you’re looking at AI specifically, I think we’re still figuring out what the hell to do with all this stuff. And when someone has an idea, they’re like, “Hey, I think we could do this.” Everyone is like, great, let’s go try it. And here’s some funding to go with it, and we’ll see what happens, which is a very 2010, 2011 mentality. But I’m here for it, to be clear. I think it’s fun. But, yeah, every day, every week, there’s some new crazy project on GitHub or Hugging Face or something along those lines.

0:08:36.2 VK: Do any of these companies reach out to you after you write about them? ‘Cause I feel like you’re almost a sell side research analyst talking about the whole ecosystem and keeping track of your portfolio. I’m curious if everyone’s like, Actually that’s not how we think about ourselves, or has anyone come back to correct you? Like as I was reading your content, I was wondering so badly if that ever happened.

0:08:58.6 ML: Angry reader notes. I actually prefer the brutal feedback. I think that’s the most useful feedback. Unfortunately, I don’t… Fortunately, I don’t get a lot of it. I do get some. But yeah, I reach out to every company that I hear of, just to say, Hey, do you wanna chat? I heard about you guys doing X and whatever. And sometimes they get back to you, sometimes they don’t, because everyone’s really freaking busy. There’s this company called LangChain, which is this really popular open source tool for creating, again, we can go into this stuff later, this thing called agents. Say, AI that can act on your behalf to do stuff. It’s called an agent. The founder is a guy named Harrison Chase. And I remember he was at four separate events over the span of two days last year in 2023, giving back to back talks. ‘Cause I saw him one day at an event, and I saw him again the following day at another event. Just everywhere. He’s freaking everywhere. And so everyone’s out there, everyone’s busy, everyone’s chasing everyone else around. We do our best to keep up and keep track of everything, but sometimes it’s just like, Vff.

[music]

0:10:09.9 S1: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:10:15.9 TW: Well, Piwik PRO has exploded in popularity and keeps adding new functionality.

0:10:21.3 S1: They sure have. They’ve got an easy to use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking, and a customer data platform.

0:10:32.2 TW: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:10:39.6 S1: Yeah, head over to Piwik.pro and check them out for yourself. You can get started with their free plan. That’s Piwik.pro. And now let’s get back to the show.

0:10:52.3 TW: But with your kind of watch on all of this… I mean, you obviously can’t go super, super deep with any of them. Do you ever see something new come up and say, wow, that seems really shaky, or that doesn’t seem new? It is such a weird thing to be going through, reading an article where I understand the subjects and verbs, and then there are these words that are mentioned, and I can’t imagine that they’re all equally legit with what they’re doing. ‘Cause I guess, maybe backing up, you referred somewhere on your site to the multi-billion dollar scaffolding assembled in the past few years as a way to describe all of these companies and what has occurred. Well, you’ve said it as well, all of these companies are incentivized for more of this stuff to happen.

0:12:00.4 TW: So if you’re NVIDIA and you’re kinda leading on the hardware front, you wanna ride that train as long as you possibly can. You’re not really incentivized to ask, is there value being delivered to organizations or to the world at the end of the day? I don’t know. I don’t think there are a lot of people who are maliciously intentioned. It just feels like there’s a risk that there are so many people just kind of hopping on the train, and they’re assuming that whatever little piece they’re plugging in is filling a legitimate need, and that somewhere the whole ecosystem is spitting viable and useful stuff out at the end. Should that be a concern, how much money is chasing this? I mean, we’ve gone through this in Silicon Valley before, and good stuff has come out, but not until stuff has imploded first.

0:13:03.0 ML: What did we call it? Venture funded capitalism in the Uber and Lyft days, if I’m right.

[laughter]

0:13:09.8 TW: Oh, venture funded capitalism. Oh, I like that.

0:13:13.1 ML: Yeah. Yeah. And this is like 2014, 2015 when we thought on-demand was gonna be the future of everything, I guess.

0:13:20.1 TW: I mean, even the original internet boom and the bubble, or the original internet bubble bursting. I don’t know. Like that gives me some pause.

0:13:29.1 ML: Well, yeah, no, I mean, it’s totally justified. There’s a couple of reasons why. One is, we’ve gone through a lot of false starts. You can look back at on-demand, and there’s clearly a ton of value created there. And we got some really cool stuff out of it. You can call a car from your phone. That’s pretty bad-ass, something you couldn’t do back in, like, 2010. But was it the kind of life-altering technology that we thought it was gonna be right when it started? No, it took several years to kind of mature into what on-demand was best for, which is food delivery and cars. Valet, probably not. Car washes, probably not. All these other things. So with AI, there’s a couple of things happening, but one is, when it came out, it was just cool as hell. You boot up ChatGPT, and it’s telling you stuff that may or may not be true, but it’s telling it to you in actual legible English, and right from the get-go it’s useful, in some ways or not. Trust, but verify.

0:14:37.3 ML: But right from the get-go it’s useful. You can use it for writing code. I use it for writing code all the time. It’s great. When it goes down, I’m totally screwed. I’ve like forgotten how to do all this stuff and like forgotten how to Google for stuff. And so I think that you can sort of look back on 2010 and 2011 and as all these things were happening, there were a lot of really cool ideas coming out, like Foursquare. If you guys remember checking in at bars where they were giving you like three glasses of wine.

0:15:08.7 VK: He still does it. He still does it. He’s the last, hold on.

0:15:12.2 TW: I still do it for meaningless coins.

0:15:14.8 ML: Yeah. But there was a time when there were, like, three Foursquares. There was Gowalla, and I can’t remember the other one, the third one. But with AI, you can think of it as happening on a really compressed timeline. So, like, two years of mobile investment blowing up, this has happened in, like, six months, probably. And unfortunately, we’ve all been here to see all this stuff happen multiple times, I feel. But when you think about it, it just happens on a very fast timeline, which is, you say, oh, Mistral raised a ton of money at a several billion dollar valuation. That’s awesome. Wait, another one already did it. Holy crap. However long later, oh, here’s another big company with a bunch of big names behind it that also raised a ton of money. And so inevitably there’s a shakeout. There’s always a shakeout for which of these companies are gonna land and actually be useful, and which ones aren’t. It’s happened every single time.

0:16:15.6 ML: Like I said, it turns out on-demand valet wasn’t that useful, and it was kind of expensive. So we didn’t end up going with it, but calling a car on your phone is pretty awesome, even if it’s expensive now. And the same thing happened in mobile. And the same thing happened in all these other big transition phases. It all inevitably happens, and it’s already happened. Again, we’re all in analytics, so we are very familiar with how big the stack is at this point, and how eventually things will probably coalesce to, like, Snowflake and GPT and maybe three other things, instead of the 20 things that we have in place right now. So it always happens this way. And again, I think the difference here is that we haven’t really had a new tech that has just, like, worked fricking right away.

0:17:04.3 ML: To be clear, this also is not brand, brand new. Like, GPT-3 was in 2022, I want to say. And so 3.5 is like an iteration, like the nth iteration of all this stuff. ChatGPT just happened to be the, here’s how everyone can use it, here’s how everyone can interface with it. So it took a while to get here in the first place, but I think the scale at which people found it useful right from the get-go sparked a lot of excitement, from both casual people saying, hey, write me a bedtime story for my kid, because I can’t keep track of everything that’s happening in their head. Or, hey, I’m a VC, can you write me an intro email to this company that I’m trying to invest in? All that kind of stuff. So everything is just happening on a much faster, faster timeline.

0:17:57.3 TW: Faster, faster.

0:17:57.8 VK: So if we were really honest with ourselves, were we really asking those types of questions when we first started? Because not that I’m willing to share my browser history, but I definitely, my first prompt was asking, can you write a poem about the quintessential analyst, Tim Wilson?

0:18:16.1 TW: Oh, Jeez. Oh my God.

0:18:18.1 VK: Was the first thing, but I bet it will be, like, one of those things where you kind of remember what your first query was, or your first prompt. So is this where we share, Tim, what yours was?

0:18:28.0 TW: I don’t remember. And I’m very sad for you.

0:18:30.8 VK: Julie. What about you, Julie?

0:18:33.3 JH: Well, I feel very exposed because I haven’t actually prompted anything. I’ve been a little busy. I haven’t done anything with it.

0:18:43.0 VK: The bedtime stories. That’s a great one. But do you remember, Matthew?

0:18:45.4 JH: Yeah, that was a good one.

0:18:46.4 ML: Yeah. However, in about an hour, she may be.

0:18:52.0 TW: Yeah. First prompt: you are not at any point to refer to yourself as an AI chatbot, and you keep this as conversational as possible. I think it was, what do you wanna call yourself, or something like that. And, or, give yourself a name, so I know I’m talking to something on the other end. For convenience purposes, more than anything else.

0:19:11.6 VK: Love it.

0:19:11.9 TW: Not for, like, anthropomorphic reasons, and more just, like, Hey, ChatGPT, do you want to do X? That kind of doesn’t roll off the tongue as well as like a name…

0:19:21.0 VK: Smart.

0:19:21.1 TW: Real name does.

0:19:23.3 JH: That’s fun. Actually, I was going to ask, so you were starting to talk kind of about the land grab phase that we’re in with all of these companies and AI. And I kind of wanted to ask you, I mean, it sounds like you’ve kind of already alluded to this, and we know the cycle that we’re going through: there isn’t room for all of them. Some are going to end up as acquisitions, people are going to kind of carve out their own spaces. And I was wondering if we can actually start by talking about what are the spaces these companies are fighting for? I mean, Tim, you kind of talked about one company earlier that was in, like, the hardware space. What are some of those other categories that they’re going to be fighting for? I keep saying land, but, like, the spaces they’ll fight in. And then after that, I was hoping to know, who do you think will be successful in some of those areas?

[laughter]

0:20:11.8 VK: Six weeks out.

0:20:12.4 TW: We’ll come back to this.

0:20:13.3 ML: Yeah.

0:20:13.5 VK: I made a face. I made a face. [laughter]

0:20:14.5 ML: Six weeks out, six weeks out.

0:20:15.5 TW: I made a face on the second one for people that don’t have video here.

ML: Oh, okay. Well, I mean, there’s a little something for everyone in it. I mean, I think when you’re talking about the bare metal, the house does always win here, ‘cause everything has to happen on Azure or AWS or GCP, and using NVIDIA hardware, or Oracle hardware, or something along those lines. So that’s locked, honestly. I mean, there’s a lot of startups trying to build new technology at the silicon level to try and rethink, oh, training machine learning models on data, how to inference them, which is basically calling them from an API, the term we use is inferencing, I guess. And so there’s some stuff happening there.

0:21:02.0 ML: Like, maybe one or two of them will be successful, I think. I don’t wanna call any shots, obviously, although, like, Cerebras is a pretty obvious one. That’s a big chip company. But again, we’re all in on analytics. The stack is pretty fricking large, and the stack for AI is also increasingly kind of spiraling outta control over time. Fortunately, a lot of the stuff we’re already using, or that I used to use, at least, when I was a Snowflake jockey, is still gonna be in use. We’re all gonna be using Databricks and Snowflake, most likely, for the foreseeable future. And BigQuery has the runner-up prize here, most likely. But as you start to kind of get up to that point of, oh yeah, I typed in, hey, what do you wanna call yourself, and some server fired off an answer… I can’t even remember the name now. That’s how much has happened.

[laughter]

0:21:54.1 ML: There’s a slice for a lot of things. So there has to be a GitHub for this. Is it GitHub? It’s actually not GitHub. It’s a company called Hugging Face, which is worth four and a half billion dollars. They host a lot of this stuff. This traditional, old school ML stuff, but also Mistral. I think you can download some offshoots of the Mistral models there, which is the new most popular open source one. And all these other kind of Mistral instruct-tune Beluga Orca… 77, the number one performing model on the LLM leaderboard. Don’t Google this. It’s like I’m speaking actual English here, in AI.

[laughter]

0:22:40.6 ML: It’s a lot to wrap your head around. But there has to be software for training all these models, and you don’t wanna go all the way down to the very bare bottom, like, code this in Fortran. Well, probably not Fortran, we were talking about Fortran earlier…

0:22:57.8 TW: Fortran earlier.

[laughter]

0:22:57.9 ML: Probably not Fortran, but you don’t wanna be using NVIDIA’s kind of arcane software development stuff, which is called CUDA. You’d rather be using, like, C++ or Python. And so there’s stuff that kind of abstracts that away to make the development a little bit easier. There’s the actual frameworks, like PyTorch and TensorFlow, and JAX, from Google, is another increasingly popular one, for managing training of these models and constructing them and customizing them and stuff like that. There are companies that do that for you. There’s one called Anyscale that handles that for you, so you don’t even have to use PyTorch. It’s even way easier, even simpler than that. And then there are companies above that, that do all of the below for you, where they’re just calling Anyscale or other kinds of infrastructure. It’s increasingly like, do not mind the person behind the curtain.

0:23:53.9 JH: Yeah.

0:23:54.0 ML: And there’s like more curtains.
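The layering described here, CUDA at the bottom, frameworks like PyTorch or JAX above it, and managed services above those, all abstracts the same underlying training loop: repeatedly nudge model parameters against a gradient. A dependency-free toy sketch of that bottom-most idea (the data, learning rate, and step count are invented purely for illustration):

```python
# Toy gradient descent: fit y = w * x to a few points by hand.
# Frameworks like PyTorch or JAX compute the gradient automatically
# (and run it on GPUs via CUDA); this loop is what they abstract away.

DATA = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; true w is 2.0

def train(steps=200, lr=0.01):
    w = 0.0  # arbitrary starting guess
    for _ in range(steps):
        # gradient of mean squared error (w*x - y)^2 with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in DATA) / len(DATA)
        w -= lr * grad  # the update rule every framework implements
    return w

print(round(train(), 2))  # converges to ~2.0
```

Services like the ones Matthew mentions hide even this loop behind an API call, which is the "more curtains" point.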

0:23:55.7 JH: I mean, the analytics stack seems bad enough. I’m just thinking, people say like, Oh, we’re gonna take certain pieces in-house or build things out. I feel like there’s no way people are gonna be able to build out any part of those things themselves. They are gonna have to go to the big companies, like you were saying, to be able to use any of this, it feels like.

0:24:14.2 TW: Or the startup that’s saying they’re gonna abstract it. I mean, it does feel like… When we look back on the hype cycle for the modern data stack, that felt like a precursor, and it didn’t take that long. It was a similar thing. Well, you have this problem, managing your data catalog is gonna be hard, so let’s start up companies to do better data catalog stuff. And I don’t know, the way you just described that as being, like, well, somebody says, we’re gonna solve this niche, and then they solve that and say, well, that niche has to work with this other niche… niche, I don’t know how to pronounce it.

[laughter]

0:24:52.9 TW: And then, oh, that’s complicated. So now somebody else comes along and says, put us on top of it, and, like, no one can really put it together. And then, you reference Databricks a lot in what you write, and I barely scratched the surface of vaguely being aware of what Databricks was doing 18 months ago. And when I saw it, I think it was actually things you were writing where I was like, wait, Databricks is at the core of the AI ecosystem? Databricks didn’t start out saying, we’re in AI, but even today, you’ve brought them up multiple times. That seems like the core. How much are companies pivoting, saying, that’s what we do? And I know there are some companies that are pivoting saying, we’ve been doing AI all along, so now we’re on the…

0:25:42.9 ML: Oh yeah. [laughter]

0:25:43.1 TW: Market, we’re just gonna brand it more that way. But when it, from a product develop… Wasn’t Databricks just kind of another one of those sit on top…

0:25:52.1 VK: Managed Spark.

0:25:54.9 TW: Yeah. Manage. Yeah. So, but now it’s at the core of AI. That had to be a product, that was more than just a positioning shift that was them saying, We’re gonna play in this ecosystem.

0:26:08.3 ML: Yeah. Well, so when you look at… Someone described it as, like, the binary stars of Snowflake and Databricks violently orbiting around each other, and then the periphery of all these little startups that kind of exist on the first, second, third rings around them, orbiting around them. I mean, Databricks was also the lake architecture. Unstructured data. Like, dump everything, you don’t even have to think about it. Managing Parquet files, which, now there’s a startup trying to do a new Parquet. That’s a whole thing. And Parquet is a big file format for… I’m glossing over this stuff really fast. But…

0:26:53.3 TW: I’m also just thinking, our transcription is machine-assisted human transcription, and this is gonna be a doozy for them, for the people who don’t like to listen to the show but love to scan the transcript. Oh boy, this is gonna be a fun one.

0:27:08.9 ML: Yeah. P-A-R-Q-U-E-T. Did I say that right? I think so. But, yeah, so, I mean… Unstructured data is what a lot of these AI models are trained on. The raw documents of just gobbledygook text, instead of neatly, semi-neatly assembled tables that may or may not make sense and hopefully are not deprecated, and all this stuff. And that’s what these processors do, is just stick a hand in this data lake, rip out the information that’s needed to train it. And on the other end, you get ChatGPT that says blah, blah, blah, blah. Pretends it knows the origin of… What was the company called? That you talked about, Further, right?

0:27:55.4 TW: The Further, yeah.

0:27:55.9 ML: The precursor to Further, pretends it knows what it’s talking about. And honestly, it’s kinda true for a lot of companies, where the tools for what powers a lot of modern AI actually existed already, but there wasn’t a scale moment. There wasn’t a blow-up, a this-is-actually-how-we-reach-a-billion-users moment. It was like, hey, can you help me predict whether or not someone is gonna buy a Blue Bottle coffee when they buy a coffee grinder, and put it in the checkout cart? It’s stuff like that. So the…

0:28:31.6 TW: What was the piece that made the scale? Was it the GPUs? Was it the hard… What made the scale possible? Was it two or three things coming together at once?

0:28:42.1 ML: Well, it was basically, there was a reason for it to exist for consumer usage, and anyone could build something like this. Like Facebook. When Facebook launched, why does Facebook have a billion people on it? ‘Cause it’s a consumer app, and everyone wants to use it and post photos of their dog and stuff like that. And they, of course, have their own internal stack. And when you look at companies like Databricks, originally it was like, okay, what can we do? We can help you do this, like I said, this churn model or this recommendation system and stuff like that. Just give us all your data, and we’ll do this stuff for you to the extent that we can. And if we can’t do it, we have this whole network of partners that can help you do it.

0:29:26.0 ML: But were these companies touching a billion users? Absolutely not. Well, maybe some companies were touching large, large user bases, but was there a Facebook? Definitely not. And then ChatGPT comes along and gets to 100 million weekly active users in whatever time period it was, around whenever the dev day happened and Sam Altman was kicked out of the company for six days or something like that. Then it was just like, oh my God, I could put this in my product, and suddenly everything has a consumer feel to it, and the way I customize it is, like, pulling from this data. And so suddenly anyone could just imbue their stuff with the consumer experience. Mostly the chatbots right now, obviously, and we’ll get to other stuff at some point in the future, but everyone could instantaneously imbue their customer experience with this.

0:30:17.5 ML: And so all of this data became way more useful. Immediately, overnight, everyone’s just, holy crap, where are we on LLMs? How can we efficiently pull data out to throw into GPT-4 to provide a better response from it? The technique is called retrieval augmented generation, or RAG, R-A-G. And, or, should we just build our own from scratch? Which is why Databricks bought Mosaic for 1.3 billion. I’m throwing… There’s gonna be a lot of fact checking here, I feel like.

[laughter]

0:30:56.0 ML: Which is basically, you have all this data, let’s build your own ChatGPT, your own GPT-4. And so, basically overnight, all of this became instantly more valuable. Anyscale, again, I keep coming back to these guys, but Anyscale, for example, they served standard machine learning models, like churn, and things like that.

0:31:12.6 ML: And all of a sudden, you can build an LLM with them and you get an API on the other end and you can use it immediately, or you can customize an LLM and use it immediately. It’s like, suddenly, they’ve fallen into this business that has direct, immediate consumer application, or has a consumer feel to it. Scale AI, they were a data labeling company, and it turns out that data labeling is important for data to actually be useful. And so they just managed to fall into one of the most important businesses of the current moment, or sorry, of 2024, probably 2025, 2026. And so all these companies, they had been doing stuff in this area for a while, and Databricks just happened to be probably one of the bigger ones, I think, because they were effectively, like, the data lake company to work with.

0:32:01.0 ML: And they had MLflow and all of this other stuff to make your life a little bit easier for putting together these models. But it's sort of like all of these tools became just way more useful overnight. And Databricks is definitely one of them, and probably one of the core ones, because all the data was already there and mechanisms to access it were already there, because they were already being accessed for ML models to begin with. There was fine tuning of LLMs well before this with BERT, which is a smaller model going back to 2018 or 2017, I can't remember at this point. Speaking of moving fast.

0:32:37.5 ML: But Databricks went from being an important company for ML to being a really, really, really important company for ML, in the same way that all these other companies did. Scale went from a company that was important in ML to really, really important in ML. And again, it goes back to that launch moment of, Holy crap, it works. Whoa, that was cool. And it worked right away.

[laughter]

0:33:00.9 TW: Do you have a sense of whether that generates chaos inside these companies when one of them goes through the, Oh my God, we've become super important? There's a blow up in demand, but there's gotta be a blow up in feature needs, and all of a sudden they need product managers who were working their way along at some pace, and now all of a sudden they're just, "Go, go." It's like if you're building scaffolding and saying, Well, we go one floor every hour, and all of a sudden you're like, "Oh my God," they're building two floors an hour next door and we better keep up. Do I want scaffolding to get built at a really, really rapid rate? Is there fragility, is there risk there? I mean, even OpenAI, with whatever weird business stuff, some part of it was that all of a sudden they became the center of the universe and then weird shit started to happen organizationally. Is that happening at these companies? Are you talking to people who are like, I'm coding 18 hours a day and pushing stuff out and can't get it through? I mean, is that happening? [laughter] Should we be scared?

0:34:14.7 ML: Hopefully we don't do 18 now. Hopefully we do 14 instead of 18 now. [laughter] This isn't like 2016. Yeah, I mean, it's like any startup. Scaling startups is really fricking hard, especially if you have a team that's working and then you start adding new people on and it rocks the boat and people start getting pissed off and leaving and all that other stuff. And I will say I definitely appreciate having worked at startups, because you get to see from the other end how much no one actually knows what they're doing. And permanent decisions are made on the spot in Slack, [laughter] that you can't correct.

[overlapping conversation]

0:34:53.0 ML: Or can't change, [laughter] having been part of these discussions. And it's the same problem that startups have always faced, growing pains, blah, blah, blah. The challenge with AI, though, is obviously, to your point, the scale is immediate and fast, and you have so many competitors all at once that just appear overnight. And it's a big challenge for these companies, founders that are maybe first-time founders, and they had this insane idea of starting a media publication for some reason in 2023, and figuring out, How do you methodically scale that? And some of them are gonna go great and some of them are not gonna go so great. Stability AI is probably the poster child of things not going so great after launching Stable Diffusion, or not launching, but kind of taking over managing Stable Diffusion, which is one of the cultural moments of AI, where suddenly you could type in a thing and there was an astronaut on a horse.

0:36:00.8 ML: And obviously there's been a lot of reporting about some weird stuff that happened there. And it's the same as any big startup shift. Going back to the MDS, one of the things the MDS probably had as an advantage over AI is that things moved a little bit more slowly in terms of deployment and development. DBT, I think, is a good example of a company that a lot of people think moves really methodically. And I encourage everyone to hang out on the DBT Slack. It's great. [laughter] It's hard to have that mentality in AI, because it's sort of constantly all hands on deck, because you'll be going to bed and the next morning Mistral comes out with a crazy open source model, casually dropping a link to a torrent on Twitter, after you spent probably dozens of hours putting together a huge marketing deck and material for Gemini.

0:37:00.0 ML: And then all of a sudden, that’s the story of the day because turns out that Mistral has this chance of blowing you up if you’re not careful. And we’re talking about Google.

[laughter]

0:37:10.7 TW: Geez.

0:37:10.9 ML: But I mean, it's a challenge for any founder and any startup. It's also a very big challenge for larger companies. So Google traditionally moves pretty slowly relative to… I mean, it used to move really quickly. Now it moves a little bit more slowly compared to other companies, and they can just get hit overnight with some big release from a startup that can move 50 times, 100 times faster than they can. So again, kind of going back to that 2010 to 2015 MDS era, the 2020s MDS era, but escalated 100x. [laughter]

0:37:53.0 JH: So as a company, how do you even know where to start if you wanna start using AI and you need all these other companies to help support you in doing it? How do you make decisions? And then on the other side, as a startup in that space, how do you communicate to the people you want to buy your service or your software the benefit of working with you, and then how do you compete against all those other players? My head is spinning at how complex it is for these companies to figure out how they compete with each other, and then, as a consumer of their services or products, how do you even know what you need and who to work with?

0:38:33.3 ML: Yes.

[laughter]

0:38:36.9 JH: That’s crazy.

0:38:37.3 ML: Well, no, I mean, it is wild. You can talk to a lot of companies, and the order for gen AI, and let's be clear, we should separate AI and gen AI out, 'cause generative AI is a subset of AI, and that's part of the reason why I called the newsletter Supervised, 'cause it's like a joke.

[laughter]

0:39:05.4 ML: Because most problems are easy and don't require all the crazy stuff that we're doing right now. And the order for gen AI comes from on high. The CEO is like, Where are we on AI? Whereas AI is the head of data saying, "Hey, what if we introduced a churn model? That seems like a good idea. Let's evaluate and see what's out there." And everyone knows what happens when you get an email at 2:00 in the morning from your CEO saying, "Hey, where are we on this?" Things start to get a little crazy.

0:39:30.6 ML: Everyone jumps to Google [0:39:32.4] ____ very fast to figure this out. So I think that's part of where the confusion stems from: How do we even get started on this? And most of the time it's, let's put a chatbot in, that seems to be a good idea. But you start to see some interesting stuff happening and emerging that goes beyond code generation and chatbots and things like that. But I think a lot of companies just don't really know what they wanna do, other than, We need to do something or we're gonna get beaten by our competitors, who are also thinking, We need to do something or we're gonna get eaten by our competitors.

0:40:12.5 TW: Well, except, I mean, there's a degree of signaling. We want to signal to the market that we're doing something with AI, which is funny. I've had discussions with companies like that. It can be, okay, you're selling widgets, and from on high it's, where are we with AI? And really it doesn't matter if it's the smartest thing to do, we just have to be able to tell our customers or the street or somebody that we're doing stuff with AI. Which, in the grand picture… it's like, I have a hammer and I've got to go find something to whack with it.

[laughter]

0:40:54.0 TW: As opposed to, now I've got this really nice hammer, it's in my tool belt along with the screwdriver and a wrench and this other stuff, and when I turn to my business problems, a hammer is an option. We definitely feel like we're in that manic phase where it doesn't matter, we just need to have something. And I think there's a lot of confusion when, in the business world, a lot of people say AI and they're thinking generative AI, and then there are a lot of vendors that are saying, "Well, we've been doing AI because we've got machine learning that can be used to trigger activities, and that's basically AI."

0:41:31.1 TW: So now we're gonna say that we're doing AI, and it does feel like it's causing a lot of confusion and FOMO and lots of things that aren't necessarily the most efficient way to get from point A to value. 'Cause then it's looking around… Where are we with AI? I don't know. Fine. Sue the intern can start. [laughter] She took a class, she just graduated last year… she had a class where she used TensorFlow, and so all of a sudden you're putting something into production. Yeah. Okay, I don't know. Sorry, that might have been…

0:42:12.6 ML: No, no, no, no, it's true. A lot of it is just, we need to be able to tell our board that we're doing AI, or we're doing something with AI. And I think, to your point, when we talk about the manic phase, it's on both ends. It's the companies that are saying, Oh, we gotta do something in AI, what are we gonna do? And then you have this whole other group of companies that are like, Oh crap, we already do AI-related stuff, so what should we build right now to plug into this generative AI stack? And again, to your point, we still don't know exactly what to do with the tech, other than the fact that it's cool as hell and everyone wants to use it, 'cause it is pretty awesome. But the value, like the ROI from it, is not 100% clear for a lot of use cases. There are some. Code generation is a good example of the…

0:43:14.7 ML: It's a very obvious value add. You have people like me that are using it probably way too much and forgetting the difference between loc and iloc and stuff like that. And that's an obvious one. And then assistants, that's another pretty obvious one. Notion's assistant app is really awesome, and pretty complex to put together in the first place. And then there's this idea of agents, which is creating tools that can act on your behalf to do more arbitrary tasks. Instead of, write an email for me in iambic pentameter or whatever, it's like, Hey, you're now Shakespeare, and I want you to respond to everyone who's called me from Salesforce, from the sales department, and tell them I don't need Salesforce, but write it in the form of Shakespeare. There are more steps that go into that, but that's another possible outcome from a product perspective. So it's very much a lot of toys right now, with a small subset of generally useful ones, like, Okay, these are actually gonna be businesses. And Midjourney is another really good example, I think.
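
For anyone who shares the loc-versus-iloc amnesia Matthew mentions, the distinction is small but easy to forget. The data frame here is made up for illustration:

```python
# .loc selects by index label; .iloc selects by integer position.
import pandas as pd

df = pd.DataFrame(
    {"visits": [120, 95, 210]},
    index=["mon", "tue", "wed"],  # string labels, not positions
)

by_label = df.loc["tue", "visits"]  # label-based: the row labeled "tue"
by_position = df.iloc[1, 0]         # position-based: second row, first column

assert by_label == by_position == 95
```

The two happen to agree here because "tue" is also the second row; with a shuffled or integer index they diverge, which is exactly when people reach for a code assistant.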

0:44:31.1 ML: But again, to your point, it's like, remember when everything had to have a check-in, and if you didn't have a check-in you were totally screwed? Or geo. If you didn't have geo in your app, you were totally screwed. What's your geolocation strategy?

[laughter]

0:44:52.0 ML: And now it's the same thing, and so it's like, What should we do? Like, add a check-in? Tim's at this bar doing whatever, or at this coffee shop doing whatever. And it's like, Well, why do I need to know that in Yammer or Salesforce?

[chuckle]

0:45:04.1 ML: It's the same thing. I mean, everything old is new again. And again, we keep going back to the same thing, but it's just faster, because we're all extremely online now compared to where we were in, like, 2010, 2016.

0:45:21.0 VK: So I have a question for you, Matthew, about a piece of advice you'd give to analysts when they're kind of stuck in this place where the CEO is asking them, Hey, what could we put in our goals as far as AI this year? That email at 2:00 in the morning. And you know better than to run around with a hammer, like Tim said, but you're kind of stuck in this middle place. What advice would you have for how to approach it? What would be the best way to start centering yourself in those business problems and make yourself aware of the applications that might make most sense for your business? Where would someone start on that journey?

0:45:55.4 ML: I would say, so if you're an analyst, the first question is, Why can't we do NLP for data? Why can't I just query in plain language and get the right answer back? It turns out that's really freaking hard. I feel like no one's actually figured that out yet, and we're a long ways off from it. But I think it kind of goes back to, Okay, what are our actual business needs other than AI, and what are our OKRs? 'Cause obviously, everyone's gotten the 2:00 AM email before, and there's obviously a lot of excitement around it. But if we're analysts, our job is to track these things. Our job is to be tracking these OKRs and KPIs in the first place. And so the question is, Okay, well, here are the three or four or five common use cases for AI, like chatbot, code gen, whatever.

0:46:54.5 ML: Write my DBT template for me, for the love of God. And how can we actually get from those to value, to the actual value of what it is we're trying to do? A good example is exploratory data analysis. Code gen is beautiful for that. It's awesome. ChatGPT is, I mean, again, trust but verify, add the asterisk to every single sentence, trust but verify.

[laughter]

0:47:21.6 ML: But having a code buddy there, again, like, Hey, can you do a bootstrap test for this? Instead of going through the pain in the ass of doing all of it yourself. That's an easy one. And so if you're thinking you wanna do something in AI: what are our goals with this in the first place?
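
The "code buddy" bootstrap request Matthew describes is the kind of thing a model can draft in seconds, and it's short enough to sanity-check by hand. Here's a minimal standard-library sketch, with made-up conversion-rate data:

```python
# Bootstrap confidence interval for a mean: resample the data with
# replacement many times and take quantiles of the resampled means.
# The data and sample size are invented for illustration.
import random
import statistics

random.seed(42)
conversions = [0.04, 0.06, 0.05, 0.07, 0.03, 0.05, 0.06, 0.04, 0.08, 0.05]

def bootstrap_ci(data, n_resamples=5000, alpha=0.05):
    means = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # sample with replacement
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

low, high = bootstrap_ci(conversions)
# The observed mean should fall inside the 95% interval.
assert low <= statistics.mean(conversions) <= high
```

Trust but verify applies here too: the resampling loop is easy to eyeball, which is part of why it's a good code-assistant task.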

0:47:36.8 ML: Well, first off, let's have coffee and talk about this in person, 3D preferred, instead of Zoom. And what are our goals? How do we think this can feed the business? These are the five or six common use cases that are out there that I've seen so far. Obviously there could be more that we haven't evaluated yet. But I can see from what we're currently tracking that we could improve productivity of our software engineers by X, based off of data from Retool and GitHub, whatever, if we implemented code gen internally. Here are the potential downsides of doing that, in terms of data leakage or whatever. And it would drop your Jira ticket requests by, I don't know, whatever. Pick an OKR and say, Okay, here's how it could feed your OKR. Okay, we'll do that. And I've staved you off for a week. [laughter] Not that it's a bad idea, not that that's a bad idea. [laughter]

0:48:35.4 TW: Anyone saying, I want to use AI, it's like, Well, do you have a hypothesis about how it will deliver value? And how would you actually measure the performance? To me, it can go back to analyst speak, kind of what you're saying: let's tie it to an OKR, over coffee. And I love the plug, that analysts maybe should subscribe to the Supervised newsletter, 'cause there's a degree of letting this stuff in. I mean, it is still overwhelming as I try to go through it, but I also recognize that the way the human brain processes this, it's gonna be useful when I need to make a decision. Just immersing myself in it is gonna help make me smarter.

0:49:21.4 TW: But putting on the analyst hat: if they say we want to use AI, it's, Okay, well, let's have a conversation about what problem we're trying to solve. Can we align on a problem and how we think AI might solve it? Being able to outline, Hey, these are four or five classes of use cases. Does one of these align with a business problem? Okay, now let's think of that as a hypothesis. Using an LLM, we could reduce our customer service investment because we could have a better chatbot. Okay, now we're getting somewhere. Now, how are we gonna actually measure whether it's happening? The success can't be, we rolled out the chatbot. We need to roll out a chatbot that actually achieves the business goal. So…

0:50:09.3 JH: Do you think, sorry, quick one. Do you think that if people… Okay, with the complexity of AI and everything we’ve been talking about, do you think companies will be left in the dust if they don’t jump on it now and try to start figuring it out? Or do you think there is some benefit to letting the water settle a little and see how something shake out before they’re suddenly trying to use AI somewhere in their company?

0:50:32.3 ML: Most companies are waiting for the water to settle. So to be clear, AI is… It’s definitely a thing, but it is very loud and loud does not necessarily equate to big and at scale. And the reality I think is, no one wants to be the first person to put the wrong chart up in a board meeting. That’s like, you do not wanna be that person.

[laughter]

0:51:00.2 TW: You don’t wanna be the lawyer who had your…

0:51:01.8 JH: Yeah. Ground. Right, right, right, right.

0:51:03.7 TW: Yeah, yeah.

0:51:04.3 ML: Yeah, yeah, yeah. No one wants to… no company wants to be the first company to do that, because it's not good. And you'll be the Harvard case study, unfortunately. [laughter] Even if there's someone worse than you, you're gonna be the Harvard case study. So most companies are wait and see, I think. And we're talking about, you have Salesforce, which is out there saying, oh, Einstein and whatever, and dah, dah, dah, dah, dah. And they kind of have to have a story in AI because they're at a scale in the tech industry, and we're incredibly navel-gazey in this industry, so they have to have some story to tell in the first place. Are the products useful? It depends on the situation you're in.

0:51:47.5 ML: It depends on the company that you work for. It depends on, are you doing a Salesforce migration, all that stuff. But the majority of companies in the Fortune 500, whatever you wanna call it, they're just hanging out. They're just, okay, hold on, let this shake out for a second, and then see what… Apple doesn't even have an LLM out; they're working on them. But is there a… Is Siri an LLM? Absolutely not. [laughter] Or is it?

0:52:16.7 TW: I'm just gonna insert that there's a time gap between when we're recording and when this rolls out. Again, if Apple has launched an LLM when you're listening to this, it's possible you can trace back to some window of when we're actually recording this, in late 2021. No.

0:52:34.1 ML: Yeah. Yeah. 2023. Yeah.

0:52:35.6 TW: No, I wasn’t gonna give it that way.

0:52:41.4 ML: Yeah. Yeah. But most companies are just waiting. Just wait and see what tech ends up being useful. And the actual data companies, the ones that are going to be powering the majority of this stuff, they have an incentive to not pick winners, because they want the ecosystem to play out. Because at the end of the day, they just want compute happening on them. They want you using their product to execute some kind of operation. That could be Snowflake or Mongo or Redshift or BigQuery or Azure, pick any of them. At the end of the day, they just want more compute happening, because that's revenue for them. And that's great.

0:53:24.4 ML: They don't really care where it comes from. And if they pick the wrong company, they could just inadvertently send us down a pathway that doesn't end up working out as intended. And so they kind of also have to sit back and say, Okay, let this play out in the insane mechanism it is. We'll partner with everyone and sort of add integrations for everything, and then wait and see. Just wait and see what ends up being really successful, and we'll build in the stuff that we think is really interesting that our customers want. Or we'll just wait and see how vector databases are going to play out. That's a big part of AI right now. And so the majority of companies are on the sidelines, just kind of watching the fireworks happen. Someone unintentionally set them all off at once and it's blowing up in the distance. Do you guys remember that video? The entire firework…

[overlapping conversation]

0:54:30.7 ML: So they're just waiting to see how things play out. And so in the case of, your CEO calls you at 2:00 AM and they're like, Hey, where are we? There's a good case to be made that we should just hang out for a second, wait until we see what other people are doing, and then follow their example, or go in the opposite direction, depending on how things go. And there'll be a number of companies, I think, that are more tech forward. Notion, I think, is a good example, where they have the agility to put out these kinds of heavy duty, beefy AI products that are difficult to put together and hard to get out the door. And they can be good test cases. And I can just sit here and be like, Okay, cool. Wait and see. Was it popular? Great, let's integrate it. Let's do it. We'll do something like that. So going back to the analyst situation, what should we do in AI? Well, what are our competitors doing? They're not doing anything. So let's just hold on for a second and not get too excited here.

0:55:37.2 VK: Send them the video of the fireworks going off. Be like listen, we got a minute to figure this out.

0:55:43.2 TW: Yeah. Let's stay over here. Well, this could be two or three hours, I'm not sure. Your rate of being able to rattle off these technologies is still a little mind-boggling.

0:56:00.6 ML: Unnerving.

0:56:00.6 TW: Unnerving. Yeah. This has been a great discussion. Unfortunately, the clock is ticking and we need to head to a wrap. So before we do that, we always like to go around and have a last call, something of interest or note or share worthy, whether related to AI or not, but what’s not related to AI? So Matthew, you’re our guest. Would you like to go first with a last call?

0:56:29.0 ML: I don't know. Like, stuff's crazy. Just grab a hot chocolate and listen to lofi beats to relax/study to before you make any serious decisions, I guess. Or the synthwave version. There's a synthwave one now, apparently. And yeah, it's fast. So it's probably going to be four weeks ahead instead of six weeks by the time this comes out. So.

0:56:53.8 TW: Oh, geez. Wow. Julie, do you have a last call?

0:57:00.0 JH: I do. So this one, it was a link kind of buried in a newsletter I get every day, a few weeks back, and it’s called the Eternal Jukebox. Pretty sure it’s powered by AI, but I thought it was really cool. You pretty much…

0:57:13.4 TW: That'd be Morning Brew.

0:57:14.9 JH: Oh yeah.

0:57:15.1 TW: Yeah.

0:57:15.8 JH: So you pretty much get to put in your favorite song. It looks it up on Spotify and then it loads it. And I like the visual of it even. It's a circle, and it's got different coloration for the notes, and it represents the song. And then it has all these connections across the circle, and you can watch as the song plays. When it hits one of those connection points, it jumps back and forth around the circle, and you can see it kind of grow outward with every section of the song it's played. And it just keeps going, your favorite song, jumping around it. I thought it was pretty cool. So, check it out.

0:57:47.4 TW: So have you put Poppy in front of it and seeing if she is enchanted by the…

0:57:52.8 JH: I haven’t, but I’m pretty sure she would be. She likes any screen at this point, but I should, I’ve been wanting to play with a couple of different songs. I did like one or two so far and it was pretty good. So check it out.

0:58:05.2 TW: Very cool.

0:58:06.5 JH: Yeah.

0:58:07.3 TW: Val, do you still have a voice to share a last call?

0:58:10.7 VK: I'll do my best. So this is a build off of a previous last call that you had, Julie. You recommended the Ologies podcast, and I'm super addicted to it. And there was one recently that was like a reboot, a revisited episode, from an etymologist. An etymologist, I'm saying that correctly with my crackly voice: Helen Zaltzman. And she has a podcast called the Allusionist, and they're 30-minute episodes. And there was one that she recommended on the Ologies podcast breaking down apologies. And I was like, How could there be an entire podcast episode about apologies? But not only did she prove me wrong, because it was super fun and entertaining, I also learned a bunch from those three experts who dedicated their lives to breaking down apologies. I was like, this is incredible. So I just love people who are so passionate about their craft, which is one of the many reasons why I really loved today's conversation, Matthew. So again, thank you for being a guest. And yeah, the Allusionist, with an A. It was awesome. So that's my last call.

0:59:17.0 TW: That’s awesome.

0:59:17.6 VK: How about you, Tim?

0:59:19.5 TW: So I'm gonna do a quick twofer. One of them is kind of on the AI front. So, Jason Packer, I feel like I've had him as a last call before, but he and Juliana Jackson co-wrote a post that's pretty lengthy called Outsider Thinking in the Age of AI. I think Jason said it was like 5,000 words. It actually took me two sittings to read it 'cause I had to go somewhere. It's just amusing. It's not necessarily gonna answer your questions about AI, but it's got some details about the Turing test that I didn't know. It references the Shaggs, which is a very strange musical phenomenon that happened.

[laughter]

1:00:02.0 TW: It's got Outsiders versus Mavericks, it's got Picasso, it's got Van Gogh. So it's like Jason's brain, and then Juliana came in and added more stuff, and it's just musings about outsider thinking being something that AI is not good at doing, and kind of breaking down why that is.

1:00:19.8 TW: And then the second thing, since this is our first episode of the year and people are maybe getting out of the holidays and heading into, what am I doing professionally? I'll put in a plug for MeasureCamps. There is the MeasureCamp calendar at measurecamp.org, but I'll also call out specifically the US, which I don't think has had an in-person MeasureCamp since like 2020, maybe even before that, and has two already scheduled in person. One's on the 2nd of March in Austin, and one is on the 13th of April. It's MeasureCamp New York, but that's actually in New Jersey. And Val could probably explain why you do things in one state but call it a New York thing. That's gonna be at the New Jersey Institute of Technology on the 13th of April. So think about it, and if you're not in the US, check out the calendar and see if there's one you can get to. MeasureCamps are awesome. So I'll stop at two. I probably could have done four. But once again, this has been a great discussion. So thank you again, Matthew, for coming on the show. That was a fun ride. And…

1:01:25.0 ML: Thanks for having me. Although that was a really fast last call. Numlock News is my favorite newsletter. Definitely go read it. It's by Walt Hickey, who is fantastic, and I strongly encourage you to subscribe.

1:01:38.4 TW: So do you know him beyond him having had you on?

1:01:44.7 ML: Yeah. Yeah. We worked together at Business Insider. Yeah.

1:01:45.1 TW: Oh, you were at Business Insider together. Okay.

1:01:46.7 ML: Yeah.

1:01:47.1 TW: Yeah. The number of Walt Hickey sourced… yeah, I probably could have said that we found Matthew because Walt's Sunday interview was with Matthew, I think right as you were starting up Supervised. But Walt also has a book out that is fascinating. Which… Where did my copy go?

1:02:06.4 ML: You Are What You Watch. You Are What You Watch.

1:02:08.3 TW: There we go. Which has got visualizations and charts. So we have plugged Numlock News many times, and I'm a daily reader of the Numlock News. I have stickers. I've done a number.

[laughter]

1:02:19.3 TW: So no show would be complete without thanking our producer, Josh Crowhurst. Josh, you don't have to fully fix the transcript. This one's gonna be a challenging one, so we'll let the readers… I love that when we're talking about transcription, within 30 seconds, I think Matthew said…

[vocalization]

1:02:40.5 TW: A gibberish word. [laughter] I wanna go find that one. And now I wanna find this one and see what actually came out. It's probably just gonna say unintelligible.

[laughter]

1:02:50.2 TW: So as always, we'd love to hear from our listeners. You can find us on the Measure Slack, on LinkedIn, on the platform that the car guy runs. You can email us at contact@analyticshour.io. We'd love to hear from you, and we'd love to get a rating or a review on whatever podcasting platform you listen on, if it has that sort of support. We kind of love to read reviews, angry or not. And with that, no matter whether you are running around recklessly with an AI hammer and smashing every analytics project with it, or…

1:03:30.4 ML: Dashboard.

[laughter]

1:03:33.5 TW: Whether you're building dashboards with AI hammers, or whether you're just toiling away and saying, look, this is making my code writing a little bit faster. Whatever you're doing, for Julie, for Val, for me, for Moe and Michael: keep analyzing.

[music]

1:03:50.3 Announcer: Thanks for listening. Let's keep the conversation going with your comments, suggestions, and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

1:04:07.9 Charles Barkley: So Smart guys wanted to fit in. So they made up a term called analytics. Analytics don’t work.

1:04:15.2 Kamala Harris: I love Venn diagrams. It’s just something about those three circles and the analysis about where there is the intersection, right?

1:04:24.7 TW: Rock flag and transcription nightmare.

[laughter]
