#228: What AI Can't Do with Dr. Brandeis Marshall

It’s a lot of work to produce each episode of this show, so we were pretty sure that, by this time, we would have just turned the whole kit and caboodle over to AI. Alas! It seems that the critical thinking, curiosity, and mixing of different personalities in a discussion are safely human tasks… for now. Dr. Brandeis Marshall joined Michael, Julie, and Moe for a discussion about AI that, not surprisingly, got a little bleak at times, but it also had a fair amount of hope and some handy perspectives through which to think about this space. We recommend listening to it rather than running the transcript through an LLM for a summary!

Articles and Other Resources Mentioned in the Show

Photo by MARIOLA GROBELSKA on Unsplash

Episode Transcript

[music]

0:00:05.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:00:12.9 Michael Helbling: Hi, everybody. Welcome to the Analytics Power Hour. This is episode 228. What are AI’s capabilities? Where are the boundaries? Well, I asked an AI to write an intro to a podcast about this topic, and here’s what it came up with: “The rise of AI has led to increasing fears that humans will be replaced in the workplace, but there are areas where AI cannot go. And in this episode we’re gonna dive into these areas and explore the promise and limitations of artificial intelligence. It’s time to overcome the hype and truly understand what makes human intelligence indispensable… for now.” I don’t know why it said “for now” at the end, but let me introduce my co-hosts. I didn’t actually say that; it’s a little joke. But alright, Moe Kiss, you’re the Director of Marketing Data at Canva. Hello. How are you going?

0:01:03.4 Moe Kiss: Good. And what I can confirm is that AI is not as funny as you. Like that is a true thing.

0:01:12.4 MH: Well, I mean…

0:01:13.6 MK: That was missing some good humor, some Helb’s intro humor.

0:01:16.1 MH: I tweaked it a little bit to make it more ominous. Yeah.

0:01:19.4 MK: Okay. Alright.

0:01:21.0 MH: And Julie Hoyer, Manager of Analytics at Search Discovery. Hello. How are you doing?

0:01:30.4 Julie Hoyer: Hello. Happy to be here. Can’t wait to dive in and talk about all the human aspects.

0:01:36.9 MH: I love it. I’m excited too. And I’m Michael Helbling. I’m the founder of Stacked Analytics. We also wanted to bring in a guest, someone who could bring some expertise to this topic. Dr. Brandeis Marshall is the CEO of DataedX Group, a data ethics learning and development consultancy. She’s a computer scientist with a PhD from Rensselaer Polytechnic Institute and has served as faculty at Spelman College and Purdue University. She’s also the author of the book Data Conscience: Algorithmic Siege on Our Humanity. And most of all, today she is our guest. Welcome to the show, Dr. Marshall.

0:02:10.4 Dr. Brandeis Marshall: Hey. Hey. Hey. It’s wonderful to be here. Let’s talk about AI and the soulless part.

0:02:18.0 MH: Yeah. What can’t AI do yet? No, but part of this discussion stems from, I think, a Medium post that you wrote about sort of what AI can’t do. But maybe let’s start further back: AI has kind of blossomed in the last three or four months, and you’ve obviously been in the computer science and data space for quite some time. How have you seen this unfolding from your perspective in the data community?

0:02:46.0 DM: I’ve been what I call an early observer. I’m just watching AI become public and how it has taken over everyone’s minds, and I think everyone needs to just calm down, take a beat, give it 30 days. Everyone’s trying to jump on a bandwagon. Everyone’s trying to, like, open an account. Everyone’s trying to use a technology, but they don’t understand what it is, what it does, how it’s useful, or if they really even should be applying it. Is it even relevant to their everyday lives? And so as someone in this space, I’ve sort of got the lawn chair out with the popcorn, just like, I’m just gonna watch these fools do some things for a couple of days and then I’ll say some things.

0:03:40.4 MH: Nice.

0:03:42.6 JH: I was gonna ask, what are some of the areas where people believe AI is really good that you already see as, like, false hope, or not where it would be best used?

0:03:54.3 DM: I think number one would be that it’s going to replace people’s jobs. I think everyone is just like, “Oh, AI is now going to do the job that I do, and so I won’t have a job.” And no one really understands algorithms or AI in the way that I do, because AI, and more specifically algorithms, they just go forward. They’re trying to get a result, they’re trying to get an answer, they’re trying to produce some outcome. That’s not the human way. So when you try to replace humans with AI tools, systems, platforms, you’re gonna run into problems and errors, because AI is gonna mess up. I mean, we’ve already seen it time and time again. ChatGPT is gonna produce some language that doesn’t make sense, so you have to retool it. It’s going to produce a list of agenda items, and you’ll be like, that’s not useful. So there are people that still have to be in the loop. So that’s the biggest thing that I’ve seen so far, because people are just firing folks, and they’re gonna have to, like, rehire at least half their staff back, because they’re gonna realize that AI is just a dud if it’s unsupervised.

0:05:19.4 MK: What do you think… ‘Cause, like, this perspective or this hype around, it’s gonna take everyone’s jobs and we’re all gonna be out of work and that sort of thing, it’s not a view that I share. ‘Cause I’m always like, cool, you can do the boring bits of my work, and then I can hopefully focus on the fun bits or the people stuff, which are the things I love anyway. Like, I wanna work on the complex stuff. So if you can automate simple things for me, that actually makes my life better. But it just seems like the conversation has gone off-track somewhere. Is it that the topic itself is so difficult for people to navigate? Like, why has this conversation been derailed such that people have this fear?

0:06:03.3 DM: I think the fear is because of the bad actors. I mean, there is AI voice cloning that has been scamming people for years, but it has definitely taken an uptick in the past few months with ChatGPT and other generative AI systems. So if you’re unfamiliar, AI voice cloning replicates the voice of a real person, but it is really a bad actor who is trying to scam the person on the phone in order to get money, claiming that their loved one has been kidnapped in some respect, or doing something similar with banking. But I think the reason why this AI hype is around is because there hasn’t been any really good education around data skills. No one really understood an algorithm till about five years ago, right? And then after they understood the algorithm, it all of a sudden jumped to data science, which was, like, a big word that everyone was afraid of.

0:07:05.8 DM: And no one really broke it down to just say: it is statistics and it is coding that is now being exploited in society. So that’s another thing that happened in that evolution. And I think most prominently, there’s been very little governance around it. There have been no guardrails around what companies can and cannot do with our content, with our data, with our likeness. I mean, we all sign these forms to say you can use our voice and our picture however you like. And that’s been going on for generations. But now, in the digital space, people can make up different people, right? And so the rate at which algorithms have turned into AI tools that have then been used by bad people has made everyone scared. And no one has just said, let’s just have, like, Tech Education 101 for the world. No one has stopped and done that.

0:08:13.5 JH: And it feels like, too, future talk is a big focus: like, oh, now that ChatGPT is out and everyone can use it, and everyone was so wowed by it, right? Like you were saying, it’s very front of mind for everyone, and everyone is a little in awe. And then it’s the chatter of, oh, what’s coming next? What does this mean is coming? And I feel like people very quickly get into that cycle where they’re saying, well, if it can do this, it can do all these other things. And maybe a lot of it, too, is assumption about what’s coming, rather than looking, kind of to your point, at the reality of where it’s actually at and how it’s currently working and what’s feeding it, and all those considerations and assumptions that go into this underlying algorithm. And being able to critically think about, like, what are the shortcomings of that? What does that mean, then, for how it can actually be utilized out in the everyday world? It took a while, I think, even for myself, to start to think of those things when it first came out, because it was just the buzz and the hype of, oh, look what it can do.

0:09:16.5 DM: Right. But I think because I am a Black woman living in America, I’m always looking at the pros and the cons, because every system is not designed with me in mind. So I’m always considering, okay, there are some good things that this can do, but what are the bad things that are harmful, right? There were conversations back in 2019 that I was a part of when it came to deepfakes, and I was saying, “Hey, deepfakes are bad. This is an important conversation we need to have about how people’s faces are superimposed on other people’s faces. And this is a terrible thing that technology is able to do now.” And folks were not really paying that much attention. That technology has turned into what we are dealing with now with AI writing and AI voice cloning and generative AI. So there is a contingent of people, who tend to be those that are historically excluded from systems, that are raising alarm bells, but no one’s listening, because everyone’s just trying to make a buck. There is that capitalistic society; there is the tech culture of trying to convince everyone that tech is the solution. And there are people like myself that are saying, yes, tech can do good, but it also can produce harm. So let’s look at both sides of the coin and then let’s march forward open-minded and open-eyed.

0:10:52.8 MK: Okay. But so my mind is going in two totally different directions. There was a report a few weeks ago covered by Australian media. Basically, we have an eSafety commissioner, and they’ve started getting complaints about children using AI to generate explicit imagery of other children to bully them online, right? That’s horrendous, and it’s covered by the press here. And so, like, the need to focus on, I guess, the bad or what can go wrong is important. But then the other side of me is also saying, this is also generating a lot of that, like… And I don’t wanna say hysteria, because this is obviously a horrific situation, but it’s generating that negative press about AI that then also makes people reluctant, maybe, to see the positives, or, like, how it can be… Like, I’m seeing both sides of the story, right? You need to educate people about the negatives, but if they get covered in the press, is that also going to be detrimental to the conversation? Is that… I’m not sure if I’m making sense here.

0:11:58.4 DM: No, you make perfect sense. ‘Cause this is what I battle all the time: saying we need to start thinking about what are the interventions, what are the solutions, what are the things that we can tell people to do. So, like, one thing that is very top of mind, especially with AI voice scams, is: do not answer calls from numbers you don’t know. Don’t respond to texts from numbers you don’t know. If it’s coming from your “bank” via text or email, you call your bank branch directly. You don’t click on the link inside of the text or the email, because you wanna make sure that it has been vetted. So there are very direct, tangible steps that people can take to vet, but the issue is that there’s so much of the extremes, to your point, Moe, that the actual helpful insights don’t get through in the media, right? Because the media feeds off of the extremes. The media doesn’t feed off the solution.

0:13:00.8 MK: Totally.

0:13:02.3 DM: And so, we as a people… I’m one of those, like, lone people going, let’s talk about solutions, let’s talk about ways that we can build equity in everywhere so everyone understands. So the parent that has the child that’s being bullied knows what to do and what to say to the administrator at the school, right? To say, this is how my kid is being bullied, and here’s the evidence. Like, that is helpful and useful, and that’s a way to use AI that is powerful and impactful and also protects those who are vulnerable. But that’s not the conversation we’re having. We’re always having the extremes. We’re never talking about the solutions.

0:13:39.7 JH: Right. And I wonder if those extremes… To me, at least, this is how I personally feel, and I do wonder if other people may feel this way: kind of going back to why we think AI can do everything, it’s ‘cause you hear those extremes and you think, if it can do that, what else can it do? And it’s scary, and it’s like we’ve never faced something like that. Like, you can superimpose my face on these images, these videos, and it’s not me. I feel like those extreme cases help people’s imaginations go very wild with what is actually possible. And it’s probably hard then, to your point, Dr. Marshall, bringing it back to what are the solutions, what are the real boundaries within which AI has to work. And it’s hard to get people to come back, I think, to that concrete space after hearing a story like the one you just shared, Moe. I mean, that’s crazy.

0:14:22.5 DM: Yeah. I mean, those types of cyberbullying happen all the time. But then there’s the fun side of the extremes, right, Julie? Like, you see the AI-generated dance moves of, like, the former presidents; I just saw that on IG, like, a couple of days ago, right? So it was like, oh, that’s fun, right? Or you see your face distorted in different ways; that sort of fun. So then people get into that creative mode too. And then again, that feeds into the, “Oh, these are all the things that AI could do for me.” But they don’t talk about: isn’t it kind of weird that you can take a former president’s face and superimpose it on a different body and then make it dance? Like, do we really want that to be a good thing to happen, right?

[music]

0:15:17.4 MH: Alright, it’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:15:20.9 Tim Wilson: Well, Piwik PRO is easy to implement, easy to use, and reminiscent of Google’s Universal Analytics in a lot of ways.

0:15:26.1 MH: I love that it’s got basic data views for less technical users, but it keeps advanced features like segmentation, custom reporting and calculated metrics for power users.

0:15:36.6 TW: We’re running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:15:42.6 MH: That’s right. So head over to Piwik.pro and check them out for yourself. Get started with their free plan. That’s Piwik.pro. Alright, let’s get back to the show.

0:15:54.9 MK: So I’d love to talk a little bit about your Medium post, ‘cause we’ve kind of touched on, I guess, the extremes of AI, but in particular what AI can’t do. Did you wanna kind of take us through it? The first one that you mentioned is contextual awareness.

0:16:09.9 DM: Yeah. So how I thought about this Medium post: I was sitting in a room full of other people, thinking about how they were having a little bit of AI hype and questioning what AI could do for them. And I started to just identify these three areas. So the first one is contextual awareness, which is multifaceted, because context is something that takes in the political and historical realms all together, in addition to social and economic factors as well. So I really wanted to talk about how AI does not have context in a lot of scenarios, because it is only given a certain amount of space to make a decision about the next step. It doesn’t necessarily have an idea of the journey. It only knows about one step at a time. So that’s the contextual awareness that we as humans have that AI does not have, and I don’t think it ever will.

0:17:12.7 DM: But again, that’s a point of contention for some people. And then the second one happened to really be about… Oh shoot, I forgot. I forgot my own stuff. What’s the second one? Oh, conflict resolution. So conflict resolution was the other one that really was top of mind, because when it comes to conflict resolution, there’s always a friction point and a contention. And what I know of algorithms and how software works in general, right, from teaching it for a long time and helping to build a couple of software systems, is that software systems don’t deal with conflict well. They just sort of bail out. They say, “Cannot answer the question,” or they stall, or something.

0:18:01.9 DM: So we as humans can actually be part of resolving conflict and dealing with tension points way better than any machine could actually do it. And then the last one, which I think is the most important one, is critical thinking. And for me, critical thinking is the preamble to the other two. Critical thinking is how we can prevent situations from happening in the digital world that impact us as a society negatively. And so how do we as individuals make decisions and swerve, right? We get a new context, we get a new conflict, and then we decide to take a different route. I really think of GPS systems, which are a very good use of AI.

0:18:53.0 DM: You have to put in a destination. You have to know where you’re starting, and then the GPS will find that route. You can put in intermediate stops along your route, and the GPS will map it out for you very beautifully, but if you want to change where you’re going, you have to abandon the original route, right? And so when it comes to critical thinking, we as humans make this switch relatively seamlessly, because we are taking in multiple factors and we don’t just have one destination in mind. We might have multiple at the same time. So that’s a little bit of my Medium post and what I was thinking through as I was writing it down and just saying: y’all, we’re good. AI ain’t got us beat.

0:19:47.2 MK: I think the critical thinking one is… Like, for me personally, probably the most interesting, because it is a skill set that we are constantly discussing in the data space. I would argue lots of people are not even good at critical thinking, let alone trying to get a machine to replicate it. [laughter] And it can be so difficult to define and, like, pin down. I was trying to explain to someone the other day what we mean when we think about really, like, problem solving and working through something, and it’s like there isn’t a set of rules that you can plug into a machine and be like, do this and then this and then this. And I’ve had attempts at getting analysts to try and explain this process to another analyst, and it’s actually hard, because it isn’t step-by-step. It is literally a choose-your-own-adventure situation where you could take many different paths.

0:20:37.2 DM: Yeah. And everyone has a different way of how they process information. I think that is what people are missing when it comes to computer programming, right? People would say, “Oh, just learn how to code.” But as someone that has taught people how to code for nearly two decades, a little over two decades, and that kinda ages me, but anyhoo, I know how someone codes better than I know how they write, because they will use certain structures and systems that are fundamental building blocks of coding, and I’ll be like, “Yeah, that’s what that person did. Yeah, I understand.” But someone’s handwriting, I might not know, but I’ll know how they code. So it is an art form; it is a language when it comes to translating what we are trying to express as humans to a machine that’s just zeroes and ones. And I think that art is lost. So when it comes to critical thinking, the same thing happens. You cannot just replicate someone’s process, because if anyone’s a coder out there, they know: they get someone’s code and they’re like, “What is this? I don’t understand this.” And it takes you a minute, because you’re trying to re-orient from what you think the process should be to what that person’s process is.

0:21:56.7 JH: One other piece of this that I just keep looping back to in my mind is the idea, too, that, you know, you keep saying how AI has to be trained on things; we also have to be giving it information it can understand; it’s trying to work within a bunch of parameters, and all of that is historical data. It’s what’s happened; it’s been the reality. But I think, to me, the magic of being, like, a human, and that critical thinking in the moment, is when you’re faced with a completely brand new situation: the context is new, a new conflict, new parties suddenly enter and are involved. I just can’t wrap my head around how an algorithm trained only on historical data could suddenly, properly, you know, take this one new data point in time and respond in a way that a human would. I guess, to me, that’s one of the biggest sticking points in my head: we only have the past; nobody knows the future. So is that one of the big shortcomings of AI, or do you think it’s maybe better at predicting the future, ‘cause we’re not as unique and net new as we think? Like, history repeats itself. Like, I don’t know.

0:23:06.5 DM: I think that AI is not good at predicting the future. I think that the historical data is only good in the context in which it was gathered. So, for example, to make it more concrete: a lot of the machine learning models that are part of these generative AI tools like ChatGPT are based on, let’s say, news articles and books, right? But let’s say one looks just at news articles. Well, news articles, for a very, very, very, very long time, have been written mostly by White men. So that means it’s their perspective. I mean, we could go back just 20, 30, 40 years and look at the news articles and how they speak about certain people, right? Anyone who is undocumented: we say undocumented now. Before, it was illegals. Before that, it was immigrants. Before that, it was some derogatory terms, right? So AI, to me, can’t predict the future. That’s why humans need to be in the loop, and I think that’s always going to be the limiting factor of AI. And the other issue that I always like to bring up is that people think AI is gonna evolve and, like, we as humans aren’t gonna evolve. [chuckle]

0:24:42.9 DM: Like, we’re gonna evolve too. So we were here before AI, and we’re evolving, and then AI is evolving. AI is evolving fast, I understand that point, but we are evolving even faster because we have AI. So we’re always ahead of AI, no matter how you look at the spectrum. Like, humans first, then AI. We created the AI, so we are always gonna be evolving faster than the AI will, right? It’s kind of like Google. Google was a brand new, great thing 25, 30 years ago, whatever. Now we’re kind of like, it’s a phonebook online. Like, what is it again? You know, we’re just kind of not too impressed anymore. The same thing is gonna happen with AI.

0:25:40.2 MK: Dr. Marshall, I really just wanna dig into something that you said there, and your perspective. I guess in my mind, one thing that I’m always kind of wrestling with internally is, like, there’s a difference between things being fair and equitable and… Yeah, I mean, I can stand on a soapbox and talk about that for a long time, but…

0:26:00.3 DM: Me too.

0:26:01.5 MK: Just discussing that perspective about, like, AI and, I guess, marginalized voices or minority groups: do you think there’s something that we’re missing in that piece? ‘Cause like you said, right, the input is historical data, and historical data is always gonna represent the majority of our population, or the majority of the voices, or whatever the case may be. Are there ways that we can better protect there that we are not considering, or that you’ve given thought to? And if I’m totally taking you into an area you haven’t considered, feel free to let me know.

0:26:32.9 DM: I think about it quite a bit, because I’m trying to do my best to amplify all types of, you know, historically excluded voices, right, around the world. So I think the stories of people who have been marginalized are never represented well in the mainstream, because this is a society that’s very tilted towards certain demographics, right? White, patriarchal, male, yalalalala, and on and on. But what I do think is that we honor their space: so if they want their stories to be digitized, then digitize them in the way that they want. And I think that is something that we’re missing as a society; there’s this kind of sense that everything needs to be somehow digitized. Well, maybe certain aspects don’t. Maybe the story doesn’t need to be typed up, doesn’t need to be audio-recorded. Maybe it needs to be just word of mouth, handed down in a certain tradition, honoring their culture and their community. And I think that’s the part that we’re missing. For those individuals in those communities that want to have their content in a digital space, then include them in all aspects, right? From the funding to the dissemination, right? Not just including them on the back end, after you’ve done all of the things and created the platform, and include them in all of…

0:28:19.0 DM: And I think that’s the other part that’s missing: there’s not an inclusion of the people and their communities from the very beginning. And it could happen in micro ways, right? You just decide in a room that you’re going to have a particular survey done, and you want to interact with a group that has historically been marginalized, but you never talk to anyone that’s part of that marginalized group. Like, just that step.

0:28:51.7 MK: It’s like you can’t just think about it when you’re building the system; you need to actually think about it when you’re, like, “collecting the data.” That needs to be where the thought process starts.

0:29:01.9 DM: Yeah. It needs to be in the requirements, in the spec component. It doesn’t need to be on the testing side. Like, it needs to be way earlier in the conversation, and I think those are the parts that we are missing as a scientific society, but also as a practitioner society. People just do things to get ’em done, to meet a deadline. They don’t actually think first about who it is impacting on the back end, and then reverse engineer it.

0:29:32.6 JH: I wanted to ask, too. We touched on this a little earlier, especially from your Medium post with the three aspects, and you were saying, you know, those are very human, and if your role encompasses any of those, you should have a job moving forward, because AI can’t do those. And, you know, within a role, I think it’s even hard to say… like, if you were to go around the room and ask, what’s your role at work and what are all the tasks that go within that role, I think it would be really hard to nail down exactly what those are. And so thinking of AI as good at doing tasks, and people now firing humans saying AI is going to take over your whole role, feels very short-sighted and very unrealistic, to what you were mentioning earlier, because if it’s good at tasks but we can barely describe all the tasks a role entails, how do we expect it to take over? I mean, there are projections out there for millions of jobs that it’s gonna take over, and each one of those is a role that I feel is more complex than we have given it credit for. So I’d love to hear, yeah, your thoughts on that.

0:30:37.7 DM: I completely agree with you, Julie. Can we just take over the world now? [laughter]

0:30:46.7 JH: Sounds great. Let’s do it.

0:30:46.7 MH: One podcast at a time.

0:30:48.6 DM: One podcast at a time. Yes. Because that’s where I sit. I sit on the: do you even know what that person does? Do you even know what you do in your role? Because you don’t necessarily write down everything that you do every single day. I mean, just the level of meetings that people have, and the recording of those meetings, is so many tasks that then get decided, who does them and when they get done, and some get thrown by the wayside ‘cause they become not important anymore. This is where I think humans will always have a job, because someone needs to man the AI. Because, I mean, if AI’s gonna take over a job, then who’s gonna make sure AI did it right?

0:31:38.3 MH: Yeah, that was the big point I took away from your second thing, around conflict resolution, because to me, what happens in that space is usually some sort of holding of tension between two things that may need to balance out. Like, for instance, a business’s profit motive balanced with how we treat people, and how we treat them fairly or equitably. So, like, an AI is gonna come at you with, well, here’s how I think we address the profit motive, but how do you know it’s adequately holding the tension of treating people the right way? And I think that’s at the heart of it, where we can’t really let AI have those responsibilities; it’s not ready. And you see it even when you do chats with GPT: if you stray into an area where it feels uncomfortable… You said it really well, Dr. Marshall, you know, computers just aren’t great at maybe those types of questions, I guess; this is a paraphrase, but the way you said it was basically, they shut down. They’re just either a yes or a no. And, well, ChatGPT will be like, I can’t answer any of these questions, we gotta stop talking about this, and it feels sort of bureaucratic in a way. And… It’s just sort of like, yeah, yeah, yeah.

0:32:44.5 MH: And so one of the things I think about, in terms of sort of the future of this: because AI is not a person, and it’s really difficult because it feels very person-like when you’re talking to a generative AI, like, it’s gotten really good at mimicking sort of human speech and English and patterns and things like that. Do you believe that we’ll get to a point where we can teach an AI how to hold some of these tensions in decision making and things like that, or is that something that a computer should never, ever be expected to do in the history of humanity? I don’t really know; it’s calling for a lot of philosophical and… [laughter] I think drawing on a lot of your expertise as a computer scientist, but I’m just, you know, speculating, I guess.

0:33:36.5 DM: Yeah, I’m pretty hard-nosed about, like, no. I’m a hard no. No. Computers need to not do certain things, and I don’t think the zeros and ones will ever get to the point where they can make those types of thoughtful, critical decisions, because there are gonna be too many sub-conditions and the computer will glitch. [chuckle] The algorithm will just be like, I don’t know what to do, because it could go five different ways and it doesn’t know what to output, right? So I think the best it can do is to provide you with the choices, and then you as the human can decide if you’re gonna take one of those choices or go a different route. But I think if we as a human society relinquish the control to AI, we are going to be the bots.

0:34:39.1 MH: Yeah. And you kinda see this played out practically in all the examples of where we have negative outcomes in society from depending on machine learning algorithms too heavily, like in sentencing in courts. I think there was a story about Amazon a while back where they were using it to sort of grade resumes, and it was doing a really awful job of that, and so on and so forth. It seems like when we just let it run with it, that’s where we kind of stray off of what should be the right path. That we kind of need a human to still be checking all the work.

0:35:11.7 DM: Exactly.

0:35:12.7 MH: It sounds like.

0:35:14.4 MK: But I just need to throw a spanner in the works. [laughter]

0:35:20.4 DM: Go for it.

0:35:20.5 MK: ‘Cause I keep listening to us all say, like, cool, AI is not gonna take our jobs, we still need to be there, who’s gonna check that the models are running properly? And I’m like, but do we actually have the skill set and the sophistication ourselves to do that well? Because there are a lot of indicators that we’re not doing a great job at it. So it’s kind of like, yes, AI is going fast and moving fast, and if we’re meant to be there to be the check and balance, what are the skills that we as people need to develop to do that check-and-balance job well? ‘Cause I’m not sure we’re there right now.

0:35:55.9 DM: No, we’re not. We’re not there at all, and that is a two-pronged problem. The first one I mentioned earlier, which is the education, right? Getting upskilled or re-skilled in understanding the digital infrastructure and what that means. The second prong of that is, of course, companies who claim intellectual property on how they are building these systems, that they are not sharing; there is no transparency in how these systems are operating or built. So we have two opaque things happening at the same time that are not working in our best interest, right? We need more people to have a curriculum that covers what AI, data, computing, statistics, and digital skills are.

0:36:46.9 DM: We can label them different things, right? We need that, right? People right now. Not the future children, not the current children, but adults right now. Everyone’s talked about the children; let’s talk about adults. Adults need to understand that. But then we also need companies to be more transparent about what their systems are doing, and that second part, to me, is the rub, because I don’t believe a lot of the companies understand how their systems are working. The reason why they don’t is because they’re using open source technology that they are just plugging and playing; they don’t even take a look under the hood to see what that open source technology is really doing, nor have they vetted it. It’s just become part of the product, right into their full system. So they don’t understand it, and they wanna go fast.

0:37:33.9 MK: Yeah, but it’s funny, I was also gonna say, is open source technology the solution then? Because then that gives transparency into what businesses are doing and what models they’re using. But then it seems like that also has the flip side, right? If people are employing open source technology that they haven’t written, then maybe, yeah, they don’t look under the hood; they just plug and play?

0:37:55.6 DM: Yeah. So for me, open source is a point of ethical contention, because no one is talking about the ethics of the open source technology. Everyone wants to protect those who are creating open source, the programmers and developers. That’s great. Protecting their software library, so you know who wrote the software libraries. That’s wonderful. But no one is talking about whether or not they have done a responsible and ethical job in building the technology: what systems are they using, what structures are they using, what sort and search algorithms are they using, is it biased, is it not biased? They haven’t done any type of risk mitigation analysis over the open source. And the other issue that I in particular have with open source is that no one is leading it either, ‘cause everyone is a volunteer, so there’s no incentive to be ethical or responsible.

0:38:53.7 DM: There’s only an incentive to make sure your name gets associated with a very popular software library, and you get internet fame. So I have certain contentions with open source that most people don’t think about, but I think about as someone that’s been in the field, but also looking at the space. And now that GitHub is owned by a trillion-dollar company that also has generative AI tools embedded in it with GitHub Copilot, we’re seeing a lot of junk coming out of open source now, so you have to be very careful.

0:39:33.7 MH: Well, it seems like with open source, the horse has left the barn, in terms of people being pretty gung-ho to try to apply it in some capacity. But I think that’s what brings me back to your third point, which is: it’s never too late to start introducing critical thinking to this process, which is both a human thing to do at the ground level, but also at the architectural level as well, if you will.

0:39:56.4 DM: Yeah.

0:39:57.0 JH: So would your biggest piece of advice be that people need to start questioning it more: when they go to use it, whether they should use it, and when they use it, the output? Or what would be your couple of pieces of big advice for people, now that AI is out there and becoming mainstream, cool, and everybody wants to use it?

0:40:15.6 DM: Yeah. I think it would be questioning it. I think it is: where has it been used? What’s the source of it? Who owns it? What are the case studies for the positive, and what are the case studies for the negative? If you can’t identify those, then hold fast, right? Just stop. Stop looking at it, because if you are seeing a technology that’s out there and you can’t identify where it came from, and the good and the bad of it, then you don’t get a 360 view of it. And I think because we’re human, we see the 360 view, but AI only sees 180 at a time, and so we need to bring the 360 in. So that’s my biggest piece of advice. And also: talk to people. Not just random people, but people that are actually in the space. So there are podcasts, there are newsletters of people in this space, there are people on YouTube who are talking directly about new technologies with that critical lens, looking at the positive and the negative, and that’s who you want to follow. It’s not just whoever happens to be popular on TikTok or IG; trusted sources are where you need to be looking.

0:41:45.7 MH: Awesome. Alright, well, this has flown by so fast.

0:41:49.9 JH: Yeah, it has.

0:41:52.0 MH: This is so awesome; we get to start to wrap up. Dr. Marshall, thank you so much. A very sobering conversation around just sort of how to look at this from multiple perspectives, and I think that’s given us a lot of thoughtful things to take away. Anyway, one thing we like to do is go around the horn and share something that we think might be of interest to our audience. We call it last call. You’re our guest, Dr. Marshall; do you have a last call you’d like to share?

0:42:17.9 DM: Oh my God, I have two. Can I do two?

0:42:20.3 MH: That’s okay. Yeah, totally. Tim Wilson, who’s the quintessential analyst, regularly has two, so that is a…

0:42:28.2 JH: Tim approved.

0:42:29.6 DM: Tim approved.

0:42:30.8 MH: That is a fine layer to exist in, for sure.

0:42:33.8 DM: Alright. So for the first one, I have to plug my Black Women in Data Summit. It is happening September 23rd and 24th, in Atlanta as well as virtually; it’s blackwomenindata.com. Subscribe, and you’ll get some great stuff as far as commentary and perspective, especially for Black women in this space. That’s the first one, so I gotta do the shameless plug. And then the second one happens to be something that is dear to my heart right now, which is the Montgomery Riverfront brawl that happened. And for those who don’t know, you can just type it in, August 5th, 2023, and you will see what happened. But you can go to my website, DataedX.com/RTN; that’s my Rebel Tech Newsletter. And I put a post out, I think it’s the August 15th post, that just has an emoji version of what happened, and I find… It just gave me joy, when I found it as a data person, to see a scenario play out and then be converted into emojis. So those are my two things.

0:44:06.6 MH: There’s been… There’s so much great content that’s come from that. Like there’s a song I heard the other day, it was so awesome, so, yeah.

0:44:13.0 DM: Lift every chair and swing. [laughter]

0:44:19.5 MH: Yeah. I’ll never look at a white folding chair the same way ever again.

0:44:25.3 DM: It’s beautiful. And there’s a lot of historical context to that as well. That particular plot of land was an auction block during the enslavement days, and so that land has now been almost reclaimed by the Black people in the area. So just know that there’s historical context to the joy of all the memes that have been going around. So those are my two.

0:44:55.0 MH: Yeah. Awesome, thank you. Alright, Julie, what about you? What’s your last call?

0:45:01.1 JH: My last call is actually tied to everything we were talking about today, because in a newsletter that I get, there was an article recently by the Wall Street Journal titled “The $900,000 AI Job Is Here,” and it’s pretty much about salaries for jobs at Netflix and Walmart around AI and managing AI, and that number just kind of blew me away. And the newsletter highlights from that article some of the statistics on these different salaries, for engineers working with AI, product managers of AI, and it was wild. So not only do we think AI is going to take a bunch of jobs; it sounds like, though, if you can be the human aspect working with AI, it’s pretty lucrative. So that’s my last call.

0:45:54.5 MH: Nice. Alright, Moe, what about you?

0:45:58.0 MK: I feel like we need a drum roll for mine; I’m a little bit hyped. Next month, on Saturday, the 28th of October, we have MeasureCamp Sydney happening at Google. I am so… Yeah, it’s my Christmas; it’s the best day of the year. If you’re based in Australia or New Zealand, or I mean, if you wanna come from even further away, you are welcome to, because tickets are free. But just a reminder, it’s an unconference. We normally get about 200 people, and the Sydney MeasureCamp, like, I’m just gonna brag a little bit, is freaking phenomenal. I think the caliber of speakers, and yeah, just the whole day ends up being super, super fun. And we have an awesome after party as well, where we do database trivia. So if you’re a data nerd and you wanna geek out at an unconference on a Saturday, head to sydney.measurecamp.org and get yourself a ticket. And Helbs, over to you.

0:46:58.9 MH: All right. Well, I have a really simple one, and it’s fairly tactical, but it might just help people out there. A lot of people in the analytics community have been switching out their Google Analytics for the latest version in the last few months, Google Analytics 4. And I’ve consistently found a couple of people who just always have really great information to share. I’m constantly looking at their posts about it and learning from them. So I figured I’d share those two people so that you can too, if you’d like. They’re both on Twitter. One of them is a guy named David Vallejo, and he’s a super underrated, like, technical-genius-type person around this stuff. He’s always posting awesome information about how the system works and what’s behind it. There are a lot of questions that people have about how Google Analytics 4 works, so it’s just useful. And the other one is Charles Farina, who I also get a lot of value from. And there are also many other people, of course, in our community who are providing great information, but those two really stand out to me and are sort of my go-tos for information in that space. So a little bit tactical, but hopefully helpful.

0:48:07.5 MH: All right. Well, as you’ve been listening, I know you’re probably thinking to yourself, well, I’d like to learn more about that, or I’d like to comment. Well, we’d love to hear from you. And the best way to do that is through the Measure Slack community, or our LinkedIn page, or on Twitter, or X, or whatever it’s called now, I don’t know. But yeah, we would love to hear from you. I don’t know, Dr. Marshall, are you active on social media? I think you might be active on Twitter, maybe some other places.

0:48:35.7 DM: Yeah, so I’m more active on LinkedIn now…

0:48:37.1 MH: LinkedIn.

0:48:37.5 DM: Because I don’t know what’s happening on whatever it’s called.

0:48:42.0 MH: None of us do. [laughter]

0:48:47.9 DM: And because I’m not paying on whatever that platform is, people don’t see my stuff anymore. So I have a wasteland on whatever that platform is called. So I’m actually really on LinkedIn. Just my name, Brandeis Marshall, and then you’ll find me, because there are, like, only two of me in the world.

0:49:09.8 MH: Perfect. Awesome. And we’ll put a link to that in the show notes on the website, so you can find her easily there. So, awesome, great conversation. Dr. Marshall, thank you again so much for coming on the show. Really great.

0:49:22.7 DM: Thank you for having me. This was fun. Hopefully I didn’t scare people.

0:49:24.8 MH: I mean, it’s eye opening, but I think it’s better to just sort of be aware than to not be aware.

0:49:31.7 DM: Absolutely.

0:49:33.5 MH: And as we wrap up, no show would be complete without a huge shout out to our producer, Josh Crowhurst, who works behind the scenes to make all this possible. And Tim Wilson, who’s also helping out a little bit on the show production side this week, helping us make sure everything goes smoothly. So thanks to both of you. And I know I speak for both of my co-hosts, Julie and Moe, when I say, no matter what the AI is telling you, keep analyzing.

0:50:01.1 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter, at @analyticshour, on the web at analyticshour.io, our LinkedIn group, and the Measure chat Slack group. Music for the podcast by Josh Crowhurst.

[background conversation]
