#250: Real World Data (RWD) Lessons from Healthcare-land with Dr. Lewis Carpenter

A claim: in the world of business analytics, the default/primary source of data is real world data collected through some form of observation or tracking. Occasionally, when the stakes are sufficiently high and we need stronger evidence, we’ll run some form of controlled experiment, like an A/B test. Contrast that with the world of healthcare, where the default source of data for determining a treatment’s safety and efficacy is a randomized controlled trial (RCT), and it’s only relatively recently that real world data (RWD) — data available outside of a rigorously controlled experiment — has begun to be seen as a useful complement. On this episode, medical statistician Lewis Carpenter, Director of Real World Evidence (there’s an acronym for that, too: RWE!) at Arcturis, joined Tim, Julie, and Val for a fascinating compare-and-contrast (with plenty of caveats) of RWD vs. RCTs in a medical setting and, consequently, a look at what horizons that contrast could broaden for the analyst working in more of a business analytics role.

Podcasts and Articles Mentioned in the Show

- “When Things Fell Apart” (podcast)
- “Complicated Sticks: The Rise of Tools for Everything and Nothing in Particular” (UX Collective, Medium)
- “More or Less: Behind the Statistics” (BBC)

Photo by Nicolas Solerieu on Unsplash

Episode Transcript

0:02:25.9 Announcer: Welcome to the Analytics Power Hour. Analytics topics covered conversationally and sometimes with explicit language.

0:02:35.2 Tim Wilson: Hi everyone. Welcome to the Analytics Power Hour. I’m Tim Wilson and this is episode number 250. So bear with me for a minute while I get a little philosophical on a path to getting real, because that’s what we’re gonna be talking about today. Real Data. Actually, Real-World Data. Which, I think we know when we actually stop and think about it, but do we remember to stop and think about the real world as often as we should? I mean, 10,000 visitors to a website, as reported in, say, Piwik PRO, represents 10,000 people. I mean, give or take, I know a hit is not a person, a visitor is not a person, but stick with me.

0:03:15.7 TW: It’s roughly 10,000 human beings in the real world who were navigating our website on their phones or their tablets or their laptops. And they were doing real things, taking real actions. Human beings can make for pretty unpredictable and messy data generators though. So we thought it’d be useful and interesting to step out of the Real-World Data that I and my co-hosts for this episode come from, which is primarily digital and marketing and product, and poke around in healthcare data and the Real-World Data therein. Julie Hoyer, you’re an analytics lead at Further. Did you actually know before we started prepping for this show that RWD, Real-World Data, and RWE, Real-World Evidence, and RWI, Real-World Insights, were actually formalized concepts in healthcare analytics?

0:04:05.8 Julie Hoyer: No, to be honest, I had no idea, and I can’t wait to learn more about them.

0:04:09.6 TW: Me either. And Val Kroll, Head of Delivery at Facts and Feelings, are you excited to add RWD and RWE and RWI as complements to RCTs in your personal lexicon?

0:04:23.1 Val Kroll: Sure I am. Lewis, you have your work cut out for you.

0:04:27.8 TW: Well, as I said, we’re gonna be talking about healthcare analytics on this episode. And while all three of us as co-hosts have been involved with the marketing side of healthcare, we’re actually talking about things like drug discovery and treatment efficacy and patient outcomes when we’re talking about Real-World Data in this context, and we needed to bring in someone who is deeply knowledgeable to wade into that territory. Lewis Carpenter is the Director of Real-World Evidence at Arcturis Data, where he is responsible for insight generation from unstructured data that comes in from healthcare data providers, with the end goal of generating Real-World Evidence insights across a range of disease areas that inform the drug discovery process. I know, it’s kind of wild, right? Lewis has a PhD in medical statistics, did postdoc work at King’s College London, has held a variety of medical statistician roles, and those all led to him leading the Real-World Evidence team at Arcturis Data. And today he is our guest. So welcome to the show, Lewis.

0:05:26.2 Dr. Lewis Carpenter: Hello, Tim, and hello, Julie and Val. Lovely to be here.

0:05:29.7 TW: Awesome. So I think maybe the best place to start may be with some definitions. I referenced RWD and RWE and RWI and RCTs as different acronyms already. So, Lewis, can you maybe start by giving us kind of an explain it to my mother level of overview of those different terms?

0:05:52.6 DC: Absolutely. I think if there’s one sector that I’ve ever worked in that loves an acronym, it’s healthcare and pharmacy. So I appreciate it can be quite challenging to get through all the different acronyms. So Real-World Data, as I think you put really well in the intro, is essentially any data that arises through real-world, day-to-day routine care within the healthcare provider. Obviously, I’m based in the UK, so this is largely through records that are collected as you interact with the National Health Service that we have in the UK. These can be things like records that are collected by the doctor, but they can also be quite a bit broader. We’ve got the advent of wearable technologies, for example, and so Real-World Data can also encompass the data that’s collected as you’re wearing those wearables.

0:06:37.8 DC: And it might be sleep-based data or heart rhythm-based data. And essentially, it’s data that’s not influenced by any experimental intervention. Real-World Evidence is thankfully a little bit simpler. It’s just the evidence that you generate from that Real-World Data. Now, where the other acronyms, like randomized controlled trials, come in is in how we differentiate study types: you have interventional studies and you have non-interventional studies.

0:07:03.2 DC: So Real-World Data studies are very much non-interventional studies. No one’s making any change to routine care or deviating from established protocols. Whereas in randomized controlled trials, or RCTs, you are experimentally manipulating an aspect of that patient’s care. And therefore, you’re moving into the interventional space. So in most cases, you’re randomizing patients to receive a treatment or not to receive a treatment, and therefore you are having an influence on the care they’re getting. And that’s what differentiates it from Real-World Evidence and Real-World Data in that context. So hopefully that gives you a good understanding of those acronyms.

0:07:39.4 TW: Yeah, I mean, as I was digging into this, it was just the contrast to the business or marketing analytics side, where in the last 20 years, as everything’s been digitized, marketing analytics has mostly just been using Real-World Data. That’s the default: we have all this behavioral data that we’re able to observe and collect. And then RCTs or A/B testing or controlled experimentation, whatever — and maybe A/B testing is the more common vernacular in marketing — that’s kind of the exception. It seems like it’s the opposite in medical: the default was the gold standard, because you’re dealing with people’s health and lives, and the Real-World Data, and how analytical techniques can be used to get reliable evidence out of it, is a little bit more of an upstart. Is that a fair framing, that it’s a contrast to RCTs? Like, you can get stuff out of this. Everything doesn’t have to be an RCT.

0:08:50.1 DC: I think, yeah, that’s fair and certainly a good representation of where the evidence is at the moment. I’ve been working in this field for a good 10, 15 years now. And to be honest, the term Real-World Evidence wasn’t always that ubiquitous across the field. It was largely just called epidemiological studies. And so the use of Real-World Evidence and how it’s being implemented has changed over the last five-odd years. But what I would say to your point is that yes, whilst there’s this differentiation, and certainly Real-World Evidence is being used to generate evidence that you would previously have wanted from a randomized controlled trial or an interventional trial, the use of the different types of data is really driven by the research question that you’re trying to answer. And that’s really the key aspect.

0:09:38.6 DC: So a lot of the Real-World Data, the population-level data, is answering one question, and the randomized controlled trial is trying to answer quite a different question. And I’m very happy to get into exactly what those questions are, why they’re different, and what the pros and cons of the different data are for answering them later. But that’s usually the driver. And actually, listening to your analytics podcast, you talk about how the analytical approach you would take is driven by the research question. The sort of data that you want is driven by the research question as well.

0:10:09.7 S?: It’s time to step away from the show for a quick word about Piwik PRO. Tim, tell us about it.

0:10:14.8 TW: Well, Piwik PRO has really exploded in popularity and keeps adding new functionality.

0:10:20.7 S?: They sure have. They’ve got an easy-to-use interface, a full set of features with capabilities like custom reports, enhanced e-commerce tracking, and a customer data platform.

0:10:30.6 TW: We love running Piwik PRO’s free plan on the podcast website, but they also have a paid plan that adds scale and some additional features.

0:10:37.6 S?: Yeah, head over to Piwik.pro and check them out for yourself. You can get started with their free plan. That’s Piwik.pro. And now let’s get back to the show.

0:10:52.5 JH: That was gonna be my follow-up question, actually, was I would love to hear examples of the difference between the questions that you would use each one for.

0:11:00.0 DC: Perfect, yeah. So a very typical question you’ll get on the Real-World Evidence side would be: in the UK, or we could say the US or any geographic region, there is a new drug being used; how often is that drug being used, and in what patient groups? That would be a very typical example of what we call a pharmacoepidemiological study, which is basically an understanding of how drugs are being used within a given population. And for that, a clinical trial is an incredibly expensive and time-consuming way of doing it. To go through and try to enroll people and follow them up over many years to understand the different drugs they’re using is just not practical.

0:11:36.7 DC: And of course, there’s no actual need to have any intervention or change to treatment. You want to understand, naturally, how these drugs are being used within the healthcare authority that you’re interested in. And so that will be the main driver for a lot of Real-World Data and Real-World Evidence types of questions: just understanding how things are being done within the healthcare setting. Randomized controlled trials are very much specific to one type of question, and that is: how effective is a drug relative to another drug, or relative to a placebo?

0:12:06.6 DC: And that is really what it’s set up to do. Now, as I mentioned, Real-World Evidence is kind of moving into that territory, and we’re starting to explore the ways in which Real-World Data can start answering questions around drug efficacy. But really, in order to do that well, it all comes from study design and data collection, and that really underpins the ability to answer that question in an unbiased way. And that’s where randomized controlled trials will always continue to be the gold standard. So there’s a really nice diagram that’s quite ubiquitous across the literature around the hierarchy of evidence within healthcare. It usually starts with case studies and anecdotal evidence from clinicians as kind of the lowest level, then moves up to observational data, then randomized controlled trial data, and then at the top we have something called a systematic literature review, which is essentially a collection of all trials that are run in that space.

0:13:00.2 DC: So rather than leveraging just one trial, you’re leveraging, you know, six or seven trials that have done it, and saying, on average, what do they say. Now that one is always a little bit contentious because it’s seen as the hierarchy of evidence across all research questions. The reality is it’s the hierarchy of evidence for treatment effect, but it’s not necessarily the hierarchy for observational research, where we’re interested in, as I said, how medications are being used within a healthcare setting.

0:13:26.8 DC: So that’s why you tend to get this sort of bias where the randomized controlled trial is very good for one particular thing, whereas you might want to use a different design and a different dataset for a different research question.

0:13:37.7 VK: So interesting. And I was actually reading on your site one of the case studies that was published about the external control arm. And fully admitting I was hanging on to like 85% of what it was talking through. Could you talk a little bit about how that plays into the way that you’re dealing with like trials that are happening and creating that… The way you’re able to help like estimate the actual efficacy and kind of creating that third kind of trend line? And hopefully this isn’t too specific of an example, but I would love to hear the interplay of that with the RCTs.

0:14:12.8 DC: Definitely. So maybe as a little bit of pretext to that. One of the challenges with Real-World Data (and I always draw comparisons with some of the data that you guys work with in marketing and finance and elsewhere) is that, exactly as you said, Tim, the observational, real-world view of how users are interacting with a website is kind of the be-all and end-all of what you’re trying to ascertain. In order for you to understand how people are interacting with it, that is the obvious way in which you would collect data. So it’s almost a single kind of data collection problem. And as you said, you would only move to A/B testing if you wanted to ask: what’s the effect of tweaking one parameter, or one aspect of the way in which the users are interacting?

0:15:05.7 DC: Healthcare data, I think, sits a little bit differently. And its biggest challenge is that it’s not collected for research purposes. It can be, but in the context of things like electronic health records, we’re essentially just collecting data that’s generated as patients go through the healthcare setting. It’s not collected for research purposes. And behind every decision for a treatment or a laboratory test or a diagnostic test or a scan is a clinician making that decision.

0:15:32.5 DC: And so inherently that data is biased, because every interaction a patient had is led by a decision that that clinician has made on the balance of that particular patient presentation. And that’s the real fundamental bias that comes through with Real-World Data: you can’t just compare patients that get a treatment and patients that don’t get a treatment, because there’s a fundamental reason why those patients did or didn’t get that treatment. So the differences between those two groups go well beyond just the fact that they received or didn’t receive a particular treatment.

0:16:05.0 DC: With that little bit of pretext, we come to how Real-World Data can ever be used for treatment efficacy. So one of the biases that I usually teach and talk to, to that point, is something called confounding by indication. There’s a bias that happens because the indication for which that patient received the treatment is confounded by the fact that the clinician has made a choice about giving that treatment.

0:16:35.8 DC: So it might be the case, for example, that if you give one type of treatment, that’s for a particularly sick patient group. So you would only ever reserve this treatment when a patient gets really, really severe. So if you compared that treatment with patients who didn’t get it, you might think, and you might wrongly assume, ’cause if you look at the data for, say, mortality, and compare that between the patients that get it and don’t get it, you go, well, hang on, people on that drug are dying far more regularly than people not on it. And you would wrongly assume that the drug is causing increased mortality. The reality is that no, it’s a confounding by indication. They’ve been indicated that drug because they’ve got more severe disease, and they’re obviously dying more because they have more severe disease.
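
A minimal simulation of the confounding-by-indication effect Lewis describes, with made-up numbers: disease severity drives both who gets the drug and who dies, the drug itself does nothing, and yet a naive comparison makes the drug look harmful.

```python
# Sketch of confounding by indication (illustrative numbers, not real data):
# sicker patients are more likely to receive the drug AND more likely to die,
# so a naive treated-vs-untreated comparison makes a useless drug look deadly.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

severity = rng.uniform(0, 1, n)              # disease severity per patient
treated = rng.random(n) < severity           # sicker -> more likely treated
mortality_prob = 0.05 + 0.30 * severity      # sicker -> more likely to die
died = rng.random(n) < mortality_prob        # the drug has ZERO causal effect

print(f"treated mortality:   {died[treated].mean():.3f}")   # ~0.25
print(f"untreated mortality: {died[~treated].mean():.3f}")  # ~0.15
# The treated group dies far more often even though the drug does nothing:
# severity confounds the treatment-mortality association.
```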

0:17:13.0 DC: So these are the real challenges that happen with Real-World Data and why you need to really consider the confounding and the biases that occur. So how do we do them then, particularly within the context of external control arms, and why do we do them? One of the challenges with randomized controlled trials, on the other end of the spectrum, as I mentioned earlier, is that they are time-consuming and they’re very costly. It takes years and years to plan these trials, then roll them out, recruit and identify the patients, and then collect the data over, say, a three-year program; it can take upwards of seven to ten years typically.

0:17:50.8 DC: And you’re looking at quite niche patient groups as well. There are very strict criteria that determine whether a patient can go into that trial or not. Now, there are instances, particularly within rare diseases, for example, where it’s logistically really challenging to find enough patients that you can actually randomize them to either be on the treatment or not on the treatment. And that might be a placebo or the current best standard of treatment. And there’s also an ethical consideration. Most patients will enroll into a clinical trial because they’re hopeful that they’re gonna be randomized to receive the treatment.

0:18:24.0 DC: And so, particularly with rare diseases where there are not many treatment options, is it ethical to randomize a patient to not receive a potentially life-saving treatment? Obviously, we don’t know its effect, but there’s a potential that there’s an improved effect; hence, we’re testing it. Is it ethically right to deny them access to that? And so people are thinking, well, it would be great if we could just do what we call single-arm trials, where essentially we give all patients the intervention. We don’t randomize them. Everybody gets the intervention. But then the problem is, well, I have nothing to compare it to. I can see how patients do on the drug, but how do I know how much better it is, relatively, compared to, say, standard treatment or no treatment?

0:19:04.0 DC: And so this is where people are looking at different sources of data to say, can I create a control arm that is external to the trial, that I can then compare the single-arm intervention data with, and then get an idea of how comparatively better this treatment is? And that’s where, particularly with the advent of electronic health records, where we’re getting a huge amount of data being collected electronically and cloud storage allowing us to access that data, we’re starting to see that we can identify patients who look very, very similar to the trial patients who are getting the intervention, but the only difference is that they don’t have the intervention, whereas obviously the patients in the intervention trial do.

0:19:42.0 DC: We’ve then also got a whole repertoire of methodologies that we’ve developed over the last 10, 15 years in medical statistics that says we can do more. We can do some really cool propensity-matching-based methods that allow us to try and adjust for those confounding effects. So we can make sure that age is not a confounder here; we can make sure that gender and ethnicity and other clinical factors are balanced between the two groups statistically, as well as making sure that they look very similar. And therefore, we’re getting closer to being able to derive an effect, which is basically telling us what’s the difference between these two groups, and we can be more confident that’s purely attributable to the drug and nothing else. So that’s what we’re trying to achieve with external control arms and with Real-World Data in that context.
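
A minimal sketch of the propensity-score matching idea Lewis mentions, used to build an external control arm from real-world patients. This is an illustration under simplified assumptions, not Arcturis’s actual pipeline: the column names are hypothetical, and a real analysis would add overlap diagnostics, balance checks, and proper variance estimation.

```python
# Sketch: 1:1 nearest-neighbour propensity-score matching to build an
# external control arm (hypothetical column names; simplified on purpose).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_external_controls(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """df has one row per patient; 'in_trial' is 1 for single-arm trial
    patients and 0 for real-world candidates for the external control arm."""
    # 1. Model the probability of being a trial patient given covariates.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["in_trial"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    trial = df[df["in_trial"] == 1]
    rwd = df[df["in_trial"] == 0]

    # 2. For each trial patient, find the real-world patient with the
    #    closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(rwd[["pscore"]])
    _, idx = nn.kneighbors(trial[["pscore"]])

    # 3. The matched real-world patients form the external control arm;
    #    outcomes can then be compared between 'trial' and these controls.
    return rwd.iloc[idx.ravel()]

# Usage with hypothetical data and covariate names:
# controls = match_external_controls(patients, ["age", "sex", "stage", "ecog"])
```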

0:20:29.2 VK: Is there ever a concern about the biases in how patients figure out or decide to go into the trial? Is that something you would be trying to control for — their doctor, the level of care they’re getting? Because I understand creating that external arm, which sounds amazing, but I do wonder if the mechanism for them getting into the trial is something you guys are concerned about. And do you ever have to control for aspects around the HCP, the healthcare provider or doctor, and do you have that information? My brain’s going down this rabbit hole.

0:20:58.9 DC: That’s a great one. So I can probably tease apart two different aspects of that question. The first one is: with the clinical trial setup (and I realize I haven’t given a proper definition of what a randomized controlled trial is, but I’m happy to do that if we think that’ll be useful for the listeners), how you determine whether someone is eligible or ineligible to be in a trial is often based on a number of eligibility criteria. These are typically inclusion and exclusion criteria that try to make the patient group as homogeneous as possible, so that the treatment effect you derive is really, really pinpointed, ’cause in a sense you’re trying to get rid of noise. I think that’s probably the best way of explaining it: get rid of as much noise as possible so that I know the exact effect of that drug when compared to not having it. Now, those eligibility criteria are great at making a really homogeneous group, but what they suffer from is that the results then become quite difficult to generalize beyond that really, really homogeneous and specific group.

0:22:04.3 DC: I use an analogy of, actually with cars in this space, so I’m gonna go down this rabbit hole and hope that it makes sense. When you’re trying to buy a car, you might be interested in the miles per gallon that it does. So the way in which we can do a standardized comparison between two different cars is that we take them on a really rigorous test that’s really, really specific. So they only drive at 50 miles an hour, the amount they accelerate, the amount they brake, the amount they turn, it’s all standardized, so that when I run one car around it, I can directly compare it to another car, and I know that it’s not because they drove faster or that there was different conditions and things like that. So that test is really good at allowing you to go, compared to that car, this one is five miles per gallon better than that one, right?

0:22:45.7 DC: So that’s almost what the randomized controlled trial is: by doing this very, very standardized and controlled experiment, it can tell you what the difference is. But of course, we know the claimed miles per gallon on any car advert is wildly different to what we actually achieve when we go and do it in the real world. And that’s exactly the same problem randomized controlled trials have. They tell you what the effect of the drug is relative to not getting it, but when we actually give it to patients outside, in the real-world setting, we sometimes notice we’re not seeing that same effect. We’re not getting that same amount of efficacy within the real world.

0:23:21.0 DC: And it’s because the randomized controlled trial is just not very representative. And so, to your point, Julie, I think one of the really cool things is asking: can we tweak the inclusion and exclusion criteria of the trial to make it more inclusive and more generalizable, so that essentially it’s more representative of what’s actually gonna happen when the treatment is authorized and used in the healthcare setting? So that’s one really strong aspect: there is a lot about the design of a trial that we can use Real-World Data to help inform. We can understand, if you do and don’t have a given criterion, how does your population change, and do you exclude a lot of people, particularly ethnic minority groups that are typically underrepresented? We know that there is a huge bias toward white male populations within clinical trials. And so understanding, well, what’s the effect in ethnic minority groups? What’s the effect in women? How does dosage affect those sorts of things?

0:24:10.8 DC: So trying to go, well, maybe there are certain criteria that are weighted against those types of underrepresented groups that if actually we expanded, would give us better representation within the trial settings. And then I had a second point which I have completely forgotten. So I will try and remember that in a bit.

0:24:28.2 VK: Well, I have a plethora of follow-up questions that I’m thinking about, so you can think about that while I ask this one, which will probably be simple. I’m just curious, ’cause obviously you’ve alluded to the fact that the process for these trials today is really long, really arduous, but that’s in place so that we can get that really solid approval from the government, the FDA or NHS, to say this drug is indicated for this type of disease state, or whatever symptoms the patient is experiencing. Are some of the things that you’re talking about here accepted for approval? Even those single-arm trials: could you get a drug approved with those types of testing or with that type of rigor, or does it have to run alongside an RCT? Or are there any types of methodologies that you’re still hoping become the green light for certain drugs getting final approval?

0:25:18.2 DC: Yeah, great question, and I think you’ve really hit on an already-raging debate between the clinical trial and Real-World Evidence camps here. So there are two schools of thinking. The first: clinical trials are heavily bureaucratic and heavily red-taped, and a lot of the reason we don’t run them in a lot of these different contexts is that it’s costly and time-consuming to do so. I think particularly from COVID, we’ve seen that when there’s a clear impetus and a clear need to move quickly, we can actually accelerate the approval process and still maintain the same level of rigor and experimentation that’s needed to get a fully safe, authorized, and effective new treatment, such as the COVID vaccines that were released.

0:26:05.6 DC: So that school of thinking is: rather than trying to overcome the problem by replacing it with something else, maybe we should just streamline the process of the randomized controlled trial itself, because then that will increase the uptake of them. So that’s one school of thinking. And the other school of thinking is: that’s great, and I think we should do that, but there are always going to be those use cases where doing a fully randomized trial is going to be too difficult, or where we need the evidence much quicker than it’s available through a randomized controlled trial. So yes, do that as well; reduce the bureaucracy and reduce the cost and time associated with conducting randomized controlled trials. But we might as well also leverage the data that we have access to. And as I said, particularly within rare diseases, it doesn’t matter how much red tape and bureaucracy you take away; if they’re incredibly rare diseases, it’s always going to be really challenging to find and enroll enough patients.

0:26:58.9 DC: And as I said, the ethical challenges that come with it. The sort of thinking there is, well, why wouldn’t you try to leverage this analysis and this data, Real-World Data to supplement that clinical trial data to give you a better understanding? And I think we’re certainly seeing that, particularly with the rise of more and more precision medicine. So particularly within oncology and other areas, there’s a drill down to very, very specific genetic subtypes and particular types of tumors and things like that where medicines are targeting that on a very precision-based level and therefore your population sizes are getting smaller and smaller. And so the need to sort of introduce these medicines much quicker is becoming quite clear and therefore it’s not always possible to wait for a fully randomized trial.

0:27:42.4 DC: So I think what we’re seeing at the moment, and this is this landscape as I see it, is that regulators such as FDA, EMA and NICE in the UK are using the sort of single-arm trial data supplemented with Real-World Data to give them that comparator data to look at accelerating the approval of things like cancer medications and other medications in early access schemes.

0:28:06.8 DC: So these schemes typically allow access whilst regulators are still monitoring the drug’s use, because there’s still some ambiguity about whether it’s fully effective and a good cost-benefit for that health system. And obviously this is much more poignant in the NHS, where we have quite strict regulations on approval for medications and the cost-effectiveness of those medications; less so in the US, where you’ve obviously got much more of an insurance-based system. But the idea is essentially you allow it to be used over a year-long period while you’re still collecting data, while you’re still finalizing clinical trial programs. And then after a year you reassess and you go, okay, we’ve had this for a year now, do we think we should move this to routine provision?

0:28:47.8 DC: And certainly, in the work that I’ve done, some of the work that the team has produced here at Arcturis, we are actually leveraging that real-world external control arm data to help make regulatory decisions, which is fantastic to see. Translatable, evidence-based medicine is what I’ve loved and dreamed of doing for so many years, and it’s great to see the work that we’re doing now translating so tangibly into the uptake of new drugs that are really effective. So I certainly think there’s appetite, and those different bodies themselves are actually releasing guidance to help us understand how we can use this data and what best practice looks like.

0:29:22.3 DC: And there are still a lot of methodological and analytical questions as well. I could talk for days about how we’re handling missing data, how we’re dealing with different bootstrap estimators and variance estimation problems and propensity scores and all of that kind of stuff. So it’s quite a fun space to be in as a medical statistician, to see that and pioneer those methods, and also just to see them being taken up. There’s definitely appetite for them.

0:29:44.8 TW: I’m struck, and I don’t know if this is the language that gets used, but as you’re talking, it sounds like it really is kind of an uncertainty reduction play: the Real-World Data can reduce it to some extent, sure; an RCT would reduce it further, but for practical or cost or time reasons, that’s not necessarily available. You said a while back that in the marketing or business context we’re coming at it from just the observational data, and I feel like the challenge that we have on the business side is that we don’t have that gold standard orientation for how we’re using little-r, little-w, little-d real-world data, and we’re treating it like it’s giving us a higher level of uncertainty reduction than it is, whereas it seems like in healthcare that’s kind of the standard.

0:30:47.2 TW: So now you’re having to look at it being more critical of how the data is being used and what techniques can be applied. And it feels like, I mean, I’m curious when you have all these different techniques, is it saying what limitation of the Real-World Data is this addressing and to what extent is it able to address it? Is that the framing? How much closer is this getting, how much is this moving us up the hierarchy of evidence with each added complexity that’s being introduced in analytical technique?

0:31:25.5 DC: Yeah, I think that’s exactly right. I think you put it perfectly: it’s almost a trade-off, and you’re constantly working out, how do I trade off the increased potential burden of not having the medication being used, of waiting longer, of not being able to have a fully randomized controlled trial, versus how certain am I that there is a cost-benefit to introducing this new drug? And exactly as you say, all of these techniques and methods are trying to slowly chip away at that uncertainty so that regulators can make a more informed decision about whether they should approve it. A lot of the time they will land on the fact that they just don’t think there is sufficient evidence to approve it. And particularly given some of the really strict thresholds that we have here in the UK, a lot of the time you’ll hear of potentially very effective medications not being approved because they just don’t meet the cost thresholds that are required for uptake in the NHS.

0:32:19.9 DC: And that’s just a result of having the finite health resource that we do and the setup that we have here in the NHS. And obviously I appreciate that that’s also true across different healthcare nations, although of course the way you pay for it and reimburse it is different. I think the methods will only get you so far. I’m reminded of a really good paper by Hernán and his colleagues, which really hits on the fact that statistical methods will be able to do a lot in terms of helping you understand the relative differences and helping you try to reduce the uncertainties around those biases that exist. But really it comes down to fundamental research methods and study planning. It’s all about planning your study well, and the randomized controlled trial epitomizes what a perfect experimental design looks like.

0:33:10.3 DC: If you want to know how much better this drug is if I took it versus if I didn’t take it — because essentially that’s the question we’re answering — we’re trying to deal with a hypothetical situation which we can never recreate. If we take something like an aspirin, for example, what you wanna do is go: I have a headache, I’m gonna take an aspirin to see whether it gets rid of the headache. In order to know exactly what the effect of that aspirin was, I need to be in another parallel universe where there’s a counterfactual version of me where everything else is exactly the same, except I don’t take the aspirin, and then after, say, an hour or two hours, I can determine: did my headache go away or didn’t it? If I could compare those two parallel counterfactuals, I’d know exactly what the effect of the aspirin was.
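
Lewis’s parallel-universe aspirin is the standard potential-outcomes (counterfactual) framing from causal inference; as a sketch of the notation:

```latex
% Potential outcomes: Y_i(1) = patient i's outcome with treatment,
% Y_i(0) = the same patient's outcome without it.
\tau_i = Y_i(1) - Y_i(0)
% Only one of the two is ever observed for any patient, so \tau_i itself is
% unrecoverable. Randomization makes treatment T independent of the potential
% outcomes, which identifies the average effect from observable group means:
\mathrm{ATE} = \mathbb{E}[Y(1) - Y(0)] = \mathbb{E}[Y \mid T = 1] - \mathbb{E}[Y \mid T = 0]
```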

0:33:51.5 DC: And the randomized controlled trial basically is a way of trying to design that type of experiment. So we can certainly do a lot with statistical methods and being a medical statistician, I do have probably a bias towards the impact that those can have, but I think what’s really key is actually understanding, just because Real-World Data’s biased and you maybe found a biased result, it might be because you just poorly designed that experiment to begin with. In the same way that you can have a poorly designed randomized trial that doesn’t quite get to the heart of what you wanted to do, you can have poorly designed Real-World Data studies. And I think that’s a real key thing for me, which is you can’t just sort of go, well it was either the stats methods weren’t good enough or there was a bias in the data, it was actually maybe I didn’t make the right decisions.

0:34:36.3 DC: And so there’s a big framework at the moment in our Real-World Data studies called target trial emulation, where you go: I’m gonna design a randomized controlled trial in a setting where I don’t think I can actually run one. How can I replicate that exact design in my Real-World Data? And you follow exactly the same principles as when you design a trial. Who’s gonna be in it? What are my endpoints? How long am I observing those patients for? All to make sure you minimize the potential biases that can happen. And it’s only that way that you can be sure whether the effect is there or not there, or whether there may be other biases that you can’t account for.
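
A minimal sketch of writing the target-trial protocol down explicitly before touching the data, per the emulation idea Lewis describes. The field names are illustrative, not a standard API; the point is that every design choice an RCT would fix up front gets fixed up front here too.

```python
# Sketch: pin down the emulated trial's design choices in advance
# (illustrative fields and values, not a real protocol).
from dataclasses import dataclass

@dataclass
class TargetTrialProtocol:
    eligibility: list[str]           # who is in it (inclusion/exclusion rules)
    treatment_strategies: list[str]  # the arms being compared
    assignment: str                  # how "arms" are formed without randomization
    time_zero: str                   # when follow-up starts (guards against
                                     # immortal-time bias)
    outcomes: list[str]              # endpoints, defined up front
    follow_up: str                   # how long patients are observed
    analysis: str                    # the pre-specified comparison

protocol = TargetTrialProtocol(
    eligibility=["newly diagnosed", "no prior treatment", "age >= 18"],
    treatment_strategies=["initiate drug A within 30 days", "do not initiate"],
    assignment="emulated via propensity-score adjustment",
    time_zero="date eligibility criteria are first met",
    outcomes=["overall survival"],
    follow_up="24 months or last contact",
    analysis="adjusted hazard ratio, intention-to-treat analogue",
)
```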

0:35:13.6 JH: Can I pose a marketing scenario and get your reaction? Even if you don’t have an answer, I would just love your reaction. After everything we’ve been talking about, I just keep going back to one of the most frustrating things I’ve run into for a long time now. Let’s say we’re marketing to different segments of healthcare providers, and usually we are not part of deciding these segments. They have been decided. And as soon as you start asking questions about the segments, you find out that they’ve said, for example, we have split these healthcare providers into five groups, and it’s all dependent on how much they prescribe the product we’re interested in. So we take these groups, from the loyalists, who’ve prescribed this product a lot, down to the dabblers or non-prescribers, who barely prescribe it, and then we’re going to market to them differently because they’ve shown different levels of interest in our product.

0:36:16.3 JH: And then we’re gonna come to you analyst, me, that’s me on this end getting this question, how effective was our marketing on making people more loyal? And I just sit there and think you have a self-fulfilling segment. There are people with a lot of different attributes and I have never won this argument. I mean, a lot of it is because they’re given these segments from different teams and things, but with that scenario, how would you, one, react to it, and two, would you actually use best practices of RWE designs to tackle that?

0:36:55.6 DC: Right. Yeah, very good question. So, if I’ve understood the premise right (and correct me if I’m going through this and you’re going, hang on, you’ve really got the wrong end of the stick here), my reaction initially is exactly as you said: the way you’re already set up to determine the effect of the marketing on those different groups is already biased by the fact that they’re ranked by how much they’re already prescribing that medication. I can see why intuitively they would wanna do that, because they would basically go, well, these are my low performers; I wanna see the bigger effect on those versus the ones that I already know are there. So really, what I think you could probably do is take the same kind of approach that we do, as I was mentioning earlier, with target trial emulation. You want to work out how you would design a randomized controlled trial in this case: essentially, you get a whole bunch of healthcare providers who are not ranked, where you don’t know what their prescription levels are.

0:37:55.0 DC: You would randomly allocate them to either receive the marketing or not receive the marketing, and then afterwards, over a set period of time, you would compare them and say, “Well, the people that were marketed to, on average, had a higher increase in prescriptions versus those that didn’t have the marketing.” So you can take that premise and go, “Well, how do I apply that, given that I’m not going to randomise these healthcare providers; I’m just gonna monitor them over time?” And really, that’s why my instinct here was that it’s biased, because you wouldn’t start from the premise of going, “Well, I’m going to randomise them and stratify them based on how much they’re already prescribing,” because you would go, “Well, hang on. That doesn’t quite make sense.”

0:38:32.4 DC: So, that’s probably the approach I would take. And I think you could probably look at the potential biases that can come from the approach you’re already doing. And there are ways in which you can quantify it. So that’s quite cool. ‘Cause I think one of the traps you can get into with Real-World Data is people go, “Well, the data is biased, so we can’t do it. And therefore, there’s no point in looking at it.” But probably the more sensible and more productive thing is going, “Well, we know there are biases. I agree with you. There definitely are biases in this data, ’cause it’s not experimentally controlled. But that doesn’t mean that the finding is therefore completely null. It might still be a valid finding, even though there are some biases at play.” So actually, the more sensible thing is to go, “Well, let me quantify that bias, and let me determine how much of an effect that bias is actually having on the outcome or the result that I want to prove.”

0:39:19.7 DC: And there’s a really great case study where this was used with smoking, actually, because it’s not feasible to randomise people to smoke cigarettes or not smoke cigarettes to work out the impact on lung cancer. But we’re pretty sure that there is a very, very strong association between cigarette smoking and lung cancer. And this challenge actually came from a lot of the tobacco industry. They were saying, “Well, you’re only using Real-World Data. You’re only using observational data, non-interventional data. So we can’t be certain; that data could have biases at play.” And the researchers have gone, “But we can compare the people who don’t smoke versus those that do smoke, and we try to control for things like age and other medical conditions to make sure the groups are comparable. And we can see an effect. We can derive an output,” in this case, a hazard ratio: how much more likely, on average, are smokers to develop lung cancer than non-smokers over a period of time?

0:40:12.9 DC: So they said, “Okay. Well, yes, we agree. There could be a degree of bias, but that doesn’t mean that it nullifies the effect of tobacco on lung cancer. So what we’ll do is we’ll say: let’s assume there’s some other unmeasured confounder that is having an effect on the association between cigarette smoking and lung cancer. How big does that effect need to be before it nullifies the association between cigarettes and lung cancer?” And what they found was that this effect had to be absolutely huge. There had to be a monumental, unknown confounder that nobody had thought of, with a huge effect on both the exposure and the outcome, before it actually got rid of the statistical effect of that association itself.

0:41:01.4 DC: So they’re going, “A, this is incredibly unlikely, and B, it has to be so large that actually we’re pretty confident that the association is true.” So there’s loads of different ways in which you can try to overcome the potential biases that happen. But I think, as I mentioned earlier, it all comes to the study design. So I think in your case, just going, “Well, how would I operationalise, how would I experimentalise that?” If that’s a word. I don’t think it is. “How would I make that into an experiment?”

0:41:26.9 JH: It is now.

0:41:27.5 DC: It is now.

0:41:27.9 JH: Let’s just say, you coined it. You heard it here first.

0:41:30.9 TW: It’s funny, Julie, ’cause your question is about working with one side of a pharma company when the other side of the same pharma company is absolutely immersed in RCTs and RWE. You’re not even trying to make the case to a widget manufacturer who has no exposure to this at all. Your colleagues in the R&D world would absolutely mock you; they’d be mortified if they heard what you were doing. But that’s the tragedy of it: in those organisations, those groups don’t really intersect or overlap. They just don’t interact.

0:42:27.7 TW: Can I ask you: you brought up the regulators and their acceptance, or not, of RWE. I can put myself in the shoes of a regulator who is used to having RCT data, high up on the evidence hierarchy. And there’s the added risk aversion of, “If I approve something that is solely RWE-based, and it’s rolled out and then a later trial goes badly or people die, that’s gonna come back on me.” Is there a bias for the regulators to say, “I don’t want this to roll out”? The downside of not approving, that it never hits the market, is a counterfactual that’s unobserved. If I do roll it out, there’s the upside that it might have a positive effect on health outcomes, but there’s the downside that I’ve got more uncertainty than if I did an RCT, and it could be bad; it could have adverse effects. And therefore, as a human being, as a regulator, I’m gonna keep that bar really high. Do you run into that, that you have to clear that bar because they’re like, “It’s risky”?

0:43:51.7 DC: Yeah, I think you’re probably right. And again, just a reminder that we’re very much focused on the treatment efficacy side here; obviously there are other research questions which are important as well. But I think you’re absolutely right. The concept of randomised controlled trials as the gold standard, for very, very correct and obvious reasons, is almost embedded in that. There’s a responsibility, from their point of view, to only authorise medications where they have certainty over their effectiveness. And like you said, every medication will have side effects, so there will always be a potential harm that could occur. Even if it’s simply not as effective as the current medication, there’s a cost implication, and obviously in national health services, that’s a big factor; and for the US, in terms of insurance premiums and so forth.

0:44:47.3 DC: And then there’s also exactly that trade-off of going, “Well, if this creates more harm…” Because a lot of these medications can; they can have quite bad safety profiles in terms of creating adverse events. So I completely agree. And I think it will take time before people are more amenable to different types of evidence. But the one thing that regulators are certainly very tight on is that there has to be incredibly good justification for why you’re not doing a randomised controlled trial. So I think that will always be the first question: is there any justifiable reason why a randomised controlled trial cannot be conducted, other than that it will take a longer time or be costly? ‘Cause those are not gonna be factors that the regulator is gonna be concerned about — other than, of course, if it is a potentially effective medication where that’s delaying uptake, which is why we’ve got something called the Cancer Drugs Fund, or CDF, here in the UK, where we can give managed access to a drug earlier while it’s still being assessed.

0:45:49.6 DC: But I think it will take time. And as we integrate Real-World Data more and more into clinical trials and start using and developing those methods, we’ll understand where it works and where it doesn’t work, ’cause I think that’s the key part. There was a fantastic initiative (I think it was called RCT Emulate) which essentially took a large number of randomised controlled trials, emulated them with Real-World Data, and saw how similar or different the results were to the results of the randomised controlled trials, trying to understand: okay, there are a lot of cases where I could emulate it perfectly, but there are a lot of cases where I got very different results. And of course, the ground truth of the randomised controlled trial is the truth. So why is the Real-World Data not able to emulate it correctly? As we increase our understanding of where it does and doesn’t work, that will also increase the acceptance of it, because then we’ll know the limitations; we’ll know that there are gonna be potential use cases where it’s not ever feasible to use Real-World Data. But yeah, we’re quite early on, is essentially my point.

0:46:57.2 VK: That’s interesting. All right, so I have a left turn question ’cause I’m worried that Tim’s gonna cut me off soon.

0:47:01.8 TW: I was about to.

0:47:03.0 VK: I knew.

0:47:03.1 TW: I was like, “Do I announce that we’re gonna try to wrap it up, or do we say somebody’s gonna get one more?”

0:47:09.4 VK: Okay, so this would be… I’m going for it. I’ve always wondered this, and I’m really hoping, ’cause I’ve asked multiple doctors, that Lewis, you’ve got it for me. Off-label script writing. So when a doctor writes a prescription for something that it’s not necessarily indicated for, because they have a reason to believe that this will help you (and I’ve seen the positive effects of someone writing off-label for me in the past), does that data bubble up in any way toward that drug getting approved for those other indications? I’m sure there are so many things at play; whether it’s profitable for a pharma company to invest in that, I’m sure, is one of the factors. But is anyone looking at how those drugs are being written for things they aren’t indicated for, to see if there’s a business case for figuring out whether that is something that should be explored and approved? Or is that data going to any good use…

0:48:09.3 DC: That’s a fascinating one. I completely agree. And you’ve really hit on where Real-World Data will always have the advantage over randomised controlled trials. Not only, as I hit on earlier, can you understand how the treatment effect might vary in particular ethnic minority groups or underrepresented groups more broadly; you can then look at things like, exactly as you say, where the drug is being used in a slightly different indication that will never be reflected in clinical trials. And that’s where Real-World Data is always going to be the only way that you’re gonna collect that data. I don’t know if there are initiatives, though you’ve certainly given me a hell of an idea going forward. That data exists. I mean, the granularity of the data that we work with here is incredibly deep. So we would know, “Hang on. This drug is being used, but I know it shouldn’t be used within this type of patient, because they haven’t gone through the right treatment pathways or they have a slightly different treatment profile.” So it certainly is possible. I know that.

0:49:07.6 DC: Maybe… I don’t know whether this is related, but it’s a potentially useful side point anyway. Real-World Data has also been really useful for label expansion. Probably one of the poster children for this is a breast cancer treatment that was expanded to male patients with breast cancer, ’cause as you can imagine, male breast cancer is really quite rare. And so it’s really quite difficult to run a randomised controlled trial and understand the effect in men for the same drugs that women receive. Real-World Data has the advantage that you can look at the population level over quite a long period; typically, some of our data goes back to the year 2000, so over a 20-25 year period. We can look at all of the male breast cancer patients.

0:49:51.9 DC: We can then use Real-World Data to go, “Well, let’s have a look for patients who may have got that treatment or not got that treatment and do a comparison.” And then in conjunction with the randomised controlled trial that’s been done on female patients, we can create a pretty convincing use case that actually this drug should be expanded to male patients as well as female patients. So I think in that case, that potentially hits on where you can go with the off-label uses, which is around label expansion. So yeah, I think that’s the closest I can get to try and help you with that one.

0:50:22.6 JH: Oh, it’s interesting.

0:50:24.6 TW: Wow. Oh, this is definitely one we could go for hours on. This is almost making me eager for the day that Facts and Feelings signs a pharma client so I can get back into the world of all the intricacies. Julie’s like, “Yeah, don’t… Be careful what you wish for.”

0:50:42.6 JH: I was like, “Tim, I’m gonna bring this recording back up when it happens. I’m gonna remind you you said this.”

0:50:48.4 DC: And you know what? As a medical statistician on this program as well, I didn’t get into one part of the medical stats that we do. So that was quite impressive. But I feel like I provided a wealth of other information.

0:51:00.5 TW: You did. This was great. I will say, back when I was in business school, I was part of a little startup through a Moot Corp competition; the technology was a self-administered Pap smear. We were in business school, so we’d had a stats class, but we got funding and we went to a medical statistician, and he was trying to explain to us that 10 patients, all treated with no control group, was not really gonna give us the sort of statistical oomph that we were hoping for with that product. And we’re like, “But why? We definitely observed… ” And…

0:51:39.7 JH: I don’t think you need Lewis to tell you.

0:51:43.1 TW: Yeah. No, I’ve…

0:51:44.2 JH: No offense, Tim.

0:51:46.4 TW: I’ve reflected on that many times over the years, that… Yeah, that was…

0:51:51.7 DC: So I have a question for you guys on the marketing side. Are p-values something that comes up for you guys as well? I presume they are. And obviously, in medicine, much to my disdain, the p-value is sort of the herald of whether a treatment should move forward or not. Is that similar in your fields?

0:52:10.4 TW: I feel like it’s coming up more and more. From an experimental design, A/B testing design standpoint, we wind up having the Bayesian versus frequentist debate, which means the Bayesians come and bludgeon the frequentists. And I see a lot more of the, like, “Don’t just treat the p-value as the thing; it’s an arbitrary threshold.” But even trying to get marketers to understand what it means or doesn’t is hard. So I feel like it bubbles up a lot. And then a lot of people who probably should be thinking about it, like, run away. I don’t know.

0:52:45.2 JH: Well, even… Tim, you did a keynote a couple years ago about, “Did this metric move enough for me to care?” That was kind of the underpinning of it, because you’re comparing, like, “Oh, 15% from the source to 17%? Oh, 17… That’s so much more.” And it’s like, “Well, let’s be a little bit more critical about that.” So I think that marketers are coming around. I definitely think that’s a really common conversation with the people you’ll find on this podcast and a lot of our listeners. But yeah, convincing those marketers to be on the same page is sometimes a challenge.
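
Julie’s “did this metric move enough for me to care?” question is answerable with the same machinery: a quick two-proportion test shows the identical 15%-to-17% lift can be noise or signal depending entirely on sample size. A minimal sketch with illustrative numbers (assuming statsmodels is available):

```python
# Sketch: the same 15% -> 17% lift, tested at two sample sizes
# (illustrative numbers, not from the episode).
from statsmodels.stats.proportion import proportions_ztest

for n in (500, 50_000):
    successes = [int(0.15 * n), int(0.17 * n)]  # conversions in each group
    stat, p = proportions_ztest(successes, [n, n])
    print(f"n per group = {n:>6}: p = {p:.2g}")
# n per group =    500: p ~ 0.39  -> could easily be noise
# n per group =  50000: p ~ 6e-18 -> almost certainly a real difference
```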

0:53:20.4 DC: Good. I’m glad that we’re all fighting the same fight.

0:53:25.3 TW: Well, this was… Yeah, this was everything I was hoping the discussion would be and more.

0:53:30.4 DC: Good.

0:53:30.6 TW: And it’s… Oh, it’s just… I love having these sorts of discussions with an expert coming from somewhere else. I’m like, “Oh, so much is the same. And so much that’s… You’re in better shape, but you’re also in worse shape.” So, I love it. I wanna follow Julie around for the rest of the day and just watch every conversation she has for the next 48 hours.

0:53:49.7 VK: Yeah. I’m going to be the broken record. I’m thinking like, “Julie keeps going on and on about this stuff.”

0:53:57.3 TW: All right. So before we wrap up, we like to finish every episode by going around and having everybody share a last call, just something that’s interesting that they’ve found related or not to this specific topic. And Lewis, you’re our guest, so do you have a last call to share?

0:54:15.8 DC: Yeah, I do, actually. I would highly recommend a podcast called “Things Fell Apart,” I think it’s called. It’s a fantastic podcast; series two in particular looks at the COVID era and how a number of conspiracy theories and events across various different cultural aspects manifested. It’s an incredible insight into how these conspiracy theories start, how almost benign they are at the beginning, and then how they develop over time. So, yeah, certainly a really, really interesting listen and a fantastic insight into human behaviour. And, as I say, it moves into COVID vaccine conspiracy, so it’s still linked to healthcare in some respect.

0:55:00.6 VK: I love that. That’s actually… Julie, the idea for the upcoming episode that you’ve been chatting with us about, I feel like that’s going to be a wealth of information, with some really interesting threads to follow there.

0:55:13.3 TW: Look at Val dropping the teaser for an as yet unplanned episode that could occur in the next 12 to 18 months. Your punishment for that, Val, is what is your last call?

0:55:26.1 VK: Not a punishment. I’ve actually been holding on to this one. I feel like every third last call I give is something that was published on the UX Collective Medium, but this one was really good, really interesting. The piece is called “Complicated Sticks: The Rise of Tools for Everything and Nothing in Particular.” I was hooked by the title. It starts by talking about that Apple iPad ad that got a lot of negative publicity, where all the different instruments and calculators get smashed into an iPad Pro. But the whole piece is talking about how there are all these tech products out there that are like, “We can do everything. So, what do you wanna do with it?” And how the pendulum has swung from being really tightly tied to the use cases of the user for one specific thing to now being everything to everyone, and sometimes nothing to a lot of people who have really deep skills. There are a lot of really interesting examples and advertisements in there, and it’s from a product perspective. So, it’s definitely an interesting read. “Complicated Sticks.”

0:56:32.0 TW: I’m not at all shocked that the one specific example you pulled from it was an Apple fail. So, you’ve stayed very, very on brand.

0:56:41.6 VK: Android PC user for life.

0:56:44.3 TW: Yeah. All right. Julie, what’s your last call?

0:56:48.7 JH: So, mine… Actually, Lewis, I was so afraid when you said you had a podcast that you were gonna say this podcast, and I would have panicked.

0:56:55.1 TW: There are like five podcasts out now, I hear, so there’s like…

0:57:00.2 JH: Okay, I’ll explain why. Because of the way you were leading into it, I was like, “Oh my gosh, what are the chances he says the same one?” Because it’s a BBC radio podcast, and it’s about statistics. It’s called “More or Less: Behind the Stats.” I actually got this off a list of podcasts that Analytics Power Hour was part of, and I was excited to see that on LinkedIn. So I was looking at all the other podcasts, always searching for new stuff to listen to, and I happened to listen to this one. It was very interesting. It’s really short bites of, like, 9 to 30 minutes. It takes statistics that are out there in different headlines, or floating around, or that people are using in arguments and discussions, and it looks into the studies behind them and says whether they’re really valid or not. And they have some interviews and things in them. Just as a teaser, three of the titles that caught my eye (I’ve only gotten to listen to a couple so far): “Is intermittent fasting going to kill you?” “Are falling marriage rates causing happiness to fall in the US?” And “Is reading for pleasure the single biggest factor in how well a child does in life?” I was hooked. I’ve just been scrolling through like, “Oh my gosh, which one am I gonna listen to next?” And it’s fun ’cause it ties into the teaser Val was giving anyway.

0:58:06.3 TW: Was Tim Harford the presenter on the ones that you’ve listened to?

0:58:10.1 JH: Yes.

0:58:10.8 TW: Past Analytics Power Hour guest, Tim Harford? Yeah. So, that’s good. You should give that episode a listen. He’s delightful. So, I will plus one More or Less. They’re short, which is nice.

0:58:24.4 DC: The only thing I would add, though, is, as I say, I’m a bit mortified I didn’t use More or Less myself, Julie. So I’m glad for you, but not glad…

0:58:31.8 JH: See? There was a chance it was stolen. I was nervous.

0:58:35.6 DC: So I listen to it. For the UK listeners, it’s on at 9 o’clock on BBC Radio 4, so it’s my commute in to work every Wednesday. But just a warning that for the last two or three weeks, it’s all been very UK election-focused. So, a little more on point.

0:58:55.3 TW: And we’re recording before… I think we know how that one’s gonna turn out. We’re recording before the 4th of July when that actually happens. So, we’ll tease that out, but… All right.

0:59:06.4 VK: All right, Tim, how about you?

0:59:07.8 TW: So my last call… I will have also gone to this well before, but Annie Duke did an interview with the co-authors of a book called “Is Your Work Worth It?”: Christopher Wong Michaelson and Jennifer Tosti-Kharas. I’ve not read the book itself, but it’s a pretty lengthy interview. The premise is that you’ve got the old Mark Twain quote that “if you love what you do, you’ll never work a day in your life,” and their premise is, let’s not paint things quite that way; it’s not just on you to find your passion, after which all happiness will come. So it’s a much, much more nuanced view: it might be okay if your job is just your job and you use it to give you funding to pursue what you’re really interested in, or you have some mix between the two. And they’ve done a lot of research into different groups, like the people who always wanted to do X but were just gonna work, work, work, work until they could do X, and then they die suddenly and tragically. And it’s kind of like, “Yeah.” And then the family is saying, “Oh, they never got to go back to be a middle school teacher.”

1:00:24.0 TW: So it’s kind of a thoughtful piece around where work fits into our lives and the idea that there’s not one single right way to think about work. Also, and this wasn’t super tied to the specific post, Christopher, the one co-author, deep in the interview, said something about AI that I just loved: “I would like to believe that intelligence that’s artificial will never be able to replicate some of the human creativity that we possess. I would find that human life had a lot less meaning if we were able to do that.” It put into words what I’ve been struggling with a lot with AI: the question of which pieces of what we do can fall under it. So I really, really liked that quote.

1:01:13.7 TW: Good one.

1:01:15.2 TW: Ah, all right. Well, Lewis, thank you again for coming on. This was really fun, and I’m gonna be thinking about this one for a while. No show would be complete if we did not thank our producer, Josh Crowhurst, who will take our verbal flubs throughout this discussion and make them go away, which means you did not hear any of them. Although he did the best he could with me. We’d also love to hear from you. If you have any thoughts or resources or reflections from this episode or other episodes, reach out to us on the Measure Slack or on our LinkedIn page. We’re easy to find. Since I’m sitting in the chair best held down by Michael Helbling, I will, as I always do, use the opportunity to note that you can get your podcast stickers for free by going to bit.ly/aph-stickers.

1:02:15.7 TW: For those of you who actually filled out the survey we were doing a couple of months ago, thank you for doing that. You all are already getting stickers, and some of you actually got hoodies. We got some great feedback from that, so thank you to those of you who filled it out. But with that, regardless of whether you are using Real-World Data, whether you’re generating valid Real-World Evidence, whether you’re running RCTs, or whether you’re just trying to, once again, explain what the hell a p-value is: for myself, for Val, for Julie, keep analysing.

1:02:52.6 Announcer: Thanks for listening. Let’s keep the conversation going with your comments, suggestions, and questions on Twitter at @AnalyticsHour, on the web at analyticshour.io, our LinkedIn group, and the Measure Chat Slack group. Music for the podcast by Josh Crowhurst.

1:03:10.7 TW: So smart guys wanted to fit in, so they made up a term called analytics. Analytics don’t work.

1:03:17.5 Kamala Harris: I love Venn diagrams. It’s just something about those three circles and the analysis about where there is the intersection, right?

1:03:27.4 VK: Okay. How’s now? This is my natural volume.

1:03:29.6 TW: Does that seem good?

1:03:30.0 VK: If I get excited, maybe a little louder.

1:03:32.6 TW: Okay, that’s good.

1:03:32.8 VK: It’s okay though. Not like making anyone deaf. My poor husband recently went to a concert and then got an ear infection from it, so…

1:03:40.2 TW: Oh, God.

1:03:40.7 VK: Don’t wanna give any of you an ear infection from volume.

1:03:42.7 TW: Wait. He got an ear infection from the concert? What was he doing at the concert?

1:03:47.3 VK: I don’t know. It was loud. He came home after a concert. He’s like, “Oh, my ear’s kind of bothering me,” and goes to bed, wakes up.

1:03:52.2 JH: I’ve never heard that.

1:03:52.7 VK: Two days later, he had an ear infection.

1:03:55.3 TW: Was he sticking his fingers in his ear?

1:03:57.6 VK: I don’t know. They were like, “Your eardrum looks inflamed.”

1:04:02.4 DC: Wait. Who did he see? You know, that’s too much.

1:04:04.5 VK: The Stones. It was the Stones.

1:04:06.0 DC: Ah, I like that.

1:04:06.1 TW: Oh, he went to the same one we went to? Wow. Our trainer went to that.

1:04:07.8 JH: Went out with his dad and some family.

1:04:09.9 TW: My wife’s cousin went to that. Okay. Yeah. It seems like…

1:04:14.4 VK: I’m sure they didn’t get ear infections from that. I don’t know. You just got lucky.

1:04:16.8 JH: Lewis, do you have any research on that?

1:04:23.2 DC: I’ll study it now. I wish I did.

1:04:23.6 JH: Yeah. Any ear infection data you could share?

1:04:25.4 DC: Yeah. Got Real-World Data, so. Okay.

1:04:31.2 JH: I was… I didn’t wanna jinx us, but near the end I was like, “I don’t think there’s going to be much editing other than taking the outtakes out. It was so smooth.”

1:04:40.4 DC: I’m a little bit worried about that.

1:04:40.5 TW: Congratulations, Julie. You just generated an outtake. I knew that if I waited long enough…

1:04:45.4 JH: I’m glad. I’m glad.

1:04:51.1 TW: Rock flag and so many acronyms.
