#148: Forecasting (of the Political Variety) with G. Elliott Morris

Once every four years in the United States, there is this thing called a “presidential election.” It’s a pretty boring affair, in that there is so much harmony amongst the electorate, and the two main candidates are pretty indistinguishable when it comes to their world views, policy ideas, and temperaments. But, despite the blandness of the contest, digging into how the professionals go about forecasting the outcome is an intriguing topic. It turns out that forecasting, be it of the political or the marketing variety, is chock-full of considerations like data quality, the quantification of uncertainty, and even (<gasp!>) the opportunity to run simulations! On this episode, we sat down with G. Elliott Morris, creator of The Crosstab newsletter and a member of the political forecasting team for The Economist, to chat about the ins and outs of predicting the future with a limited set of historical data and a boatload of uncertainty.

Resources and Ideas Mentioned in the Show

Episode Transcript

[music]

00:04 Announcer: Welcome to the Digital Analytics Power Hour. Michael, Moe, Tim and the occasional guest discussing analytics issues of the day and periodically using explicit language while doing so. Find them on the web at analyticshour.io and on Twitter @AnalyticsHour. And now the Digital Analytics Power Hour.

00:27 Michael Helbling: Hi, everyone. Welcome to The Digital Analytics Power Hour. This is episode 148. You know, spend any time amongst the Americans, and you’ll find that the second favorite sport that we have is figuring out election possibilities. Number one, of course, being attempting to model out pandemics. Well, you may not have heard, but it’s an election year here in the United States, and we are getting ahead of the game here at The Power Hour. And I mean, sure, there are some givens. I mean, very few people have figured out what impact a potential Kanye presidential run will have on the race, but in this time before the conventions, or right during the conventions, I guess, it looks like Biden vs. Trump. Hey, Moe, you take a pretty keen interest in the political realm, do you not?

01:16 Moe Kiss: I do, unfortunately, right now, I guess, but yes.

01:20 MH: Sort of a morbid fascination?

01:22 MK: Yeah, you can’t look away.

01:24 MH: At least you’re not beholden to it; at least, you don’t live in the country. And Tim, you listen to so many podcasts, you can’t help but be well-informed about politics, am I right?

01:35 Tim Wilson: I mainline politics podcasts through both ears.

01:40 MH: That’s awesome. And of course, I’m Michael Helbling and I have won two popular votes in my lifetime. Okay. We needed a guest, someone who could help us figure out who has got the leading edge coming up to November. So Elliott Morris is a data journalist at The Economist; he is also the author of The Crosstab newsletter. He specializes in polling and predictive analytics. He came to notoriety for his excellent forecast of the blue wave in the 2018 midterms here in the United States, and today, he is our guest. Welcome to the show, Elliott.

02:17 G. Elliott Morris: Thanks, y’all. Thanks for having me.

02:19 MH: It’s pretty cool that we’re getting a chance to talk. Now, for the listeners, like you haven’t been in this game a long time, but you seem to be pretty good at it. What’s your secret? [laughter]

02:31 GM: Well, I don’t like to toot my own horn [chuckle], and to be fair, I haven’t been doing anything that long ’cause I’m so young, but I like doing it, it seems to work out okay, I guess, maybe replacement level at least so…

02:44 TW: But how did you stumble into it? I mean, you were at Texas, Hook ’em Horns.

02:49 GM: Hook ’em.

02:50 MH: There you go.

02:50 TW: But you were like an undergraduate and it seems like you kind of drifted into this, you sort of found this kind of forecasting politics, like where… How does that happen? I’m trying to think back to that point in my life and certainly had not found something as kind of niche, and I guess in hindsight, cool. [chuckle]

03:13 GM: I think it being an election year when I took statistics and political science courses probably had a non-zero effect on the trajectory, but I was always interested in policy growing up. I did high school and college debate, and so this probably just came a bit naturally to me, and who knows, it might not be the only career I have, but it seems to be a good one for now.

03:37 MK: It does, but I also feel like there’s so much pressure, because it’s one of those things where the polls and the outcomes of your work are so influential to so many people. Do you feel a sense of responsibility, or… Like, how do you manage that? People are making big decisions.

03:56 GM: Yeah, the way to manage it is just to do the best work possible, I guess. We have a lot of people who work on the election forecast at The Economist, it’s not just me, and so though I am one of the more public figures of it, I’m the one who comes on lots of the podcasts and stuff. It’s a group effort to try to make sure we’re right, and so we don’t impact people’s behavior poorly and to share a bit of the pressure, you’re right, it’s a pretty public business nowadays.

04:22 TW: Well, how do you think about, or how do you approach… I feel like this is one of those things that even on the marketing analytics side, we grapple with the same thing, in that, fundamentally, we’re dealing with a lot of uncertainty and probabilities. It’s like if the weather forecaster says there’s an 80% chance of rain and it doesn’t rain, there’s this tendency to say, “The forecast was wrong.” It’s like, “Well, no, it didn’t say it was gonna rain, it said there’s an 80% chance that it is gonna rain, so it’s okay, 20% of the time it won’t.” And maybe pointing back to 2016, and really the first time I read, even before that, Nate Silver’s The Signal and the Noise, talking about this idea of uncertainty and what probabilities are and what forecasting actually is. How much do you wrestle with that? Yes, we’re visualizing it with intervals and trying to represent things as being probabilities, but is your audience really there? Do they understand that? Are there tricks or thoughts that you have as to how to help educate the masses that this is not math, it’s statistics, it’s probabilities?

05:42 GM: Definitely hand-wringing. Yeah, look, probability is a hard concept. We can only really hope that the more people see probabilities and are exposed to data literacy, though that’s kind of a bad word, I guess, on here, the more they can come to understand it. We also, at The Economist, are pretty empirical thinkers, so our readers are used to thinking about things with data, and so our target audience, we think, is pretty well prepared for thinking about these sorts of things. And frankly, even if everybody can’t understand it, the people who are making decisions based off of these data deserve the best analysis possible, so we’re gonna give it to them. But yeah, it’s not perfect, and there are tons of caveats to forecasting in general, but also to some of the other modeling work that we do at The Economist. We try to explain it the best that we can and we hope that we do the best job, and if we don’t, we hope people hold us accountable so we can be better next time.

06:40 MK: So at the moment, the company where I work, we’re… I guess we’re still on a bit of a journey. I hate that word, and data literacy is, yeah, its own myriad of complexity, but one of the things we’re trying to work on is getting better, when we forecast, at giving ranges. And I have a few juniors in the team who are really, really new, and they’re kind of like, “Well, this is the range, but this is the forecast number,” and I’m like, “Don’t give them the number, just give them the range, because if you give them the number, they’ll take the number.” And trying to explain these concepts to a newbie, but then also… I’m like, “We’ve got a clean slate here, right? We get to pick how we educate our stakeholders, and if they get used to seeing a forecast as a range, that’s probably a good thing.” I’m just curious, do you think that’s the right approach, or… [chuckle] I’m trying to get our stakeholders used to the idea of, yeah, probability and uncertainty and the fact that we make the best estimate possible, but I’m just not always sure if I’m going about it with the right tactics, and I feel like… You know, you’ve probably nailed this.

07:56 GM: I don’t know if we’ve nailed it, but I think the range is more important than the point estimate, so if you’re giving them the full range of the outcome variable, that’s better than just telling them the thing that’s most likely, especially if there’s a ton of uncertainty, like there is in an election. Today, for example, if we were telling people Joe Biden’s gonna win by eight points, they might just think, “I’m not gonna go vote,” right? But if we tell them, “Joe Biden’s gonna win by eight points, but he might only win by one,” and we launch into our electoral college spiel, and we say, because these big western states are more republican, and the United States has this weird system of electing presidents that makes the modeling a lot of fun, but might not make the political system that fun, then he might lose, then they might go out to vote. I think ranges are good.
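
As a minimal sketch of that point about ranges (not from the show; the simulated margins and the choice of an 80% interval are assumptions for illustration), here is how one might summarize a set of simulated vote margins in R as a median plus an interval rather than a single number:

```r
# Hypothetical simulated Biden-minus-Trump popular-vote margins, in points.
# In a real forecast these draws would come from the model's simulations.
set.seed(42)
sim_margin <- rnorm(20000, mean = 8, sd = 4)

point_estimate <- median(sim_margin)
interval_80    <- quantile(sim_margin, probs = c(0.10, 0.90))

cat(sprintf(
  "Median margin: %+.1f points; 80%% interval: %+.1f to %+.1f points\n",
  point_estimate, interval_80[1], interval_80[2]
))
```

The interval, not the median, is the headline: handing a stakeholder only the single number invites exactly the false precision being discussed here.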

08:44 TW: Do you grapple with that? Even going back years, before there was the volume of data and the speed with which the data was coming back, there was always the, “The newscasts aren’t gonna announce the results until the polls close on the West Coast,” ’cause they don’t want to have people think they don’t need to go vote. Now, it’s kinda moving even before the election: the forecasts say the possible outcomes, and the ranges are getting narrower as it gets closer. Is there a grappling with how to communicate that, “Hey, you still need to go vote,” or actually even potentially holding back information because you’re concerned about that?

09:25 GM: I have made it a habit not to say, “This is uncertain, so go vote,” but rather, “Regardless of what the data say, go vote,” because even if it’s a certain outcome from a data perspective, and you know nothing’s really certain, because models and polls are only as good as their inputs, and maybe something happens the day before election day that the model has never seen before, like a nuclear attack or something, the probability is low, but in case that happens… Anyway, we want people to vote no matter what the forecast says, we want them to exercise their civic duty, so we just tell them that up front. I’ve been wondering for a while whether or not we should put this disclaimer on our website. It’s probably too much information for some people, they’re just gonna vote anyway, and it’s not super germane, but on Twitter and stuff, on social media, and in interviews I always say, “No matter what the forecast says, vote.” And on the other hand, if a bunch of people aren’t voting, then the forecast also has a higher chance of being wrong too. So we’re a bit self-interested in telling people to vote.

10:23 TW: That’s right. If everybody will go vote, we’ll be right.

10:26 GM: Yeah. The other thing is that not a whole lot of people actually see these models when you denominate out of the entire US public. Our forecast, I think, has a couple million views, maybe, at most; it’s just been up for a month and a half now. By election day, I think the number of unique people who might see it is like 5 million. That’s enough to change the outcome if they’re all in midwestern states, but they’re not, really; most of them are from population centers that are well-educated, or around the world. So the idea that the forecast has a determinable influence over the outcome is sort of up in the air. We try to be careful. I doubt that the forecasts are changing the results of elections, though.

11:10 TW: It seems like data journalism, or political data journalism forecasting, would be ever increasingly challenging when it comes to what you’re doing now, looking at the factors that seem really complicated in 2020: a hyper-polarized audience, there presumably is, once again, more data than ever available, there are more attempts to kinda manipulate the electorate, be that legally or not, domestically or internationally, through whatever means, and then you throw in the black swans like COVID-19, which presumably kinda throws more wrenches into the ability to forecast. How do you approach all of that? Do you say we kinda have to just assume away some of those, or assume that they have a net-zero effect, which seems weird? How do you even mentally or methodologically approach that many kind of big moving things that presumably would have an influence on the forecast?

12:16 GM: It’s quite a… Quite a lot there.

12:19 MK: [chuckle] Tim’s… Yeah, Tim’s known for doing that.

12:21 MH: Classic Tim question.

12:24 GM: Hey, tell me about everything you’ve ever done. [chuckle] I’ll try to take it in chunks.

12:29 MH: Yeah, yeah. We need to try to clip on here.

12:34 GM: The first thing I’ll say is that the forecasts do think about the future; pollsters tell us what’s happening today; the utility of the forecast is in telling you how much that might change. Our forecast explicitly answers the question, based off of the past elections for which we have data: in elections that have economies like this one (bad), where the president is as popular as he is today (bad), where his poll numbers are where they’re at today (also bad), how likely is it that a similar candidate would have won the election, given all the variance between now and election day?

13:15 GM: We try to take that source of uncertainty seriously. There’s a very public discussion right now about how much variance should be in that model. It seems like in recent elections, because we’re in a hyper-polarized time, like you mentioned, that variance is lower than it has been in the past, and maybe that’s also because polls have gotten better, so it artificially looks like the public moves around less. I think it’s obviously a little bit of both of those, or maybe a lot of one and a little of the other. We don’t really know, because the sample size for elections is like 18, but we try to take all those modelable sources of uncertainty seriously. But then there are also the external sources of uncertainty that you don’t get, because they come from things that technically aren’t in the training data. So our data for our model goes back to 1948. Anything that happened in an election year before 1948 is not captured in the model. So a pandemic, for example, is not in the training data from 1948-2016.

14:16 GM: We don’t know how that affects the outcome of the model. It could be that there’s more uncertainty, because it increases the likelihood of an economic recession, for example. But maybe the public doesn’t react to that economic recession the way they would react to a normal economic recession that’s caused by the government, as indeed seems to be the case today. So maybe what we would infer about the variance in the election is not actually true. Maybe there’s an outbreak of a major war, right? We’re looking at post-World War II elections, not conflicts like Vietnam or Afghanistan, but real outbreaks of war that rattled the public and made people change job industries to build machines of war. Our model wouldn’t know what happens in that case, so there are some edge cases that, strictly speaking, the model doesn’t take into account.

15:04 GM: Now, the question of how likely those events are to happen is a relevant one. Obviously, we’ve had one one-in-a-hundred-year outcome already this year. Maybe we’ll have another. And that could actually change how the model works. We are actively testing the robustness of what we call forecasting at the tail outcomes. Because we’re in an election that’s so lopsided already, it’s hard to know if the forecast is working optimally. It could be thinking, “Oh, we’ve never seen an election like this, so the variance is pretty low, the outcome’s already almost decided.” But we don’t know, because we’ve never seen something like this before. So maybe things are a bit more uncertain than a model can tell us, and that’s something we have to wrangle with basically every single day.

15:46 TW: So one thing I read following the 2016 election when a lot of models failed to predict the outcome, was that there may have been quality issues with the polling data itself in terms of certain Trump supporters maybe not indicating who they were actually gonna vote for. I don’t know if that’s actually true, but do you think that there is something to that, and to what extent do you think that may be true in the underlying polling data that we’re gathering now? Or how do you account for that?

16:17 MK: If it helps, can I just jump in there? Because actually, I was gonna ask something very similar. So I did nerd out on the weekend, and over a long hike told a group of my friends that we were having a political forecaster from The Economist on, and it was the talking point for the next two hours. And one of the guys mentioned, ’cause he listens to The Economist podcast, that one of the methods that’s being used for polling now is asking who your next door neighbor would vote for, to try and account for that. It’s similar to user testing, right? What people say they’re gonna do is often different to what they actually do. So is that the kinda technique that you’re using, well, the pollsters are using? And then I’m guessing the poll data is just one of your inputs?

17:01 GM: That’s right. So since The Economist doesn’t do any polling explicitly, we don’t have an explicit answer to that question. But since we do take in polling data, it’s an important one. There are a few pollsters. Only actually one comes to mind immediately that is using this technique to combat the tendency, the alleged supposed tendency for Trump supporters to not answer the phone. And I will caveat this by saying, it actually doesn’t look like that was very true in 2016. We have some evidence that might suggest it happens, but a lot of other evidence says it doesn’t happen. But if they’re lying to pollsters, you’re not gonna be able to tell by using polling data, probably, so it’s still a bit up in the air. But yeah, so there’s one pollster, the Trafalgar Group which does this. They say, “Who are you voting for and who’s your… Who do you think your neighbor’s gonna vote for?” I think it’s a creative solution.

17:55 GM: It doesn’t actually seem to work. In 2018, the pollsters that did this had the same amount of error as pollsters who didn’t. You might be able to rebut that by saying, “Oh, in 2018 there was no tendency for Trump supporters to lie to pollsters,” but if that’s true, then why are we doing it anyway? Right? So I don’t know. I’m happy that pollsters are thinking creatively about all the different ways to improve their data collection, because as you mentioned, in 2016 there were some problems, some big problems, with polls in some states. Nationally they performed pretty normally, but in the swing states, as we all know now, they underestimated support for Trump. It’s unclear to me if that’s because people are lying to pollsters. That doesn’t seem to be the case. It seems to me that the polling data was processed poorly, right? They didn’t have the right composition of the electorate, not enough Trump supporters, and they got unlucky that there was a huge shift late in the election.

18:51 MK: What about your own personal bias? I’m like one of those weird people that’s really obsessed with bias. Do you think that there was any role previously, in 2016, or even now in your own work, where you expect to see a particular outcome? I know it’s a little bit different when you’re using a particular model, but you expect to see a particular outcome and you kind of lead your analysis in that direction. How do you reflect on that yourself and as a group that’s working on this problem?

19:21 GM: So I’ll start with the group and then go to the personal. It does seem, in 2016, like the media was biased toward Hillary Clinton. Not only institutionally, but also the people that make up the media are Clinton voters: they’re educated, they typically live in cities, many of them are White and educated, they aren’t the type of people who vote for Donald Trump. And so if they see coverage that says Clinton’s gonna win, subconsciously, they’re probably going to take that information as more credible than the information that says the opposite. Speaking personally now, we like to think as good forecasters that we are immune to those biases, but that’s foolish. I certainly have my own biases. I don’t think that that shapes my analysis, but it probably does shape, well, it’s not an input to the analysis, but it does shape the outcomes that I am likely to tolerate based off of all the robust testing, right? You can’t test the model perfectly. Again, we’re seeing this right now; it seems like our model might be overconfident on Joe Biden, and maybe if I had a different set of biases, I would have noticed earlier that it’s overconfident. I would hope that that’s not the case. But it’s impossible for me to say. I can only hope that if that’s the case, our readers call us out on it and we fix the problem.

20:33 MH: So can you talk a little bit about, I assume, part of what your process is, is running simulations, can you talk a little bit about that process and how you do that more technically, just ’cause I think it probably would be interesting to our listeners.

20:51 GM: Yeah. I can tell you how the model works.

20:52 MK: Yes. Yes.

20:54 GM: We can nerd out over the election model. The way you should think about it is that there are some sources of data that tell us, before we even look at polls, how the election is likely to turn out. That, again, is the state of the economy, how popular the president is, and to some degree, the level of political polarization in America. As people make up their minds, the band of outcomes, the possible range of democratic votes, decreases, because you have some amount of republicans who are never gonna change their mind, whereas in 1972, there were more voters who crossed the line. We use typical regressions to figure out the blend of those three factors. We also have to be careful because we’re only talking about 20 data points here, so there’s a real danger of overfitting your model, especially if you’re trying to use two or three variables on an n = 20 data set. The typical rule we think of is that you should only have one variable for every 10 observations, so we’re really pushing the limit here with two or three. We regularize our regression to try and caution against that. Every time we are training it, we are testing it on data it has not seen yet; this is called leave-one-out cross-validation.
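
To make that concrete, here is a minimal sketch in R (not The Economist’s actual code; the data, variable names, and penalty values are all made up) of a ridge-regularized regression scored with leave-one-out cross-validation on an n = 20 data set:

```r
# Made-up "fundamentals" data: 20 past elections, two predictors.
set.seed(1)
n   <- 20
dat <- data.frame(
  econ_growth  = rnorm(n),   # hypothetical economic index
  net_approval = rnorm(n)    # hypothetical presidential net approval
)
dat$incumbent_vote <- 52 + 2 * dat$econ_growth + 3 * dat$net_approval + rnorm(n, sd = 2)

# Closed-form ridge regression: penalize the slopes, not the intercept.
ridge_fit <- function(X, y, lambda) {
  X <- cbind(1, X)
  penalty <- diag(lambda, ncol(X))
  penalty[1, 1] <- 0
  solve(t(X) %*% X + penalty, t(X) %*% y)
}

# Leave-one-out CV: hold out each election, fit on the rest, score the held-out year.
loo_rmse <- function(lambda) {
  errs <- sapply(seq_len(n), function(i) {
    beta <- ridge_fit(as.matrix(dat[-i, c("econ_growth", "net_approval")]),
                      dat$incumbent_vote[-i], lambda)
    pred <- c(1, unlist(dat[i, c("econ_growth", "net_approval")])) %*% beta
    dat$incumbent_vote[i] - pred
  })
  sqrt(mean(errs^2))
}

# More regularization trades a little bias for stability on tiny samples.
lambdas <- c(0, 0.1, 1, 10)
setNames(sapply(lambdas, loo_rmse), paste0("lambda=", lambdas))
```

Scoring on the held-out year, rather than on in-sample fit, is what guards against the overfitting described above; the real model layers much more on top of this.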

22:08 GM: And if changing the value of one parameter decreases the error on unseen data, on election years that the model has not seen yet, we think that that is an improvement over just testing your model on data that it has already seen; obviously, you shouldn’t do that. And so after we have this very, very carefully crafted expectation for how the race is going to unfold in November, and this updates every day, because we have economic data and political data about the president that updates every day, by the way, right now, that says Biden should win by seven or eight percentage points with a roughly equivalent margin of error, so it’s saying it’s pretty unlikely that Trump’s gonna win the popular vote, which of course he didn’t in 2016, and he’s less popular now, so it sort of makes sense. After we have that prior expectation for the race, we add the polling data on top of it, and then that’s where we start simulating outcomes. So we use a statistical procedure called Markov chain Monte Carlo, which I am not going to attempt to explain to you here. [laughter] Not because I don’t think you’ll understand it, but because I am not a good enough teacher for high-level Bayesian statistics of that sort. I can, however, explain to you how the model works.

23:19 GM: So we’re going to tell our model, right? Take March 1st, 2020. We are gonna tell you what our prior expectation of the race was, and we’re going to give you, the model, all of our polling data up to this point for every single state. We’re also gonna tell you a little bit about how those states relate to each other. So we know, for example, that Michigan and Wisconsin and Minnesota all have a lot of White people. They’re not super conservative as they are in the South, they’re not that religious anyway, so states that are similar to one another should vote similarly, and also they should move similarly throughout the election cycle. So take all this information, predict what’s gonna happen on March 1st, and then go to March 2nd. And if you got a new poll on March 2nd in Michigan that’s better for Trump, bump up his percentage in Michigan and also bump up his percentage in the states that look like Michigan. And you continue that throughout the election cycle. What I’m describing is a process called a random walk model, and if you do that all the way until November 3rd, you have one simulated outcome for November 3rd.

24:28 GM: But we also know that polls can be wrong, so let’s start a new trial election. Let’s start again on March 1st, let’s take in that new polling information and then run all the way to November 3rd, but then say on November 3rd that the polls are wrong by five percentage points. Okay, what’s gonna happen if Trump beats all of those polls by five percentage points? Actually, I think in the election right now, he’d probably still lose, but it’d be razor thin, right? Because Biden would win by three points, and that’s about what he needs to win the popular vote, but in this trial election, it’s a close election. Okay? Repeat this process 20,000 times. Every time you’re solving a different equation, you’re adding different pollster error, and in that way, we come up with a full range of what we call the posterior outcome for November that takes into account polling error, that takes into account that states should move similarly, that the predictions should match each other. And we think this is a pretty good approximation, but it’s just one approximation of reality, and I think if we changed some of the parameters, it would give us a different answer. So it’s just a… It’s a rough guide, we think…
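
Here is a rough, stripped-down sketch of that simulation logic in R (not the production model, which is Stan-based and far richer; the states, leads, volatilities, and correlation are all assumed numbers): a correlated random walk across a few similar states, plus a shared polling error drawn fresh in each of many trial elections.

```r
set.seed(7)
states     <- c("MI", "WI", "MN", "FL")
start_lead <- c(MI = 6, WI = 5, MN = 7, FL = 2)  # hypothetical Biden leads, in points
days       <- 150
n_sims     <- 20000

rho      <- 0.75   # assumed share of day-to-day movement that is national (shared)
daily_sd <- 0.15   # assumed day-to-day volatility, in points
poll_sd  <- 2.5    # assumed election-day polling error, in points

sim_one <- function() {
  # Shared national drift plus state-specific drift: similar states move together.
  national <- rnorm(days, 0, daily_sd * sqrt(rho))
  state    <- matrix(rnorm(days * length(states), 0, daily_sd * sqrt(1 - rho)),
                     nrow = days)
  drift <- sum(national) + colSums(state)
  # One shared polling error per trial election, applied on election day.
  start_lead + drift + rnorm(1, 0, poll_sd)
}

sims <- t(replicate(n_sims, sim_one()))
colnames(sims) <- states

colMeans(sims > 0)              # probability Biden carries each state
mean(apply(sims > 0, 1, all))   # probability he sweeps all four
```

Changing rho or the volatility changes how much the states move in lockstep and how wide the final distribution is, which is the kind of parameter sensitivity flagged at the end of the answer above.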

25:33 TW: So how does your, I guess, day-to-day, or the team’s day-to-day… I think about what I assume has to happen: there is new data coming in, there is existing data that’s already there, so presumably some of the new data can be piped in and automated, some of it has to be manually loaded; there’s presumably looking at how the model’s working and maybe adjusting or tuning the model, I guess; and then there’s actually writing up what the model is saying and actually trying to explain the results. How does your time actually break down? Is it that you spend 30 minutes doing this, but six hours doing that? It’s kind of a fascinating mystery to me as to what is actually involved in the day-to-day of that in the actual heart of the election season.

26:30 GM: So we don’t make any adjustments to the live model after we’ve launched it, right? Not without telling people, right? So day-to-day, there’s no tinkering that’s going on to the live production version of the model. We do do work behind the scenes. So, like I’ve been telling you, it seems like the model right now might be a bit too confident about Joe Biden’s chances. We’re not talking about a 10 or 20 percentage point overconfidence, but we’re talking about something like, it doesn’t know about the probability of a major war happening in the next five months, so the probability of that happening might be 1 or 2%, it’s unclear who that would favor, so maybe we need to dock Joe Biden for that scenario by like one or two percentage points. Okay, it also thinks that polarization has changed the electorate. Maybe that’s not true. I think it’s pretty true, but again, that might be my personal bias. I mean, it’s empirical too, but for whatever that’s worth. So we’re making changes along these lines to our dev version of the model; we are actively thinking through ways that it might not be working. Most of these considerations don’t pan out: we think maybe the model is not working right here, we test it, and actually it would work fine in this scenario. Maybe Joe Biden gets a slew of really bad polls in October; we just wanna make sure the model reacts to that scenario, we test it, it works fine, and that never needs to make it out into the world.

27:51 GM: It’s just minutiae, it’s just testing. That type of tinkering doesn’t happen too often, and it doesn’t take up too much time, and usually it’s just on my local computer, so it’s like an hour on a Tuesday. But the model does need new polling data. We have to feed it that. We have hooked up a fancy Google spreadsheet for that type of information, so we just type it in whenever a new poll comes out, constantly checking the various sources of polling information for that. And then the economic data it gets automatically from the St. Louis Fed, it gets Trump’s approval rating polls automatically from fivethirtyeight.com, and then we run our own algorithm on that raw polling data. But yeah, I should also take this opportunity to say, if you’re an open source developer for any of the data our model ingests, we’ve thanked you at the bottom of the page for the model, and also I will thank you now again ’cause it makes my job a whole lot easier if some intern somewhere is collecting the data, not me, so thanks. Yes, some days it’s more work than others; some days there’s no new polls and I don’t even have to think about it, I just look at the model when it refreshes at 8:00 AM, 1:00 PM and 6:00 PM US time. Those days are sadly growing fewer and fewer as more polling data comes out each day.
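
As an illustration of that ingestion step (a minimal sketch under assumptions, not The Economist’s pipeline: the polls.csv file, its column names, and the recency/sample-size weighting are all hypothetical), a simple poll average in R might look like this:

```r
# Hypothetical export of a poll-tracking spreadsheet.
polls <- read.csv("polls.csv", stringsAsFactors = FALSE)
# assumed columns: end_date, state, pollster, sample_size, biden_pct, trump_pct

polls$end_date <- as.Date(polls$end_date)
polls$margin   <- polls$biden_pct - polls$trump_pct
polls$age_days <- as.numeric(Sys.Date() - polls$end_date)

# Weight by sample size and down-weight older polls with a two-week decay.
polls$weight <- sqrt(polls$sample_size) * exp(-polls$age_days / 14)

weighted_avg <- function(df) with(df, sum(margin * weight) / sum(weight))
sort(sapply(split(polls, polls$state), weighted_avg))
```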

29:02 TW: And then what are… What are actually the tools that you’re using? Not looking for anything proprietary, obviously, but is it R, Python, SQL, SAS? What is the actual tool set? What’s the code that you’re working in day-in and day-out?

29:21 GM: So first off, our model is on GitHub. You can download it and play with it yourself, it’s entirely open source. I think we are one of the first major news outlets to do that.

29:29 TW: And there are some, like fivethirtyeight.com, that maybe aren’t quite as open source, and maybe you have some thoughts on that as well.

29:39 GM: Well, we are using their data in the polling model, so they’ve done a pretty… They’re doing a great job open-sourcing, I think, a lot of the data that can be useful for the public and for other outlets. We have personal disagreements, forecasters, and sometimes… You know, sometimes you gotta have receipts, right? So if you have a disagreement over what statistical methods work and don’t work, in our scholarly opinion, “our” being myself personally, you should show the work, I think, as good scientists. But anyway, you got me off on a tangent.

30:14 TW: So, hold on, just to clarify, so we should be casting Nate Silver as sort of the Aaron Burr to your Alexander Hamilton.

30:22 GM: No, no, no, no, no, no. No, absolutely not.

[laughter]

30:23 GM: No, no, No. If anything, he’s both. If anything he’s both Aaron Burr and Alexander Hamilton.

30:30 TW: I’m just kidding, I’m just kidding. Yeah.

30:34 GM: But the model is written in R mostly. The fancy Markov chain Monte Carlo process I’ve described already is in Stan which is very hard to read, and I don’t recommend you open the Stan file unless you know how to read Stan yourself. And there are some spreadsheets admittedly that I have coded with Excel, so don’t shoot the messenger, I had to get the data from somewhere, but…

31:01 MK: Sorry, Elliott, I’m not gonna lie, I’m really loving listening to Tim nerd out a little bit on the model details, but I did wanna go in a slightly different direction, and I’m trying to kinda wonder how much backstory to give you here. So in your presentation from a little while ago, you have a point about not acting certain. Now, I’m really struggling with this one, because I feel like in every other space where we work with data, I wouldn’t say that you act certain, but you need to act confident. In fact, it’s something that I’m actively always trying to work on with our junior team members, because confidence helps people, I guess, take your recommendation seriously, act on your data, and I feel like your field is the one place where you don’t wanna be overly confident, but in the rest of the data world… I don’t know.

32:06 MK: We kinda talk about it as like White-dude energy. You just have to be really confident in your recommendations, and it’s actually something that I had to work on a lot myself because I came from government where they actually want to understand a lot of your uncertainty and your concerns, and then you basically just give them the information and they make a decision, whereas when I moved to the private sector, I found that they kind of wanted you to make the decision for them, they wanted you to be like, “Here is what I would do in your shoes, here is my really strong recommendation,” and I feel like it’s just like this constant balance that I’m trying to walk. I suppose I’m just really curious to hear about your own experiences with this, ’cause it sounds like the work you do is the complete opposite.

32:52 GM: I wouldn’t say it’s the complete opposite. I’d say there’s a difference between acting certain and having false certainty about whatever conclusion the data is leading you to. So in the forecasting space, as I mentioned, there are lots of caveats to what we can actually know about voter behavior or about presidential elections and thus how people are acting. There are all these caveats about whether or not a large unforeseen event could happen over the next six months. And the model really just can’t take that into account if you’re relying on the typical frequentist tools of inference, or if you’re not explicitly telling your model about these things, which it wouldn’t otherwise know exist.

33:35 GM: If you do that, if you’re not cutting corners, if you’ve coded your model correctly, if you’ve rigorously tested it over your train-test validation sets, then I think you can act confident. I still don’t think in the world of political inference that you can ever really be that certain about something. You can probably be certain that in America right now there are more democrats than there are republicans, but that’s only because we have hundreds of surveys about this, and there are other tangential data sources, like the president’s approval rating, that suggest that’s true. But you probably can’t be sure about what’s gonna happen five months from now. So anyway, there are multiple sources of uncertainty in our work that are probably different than typical data work, right? That’s forecasting into the future; that’s not having a whole lot of data. If you have a source of big data that tells you what webpage works better than another in A/B testing, I think you can be certain about that.

34:32 MK: Do you think maybe it boils down to being confident with the level of uncertainty that you’re facing?

34:41 GM: So I think, yes. I think you can be confident about your model if it is built correctly, and in the case of presidential elections, you just have to be really careful in how you construct it. Even the best models have blind spots, or seem to have some blind spots that we’re trying to address, and it’s a question of how big those blind spots are. So if you don’t know how the electorate is gonna respond to a nuclear attack in San Francisco, and you think that that’s an event that has a non-zero probability, you have to guess about it. And if you’re guessing about it, then you can’t be certain. But you can be confident in all the work you put into figuring out what happens in that scenario, or you can be up front about what sort of uncertainty that introduces in your model. And so long as you’ve done that, I think you’re not suffering from false certainty in your outcome. What I’m really worried about in presidential election forecasting is that there are a bunch of people who are falsely certain, who haven’t done the work properly. They’ve run like one or two regressions. They think they know a lot about the world, and that leads into their forecasting work. And we end up in more of a pundit space than a prognosticator or statistician space, and I think that that’s bad and that we have to be careful to avoid it. But that’s different than a lot of work with data and inference.

36:04 TW: So when it comes back to representing the uncertainty, just logically, the further out from the election, the more uncertainty there is; the closer, the less. So those error bands, do those start to get narrower and narrower as we get closer to the election? I think that’s presumably kind of a visual way to represent that the uncertainty is getting lower when there’s less time for things to change?

36:33 GM: Yeah. So because your forecasting model needs to know that there’s some chance of the data leading you astray, you probably also wanna take into account that the chance of your model changing is larger if you’re further out from election day. Like, the president can’t start a war, or a new Coronavirus can’t plague the United States, or on the other end, the president can’t preside over an administration that invents a cure for Coronavirus, if you’re on election day or the day before election day, but there’s a pretty significant chance of that happening if it’s March. Similarly, debates and conventions can’t change the outlook of the race if they’ve already happened, but if it’s July, the model should know that, “Hey, in 1988, 1976, there were huge effects of debates and conventions, so there could be this year too.” That’s the thing: your model needs to know, before that happens, that it could happen. And after, that chance disappears, so the model should reflect that. So if you do go on our website, you’ll see that there’s a larger confidence interval in March. Again, it might not be large enough, but it’s probably not horribly wrong, we don’t think. It might just be two percentage points off or something, and it’s…
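
To put rough numbers on that shrinking band (a minimal sketch; the volatility and polling-error values are assumptions, not the site’s actual parameters), under a random-walk view the remaining drift scales with the square root of the days left, while a fixed election-day polling error keeps the band from collapsing to zero:

```r
daily_sd <- 0.15   # assumed day-to-day volatility of the national margin, in points
poll_sd  <- 2.5    # assumed election-day polling error, in points

days_left <- c(250, 150, 60, 14, 1)
total_sd  <- sqrt(daily_sd^2 * days_left + poll_sd^2)

data.frame(
  days_left     = days_left,
  half_width_95 = round(1.96 * total_sd, 1)  # approximate 95% band half-width, in points
)
```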

37:56 TW: Sorry, I’m not on your site. I’m over on GitHub downloading that model right now.

38:00 GM: You’re downloading it.

38:02 TW: Alright.

38:02 MK: Big nerd.

38:03 TW: Well, we do have… Moe…

38:06 MK: This is a quick one, I promise!

38:10 TW: Oh my goodness.

38:11 GM: I’ve got time.

38:11 MK: Everyone’s wondering about this. Okay, Elliott, you’ve gotta talk to me about the data viz. Is there like a special person on the team who does that, or… I’m not gonna lie, like…

38:20 GM: Oh, multiple people.

38:21 MK: Yeah. So how do you guys do data viz at The Economist? What are they called? Are they called data visualization gurus, or what are their…

38:29 GM: They are certainly data… They are data visualization gurus. They’re officially called visual journalists. We have two of them that work on the website. One of them does more of the backend stuff than the charting. That’s all done in D3, so they’re also… They’re also not just like designers, but they’re also like coders.

38:52 MK: Is the D3 on Git as well?

38:54 GM: The D3 is not.

38:56 TW: D3 is just a framework, right?

38:58 MH: That’s a good JavaScript platform.

38:58 MK: Yeah. Right. Yeah.

39:00 GM: Right, so the code for… The JavaScript code for the charts that follows the D3 framework is… I don’t think it’s public.

39:10 MK: Okay.

39:11 GM: Probably it doesn’t have any reason to be. If people wanna know how we make our charts, just email us, we might tell you, I don’t know.

39:17 MK: Okay, that sounds great ’cause we’re using a tool now that uses D3, so there’s some cool… Especially some of the geographic data, that’s… And anyway, like I said…

39:27 MH: Okay, so now, we do have to start to wrap up, but quickly, we’ll do a lightning round. Everybody, your most unscientific prediction of the November election, given there are no running mates yet. I’m willing to go first with the craziest idea, and here we go: Joe Biden wins the election and immediately steps down, ushering in our first Black woman president, Kamala Harris. Okay, go, go.

[laughter]

39:51 TW: I’m not… No way. Plus, by the time this comes out, Biden will have chosen. It may be Kamala Harris, it may not.

40:00 MH: Yeah, I’m not saying we’re gonna be right, but wouldn’t it be cool if you call it without knowing? Okay, go, Tim. No questions.

[laughter]

40:08 TW: I’m not gonna do it. I can’t do it. I just, I don’t know. Stacey Abrams… I can’t, I can’t do it.

40:20 MH: Well, it’s a game, it’s not a prediction.

40:21 GM: It’s intriguing as a game, but every time I’m thinking of a scenario, I’m like, “That’s probably not gonna happen,” so I’m trying to pick one that might happen. I’m trying to optimize the probability of the event I pick. I can’t think of one, though. You’re gonna have to go to Moe first.

40:38 MH: Alright, Moe.

40:40 MK: Oh, I’m not… I am so not doing this. For crying out loud, I might get it wrong.

40:46 MH: If you could come up with one where Andrew Yang somehow gets in there, I think that would be pretty popular.

[laughter]

40:53 TW: I can’t…

40:53 GM: No comment.

40:54 MH: I like my Biden-Harris idea, I like it.

40:58 GM: It’s a pretty mainstream idea, though.

41:01 MH: Well, except for the he immediately steps down. That’s not me.

41:06 TW: I think I could… I think I would do that with a twist, though.

41:11 MH: And then he’s like, “Hey, first woman president, let’s do it.”

41:14 GM: So I don’t actually know what the Constitution says about that. You have to be elected by the Electoral College, so you have to wait until December, whatever, 12th to do that.

41:25 MH: Yeah, yeah. I would definitely advise, if he’s going to do that strategy, to do it post-inauguration. [chuckle] Transition power, then step down. Alright, so really, nobody else is gonna have the… And listen, Moe, I call it having Beyoncé energy, thank you.

41:45 MK: Oh, I love it. But I also, I’m just gonna plus one your idea ’cause I really like the sound of that, even though the probability is probably very low.

41:52 MH: Okay, alright.

41:54 TW: Well, but I think it would actually be more maybe a year in, and then Biden will say, “Okay, I’m not gonna run again. I’m gonna take on the most politically brutal things, I’m gonna champion those. I’m gonna give to the VP some stuff that are meaningful that she could run with and win. I’ll provide political cover ’cause I’m making myself a lame duck and really setting her up to run successfully in 2024.”

42:24 GM: Yeah, wait until after the midterms and then do it.

42:27 TW: Oh, yeah.

42:27 GM: Or right before the midterms.

42:29 MK: Okay, now, I’m plus one-ing Tim’s idea.

[laughter]

42:32 MH: Which is just riffing off of my ideas though.

42:35 MK: Yeah.

42:35 MH: Alright, nobody’s counting electoral votes, this is just wild speculation. I can understand, Elliott, given your profession and your professional responsibilities, if you do not wish to engage in such speculation.

42:48 GM: Maybe offline.

42:49 MH: There you go, that’s right. Part of mine is also that Bitcoin becomes the default currency of the United States, so there’s a lot that goes into it. No, just kidding.

42:57 GM: Oh, you’re one of those.

43:00 MH: Okay, no, I’m not. Hey, by the way, we are running a special at the Power Hour: if you give us 1000 Bitcoin, we’ll give you 2000 back. Okay, no. Let’s do last calls. That’s what’s super good about this show: we do like to go around the horn and share something we found interesting that we think might be of interest to our listeners. So Elliott, you’re our guest. Do you have a last call you’d like to share?

43:19 GM: I’ll share two last calls. I’m reading a book by a historian named Sarah Igo about the history of polling and surveys, statistics, really, about Americans, the rise of viewing Americans as a mass public, called The Averaged American, which is really good. And then I’m also reading Lord of the Rings for the first time. It’s been a year and I am halfway through the second book. [chuckle] It’s great, but it is a lot.

43:46 MH: So now, had you already watched the films before doing this, or?

43:49 GM: No, I haven’t seen them yet, so no spoilers.

43:51 MH: Okay, no, no, no. I think you’re doing a great thing for yourself and for the future of the country by doing it that way.

43:58 GM: Yeah, I was really inspired by Stephen Colbert and his…

44:02 MH: Oh, right, yeah.

44:03 GM: Horrific knowledge. [chuckle]

44:05 MH: Yes, yeah, he’s right up there. Alright, Tim, what about you? What’s your last call?

44:13 TW: Well, I could do the second book in The Stormlight Archives. Actually, that’s courtesy of you, that I’ve just started reading Brandon Sanderson. That’s a reference to the past episode, Elliott.

44:25 GM: It’s the first one, though.

44:26 TW: Oh, look at that. Nice.

44:27 MH: Oh, yeah. It’s the first book in The Stormlight Archives, yeah.

44:32 TW: So my actual last call is from Adam Ribaudo, a past guest on the podcast. This was just a self-education R exercise: he went through Cole Knaflic’s Storytelling with Data book, downloaded the spreadsheets where she had everything in Excel for the charts, the visualizations she’d created, and he recreated them using R, using ggplot. So he wrote a post called “What I Learned Recreating 29 Storytelling with Data Charts in R.” I’d gone through a similar exercise on a much smaller scale, but as a way to pick up how to use the tool, he gave himself a somewhat artificial but useful exercise to go through. It’s a quick little skim, and hats off to Adam for actually pulling that off: 29 different charts recreated in R using ggplot.

45:33 MH: Nice! Alright, Moe, what about you? What’s your last call?

45:38 MK: So I did a bit of soul searching over the last 24 hours because I didn’t have a last call. And I did my usual like, “Okay, should I scramble and try and find something really interesting?” And I decided no. Do you know why? Because, so there’s been some research that’s come out that says that people are actually working longer hours because of COVID. Since working from home, I’ve noticed that I’m definitely working longer hours. I get to the end of the day, I don’t even wanna listen to a podcast. I don’t wanna be anywhere near my phone or laptop. I wanna be gardening or I wanna be doing a jigsaw puzzle and I’m kind of encouraging people to do the same. Plus, to be frank, I’ve actually been doing a much better job of carving out time to do a bit of data engineering at the moment.

46:26 TW: Wait, not at home. On the weekend?

46:30 MK: No, this is during work hours. Settle down.

46:32 TW: Okay. Okay. Okay.

46:34 MK: So I’ve been spending a lot more time learning DBT, and so I’m kind of like, “You know what? It’s totally fine that I’m not reading nerdy data stuff on the weekend.” And so I’m abstaining.

46:47 MH: I love it. That’s great because now you’re paving the way for those others of us, Moe, who also from time to time are like, “What is my last call? I don’t know.”

46:58 MK: No, but I just feel like sometimes from the outside, it looks like people always have it all together, and I think it’s really important sometimes to see that sometimes people don’t have it together. Sometimes people need a freaking break. And yeah, that’s cool.

47:14 MH: Technically, Moe, you’re kind of stealing my thing; it’s sort of like I’m the one who doesn’t have it all together, so I try to be that guy. So no, I do not have, well, actually, sort of. It’s a GPT-3 last call. So recently OpenAI have put out into beta a new version of their, I think it’s like a text AI model, called GPT-3, and it’s doing some really crazy interesting stuff. So it’s still pretty limited who’s got access, but in some of my Twitter streams I’m seeing people playing with it, and there’s literally like describing a website and it will build the code behind it, and it’s doing some very interesting Turing-competent types of transactional conversations with people. So it’s a very interesting step forward in natural language and AI, so it will be worth watching.

48:14 TW: Is that the same one that the New Yorker or something, it actually had it write an article or started it? I think I…

48:20 MH: Yeah I mean similar stuff, but I don’t think that particular one is based on GPT-3. I’m not sure though.

48:28 GM: We had GPT-3 or GPT-2, I think, write an article for The Economist once in our Christmas Edition. It was nonsense ’cause it doesn’t know facts, but the semantics of the sentence were incredibly accurate.

48:45 MH: Yeah.

48:45 GM: But it was just making up stuff.

48:50 MH: Yeah, GPT-3 is a big step forward and there’s a lot of excitement about what it can potentially do in terms of popping up more chat bots when you visit websites probably. I don’t know. But yeah.

49:03 GM: It’ll take my job one day.

49:03 MH: Yeah, taking all of our jobs away. But I think then UBI, so maybe it’ll be okay. Alright, we’ve gotta wrap up, and it’s always great to hear from you, the listener. And you’re probably listening to this and saying, “Well, I think the forecast should be modeled this way,” and you just found out there’s a whole GitHub repository you can go download. So after you download that, run your own models, then send your comments and questions. No, I’m just kidding. We would love to hear from you, and the best way to do that is through the community on the Measure Slack or on Twitter or in our LinkedIn group. And, Elliott, you’re also on Twitter as well. Is it @gelliottmorris? Is that…

49:47 GM: I’m on Twitter far too much. [laughter] But if you have questions for me which are insightful, you can send them to me @gelliottmorris, yeah.

49:57 MH: @gelliottmorris, perfect. And it’s a great follow on Twitter as well. And also, you should probably subscribe to his newsletter, The Crosstab. He’s obviously busy with an election, but it’s something that you should probably keep up with. Okay, Elliott, thank you so much for being on the show. It’s really been a pleasure to discuss this stuff. I love the passion that you’re bringing to it, and the intelligence. I feel like election forecasting is in pretty good hands. I was worried, but now I’m feeling better about it. No, I’m just kidding.

50:28 GM: Well I’m happy to be here.

50:29 MH: I’m the least qualified person to tell you anything about whether you’re doing a good job or not. [laughter] No, it’s true. I don’t know anything about it. I just like reading the stuff that comes out. I’m a big… I like the outputs. So anyway, obviously no show would be complete without making mention of our excellent producer, Josh Crowhurst. So, Josh, thank you for everything you do. And obviously no matter what election results or the modeling says, I know I speak for both of my co-hosts Tim and Moe, when I tell all you analysts out there, keep analyzing.

51:10 Announcer: Thanks for listening. And don’t forget to join the conversation on Twitter or in the Measure Slack. We welcome your comments and questions. Visit us on the web at analyticshour.io or on Twitter @analyticshour.

51:24 Charles Barkley: So smart guys want to fit in, so they made up a term called analytics. Analytics don’t work.

51:30 Thom Hammerschmidt: Analytics, oh my God. What the fuck does that even mean?

51:41 MK: I need to know how fast news travels. Did you guys hear the story in Tasmania about a great white jumping into a boat, picking up a 10-year-old kid, and dragging him into the water?

51:51 GM: Like straight up from Jaws?

51:53 MK: Yeah, actually, and I was really skeptical. I’m like, “This is a load of shit. I bet you he was gutting fish, the shark jumped in, and then knocked the kid in the water,” and all accounts are like, “No, he was not gutting fish. No, he actually went for the kid, pulled him into the water, and then the dad jumped in and apparently scared a great white and then he swam away.” Anyway, I’m really scared. I don’t know why there’s all these shark sightings going on and everyone keeps sending them to me which I’m not cool with. I’m like, “I don’t need to see it.”

52:25 MH: You know, Moe, there are times when somebody says something and I had not turned on the recording yet and then there are times when I managed to turn it on like right now where I got all of that. So, in answer to your question, Moe, no, I had not heard that story yet and I have a couple questions.

52:46 MK: Okay.

52:47 MH: No. I’m just kidding.

52:51 MK: Yeah, but I also just think Australian politics are really boring.

52:56 GM: ‘Cause it works so well.

52:56 MK: Well, in comparison.

52:58 MH: Well hey, at least the sharks aren’t though.

53:06 TW: Rock flag and multi-level regression and post-stratification.

