
Summary: A blog post circulating among EAs points out that recent presidential elections have been decided by fewer than 100,000 votes. It may be tempting to conclude that each extra vote in a swing state has a 1-in-100,000 chance of changing the outcome of the 2024 presidential election. In this post, I explain why this is not the case. I estimate the actual number to be 1-in-3 million for a vote in Pennsylvania (the most important swing state) and 1-in-6 million for a generic "swing state vote". This has important implications for people who are deciding whether to donate to efforts to change the outcome of the 2024 presidential election.

 

Introduction

Like many of you,[1] I want Kamala Harris to win the 2024 U.S. presidential election. I also think that electoral politics as a cause area is underrated by EAs, and in 2020 I wrote a blog post arguing that voting for Joe Biden is an effective use of time. To summarize the argument in a paragraph:

If you live in a swing state, there's about a 1 in 10 million chance that your vote will flip the outcome of the entire presidential election. The outcome of the election will influence trillions of dollars in spending. So your vote influences how hundreds of thousands of dollars get spent, in expectation (in addition to non-budgetary considerations).

By the same token, if you support Kamala Harris then you might consider donating to efforts to get her elected. If you can get her one extra swing-state vote for $1,000 (that's my best guess), that means that you can spend $1,000 to influence how hundreds of thousands of dollars get spent.

Is that a good deal, compared with other EA interventions? Maybe! I usually estimate that the U.S. government saves about one life per $10 million that it spends well. If you believe this guess, you'd be saving a life for about $10k-100k, which is... fine but worse than interventions like the Against Malaria Foundation. (Of course, it's much more complicated than that.[2])

But what if you thought that one extra swing-state vote increased Harris' chances of winning by 1 in 100 thousand? In that case, you'd be spending $1,000 to influence how tens of millions of dollars get spent. That's a really good deal -- literally a 100x better deal -- and is probably worth it!
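(For concreteness, here is a minimal sketch of that arithmetic. The $2 trillion figure for outcome-contingent spending is an illustrative assumption consistent with "trillions of dollars," not a precise estimate; the $1,000-per-vote and $10-million-per-life figures are the rough numbers above.)

```python
spending_at_stake = 2e12       # illustrative assumption: ~$2 trillion of spending hinges on the outcome
cost_per_vote = 1_000          # my best-guess cost of getting one extra swing-state vote
cost_per_life_saved = 10e6     # ~one life per $10 million of well-spent government money

for p_flip_per_vote in (1e-7, 1e-5):   # 1-in-10-million vs. 1-in-100-thousand per vote
    influenced = p_flip_per_vote * spending_at_stake   # dollars influenced per extra vote (i.e. per $1,000 donated)
    cost_per_life = cost_per_vote / (influenced / cost_per_life_saved)
    print(f"p={p_flip_per_vote:.0e}: ~${influenced:,.0f} influenced per ${cost_per_vote:,} donated, "
          f"~${cost_per_life:,.0f} per life saved")
# p=1e-07: ~$200,000 influenced per $1,000 donated, ~$50,000 per life saved
# p=1e-05: ~$20,000,000 influenced per $1,000 donated, ~$500 per life saved
```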

 

Where does the number 100 thousand come from? The anonymous blog "Make Trump Lose Again" (MTLA) makes the case that some interventions to help Harris get elected are really cost-effective. Quoting from the blog post:

Biden won the last election by 42,918 combined votes in three swing states. Trump won the election before that by 77,744 votes. In 2000, just 537 votes (and likely some Republican meddling) in Florida decided the election for Bush, who won a second term by 118,601 votes in 2004. 

There’s a good chance the 2024 election will be extremely close too. [Emphasis original.]

(What does it mean that Biden won by 42,918 votes?  If Trump had won Arizona, Georgia, and Wisconsin, he would have won the election. He would have needed 10,457 more votes in Arizona, 11,779 more votes in Georgia, and 20,682 more votes in Wisconsin, for a total of 42,918 votes.)

It may be tempting to draw the conclusion that an extra swing-state vote will increase Harris' chances of winning by 1 in 100 thousand. Indeed, a couple of people I've talked to implicitly had that takeaway from the blog post. But as I will argue, such a conclusion is unwarranted.

 

This post has two parts. In Part 1, I explain why the quote from MTLA does not straightforwardly translate to an estimate of the impact of a marginal vote. Specifically, I argue that:

  • (The less important reason) It is a coincidence that three of the last six elections were within 100,000 votes. I estimate a 20-25% chance of this happening again in 2024.
  • (The more important reason) Even if I told you that the 2024 election will be within 100,000 votes, you wouldn't know which states will be the decisive ones.

In Part 2, I lay out a better way to think about the impact of a marginal vote. 

  • I introduce the microHarris (μH) as a unit of election impact equal to an increase of one-in-a-million in Harris' chances of winning, and use Nate Silver's forecast to estimate how many μH various interventions are worth.
  • I conclude that an extra vote for Harris in Pennsylvania is about 0.3 μH (one in 3 million), and that an extra vote for Harris in a generic swing state is about 0.17 μH (one in 6 million). This is 30-60x less impactful than the one-in-100k that one might gather from the above quote.

 

Part 1: The MTLA quote doesn't tell you the impact of a marginal vote

How close should we expect the 2024 election to be?

Usually, election closeness is measured by how much you'd need to swing the national vote (uniformly across all the states) in order to change the outcome of the election. For example, if Biden had done 0.63% worse against Trump, he would have lost Arizona, Georgia, and Wisconsin, and therefore the election. We will call this number (0.63%, in 2020) the tipping-point margin.[3] Here are the tipping-point margins of recent elections:

Year | Tipping-point margin | Swing-state votes you'd need to swing
2020 | 0.63% | 42,918 (in GA, AZ, WI)
2016 | 0.77% | 77,744 (in MI, PA, WI)
2012 | 5.36% | 527,737 (in FL, OH, VA, CO)
2008 | 8.95% | 994,143 (in NC, IN, NE-02, FL, OH, VA, CO)
2004 | 2.11% | 134,648 (in IA, NM, OH)[4]
2000 | 0.01% | 537 (in FL)

And here's a chart of the same data.[5]

Three of the last six elections had a margin of victory of less than 1%. But although there's reason to believe that U.S. elections are on average close and will continue to be close,[6] there's no reason to think that elections will be extremely close (within 1%) half the time. Instead, the most parsimonious model for the tipping-point margin -- a normal distribution[7] with mean zero and standard deviation around 4-5% -- fits this data well.

So, how likely is the 2024 election to be "decided by fewer than 100 thousand votes"? Right now, Nate Silver's presidential election model estimates the margin in the tipping-point state to be (roughly) normally distributed with mean zero and standard deviation 5.5%. If you trust that model, then based on the curve above, there's about a 20% chance of this outcome.[8] (My personal subjective probability is more like 25%, though.) If you want, you can bet on this Manifold market.
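(If you want to reproduce the rough arithmetic, here is a minimal sketch. The 1.4% figure -- the tipping-point margin I'm treating as equivalent to roughly 100,000 votes, based on the quadratic fit above -- is my own assumption, not a number from Silver's model.)

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    # cumulative distribution function of a normal distribution
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

sigma = 5.5             # approximate SD of the tipping-point margin in Silver's model, in points
margin_for_100k = 1.4   # assumed margin equivalent to ~100,000 votes (my reading of the quadratic fit)

p_close = norm_cdf(margin_for_100k, 0, sigma) - norm_cdf(-margin_for_100k, 0, sigma)
print(round(p_close, 2))  # ~0.20
```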

Knowing that 100,000 votes would swing the election isn't a game plan

Okay, but 25% is actually a pretty large chance! Suppose I came back from the future and told you that the election was decided by fewer than 100,000 votes. What actions would you take to try to get Harris to win?

Well, let's say that I came back to August 2016 and told you that the election would be decided by fewer than 100,000 votes. What would you have done then?

Perhaps you would have looked at election forecasts, found which states had the most valuable votes (roughly speaking, the states that seemed closest to the national tipping point[9]), and worked to flip votes in those states. Perhaps you would have gone to FiveThirtyEight's excellent[10] 2016 forecast, and looked at their voter power index:

...and decided to fly to New Mexico, Nevada, and New Hampshire to knock on doors (or donate to voter registration efforts in those states, or whatever). In which case, come November, you would have been disappointed to learn that actually you should have been knocking on doors in Michigan, Pennsylvania, and Wisconsin.

Similarly, if I had told you in August 2020 that the election would be decided by fewer than 100,000 votes, you would have probably chosen to knock on doors in Pennsylvania, whereas the states that ended up deciding the election were Georgia, Arizona, and Wisconsin.

To summarize, flipping 100,000 votes is not enough. You need to flip 100,000 votes in the right states. And that involves a lot of guesswork and getting lucky, even with the best election models.

 

Part 2: So how valuable are swing-state votes?

Ultimately, if you're trying to argue that people should donate to help Harris win the election, the problem with the argument "this election may be decided by 100,000 votes" is that it doesn't tell you how effective any particular intervention is. In this section, I will aim to bridge that gap.

The microHarris: a unit of intervention effectiveness

The effectiveness of an intervention should be judged by how much it increases the probability that Kamala Harris will win the election. We will be considering interventions like "one extra vote for Harris in Pennsylvania": small-enough interventions that this increase in probability is measured in millionths.

Hence, inspired by microCOVID, we will define the microHarris. A microHarris (or μH) is a one-in-a-million increase in the probability that Kamala Harris will win the election. The effectiveness of interventions (or events) can be measured in μH. Example usage includes:

  • "Wow, what a good ad! If they spend $100,000 on this ad campaign, that might be worth 20 microHarrises."
  • "Ooh, did you see, Taylor Swift endorsed Kamala?[11] That's like 1000 microHarrises, maybe more if continues to encouraging her fans to vote!"
  • "Wow, what a stellar debate performance. That's gotta be worth at least 20,000 microHarrises."
  • "I spent all day knocking on doors in Pennsylvania! I know it's only one microHarris, but that's actually kind of a lot if you think about it."[12]

In the next section, I will try to answer the question: how many μH is an extra vote for Harris? I will consider variations of this question: a random vote; a vote in a swing state; a vote in Pennsylvania.

How many microHarrises is an extra vote for Harris?

All the forecasting models I know of are in agreement that a vote in Pennsylvania is worth more than a vote in any other state. So let's start there: how many microHarrises is an extra vote for Kamala Harris in Pennsylvania?

I reached out to Nate Silver's assistant, Eli McKown-Dawson, for an answer, but didn't hear back. But luckily, there's an incredibly helpful snippet in a Nate Silver blog post from a few weeks ago. Nate was arguing that Harris should choose Pennsylvania governor Josh Shapiro as her running mate, because running mates help presidential candidates in their home states. And so Nate considered how changing Harris' standing in Pennsylvania (without otherwise affecting the model) would affect her chances of winning the election:

  • Harris initially won the Electoral College in 19,081 of 40,000 simulations (47.7%)
  • If Shapiro nets her an additional 0.5 points[13] in Pennsylvania, she wins the Electoral College an additional 401 times. That brings her total win probability up to 48.7%.
  • If he has a bigger impact than that and nets Harris 1 full point in Pennsylvania, she wins the Electoral College 49.6% of the time.

In other words, a 1 percentage point increase in Harris' margin of victory in Pennsylvania (let's call this the "Pennsylvania bump") increases her probability of winning the presidential election by about 2 percentage points (from 47.7% to 49.6%).

What are these 2% of worlds? They are the worlds in which Pennsylvania is both very close and decisive:

  • Without the Pennsylvania bump, Trump wins Pennsylvania, but by a margin of victory of less than 1%. That's about 68,000 votes.
  • Without the Pennsylvania bump, Harris loses the electoral college. With the Pennsylvania bump, she wins it.

(See this footnote[14] for some intuition about where the 2% comes from and why it's a reasonable estimate of this probability.)

In other words: according to Nate's model, an extra 68,000 votes in Pennsylvania counterfactually increase Harris' chance of victory by 2%.

This means that one extra vote for Harris in Pennsylvania is worth 0.3 μH. Or put otherwise, the probability that she wins the election increases by 1 in 3.4 million: a far cry from 1 in 100,000.[15]
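(Spelling out the arithmetic as a sketch; the 68,000 figure is, as above, roughly one point of Pennsylvania's presidential vote.)

```python
delta_win_prob = 0.02     # +1 point in PA raises Harris' win probability by ~2 points (47.7% -> 49.6%)
votes_per_point = 68_000  # ~1% of Pennsylvania's presidential vote

uH_per_pa_vote = delta_win_prob / votes_per_point * 1e6
print(round(uH_per_pa_vote, 2))                                # ~0.29 microHarrises per extra PA vote
print(round(1 / (delta_win_prob / votes_per_point) / 1e6, 1))  # 3.4, i.e. roughly 1 in 3.4 million per vote
```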

 

What about votes in other swing states? Conveniently for us, Nate's Voter Power Index (VPI) tells us the importance of votes in different states, in proportion to each other:

The VPI is normalized so that the average VPI across all voters is 1. So a Pennsylvania voter's vote is 8.2 times more important than average.

Thus, an extra vote for Harris in Wisconsin is worth 4.8/8.2 times as much as an extra vote for Harris in Pennsylvania, or about 0.17 μH, and so on.

What about a generic "swing state vote"? The seven states above are the canonical "seven swing states" in this election. If an organization tells you that they'll be turning out voters for Harris in swing states, I'd default to assuming that they're targeting these seven states, in proportion to their populations.[16] The effectiveness comes out to about 0.17 μH, or 1 in 6 million, per vote.

What about a random vote in the United States? That's 8.2x less valuable than a Pennsylvania vote, or about 0.036 μH.
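(A sketch of the VPI scaling. The only VPI values quoted in this post are Pennsylvania's 8.2 and Wisconsin's 4.8; the 1.0 for the national average follows from the normalization, and the other swing states would slot in the same way.)

```python
uH_per_pa_vote = 0.29   # from the Pennsylvania calculation above
vpi = {"Pennsylvania": 8.2, "Wisconsin": 4.8, "national average": 1.0}

for place, power in vpi.items():
    # a vote's value scales linearly with its state's Voter Power Index
    print(place, round(uH_per_pa_vote * power / vpi["Pennsylvania"], 3))
# Pennsylvania 0.29, Wisconsin ~0.17, national average ~0.035
```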

Does Nate Silver's model have too much spread?

I think the strongest objection to the calculations I've done here is that Nate Silver's model is underconfident: that the distribution of outcomes is too broad.

For instance, as of August 31, he models Trump's margin of victory in Pennsylvania as normal with mean 0.6% and standard deviation 5.3%. Is that too much standard deviation?

That standard deviation is inferred from historical data on (1) how much elections change between now and November and (2) how much polling error we should expect on election day. Perhaps you could argue that we ought to expect factor (1) to be below its historical mean, because the electorate is polarized. But on the other hand, this election features an unexpected Democratic nominee, and voters' perceptions of Harris are not yet locked in. Indeed, Harris' favorability ratings have changed massively since she became the nominee.

So if I were forced to pick a side, I would say that 5.3% is slightly too much standard deviation -- but only by a little. And so perhaps my microHarris estimates are a little too low, but I wouldn't adjust them by more than 20%.
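(One rough way to sanity-check that last claim, under the simplifying assumption that a PA vote's value scales with the probability that Trump's Pennsylvania margin lands within a point of zero. The 4.5% alternative standard deviation is my own stand-in for a "slightly more confident" model.)

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def p_pa_decided_by_a_point(sigma, mean=0.6):
    # probability that Trump's PA margin lands between 0 and 1 point, per the model above
    return norm_cdf(1, mean, sigma) - norm_cdf(0, mean, sigma)

ratio = p_pa_decided_by_a_point(4.5) / p_pa_decided_by_a_point(5.3)
print(round(ratio, 2))  # ~1.18: tightening the model raises the estimates by a bit under 20%
```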

 

Conclusion

To summarize the numbers:

  • If you're going to try to get Kamala Harris more votes (and you're a relatively small player), you should go all in on Pennsylvania and you'll get 0.3 μH per vote (1 in 3 million chance of flipping the election per Harris vote).
    • If you persuade someone to vote for Harris instead of Trump (rather than just finding Harris another voter), you get 2x credit (since you're changing the margin by 2, not 1).
  • If you're donating to a place that claims to be flipping votes "in swing states", I'd count that 0.17 μH per vote, or 1 in 6 million.

Is that worth it? I don't know: I think it depends a lot on your priorities and beliefs. I definitely don't think it's crazy. If you trust my $1,000/vote estimate from earlier, then every $3,000 you spend has a one-in-a-million chance of changing the outcome of the election. (If you object to this analysis on EDT grounds, see footnote.[17])
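(The dollars-per-microHarris arithmetic, as a quick sketch using my $1,000-per-vote guess.)

```python
cost_per_vote = 1_000   # earlier best-guess cost of one extra Harris vote in Pennsylvania
uH_per_vote = 0.3       # microHarrises per Pennsylvania vote, from Part 2

print(round(cost_per_vote / uH_per_vote))  # ~3333 dollars per microHarris, i.e. roughly $3,000
```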

Personally, I think that the U.S. federal government's policy on A.I. over the next four years will be incredibly important to our future, and that A.I. policy will in expectation be much better under a Harris administration than under a Trump administration. My current belief is that donating to efforts to help Harris get elected is better than all non-A.I.-related interventions, but less good than donating to organizations that are specifically trying to set up good A.I. governance in the United States.

But of this I am far less certain. My take in the previous paragraph is informed by murky intuitions, beliefs, and values. My estimates of how good things are for the world could be off by a factor of a hundred, and may even have the wrong sign.

By contrast, I'm pretty confident in my estimate of the probability that an extra vote in Pennsylvania will flip the outcome of the presidential election. I'd be quite surprised to learn that I was off by more than a factor of five.

So my advice: if you're deciding whether to donate to efforts to get Harris elected, plug my "1 in 3 million" estimate into your own calculation -- the one where you also plug in your beliefs about what's good for the world -- and see where the math takes you. And if the math takes you to helping Harris get elected, I suggest reading the Make Trump Lose Again blog post to find out the most effective places to donate to!

 

  1. ^

    Most EAs identify as liberal; see here.

  2. ^

    This calculation basically assumes that money spent under the Trump administration is wasted, while money spent under the Harris administration is about as good as the government's bar for which programs are worth funding. This is simplistic in a number of ways. But for me, the dominant consideration is that I think Harris is likely to handle AI better than Trump. For this reason, I think the calculation understates how good it is to donate to Harris.

  3. ^

    The tipping-point margin is equal to the winner's margin of victory in the tipping-point state.

  4. ^

    My number differs from Make Trump Lose Again's 118,601 because Iowa and New Mexico were closer than Ohio, and I've decided to count John Kerry as needing to flip those states before he could flip Ohio. Another way of putting it is that Kerry couldn't have known to focus specifically on Ohio in advance of the election.

  5. ^

    Lest I be accused of overfitting by using a quadratic curve: I decided on using a quadratic fit before I saw the data. This is because, as tipping-point margin increases, the number of states in which the loser needs to flip votes increases linearly with the tipping-point margin. Thus, we should expect the total number of votes needed to flip the election to increase quadratically, not linearly, with the tipping-point margin. (Also, the model has only two degrees of freedom -- not three -- because I forced it to pass through (0, 0).)

  6. ^

    If one party is more popular than the other, it is incentivized to moderate in order to win over more voters.

  7. ^

    Technically a half-normal distribution if we're defining tipping-point margin to be a positive number.

  8. ^

    For the purposes of this prediction, I'm defining "decided by 100,000 votes" using the method that gave me 134,648 for 2004, as opposed to Make Trump Lose Again's 118,601 (see footnote 4).

  9. ^

    In other words, if you were to sort states from red to blue, the states that you'd expect to be closest to the decisive 270th electoral vote.

  10. ^

    The forecast gave Trump a higher chance of victory than prediction markets or any other (high-profile) publicly-available election model.

  11. ^

    This hasn't happened yet, but I hope (and think) it will!

  12. ^

    On Election Day 2020, I knocked on registered Democrats' doors in Pennsylvania with a friend. One family had forgotten to drop off their ballots until we reminded them. I give us credit for two counterfactual Biden votes in PA!

  13. ^

    Here, I believe that "points" means "margin of victory": winning PA 51-49 (i.e. by 2%) is 1 point better than winning PA 50.5-49.5 (i.e. by 1%).

  14. ^

    Polls in Pennsylvania are essentially dead even. Based on historical data about polling error, Nate Silver models Trump's margin of victory in Pennsylvania (possibly a negative number) as uncertain, with standard deviation 5.3%. So the probability that his margin of victory is between 0% and 1% is about 7.5%. But we also need Pennsylvania to be decisive (i.e. for Trump to win the electoral college by fewer than Pennsylvania's 19 electoral votes). Nate's model assigns probability 9% to that possibility. However, we must remember that "Trump wins PA by between 0% and 1%" and "Trump wins the electoral college but by fewer than 19 electoral votes" are not independent: they are highly correlated, since they both happen when the election ends up very close. It seems that according to Nate's model, the probability that Trump wins the electoral college but by fewer than 19 electoral votes conditional on him winning Pennsylvania but by less than 1% is about 25-30% (i.e. three times higher than the unconditional probability). That seems intuitively reasonable to me.
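    (A sketch of that arithmetic; the 27% conditional probability is just the midpoint of the 25-30% range above.)

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p_pa_close = norm_cdf(1, 0, 5.3) - norm_cdf(0, 0, 5.3)  # Trump wins PA by 0-1 points
p_decisive_given_close = 0.27                            # midpoint of the 25-30% range above
print(round(p_pa_close, 3))                              # ~0.075
print(round(p_pa_close * p_decisive_given_close, 3))     # ~0.02, matching the 2% in the main text
```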

  15. ^

    This number has perhaps gone up a little, since the election is a few weeks closer (so uncertainty in the outcome is a little lower) and Pennsylvania has remained as close as can be. But I think it hasn't gone up very much -- maybe 10%.

  16. ^

    Maybe they're spending disproportionate time in Pennsylvania. (That would be good.) On the other hand, maybe they're wasting some of their resources on states like Florida.

  17. ^

    I've heard the argument that if you believe in evidential decision theory, you should expect the impact of a donation to be much larger, because your donation decisions are correlated with other people's: deciding to donate $3,000 should increase your estimate of how much was donated in total by more than $3,000. I have a couple of thoughts about this:

    • If you're just deciding how to spend your donation budget, then I suspect this argument applies equally well to other places that you might donate to. So you should figure out the best place to donate to and be extra happy, having updated on the fact that people who were using a similar reasoning process probably ended up donating to that place too. (Although if you think you're correlated with sufficiently many people, you might want to think about which donation opportunities hit diminishing returns on marginal donations most quickly.)
    • If you're deciding whether to donate at all, this argument might be more compelling. But note that if your decision of whether to donate hinges on which decision theory to use, then your decision probably isn't correlated with that many other people's!


Comments

There might be a second path to impact here in addition to the chance that any single vote is pivotal to changing the election. In choosing future candidates and platforms, the parties will take into account the margins of victory. A big win by Harris (or Trump) will be interpreted differently than a very narrow win. The parties may look at this as a measure of where the median voter lies (or some other measure relevant to the elections) and adjust accordingly. Each additional vote will have a small impact on this, but as you point out the scale is large.

So it may not only be a very small chance of a large impact but an almost certain chance of having a substantial (but very hard to measure) impact.

(This is not my original idea; there have been papers on this in political science and public choice.)

I think this is all very reasonable, and I have been working under the assumption that one vote in PA leads to a 1 in 2 million chance of flipping the election. That said, I think this might be too conservative, potentially by a lot (and maybe I need to update my estimate).

Of the past 6 elections, 3 were exceedingly close: probably in the 95th percentile (for 2016 and 2020) and the 99.99th percentile (for 2000) for models based on polling alone. For 2020 this was the case even though Biden led the popular-vote polling by 8-10 points all year (so maybe that one would also have been a 99th percentile result?). It seems like if the model performs this badly, it may be missing something crucial (or it's just a coincidental series of outliers).

I don't really understand the underlying dynamics and don't have a good guess as to what mechanisms might explain them. However, it seems to suggest that maybe extrapolating purely from polling data is insufficient and there are some background processes that lead to much tighter elections than one might expect.

Some incredibly rough guesses for mechanisms that could be at play here (I suspect these are mostly wrong but maybe have something to them):

  • Something something polarization: the steady voting blocs for Republicans and Democrats aren't shifting much year to year, so we should expect margins this year similar to 2016 and 2020.
  • Some balancing-out process where politicians adjust their platform, messaging, etc. in reaction to their adversary, and this ends up making elections closer.
  • Maybe voters have local information on whether the candidate they don't like is more likely to win, and that makes them more motivated to vote. In aggregate, this local information turns out to be pretty accurate and leads to tighter-than-expected elections.
  • Maybe political parties/donors observe how much their adversary spends in a given state and are consistently able to spend to counteract those efforts. This could provide a balancing effect that tightens the race. It would have the unfortunate consequence that visible spending is much less effective, but it maybe implies that smaller, more under-the-radar projects are better.

Thanks for those thoughts! Upvoted and also disagree-voted. Here's a slightly more thorough sketch of my thought in the "How close should we expect 2024 to be" section (which is the one we're disagreeing on):

  • I suggest a normal distribution with mean 0 and standard deviation 4-5% as a model of election margins in the tipping-point state. If we take 4% as the standard deviation, then the probability of any given election being within 1% is 20%, and the probability of at least 3/6 elections being within 1% is about 10%, which is pretty high (in my mind, not nearly low enough to reject the hypothesis that this normal distribution model is basically right). If we take 5% as the standard deviation, then that probability drops from 10% to 5.6%. (A quick numerical check of these figures is sketched after this list.)
  • I think that any argument that actually elections are eerily close needs to do one of the following:
    • Say that there was something special about 2008 and 2012 that made them fall outside of the reference class of close elections. I.e. there's some special ingredient that can make elections eerily close and it wasn't present in 2008-2012.
      • I'm skeptical of this because it introduces too many epicycles.
    • Say that actually elections are eerily close (maybe standard deviation 2-3% rather than 4-5%) and 2008-2012 were big, unlikely outliers.
      • I'm skeptical of this because 2008 would be a quite unlikely outlier (and 2012 would also be reasonably unlikely).
    • Say that the nature of U.S. politics changed in 2016 and elections are now close, whereas before they weren't.
      • I think this is the most plausible of the three. However, note that the close margins in 2000 and 2004 are not evidence in favor of this hypothesis. I'm tempted to reject this hypothesis on the basis of only having two datapoints in its favor.
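Here's the quick numerical check of the binomial figures in the first bullet (a sketch; both probabilities come straight from the normal model, with no extra inputs):

```python
from math import comb, erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def p_at_least_3_of_6_within_1pct(sigma):
    p = norm_cdf(1, 0, sigma) - norm_cdf(-1, 0, sigma)  # one election's margin within 1%
    return sum(comb(6, k) * p**k * (1 - p)**(6 - k) for k in range(3, 7))

print(round(p_at_least_3_of_6_within_1pct(4.0), 3))  # ~0.096 (the "about 10%" figure)
print(round(p_at_least_3_of_6_within_1pct(5.0), 3))  # ~0.055 (close to the 5.6% figure)
```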

(Also, just a side note, but the fact that 2000 was 99.99th percentile is definitely just a coincidence. There's no plausible mechanism pushing it to be that close as opposed to, say, 95th percentile. I actually think the most plausible mechanism is that we're living in a simulation!)

I think it's very reasonable to say that 2008 and 2012 were unusual. Obama is widely recognized as a generational political talent among those in Dem politics. People seem to look back on 2008, especially, as a game-changing election year with really impressive work by the Obama team. This could be a rationalization of what were effectively normal margins of victory (assuming this model is correct), but I think it matches the comparative vibes at the time versus now pretty well.

As for changes over the past 20+ years, I think it's reasonable to say that there's been fundamental shifts since the 90s:

  • Polarization has increased a lot 
  • The analytical and moneyball nature of campaigns has increased by a ton. Campaigns now know far more about what's happening on the ground, how much adversaries spend, and what works.
  • Trump is a highly unusual figure, which seems likely to lead to some divergence
  • The internet & good targeting have become major things 

Agree that a 5-10% probability isn't cause for rejecting the hypothesis, but given that we're working with 6 data points, I think it should be cause for suspicion. I wouldn't put a ton of weight on this, but 5% is at the level of statistical significance, so it seems reasonable to tentatively reject that formulation of the model.

Trump vs. Biden favorability was +3 for Trump in 2020; Obama was +7 on McCain around election day (and likely averaged more than 7 points in Sept/Oct 2008). Kamala is +3 vs. Trump today. So that's some indication of when things are close. I couldn't quickly find this for the 2000 election.

Considering the average cost per life saved by the US government might not be the best measure? I suppose that having a Democrat in office would lead to a certain amount of additional spending, but would it double the impact? On the other hand, the counterfactual spending might be much higher impact if we think, for example, that the Republicans will slash foreign aid spending or funding for pandemic preparation. Wondering how these concerns were balanced.

Yeah I agree; I think my analysis there is very crude. The purpose was to establish an order-of-magnitude estimate based on a really simple model.

I think readers should feel free to ignore that part of the post. As I say in the last paragraph:

So my advice: if you're deciding whether to donate to efforts to get Harris elected, plug my "1 in 3 million" estimate into your own calculation -- the one where you also plug in your beliefs about what's good for the world -- and see where the math takes you.

"I usually estimate that the U.S. government saves about one life per $10 million that it spends well."


I'm curious, why do you think this? The value of statistical life (VSL) you linked is about the benefit of saving a life, not about the cost. If we assume the government consistently uses this $10 million VSL threshold for interventions, it could reflect the (marginal) cost of saving a life. But this feels like a weird approach. It might be useful for getting a sense of the order of magnitude, but I'm somewhat skeptical.

Yeah, it was intended to be a crude order-of-magnitude estimate. See my response to essentially the same objection here.
