Polling Analysis and Election Forecasting

Month: October 2012

Can We Trust the Polls?

If you believe the polls, Obama is in good shape for reelection. And my model’s not the only one showing this: you’ll find similar assessments from a range of other poll-watchers, too. The lead is clear enough that The New Republic’s Nate Cohn recently wrote, “If the polls stay where they are, which is the likeliest scenario, Obama would be a heavy favorite on Election Day, with Romney’s odds reduced to the risk of systemic polling failure.”

What would “systemic” polling failure look like? In this case, it would mean not only that some of the polls are overstating Obama’s level of support, but that most – or even all – of the polls have been consistently biased in Obama’s favor. If this is happening, we’ll have no way to know until Election Day. (Of course, it’s just as likely that the polls are systematically underestimating Obama’s vote share – but then Democrats would have even less to worry about.)

A failure of this magnitude would be major news. It would also be a break with recent history. In 2000, 2004, and 2008, presidential polls conducted just before Election Day were highly accurate, according to studies by Michael Traugott, by Pickup and Johnston, and by Costas Panagopoulos. My own model in 2008 produced state-level forecasts based on the polls that were accurate to within 1.4% on Election Day, on average, and to within 0.4% in the most competitive states.

Could this year be different? It’s true that survey response rates have fallen below 10%, but it’s not evident why this would necessarily help Obama. Surveys conducted using automated dialers (rather than live interviewers) often have even lower response rates, and are prohibited from calling cell phones – but, again, this tends to produce a pro-Republican, not pro-Democratic, lean. And although there are certainly house effects in the results of different polling firms, it seems unlikely that Democratic-leaning pollsters would intentionally distort their results to such an extent that they discredit themselves as reputable survey organizations.

My analysis has shown that despite these potential concerns, the state polls appear to be behaving almost exactly as we should expect. Using my model as a baseline, 54% of presidential poll outcomes fall within the theoretical 50% margin of error, 93% within the 90% margin of error, and 96% within the 95% margin of error. This is consistent with a pattern of random sampling plus minor house effects.
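For readers who want to try this themselves, here is a minimal sketch of how such a coverage calculation can be done, assuming each poll is reduced to a (reported share, model estimate, sample size) triple. The poll values below are made up purely for illustration:

```python
import math
from statistics import NormalDist

def coverage(polls, level):
    """Fraction of polls whose error falls inside the theoretical
    sampling margin of error at the given confidence level."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # two-sided critical value
    hits = sum(abs(obs - true) <= z * math.sqrt(true * (1 - true) / n)
               for obs, true, n in polls)
    return hits / len(polls)

# (reported Obama share, model's "true" share, sample size) -- made-up polls
polls = [(0.52, 0.51, 800), (0.49, 0.51, 600), (0.51, 0.50, 1000)]
for level in (0.50, 0.90, 0.95):
    print(f"{level:.0%} interval: {coverage(polls, level):.0%} of polls covered")
```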

Nevertheless, criticisms of the polls – and of those of us who are tracking them – persist. One of the more creative claims about why the polling aggregators might be wrong this year comes from Sean Trende of RealClearPolitics and Jay Cost of The Weekly Standard. Their argument is that the distribution of survey errors has been bimodal – different from the normal distribution of errors produced by simple random sampling. If true, this would suggest that pollsters are envisioning two distinct models of the electorate: one more Republican, the other more Democratic. Presuming one of these models is correct, averaging all the polls together – as I do, and as do the Huffington Post and FiveThirtyEight – would simply contaminate the “good” polls with error from the “bad” ones. Both Trende and Cost contend that the “bad” polls are those favoring Obama.

The problem with this hypothesis is that even if it were true (and the error rates suggest it’s not), there would be no way to observe evidence of bimodality in the polls unless the bias were far larger than anybody is currently claiming. The reason is that most of the error in the polls will still be due to random sampling variation, which no pollster can avoid. To see this, suppose that half the polls were biased 3% in Obama’s favor – a lot! – while half were unbiased. Then we’d have two separate distributions of polls: the unbiased group (red) and the biased group (blue), which combine to form the overall distribution (black). The final distribution is pulled to the right, but it still has only one peak.
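This is easy to verify by simulation. With 800 respondents, the sampling standard deviation of a single poll is about 1.8 points, and an equal mixture of two normal distributions with the same variance is bimodal only when the means are separated by more than about twice that – so a 3-point bias isn’t enough. Here is a minimal sketch, using the same numbers as the example above; the text histogram it prints has a single peak:

```python
import math
import random
from collections import Counter

random.seed(1)
TRUE, BIAS, N = 0.51, 0.03, 800        # true share, pro-Obama bias, poll size
SE = math.sqrt(TRUE * (1 - TRUE) / N)  # sampling SD of one poll, about 0.018

# Half the polls unbiased, half biased +3 points, all with sampling noise.
draws = [random.gauss(TRUE + (BIAS if i % 2 else 0.0), SE)
         for i in range(100_000)]

# Bin the simulated poll results; a bimodal mixture would show two peaks.
bins = Counter(round(x, 2) for x in draws)
for share in sorted(bins):
    print(f"{share:.2f} {'#' * (bins[share] // 500)}")
```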

Of course, it’s possible that in any particular state, with small numbers of polls, a histogram of observed survey results might just happen to look bimodal. But this would have to be due to chance alone. To conclude from it that pollsters are systematically cooking the books only proves that apophenia – the experience of seeing meaningful patterns or connections in random or meaningless data – is alive and well this campaign season.

The election is in a week. We’ll all have a chance to assess the accuracy of the polls then.

Update: I got a request to show the actual error distributions in the most frequently polled states. All errors are calculated as a survey’s reported Obama share of the major-party vote, minus my model’s estimate of the “true” value in that survey’s state on the day it was fielded. Positive errors indicate polls that were more pro-Obama; negative errors, polls that were more pro-Romney. To help guide the eye, I’ve overlaid kernel density plots (histogram smoothers) in blue. The number of polls in each state is given in parentheses.
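For anyone who wants to reproduce these overlays, here is a minimal sketch of a Gaussian kernel density estimate of the kind used in the plots, with made-up error values (the real errors are computed poll by poll, as described above):

```python
import math

def gaussian_kde(errors, bandwidth, grid):
    """Kernel density estimate -- the 'histogram smoother' overlaid in blue."""
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return [sum(k((g - e) / bandwidth) for e in errors)
            / (len(errors) * bandwidth) for g in grid]

# errors: poll's Obama share minus the model's estimate (made-up values)
errors = [0.012, -0.008, 0.020, -0.015, 0.003, 0.007, -0.002]
grid = [x / 100 for x in range(-4, 5)]  # -0.04 through 0.04
for g, d in zip(grid, gaussian_kde(errors, 0.01, grid)):
    print(f"{g:+.2f} {d:6.1f}")
```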

It may also help to see the overall distribution of errors across the entire set of state polls. After all, if there is “bimodality” then why should it only show up in particular states? The distribution looks fine to me.

Site Updates, Every Day or Two

I’ll be updating my forecasts and trendlines from the latest polls every day or two between now and the election. I know everyone is anxious about the outcome. But there’s a limit to how much we can learn about the race on a daily basis. The high-quality polls take a few days to complete, and the data are noisy enough that we need a large number of polls before any trend can be confirmed.

The comments on the last post got a bit long – you’re welcome to pick up the conversation here, and I’ll try to respond where I can.

Edit: The site has started getting a lot of traffic, and sometimes isn’t loading properly. Apologies; I’ll see if there’s anything I can do about it.

Into the Home Stretch

With the debates complete, and just two weeks left in the campaign, there’s enough state-level polling to know pretty clearly where the candidates currently stand. If the polls are right, Obama is solidly ahead in 18 states (and DC), totaling 237 electoral votes. Romney is ahead in 23 states, worth 191 electoral votes. Among the remaining battleground states, Romney leads in North Carolina (15 EV); Obama leads in Iowa, Nevada, New Hampshire, Ohio, and Wisconsin (44 EV); and Florida, Virginia, and Colorado (51 EV) are essentially tied. Even if Romney takes all of these tossups, Obama would still win the election, 281-257.
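For anyone keeping score, the electoral-vote arithmetic is simple enough to check directly (all figures as given above):

```python
# Electoral-vote tally from the state polls, as described in this post.
obama_solid, romney_solid = 237, 191
romney_nc = 15       # North Carolina, leaning Romney
obama_leaners = 44   # IA, NV, NH, OH, WI
tossups = 51         # FL, VA, CO -- credited to Romney in the worst case

obama = obama_solid + obama_leaners          # 281
romney = romney_solid + romney_nc + tossups  # 257, even sweeping the tossups
print(obama, romney, obama + romney)         # 281 257 538
```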

The reality in the states – regardless of how close the national polls may make the election seem – is that Obama is in the lead. At the Huffington Post, Simon Jackman notes “Obama’s Electoral College count lies almost entirely to the right of 270.” Sam Wang of the Princeton Election Consortium recently put the election odds “at about nine to one for Obama.” The DeSart and Holbrook election forecast, which also looks at the current polls, places Obama’s re-election probability at over 85%. Romney would need to move opinion by another 1%-2% to win – but voter preferences have been very stable for the past two weeks. And if 1%-2% doesn’t seem like much, consider that Romney’s huge surge following the first debate was 2%, at most.

From this perspective, it’s a bit odd to see commentary out there suggesting that Romney should be favored, or that quantitative, poll-based analyses showing Obama ahead are somehow flawed, biased, or not to be believed. It’s especially amusing that the target of this criticism is the New York Times’ Nate Silver, whose FiveThirtyEight blog has, if anything, been unusually generous to Romney’s chances all along. Right now, his model gives Romney as much as a 30% probability of winning if the election were held today. Nevertheless, The Daily Caller, Commentary Magazine, and especially the National Review Online have all recently run articles accusing Silver of being in the tank for the president. Of all the possible objections to Silver’s modeling approach, this certainly isn’t one that comes to my mind. I can only hope those guys don’t stumble across my little corner of the Internet.

Model Checking

One of the fun things about election forecasting from a scientific standpoint is that on November 6, the right answer will be revealed, and we can compare the predictions of our models to the actual result. It’s not realistic to expect any model to get exactly the right answer – the world is just too noisy, and the data are too sparse and (sadly) too low quality. But we can still assess whether the errors in the model estimates were small enough to warrant confidence in that model, and make its application useful and worthwhile.

With that said, here are the criteria I’m going to be using to evaluate the performance of my model on Election Day.

  1. Do the estimates of state opinion trends make sense? Although we won’t ever know exactly what people were thinking during the campaign, the state trendlines should at least pass through the center of the data. This validation also includes checking that the residual variance in the polls matches theoretical expectations – which, so far, it has.
  2. How large is the average difference between the state vote forecasts and the actual outcomes? And did this error decline in a gradual manner over the course of the campaign? In 2008, the average error fell from about 3% in July to 1.4% on Election Day. Anything in this neighborhood would be acceptable.
  3. What proportion of state winners were correctly predicted? Since what ultimately matters is which candidate receives a plurality in each state, we’d like this to be correct, even if the vote share forecast is a bit off. Obviously, states right at the margin (for example, North Carolina, Florida, and Colorado) are going to be harder to get right.
  4. Related to this, were the competitive states identified early and accurately? One of the aims of the model is to help us distinguish safe from swing states, to alert us where we should be directing most of our attention.
  5. Do 90% of state vote outcomes fall within the 90% posterior credible intervals of the state forecasts? This gets at the uncertainty in the model estimates. I use a 90% interval so that there’s room to detect underconfidence as well as overconfidence in the forecasts. In 2008, the model was a bit overconfident. For this year, I’ll be fine with 80% coverage; if it’s much lower than that, I’ll want to revisit some of the model assumptions. (A minimal version of this coverage check is sketched after the list.)
  6. How accurate was the overall electoral vote forecast? And how quickly (if at all) did it narrow in on the actual result? Even if the state-level estimates are good, there might be an error in how the model aggregates those forecasts nationally.
  7. Was there an appropriate amount of uncertainty in the electoral vote forecasts? Since there is only one electoral vote outcome, this will involve calculating the percentage of the campaign during which the final electoral vote was within the model’s 95% posterior credible interval. Accepting the possibility of overconfidence in the state forecasts, this should not fall below 85%-90%.
  8. Finally, how sensitive were the forecasts to the choice of structural prior? Especially if the model is judged to have performed poorly, could a different prior specification have made the difference?
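Here is the coverage check from item 5 in miniature, using hypothetical state outcomes and interval endpoints purely for illustration:

```python
def interval_coverage(results, lower, upper):
    """Share of actual state outcomes falling inside their credible intervals."""
    inside = sum(lo <= y <= hi for y, lo, hi in zip(results, lower, upper))
    return inside / len(results)

# Hypothetical: actual Obama vote shares vs. 90% interval endpoints per state.
actual = [0.52, 0.48, 0.51]
lo = [0.50, 0.45, 0.47]
hi = [0.55, 0.50, 0.52]
print(interval_coverage(actual, lo, hi))  # want about 0.9; below 0.8 is a red flag
```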

If you can think of any I’ve left off, please feel free to add them in the comments.

Aftermath of the First Debate

Polls released since the first presidential debate last week indicate as rapid a shift in voter preferences as we’ve seen all campaign. My model estimates a swing of about 1.5% in Romney’s direction – and since every point Romney gains is a point Obama loses, that amounts to a net narrowing of the margin of about 3%. Although the polls also suggest that this movement began a few days before the debate, it’s still a large effect.

What to make of it? First, and most importantly, although Romney may have cut into Obama’s lead, Obama is still comfortably ahead. The most important state to win this year is arguably Ohio – and there Obama holds on to 51.7% of the major-party vote. According to my model, Obama had been outperforming the fundamentals (which point to his reelection) prior to the debate – and now he’s running just slightly behind them. As a result, the model’s long-term forecast continues to show an Obama victory, with 332 electoral votes.

Second, there’s reason to believe that the initial estimates of Romney’s post-debate surge are going to weaken as more polls are released today and tomorrow. The surveys that made it into my Sunday morning update included a number of one-day samples, which tend to draw in particularly enthusiastic respondents – in this case, Republicans excited about Romney’s debate performance. Moreover, the survey methodology used by these firms – in which interviews are conducted using recorded scripts, to save time and money – also shows a Republican lean. And if anything, my model gives slightly greater weight to automated polls, simply because they tend to have larger sample sizes.
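To see why sample size alone produces this weighting: under pure random sampling, a poll’s information content – its inverse variance – grows linearly with the number of respondents. A rough illustration (not the model’s actual code):

```python
def precision_weight(p, n):
    """Inverse-variance weight implied by binomial sampling error:
    information grows linearly with sample size."""
    return n / (p * (1 - p))

# A 2,000-respondent automated poll carries four times the weight
# of a 500-respondent live-interviewer poll, all else equal.
print(precision_weight(0.51, 2000) / precision_weight(0.51, 500))  # 4.0
```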

The point isn’t that these polls are “wrong” – only that this is a situation where it would be wise to wait for more information before reaching any strong conclusions. My model treats every poll equally, regardless of how it was fielded, or by whom. The reason I don’t try to “correct” for potential errors in the polls isn’t because I don’t believe they exist – but because I don’t believe those adjustments can be estimated reliably enough to make much of a difference. (Consider how wide the error bars are on Simon Jackman’s house effects estimates, for example.) Instead, I assume there will eventually be enough polling in all 50 states for these errors to cancel out. Usually that is a pretty safe assumption, but I don’t think it’s happened yet.

I’m going to embed the current trend estimates for Virginia and Florida here in this post, so we can compare them to later estimates, and see if I’m right.


Finally, making sense of this small batch of post-debate polls highlights the value of using an informative Bayesian prior. If Romney is really experiencing a sudden swing in the polls, then we already have some idea of how quickly that could reasonably happen, based on previous trends. It’s certainly possible that something about public opinion has fundamentally changed within the past week. But if that’s the case, we should require extra evidence to overturn what we previously thought was going on.
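In the simplest conjugate-normal form – a stripped-down stand-in for what the full model does, with all numbers invented for illustration – the estimated swing is a precision-weighted average of the prior and the new polls, so a noisy 3-point “swing” gets pulled most of the way back toward zero:

```python
def posterior(prior_mean, prior_sd, obs, obs_sd):
    """Conjugate normal update: a precision-weighted average."""
    w_prior, w_obs = 1 / prior_sd**2, 1 / obs_sd**2
    mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    sd = (w_prior + w_obs) ** -0.5
    return mean, sd

# Prior: day-to-day swings have historically been small (mean 0, sd 0.5 pt).
# One thin day of polls suggests a 3-point drop, measured with sd 1.5 pts.
print(posterior(0.0, 0.5, -3.0, 1.5))  # posterior mean ~ -0.3: mostly shrunk away
```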

Look for another site update Tuesday morning.
