Polling Analysis and Election Forecasting


Final Estimates Tomorrow Morning

I entered nearly 50 new state polls in the most recent model update, posted earlier today. There have been over 30 additional polls released since then. I’ll wait a few more hours to see if any more come out, then run the model one more time, overnight. My final estimates will be ready in the morning.

In the meantime, you might have noticed that my EV forecast for Obama inched downward for the first time in weeks, from 332 to 326. That’s the median of my election simulations, but it doesn’t correspond to a particularly likely combination of state-level outcomes. Instead, it reflects the declining probability that Obama will win Florida (now essentially a 50-50 proposition), and Obama’s continuing deficit in North Carolina. I’ve updated the title on the chart in the banner to make this clear.

Depending on how things go with the final run, I’ll keep updating the chart as I have been, using the median. But I’ll also create a more true-to-life forecast, based on assigning each state to its most likely outcome. With Florida (and its 29 electoral votes) right on the knife edge, this will either be Obama 332-206 if the model projects an Obama victory there, or Obama 303-235 if the model shows Obama behind. I’ll also have all sorts of other tables and charts ready to go for comparing the election results as they’re announced.
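
To make the distinction concrete, here's a toy sketch in Python of the two summaries: the median of simulated EV totals versus the total from assigning each state to its favored candidate. The win probabilities and the 272-vote "safe" base below are invented for illustration – they are not the model's actual estimates.

```python
# A toy version of the two summaries described above. The win
# probabilities and the 272-EV "safe" base are invented for
# illustration -- they are not the model's actual estimates.
import numpy as np

state_ev = {"FL": 29, "OH": 18, "VA": 13, "NC": 15}
p_obama  = {"FL": 0.50, "OH": 0.75, "VA": 0.70, "NC": 0.30}
base_ev  = 272   # electoral votes assumed safe for Obama (hypothetical)

rng = np.random.default_rng(0)
sims = 10_000
totals = np.full(sims, base_ev)
for state, ev in state_ev.items():
    # Each simulation flips a weighted coin for each competitive state.
    totals += ev * (rng.random(sims) < p_obama[state])

print("Median of simulated EV totals:", int(np.median(totals)))

# The "most likely outcome" forecast instead assigns each state to
# whichever candidate is favored to win it.
modal = base_ev + sum(ev for s, ev in state_ev.items() if p_obama[s] >= 0.5)
print("Modal-assignment EV total:", modal)
```

The median summarizes the whole distribution of simulated maps, so it can sit at a total whose specific combination of state outcomes is itself improbable; the modal assignment instead reports the single most likely map.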

Pollsters May Be Herding

The accuracy of my election forecasts depends on the accuracy of the presidential polls. As such, a major concern heading into Election Day is the possibility that polling firms, out of fear of being wrong, are looking at the results of other published surveys and weighting or adjusting their own results to match. If pollsters are engaging in this sort of herding behavior – and, as a consequence, converging on the wrong estimates of public opinion – then there is danger of the polls becoming collectively biased.

To see whether this is happening, I’ll plot the absolute value of the state polls’ errors over time. (The error is the difference between a poll’s reported proportion supporting Obama and my model’s estimate of the “true” population proportion.) Herding would be indicated by a decline in the average survey error towards zero – representing no difference from the consensus mean – over the course of the campaign. This is exactly what we find. Although there has always been a large amount of variation in the polls, the underlying trend – shown by the lowess smoother line, in blue – reveals that the average error in the polls started at 1.5% in early May, but is now down to 0.9%.
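
For readers who want to replicate the idea, here's a rough sketch of the check in Python, using simulated poll errors whose spread shrinks over the campaign – the dates, error scale, and variable names are all invented, not the real dataset:

```python
# A sketch of the herding check described above, on simulated data.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
days = rng.uniform(0, 180, 600)            # hypothetical day of campaign
spread = 1.9 - 0.8 * days / 180            # error SD shrinking over time,
                                           # so mean |error| falls ~1.5 -> 0.9
abs_err = np.abs(rng.normal(0.0, spread))  # absolute survey errors

# Lowess returns the fitted trend, sorted by the x variable.
trend = lowess(abs_err, days, frac=0.5)

plt.scatter(days, abs_err, s=8, alpha=0.3)
plt.plot(trend[:, 0], trend[:, 1], color="blue", lw=2)
plt.xlabel("Day of campaign")
plt.ylabel("Absolute survey error (points)")
plt.show()
```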

How worried do we need to be? Herding around the wrong value is potentially much worse than any one or two firms having an unusual house effect. But even if the variance of the polls is decreasing, they might still have the right average. An alternative explanation for this pattern would be an increase in sample sizes (which lowers sampling variability), but that hasn’t been the case. Unfortunately, there weren’t enough polls to tell whether the pattern was stronger in more frequently polled states, or whether particular firms were more prone to follow the pack. Hopefully, this minor trend won’t mean anything, and the estimates will be fine. We’ll know soon.

Another Look at Survey Bias

Questions continue to be raised about the accuracy of the polls. Obviously, in just a few more days, we’ll know which polls were right (on average) and which were wrong. But in the meantime, it’s useful to understand how the polls are – at the very least – different from one another, and form a baseline set of expectations to which we can compare the election results on Tuesday. The reason this question takes on special urgency now is that there’s essentially no time left in the campaign for preferences to change any further: if the state polls are right, then Obama is almost certain to be reelected.

In previous posts, I’ve looked at both house effects and error distributions (twice!), but I want to return to this one more time, because it gets to the heart of the debate between right-leaning and left-leaning commentators over the trustworthiness of the polls.

A relatively small number of survey firms have conducted a majority of the state polls, and therefore have a larger influence on the trends and forecasts generated by my model. Nobody disputes that there have been evident, systematic differences in the results of these major firms: some leaning more pro-Romney, others leaning more pro-Obama. As I said at the outset, we’ll know on Election Day who’s right and wrong.

But here’s a simple test. Hundreds of smaller organizations have each released fewer than a half-dozen polls; most have released only a single poll. We can’t reliably estimate house effects for all of these firms individually. However, we can probably safely assume that, in aggregate, they aren’t all ideologically in sync – so whatever biases they have will tend to cancel out when pooled together. We can then compare the overall error distribution of the smaller firms’ surveys to the error distributions of the larger firms’ surveys. (The survey error is simply the difference between the proportion supporting Obama in a poll and my model’s estimate of the “true” proportion in that state on that day.)

If the smaller firms’ errors are distributed around zero, then the left-leaning firms are probably actually left-leaning, and the right-leaning firms are probably actually right-leaning, and this means that they’ll safely cancel each other out in my results, too. On the other hand, if the smaller firms’ error distribution matches either the left-leaning or the right-leaning firms’ error distribution, then it’s more likely the case that those firms aren’t significantly biased after all, and it’s the other side’s polls that are missing the mark.
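
As a sketch of how the comparison works, the snippet below overlays kernel density estimates for a pooled group of small firms and two large firms – one unbiased, one leaning Romney. All the data, firm labels, and bias values are simulated for illustration; they are not the actual polls:

```python
# A sketch of the pooled-comparison test on simulated polls. Firm names,
# biases, and poll counts are invented; the real test uses actual surveys.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
groups = {
    "Smaller firms (pooled)": rng.normal(0.0, 1.6, 400),   # no net bias
    "Large firm A":           rng.normal(0.0, 1.6, 80),    # unbiased
    "Large firm B":           rng.normal(-1.5, 1.6, 80),   # leans Romney
}

grid = np.linspace(-7, 7, 400)
for label, errors in groups.items():
    plt.plot(grid, gaussian_kde(errors)(grid), label=label)
plt.axvline(0, ls=":", color="grey")
plt.xlabel("Survey error (positive = more pro-Obama)")
plt.ylabel("Density")
plt.legend()
plt.show()
```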

What do we find? This set of kernel density plots (smoothed histograms) shows the distribution of survey errors among the seven largest survey organizations, and in grey, the distribution of errors among the set of smaller firms. The smaller firms’ error distribution matches that of Quinnipiac, SurveyUSA, YouGov, and PPP. The right-leaning firms – Rasmussen, Gravis Marketing, and ARG – are clearly set apart on the pro-Romney side of the plot.

If, on Election Day, the presidential polls by Quinnipiac, SurveyUSA, YouGov, and PPP prove to be accurate, then the polls by Rasmussen, Gravis Marketing, and ARG will have consistently underestimated Obama’s level of support by 1.5% throughout the campaign. Right now, assuming zero overall bias, Florida is 50-50. The share of Florida polls conducted by Rasmussen, Gravis Marketing, and ARG? 20%. Remove those polls from the dataset, and Obama’s standing improves.

Four days to go.

Can We Trust the Polls?

If you believe the polls, Obama is in good shape for reelection. And my model’s not the only one showing this: you’ll find similar assessments from a range of other poll-watchers, too. The lead is clear enough that The New Republic’s Nate Cohn recently wrote, “If the polls stay where they are, which is the likeliest scenario, Obama would be a heavy favorite on Election Day, with Romney’s odds reduced to the risk of systemic polling failure.”

What would “systemic” polling failure look like? In this case, it would mean not only that some of the polls are overstating Obama’s level of support, but that most – or even all – of the polls have been consistently biased in Obama’s favor. If this is happening, we’ll have no way to know until Election Day. (Of course, it’s just as likely that the polls are systematically underestimating Obama’s vote share – but then Democrats have even less to worry about.)

A failure of this magnitude would be major news. It would also be a break with recent history. In 2000, 2004, and 2008, presidential polls conducted just before Election Day were highly accurate, according to two studies by Michael Traugott, along with work by Pickup and Johnston, and by Costas Panagopoulos. My own model in 2008 produced state-level forecasts based on the polls that were accurate to within 1.4% on Election Day, and to within 0.4% in the most competitive states.

Could this year be different? Methodologically, survey response rates have fallen below 10% – but it’s not evident how this would help Obama in particular. Surveys conducted using automatic dialers (rather than live interviewers) often have even lower response rates, and are prohibited from calling cell phones – but, again, this tends to produce a pro-Republican, not a pro-Democratic, lean. And although there are certainly house effects in the results of different polling firms, it seems unlikely that Democratic-leaning pollsters would intentionally distort their results to such an extent that they discredit themselves as reputable survey organizations.

My analysis has shown that despite these potential concerns, the state polls appear to be behaving almost exactly as we should expect. Using my model as a baseline, 54% of presidential poll outcomes are within the theoretical 50% margin of error, 93% are within the 90% margin of error, and 96% are within the 95% margin of error. This is consistent with a pattern of random sampling plus minor house effects.
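
As a point of reference, here's a small simulation of what that coverage pattern should look like under pure random sampling with no house effects at all; the poll sample sizes are invented:

```python
# Coverage of theoretical margins of error under pure sampling error.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = rng.integers(500, 1500, size=1000)   # hypothetical poll sample sizes
se = np.sqrt(0.25 / n)                   # max sampling SD of a proportion
err = rng.normal(0.0, se)                # pure sampling error

for level in (0.50, 0.90, 0.95):
    z = norm.ppf(0.5 + level / 2)        # e.g. 1.96 at the 95% level
    covered = np.mean(np.abs(err) <= z * se)
    print(f"{level:.0%} margin of error: {covered:.0%} of polls covered")
```

The observed figures in the real polls (54%, 93%, 96%) sit close to these nominal targets, which is the pattern described above.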

Nevertheless, criticisms of the polls – and of those of us who are tracking them – persist. One of the more creative claims about why the polling aggregators might be wrong this year comes from Sean Trende of RealClearPolitics and Jay Cost of The Weekly Standard. Their argument is that the distribution of survey errors has been bimodal – different from the normal distribution of errors produced by simple random sampling. If true, this would suggest that pollsters are envisioning two distinct models of the electorate: one more Republican, the other more Democratic. Presuming one of these models is correct, averaging all the polls together – as I do, and as do the Huffington Post and FiveThirtyEight – would simply contaminate the “good” polls with error from the “bad” ones. Both Trende and Cost contend the “bad” polls are those that favor Obama.

The problem with this hypothesis is that even if it were true (and the error rates suggest it’s not), there would be no way to observe evidence of bimodality in the polls unless the bias were far larger than anybody is currently claiming. The reason is that most of the error in the polls will still be due to random sampling variation, which no pollster can avoid: a mixture of two normal distributions only separates into two visible peaks when the means differ by more than about twice the sampling standard deviation. To see this, suppose that half the polls were biased 3% in Obama’s favor – a lot! – while half were unbiased. Then we’d have two separate distributions of polls: the unbiased group (red) and the biased group (blue), which we combine to get the overall distribution (black). The final distribution is pulled to the right, but it still has only one peak.
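
Here's that two-group example worked numerically – a minimal sketch assuming a 3-point bias in half the polls and a sampling standard deviation of roughly 1.6 points (about what a 1,000-person sample implies):

```python
# Mixture of unbiased and 3-point-biased polls: still a single peak,
# because the two means are closer together than twice the sampling SD.
import numpy as np
from scipy.stats import norm

sd = 1.6                                  # ~sqrt(0.25/1000), in points
x = np.linspace(-8.0, 11.0, 2000)
mixture = 0.5 * norm.pdf(x, 0.0, sd) + 0.5 * norm.pdf(x, 3.0, sd)

# Count strict local maxima of the mixture density.
peaks = np.sum((mixture[1:-1] > mixture[:-2]) & (mixture[1:-1] > mixture[2:]))
print("Number of peaks:", peaks)          # prints 1
```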

Of course, it’s possible that in any particular state, with small numbers of polls, a histogram of observed survey results might just happen to look bimodal. But this would have to be due to chance alone. To conclude from it that pollsters are systematically cooking the books only proves that apophenia – the experience of seeing meaningful patterns or connections in random or meaningless data – is alive and well this campaign season.

The election is in a week. We’ll all have a chance to assess the accuracy of the polls then.

Update: I got a request to show the actual error distributions in the most frequently polled states. All errors are calculated as a survey’s reported Obama share of the major-party vote, minus my model’s estimate of the “true” value in that state on the day of the survey. Positive errors indicate polls that were more pro-Obama; negative errors, polls that were more pro-Romney. To help guide the eye, I’ve overlaid kernel density plots (histogram smoothers) in blue. The number of polls in each state is shown in parentheses.

It may also help to see the overall distribution of errors across the entire set of state polls. After all, if there is “bimodality” then why should it only show up in particular states? The distribution looks fine to me.

Site Updates, Every Day or Two

I’ll be updating my forecasts and trendlines from the latest polls every day or two between now and the election. I know everyone is anxious about the outcome, but there’s a limit to how much we can learn about the race on a daily basis. The high-quality polls take a few days to complete, and the data are noisy enough that we need a large number of polls before any trend can be confirmed.

The comments on the last post got a bit long – you’re welcome to pick up the conversation here, and I’ll try to respond where I can.

Edit: The site has started getting a lot of traffic, and sometimes isn’t loading properly. Apologies; I’ll see if there’s anything I can do about it.
