Or rather, I do not think it means what it is commonly thought to mean.
The polling numbers have changed significantly in response to events, this much is true. But the bounces we've seen do not seem to reflect people changing their minds much - at least, most of the bounce can be accounted for in other ways. Instead, the polls are tracking how much partisan voters want to answer polls. All we can tell is that the race is fairly close, and always has been. The only thing to do is drag our voters out to vote.
The final polling average may very well come close to the actual margin again, but the road to get there has been misleading. It may be that how much partisan voters want to answer polls in September is highly correlated with how the entire voting universe would cast their ballots at that time, but I do not think so.
Why?
I say this because the regional breakdown of the polls was unrealistic during Obama's post-convention bounce - and during the summer, too, for that matter. The debate bounce was mostly caused by a correction of this regional pattern combined with an increase in conservatives answering their phones, at least in Daily Kos polling. And finally, response rates are so low that poll respondents are disproportionately highly engaged, politically involved voters. Poll respondents are not representative of voters in general, at least at this stage of the election.
That's not to say polls are useless. Far from it. They still have a decent enough track record for us to be sure the presidential election is close, nationally, and in certain key states. But polls taken weeks and months prior to the election appear to have more error in their margin than we would like to think. And even the averages of polls taken in the final weeks usually miss the margin by more than a point, in races with margins of less than ten points (although this error may be predictable).
Details below.
Unrealistic regional numbers.
Below is a table showing Obama's margin in several regions. Gallup's regional definitions were used, and the Solid South is simply the uncontested Southern states, or the South without FL, NC, and VA.
I've also included what we would have expected if the national numbers were the same as 2008, based on Obama underpolling in blue states and overpolling in red states.
We see that over the summer, Obama was doing even better than expected in the South, while performing worse nationally than in 2008, in Daily Kos polling. After the convention though, this became absolutely ridiculous, with the Daily Kos polls and Pew showing essentially a tie in the South.
After the first debate, Obama cratered in all regions, but this time the regional pattern of support was at least relatively self-consistent in both Daily Kos and Gallup - that is, an across-the-board decrease of 4 to 16 points in each region (compared to expected values), which is a reasonable range given the sample sizes involved.
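For scale: the sampling error on a margin (Obama minus Romney) in a regional subsample is roughly twice the usual margin of error on a single percentage. Here's a quick sketch - the subsample sizes are made up for illustration, since the actual regional n's aren't given here:

```python
import math

def margin_moe(n, p=0.5, z=1.96):
    """Approximate 95% error on a two-candidate margin (D% - R%), in points.
    The margin's error is roughly twice the error on a single proportion."""
    return 2 * z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical regional subsample sizes for a ~1000-person national poll
for region, n in [("South", 350), ("Midwest", 250), ("East", 200)]:
    print(f"{region}: n={n}, margin MoE = +/-{margin_moe(n):.0f} points")
```

With only a few hundred respondents per region, a margin can swing 10+ points on sampling noise alone, which is why a 4-to-16-point spread across regions is plausible.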
Now, after looking at this table, do we really believe Obama was actually overperforming in the South this summer, and tied after the convention? Or, alternatively, were white conservatives always going to vote, but simply not motivated enough to answer the polls prior to the debate? This, of course, would imply that it has always been a very close election nationally, but it just hasn't been reflected in the polls.
Likewise, one could argue that post-debate, Obama couldn't have been doing much worse in the South than he did in 2008. I don't think this is a very strong argument, because the regional pattern of support then was at least consistent with my expectations after the debate. I wouldn't dismiss it out of hand, however.
In any case, most polls have now moved a little to an intermediate position, after Obama supporters crawled out from under their beds following the second debate.
Regional distribution shift.
Another thing that shifted after the first debate was regional distribution. During the month of September, 34% of Daily Kos poll respondents were from the South. After the debate, this jumped to 36%. The combination of an increase of respondents from the South and a huge decrease in Obama's margin in the South can account for about half of the national shift post-debate. Again, were these voters really going to sit out the election? Or was this just a shift in who is responding to the polls?
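To see how a change in who answers can move the national number, treat the national margin as a weighted average of regional margins. Everything below except the 34%-to-36% South share is a made-up illustration, not the actual Daily Kos crosstabs:

```python
# Toy decomposition: how much of a national margin shift comes from
# (a) who answers the poll (regional mix) vs (b) margin changes within regions.
# Regional margins are HYPOTHETICAL; only the South share (34% -> 36%)
# comes from the actual polling discussed above.

pre  = {"South": (0.34, -8),  "Rest": (0.66, +10)}  # (share, Obama margin, pts)
post = {"South": (0.36, -18), "Rest": (0.64, +4)}

def national(regions):
    return sum(share * margin for share, margin in regions.values())

shift = national(post) - national(pre)
# The South's combined contribution (share change plus margin change):
south_shift = post["South"][0] * post["South"][1] - pre["South"][0] * pre["South"][1]
print(f"national shift: {shift:.1f} pts; South's share of it: {south_shift / shift:.0%}")
```

With numbers in this ballpark, the South alone accounts for about half of the national movement, matching the pattern described above.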
Ideology shift.
Likewise, we see a shift in the self-identification of respondents. Pre-debate, an average of 40% called themselves conservative; post-debate, that number is 43%. These labels do change, unlike, say, age - but it may also simply indicate more conservatives answering their phones, while moderates and liberals felt no such inclination following Obama's first debate performance.
If we break the numbers down even further by party and ideology, we see that about half the change post-debate comes from a 2% increase in Conservative Republican respondents in conjunction with a 2% decrease in Moderate Democrats. Meanwhile, opinions within party ideological groups only changed significantly for one group: Moderate Republicans shifted in Romney's favor.
Oversampling political junkies.
That brings us to the question, just who exactly is answering these polls? And the answer is: not your average voter. In last week's Daily Kos poll, 80% reported watching the Vice Presidential debate. In this week's poll, 83% said they watched the second Presidential debate, and 83% said they voted in 2010. That's simply not a representative sample of the electorate as a whole. If we guess that around 135 million people will vote this year, then only about 60% of 2012 voters could have voted in the 2010 elections. Meanwhile, only about 50% could have watched the second debate, and about 40% watched the VP debate. These are vastly different numbers than we find in the polling sample.
The Democracy Corps poll shows similar numbers - 76% of 2012 likely voters voted in 2010 - so we can't blame the problem on polling methodology. Polls simply do not represent the complete universe of voters. Rather, they tell us the partisan intensity gap - which would relate to the number and enthusiasm of strong partisans on each side.
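Putting the numbers from the last two paragraphs side by side shows how skewed the samples are toward high-engagement voters - and these are minimum ratios, since the ceilings assume every 2010 voter and every debate watcher turns out in 2012:

```python
# Population ceilings and sample shares, both taken from the figures above.
ceiling    = {"voted 2010": 0.60, "2nd debate": 0.50, "VP debate": 0.40}
kos_sample = {"voted 2010": 0.83, "2nd debate": 0.83, "VP debate": 0.80}

for group in ceiling:
    ratio = kos_sample[group] / ceiling[group]
    print(f"{group}: sample {kos_sample[group]:.0%} vs ceiling {ceiling[group]:.0%}"
          f" -> at least {ratio:.1f}x overrepresented")
```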
But somehow polling still works!
Doesn't it?
Well - I was going to start this section off asserting that it does - but then I went and actually checked.
Sure, polling averages usually work if you just want to predict the winner. But from 2004-2006, out of 47 races where the final margin was 10 points or less, only 23 - just 49% - had a polling average within 1% of the final tally.
And in 2008-2010, out of 45 races ending under 10 points, just 13 - that's only 29% - had a polling average within 1% of the correct value.
Uh-oh.
What's going on?
Here are three possible explanations I can think of:
1. Polls are simply getting worse, and are way off this year, because response rates - currently at 9% for Pew - are about half of what they (presumably) were in 2008. But don't get excited - the error could favor Republicans or Democrats, in theory.
2. Early polls are not representative of the entire electorate, but final polls are. And final polls are what get compared to actual results, so that's why we never notice that early polls are so far off.
3. The partisan intensity gap that polls measure is directly correlated with the partisan vote gap, and in close elections in a balanced electorate, all other voters simply split 50:50, leaving the partisan vote gap as the final election margin (the theory of the base election). In electorates that have a lean towards one party or another, the non-partisan voters vote in accordance with that lean, leaving the polls in error by an amount proportional to the state's natural lean. Which, by the way, is what we actually see.
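Here's a toy version of explanation 3, with made-up numbers. If polls capture only the partisan gap, and nonpartisan voters split 50:50 plus the state's natural lean, then the poll's error grows with that lean and vanishes in a balanced electorate:

```python
# Toy model of the "base election" theory (explanation 3). The partisan share
# and state leans below are hypothetical, chosen only to show the mechanism.

def predicted_error(partisan_share, state_lean):
    """Final margin minus polled (partisan) margin, in points.
    Nonpartisans break with the state's natural lean instead of 50:50."""
    nonpartisan_share = 1 - partisan_share
    return nonpartisan_share * state_lean

for state, lean in [("balanced state", 0), ("lean D +6", 6), ("lean R -8", -8)]:
    err = predicted_error(partisan_share=0.7, state_lean=lean)
    print(f"{state}: expected poll error = {err:+.1f} points")
```

In this sketch the poll is exactly right where the electorate is balanced, and off by an amount proportional to the lean everywhere else - the pattern the theory predicts.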