
It was infuriating: day after day, time after time during the run-up to the election--things would look great for team Obama, and then...ka-boom: another Rasmussen poll. A dagger in the heart. Yes, I tried to ignore them, aware of their history of conservative bias, and I succeeded to some extent. But it wasn't easy, especially with so many wise, sage commenters (here, on Twitter, and elsewhere) admonishing against ignoring any polls, ever.

But after a while, it got so ridiculous (Colorado: Romney +3! Wisconsin: Tied!) that my thinking changed. "Good!" I thought to myself. "Let the Razz-holes keep publishing these ridiculous numbers--the more ridiculous the better--so that they'll be completely discredited when the actual results come in!"

With the dust now close to having settled (although votes are still being counted, and the results continue to move in Obama's favor as counting continues), I thought I'd do some very rudimentary number-crunching and have a look at just how biased and inaccurate Rasmussen's numbers were. So have a look. I checked the current results from all the swing states (I'm classifying as a "swing state" each of the twelve states listed in that category by David Wasserman on his public spreadsheet, which contains the most recent vote counts of which I'm aware), rounded to two decimal places. I then listed Rasmussen's final poll for each of these states, and for good measure included PPP, whose results were almost without exception described by the traditional media as "left-leaning." I then calculated the "bias" (the difference between the pollster's result and the actual result) for each state for both pollsters, and finally the average bias across all twelve "swing states" for each pollster.

For purposes of simplicity, each cell of the spreadsheet gives the margin between Obama and Romney, with positive numbers indicating an advantage for Obama and negative ones an advantage for Romney. (An asterisk next to a state name means that state has certified its final results.) So in Colorado, for example, Obama is winning by 5.38 percentage points; PPP's last poll had Obama by 6 points there, and Rasmussen's last poll had Romney by 3.
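For concreteness, the margin and bias arithmetic can be sketched in a few lines of Python (the variable names are mine; the Colorado figures are the ones quoted above):

```python
# Margin convention: positive numbers favor Obama, negative favor Romney.
# "Bias" here is the pollster's final margin minus the actual margin, so a
# negative bias means the poll leaned toward Romney relative to the result.

# Colorado figures as quoted in this diary (illustrative only).
actual = 5.38        # Obama +5.38 in the counted results
ppp_final = 6.0      # PPP's last poll: Obama +6
ras_final = -3.0     # Rasmussen's last poll: Romney +3

ppp_bias = ppp_final - actual    # about +0.62: a tiny lean toward Obama
ras_bias = ras_final - actual    # about -8.38: a huge lean toward Romney

print(f"PPP bias: {ppp_bias:+.2f}  Rasmussen bias: {ras_bias:+.2f}")
```

The per-pollster averages in the spreadsheet are just the mean of these per-state bias numbers.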

Have a look: the results make clear that Rasmussen absolutely and positively sucks. They were off by over eight points in Colorado, nearly seven in Iowa and Wisconsin, six in Virginia, five (rounded up) in Michigan and Nevada, and four in North Carolina and New Hampshire. And in every swing state--every single one--they were biased in favor of Romney. In fact, the Razz-holes only even predicted the correct winner in six of the twelve states! You would do as well flipping a coin as paying any attention to Rasmussen's polls! (What's worse, they went only 3 for 9 among the states most outlets considered swing states during the election--that is, omitting Pennsylvania, Michigan, and Minnesota--which means you'd have done better flipping a coin than taking Rasmussen seriously.) Their overall average bias in Romney's favor ended up being a ridiculous 4.43 percentage points.

Note how much closer to the mark PPP is--less than one point. And notice as well that this supposedly "left-leaning" pollster's very small bias was in Romney's favor.

So next time around, please, don't worry about Rasmussen. It really is sensible simply to ignore them, since you're demonstrably better off rolling dice or flipping coins.

12:35 PM PT: I've embedded the chart as an image, but would love to hear from anyone who knows how to embed the spreadsheet directly from Google Docs ...

Thu Nov 29, 2012 at 9:23 AM PT: Chart updated to reflect newly counted votes. Ohio and Wisconsin continue to move toward Obama, making Rasmussen's "tied" results look ever more absurd.



  •  Good diary (6+ / 0-)

    Semantic quibble: "bias" normally refers to the difference between one pollster and the consensus of pollsters, while the difference between polls and votes is "error".

    You can have a bias, in terms of a different likely voter model, and be better because of your bias.

    Ras, of course, is trying to drive the narrative, not produce results. I wonder if they tell the people who pay for their polling the truth and publish an entirely different set of numbers.

    Economics is a social *science*. Can we base future economic decisions on math?

    by blue aardvark on Wed Nov 28, 2012 at 08:37:13 AM PST

    •  Semantic quibble quibble (4+ / 0-)

      Bias is systematic error. Error generally refers to sampling error, that is the random effects that are generated because a single poll of a finite sample may not yield a sample mean which matches the population mean. Bias results from poll construction such that as the sample size gets large the sample mean does not converge to the population mean because the sample is not representative.

      We can use the bias of pollsters relative to other pollsters as a proxy for the true bias (which is unknown unless we get a census) before the election, but we are missing the one thing we really want which is the bias in the aggregate mean.

      The election provides the actual measure which the polling is attempting to predict. While individual polls should differ from the actual result following the distribution of the sample mean, on average the sampling errors should cancel out. The difference between the actual result and the sample mean is actually an unbiased measure of bias (which is what we are primarily looking for). It is also subject to some (much smaller) sampling error (it is possible that an unbiased pollster could have gotten unlucky by consistently drawing samples which favored one side as a random outcome).

      •  This is correct... (1+ / 0-)
        Recommended by:
        bluegrass50

        Now, let's take it a step further...

        What is being reported here is the average of the signed errors, which measures the average bias of the pollster towards one candidate or the other.

        We can also take the average of the unsigned errors -- the average of the absolute value of the difference between the poll and final result -- as an indication of how far off the pollster was, regardless of which direction. We can call this the mean error, as opposed to mean bias above.

        In this case, Ras's mean error is the same as the bias (actually, 4.47% now, using updated figures from the vote tally spreadsheet), precisely because ALL his errors were in the same direction. But PPP's mean error -- his average 'miss' -- is 1.95%, which is larger than the bias.

        One step further still -- statisticians weight larger errors more than smaller errors. After all, larger errors matter more. To do this we take the root-mean-square error -- take the difference between each state poll and the actual outcome, square that difference, average those, then take the square root of the result. This is the gold standard for how well or poorly an estimate has performed. The RMS error for PPP for the swing states is 2.44%; for Ras it is 4.95%.

        So using RMS, PPP is about twice as good as Ras for these states -- a very big deal, given the size of the effect we are trying to measure.
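        The three statistics described above can be computed in a few lines; this is a sketch with hypothetical error values (not the actual spreadsheet data), chosen to mimic a pollster whose misses all fall on the same side:

```python
import math

# Signed errors: (poll margin - actual margin) for a set of state polls.
# Hypothetical values for illustration; all negative, i.e. all misses
# lean toward the same candidate, as Rasmussen's did.
errors = [-8.4, -6.9, -6.8, -6.0, -4.7, -4.5]

# Mean bias: the signed average, where opposite-sign misses cancel out.
mean_bias = sum(errors) / len(errors)

# Mean (absolute) error: average size of the miss, regardless of direction.
mean_abs_error = sum(abs(e) for e in errors) / len(errors)

# RMS error: squares before averaging, so large misses are penalized more.
rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))

# With every error on the same side, |mean bias| equals the mean error;
# with mixed signs it would be strictly smaller, as in PPP's case above.
assert abs(abs(mean_bias) - mean_abs_error) < 1e-9
print(mean_bias, mean_abs_error, rms_error)
```

RMS is always at least as large as the mean absolute error, which is why it is the stricter of the two yardsticks.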

        Mark E. Miller // Kalamazoo Township Trustee // MI 6th District Democratic Chair

        by memiller on Thu Nov 29, 2012 at 09:01:19 AM PST

        [ Parent ]

        •  Can't really do this (0+ / 0-)

          Or at least you shouldn't.

          Different polls have different sample sizes and associated dispersion for the sampling error of the mean. As long as the polling is unbiased and the distribution of errors follows the expected distribution (about 2/3 within one standard deviation, about 95% within two) then the pollster has performed perfectly. We should not penalize pollsters who release polls with smaller samples or commend pollsters that got lucky on their statistical sampling.

          I can do much better than even the best pollster by doing what Nate Silver does, unskewing polls and using aggregates to get an unbiased and very precise measure of electoral intent. If I use that info to "improve" my polling I will perform better, but not provide as much actual information as a pollster that just does his job.

          Yes in predictive forecasting we use mean square error (or abs error depending on the loss function) on a hold out sample (actual results) to measure performance. That would be appropriate for measuring Nate against Sam against unskewed polls guy, but is not appropriate for evaluating pollsters, who are trying to give us independent reads on the electorate by collecting primary research data. They cannot be better than the sampling error without cheating, which makes their data useless.

          •  If it were simply that I was comparing (0+ / 0-)

            a single poll from one pollster against one from another, I would agree. It is not fair, so long as the result is within reasonable distance of the margin of error, to penalize someone for being unlucky.

            That is not what I am doing here! I am taking the mean error (or the RMS error) of a large number of polls. Thus, the differences in sampling error from one poll to another will be (largely) averaged out, and you are left with the systematic errors that are increasing variance over and above the irreducible sampling error, whether or not they are in the same direction (bias).

            These include all the machinations the pollsters are using to get to their reported results -- not just the likely voter screen, which they tell us about, but also the details of how they form their sample, which is largely opaque. These things matter, but are hard to evaluate except by comparing the overall performance of each pollster post hoc.

            Mark E. Miller // Kalamazoo Township Trustee // MI 6th District Democratic Chair

            by memiller on Thu Nov 29, 2012 at 10:32:19 AM PST

            [ Parent ]

            •  RMS errors do not "average out" (0+ / 0-)

              As you point out earlier we use RMS to penalize for larger errors. The result is that sampling errors from small sample polls outweigh smaller errors from big sample polls.

              The point is that if you want to take issue with the precision of polling results from a specific firm, the only reasonable methodology is to actually measure the dispersion of results relative to the expected sampling error. A firm that reports unbiased results because it is missing by large amounts with equal frequency would not have an error distribution that matched its expected one (way too many tail events), and that would be a valid source of concern. A firm that reports results with a very tight error distribution would be equally suspect for cheating in conducting or interpreting the results. Generally the average error in a poll should be around one standard deviation, which is probably around 2 points for these state polls. A perfect polling firm would have an RMS near that number, but would again be subject to a sample distribution. It looks like PPP is right in the ballpark we would expect for a pollster who did everything just about right.
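              A rough sketch of that check, with assumptions of mine (the sample size n = 2500 and 50/50 shares are illustrative, chosen so the standard error lands near the "around 2 points" figure above):

```python
import math

# Illustrative assumptions (not from this thread): a state poll with n
# respondents and candidate shares near 50/50, the worst case for variance.
n = 2500
p_a, p_b = 0.50, 0.50

# Standard error of the reported margin (p_a - p_b), in percentage points,
# under multinomial sampling: Var(p_a - p_b) = (p_a + p_b - (p_a - p_b)**2) / n.
se_margin = 100 * math.sqrt((p_a + p_b - (p_a - p_b) ** 2) / n)  # 2.0 points here

# A pollster whose RMS error over many comparable polls sits near se_margin
# is doing about as well as sampling theory allows; far above it suggests
# systematic problems, far below it suggests herding on other polls.
observed_rms = 4.95  # Rasmussen's swing-state RMS from the comment above
print(se_margin, observed_rms / se_margin)
```

By this yardstick an RMS around 2.4 (PPP's, per the comment above) is close to the theoretical floor, while one near 5 is well outside it.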

              •  Please read again. (0+ / 0-)

                I did not say that the sampling errors averaged out.

                I said that the DIFFERENCES in sampling error between pollsters would tend to average out, leaving you with differences that are caused by systematic sources of dispersion.

                If you fail to see any difference :) in those two statements, you are not understanding.

                And yes, I could have gone on to make the point that this holds only if the polls done by each pollster are of comparable size. We could do a comparison of the distribution of the errors as you suggest. Reading your second paragraph, I am in complete agreement, so I see that you are not mis-understanding the situation -- just what I am saying.

                However, without taking the time to do all that, and understanding that sample sizes are roughly comparable in this case (which I know because I tracked the state polls myself), the fact that the RMS error for PPP is one-half that of Rasmussen is highly significant, and I am pretty sure that a rigorous analysis would show that. If you want to do it yourself, be my guest.

                Mark E. Miller // Kalamazoo Township Trustee // MI 6th District Democratic Chair

                by memiller on Thu Nov 29, 2012 at 12:32:53 PM PST

                [ Parent ]

        •  excellent points about (0+ / 0-)

          unsigned average and root mean square error.  You beat me to it!

    •  Yes, it's true I'm unclear on this (4+ / 0-)

      Nate Silver used "house effect" to refer to the difference between a pollster and the pollster consensus. My impression (mostly from his posts) was that if a pollster's results relative to actual election outcome erred consistently in the same direction (as is the case with Rasmussen), then the term "bias" was appropriate.

      But, again, I admit that I'm somewhat out of my depth here and that my spreadsheet is rudimentary.

      Hey Mitt Romney : You're an obnoxious prick.

      by porktacos on Wed Nov 28, 2012 at 09:18:44 AM PST

      [ Parent ]

  •  Interestingly, since the election (3+ / 0-)

    Rasmussen has had the president's approval rating higher than some of the other pollsters. For example, they had it at 55% yesterday, compared to 51% in Gallup and 52% per CNN. Maybe they have tweaked their methodology post-election to more accurately reflect the electorate (if they used only RV's for the approval rating - some pollsters use all adults for that).

  •  Unfortunately, (4+ / 0-)
    Recommended by:
    exterris, Gooserock, Matt Z, blueyedace2

    there is always a market for telling people what they want to hear. Especially if denial of reality is already those particular people's stock in trade.

    Visit Lacking All Conviction, your patch of grey on those too-sunny days.

    by eataTREE on Wed Nov 28, 2012 at 08:38:14 AM PST

  •  As long as Dick Morris and Fox News are around... (4+ / 0-)

    ...the House of Ras will have someone around to take them seriously.

  •  Bet It Gets Worse Not Better In the Future (2+ / 0-)
    Recommended by:
    Matt Z, blueyedace2

    though the party and donors will do their own private polling so they don't get hoodwinked again.

    We are called to speak for the weak, for the voiceless, for victims of our nation and for those it calls enemy.... --ML King "Beyond Vietnam"

    by Gooserock on Wed Nov 28, 2012 at 08:56:04 AM PST

  •  asdf (4+ / 0-)

    I completely ignored Rasmussen this entire cycle. And when I posted my daily Electoral College summaries from the major predictors and trackers, two of my inclusions specifically excluded Rasmussen polls.

    Sadly, everything Communism said about itself was a lie. Even more sadly, everything Communism said about Capitalism was the truth.

    by GayIthacan on Wed Nov 28, 2012 at 09:03:03 AM PST

  •  Your Google-Docs Spreadsheet is not public (2+ / 0-)
    Recommended by:
    CocoaLove, porktacos

    Permission to view needed.

    Any way to slap a .png of the data table into the diary?

  •  This is about... (1+ / 0-)
    Recommended by:
    blueyedace2

    ...average for his performance over the last few elections.  

    He didn't get this race wrong.  He gets polling wrong.  

  •  Gallup was the bigger problem, media wise (7+ / 0-)

    I think most news organizations (which, by definition, excludes Faux News) mostly ignored Rasmussen. Of course, this didn't prevent Right Wing hack "commentators" from going on legit news shows and quoting Rasmussen polls whenever the discussion moved to the fact that Romney was behind in the polls. Typical RW Hack line, "No! That's not true! Rasmussen had Romney ahead in several key swing states this week...."

    The bigger problem in the perception game was Gallup. Even though they have been wrong, badly wrong, three elections in a row, sober minded legit news organizations like NPR quote them like they are still the "gold standard" of polling.

    See my diary: Gallup wrong 3 elections in a row

    How many times was it said in October/Early November something to the effect of, "No candidate who was behind in Gallup at such and such a date has gone on to win the Presidency...."? Of course, the key in that statement is that they are using the "Gallup" standard. Obama was actually never behind in October, November or ANY other month, ALL campaign.

    It was only the Gallup-Rasmussen duo that gave the perception of a phantom Romney "lead," not to mention it was the phony fuel of the dreaded "Mitt-Mentum"!

    •  Silver included Ras in his aggregate mix (0+ / 0-)

      as did Wang and others. Maddow always mentioned them in her regular round-up of swing-state polling.

      What all those reporters or analysts did was to assert the lean (house-effect) of those polls, or weight for the effect in their analysis.

      I think that was the right thing to do (beats going all unskewed on Ras to simply say, grain of salt).

      I do agree that Gallup being so far off the norm is a bigger story - precisely because they are otherwise treated as the gold standard of pollsters, based on past reputation.

      Sort of like Standard & Poors being the gold standard of ratings agencies though, in the end you do have to account for recent aberrations. As in, will Gallup actually have a good reputation going forward if their polls continue to skew away from the field - and the outcome of the election.

      •  Gallup is "gold" = Safe for TV (1+ / 0-)
        Recommended by:
        ItsSimpleSimon

        You would think that having Gallup blow three elections in a row would change media perception, but I'm not sure. Like you say, using Gallup is like quoting the S&P or the Dow Jones average. It's an old standby that everybody recognizes, whether they adapt to new times or not.

        By quoting a Gallup poll, nobody but the polling cognoscenti are going to question you. But if CBS or PBS started quoting the Rand Life Panel or Google Analytics, the right in particular would jump up and down and scream that they are using "junk" polls!! Never mind that both Rand and Google basically nailed this election in the head-to-head numbers.

    •  I'm hoping Gallup... (0+ / 0-)

      ...gets a huge "grain of salt" caveat from now by increasing segments of the traditional media because of their abysmal performance the last few cycles.

      It's now easy to demonstrate, with simple charts and graphs, that they're among the very worst pollsters in the business.

      Way to go, team Obama!

      by porktacos on Thu Nov 29, 2012 at 09:29:06 AM PST

      [ Parent ]

    •  Actually there was a Mitt-mentum... (0+ / 0-)

      but it was nothing more than a crystallization of GOP support and an insignificant second look from independents.
      ...and worse, this was mostly just the South coming home, and not much anyone else...

      "When fascism comes to America, it will be wrapped in a flag and carrying a cross." Sinclair Lewis, 1935 --Talk of foresight--

      by tuma on Fri Nov 30, 2012 at 11:35:48 AM PST

      [ Parent ]

  •  This is good (1+ / 0-)
    Recommended by:
    blueyedace2

    you know you can embed your chart in your post? I do all my graphs at Google Docs.

    Also, those state vote totals aren't final in most of those states, so you'll have to do it all over again when that happens :)

    •  Thanks! (0+ / 0-)

      I put a screenshot of the chart at the top of the diary (couldn't figure out a way to embed a dynamic link from Google docs), and will update as the votes keep coming in and increasing Obama's lead.

      Hey Mitt Romney : You're an obnoxious prick.

      by porktacos on Wed Nov 28, 2012 at 12:36:46 PM PST

      [ Parent ]

  •  How do you mess up Iowa so bad? (2+ / 0-)
    Recommended by:
    porktacos, BobBlueMass

    Isn't that state basically polled 24/7, 365?  In my imagination, pollsters and the people of Iowa are on a first name basis.  "Oh, hey Ted - why, yes I DO think we should have shared sacrifice!"

  •  Propaganda polling (0+ / 0-)

    There are those that look at things the way they are, and ask why? I dream of things that never were, and ask why not? - Robert Kennedy

    by BobBlueMass on Thu Nov 29, 2012 at 08:12:16 AM PST

  •  Here is another question: (0+ / 0-)

    were all the swing states identified correctly? That is, did all the states identified as swing come out closer than all those not so identified?

    Pretty close...

    If you take a final vote difference of 10% as the threshold, then all 12 swing states should have been included -- but then so should MO (final difference -9.4%), AZ (-9.0%), and GA (-7.8%).

    Let's lower the threshold to 9%. Then only GA was missed as being a swing state, but MI (+9.5%) should not have been included.

    Granted, not everyone was using the same list of swing states; I'm using the list above. But it is interesting that for all the attention paid to Michigan, whether it was or was not a swing state, we never heard mention of Georgia at all. It will be a state to pay attention to in future cycles.

    Mark E. Miller // Kalamazoo Township Trustee // MI 6th District Democratic Chair

    by memiller on Thu Nov 29, 2012 at 09:15:11 AM PST

    •  Georgia: I agree (0+ / 0-)

      In fact, I'd say that GA and AZ are where we should next set our sights in terms of expanding the playing field at the presidential level. (I know MO and IN were roughly in the same ballpark in terms of margin of victory for Romney, but they seem to be moving in the wrong direction, whereas GA and AZ are looking increasingly favorable considering demographics.)

      Way to go, team Obama!

      by porktacos on Thu Nov 29, 2012 at 09:26:16 AM PST

      [ Parent ]

      •  but, watch out for Penn. (0+ / 0-)

        The only "good" news the GOP got on election night in the swing states was PA going by only 5% to Obama. It has been the holy grail for the GOP for several cycles, but they never seem to get there. Still, as Nate Silver pointed out recently, PA has the potential to be problematic for the Democrats. The big urban areas are no problem, but the outlying parts of the state are showing tendencies similar to nearby West Virginia. WV used to be a swing state, if not an outright blue-leaning one. Now, it is solidly red.

  •  How Do These People STay In Business? (0+ / 0-)

    What is their model:  fantasy poll results the way you want them.

  •  Gallup, Ras and all the others out there (0+ / 0-)

    would do themselves a great favour if they simplified their likely voter screens to the one simple question PPP asks.
    Here goes:
    -"If you do not plan to vote in this coming election, please hang up."
    -If someone doesn't hang up, there's your likely voter. As simple as that.
    -Stop all that BS about "have you voted before," "did you vote last time," blah blah blah.
    PPP, like Rand, Democracy Corps, etc., "nailed" this election, but the sad part is that they do not get as much media as Rasmussen or Gallup...

    "When fascism comes to America, it will be wrapped in a flag and carrying a cross." Sinclair Lewis, 1935 --Talk of foresight--

    by tuma on Fri Nov 30, 2012 at 11:30:40 AM PST

  •  very unusual for Rasmussen (0+ / 0-)

    Normally, as the election gets nearer, Rasmussen shows movement toward accuracy. They did well in 2006 and 2008. I was very surprised that they would let themselves get egg on their face just to sell a narrative for the GOP. Then again, not much surprises me these days.
