Polling data is fun. As much as we pretend to hate the media-fabricated horserace, we all get involved in it, so all of us who post/frequent/salivate or fume over poll diaries are hypocrites (pretty much everyone here).
But there’s an incredible amount of nonsense from both sides. I’m suuure it’s from people who are excited and often (well, sometimes) not deliberately manipulative, but here are a few things to think about to keep the facts straight.
1.) A poll is a single datapoint in a sea of datapoints, and it represents a tiny, tiny sliver of a population. If you’re asking 400 people to speak (quantitatively) on behalf of, say, 3 million, there’s a lot of room for error. The “margin of error” takes that into account, but even so, all you’re getting is a piece of information that is highly likely to fall within a pretty broad range (often a range 6 to 12 percentage points wide for a single candidate, or more).
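To put rough numbers on that range, here’s a quick back-of-the-envelope sketch using the standard 95% margin-of-error formula for a simple random sample (the sample size and the 50/50 split are illustrative assumptions, not any specific poll’s internals):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.
    p=0.5 is the worst case (widest interval); z=1.96 is the 95% z-score."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(400)
print(f"+/- {moe:.1%}")  # about +/- 4.9 points for n=400
```

For 400 respondents that works out to roughly ±4.9 points, so a single candidate’s “true” number sits somewhere in a window about 10 points wide, right in line with the 6-to-12-point ranges above. Note the population size (3 million vs. 300 million) barely matters; the sample size drives the math.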
2.) Getting these 400 out of 3 million people to reflect reality is hard (and sometimes impossible). While at first it might seem like a purely random, unbiased approach would be the best way of doing things, that’s not always true. For example, say someone decided to poll a liberal state like Massachusetts right before the general election, and blindly overlaid a grid on it, polling the house closest to each grid point. Sounds fair, right? Sure, until you consider that most left-leaning voters are confined to geographically constrained areas like Boston, or the college towns out towards Springfield, while much of the rest of the state votes to the right. So a poll that LOOKS unbiased would give you a result to rival the reddest of red states. Or if you went through an old telephone book, your seemingly unbiased alphabetic approach would miss some of the younger generation of cell-phone users. Or anyone who is only reachable via switchboard.
So there’s a hell of a lot of subjectivity (read: bias) that pollsters HAVE to inject into a poll’s methodology, while still trying to retain randomness/objectivity within those parameters. In random sampling, there’s something called an adaptive sampling strategy, which means that you may change your methodology based on your findings—not to INFLUENCE the results, but to ensure that your results reasonably reflect what’s actually going on in the population.
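As an illustration of baking known population structure into a sample, here’s a toy stratified-sampling sketch. The regions, shares, and numbers are entirely made up for the Massachusetts example above; real pollsters use far more elaborate stratification and weighting:

```python
import random

def stratified_sample(population, strata_shares, n):
    """Draw n respondents so each stratum's share of the sample matches
    its known share of the population, instead of leaving it to chance.
    population: dict mapping stratum name -> list of people
    strata_shares: dict mapping stratum name -> fraction of population"""
    sample = []
    for stratum, share in strata_shares.items():
        k = round(n * share)
        sample.extend(random.sample(population[stratum], k))
    return sample

# Toy example: a pure grid sample could over-weight sparse rural areas;
# stratifying by region keeps urban voters at their true share.
pop = {"urban": [f"u{i}" for i in range(1000)],
       "rural": [f"r{i}" for i in range(1000)]}
sample = stratified_sample(pop, {"urban": 0.65, "rural": 0.35}, 400)
print(len(sample))  # 400 respondents: 260 urban + 140 rural
```

Within each stratum the draw is still random; the “subjectivity” lives in choosing the strata and their shares, which is exactly the methodological judgment described above.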
3.) Reputable outfits that do this are not being nefarious, or tricky, or pro #Bern/#Hill/#Rubio/#Cruz or anything; they’re trying to assess what’s out there. It’s true that sometimes there are house effects (which means that they trend a certain way relative to other polls), but that doesn’t mean either a.) that they’re trying to bias results or b.) that they’re even wrong. A good pollster might look to see if there’s anything inherent in the methodology that’s causing the effect (e.g. a purely cell-phone polling outfit might lean Bernie because Bernie supporters trend younger and more tech savvy while Hillary supporters include retirees who are still often reliant on land-lines), but a pro-X house effect doesn’t mean that the pollster is “pro-X” or “going for a certain effect” or anything.
4.) Just because a pollster has an “A” rating doesn’t mean that any individual poll of theirs is going to be right, or that one from a “B-” pollster is wrong. I don’t know how 538 does its rating methodologies, but unless a pollster is clearly and consistently awful, or flagrantly biased, or has no idea how to design a methodology, its polls should be taken as seriously as anyone else’s. Everyone here salivates over the Selzer poll. Honestly folks, she was off by a mile in the GOP Iowa caucus poll, and probably a quarter mile (although technically within the MoE) in the Democratic one. That doesn’t mean she’s any less good; it just means that caucuses are hard to poll, Iowa is hard to poll, and populations as a general RULE are hard to poll. It doesn’t lessen Selzer’s abilities in the least.
5.) If you don’t like a new poll, that doesn’t make it an “outlier.” Wait to see what the other data tell you. Outliers have a visual signature: you can see them on graphs as dots that fall way outside the clusters of other dots (and you can use statistics to find them). But if a new poll comes out a month after the same outfit’s previous poll, and it’s the first poll for that period, how can you possibly tell whether it’s an outlier? If it falls outside the cluster of polls taken at the same time, then OK, you can say “outlier.” For example, when I first heard about that new Trump-Cruz poll that showed Cruz up by 2, it wasn’t really an “outlier” yet since there wasn’t much data from the same time period. Now that a bunch of other polls have shown up with drastically different numbers, it does seem that the poll is an outlier.
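That “wait for contemporaneous data” rule can be sketched mechanically. Here’s a hypothetical z-score check in Python; the 2.0 cutoff and the numbers are illustrative, not any aggregator’s actual rule:

```python
import statistics

def is_outlier(new_poll, recent_polls, z_cut=2.0):
    """Judge a poll only against other polls from the same time window.
    Returns None when there are too few contemporaneous polls to tell."""
    if len(recent_polls) < 3:
        return None  # can't call it anything yet
    mean = statistics.mean(recent_polls)
    sd = statistics.stdev(recent_polls)
    if sd == 0:
        return new_poll != mean
    return abs(new_poll - mean) / sd > z_cut

print(is_outlier(44, [33, 34, 35, 36]))  # True: far outside a tight cluster
print(is_outlier(44, [42]))              # None: only one other poll, can't tell
```

The point of the `None` branch is the whole argument above: with one contemporaneous poll, “outlier” isn’t a label you’re entitled to use yet.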
6.) A trend line tells you what happened up to a certain point (i.e. now); it doesn’t tell you what’s going to happen. Political trends aren’t real-world demonstrations of inertia; they don’t continue unchecked. Just look at Wall Street. If they did, you’d have Bernie or Hillary at 230% by next Wednesday (OK, not really, but you get the idea). Trends are interrupted every day (hour, really) by all sorts of things, plus you have these elusive “ceilings.” No one knows what they are, but there’s a reason that, say, Trump isn’t getting past around 40%. Look at South Carolina, where both Democratic candidates are vying for the votes of black voters. Bernie has been trending upwards there in this particular demographic, but we know we’re not going to see him at, say, 70%-30%; currently it’s about the reverse. So trends tell you basically how we got to where we are, but they don’t tell you much about the future.
Back to the Wall Street reference, remember the adage “Past performance does not guarantee future results” !
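To see how fast naive extrapolation goes off the rails, here’s a tiny least-squares sketch with made-up weekly numbers (purely illustrative; no real poll series moves this cleanly):

```python
# Naive linear extrapolation of a polling trend quickly produces nonsense,
# which is the point: trend lines describe the past, not the future.
weeks  = [0, 1, 2, 3]
shares = [30, 34, 38, 42]  # fictional series gaining 4 points/week

# Ordinary least-squares slope and intercept (here trivially slope = 4.0).
n = len(weeks)
mx, my = sum(weeks) / n, sum(shares) / n
slope = sum((x - mx) * (y - my) for x, y in zip(weeks, shares)) / \
        sum((x - mx) ** 2 for x in weeks)

for w in (4, 10, 50):
    print(w, my + slope * (w - mx))  # week 4: 46.0; week 50: 230.0 (!)
```

Extrapolating this “trend” to week 50 predicts a 230% vote share, which is exactly the joke about next Wednesday: the line fits the past fine and says nothing sensible about the future.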
Those are just a few things to keep in mind as you make noise about polls. That said, what do I think is going on, just based on amateur observations?
1.) National race is tighter, probably within 15 (I’m not sold on single digits at this point)
2.) Nevada is close and perhaps anyone’s ball game at this point
3.) Nevada will influence but not game-change S.C. If Bernie wins Nevada handily, SC COULD close to within 10 but I’m doubtful
4.) HRC still has a pretty substantial lead for Super Tuesday, which has a lot of momentum attached to it. Bernie needs Nevada and a strong 2nd in S.C. to change that narrative, I think—he also needs his possible-likely states (MA/VT/CO/MN). A good showing in a place like Virginia would do great things for his narrative/momentum
5.) A Hillary win in NV would be a bit of a catapult for her since she can make the case for strength with two key minority demographics in NV and SC. That would strongly influence the Super Tuesday narrative and would be tough for Bernie to overcome.