Without a doubt, the accuracy of the model used for sample development is the largest factor in determining the ultimate accuracy of a poll. Sample size is another factor, though less important than outsiders may think. Many factors go into whether a poll yields an accurate snapshot of where things stand at the moment it is taken.
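To see why sample size matters less than people assume, consider the standard margin-of-error formula for a proportion. This is a minimal sketch using the textbook approximation (z * sqrt(p(1-p)/n) at 95% confidence, worst case p = 0.5); the sample sizes are illustrative, not from any particular poll.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (as a fraction) for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: quadrupling the sample only halves the margin.
for n in (400, 1000, 2000, 4000):
    print(f"n={n}: +/-{100 * margin_of_error(n):.1f} points")
```

Going from 1,000 to 4,000 interviews, a fourfold cost, shrinks the margin from about 3.1 points to about 1.5, while doing nothing about errors in the underlying model or the data collection.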
There is one important factor that is rarely spoken of: data collection.
Off and on throughout the '90s and early '00s, I managed data collection centers that did market research and public opinion polling. I have been out of that field for almost eight years and have not kept in close touch with recent developments, either in technology or in personnel management. My guess is that little has changed other than ways to handle cell phones; nothing much changed from '92 to '02.
The level of training, structure of compensation, intensity of monitoring and oversight, and the honesty and dedication of management staff all affect the integrity of the data that is ultimately used in the analysis. It's a business. Some companies do it well, others not so much. When there are excessive incentives for high completion rates, interviewers may alter their technique in subtle ways to goose answers from people who may not want to finish the questionnaire. In the worst-case scenarios, there may be outright fraud.
Many data collection firms have lax training programs, putting interviewers on the phone who are not properly trained to listen carefully for subtle differences in respondents' answers or to probe and clarify. I took over one center in which the variation from interviewer to interviewer on the day I started was so gross that I advised my employer to cancel all current projects and shut down for retraining. Instead we finished all current projects, shifted new ones to other centers, and then shut down for a couple of days of retraining.
Then there is the controversial use of automated robo-interviewers. The canvass trainer who "trained" me before I went out to canvass last night blamed this for part of the inaccuracy of today's polls. I wanted to argue but realized it would be a waste of time and effort for both of us. Still, I think it is important to know that for short, simple questionnaires there is no evidence that automated interviewing produces less accurate results than live interviewing. Robots don't inject personal views into their inflections or get lazy when they are tired. The potential trouble is that qualifying a respondent as a likely voter (LV) is not always done in a simple way. But since pollsters generally want a saleable product, only a few use automated interviewing when they are doing complex political race polling with an LV model.
Since I am no longer involved in this and things do change, with a few exceptions I don't know which pollsters do in-house data collection or, of those who outsource, which call centers they contract with. I do know that not all data collection is created equal, and as time has gone on, cost cutting has diminished the reliability of results. This is the one aspect of polling that has certainly gradually degraded, not out of any radical change in technique, but simply from the decline in compensation and training across the field. Factor in the cell phone controversy, and we see a certain rising amount of GIGO (garbage in, garbage out). So as statistical models get more sophisticated (maybe better, maybe not), they must fight against the human element, which I know from personal experience can be large enough to undermine the validity of any given poll on any given day.
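The GIGO point can be made concrete with a toy simulation. A systematic lean in how interviewers record answers is not reduced by a bigger sample the way random error is. Everything below is a hypothetical illustration: the bias size and sample size are made up, not taken from any real poll.

```python
import random

def simulate_poll(n, true_support, bias=0.0, seed=0):
    """Simulate n yes/no responses. `bias` is a hypothetical systematic
    shift in the probability of recording a 'yes' (e.g. interviewers
    subtly goosing answers). Returns the estimated support fraction."""
    rng = random.Random(seed)
    p = true_support + bias
    return sum(rng.random() < p for _ in range(n)) / n

# With true support at 50%, a mere 2-point recording bias shifts the
# estimate roughly as much as the entire +/-2.2-point sampling margin
# of a 2,000-person poll -- and no amount of extra sample removes it.
clean = simulate_poll(2000, 0.50)
skewed = simulate_poll(2000, 0.50, bias=0.02)
print(f"unbiased estimate: {clean:.3f}, biased estimate: {skewed:.3f}")
```

The design point is that sampling error shrinks with n while this kind of error does not, which is why sophisticated weighting models can still be undermined by what happens on the phones.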
The lesson? Watching a broad "poll of polls" like 538 or Pollster is always better than worrying about any one poll, unless you really know what the data collection was like as well as what the statistical model is. I don't say "all polls suck"; in fact, quite the opposite. I love polls. I just say be aware they are not the voice of God. And, as has been well covered here at dKos and elsewhere in the blogosphere, sometimes the conventional wisdom about how to develop the model can be egregiously ill informed.
All that said, GOTV and let the chips fall where they may. We'll know on Wednesday morning who did a good job.