There seems to be a weird inverse relationship between a person’s ability to use the English language and that same person’s willingness to send an angry email about a poll they don’t like. It’s completely routine by now for us to be called, simultaneously, Republican shills because we’ve released a poll showing McCain ahead in one state, and Democratic shills because we’ve released a poll showing Obama ahead in another. I’m not sure how we can be both, but, OK, I guess.
Underneath all the ridiculousness and accusations, it occurs to me that very few people actually understand what goes on in the world of polling, so maybe it’s natural for the general public to assume that we’re all on the take, accepting money from various candidates, parties, and advocacy groups to tweak our data in one direction or the other. The accusation strikes me as ludicrous on a number of grounds, of course, but hey, I’m the pollster, so maybe I should work through it for the benefit of those who aren’t.
First, and probably foremost: a public opinion pollster is only as good as his reputation. If you’re consistently inaccurate — whether it’s because you’re being paid to falsify data or whether it’s simply because you suck — you’re going to have problems. You’re likely to get kicked out of the pollster club, but probably more importantly, you’re going to lose business. Who wants to hire the guy who called every race six points more Republican than it turned out on Election Day? (Of course, Zogby Interactive is horrible, but keeps getting business, so I could be completely wrong on this.)
Second, I’m not sure what the benefit would be in most cases. Virginia, for instance, right this second is Obama +3, according to pollster.com. If I release a poll tomorrow that shows McCain +3 in Virginia, yes, I suppose that might have some tangible benefit to the McCain campaign: maybe it would increase donations, or simply improve morale. But at what cost to me? Would the short-term gain of some McCain cash be worth violating personal and professional ethics to tweak data that way? Especially in a marketplace as crowded as battleground-state polls in a presidential election: a McCain +3 poll from Bob’s Internet Polls Ltd is going to be viewed with a lot of skepticism, while a McCain +3 poll from Mason-Dixon, or SurveyUSA, or Rasmussen is going to be taken much more seriously. But why would such a reputable pollster risk so much?
I have, occasionally, been asked to conduct a piece of research where it would take some real fudging to produce a result favorable to the client. Take, for example, a case where a complete unknown wants to run against a long-term incumbent for a House seat. Even in these cases, I’ve never been asked to mess with data to produce false results on a horse-race question (which, of course, we would refuse to do). Instead, the candidates have always come to us with the right idea: let’s NOT ask a horse-race question, but instead, let’s poll about issues. Let’s poll without names. Let’s present my positions and my opponent’s positions, and let’s see which the voters prefer when they’re not associated with our names. Let’s ask if the incumbent deserves to be re-elected, or if it’s time for someone new. Let’s show the incumbent is vulnerable, and use THAT piece of research to drive voter enthusiasm and donations; we can take a horse-race poll after people know who I am.
The bottom line, really, is simply this: pollsters have nothing to gain from “cooking the books.” Differing poll results can be caused by a lot of things, ranging from errors to actual changes in the dynamics of a contest, but I strongly object to accusations that any reputable pollster, on any side of the business, has his thumb on the scale. We’re human beings, with our own political preferences and biases: you can be sure that I groan when I first look at fresh numbers and see that my guy is trailing. But that’s life, and at the end of the day, I’m going to publish what I found, whether I personally like it or not.