Category Archives: IVR

Fun With Crosstabs

A couple of thoughts on PPP’s bombshell last week about interracial marriage in Mississippi, which, if you missed it, said that 46% of “usual Republican Primary voters” there say interracial marriage should be against the law (full PDF of results here):

First, at the very outset, this: I like PPP quite a lot. They do solid work. They release more data on their polls than many of their competitors. They are a Democratic pollster, but if you look at the data Nate Silver puts together for FiveThirtyEight, PPP’s results are both pretty accurate and pretty unbiased. Yes, one can argue that being partisan gives them an agenda, and while I’m sure they’re not sad to have uncovered this data point, at the end of the day, a pollster is only as good as his numbers, and PPP’s numbers are good. If anything, I’d say their partisan leaning gives them some “cover” when they want to ask something controversial, like they did here. Actual question wording: “Do you think interracial marriage should be legal or illegal?” I think a good number of pollsters would have difficulty asking that question; it seems, to me at least, to be too likely to cause offense. More on that in a minute, though.

Needless to say, the fact that 46% of their respondents said “illegal” made a lot of news last week. PPP says they asked the question of Democrats (and I assume independents) as well, and will be releasing that part of the data shortly, and I’m sure that will make some news, too. I’m not naive enough to believe that racism is dead, and it’s worth noting that while the US Supreme Court struck down state laws banning interracial marriages back in 1967, neighboring Alabama didn’t get around to removing the anti-miscegenation bit from its state constitution until the year 2000, but I still think these numbers are surprisingly high. These are findings that produce coverage because people talk about them; in this case, a lot of sad head-shaking mixed with gleeful potshots from one side, and a lot of unoriginal pollster attacks from the other, mostly of the usual tired “but it’s IVR, it could have been an 8-year-old taking the poll” sort. (Because it would be a better or easier-to-stomach finding if 46% of Mississippi 8-year-olds felt this way?)

Still, I think there are three perfectly reasonable grounds to question these results, so let’s get to it:

First, there’s this: “legal” and “illegal” sound really similar on the phone, no matter how carefully the announcer (or the live interviewer) says the words. Pollsters use that wording anyway, because workaround wording is cumbersome and ends up being something like “should so-and-so be allowed by law? Or against the law,” which is problematic because people generally just don’t say “allowed by law,” so you end up causing some confusion there, as well. Instead, it might be better, somewhat illogically, to go with something even longer, like “should Mississippi allow interracial couples to get married? Or should interracial marriage be against the law?” It’s nearly impossible to mishear that, except for the next thing:

There isn’t (or, there wasn’t until this poll was released) any real debate in this country right now about interracial marriage. There is, however, one about same-sex marriage. I think it’s very possible that some percentage of respondents simply misunderstood the question. This was raised, to an extent, in the comments on the original PPP post on this poll, and I think PPP’s response might have been a little too hasty, though I understand where they’re coming from. To be very, very clear: I do not think the respondents on this poll, or southerners, or Republicans, or southern Republicans, or any people in general, are stupid. I do not think people are hearing the word “interracial” and failing to understand what it means. I think they’re just not hearing the word at all. Look: people are not taking polls in ideal lab conditions; they’re taking them in real life, in rooms that contain television sets, and children, and the internet, and in some cases they’re taking the polls in their cars on their cell phones, despite the pollster’s best intentions to not accidentally break the law by calling any wireless numbers. No matter how clearly worded and how well-read the question is, the pollster is battling an infinite number of distractions for each respondent’s time, and can’t expect each respondent to be giving them his or her undivided attention. In 2011, in an election poll, when the 14th question you’re asked, coming immediately after several “how would you vote” questions, is “Do you think interracial marriage should be legal or illegal,” I think you’re going to have one of three reactions, and only one of them is good.

You might hear the question correctly and answer it. Spoiler alert: that’s the good one.

You might hear the question incorrectly and answer it. Another spoiler: that’s bad.

You might get so offended that you hang up the phone. This is so bad that it’s the topic of the next paragraph.

If you ask a question that deeply offends some of your respondents, they’re going to hang up on you. That’s going to leave only the non-offended to answer that question. What does that do to your results? If you, out of the blue, put a question in the middle of your Mississippi Republican primary election poll, a question that carries the pretty clear subtext of “are you a racist,” I think there’s a reasonable chance that many non-racist respondents will decide they know where this is headed — to a bad place — and drop out of the poll. I actually think a similar thing may be driving some of the polls showing large numbers of Republicans in the “birther” camp — a lot of non-birthers are hearing that question as “Now I’d like to ask you something to try to make people who share some of your views on politics look really, really stupid. Where do you think Barack Obama was born?” Reasonable Republicans may be dropping out at this point, which not only is inflating the number of Republican birthers, but it’s also having the side effect of inflating the poll numbers for “candidates” like Donald Trump, because, in most cases, if you hang up during the course of a poll, none of your previous answers count. If Romney voters start dropping out of a poll near the tail end, their answers up at the top end of the poll are also going to go in the garbage, and that’s going to give a bump to the fringe respondents’ favorite candidates.
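To make that dropout mechanism concrete, here’s a toy simulation, with numbers I made up entirely for illustration (a 20% “fringe” bloc that never hangs up, and a 30% hang-up rate among everyone else), of how discarding the earlier answers of respondents who quit mid-poll can inflate a fringe candidate’s apparent support:

```python
import random

def simulate_dropout_bias(n=10_000, fringe_share=0.20, dropout_rate=0.30, seed=42):
    """Toy model: 'fringe' respondents never hang up at the offensive
    question; 'mainstream' respondents hang up with probability
    `dropout_rate`. Hanging up discards ALL of that respondent's
    earlier answers, so only completed interviews count."""
    rng = random.Random(seed)
    completed_fringe = completed_total = 0
    for _ in range(n):
        is_fringe = rng.random() < fringe_share
        drops_out = (not is_fringe) and rng.random() < dropout_rate
        if not drops_out:
            completed_total += 1
            completed_fringe += is_fringe
    # Fringe share among completed interviews (true share is fringe_share)
    return completed_fringe / completed_total

print(f"true fringe share: {0.20:.1%}")
print(f"observed among completed interviews: {simulate_dropout_bias():.1%}")
```

The exact figures are arbitrary; the point is only the direction of the bias. Any mechanism that makes one group hang up more often than another skews every question in the poll, including the horse-race questions asked before the offending item.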

All this is testable, of course, with some a/b sample splitting. I think the hardest thing is making it absolutely clear we’re talking about interracial marriage (as opposed to same-sex marriage), which might require, ironically enough, asking about same-sex marriage first to make it more abundantly clear, in the subsequent interracial marriage questions, that we’re now talking about something else.
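And the split-sample comparison itself is cheap to evaluate. A minimal sketch, with entirely made-up counts (46% under one wording versus 30% under the other, 400 interviews each), of the standard two-proportion z-test one could use to decide whether two wordings really produce different results:

```python
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical split sample: wording A ("legal or illegal") vs.
# wording B ("allow ... or should it be against the law") -- invented counts
z, p = two_proportion_ztest(184, 400, 120, 400)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With a gap that size and 400 interviews per cell, the difference would be unmistakable; a much smaller wording effect would need bigger cells to detect.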

I look forward to seeing what data PPP shows for the rest of the Mississippi respondents from this poll; offhand I’d expect the number to be somewhat lower, but still higher than many will be comfortable with. Either way, I’d take it all with a grain of salt until someone is able to do some testing of my hypothesis here.


Filed under election polling, IVR, Public Opinion Polling

Gary Langer Hates Robopolls, Probably Because They’re More Accurate Than His Live Interviewer Polls And He’s Apparently Just Jealous, The Poor Thing, So He’s Decided That Accuracy Doesn’t Matter.

Gary Langer, Director of Polling at ABC News, has a problem with what he calls “robopolls” and what we’re going to call interactive voice response, or IVR, polls.

As best I can tell, his root problem is that they’re more accurate than his live interviewer polls, despite using what he considers inferior methodology. But here, let him tell you:

In our ABC News polling standards we don’t regard autodialed, pre-recorded polls as valid and reliable survey research. Our concerns include noncoverage, nonresponse, lack of respondent selection, nonvalidation, opaque weights and the absence of a persuasive, independent literature evaluating the approach. Some other news shops – the few that, in my view, take coverage of public opinion seriously – share these judgments.

There’s the germ of a good point or two in there, to be honest with you, at least on the surface, but let’s go through all of them and poke some holes:

  • Noncoverage: It’s illegal to call a cell phone with an autodialer, so by default, a poll conducted by an IVR firm probably doesn’t include any cell phone interviews, so yes, cell-phone-only households wouldn’t be covered in the sample — but, first, there’s nothing keeping an IVR firm from hiring a data collection house to make cell phone calls with live interviewers to supplement their IVR polling — IVR is a method of conducting polls, not an ideology — and second, as it happens, there’s a fair amount of overlap between the people most likely to live in a cell-phone-only household and the people least likely to vote. Not that Langer cares about results, but the studies I’ve seen have suggested that so far, this isn’t a factor.
  • Nonresponse: I suppose response rates could be lower for recorded voice polls than live voice polls, but they could also be higher. I don’t know, and Langer doesn’t know either: there are a number of different ways to calculate response rates, and few pollsters release them at all, and when they DO release them, they all seem to pick a different measure. See page 21 (table 5) of this AAPOR report on 2008 primary polls — there isn’t a single contest there where you can compare an IVR and a live interview pollster’s response rate because none of the pollsters seem to agree on which measure to use!  (There’s also a body of literature suggesting there’s really not much correlation between response rate and accuracy, but Gary doesn’t care about accuracy, just methods, so I guess we won’t bother linking to any of that. Or not.)
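To illustrate why that AAPOR table is so hard to compare across pollsters: AAPOR’s Standard Definitions offer several response-rate formulas computed from the same case dispositions. Here’s a sketch of two of them, RR1 and RR3, with invented disposition counts, showing how identical fieldwork yields different “response rates” depending on which formula a pollster picks:

```python
def rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: every unknown-eligibility case
    (UH, UO) is counted as an eligible non-respondent."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

def rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3: unknown-eligibility cases are discounted
    by e, the pollster's own estimate of the share that are eligible."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Invented dispositions: 500 completes, 50 partials, 1200 refusals,
# 800 non-contacts, 100 other, 2000 unknown-if-household, 350 unknown-other
args = (500, 50, 1200, 800, 100, 2000, 350)
print(f"RR1 = {rr1(*args):.1%}")
print(f"RR3 (e=0.4) = {rr3(*args, 0.4):.1%}")
```

Note that e in RR3 is itself the pollster’s estimate, which is one more place where two shops working from the same call records can land on different numbers.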
  • “Lack of respondent selection” is a major bullshit non-issue, brought up time and time again by Langer and his ilk to discredit research that threatens their world. According to them, your poll is unreliable if you don’t start every call by asking to speak to a specific random person — the youngest male, for instance, or the person in the household with the next birthday. There’s absolutely no reason why “robopolls” can’t do this — it’s just another question, for crying out loud — and as far as I know, there’s no evidence that there’s any reason for them to do it, or for anyone else to, other than because Langer says they should.
  • “Nonvalidation.” I have no idea, but assume this means Langer thinks good polls employ extra live interviewers to call back selected respondents to make sure the original live interviewers actually did contact the respondents and that the original live interviewers didn’t just make up the data so they could collect their $25 or whatever and go home. Hmm. Gosh, Gary, why would it possibly be the case that IVR polls don’t bother doing this? Maybe because they don’t have a room full of telemarketers falsifying data so they can go home early? (And, once again: why couldn’t IVR pollsters do validation? If it mattered, is there some reason they couldn’t make those callbacks, either through an IVR or a contracted live interviewer methodology? Couldn’t the office manager make the calls the next day? You only “need” to validate a sampling of the calls, right?)
  • “Opaque weights” is another term that doesn’t mean a lot to me. Apparently Gary thinks the IVR pollsters are more secretive about how they weight their data than the live interview pollsters. No idea why that would be the case, no idea what evidence there is suggesting it is.
  • The absence of literature is one of my other favorites. Yeah, Gary, there’s not a lot of peer-reviewed scholarly articles about IVR polls, all of which happen to be run as businesses (as opposed to academic projects) because, unlike academics, business people don’t usually write peer-reviewed scholarly articles.

What else you got, Gary? Oh, right:

Some defenders of robopolls will point in another direction, at those that have accurately predicted election outcomes. From my perspective, accurate modeling is not an adequate stand-in for good polling. For reliability and validity alike, what matters are sound methods. And what matters next are substantive measurements, not horse-race bingo.

Wait, sorry, I couldn’t hear you, I was laughing too hard. What? Is he seriously saying that it just doesn’t matter how accurate a poll is, it only matters if the pollster follows the right rules? The Gary Langer rules? Oh, Gary.

Look: twenty years ago, if you wanted to take a public opinion poll, you needed, I don’t even know, a Ph.D., a university, and I’m going to guess something like $40,000. Today, you can get a PPP or a SurveyUSA or a Rasmussen to do it for you for maybe 10% of that, or you can just go do it yourself for a few hundred dollars. I get that this is scary as hell if you came up through the academic ranks; I get that irrelevancy really pisses people off — but I just don’t think you have a lot of credibility when you try to tell us that whether or not you get elections right isn’t a good measure of your skill as an election pollster. From where I sit, it’s the only thing.

(PS: PPP, I love you guys, but I think he’s right about one thing: an “or not” wording choice would probably be better.)

tl;dr: threatened much?

ETA: Forgot to add this: Nate Silver’s most recent (and horribly outdated, from May of 2008) pollster ranking chart. Still, it’s the last one he gave us:

Look on the bright side, Gary: you’re better than Zogby.


Filed under ABC News, abject stupidity, election polling, Gary Langer, irrelevancy, IVR, Politics

A Quick Thought: Where Are We Headed?

Just a quick thought:

Where is the industry headed, not ten or even five years down the road, but three years down the road?

When we start taking serious, state-level polls on the 2012 presidential election, will we be doing it mostly still on the telephone? Will political research have moved to the web? Will market research move even more to the web than it already has?

I have a lot of thoughts on this, which I’ll post in due course; I’m curious as to whether anyone else would like to weigh in before I do.


Filed under election polling, IVR, Market Research, open questions, Public Opinion Polling, web research