Category Archives: abject stupidity

Yet Another Example of How Not To Use The Internet To Conduct Research

(edit, April 6, 2011: Over a year since I posted this, and I just took another Zogby poll (now an “Ibope Zogby poll,” by the way), and they’re still asking this question the same way. And I still, despite being pretty politically aware, knowing my congressman’s name, and having even written the guy and gotten a response on at least one occasion, have absolutely no idea what district number I live in. Everything below this parenthetical addition is old content, so if you have seen it before, sorry.)

This is from a couple of weeks ago, and I’m just now getting a chance to post it.

88% of Americans live in a state with fewer than 53 US congressional districts in it. Only California has that many; Texas comes in second with 32.

And yet, here’s how the good folks at Zogby Interactive ask what congressional district you live in:

That’s right. Zogby asks what state you live in, and then asks you, regardless of how many districts your state contains, which of 53 districts you live in. This is terrible for a lot of reasons, beginning with what should be obvious to everyone: it’s really lazy.

Looking at this from a practical political standpoint, though, it’s a mess. Folks just don’t think about their congressional district that way. Many (certainly not all) will know the name of their representative — or at least be able to pick the name from a short list of possibilities — but the odds of them knowing the actual district number aren’t great.

That being said: it can be problematic to ask people who their representative is if you’re then going to ask whether they’d vote for that person — it creates a priming effect, reminding the respondent of incumbency (or informing them of it, if they’re less focused on politics), and makes it difficult to do a clean “would you vote for x or y” question. While I didn’t get that question as a follow-up, it’s possible some respondents did, though I somewhat doubt it this far out.

A much better way to ask this question is to ask for the respondent’s zip code, which will let you look up the right district in most cases; a simpler method (for the respondent), and one that might feel less personally intrusive, is to remember that this is the internet and present a state map, on which the respondent can zoom in and actually CLICK WHERE HE LIVES.
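
For what it’s worth, the zip code lookup isn’t exotic: it’s a single table keyed on zip code, and the only wrinkle is that some zips straddle district lines, in which case you ask a short follow-up listing just those two or three districts instead of all 53. Here’s a minimal Python sketch of the idea, with a made-up mapping file and column names standing in for real Census/redistricting data:

```python
# Hypothetical sketch: resolve a respondent's congressional district from
# their zip code. The CSV and its column names ("zip", "district") are
# invented for illustration.
import csv

def load_zip_districts(path):
    """Build {zip: set of districts like 'CA-12'} from a two-column CSV."""
    mapping = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            mapping.setdefault(row["zip"], set()).add(row["district"])
    return mapping

def district_for(zip_code, mapping):
    districts = sorted(mapping.get(zip_code, set()))
    if not districts:
        return None           # unknown zip: re-ask or flag for review
    if len(districts) == 1:
        return districts[0]   # unambiguous: no follow-up question needed
    # Zip straddles district lines: show a follow-up listing only these
    # two or three districts, not a 1-through-53 dropdown.
    return districts

# e.g. mapping = load_zip_districts("zip_to_district.csv")
#      district_for("97202", mapping)
```

The clickable-map version is the same lookup, just keyed on a point-in-polygon test against district boundaries instead of a zip table.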

And, saying what should be obvious, but maybe isn’t: if you structure your research in such a way that only the very very very top-most super-engaged respondents are qualified to answer a follow-up, your results are only going to reflect that tiny slice of the population.

Pathetic, and sadly, about what one would expect.

Filed under abject stupidity, bad user experiences, databases are your friend, election polling, Market Research, Politics, quality of responses, the web is a visual medium, Zogby

LA Times: What the What?

So for a couple of weeks now, I’ve been getting emails from the Los Angeles Times about how my email newsletter subscriptions are about to end. I’ve been ignoring them, because I don’t think I actually get any emails from the Los Angeles Times. I suppose I must have registered with a real email address on their site to read a story once, years ago, before BugMeNot and their Firefox extension made such things unnecessary. In any case, I don’t care, fine, whatever, stop sending me those newsletters you’re not actually sending me, I’ll find a way to survive, despite the longing I shall forever feel in my heart.

Just now, though, I got this brilliant piece of email from them:

“Why have we stopped sending you emails?” WHAT DO YOU THINK THIS THING IS? IT’S AN EMAIL! THAT YOU’RE SENDING ME! ABOUT HOW YOU’VE STOPPED SENDING ME EMAILS WHICH IN ACTUALITY YOU NEVER WERE SENDING ME IN THE FIRST PLACE!

It boggles the mind.

Filed under abject stupidity, bad user experiences, non-research, silly nitpicking, what

Gary Langer Hates Robopolls, Probably Because They’re More Accurate Than His Live Interviewer Polls And He’s Apparently Just Jealous, The Poor Thing, So He’s Decided That Accuracy Doesn’t Matter.

Gary Langer, Director of Polling at ABC News, has a problem with what he calls “robopolls” and what we’re going to call interactive voice response, or IVR, polls.

As best I can tell, his root problem is that they’re more accurate than his live interviewer polls, despite using what he considers inferior methodology. But here, let him tell you:

In our ABC News polling standards we don’t regard autodialed, pre-recorded polls as valid and reliable survey research. Our concerns include noncoverage, nonresponse, lack of respondent selection, nonvalidation, opaque weights and the absence of a persuasive, independent literature evaluating the approach. Some other news shops – the few that, in my view, take coverage of public opinion seriously – share these judgments.

There’s the germ of a good point or two in there, to be honest with you, at least on the surface, but let’s go through all of them and poke some holes:

  • Noncoverage: It’s illegal to call a cell phone with an autodialer, so by default a poll conducted by an IVR firm probably doesn’t include any cell phone interviews, which means cell-phone-only households wouldn’t be covered in the sample. But, first, there’s nothing keeping an IVR firm from hiring a data collection house to make cell phone calls with live interviewers to supplement its IVR polling — IVR is a method of conducting polls, not an ideology — and second, as it happens, there’s a bit of overlap between the people most likely to live in a cell-phone-only household and the people least likely to vote. Not that Langer cares about results, but the studies I’ve seen have suggested that, so far, this isn’t a factor.
  • Nonresponse: I suppose response rates could be lower for recorded-voice polls than for live-voice polls, but they could also be higher. I don’t know, and Langer doesn’t know either: there are a number of different ways to calculate response rates, few pollsters release them at all, and when they DO release them, they all seem to pick a different measure. See page 21 (table 5) of this AAPOR report on 2008 primary polls — there isn’t a single contest there where you can compare an IVR pollster’s response rate with a live-interview pollster’s, because none of the pollsters seem to agree on which measure to use! There’s a quick sketch of that arithmetic after this list. (There’s also a body of literature suggesting there’s really not much correlation between response rate and accuracy, but Gary doesn’t care about accuracy, just methods, so I guess we won’t bother linking to any of that. Or not.)
  • “Lack of respondent selection” is a major bullshit non-issue, brought up time and time again by Langer and his ilk to discredit research that threatens their world. According to them, your poll is unreliable if you don’t start every call by asking to speak to a specific random person — the youngest male, for instance, or the person in the household with the next birthday. There’s absolutely no reason why “robopolls” can’t do this — it’s just another question, for crying out loud — and as far as I know, there’s no evidence that there’s any reason for them to do it, or for anyone else to, other than because Langer says they should.
  • “Nonvalidation.” I have no idea, but assume this means Langer thinks good polls employ extra live interviewers to call back selected respondents to make sure the original live interviewers actually did contact the respondents and that the original live interviewers didn’t just make up the data so they could collect their $25 or whatever and go home. Hmm. Gosh, Gary, why would it possibly be the case that IVR polls don’t bother doing this? Maybe because they don’t have a room full of telemarketers falsifying data so they can go home early? (And, once again: why couldn’t IVR pollsters do validation? If it mattered, is there some reason they couldn’t make those callbacks, either through an IVR or a contracted live interviewer methodology? Couldn’t the office manager make the calls the next day? You only “need” to validate a sampling of the calls, right?)
  • “Opaque weights” is another term that doesn’t mean a lot to me. Apparently Gary thinks the IVR pollsters are more secretive about how they weight their data than the live interview pollsters. No idea why that would be the case, no idea what evidence there is suggesting it is.
  • The absence of literature is one of my other favorites. Yeah, Gary, there aren’t a lot of peer-reviewed scholarly articles about IVR polls, because IVR polls are all run as businesses (as opposed to academic projects), and unlike academics, business people don’t usually write peer-reviewed scholarly articles.
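
To put some numbers on the response-rate point above: AAPOR’s own standards define several different response rates from the same pile of call dispositions, so a pollster reporting RR1 and one reporting RR3 aren’t even publishing comparable figures. A quick sketch of the arithmetic (the formulas paraphrase AAPOR’s published definitions; the disposition counts are invented):

```python
# Why "response rate" isn't one number: AAPOR's RR1 and RR3, computed from
# the same (invented) call dispositions.
# I  = complete interviews      P  = partial interviews
# R  = refusals/break-offs      NC = non-contacts     O = other eligible
# UH = unknown if household     UO = unknown, other
# e  = estimated share of unknown-eligibility cases that are eligible

def rr1(I, P, R, NC, O, UH, UO):
    # Treats every unknown-eligibility case as eligible.
    return I / (I + P + R + NC + O + UH + UO)

def rr3(I, P, R, NC, O, UH, UO, e):
    # Counts only an estimated fraction e of the unknowns as eligible.
    return I / (I + P + R + NC + O + e * (UH + UO))

counts = dict(I=400, P=50, R=900, NC=1200, O=50, UH=800, UO=400)
print(f"RR1 = {rr1(**counts):.1%}")         # 10.5%
print(f"RR3 = {rr3(**counts, e=0.5):.1%}")  # 12.5%
```

Same calls, two defensible “response rates” a couple of points apart: exactly why the numbers in that AAPOR table can’t be compared across pollsters.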

What else you got, Gary? Oh, right:

Some defenders of robopolls will point in another direction, at those that have accurately predicted election outcomes. From my perspective, accurate modeling is not an adequate stand-in for good polling. For reliability and validity alike, what matters are sound methods. And what matters next are substantive measurements, not horse-race bingo.

Wait, sorry, I couldn’t hear you, I was laughing too hard. What? Is he seriously saying that it just doesn’t matter how accurate a poll is, it only matters if the pollster follows the right rules? The Gary Langer rules? Oh, Gary.

Look: twenty years ago, if you wanted to take a public opinion poll, you needed, I don’t even know, a Ph.D., a university, and I’m going to guess something like $40,000. Today, you can get a PPP or a SurveyUSA or a Rasmussen to do it for you for maybe 10% of that, or you can just go do it yourself for a few hundred dollars. I get that this is scary as hell if you came up through the academic ranks; I get that irrelevancy really pisses people off — but I just don’t think you have a lot of credibility when you try to tell us that whether or not you get elections right isn’t a good measure of your skill as an election pollster. From where I sit, it’s the only thing.

(PS: PPP, I love you guys, but I think he’s right about one thing: an “or not” wording choice would probably be better.)

tl;dr: threatened much?

ETA: Forgot to add this: Nate Silver’s most recent (and horribly outdated, from May of 2008) pollster ranking chart. Still, it’s the last thing he gave us:

Look on the bright side, Gary: you’re better than Zogby.

Filed under ABC News, abject stupidity, election polling, Gary Langer, irrelevancy, IVR, Politics

Why Not Use Subway Turnstiles?

What does it end up costing every time they have to shut down an airport terminal, cancel all the flights, and re-screen all the passengers because some dumbass went in through the out door?

What do these subway exit turnstiles cost to install? They’re in every subway station in the free world, pretty much, so they can’t be that outrageous.

Just saying.

(Online research still sucks. How much more can I say about that?)

Filed under abject stupidity, bad user experiences, open questions

How Not To Link

This has nothing to do with research, and I’ve probably complained about this before, but it really aggravates me every time I see it:

THAT IS NOT HOW THE INTERNET WORKS, NEW YORK TIMES. CUT IT OUT.

Possibly not obvious from the screengrab, especially because the arrow gets left out: if you click the URL in the story, you go to a New York Times page that uselessly lists all their articles about Facebook. Just in case anyone from the world of old media is reading this and wants to know what they should have done: LINK TO MS. SALAHI’S FACEBOOK PAGE.

God, how is this still so difficult?

Filed under abject stupidity, bad user experiences, New York Times, non-research, the web is a visual medium