Category Archives: web research

Databases Are Your Friend!

I’ve ranted about this so many times that it’s a true pleasure to see it being done the right way by YouGov/Polling Point here:

(obviously, I blanked out the zip code.)

Compare and contrast with other folks’ ongoing, aggravating habit of asking me to pick which country I live in (from a list of about a billion, though the US at least is at the top), then pick my state, and then type in my zip code. Harris does this pretty frequently, though I just saw an example the other day where they instead asked whether I still lived in the United States, and then (without asking for state or zip) asked me to type the name of the city where I live, which I found somewhat unusual.

Anyway, nice to see this happening. I’m already on the panel, so you already know all this background info!

1 Comment

Filed under databases are your friend, Doing it right, Market Research, web research, YouGov

Taking a Hatchet to Your Matrix

Not a newsflash: I hate matrixes. That being said, I acknowledge they’re sometimes going to be necessary. If you’ve got to use one, though, I think it’s in everyone’s interest to keep each one as small as possible, and to use as few of them as possible.

There’s often a point in web surveys where the respondent is asked whether or not he has heard of a number of different items – brands of orange juice, for instance, to use my favorite example. That’s followed by another question asking which of the brands the respondent has personally tried.

Then come the matrixes, where respondents are asked to rate each of the brands that they’ve heard of – not just the subset they’ve personally tried – across a number of rating criteria, each one likely being its own matrix on its own page. This is the point where the respondent regrets being so honest about the brands he’s seen in the grocery store or advertised on TV, because he suddenly realizes he’s going to be spending the next fifteen minutes of his life clicking “don’t know” or “not applicable” on matrix after matrix inquiring about the best flavor, the least pulp, the nicest packaging, and so on.

I get, very clearly, that as researchers, this isn’t entirely a waste of time – we can give our clients a report that shows the attitudes crosstabbed by both active users and those who are merely aware of each brand. It has the added “bonus” of letting us inflate the number of respondents — you get to tell your client that you asked the evaluation questions of significantly more people than you would have if you’d only included those who use the brands in question. (This is the product research version of asking unlikely voters how they’ll be voting.) And, of course, it’s possible that some respondents will have differing levels of familiarity with the products they don’t themselves use, and may actually be able to provide useful feedback nevertheless. But, still:

I’m writing this, actually, as I take a break from a piece of research I’m in the middle of taking. I think I’m on about the sixth matrix page. I’ve got 8 columns going across – a 7-point Likert scale plus a “not sure” – and 10 rows of brands going down, only 1 of which is asking me about something I truly have knowledge of – the other 9 are things I’ve heard of, but have no ability to evaluate. I don’t want to go into specifics, but let’s pretend it’s about travel, and that it first asked me which foreign cities I’d ever considered traveling to, and then asked which ones I’d actually visited — and now it’s asking me to rate every city I’d considered going to on the quality of its museums, central train station, hotels, safety, and so on. There might be the occasional question I can answer based on something a friend told me or something I vaguely remember reading on Wikipedia or in a Rough Guide, but in general, I’m just not able to comment on the friendliness of the Dublin populace, you know?

Not only is this frustrating, but I’m also (and this wouldn’t apply to an ordinary respondent) acutely aware that my speeding through page after page, clicking “not sure” for 9 of the 10 choices and then assigning an answer choice to the one thing I’m familiar with is probably going to result in my responses being discarded anyway.
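The cure, for what it’s worth, is mostly a filtering step: pipe only the brands the respondent has personally tried into the rating grids, instead of everything he’s merely heard of. A minimal sketch of the idea — brand and criterion names are mine, purely illustrative, and no real survey platform is implied:

```python
# Trim a ratings matrix to brands the respondent has actually tried,
# rather than every brand merely heard of.
heard_of = ["Brand A", "Brand B", "Brand C", "Brand D", "Brand E",
            "Brand F", "Brand G", "Brand H", "Brand I", "Brand J"]
tried = ["Brand D"]

criteria = ["best flavor", "least pulp", "nicest packaging"]

def rating_cells(brands, criteria):
    """One cell per (brand, criterion) pair the respondent must answer."""
    return [(b, c) for b in brands for c in criteria]

print(len(rating_cells(heard_of, criteria)))  # 30 cells to click through
print(len(rating_cells(tried, criteria)))     # 3 cells
```

Ten brands across three criteria is thirty clicks, most of them “not sure”; piping on the tried list cuts it to three meaningful answers.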

I have a sense, based on the level of detail each matrix has gone into, that I’m going to have another 4 or 5 of these waiting for me, and honestly, I’m hoping I time out while I write this; if I do, I’m done.

Is an aggravated respondent really in anyone’s best interest?

4 Comments

Filed under bad user experiences, data quality, Market Research, matrixes make me cry, web research

(Nit)Picking on Harris

So, I dipped my toe back into web research this morning. Since I generally have decent experiences with Harris polls, I decided to give them a shot. As usual with them, it was a fairly painless experience; nothing glaringly wrong or obnoxious — so let’s not dwell too long on either of the two small things I did notice.

First, from a pure design standpoint, I don’t understand the point of these massively over-wide columns. If you’re going to answer true for some and false for some, it’s really a lot of left-right mouse or trackpad motion — enough that it created a minor annoyance for me. In a 3-question true/false setup like this, it’s really not terrible — but in a longer series of questions, it might drive me to drink:

Wouldn’t this shopped version be easier to use?

Like I said, pretty minor. Which brings us to my second and final observation on this poll:

Who the heck are the numbers for?

So, yes: all pretty minor.

1 Comment

Filed under Harris, jargon, Market Research, silly nitpicking, web research

Obscure AND Potentially Personally Identifying? Let’s Ask It!

Sent in by a reader; click to embiggen:

Bad enough they’re asking for something few people would know offhand — and who wants to go fetch a piece of mail to get the answer — but I think there’s an equally bad issue here regarding respondent confidentiality, at least theoretically. A quick search of census data for some five-digit zip codes chosen at random from among those I’m familiar with around the country shows between about 8,500 and 16,000 occupied households in each. (I wouldn’t call that an average, as it’s practically anecdotal, but it’ll do for now, since I can’t find exactly what I’m looking for.) A zip+4, though, is designed to be reflective of a much, much smaller geography. According to the US Postal Service:

The 4-digit add-on number identifies a geographic segment within the 5-digit delivery area, such as a city block, office building, individual high-volume receiver of mail, or any other unit that would aid efficient mail sorting and delivery

How small are those “geographic segments”? You can use this USPS lookup tool to get a sense of it. I live on a suburban street; my house is on a corner. My immediate neighbor around the corner has a different zip+4; the people across the street have a different zip+4; the house immediately behind me has a different zip+4. The house next door to me, though, and the two houses that follow it going down to the end of the block — those all have the same zip+4 data. Apparently, my personal zip+4 will narrow you down to one of four homes.

Now, presumably, you gave your full mailing address when you signed up for this panel, so it’s not as if the research company doesn’t already know exactly who you are and where you live — and it’s not as if telephone research doesn’t contain your even more personally identifiable phone number right there in the data — but still, this makes me uncomfortable. Rather than using back-end databases to append that information in post-production (which, for the millionth time, would be the ideal way to deal with this situation), we’re instead outright asking for something that both makes your data pretty easy to tie back to you and which you don’t know in the first place. (I actually thought I knew mine, and I don’t, though I was fairly close.)
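To make the “append, don’t ask” point concrete: since the panel already holds everyone’s mailing address, the zip+4 can be joined onto the response data in post-production, and the respondent never sees the question. A minimal sketch, with purely hypothetical respondent IDs, table names, and field names:

```python
# "Append, don't ask": join survey responses to the address data the
# panel already holds, instead of asking respondents for their own zip+4.
panel_addresses = {
    "R1001": {"zip5": "10001", "plus4": "1234"},
    "R1002": {"zip5": "60614", "plus4": "5678"},
}

responses = [
    {"respondent_id": "R1001", "q1": "Agree"},
    {"respondent_id": "R1002", "q1": "Disagree"},
]

for row in responses:
    addr = panel_addresses.get(row["respondent_id"])
    # Geography gets appended in post-production, not asked on screen.
    row["zip4"] = f"{addr['zip5']}-{addr['plus4']}" if addr else None

print(responses[0]["zip4"])  # 10001-1234
```

In practice this would be a database join on the panelist ID rather than a dictionary lookup, but the principle is the same: the data the company already has should never become a survey question.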

All in all, this strikes me as a really bad question. What do you think?

3 Comments

Filed under bad user experiences, data quality, databases are your friend, ethics, Market Research, redundant questions, web research

Research Lifestreaming

Harris (click to embiggen):

I’m fascinated, but I think the universe might collapse in on itself in some sort of divide-by-zero error if I were to sign up with @researchrants.

In any case, I want to hear from anyone who does sign up … and I’d love to see any examples of gaming this system. I mean, there have got to be brand managers salivating over this, right?

11 Comments

Filed under a challenger appears, Harris, Market Research, social media, web research

Probability Sampling vs. Web Panel Sampling vs. Interviewing People Completely By Accident

A reader sent me this a number of months ago, and embarrassingly, I’m just getting around to posting it now, as I’ve been reminded by something @Lovestats posted the other day.

Another matrix with multiple problems — “I felt expected,” really? But the focus here actually isn’t the matrix, believe it or not: it’s that my reader has never actually stayed at the hotel that sent him the survey (the familiar “please rate your recent stay with us at our such-and-such location” sort of email you tend to get after any interaction these days). Not only had he not stayed at that location — he’d never stayed at any hotel in the chain.

Now, I’m assuming this was a glitch of some sort — an actual customer with a similar email address had sloppy handwriting, or a friend of our reader has been giving out the reader’s information instead of his own, or the hotel company is a conglomerate and they actually meant to ask our reader to rate his experience buying a shirt at their clothing store — but however it came about, it made me think: with all the ongoing discussion about probability samples versus non-probability samples, what about starting by just making sure you’re not interviewing people who are totally outside your frame?

2 Comments

Filed under Market Research, matrixes make me cry, web research, worrying about the wrong thing

Are You Polling a Lot of Miners?

Just clicked a link to take a survey about Twitter, hosted at SurveyMonkey, and was immediately struck by the odd choices in their “profession” question:

I’m frequently aggravated by this question on a personal level, because “market research” is almost never a choice. While you sometimes see a “marketing” choice, if there is a “research” option, it somehow always seems (either explicitly or via connotation in my mind) to be about white-coated, goggle-wearing folks in chemistry labs.

But I’m not really talking about my own experience here as much as I’m commenting on how odd it seems that Twitter, of all things, expects to have so many of its users working in the mining, farming, and construction fields, none of which traditionally lend themselves to sitting in front of a computer and updating the world about your latest moment-to-moment activities. While I’m sure there’s a guy in a combine somewhere right now using Foursquare to tell us exactly which corner of his field he’s in, I’m going out on a limb and guessing there aren’t more than a handful of guys like that. (And why on earth are farming and mining combined? Seems weird.) If I were designing this piece of research for Twitter, I’d include a lot more choices that might apply to people who are likely to be using a computer for 8 hours a day.

tl;dr: one size doesn’t fit all. Customize and update where needed.

3 Comments

Filed under answer choices, Market Research, social media, web research

CNET: At Least They Warn You It’s Going to be Horrible?

15-20 minutes? Even I probably won’t click to take this one.

Oh, all right, I did. Got about three minutes in (it’s about reward programs), hit my first matrix, and decided I wasn’t going any further:

Sorry, CNET, I’m out. Not only does this show every sign of making me miserable, it also doesn’t appear to have anything to do with your business, since I can’t imagine a product review site launching its own rewards program. If the research had looked directly connected to improving the site I was actually interested in viewing when I went to cnet.com this morning, I might have been willing to go further, but not for this.

Leave a comment

Filed under bad user experiences, Market Research, matrixes make me cry, web research

Just Say No Already.

Annie Pettit this morning tweeted from the Net Gain 4.0 Conference in Toronto:

Clients still want 1 hour surveys and we can’t do anything about it : I say turn it down!!

I’ll go further than that: I say turn it down and make it clear to the client that they are the cancer that is killing market research. What in the world can you learn from a sixty-minute survey that you can’t learn from a 5-minute one? (I’m not talking about an in-depth qualitative research project, or something where you hook someone up to an EEG and have them watch an episode of CSI: Miami to see what their brain has to say. I’m talking about asking questions, on the phone or on a screen.) 60 minutes is 55 minutes too long!

Do we really think the respondents still on the phone (or on the web) at the one-minute mark, the ten-minute mark, and the 60-minute mark are identical?

4 Comments

Filed under bad user experiences, data quality, Market Research, matrixes make me cry, The cancer that is killing market research, web research

How Many Yards Do You Commute To Work, And Other Badly-Measured Intervals.

I’m really sorry I’ve been so dormant lately. I don’t really have an excuse, other than that I’ve been busy enough with other things that I haven’t been taking many online surveys, and as a result, I haven’t had anything to post.

Today, though, that changes. Hopefully for good? We’ll see.

So I watched an episode of How I Met Your Mother at cbs.com just now, and following it, they gave me a survey from Magid about my use of streaming video, peer-to-peer sharing, and so on. I’ve actually been getting a lot of TV via the internet lately — there’s just too much on at the same time on Thursdays, and I’ve been forced to torrent or use Hulu to watch at least some of it, since my DVR can only do two things at a time, and there seem to be THREE things on simultaneously from 8:00 to 10:00 those nights. Some weeks I grab torrents, others I use Hulu — it mostly depends on when I’ll be watching, because I have kids, and I find it much easier to watch TV with closed captions when they’re around, since they’re noisy little things. If I’ll be watching when they’re home, I often use Hulu; if they’re out or asleep, I’ll often get the torrents, which are usually better quality, and are usually able to be streamed to my TV, too.

Anyway, the point here is to share this incredibly ill-conceived question, which was the one really badly thought-out item in an otherwise pretty solid survey:

quarter hour

Really? You want me to think about how much TV I watch in 15-minute increments? Why on earth would you think this was the right way to ask this question? I had to do MATH to answer it, counting up the number of hours of TV I watch and multiplying by 4, a step that might not even occur to every respondent. The strangest thing is, the 15-minute increment makes no sense in either context. Online versions of TV shows are never in 15-minute formats — half-hour sitcoms run around 22 minutes, and hour-long dramas around 44 — and the other things people watch online, like movie trailers and clips of people being idiots on YouTube, are much shorter.
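The arithmetic the question offloads onto the respondent is exactly the sort of thing that belongs in the analyst’s script instead: ask in hours, convert in post. A trivial sketch (the function name is mine, not from the survey):

```python
# Convert a weekly viewing estimate in hours to the quarter-hour
# units the survey demanded -- the respondent should never do this math.
def hours_to_quarter_hours(hours: float) -> int:
    return round(hours * 4)

# e.g. ten hours of TV a week
print(hours_to_quarter_hours(10))  # 40
```

Ask the natural unit, store the precise one; nobody has to multiply in their head mid-survey.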

I don’t get it. Which I suppose isn’t unusual.

2 Comments

Filed under answer choices, bad user experiences, Market Research, quality of responses, web research