Category Archives: Harris

(Nit)Picking on Harris

So, I dipped my toe back into web research this morning. Since I generally have decent experiences with Harris polls, I decided to give them a shot. As usual with them, it was a fairly painless experience; nothing glaringly wrong or obnoxious, so let’s not dwell too much on either of the two things that did catch my eye.

First, from a pure design standpoint, I don’t understand the point of these massively over-wide columns. If you’re going to answer true for some and false for others, that’s a lot of left-right mouse or trackpad motion, enough that it created a minor annoyance for me. In a 3-question true/false setup like this, it’s not terrible, but in a longer series of questions it might drive me to drink:

Wouldn’t this shopped version be easier to use?

Like I said, pretty minor. Which brings us to my second and final observation on this poll:

Who the heck are the numbers for?

So, yes: all pretty minor.

Filed under Harris, jargon, Market Research, silly nitpicking, web research

Research Lifestreaming

Harris (click to embiggen):

I’m fascinated, but I think the universe might collapse in on itself in some sort of divide-by-zero error if I were to sign up with @researchrants.

In any case, I want to hear from anyone who does sign up … and I’d love to see any examples of gaming this system. I mean, there have got to be brand managers salivating over this, right?

Filed under a challenger appears, Harris, Market Research, social media, web research

Health Survey!

You know what works better than an incentive? For me, anyway?

[Image: survey invitation reading “health! survey! invite!”]

Exclamation points! The enthusiasm is contagious! I can’t wait to click the link and take the survey.

Of course, I’ll bet the staid researchers who programmed the survey itself aren’t nearly so excited about it.

[Image: the survey itself, also titled “health survey!”]

Oh my gosh! They totally are! This is going to be the best survey ever!

Or not. But it was refreshingly matrix-free and fairly speedy. And oddly, even though I’m sort of making fun of it, the exclamation point actually worked on me, unintentional though I suspect it was.

Filed under Harris, incentives/compensation, Market Research

Whoa, Two Months?

Crap, I know I’ve been busy, but this is ridiculous.

Still fighting the good fight, but I haven’t had time to write about (or even look at) much research lately. I did catch this grid a couple of days ago, and I think it’s worth throwing up and looking at, not because it’s a particularly terrible example (it’s sadly just typical), but because I can imagine so many better ways to measure this:

[Image: Harris vehicle grid]

Can’t you picture something with different carmaker logos (or, maybe even better, images of their most popular models) that you can drag up and down or left and right to indicate exactly how likely you would be to consider each of them? And that’s just my very first thought on this one.

Flash makes pretty much anything possible, but we’re still using virtual #2 pencils to fill in virtual scantron bubbles, aren’t we? What do you think?
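
To make that slightly less hand-wavy, here’s a rough sketch of the kind of thing I’m picturing, using plain sliders as a stand-in for the drag-the-logo idea. Every carmaker name, scale, and element id below is a placeholder I made up for illustration; it is emphatically not how Harris (or anyone else) actually builds these grids.

```typescript
// Sketch only: one continuous slider per carmaker instead of a radio-button matrix.
// All names, ids, and the 0-100 scale are illustrative placeholders.
const carmakers = ["Ford", "Toyota", "Honda", "BMW"];

const container = document.getElementById("consideration")!; // assumes a <div id="consideration"> on the page
const ratings = new Map<string, number>();

for (const make of carmakers) {
  const label = document.createElement("label");
  label.textContent = `${make}: how likely are you to consider it?`;

  const slider = document.createElement("input");
  slider.type = "range"; // drag left/right: 0 = not at all likely, 100 = definitely
  slider.min = "0";
  slider.max = "100";
  slider.value = "50";

  // Record the exact position rather than forcing the answer into a 5-point bucket.
  slider.addEventListener("input", () => ratings.set(make, Number(slider.value)));

  label.appendChild(slider);
  container.appendChild(label);
}
```

The specifics don’t matter; the point is one continuous, visual motion per brand instead of hunting for the right bubble in a grid.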

Filed under Harris, Market Research, matrixes make me cry, open questions, the web is a visual medium, web research

Harris Thinks Outside The Box

Not in a good way, though:

[Image: Harris answer boxes running outside the page]

It’s always possible this was a Firefox-only problem, but I (a) doubt it and (b) don’t think that would make it OK anyway.

It’s not egregious, of course, just … off, and as such, distracting. Don’t distract your respondents; don’t make them take time out from thinking about their answers to think about why your boxes won’t fit on your page.

Filed under Harris, Market Research, matrixes make me cry, web research

Consistency is Key

Of the online polls I see regularly, Harris Interactive’s seem to be the best of the lot. They actually did something I thought was particularly good in a piece of research I saw today, but of course I’m going to post about the thing they did that I didn’t like as much.

First, I was asked this:

[Image: first Harris matrix, about Facebook activities]

Nothing horrible there; it’s a matrix, but it’s not too huge, and the answer choices are fine. But then they ask this as the very next screen:

[Image: second Harris matrix, with “not applicable” as the right-most column]

I check my bank statements all the time, so I started clicking the right-most column, just like I’d done with the Facebook questions — and then I realized that the right-most column wasn’t “very likely” anymore, but had mutated into “not applicable.” I suppose you can make an argument that the column is necessary on this page — the Facebook questions were asked only of Facebook users, and it’s possible some people answering this screen might not have a 401(k), or might not have any credit cards — but I think it would have been better to either let the “not at all likely” column take care of those folks or to have added the “not applicable” onto the Facebook questions as well.

The more you can keep answer choices identical, the more your respondents can glide effortlessly through the research. Note that what Harris did here is really quite minor, and not nearly as aggravating as what I’ve seen done elsewhere from time to time, where a painfully over-sized matrix runs across multiple pages but the answer choices randomly shift to abuse the respondents (sorry: to make sure the respondents are paying attention). What I’m talking about here is nitpicking compared to that sort of thing, but the common element is that your respondents are going to click the wrong thing because they expect their answer choice to be somewhere else.
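
If I were programming it, I’d define the scale exactly once and share it across every matrix, bolting “not applicable” on only where a question genuinely needs it. Here’s a minimal sketch of what I mean; the labels and question text are my own placeholders, not anything pulled from the actual Harris survey.

```typescript
// Define the likelihood scale once so every matrix page shows identical columns.
const LIKELIHOOD_SCALE = [
  "Not at all likely",
  "Not very likely",
  "Somewhat likely",
  "Very likely",
];

interface MatrixQuestion {
  text: string;
  allowNotApplicable?: boolean; // opt in per question instead of reinventing the columns
}

function columnsFor(question: MatrixQuestion): string[] {
  const columns: string[] = [...LIKELIHOOD_SCALE];
  if (question.allowNotApplicable) {
    columns.push("Not applicable"); // appended as an extra column, never replacing a scale point
  }
  return columns;
}

// Hypothetical usage: the Facebook page and the finance page share the same four columns.
const facebookItem: MatrixQuestion = { text: "Post a status update" };
const financeItem: MatrixQuestion = { text: "Check your bank statement online", allowNotApplicable: true };

console.log(columnsFor(facebookItem)); // four columns
console.log(columnsFor(financeItem)); // the same four columns, plus "Not applicable" at the end
```

Whether “not applicable” belongs on both screens or neither is a separate argument; the point is that the columns a respondent learns on one screen should be exactly where they expect them on the next.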

Filed under answer choices, Harris, Market Research, silly nitpicking, web research

How Do You Tell if Your Survey Design Has Problems?

Harris often ends its surveys by asking respondents to click a radio button for something along the lines of “How much do you agree with this statement: I noticed mistakes in this survey,” with answer choices ranging from strongly agree to strongly disagree. (I keep neglecting to grab a screencap of one, but it’s along those general lines.)

Here’s NPD’s similar approach:

[Image: NPD’s end-of-survey feedback question]

Polling Point has asked, at various times, something along the same basic lines; I remember because there was something they kept getting wrong, and I kept complaining about it, to no avail. (It was something minor related to the answer choices they were giving about which news program you watched most often, but I can’t remember the details.)

All of these approaches are, on the one hand, commendable: respondents should be able to provide feedback, and asking directly in the research instrument itself sends that feedback straight to the survey designers, as opposed to making those with issues go through the front office of the giant research company to lodge a complaint that will never be properly routed.

But here’s the other side of the coin:

Wouldn’t it be better to extensively test our research before it goes into the field?

Suggestion:

Bring in a consultant to edit your questionnaires in the short term and, over the long term, train your staff to write better questions and design better studies. Me, for instance.

Alternate suggestion:

Offer special incentives to especially active panelists whose past feedback shows they catch errors, in exchange for having them “preview” certain surveys. You can probably get them to do a decent job of quality assurance testing/user acceptance testing/proofreading/whatever you want to call it for nothing more than an additional entry in the sweepstakes or an extra $2 in their prize money account. (Yes, it occurs to me they could be doing this right now, just without the special incentives, and I’m doing their work for free.)
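
Mechanically, that’s little more than a query against the panel database plus a bonus credit. Here’s a back-of-the-napkin sketch; every field name and threshold in it is invented for illustration, since I obviously have no idea what a real panelist record looks like.

```typescript
// Hypothetical panelist record; none of these fields are real panel data structures.
interface Panelist {
  id: string;
  surveysCompletedLast90Days: number;
  errorsFlaggedInFeedback: number; // times their end-of-survey feedback caught a real mistake
  sweepstakesEntries: number;
}

// Pick active panelists with a track record of catching errors.
function selectPreviewers(panel: Panelist[], needed: number): Panelist[] {
  return panel
    .filter(p => p.surveysCompletedLast90Days >= 10 && p.errorsFlaggedInFeedback >= 2) // made-up thresholds
    .sort((a, b) => b.errorsFlaggedInFeedback - a.errorsFlaggedInFeedback)
    .slice(0, needed);
}

// The reward: one extra sweepstakes entry per preview, as suggested above.
function creditPreview(previewer: Panelist): void {
  previewer.sweepstakesEntries += 1;
}
```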

The thing is, finding errors and other usability problems in complicated market research instruments really isn’t the sort of thing you want to be doing in real time, on a live survey, after people have worked their way through it. Whether you’re exposing the whole panel or just a selected subset of early reviewers to the unedited project, you’re still doing triage on something that should have been right before it launched.

Get it right the first time, guys.

Filed under bad user experiences, Harris, Market Research, NPD, Polling Point, web research

Do Your Questions Still Make Sense?

Unsurprisingly, I’m sure, I do pretty severe edits on research questions that are put in front of me. It’s not uncommon for me to go back to a client with a questionnaire that contains about 30% fewer words than the one they submitted.

Unfortunately, I’m also used to hearing words like this: “Sorry, we have to keep that question as it is — we’ve been asking it that way for years.”

Sometimes, it’s just sad, because the question has obviously been asked wrong for years, but there’s not a lot you can do in those cases.

Worse, though, is when the question may have once made sense, but no longer does:

[Image: Harris question asking on how many occasions I watched news programs]

In a world with three TV stations and half-hour newscasts airing at 5 pm, 5:30 pm, 6 pm, and 11 pm, this made sense.

In 2008, less so.

I wake up in the morning and put on Morning Joe on MSNBC. I get to work and put on CNBC on one TV and MSNBC on another. I go home and watch Closing Bell on CNBC, then switch back to MSNBC for Hardball and David Gregory’s pathetic show; I’ll often watch Rachel Maddow at 11 and fall asleep to the Hardball replay at midnight.

I have no idea how many “occasions” that translates to. I guess it’s eight, if we assume that CNBC and MSNBC from 9 am to 4 pm each count as one occasion, which I’m not sure they do. And what if I flip channels and see there’s a car chase taking place on Fox News? Does that make it nine for the day? I guess I’ll say nine, and multiply by 30 to get 270 for the month…

[Image: error message rejecting my answer]

Oh. So according to Harris, I can watch news programs on no more than four occasions per day.

Good to know, guys.
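
For the record, here’s the arithmetic I was doing, spelled out. The 120-per-month ceiling at the end is my own inference from that error message (four a day times thirty), not anything Harris actually spells out.

```typescript
// My news-watching "occasions" for a typical day, as described above.
const dailyOccasions = [
  "Morning Joe",
  "CNBC at work",
  "MSNBC at work",
  "Closing Bell",
  "Hardball",
  "David Gregory",
  "Rachel Maddow",
  "Hardball replay",
  "random Fox News car chase", // the maybe-ninth occasion
];

const perDay = dailyOccasions.length; // 9
const perMonth = perDay * 30; // 270, the number the survey rejected

// Inferred from the error: the field apparently tops out at four occasions a day.
const impliedMonthlyCap = 4 * 30; // 120

console.log({ perDay, perMonth, impliedMonthlyCap }); // 270 against a 120 ceiling
```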

Filed under answer choices, Harris, Market Research

How Many Times A Week Does The Milkman Visit Your Home?

[Image: Harris question listing dialup modem speeds as answer choices]

Really?

I wonder why they left off 300, 1200, 2400, and 9600 baud.

Seriously, it looks like broadband penetration is at about 57% right now, but it’s 90% among active internet users, which should mean that only about 1 in 10 of the people who see this question are actually using dialup. Is it really that important to know exactly which speed of dialup they have?
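
To put rough numbers on that: the 90% figure is the one I just cited, and the even split across the dialup speeds is purely my assumption, but the conclusion doesn’t change much however you slice it.

```typescript
// Back-of-the-envelope: how many respondents does the dialup-speed breakdown even apply to?
const broadbandAmongActiveUsers = 0.9; // figure cited above
const dialupShare = 1 - broadbandAmongActiveUsers; // roughly 1 in 10 panelists

const respondents = 1000; // hypothetical sample seeing this question
const dialupRespondents = Math.round(respondents * dialupShare); // ~100 people

const dialupSpeedChoices = 4; // my guess at how many dialup tiers were listed
const perSpeedCell = dialupRespondents / dialupSpeedChoices; // ~25 people per speed, assuming an even split

console.log({ dialupRespondents, perSpeedCell });
// Cells that thin are trivia, not insight; a single "dial-up" choice would do the job.
```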

I also suspect there would be a STRONG correlation between using dialup and having absolutely no idea what speed you’re connecting at, but that could be wrong.

BTW, further down the list, you’ll notice ISDN listed. ISDN never really took off in the US, being both slow and expensive, and I think if you’re going to list that as a separate choice, you may as well include X.25 and frame relay while you’re at it.

Filed under answer choices, Harris, Market Research