Category Archives: answer choices

Zogby Motorcycle Emptiness

What fresh hell is this? John Zogby has discovered the matrix? And somehow made it uglier than any other matrix in the history of research? All true, and more on that in a moment, but first, this unusual use of the ticky box:

[image: zogby motorcycles]

That’s probably illegible unless you click on it, but the gist is I’m being asked, in a pretty wordy fashion at that, which of the following things I’m interested in or knowledgeable about; nothing inherently wrong with that setup, except this: I’m then given exactly one box to check (or not check, as the case may be): “motorcycles.”

Let’s type it out and count, OK?

John’s Way: (53 words)

“Now for some questions about consumer goods categories. Please choose the categories in which you feel you have a particular interest and knowledge. This means you actively seek out information on these products and services (for example you watch TV shows/read magazines and websites/attend exhibitions/discuss with friends and colleagues etc.)”

My Way: (6 words)

“How interested are you in motorcycles?”

Now, it’s possible the reason I only saw the single checkbox for motorcycles is that my answers to the previous series of questions disqualified me from everything else, though I don’t really think my responses pointed in the direction of motorcycles. But who am I to judge? Anyway, as promised, here’s what the first screen of these looked like:

[image: zogby matrix]

Maybe I shouldn’t say this is uglier than any other matrix. Maybe it’s just simpler, in that it uses less newfangled HTML and is therefore, I don’t know, easier to access from mobile browsers, which isn’t a bad design goal to have. It’s just very strange looking to me, and pretty hard to take in at a glance. By the time you get down to the last button on the right, it’s not immediately obvious to me whether that “1” radio button is for “Pessimistic” or for some other word that I perhaps need to scroll down for, or that just isn’t appearing for some reason.

Are mobile/degraded browsers a big factor in the panel research industry? Are there a lot of folks on the Greenfield panel using Netscape 2.0 on Mac IIcis or something? Because I seriously don’t get why this hasn’t all been replaced — and I’m talking about everyone here, not just Zogby — with some well-designed Flash code. Seems to me a freshman design student could pretty quickly mock up something vastly superior to anything being used in the industry today, no?

One other thing I found interesting: on some (but not all) screens of this survey, clicking the final radio button automatically advanced me to the next page, even though the pages have a “continue” button on them. I don’t have a problem with auto-advancing in general, though I think it needs to happen either 100% of the time or 0% of the time. But I’m curious what y’all think about it. Is auto-advance, which makes it difficult or impossible to go back and fix an error you made, a good thing or a bad thing?
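For what it’s worth, a consistent auto-advance rule is simple to state: advance only when every row on the page has an answer, and apply the same check on every page. Here’s a minimal JavaScript sketch of that idea; none of the names come from the actual survey code, and the browser wiring is hypothetical.

```javascript
// Sketch of a consistent auto-advance rule, assuming one radio group
// per matrix row. All names here are hypothetical, not from any real survey.

// Advance only when every row has an answer; apply this same rule on
// every page, never on some pages and not others.
function allRowsAnswered(answers, totalRows) {
  // answers: { rowName: selectedValue } for the rows answered so far
  return Object.keys(answers).length >= totalRows;
}

// In a browser this might be wired up like so (commented out so the
// sketch also runs outside a DOM):
//
// const form = document.querySelector("#survey-page");
// const rowCount = form.querySelectorAll("fieldset.matrix-row").length;
// form.addEventListener("change", () => {
//   const answers = Object.fromEntries(new FormData(form));
//   if (allRowsAnswered(answers, rowCount)) form.submit();
// });
```

Gating the advance on completeness, rather than on “the respondent clicked the last button,” is what keeps the behavior consistent and still lets people go back and fix an earlier row before the page submits.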



Filed under answer choices, bad user experiences, Market Research, matrixes make me cry, the web is a visual medium, web research, Zogby

Is it me?

Is it just me, or do orange juice brand managers seem more likely than just about anyone else to think their brands have the ability to come to life with distinct human personality traits and characteristics? Is it just one crazy person who keeps using Greenfield for this? Is it the whole industry? Someone help me understand this.

This is at least the third time I’ve seen one of these:

[image: another juice grid]

(Also, “warm?” You really want me to think about whether or not any of these refrigerated products could best be described as being “warm?” Because, ew.)


OH, COME ON. This is just ridiculous. I’m almost too fatigued looking at this to copy and paste it here, let alone fill it out. In fact, I think the only reason I’m continuing at all is so I can see what fresh horror awaits. How do they expect real people to answer things like this? Oh, right — because they promise us a sweepstakes entry in exchange for 35 minutes of our time. (No, really, they do:)

[image: 35 mins]

Anyway, here’s what I’m yelling about now:

[image: minute maid COME ON]


Filed under answer choices, bad user experiences, Greenfield, incentives/compensation, Market Research, matrixes make me cry, web research

Best Buy Has Some Issues

From this story at The Consumerist. I’m sure it’s just a browser rendering issue that would have been solved with more testing, but right now, it’s just one of my all-time favorite matrixes:


Well done, Best Buy!


Filed under a challenger appears, answer choices, bad user experiences, Market Research, matrixes make me cry, web research

SSI: Matrixes Make Us Cry, Too.

“We know respondents don’t like grids,” Jackie Lorch, SSI VP, Global Knowledge Management, points out. “They’ve been telling us that for years in focus groups and feedback, but we’ve always thought of grids as a necessary evil in questionnaire design. Now, we’re beginning to learn that not only are grids frustrating for respondents – they actually produce inferior data.”


Here’s a link to the full report.

Lots there to digest. What do you all think?


Filed under answer choices, Market Research, matrixes make me cry, The cancer that is killing market research, web research

How Exactly Does One Trust a Brand of Razor?

This is probably a nitpick:


It’s bad enough when you ask me which brand of orange juice I trust most; I can’t imagine why any nationally known brand of juice would be more or less “trustworthy” than any other. It’s not like I have reason to believe Tropicana’s short-lived carton redesign is indicative of some sort of quality-control problem, or that they’ve been filling the cartons half a glass short. But what exactly is there to trust or distrust about a razor manufacturer? What am I not getting here? It’s a company that makes razors and razor blades, not a candidate for high office.

Most of the items in this particular matrix are actually relevant; to me, that makes “brand I trust more” stand out like a sore thumb.


Filed under answer choices, Market Research, silly nitpicking, web research

Consistency is Key

Of the online polls I see regularly, Harris Interactive seems to be the best of the lot. They actually did something I thought was particularly good in a piece of research I saw today, but of course I’m going to post about the thing they did that I didn’t like as much.

First, I was asked this:


Nothing horrible there; it’s a matrix, but it’s not too huge, and the answer choices are fine. But then they ask this on the very next screen:


I check my bank statements all the time, so I started clicking the right-most column, just like I’d done with the Facebook questions — and then I realized that the right-most column wasn’t “very likely” anymore, but had mutated into “not applicable.” I suppose you can make an argument that the column is necessary on this page — the Facebook questions were asked only of Facebook users, and it’s possible some people answering this screen might not have a 401(k), or might not have any credit cards — but I think it would have been better to either let the “not at all likely” column take care of those folks or to have added the “not applicable” onto the Facebook questions as well.

The more you can keep answer choices identical, the more your respondents can glide effortlessly through the research. Note that what Harris did here is really quite minor, and not at all as aggravating as what I’ve seen done elsewhere from time to time, where a painfully oversized matrix runs across multiple pages but the answer choices randomly shift to abuse the respondents (sorry, to make sure the respondents are paying attention). What I’m talking about here is nitpicking compared to that sort of thing, but the common element is that your respondent is going to click the wrong thing because they expect their answer choice to be somewhere else.


Filed under answer choices, Harris, Market Research, silly nitpicking, web research

Stop screwing up the conversation.

You’ve probably given a lot of thought to the scales you use in your research. You’re undoubtedly aware of the advantages and disadvantages of 5-point Likert vs. 4-point Likert scales, and you’ve probably even spent time arguing with colleagues or clients over whether a 7-point or 5-point scale would produce better learning in a given situation.

Here’s the part where I piss you off and tell you I believe you’re doing it wrong:

You’re doing it wrong.

This is the thing, and it’s a theme I will never stop hammering: market research is a conversation between the researcher and the respondent. The more the interaction feels like an actual conversation to the respondent, the easier and more natural the process is. You ask a question, you get an answer. I think fairly few people would say it’s difficult or onerous or boring to hold a conversation with a friend, and yet, those are probably the very first words many would use to describe the agony of taking a typical piece of research.

So what does this have to do with scales?

Which of these is a conversation you can imagine people having in real life?

  • “Bob, I think that woman over there is very attractive. Do you strongly agree? Somewhat agree? Neither agree nor disagree? Somewhat disagree? Or strongly disagree?”
  • “Bob, on a scale of one to seven, I think that woman over there is a perfect seven.”
  • “I’d give her a 10. What do you think?”

In other words, no one in the history of the world has ever described a beautiful woman (or an attractive man, or a perfectly executed Olympic gymnastics routine) as being “a perfect 5.” Frame your questions in terms your respondents are already familiar with, so they don’t have to keep translating your strange foreign language of research-speak into English in order to understand it, formulate an answer, and then translate it back into research-speak to give you the response you’re forcing them to choose.

tl;dr: stop screwing up the conversation.


Filed under answer choices, bad user experiences, Market Research, quality of responses