Category Archives: Market Research

Probability Sampling vs. Web Panel Sampling vs. Interviewing People Completely By Accident

A reader sent me this a number of months ago, and embarrassingly, I’m just getting around to posting it now, as I’ve been reminded by something @Lovestats posted the other day.

Another matrix with multiple problems — “I felt expected,” really? But the focus here actually isn’t the matrix, believe it or not: it’s that my reader has never actually stayed in the hotel that sent him the survey, with the familiar “please rate your recent stay with us at our such-and-such location” sort of email you tend to get after any interaction these days. Not only had he not stayed at that location — he’d never stayed at any hotel in the chain.

Now, I’m assuming this was a glitch of some sort — an actual customer with a similar email address had sloppy handwriting, or a friend of our reader has been giving out the reader’s information instead of his own, or the hotel company is a conglomerate and actually meant to ask our reader to rate his experience buying a shirt at their clothing store. However it came about, it made me think: with all the ongoing discussion about probability samples versus non-probability samples, what about starting by just making sure you’re not interviewing people who are totally outside your frame?


2 Comments

Filed under Market Research, matrixes make me cry, web research, worrying about the wrong thing

Are You Polling a Lot of Miners?

Just clicked a link to take a survey about Twitter, hosted at SurveyMonkey, and was immediately struck by the odd choices in their “profession” question:

I’m frequently aggravated by this question on a personal level, because “market research” is almost never a choice. While you sometimes see a “marketing” choice, if there is a “research” option, it somehow always seems (either explicitly or via connotation in my mind) to be about white-coated goggle-wearing folks in chemistry labs. But I’m not really talking about my own experience here as much as I’m just commenting about how odd it seems that Twitter, of all things, expects to have so many of its users working in the mining, farming, and construction fields, none of which traditionally lend themselves to being able to sit in front of a computer and update the world about your latest moment-to-moment activities. While I’m sure there’s a guy in a combine somewhere right now using Foursquare to tell us exactly which corner of his field he’s in, I’m going out on a limb and guessing there aren’t more than a handful of guys like that. (And why on earth are farming and mining combined? Seems weird.) If I were designing this piece of research for Twitter, I think I’d have a lot more choices that might apply to people who are likely to be using a computer for 8 hours a day.

tl;dr: one size doesn’t fit all. Customize and update where needed.

3 Comments

Filed under answer choices, Market Research, social media, web research

CNET: At Least They Warn You It’s Going to Be Horrible?

15-20 minutes? Even I probably won’t click to take this one.

Oh, all right, I did. Got about three minutes in (it’s about reward programs), hit my first matrix, and decided I wasn’t going any further:

Sorry, CNET, I’m out. Not only does this show every sign of making me miserable, it also doesn’t appear to have anything to do with your business, since I can’t imagine a product review site launching its own rewards program. If the research looked like it was directly connected to improving the site I was actually interested in viewing when I went to cnet.com this morning, I might have been willing to go further, but not for this.

Leave a comment

Filed under bad user experiences, Market Research, matrixes make me cry, web research

No Way Do Two Thirds of Americans Have HDTV. No Way.

Sorry, but I’m willing to bet this piece of research is completely wrong. I’d want to see the actual questionnaire, but here’s what I’ll assume until then:

Many, many, many people have no idea whether or not they have HDTV. Two main reasons:

  1. There is a serious lack of understanding among non-techie respondents about the terms “digital,” “high definition,” and “HDTV.” I’ll bet $20 that at least 20% of the population thinks they have HDTV because they bought a $40 conversion box for the digital transition.
  2. For years now, everything from network dramas to local newscasts has been opening with an onscreen logo that says something like “in HD where available” or “presented in HD,” just like they used to do for stereo … only now they’ve also gone and incorporated it right into their station logos.

That’s right. Viewers with old 4×3 standard definition TV sets are constantly shown on-screen graphics that, in combination with the fact they bought conversion boxes, have them convinced they’re watching HDTV:

“Of course I have HDTV! It says HDTV right there on the screen!”

It’s difficult to research a topic when respondent confusion is this widespread. It’s not completely impossible, but it’s really, really hard. I can think of a couple of ways to try to do it, but they’re so cumbersome (as in, “look behind your TV and tell me the model number”) that they’re just not going to work.

Oh, and let’s not forget that there’s also God-only-knows how many people — this would include many of our parents, I’ll wager — who have HDTV sets but are watching standard definition broadcasts on them.

16 Comments

Filed under data quality, Market Research, TV

Just Say No Already.

Annie Pettit this morning tweeted from the Net Gain 4.0 Conference in Toronto:

Clients still want 1 hour surveys and we can’t do anything about it : I say turn it down!!

I’ll go further than that: I say turn it down and make it clear to the client that they are the cancer that is killing market research. What in the world can you learn from a sixty-minute survey that you can’t learn from a five-minute one? (I’m not talking about an in-depth qualitative research project, or something where you hook someone up to an EEG and have them watch an episode of CSI: Miami to see what their brain has to say. I’m talking about asking questions, on the phone or on a screen.) 60 minutes is 55 minutes too long!

Do we really think the respondents still on the phone (or on the web) at the one-minute mark, the ten-minute mark, and the 60-minute mark are identical?

4 Comments

Filed under bad user experiences, data quality, Market Research, matrixes make me cry, The cancer that is killing market research, web research

Joel on Research and the Paradox of Choice

Joel Spolsky runs a software company and writes a blog called Joel on Software, but I often find what he writes is applicable to business in general, and occasionally to the research world as well. Recently, in his Inc.com magazine column, he wrote about the problems that develop when too many people are involved in a decision-making process. In this particular case, he recounts how a former Microsoft developer described designing the Windows Vista shutdown menu: it took something like 43 people something like a full year and produced a menu with nine near-identical options. The developer calls it The Windows Shutdown Crapfest. The lesson there is obvious — too many cooks spoil the soup — and relevant, I think, to our work, with plain-as-day solutions: trim your meeting invite list and get extra hands out of the work. But dig a little deeper for a more important lesson.

Each of those links in the above paragraph is worth perusing, but the most worthwhile link I’ll have for you today is this, Joel’s original 2006 post on this topic, which does a great job of explaining the resulting mess in terms we all should be able to understand:

The fact that you have to choose between nine different ways of turning off your computer every time just on the start menu, not to mention the choice of hitting the physical on/off button or closing the laptop lid, produces just a little bit of unhappiness every time.

How do we expect our respondents to feel when we ask them to tell us if they are Extremely Satisfied, Very Satisfied, Somewhat Satisfied, Neither Satisfied Nor Unsatisfied, Somewhat Unsatisfied, Very Unsatisfied, or Extremely Unsatisfied with something? What about when we ask them that about a whole page worth of somethings? And what about when some percentage of the questions — anywhere from 1/8 to 1/4 in my rough estimate — don’t apply to the respondent at all? I’d argue we create more than “just a little bit of unhappiness every time.”

The lesson is the same as it so often is here: keep it simple. Satisfied/Unsatisfied/Not Sure should be perfectly sufficient in most cases, and has the advantage of making the results much more comprehensible at a glance. When comparing results across multiple questions, or across a long timeline of tracking data, it’s infinitely easier to comprehend a single number. The presidential approval number is generally reported as a single figure: 55% in this poll, 48% in this other poll, 53% a month ago, 58% today, etc. Instantly understandable by everyone, as opposed to something like this:

Today, 23% strongly approve of the President’s job performance; 27% say they somewhat approve. Two weeks ago, 29% strongly approved; 17% somewhat approved.

How do you parse that? Strong approval is down 6 points at the same time that mild approval is up 10 points; overall, if you add the “strong” and “somewhat” numbers together, you can see that overall approval is up four points, but what do you do with those shifts in the gradated responses? Well, if you’re the nightly news — and I’m not suggesting that’s who we should be emulating, necessarily — you ignore them and report it as a four-point climb. (Depending on your viewpoint, you might say the nightly news would be more likely to point out the six-point drop in “one measure of the President’s approval rating” and leave it at that, and I don’t think you’d necessarily be wrong about that observation.) If you’d only asked the question as approve/disapprove, though, you’d give respondents a simpler experience, and you’d give those interpreting the results both an easier time of it and less wiggle room for those with an agenda.
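
To put the arithmetic in one place, here’s a minimal sketch in Python using the hypothetical poll figures from the paragraph above (they’re illustrative numbers, not real polling data):

    # Collapsing a gradated approval scale into a single top-line number.
    # The figures are the made-up ones from the example above.

    def overall_approval(strong, somewhat):
        """Add 'strongly approve' and 'somewhat approve' into one figure."""
        return strong + somewhat

    two_weeks_ago = overall_approval(strong=29, somewhat=17)  # 46
    today = overall_approval(strong=23, somewhat=27)          # 50

    print(f"Two weeks ago: {two_weeks_ago}% approve")
    print(f"Today:         {today}% approve")
    print(f"Change:        {today - two_weeks_ago:+d} points")  # +4 points

    # The single collapsed number is instantly readable, but it hides the
    # offsetting movement underneath: strong approval fell 6 points while
    # mild approval rose 10.

The collapsed figure is the one everyone can read at a glance; whether the underlying shifts matter is a judgment call, but at least the top line is unambiguous.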

Let’s see what happens if we offer fewer choices. You really don’t need nine ways to turn off the computer, or seven ways to tell us how satisfied or unsatisfied you are. Honest.

4 Comments

Filed under answer choices, bad user experiences, Market Research, quality of responses

The Market Research Failure Behind NBC’s Jay Leno Debacle: Part II

Welcome back for the second part of this post.

NBC clearly had a number of problems to solve. Lackluster ratings for the programs they were airing in prime time were one; the fact they’d promised the Tonight Show to Conan to keep him from leaving the network was another, as they’d have to pay him something like $40 million if they broke that contract; the fact Leno didn’t want to retire was a third. So, brilliant plan on NBC’s part: drop all the expensive 10 pm dramas that cost maybe $3 million per episode and replace them with the Leno show, for something like $2 million per week. Save a fortune in production costs, save $40 million in Conan payouts, and “manage for the margins,” as everyone reports NBC’s Jeff Zucker planned to do.
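
As a rough back-of-envelope sketch of that “manage for the margins” math, here it is in Python, using only the approximate figures mentioned above; the assumption of five drama hours per week is mine, purely for illustration:

    # Back-of-envelope production-cost math using the post's rough figures.
    # Assumes one 10 pm drama per weeknight, which is a simplification.

    drama_cost_per_episode = 3_000_000   # roughly $3 million per episode
    drama_episodes_per_week = 5          # assumed, not an NBC figure
    leno_cost_per_week = 2_000_000       # roughly $2 million per week

    drama_cost_per_week = drama_cost_per_episode * drama_episodes_per_week
    weekly_savings = drama_cost_per_week - leno_cost_per_week

    print(f"Dramas at 10 pm: ${drama_cost_per_week:,} per week")  # $15,000,000
    print(f"Leno at 10 pm:   ${leno_cost_per_week:,} per week")   # $2,000,000
    print(f"Savings:         ${weekly_savings:,} per week")       # $13,000,000

    # On those assumptions, Leno could draw much smaller ratings than the
    # dramas and still come out ahead, as long as the lost ad revenue
    # stayed below the production savings.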

Here’s where the research finally comes in: NBC apparently did the math, figured out what sort of ratings Leno would need at 10 pm in order to make the show profitable for the network via ad sales, and conducted viewer research to make sure enough people would watch the show to produce that rating. I’m sure that consisted of asking people variations of “what would you watch if all of the following programs were on at 10 pm,” and running down each night of the week, with Leno inserted in place of each NBC drama that was then airing. The results of that research seem to have been pretty accurate; the network was promising advertisers ratings that were pretty similar to what Leno was actually getting. They were embarrassingly low, I think, but they were around the base level that NBC had promised. The failure of imagination is that they just don’t appear to have asked much in the way of logical follow-ups of the people who said they wouldn’t be watching Jay at 10 — i.e., the vast majority of people. Questions like:

  • Do you watch any NBC shows at 10 pm right now?
  • After you watch one of them, what do you watch at 11? And then at 11:30?
  • How often do you find out about one program by seeing a promo during another?
  • If your favorite NBC show no longer existed, what do you think you would watch at 10 if these were your remaining choices? And then at 11? And 11:30?
  • When you watch an 11:00 newscast, do you always watch the same station? Or do you watch different stations at different times?
  • Do you ever decide to watch an 11:00 newscast because of a promo you saw earlier in the evening?
  • Do you ever decide to watch an 11:30 or 12:30 late night show because of a promo you saw earlier in the evening?

(You can make an argument that they should have been asking a whole series of other questions as well,  like “Is Jay Leno even a little bit funny,” and “Would you rather watch Jay Leno or that guy singing ‘Pants on the Ground’,” or “What the hell were we thinking when we gave the Tonight Show to Leno and not Letterman in the first place all those years ago,” and “Hey what’s that over there oh my god look out it’s Johnny Carson’s ghost and he’s coming at you with a crowbar,” but that’s probably inappropriate here.)

I think if they’d thought to ask those questions, and had the imagination to work through what the answers meant (and shared the data with the affiliates), they’d have foreseen the future: the folks who used to watch NBC’s 10 pm programming are now watching something else at 10, and they’re not turning the channel back at 11, or 11:30, or, in some cases, at all. I really think promos are critical, especially in a world with hundreds of channels and no one reading TV Guide. If I’m not watching your 10 pm show on Wednesday, I don’t know what you’ve got going on in your 8 pm sitcoms on Thursday. If I don’t watch your 10 pm show on Friday, I might not have any idea that you’re airing the wildcard NFL games on Saturday. (Seriously, who knew the Jets/Bengals and Dallas/Philadelphia games were on NBC last week? Complete surprise in my house.) When I’m watching less on your network in general, I’m not seeing you promote your new shows, or new episodes of existing ones — I’m getting nothing, and you, my soon-to-be-owned-by-Comcast friends: you are in a death spiral.

It’s worth noting this wasn’t solely Leno-related: local news itself has been suffering greatly in recent years, and losing its lead-in was sure to kill it (at least temporarily) in some markets. I touched on this briefly near the end of Part I. Talk about your market research failures: each market has three or more stations all bringing you the same news you can’t use: the same overhyped “team coverage” of the same piddly winter storm, the same drug-related shooting nowhere near your home or office, the same fire at the same house with the same interviews with the same neighbors, and all of them in a race to cut costs by dumping whatever talent they still have with a connection to the community or an ability to actually report. Who needs it?

So, how does this all end? Looks like it’ll be with Conan leaving the network and Leno back on at 11:30, unfortunately. In a fair world, it would end with Jeff Zucker and Jay Leno both out of jobs, but that strikes me as unlikely, especially considering how much NBC enjoys mediocrity these days. Could they have avoided all of this with some smartly designed research? Truly, probably not. I think they were hell-bent on this path and wouldn’t have paid much attention to any data contradicting what they already decided they “knew” — that America loves Leno and that he’d be a surefire hit at 10. Which, come to think of it, is another failure of imagination — the inability to imagine that you might just be wrong.

2 Comments

Filed under Conan, Failure of imagination, Leno, Market Research, TV