Tag Archives: sampling

What is a hard to reach group? Not what you think.

I recently saw a blog post titled “What is a hard to reach group?” The answer seemed obvious – young men, Hispanic people, people with high incomes. There are lots of demographic groups that are hard to reach and that cause researchers a lot of stress when it comes to filling every cell in a sampling matrix.

But that wasn’t the first thing that came to mind for me. The first thing I thought of was that hard to reach people are those for whom we haven’t found the right value proposition. We haven’t found the incentives that are meaningful to them. That’s the simplest answer.

But, it also means we haven’t found the type of research that feels important to them – our surveys aren’t meaningful to them, our focus groups don’t put them at ease, our individual interviews feel unnatural to them.

Maybe these ‘hard to reach’ groups aren’t hard to reach at all. Maybe we’ve spent all of our time trying to attract and interest mini-mes. People just like me. People who completed high school. People who went to college. People who work from 9 to 5 and then go home, make dinner, take care of the kids and get to bed by 11.

Maybe, if we stopped trying to recruit mini-mes, if we stepped into the shoes of someone who works the night shift, someone who plays video games until 3am, someone who only wears designer shoes, maybe we’d find that these hard to reach groups aren’t so hard to reach at all.

Annie Pettit, PhD is the Chief Research Officer at Peanut Labs, a company specializing in self-serve panel sample. Annie is a methodologist focused on data quality, listening research, and survey methods. She won Best Methodological Paper at Esomar 2013, and the 2011 AMA David K. Hardin Award. Annie tweets at @LoveStats and can be reached at annie@peanutlabs.com.

Social Media: ‘Convenience Samples’ without the guilt?

by Kathryn Korostoff, Research Rockstar LLC

Two of today’s social media track speakers helped shed light on a great issue: using online communities as a convenience sample, and doing it well.

One was Dawn Lacallade from ComBlu. She spoke twice today, though I only had the pleasure of observing one of her sessions. I also enjoyed the presentation by Sean Bruich, from Facebook. Sean generously shared a lot of examples with real data, collected by Facebook. Sitting in these two sessions back-to-back gave me a great list of specific ways to think about the credibility and reliability of social media-based research.

As a starting point, let’s be honest: one of the challenges with social media research is, indeed, perceived credibility and reliability. Lots of folks suspect that all of this ‘social media research’ hype is, well, a bit too hypey.

Now before I begin, I want to note that while many people use the term ‘social media research’ to mean sentiment monitoring, both of these speakers were more focused on using communities, whether private, branded ones (such as a company might build) or a broader one (Facebook), as a place to conduct research. So the context here is online communities, open or closed, brand-hosted or not, as a sample source.

So How To Improve The Perceived Reliability and Credibility of SM-gathered Research?

1. Trust but verify. As Dawn suggests, ideas or results from a specialty community can be vetted at the brand’s website as a single-question poll. For example, if you learn in your community that feature X is critical, ask a simple question on your website: is it? Sometimes you may find that the larger group is aligned with the smaller, more specialized one. But in any case, you don’t want to over-promote results from the community without first vetting them with a larger population. This will help overcome legitimate objections to community-based research results, such as, ‘how can we trust data from a group of people obviously already biased towards our brand?’

2. Educate research clients about the community, as a preemptive strike. Your audience may be making some incorrect assumptions about the community profile. Sean from Facebook shared some data that would make even the biggest cynics of convenience sampling take a second look. Here are some highlights:

  • Analysis shows that Facebook poll results about recent election outcomes were nearly identical to those from Gallup and Rasmussen. In fact, the FB results were closer to each of those polls than Gallup and Rasmussen were to each other!
  • Facebook gives excellent international access; indeed, most users are non-US. And anyone who does global research knows how challenging data collection can be in some parts of the world.
  • Research by Facebook suggests that a convenience sample drawn from Facebook matches the overall Internet population well on nearly any measure.

3. Demonstrate affordable innovation. One of the powerful examples was from Facebook, on the topic of ad testing. Consider this scenario: Brand X plans to start a new ad campaign and wants to test effectiveness. On FB, the target market can be selected (based on interests, not just demographic data). Then the target is exposed to the ad, likely in multiple versions, while a small percentage is held out as a control group. Next step: post a one-question poll to the target market. The question might be on brand recognition, brand preference, purchase plans, whatever is relevant. One can easily compare the results between the ad-exposed group and the control group (a quick sketch of that comparison follows below). In this way, the brand can tie the ad testing to the polling question with whatever timing it wishes (even same day). Cool.
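For readers who want to see what that exposed-versus-control comparison might look like in practice, here is a minimal sketch in Python. It assumes the one-question poll yields simple yes/no counts and uses a standard two-proportion z-test; the function name and all counts are hypothetical illustrations, not Facebook’s actual tooling or data.

```python
# Minimal sketch (not from the talk): compare a one-question poll between an
# ad-exposed group and a held-out control group with a two-proportion z-test.
# All counts below are hypothetical placeholders.
from math import sqrt
from statistics import NormalDist


def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for the difference
    between two sample proportions (e.g., brand-recognition rates)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical example: 540 of 1,200 exposed respondents recognize the brand,
# versus 410 of 1,150 control respondents.
z, p = two_proportion_z_test(540, 1200, 410, 1150)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point is simply that once the exposed and control counts are in hand, the comparison itself is a few lines of arithmetic; the hard part is the targeting and the hold-out, which is what the FB example handles.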

Bottom line

Many researchers may be feeling skeptical about gathering data from communities. But as these points illustrate, it may not be as risky as one might assume. And we all just need to be realistic: nobody is saying this replaces the need for all traditional market research. Still, after these sessions, I am more convinced than ever that it will replace some.

[As a tangent, both speakers happened to emphasize the value of the one-question poll. It gets a much higher response rate than a link to an online survey. And since you are working with a known community anyway, tedious, invasive questions about age, gender, and such do not need to be asked. So get to the point, don't abuse the audience, and ask a single question.]