
Alligators in the Board Room

By: Christina Luppi, Manager, Sentient Decision Science



This post was originally published on the Sentient Decision Science Blog.
‘Command the Board Room’ is the theme at TMRE 2016. A lofty
goal, perhaps. But maybe not so lofty if you’re equipped with the right
insights.
Soon Yu, TMRE 2016's chairperson, immediately endeared himself to the audience
by dubbing himself the ‘biggest failure’ in the ballroom. He cited multiple
tanked businesses, several career restarts, and a credit score of 300
to support the claim. Why so eager to have his failures be known? To help
people better understand how they can succeed.
‘Insights teams need to play a critical role in the board
room,’ Yu stressed. When decision makers want to know why big
ideas fail, they find the answer is often human.
Even when the desirability is validated, when the
concepts are good and the budgets are excellent, ideas can
bomb because of people.
People run into walls of fear when approached with a
new idea, said Yu. Next, they run into walls of apathy because so
many things are competing for their interest. Lastly, they run into walls of
disbelief and are desperate for proof.
‘Ideas don’t sell themselves,’ Yu explained. ‘You can’t just
have the right content. It requires us becoming champions in the board room.
Those walls are human dynamics and exist even with the right content.’
The walls Yu mentioned aren't about what is right and wrong; they're emotional barriers all marketers have to deal with at some point. Insights help us break through.
TMRE keynote speaker Zoe Chance left
corporate marketing to get her PhD because she wanted to study the complexities
of decision making. Really, frustration in the field made her determined
to help people make research-based decisions that make sense, rather than see
them go with their gut.
What she found is that marketers actually need to suck
it up and learn to work better with the board members who make gut
decisions: that's just who we are as a species. Humans are ruled by
‘alligator psychology,’ she noted.
Something we know as System 1 thinking.
'I refer to [System 1 and System 2] as the "alligator brain" and the "court,"' Chance explained. 'System 1 is unconscious and fast, an automatic decision maker. We only imagine the court is making more decisions than it is.'
Rather than trying to force-feed data down the throats of people who won't swallow, Chance suggested that researchers better understand the emotional motivations of our System 1 brains.
She outlined five key forces of influence:
  • Labeling: Giving a name to behavior you want to encourage or discourage.
  • Ease: Ease of use is a more powerful motivator than even pleasure. This is a principle practiced to perfection by companies like Amazon and Uber.
  • Attention: Moments of truth, open loops, and the Zeigarnik effect.
  • Scarcity: Operates through loss aversion.
  • 'Hot potato': When faced with resistance, instead of pushing, hand back a problem to solve.
Notice the acronym? ‘If you’re going to walk an alligator,
it helps to have a LEASH,’ Chance said with a smile.
Of course, alligators can be lazy. They sometimes need
persuading to bite.
Stephen Dubner, best-selling author of Freakonomics and SuperFreakonomics, talked about the power of incentives in marketing.
'Never underestimate the power of free. It doesn't matter how much of something somebody's got, how much they're worth; the alligator part of our brain will just zap at it.'
To illustrate, Dubner told a story of how the world-renowned
Cedars-Sinai Hospital in Los Angeles dealt with a particular problem, a big
problem. Doctors were not washing their hands.
Yes, really.
The issue wasn't a matter of education (doctors know the science and danger of bacteria); it was a matter of communication. How do
you tell medical professionals they must do something they already know they
must do?
The hospital tried incentivizing a hand washing program with
Starbucks gift cards. And the wealthy MDs snapped them up as though they
couldn’t afford their own coffee.
'They turned a life and death problem into a game they wanted to play,' said Dubner. But the cards didn't raise the overall rate of hand washing.
'Data can get you at the "what" pretty easily, and the "what" didn't work. The "why" is complicated.'
The 'why' gets into psychology, sometimes even into religion. It also delves into the subconscious. What doctor would admit to not washing their hands in a hospital?
'Self-reported data is close to worthless,' said Dubner. 'This is why we need to know not what people tell us they will do; we need to get data about what they actually will do.'
Eventually, the board at Cedars-Sinai created graphic images of the bacteria found on their own hands and placed the image on every computer screen saver in the hospital. By showing doctors the danger and triggering an emotional response, the research team got the hand-washing rate up to 100 percent almost overnight.
‘If that’s the way the human brain works, let’s find a way
to take advantage of that and exploit it for some good,’ Dubner concluded.

In that light, understanding the alligator brain actually sounds pretty rational.

How can I blend sample sources without impacting my data?

By: Susan Frede, Vice President of Research Methods and Best Practices, Lightspeed GMI

Research has consistently shown that all panels are not the same. Recruitment sources and management practices vary, and this can cause differences among panels. Beyond panels, there are other sources of online survey respondents, such as river, dynamic, and social media sources, and these can produce data that differ from one another as well as from panels. Given the wide variety of sample sources, and their benefits and drawbacks in cost and quality, researchers often struggle with the question, 'How can I blend in other sources without impacting my data?'
To help our clients answer this question, Lightspeed GMI modeled the impact of adding in a second source of online respondents. For this exercise we are considering two sources: Source A and Source B. The assumption is that Source A is the primary source and there is a need to blend in Source B. There are differences in the scores between the two sources on the concept measures; for example, the purchase intent scores are higher for Source B.
Given
the differences, adding in Source B has the potential to impact the scores.
However, it takes a large influx of Source B to impact results (see Chart 1: Impact on Purchase Intent Scores). The proportion of respondents saying they
definitely would buy goes from 7.8% to 8.6% when the sample blend is 50% Source
A and 50% Source B. The percentage saying they probably would buy goes from
16.6% to 19.0%. Neither change is statistically significant with a typical base
of 400 respondents and a 95% confidence level. 
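
To sanity-check that conclusion, here is a minimal sketch (in Python, not Lightspeed GMI's actual analysis) of a two-proportion z-test on the purchase intent figures above. It treats the 100% Source A sample and the 50/50 blend as independent groups of 400, which is a simplification since the blend shares respondents with Source A, so the result is only an approximation.

```python
# Minimal sketch: approximate significance check of the "definitely would buy"
# shift from 7.8% (100% Source A) to 8.6% (50/50 blend), and the "probably would
# buy" shift from 16.6% to 19.0%, at a base of 400 per group.
from math import sqrt

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """z-statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z_definitely = two_proportion_z(0.078, 0.086, 400, 400)
z_probably = two_proportion_z(0.166, 0.190, 400, 400)
print(f"definitely would buy: z = {z_definitely:.2f}")  # ~0.41, well below 1.96
print(f"probably would buy:   z = {z_probably:.2f}")    # ~0.89, also below 1.96
```

Both statistics fall well short of the 1.96 cutoff, consistent with the conclusion that neither change is statistically significant at a typical base of 400 and a 95% confidence level.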
Another
way to look at the impact is to examine the number of differences on scores in
the blended sample compared to 100% of Source A (see Chart 2: Number of Differences versus 100% Source A). By adjusting the proportion of sample coming
from each source, it is possible to identify the point at which concept scores
are impacted. Five key concept measures have been evaluated (purchase intent,
uniqueness, liking, relevancy, and likelihood to recommend). For example, when 75% of the sample is from Source A and 25% from Source B, only one difference of +/-2 percentage points is observed versus a 100% Source A sample. Even when the sample is adjusted to a 55/45 blend, all the differences are less than or equal to +/-3 points, which in most cases is not statistically significant.
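
As a rough illustration of the kind of sweep behind Chart 2, the sketch below blends per-source scores at several Source A/Source B ratios and reports how far each blend drifts from the 100% Source A benchmark. Only the purchase intent figures appear in this post; the Source B purchase intent value is derived by assuming the 50/50 blend is a simple average, and the other four measures are purely illustrative assumptions, so the output will not reproduce Chart 2 exactly (real blended samples also carry sampling noise).

```python
# Hypothetical blend sweep (illustrative values, not the actual Chart 2 data).
# Purchase intent top-two box: Source A = 7.8 + 16.6 = 24.4; the 50/50 blend of
# 8.6 + 19.0 = 27.6 implies Source B ~= 30.8 if the blend is a simple average.
SOURCE_A = {"purchase intent": 24.4, "uniqueness": 31.0, "liking": 42.0,
            "relevancy": 38.0, "likelihood to recommend": 29.0}
SOURCE_B = {"purchase intent": 30.8, "uniqueness": 36.0, "liking": 46.0,
            "relevancy": 41.0, "likelihood to recommend": 35.0}

def blended(pct_a: float, measure: str) -> float:
    """Score for a blend with pct_a percent of the sample from Source A."""
    w = pct_a / 100.0
    return w * SOURCE_A[measure] + (1 - w) * SOURCE_B[measure]

for pct_a in (100, 90, 75, 55, 50):
    shifts = {m: blended(pct_a, m) - SOURCE_A[m] for m in SOURCE_A}
    flagged = sum(abs(d) >= 2.0 for d in shifts.values())
    print(f"{pct_a:>3}% Source A: max shift "
          f"{max(abs(d) for d in shifts.values()):.1f} pts, "
          f"{flagged} measure(s) shifted by 2+ pts")
```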
The
data suggests that as long as additional sources account for 40% or less of the
total sample, data should not be impacted. 

However, Lightspeed GMI recommends a more conservative cap of
25-30%. Because there are several situations that may call for an even
more conservative blend, consider the following before making any changes:
  1. Tracker and wave studies: Trendability is key in tracker and wave studies. Rather than making one big change, it is better to make a series of small changes (+/-10%) from week to week or wave to wave and monitor the impact.
  2. Unproven panel and dynamic sources: Until the quality of an unproven source is understood, it is better to be conservative in the amount blended in.
  3. Low incidence studies: We have seen a higher proportion of questionable behavior on low incidence studies, so it is important to be more conservative when making changes.

This
analysis also shows that we don’t have to maintain an exact source blend for
trackers (e.g., 50% Source X and 50% Source Y), which allows us to more
efficiently use sample. As long as we are within +/-5 to 10% for each source
(e.g., 40-60% Source X), data will not be impacted.
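
A small sketch of how that tolerance could be monitored wave over wave, assuming a 50/50 Source X/Source Y target and the +/-10 percentage point band described above; the wave percentages are made up for illustration.

```python
# Minimal per-wave blend drift check for a tracker (illustrative, not a
# Lightspeed GMI tool): flag waves where Source X falls outside 40-60%.
TARGET_PCT_X, TOLERANCE = 50.0, 10.0

waves = {"W1": 52.0, "W2": 47.5, "W3": 38.0, "W4": 55.0}  # % of completes from Source X

for wave, pct_x in waves.items():
    drift = pct_x - TARGET_PCT_X
    status = "OK" if abs(drift) <= TOLERANCE else "REVIEW: blend shifted too far"
    print(f"{wave}: {pct_x:.1f}% Source X (drift {drift:+.1f} pts) -> {status}")
```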

Marketing Research 101: Six Common Mistakes in Survey Questionnaire Design

Online surveys have become the cornerstone of the market research industry. They quickly and easily allow businesses to gather consumer data, which they can use to enrich products, alter marketing campaigns, and tailor messaging. Unfortunately, modern technology, which has improved the ease with which companies can generate surveys and analyze results, has also spawned an era in which an increasing number of surveys are poorly formulated, limiting responses or skewing data to misrepresent customer intentions. By applying a few basic tenets of survey design, we can easily increase engagement and improve data accuracy and overall quality.
Questions that Affect the Answer
1. Leading Questions: Questions should be phrased in a way that suggests all responses are equally viable.
2. Loaded Questions: Write questions that do not inherently encourage the participant to be less than straightforward with the response.
Questions that Result in Unclear Data
3. Double-Barreled Questions: Questions that ask two things and offer only one opportunity to answer do not provide usable data.
4. Questions with Absolutes: Unless the subject is clear-cut (e.g., 'Are you a man?'), the question should provide enough choices to represent the range of participant responses.
Questions that Confuse
5. Jargon: Unless the participant pool is very specific, such as a ward of doctors at a local hospital, questions should avoid abbreviations, industry-specific language, or slang. Using clear, common language ensures that 100% of the participant pool has an equal understanding of the content.
6. Offering Too Many Options: Potential responses to online survey questions should be limited to five or six. Offering more options can confuse the participant, clutter the survey, and negatively affect survey completion.
While
there is much more to online survey design than adhering to these six
principles, avoiding the most common errors of question formulation will
greatly increase a survey’s return rate, accuracy, and relevancy.
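
As a closing illustration, here is a minimal, hypothetical sketch of how a few of these checks could be automated when questionnaires are stored in a simple machine-readable form. The function signature, keyword heuristics, and option-count cutoff are assumptions made for the example, not features of any particular survey platform.

```python
# Hypothetical questionnaire "lint" sketch covering a few of the mistakes above:
# absolutes, double-barreled phrasing, and too many response options.
ABSOLUTES = {"always", "never", "all", "none", "every"}

def lint_question(text: str, options: list[str]) -> list[str]:
    """Flag some common questionnaire-design mistakes for one question."""
    issues = []
    words = {w.strip("?.,").lower() for w in text.split()}
    if " and " in text.lower() and text.count("?") <= 1:
        issues.append("possible double-barreled question (asks two things at once)")
    if words & ABSOLUTES:
        issues.append("contains an absolute; consider offering more nuanced choices")
    if len(options) > 6:
        issues.append(f"{len(options)} options offered; limit to five or six")
    return issues

print(lint_question(
    "Do you always read and share our newsletter?",
    ["Yes", "No"],
))
```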

About the Author:  As Global Marketing Associate, Tara Wildt leads
Lightspeed GMI’s digital and interactive marketing platform, branding
initiatives and social media strategy. In her role, she develops creative
solutions and collateral for Lightspeed GMI’s product suite and plays a key
role in the company’s strategic development. In addition, she has oversight of
the company’s philanthropic and community outreach initiatives.  

10 Ways to Drive Survey Engagement

By Melissa Moxley,
Lightspeed GMI

According to a spring 2015 study from Microsoft, the average human
attention span has fallen below that of goldfish — and you can blame it on the
gadgets we use to watch YouTube videos and play “Crossy Road.” The researchers clocked the
average human attention span at just 8 seconds in 2013, falling 4 seconds from
the 12-second average in 2000, and putting humans just 1 second below goldfish.
We made the transition from CATI to online, but now we need
to make the transition from online to mobile. But how do we keep survey respondents engaged in a way that captures their attention? Can we carry them past that eight-second threshold?
From a questionnaire design perspective, we need to balance the
marketing research hat with the respondent hat. Yes, we need to ensure our
paired comparison questions are all implemented for proper analysis, but let’s
grab the attention of our respondents with some color and images, bringing life
to our questions. Let’s be their distraction.  
Regardless of whether respondents are on the go or planted in their office or home, distractions are all around them. Have you tried to take a survey while emails are accumulating in your inbox? Your messenger pings are flashing and your boss is seconds away from walking in for your 2:00 p.m. meeting? How about taking a survey while cooking dinner, briefing
your husband on the day and pacifying the toddler pulling at your leg? While
these are exaggerated situations, reality isn’t too far off. Respondents
nowadays are taking surveys from anywhere and at any time. We need to capture
their attention and retain it.
Be the Distraction

So how do we do it? While we can’t sit next to every
respondent as they enter in their answers, we can take measures to prevent them
from closing their browser and moving on. Your survey should serve as the distraction; your respondent shouldn't be distracted from it.
Below are 10 ways to design an appealing survey. Remember, you only have eight seconds to engage.
1. Scrolling = Work = Dropouts
  • No one wants to work to read an attribute list or find the 'next' button in order to progress. The key here is ease: try to limit your response lists to 15 points and minimize scrolling as best you can.
2. Be Concise: The Shorter, The Better
  • I like to promote the Twitter mentality: 140 characters, short, concise, and easy to digest.
3. Avoid Repetition: Didn't I Already Answer That?
  • Nothing is worse than being mid-survey and thinking just that. Don't ask respondents to rank their top three brands and then turn around and ask them to rank their top six.
4. Spacing: Feng Shui Your Survey
  • The spacing between the question text, response lists, and images needs to be optimized within the screen so that there is balance and symmetry.
5. Sizing: Bigger Isn't Always Better
  • This applies to font size throughout the survey (consistency is key) as well as image sizing. There needs to be a middle ground between squinting in order to read the question and scrolling in order to see the entire ad or concept being reviewed.
6. Consider Compatibility: Are respondents going to be taking the survey on a PC? Tablet? Mobile? All of the above?
  • Test surveys on all potential devices, and don't allow mobile or tablet usage if the survey isn't compatible; it's not worth jeopardizing the data or the respondents' experience.
7. Question Types: The Right Question Yields the Right Answers
  • If you are asking respondents to 'select all that apply,' ensure they can do so. In turn, respondents should be able to visually tell which options they've selected, getting rewarded for their answers.
8. Get Active! Designing 'Active' Questions = Engaged Respondents
  • It's not just for physical health anymore. Mentally, respondents want something to do when giving their opinion; they like dragging logos in order to rank them or sliding the scale to the number '10' because they really do 'strongly agree' with that statement.
9. Find Your Inner Artist: Colors, Visuals, and Design Elements Go a Long Way
  • Using these features effectively in a survey locks in the respondents' attention and keeps it from click to click.
10. Survey Experience: Taking the time to remove your researcher hat and put on your respondent hat helps ensure the experience is an enjoyable one.
  • Once you're positive you've got the survey of your dreams in place, take a step back and look at it from a high level. Are the colors consistent from page to page? Was that Arial font on question 10 when the rest of the survey was in Times Roman? When the instructions say 'roll over image to zoom,' is it really working?
Still with me? If you’re at the end of this list, you’ve
made it past the eight second mark and are clearly engaged!
About the Author:  Melissa Moxley serves as Lightspeed GMI’s
Global Product Marketing Manager. As a key member of the Global Marketing and
Business Strategy Team, Melissa drives the adoption and implementation of
QuestionArts, Lightspeed GMI’s survey programming and design. As an escalation
point for regional teams, Melissa strategizes on commercial and marketing based
engagements and ensures global compliance.

Permission to Walk Away from Survey Trends

By: David Shanker, CEO,
The Americas, Lightspeed

In our complex world, the accelerated pace of innovation and
technology has created a struggle in the marketing research industry. Consumers
are our greatest assets, but they are overloaded: countless digital marketing
campaigns, social media platforms and infinite numbers of apps are fighting for
their attention. Our attempts to quantify their behavior and attitudes are
heavily influenced by technology. But with the frequency of change so rapid,
how do we judge if we are capturing the ‘norm’? Are we capturing their full
attention?
There is a need for change in our industry. Similar to the
advertising industry, marketing research is heavily fragmented. We can no longer passively capture data; we need to ask, listen, and learn while being more nimble than ever. As we look more and more at consumer behaviors, we need to think more about the data than about the tools capturing the data. Norms are evolving, driving us away from traditional survey trends.
Today, the possibilities for how we connect with a consumer move faster than ever. When people take surveys, they need to have the same experience no matter what device they are on. The adoption of mobile devices, particularly smartphones, is having a big impact on our ability to provide representative samples; in fact, the impact of mobile devices on our ability to reach people cannot be overstated.
  • Nielsen reported that as of Q4 2014, over 70% of people in the United States own a smartphone;
  • This compares to only 22% in 2010;
  • Current smartphone ownership is even higher, at 80%, for the highly coveted Millennial and multicultural groups.
Adapting to change
Change is hard. We realize it is easier said than done; it takes a lot of work. But the status quo is not an option for surveys in today's industry. You will miss young adults and multicultural consumers; you'll also miss members of the general population who use their smartphones to take surveys, a percentage that will continue to grow. So what should we do?
1. Surveys have to be shorter: 15 to 20 minutes maximum
2. They have to be designed to be engaging and take advantage of the latest programming techniques; getting caught in grid paralysis is no longer an option
3. Surveys have to render appropriately for whatever device is being used
The 'whys' and the 'so whats' need to balance traditional big
data. Consumer insights are not only necessary, but essential. The need to
connect with the consumer in the right way at the right time will be as
important as the technology used to do it.
About the Author: David
leads the Lightspeed business across the Americas region, unifying and focusing
systems and expertise to meet clients’ dynamic needs and consistently exceed
their expectations. A veteran of 20-plus years in sales, marketing, operations
and research, he has served in senior management roles in established, start-up
and turn-around business situations. His strategic and operational leadership
has resulted in significant business improvements for companies such as Ipsos,
OTX Research and Information Resources Inc. Prior to joining Lightspeed, David
was CEO of PINCHme, a digital marketing/market research start-up that delivers
insights to leading CPG companies through a unique approach to consumer
research.

Do-It-Yourself Research is on the Rise

Roe vs. Wade, Gun
Control, Immigration & Capital Punishment. 
It wasn't that long ago when a simple conversation about Do-It-Yourself Online Research (DIYOR) among the market research community felt like a heated debate with the same intensity as the aforementioned topics.
For all intents and purposes, let's not debate the pros and cons or the validity of DIYOR within this space. These topics and arguments
are already well documented and discussed. Instead, let’s take a look at the industry’s past,
present and future.
DIYOR began in the late 1990s and moved past the introduction stage of the product life cycle in the late 2000s. Its current fragmentation of companies resembles the fragmented market research industry, where a handful of major players are accompanied by a majority of smaller companies.

DIY Research is in the growth stage of the product life cycle

The DIYOR industry, as well as the NewMR industry of which it is a subset, is presently within the growth stage of the product life cycle, as revenues are increasing year over year. Some
suggest the industry is cannibalizing Traditional Research. However, relatively
recent worldwide sales figures suggest that NewMR is supplementing Traditional Research, not cannibalizing it.

Some of the major players in the DIYOR market are beginning to behave as if operating within the maturity stage of the product life cycle and are buying competitors, forming
partnerships, and extending product lines. This behavior seems relatively quick, as only a few years have passed since the industry outgrew the introduction stage. Then again, perhaps the move to maturity for some isn't so quick after all, since first and foremost DIYOR companies are technology companies that exist in an ever-changing market.
In terms of present offerings, two key factors have yet to
normalize in the DIYOR market: Service & Price.
Service and research design offerings in the market range from truly unaided services to aided/self-guided services. DIYOR vendors in the unaided market provide the technology for customers to field quantitative and qualitative studies, but they do not assist with questionnaire design and they deliver the results of the survey as raw data without data analysis services. Aided/self-guided companies, on the other hand, provide a full suite of self-guided questionnaire design templates as well as data analysis applications. For an extra fee, some aided/self-guided companies can provide an experienced researcher to help design customers' quantitative and qualitative projects. And of course, there are DIYOR companies that exist somewhere between both ends of the spectrum.

Both services and prices widely vary in the DIY Research market

The price of service in the DIYOR market scales with the amount of service provided, and overall price points display a fairly wide variance. Charges range from free, to per respondent, to per month, to per year, to sliding-scale credits, to basic, intermediate, and advanced user profiles, to enterprise services, and so on. Get the picture?

It’s going to be a challenge for consumers to truly evaluate all the different price points, at all the different
offerings, for all the different users, at all the different levels of service.  Without a doubt, the rising DIYOR industry is in need of a solid pricing study that will ultimately optimize and ease consumers’ purchasing decisions.
So what lies in store for the DIYOR industry? My humble prediction is that within the next five years, larger full-suite, self-guided DIYOR companies will continue to purchase smaller DIYOR companies that offer attractive technology and operate within a niche of the market, in order to add to their portfolios of services. Customers by this time will have determined for themselves which product offerings at which price points make the most sense. This combination of vendor consolidation and educated pricing from the consumer's point of view will ultimately streamline the DIYOR industry as a whole and normalize its product offerings and prices.
In your opinion, where is the DIYOR industry heading in the next 5
years? Please comment below.
Chris Ruby is an award-winning Marketing Research & Consumer Insights Executive with Fortune 500 consulting experience. His niche is the ability to turn complex data into compelling stories that induce a call for action among key decision-makers. His work has been featured by MRA, MRIA, IIR, Norstat Times, Chadwick Martin Bailey & the Optimization Group. Keep up with Chris Ruby by following him on Twitter @ChrisRubyMRX or by reading the Chris Ruby Market Research Blog.

Is That Multinational Research Project Multinational Enough?

Question: What percent of market research projects conducted for multinational companies are conducted with a multinational scope?

It's kind of a trick question. After all, how many countries are required to be 'multinational' in scope? Are three countries sufficient? Four? Or does it really need to be ten or more for a truly global company?

The brutal reality is that most companies selling in numerous countries can only afford to do research in a subset of them. How does a market researcher deliver appropriate insights with this obstacle? Here are three steps that help mitigate the risk:

  • State the obvious. What’s obvious to you is often less
    so to the client. Even if you think the geographic scope is clear, state
    it clearly and multiple times. What is the geographic scope of the
    research? A specific country? Region? How many languages were involved? I have seen it happen many times: the audience assumes that since their company is 'global,' the research is as well, when in fact the research may have been based on just two or three countries.
  • Break with convention. Too often, researchers select
    countries based on past projects (‘We always do our research in the U.S., Canada
    and France.’). Challenge old assumptions! What mix of countries will give
    you the insights you need to inform current business initiatives?
  • Illustrate the importance. Educate the audience on how
    geographic variations impact customer behavior and attitudes. Here are a
    couple of examples that help increase sensitivity to the importance of
    geographic variations:
    • Different colors have different associations by
      geography. Red is ‘positive’ in some cultures but negative in others.
      This can impact reactions to ads and product packaging.
    • The role of male versus female heads of households as it relates to purchase decisions for certain categories varies by country.
      Thus, research about purchase intent or product attribute preferences
      based on one country cannot be assumed for all.
Multinational brands don't always have the time or budget for truly multinational research. So the onus is on the
researcher to make sure that market research is understood and interpreted
correctly. And perhaps by raising client awareness of how important geographic differences
can be, budget for broader geographic scopes will be allocated next time
around.
This post is by guest blogger Kathryn Korostoff of Research Rockstar LLC (Training.ResearchRockstar.com)

Goop Survey: Where Luxury Brand Meets Design

I'll fess up and let you know that I have subscribed for years now to Gwyneth Paltrow's Goop, a weekly lifestyle publication that curates A-list/luxury recipes, health tips, and fashion and cultural news for the masses.

Although Gwyneth, and by extension her Goop brand, often receives a lot of flak for its "smug," inaccessible world view, I actually enjoy its recommendations and insights into a world of luxury, much akin to the appeal of Oprah's favorites.

Two weeks ago, the marketer in me reveled at the survey in Goop's 5th anniversary newsletter. They promoted it in their HTML email along with an animated GIF created by Rachel Ryle, Marketing Director at Ubooly and an illustrator and animator. You can see some of her work on Instagram.

I thought it was a clever and fresh way to promote a survey, one very much in line with the Goop brand and its consumer audience. Take a look for yourself.

As it appeared in the newsletter:

I can see how the trend toward a more visual, Pinterest-style web influenced this design, and it beats a boring text link any day. What do you make of it?

ABOUT THE AUTHOR

 

Formerly a senior copy editor at Thomson Reuters, a research editor at AOL, and a senior web publicist at Hachette Book Group, Valerie M. Russo is editor at large of The Front End of Innovation Blog, The Market Research Event Blog, The World Future Trends Tumblr, and the Digital Impact Blog, and also blogs at Literanista.net. She is the innovation lead and senior social media strategist for the Marketing and Business Strategy Division of the Institute for International Research, an Informa LLC, and her poetry was published in Regrets Only, on sale at the MOMA Gift Shop. Her background is in Anthropology and English Literature. You can reach her at vrusso@iirusa.com or @Literanista.

Live from FOCI 2013: Big Data, Little Data, New Data. aka Evolution

Always nice to get a big-picture history session: market research started because the data needed to make decisions was rare, which then gave rise to specialized skill sets revolving around analysis and such.

So then when did things blow up into big data?

Fast forward to today, and you have long surveys, declining response rates, flatlined metrics, very little tie-in to business performance, and very low actionability. The reason for this is that data is no longer rare. Growing exponentially with the population, it has exploded as both a challenge and an opportunity.

As Larry Friedman pointed out, it's an evolution that now needs a new mindset. Old approaches are not necessarily right, and the assumption of truth is often wrong. It's almost like parenting and the generation gap, only applied to an industry. Technology will play a big role here, as he mentioned that SPSS and the like may not work so well anymore. Multi-source rather than single-source data will be necessary for modeling. Social media monitoring and discussion make it easier to monitor and track responses to questions that no longer need to be asked, mostly psychographic or tracking related. What people say is acquired from social media, while tracking and timing behavior is found from other sources.

Ultimately it's about the integration of different data sources, of which only one is a survey. Evolution has never been as apparent as it is now.

Sourabh Sharma,
Communication & Social Media Research Expert at SKIM, an international
consultancy and marketing research agency, has a background in engineering,
marketing and finance from the University of Pennsylvania, and the Wharton
School and Rotterdam School of Management. Having worked in marketing and
product development at L’Oreal, followed by a stint in management consulting,
he now passionately enjoys the world of social media, and can be found on every
platform with his alias sssourabh. He is a food critic and a fashion writer,
and documents these alongside strategy on his blog called
3FS. He may be reached at s.sharma@skimgroup.com.
Follow him on
@sssourabh.

How does traditional survey research stack up to social media?

From the vaults:

How does traditional survey research stack up to social media? Here is Randy Brandt's presentation from The Market Research Event 2012.
You can see more details of the study in his deck, but here’s a video that captures more.

 Via Maritz Research Sound Check.