Kodak Gallery Gets the Big Picture Via Text Analytics
By Marc Dresner, IIR USA
To paraphrase a colleague who monitors our site traffic: ‘I think text analytics is a hot topic.’
This turned out to be a mild understatement: The chart that accompanied her message showed a shocking spike in traffic on the day we posted Part 1 of our “Text Analytics for (Really Smart) Dummies” series.
As I suspected when I first started exploring the subject, it appears many market researchers feel they have just enough information about text analytics to be dangerous, and there’s clearly a widespread desire to learn more.
To recap our series thus far:
- Part 1 featured a specialist provider with a new DIY product and an unusually savvy take on text analytics in market research;
- Part 2 interviewed a corporate tech giant’s researcher who’s using the company’s proprietary text analytics product to study its legion of employees.
For Part 3 of our series, we’ve homed in on an internal research department at a forward-thinking division of a classic company with a traditional, outward-facing consumer perspective.
It’s no secret that Eastman Kodak (OTCQB:EKDKQ) has filed for bankruptcy, and it appears the company may sell Kodak Gallery, the 131-year-old film pioneer’s online imaging business, to competitor Shutterfly for roughly $23.8 million.
To thrive as a researcher in this uncertain environment requires ingenuity and a reasonable tolerance for risk in order to move the business forward.
And Kodak Gallery’s Lori Tarabek has leveraged these traits to great advantage, as evinced in particular by a smart gamble on text analytics that has paid off.
So this third installment in our text analytics Q&A series is, in my opinion, not only the most inspiring of the bunch but may also be of the most practical value to our audience.
Thanks for reading, and I encourage you to share your own stories and questions about text analytics with our community. Keep the conversation going!
Q. What attracted you to text analytics?
I needed to find a way to analyze 1.5-2K weekly open-end responses from two NPS/satisfaction tracking studies, without sufficient time or resources for traditional coding. The goal was to find a way to categorize these verbatim responses into logical topic areas that could serve as a diagnostic indicator for key metric trend changes and guide further investigation.
Q. Have you found that certain types of data better lend themselves to text analytics?
I’ve used Anderson Analytics’ OdinText text analytics software in two ways: for the tracking surveys I mentioned above, which I think it’s ideal for; and for an ad-hoc project where I wanted to include all of the responses collected (4K+) and get a sense for the relative frequency of the potential barriers to action.
Currently we’re at the criteria development phase and it looks promising, but I’d anticipate making a case by case judgment on the value of text analytics for standalone projects.
The essential question is this: Will the investment in time required to set up the categorization criteria pay off in better analytical quality and efficiencies (from high response volume or when expedited turnaround time is required)?
In the case of tracking satisfaction or recommendation/NPS, I believe all of these criteria are met. The consistency of criteria being applied removes the human interpretation bias, and large data sets can be analyzed very quickly (after the criteria/model is in place).
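The criteria-based categorization described above can be sketched as a simple rule-based classifier. This is only an illustrative toy, not OdinText's actual (proprietary) model; the topic names and keyword lists are invented for the example:

```python
# Minimal sketch of rule-based verbatim categorization.
# CRITERIA maps each topic to hypothetical trigger keywords;
# a real model would be built and refined against coded data.
CRITERIA = {
    "shipping": ["shipping", "delivery", "arrived"],
    "price": ["price", "expensive", "cost"],
    "quality": ["blurry", "quality", "print"],
}

def categorize(verbatim):
    """Return every topic whose keywords appear in the response."""
    text = verbatim.lower()
    matches = [topic for topic, words in CRITERIA.items()
               if any(w in text for w in words)]
    return matches or ["uncategorized"]

responses = [
    "Delivery was late and the box was damaged",
    "Prints came out blurry",
    "Love it!",
]
for r in responses:
    print(r, "->", categorize(r))
```

Because the same rules are applied to every response, the categorization is consistent across waves, which is what makes the weekly trend comparisons Tarabek describes possible.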
Q. Can the same tools be used for social media data as for, say, survey data?
Social media data may benefit from text analytics tools if there is some consistency in the type of subjects likely to be discussed or the primary goal is to track trends in sentiment. The need for better social media exchange analysis will drive enhancements in the linguistic analysis capabilities and variety of tools available.
Q. What thoughts or advice do you have for other researchers who may be considering text analytics solutions?
Text analytics tools require an investment in upfront time to deliver real value, so advance planning and some runway room to improve the quality of the criteria model need to be factored in. Learning and testing the logic construct is important so that you understand how to set up the criteria to maximize the accuracy of what is returned.
The best test for me is still to compare how a subset of responses is coded, verbatim by verbatim. The ongoing maintenance work required to sustain the value will depend on the pace of change in the key metric or the subject matters that influence it, but failure to spend that time will quickly degrade the quality returned.
Relative assessments represent the strongest use case: levels changing over time, correlations with other metrics, and so on. As with any large data set analysis, the goal should be to spend the majority of your time thinking about the conclusions and recommendations from the data; if you are not there, you need to re-evaluate your process, your tools, or both.
Thanks for the great insight and sound advice, Lori!
Editor’s note: To learn more about text analytics, don’t miss The Market Research Technology Event, a unique forum dedicated to the exploration and promotion of technological innovations in consumer and market research and business intelligence, taking place April 30 through May 2 in Las Vegas. As a reader of this blog, mention code MRTECH12BLOG when you register and save 10% off the standard rate!
ABOUT THE AUTHOR/INTERVIEWER
Marc Dresner is an IIR USA communication lead specializing in audience engagement. He is the former executive editor of Research Business Report, a confidential newsletter for the market research industry. He may be reached at email@example.com. Follow him @mdrezz.