Will 2012 be The Year of Text Analytics?
But wait. Wasn’t 2011 (and weren’t 2010 and a few years before that) the year for those in the know? I think so, and I think 2012 will keep up the pace, with text technologies and solutions adopted, directly and indirectly (embedded in applications), by significantly more users than ever before.
The question originated with my friend Tom Anderson, who collected responses from a variety of industry figures and published them on his Next Generation Market Research blog. I was late in answering myself. Tom will add my response, but I’ll also post it directly here:
The easiest prediction to make with confidence about 2012 text analytics is continued strong market growth — my estimate is 25% on a base that likely topped $1 billion globally in 2011 — as uptake expands throughout the enterprise and as the technology becomes a must-have value-booster for broad-market survey, social/media analytics, and CRM platforms.
With less certainty: We may look back on 2012 as the Year of Question Answering, of the deployment of IBM Watson/Apple Siri-type technologies to respond to enterprise and consumer information-access needs ranging from customer (self-)service to medical diagnosis, as a semanticized replacement for tired old search.
And there are signs, from market leaders such as SAP and IBM and from innovative start-ups alike, that 2012 will be the year of effective data fusion across database and text (a.k.a. “unstructured”) sources. Business can’t, and won’t, wait for prescriptivist, rigid Semantic Web approaches, but is instead applying analytics to the job, to discover the connections that make for truly rich data. You need analytics to operate in real time, to keep up with the data torrent. Many of those efforts will incorporate information mined from audio (speech), image, and video sources as an evolution from text analytics to content analytics picks up speed.
Check with me again a year from now and we’ll see how 2012 panned out!
3 thoughts on “Text Analytics in 2012”
We have a number of clients now that are asking for text analytics in conjunction with more standard sources of data. This is in the areas of surveillance, compliance, and data stream mining.
Although text analytics is far more common in High Frequency Trading than the general public has ever known or even assumed, this is really the first time I’ve personally seen so many ideas being generated for its application in Capital Markets.
Great post. I work for a company called Dedoose (www.dedoose.com), and as researchers we provide online tools to help other researchers (academic and market research alike) organize and analyze their data. The two researchers who started the company did so because, at the time, no software existed for unstructured analysis of mixed methods data (text-based qualitative analysis and quantitative analysis combined). We have seen tremendous growth in the number of our users employing the mixed methods approach to their text analysis. This is especially true when it comes to sentiment weighting. Dedoose allows users to excerpt and code their interviews/content and then apply a weight to each code, giving them a quantifiable level of analysis in addition to their code tree. 2011, 2012… now 2013 — it will continue to be about efficiency, but also about collaboration. Thanks for the post!