Vision Critical Adds Text Analytics for Customer-Community Insights

I joined Vision Critical on September 13 for this year’s Customer Intelligence Summit in Washington DC, an upbeat conference despite an attendance hit due to the approaching Hurricane Florence.

This year’s big news: The release of native text analytics capabilities, designed to extend insights delivered via Sparq, Vision Critical’s customer-communities research platform.

Vision Critical text analytics automatically tags and identifies themes in input text and performs basic sentiment analysis, classifying messages as positive, negative, or neutral and color-coding the product’s word-cloud visualizations. These capabilities are far from path-breaking, however: they and more have been available in leading market-research and customer-experience (CX) software tools for over a decade, although their application to customer-community feedback — different from general survey, social media, and contact-center analyses — isn’t so common.
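
To make the mechanics concrete, here is a minimal sketch in Python of message-level, three-class sentiment classification with color-coded output suitable for a word-cloud display. The training data, model choice, and color mapping are my own illustrative assumptions, not Vision Critical’s implementation.

```python
# A minimal sketch of message-level sentiment classification: train a
# three-class (positive/negative/neutral) model on labeled feedback, then
# map predictions to colors for a word-cloud-style display. Toy data and
# model choice are illustrative assumptions, not Vision Critical's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled community feedback; a production system trains on far more.
messages = [
    "Love the new dashboard, so easy to use",
    "Checkout keeps failing and support never responds",
    "I signed up last month and use it weekly",
    "Great value for the price",
    "The mobile app crashes constantly",
    "The survey took about ten minutes",
]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(messages, labels)

# Classify new feedback and color-code it, word-cloud style.
COLORS = {"positive": "green", "negative": "red", "neutral": "gray"}
for msg in ["The redesign is fantastic", "Billing is a nightmare"]:
    sentiment = model.predict([msg])[0]
    print(f"{COLORS[sentiment]:<6} {sentiment:<9} {msg}")
```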

Vision Critical claims 70% tagging accuracy and 90% sentiment-analysis accuracy. However, tagging — the identification of entities of interest — is hard-coded, neither customizable nor optimized for different customer segments, and sentiment is resolved at the message level rather than at the tag or theme level. The company is looking at developing and supporting industry-specific tag dictionaries, according to CTO Alan Price.
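
The resolution question matters whenever a single message mixes opinions. A deliberately naive sketch, with a toy lexicon and a clause split of my own invention rather than any vendor’s method, shows what gets lost:

```python
# Contrast one sentiment label per message with per-clause (tag/theme-level)
# labels. The lexicon and the split on "but" are deliberately naive
# illustrations, not any vendor's method.
POSITIVE = {"love", "great", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "confusing", "failing"}

def sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

message = "The support team is helpful but the checkout flow is confusing"

# Message-level resolution: the two opinions cancel to a single label.
print("message-level:", sentiment(message))            # -> neutral

# Clause-level resolution preserves the opinion attached to each theme.
for clause in message.split(" but "):
    print("clause-level:", sentiment(clause), "<-", clause)
```

A message-level classifier flattens the praise for support and the complaint about checkout into one neutral score; tag- or theme-level resolution would keep both signals.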

Tagging, message categorization, and sentiment analysis are available only in English, although sentiment for eleven other languages is to be added within a few weeks. In principle, because the technology relies on machine learning rather than syntax rules or taxonomies, training-data availability is the only obstacle to deployment in a given language, according to Price.
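
Price’s point is straightforward to illustrate: a statistically trained pipeline contains no English-specific rules, so adding a language is largely a matter of collecting labeled examples. A minimal sketch, with toy English and Spanish data of my own making:

```python
# The same training code serves any language for which labeled examples
# exist; only the data changes. All examples are my own toy illustrations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train(messages, labels):
    # Character n-grams sidestep language-specific tokenization rules.
    return make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    ).fit(messages, labels)

english = train(
    ["Love the new features", "The app keeps crashing", "I joined last week"],
    ["positive", "negative", "neutral"],
)
spanish = train(
    ["Me encanta la nueva función",      # "I love the new feature"
     "La aplicación falla siempre",      # "The app always fails"
     "Me registré la semana pasada"],    # "I signed up last week"
    ["positive", "negative", "neutral"],
)

print(english.predict(["These new features are great"])[0])
print(spanish.predict(["Me encanta esta plataforma"])[0])
```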

Vision Critical had enrolled twenty customers in a text analytics early-access program. Capabilities have now been turned on for all North American customers and will be available for Asia-Pacific customers later this month.

GoDaddy UX Research Director Cassie Mally, speaking at the Vision Critical Customer Intelligence Summit
GoDaddy was an early-access participant, and User Experience (UX) Research Director Cassie Mally spoke at the summit. Mally, shown in the photo here, described work with five datasets of 300 open-ended responses each, where automated analysis replaced hours of skilled-researcher time. These are very modest datasets by CX industry standards, but GoDaddy still found the experience quite satisfying. It makes sense to start small.

Text analysis is similarly an entry point to other forms of media analysis. I asked Vision Critical CTO Alan Price whether the company is evaluating adding image, speech, and video analysis to the platform, given that customer feedback increasingly includes — or is even provided in the form of — rich media, and given the company’s experience applying machine learning. Price’s answer was no, but he pointed me to Vision Critical partner LivingLens, which publishes a video intelligence platform. LivingLens captures and analyzes video for customer-brand insights and similar uses, with automated object identification and manual activity tagging. LivingLens integrates with speech transcription to enable basic text analysis and search; in sum, a nice complement to Vision Critical’s text-feedback analysis.

My summary assessment: Vision Critical text analytics is a nice start on a much-needed capability that will create significant value for customer-community research. I look forward to the roll-out of deeper and broader Vision Critical text analytics, integrated with Sparq platform profiling and engagement capabilities, in the years to come.
