It almost goes without saying that good, clean, comprehensive social data is central to any meaningful social-media analytics initiative. Note my "almost." Judging from market buzz, much of the social-intelligence audience is swayed by appearances — by slick dashboards — paying little attention to the value and reliability of the analyses and insights delivered. Social enterprises and social networkers — billions of individuals and organizations — need to rethink our priorities. We need social data quality and trustworthy analytics.
There are a number of musts for quality social data.
- Number 1 is completeness, and also control: You need to be able to filter the ‘firehose’ for all, and only, the data you need, selecting according to characteristics such as location, content (e.g., keywords, topics, and hashtags), and perhaps, for selected individuals, their social handles. You may also need access to a repository of past postings, dating back months or even years.
- Number 2 is context, essentially the metadata envelope that describes the social content.
- Number 3 is timeliness. Not everyone needs a real-time feed, but everyone needs right-time, in accordance with your business problem. You need instantaneous alerts if you’re engaged in automated financial trading, while a modest latency is tolerable for reputation management and customer engagement, and days might suffice for market research.
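The filtering idea in must number 1 can be sketched in a few lines. This is a toy illustration only — the post fields, sample data, and filter names below are my own assumptions, not any vendor's actual firehose schema or rule syntax:

```python
# Hypothetical posts; a real firehose delivers far richer metadata.
posts = [
    {"handle": "@acme_support", "text": "Great #launch today", "location": "US"},
    {"handle": "@rival_corp",   "text": "Outage reported",     "location": "UK"},
    {"handle": "@acme_fan",     "text": "Loving the #launch",  "location": "US"},
]

def matches(post, keywords=None, locations=None, handles=None):
    """Keep a post only if it satisfies every filter that is set."""
    if keywords and not any(k.lower() in post["text"].lower() for k in keywords):
        return False
    if locations and post["location"] not in locations:
        return False
    if handles and post["handle"] not in handles:
        return False
    return True

# "All, and only, the data you need": US posts mentioning #launch.
selected = [p for p in posts if matches(p, keywords=["#launch"], locations=["US"])]
```

The point of the sketch is the conjunction of criteria: each filter you set narrows the stream, and unset filters pass everything through, which is roughly how rule-based firehose filtering behaves.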
And you need analytical tools that will help you make proper and profitable use of the data you collect.
Enter Twitter, whose value to business users depends on a quality social data ecosystem.
Twitter Certified Products
The social-data challenge surely spurred the recent launch of the Twitter Certified Products Program, really a step to promote, for the benefit of Twitter data consumers, an ecosystem of high-grade supporting products and services. I hope that Facebook, LinkedIn, and other social platforms, which have business and technical architectures different from Twitter’s, find ways to follow suit!
Two of Twitter’s three partner categories — analytics and data reseller — respond to the good, clean, comprehensive social data need. Products in those categories are foundational; social customer (and prospect and market) engagement, based on finding and classifying social postings, doesn’t scale without them. That’s why selling point #1 for social-engagement leader Radian6 is capture of the entire social web, “hundreds of millions of posts each day, including Facebook, blogs, news sites, discussion boards, video, and image sharing sites,” but also why Radian6 provides quite a number of analytical options, via plug-in capabilities from third-party providers.
Data Harvesting, Analytics, and a Role for People
Comprehensive social-data harvesting (per the Radian6 quotation) is beyond the budget and ability of most organizations, creating market space for data resellers. Twitter certified three data resellers, DataSift, Gnip, and Topsy, and the first two of them do cover the spectrum of social sources.
Similarly, advanced social-data analysis is beyond the build-it-yourself interests or capabilities of most companies. Social content analysis, for instance, applies natural-language processing (NLP) algorithms to identify entities, relationships, and sentiment in social and online text, and it requires specialized semantic-analysis software. DataSift, Gnip, and Radian6 all partner with third-party text, sentiment, and social-network analysis providers, or you can pull data from your DataSift or Gnip feed into an analysis system you’ve built yourself.
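To see why entity identification calls for specialized software, consider the crude baseline you could build yourself. The heuristic below — treat mid-sentence capitalized words as candidate entities — is my own throwaway illustration, nothing like the trained statistical models real NLP vendors use, and it fails on lowercase brand names, sentence-initial entities, and much else:

```python
def extract_entities(text):
    """Naive candidate-entity spotter: capitalized words after the first.

    Skipping the first word crudely avoids sentence-initial capitals;
    real systems use trained sequence models instead of casing tricks.
    """
    words = text.split()
    return [w.strip(".,!?") for w in words[1:] if w[:1].isupper()]

extract_entities("Yesterday DataSift announced a partnership with Gnip.")
# candidate entities: DataSift, Gnip
```

The gap between this twenty-minute heuristic and production-grade entity and relationship extraction is exactly the market space the certified analytics partners occupy.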
(DataSift, Gnip, and Topsy, and also Twitter engagement and analysis partner Attensity, are sponsors of the Sentiment Analysis Symposium, a conference I organize, slated for October 30, 2012 in San Francisco.)
Even the best analytics and the most comprehensive data can only suggest insights, however. There’s no substitute for human judgment, in selecting data, designing analysis processes, training the tools (for instance, to accommodate vocabularies and expressions unique to particular business domains and functions), verifying results, and applying them appropriately. Kohlben Vodden provides valuable guidance in his blog article, Social Data: Stop Listening and Start Thinking.
Listening, Analyzing, Automating: These are matters of importance to business users in marketing, market research, customer experience, and media, to those who work in government and public affairs, and to the technologists who support them. Focus on leading-edge social intelligence, rooted in what’s practical today. Think about how and where you can learn, from experts and peers.
Beyond Sentiment Analysis, Beyond Social
Sentiment is a key social-intelligence element, whether you’re looking only at polarity (positive/negative/neutral classifications) or, more expansively, at extraction of emotion (e.g., happy, sad, angry) and intent (e.g., to buy, to cancel, or to vote). Our interest is, more broadly, in signals and sense-making, that is, in patterns that can be detected in single sources and across multiple data sources, and in situational analyses that account for context in guiding decision making.
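The distinction between polarity and the richer signals can be made concrete with a minimal lexicon-based sketch. The word lists and intent labels here are invented for illustration; real systems use trained models and the domain-tuned vocabularies mentioned earlier, not hand-picked sets:

```python
# Hypothetical mini-lexicons: illustration only, not a vendor's approach.
POSITIVE = {"love", "great", "happy", "excellent"}
NEGATIVE = {"hate", "awful", "angry", "cancel"}
INTENT   = {"buy": "purchase", "cancel": "churn", "vote": "civic"}

def classify(text):
    """Return (polarity, intent signals) for a short posting."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    polarity = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    intents = sorted({INTENT[t] for t in tokens if t in INTENT})
    return polarity, intents

classify("I love this phone but I may cancel my plan")
```

Note what even this toy shows: the posting nets out "neutral" on polarity, yet carries a churn-intent signal that polarity alone would miss — which is why intent extraction matters beyond positive/negative classification.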
Considerations go beyond social, actually, extending to integration of social-derived information with enterprise data holdings. Michele Goetz described that problem nicely in a July 2011 article that focused on data-driven marketing challenges, Does Data Quality Matter in Social Media? We’re after a big and broad data picture, applied to market research, customer experience, financial markets, media analysis, public affairs, you name it. We’re talking new-era business intelligence and operations.
Correct social-infused business decision making, in high-velocity, high-volume data environments, calls for careful attention to data and analysis quality. The Twitter certification program, and any other opportunity to showcase exceptional providers and sound expert and peer guidance, take us a long way toward our social — and enterprise — intelligence goals.