It’s an AI World: Have Your Bot Call Mine

The Clara AI personal assistant.

Data Scientist Daniel Tunkelang posted an illuminating but mundane exchange between his AI personal assistant, Amy of x.ai, and AIPA Clara at Clara Labs, scheduling a coffee for Daniel and Clara’s boss Jason Laska. It’s not a compelling read, but so what? Bots Amy and Clara worked it out, machina ad machina. Results are what matters (although I do suggest switching to non-gender-identified default bot names). I wish Daniel and Jason an enjoyable coffee.

Now I’ll bet you $1 that the folks at Clara Labs and at x.ai see this article, without my bringing it to their attention. Sensible companies nowadays monitor their online and social presences. Use Google Alerts — Google is the almost-all-seeing eye — or another technology, perhaps a more-advanced tool that aggregates mentions and classifies them by topic and sentiment. Some even automate action based on what they read, for instance, in algorithmic financial trading.
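To make that monitoring step concrete, here is a minimal sketch of the kind of classification such a tool might run over incoming mentions, tagging each with a topic and a rough sentiment. The keyword lists and the Mention record below are illustrative assumptions, not any vendor's actual method; real tools would use trained classifiers rather than hand-picked words.

```python
import re
from dataclasses import dataclass

# Illustrative keyword lists -- a real monitoring tool would use trained
# topic and sentiment models, not hand-picked vocabulary.
POSITIVE = {"great", "love", "impressive", "recommend", "enjoyed"}
NEGATIVE = {"broken", "awful", "disappointed", "avoid", "bug"}
TOPICS = {
    "scheduling": {"calendar", "meeting", "schedule", "appointment"},
    "support": {"help", "ticket", "bug", "crash"},
}

@dataclass
class Mention:
    text: str
    topic: str
    sentiment: str

def classify(text: str) -> Mention:
    """Tag one social-media mention with a rough topic and sentiment."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    topic = next((name for name, kws in TOPICS.items() if words & kws), "general")
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return Mention(text, topic, sentiment)

if __name__ == "__main__":
    for m in ["Amy found a meeting slot instantly -- impressive!",
              "The calendar sync is broken again, avoid it for now."]:
        print(classify(m))
```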

Machine processing is facilitated by enterprise products such as Thomson Reuters’ machine readable news, which “transform[s] unstructured, real-time news… into a machine readable feed” by extracting metadata — in TR’s case, fields such as company identifiers, topic codes, alert/article/update type, and sector and geographic classifications — and annotating content via natural language understanding (NLU) technologies, a subspecies of natural language processing (NLP). Decision algorithms require that salient information be represented in structured data formats.
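As a sketch of what “salient information… in structured data formats” can look like, here is a toy annotator that turns a headline into a tagged record. The ticker table, topic codes, and field names are invented for illustration; they bear no relation to Thomson Reuters’ actual schema or NLU pipeline.

```python
import re
from dataclasses import dataclass, field

# Hypothetical lookup tables -- a production feed would use full NLU models
# and licensed identifier databases, not hard-coded dictionaries.
TICKERS = {"Acme Corp": "ACME", "Globex": "GBX"}
TOPIC_CODES = {"earnings": "EARN", "merger": "MRG", "lawsuit": "LAW"}

@dataclass
class NewsRecord:
    headline: str
    item_type: str                      # alert / article / update
    companies: list = field(default_factory=list)
    topics: list = field(default_factory=list)

def annotate(headline: str, item_type: str = "article") -> NewsRecord:
    """Extract company identifiers and topic codes from one headline."""
    record = NewsRecord(headline, item_type)
    for name, ticker in TICKERS.items():
        if re.search(re.escape(name), headline, re.IGNORECASE):
            record.companies.append(ticker)
    for word, code in TOPIC_CODES.items():
        if word in headline.lower():
            record.topics.append(code)
    return record

if __name__ == "__main__":
    print(annotate("Acme Corp beats Q3 earnings estimates; Globex merger talks stall"))
```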

Natural language generation (NLG) is the other side of the NLP coin. NLG software spins out stories and other narrative content from structured data. I first became aware of NLG back in 2011, when I realized that the Motley Fool was feeding me the same article about multiple companies, with the names, numbers, and key phrases suiting each company’s particulars, but all the framing text — the article narrative — essentially unchanged.

Turns out that the Motley Fool was machine-writing stories using NLG technology from Narrative Science (which, not so coincidentally, I recently profiled in The Rest of the Qlik Data Narratives Story), and I’d bet that not-a-few trading desks had machines deployed, reading those same machine-generated stories.
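To show how one narrative frame gets re-skinned for many companies, here is a bare-bones template filler in the spirit of that experience. The template text and data fields are made up for this sketch; it illustrates the general template-driven idea, not Narrative Science’s actual technology.

```python
# Minimal template-driven NLG: one narrative frame, many companies.
TEMPLATE = (
    "{company} reported quarterly revenue of ${revenue}M, "
    "{direction} {change}% from a year earlier. Analysts had expected "
    "${expected}M, so the result {verdict} Wall Street's forecast."
)

def render(row: dict) -> str:
    """Turn one row of structured data into a short narrative."""
    fields = dict(
        row,
        change=abs(row["change"]),
        direction="up" if row["change"] >= 0 else "down",
        verdict="beat" if row["revenue"] > row["expected"] else "missed",
    )
    return TEMPLATE.format(**fields)

if __name__ == "__main__":
    quarterly = [  # toy structured data, one row per company
        {"company": "Acme Corp", "revenue": 412, "change": 7.5, "expected": 398},
        {"company": "Globex", "revenue": 120, "change": -3.2, "expected": 131},
    ]
    for row in quarterly:
        print(render(row))
```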

Similar NLG tech is available from providers such as Arria, Automated Insights, Data2Content, and Yseop, applied in diverse fields that share a common need: high-volume but highly repetitive, data-driven narrative publishing. Think financial news, where a limited number of narrative patterns (a.k.a. stories) describe 99% of situations, for many thousands of securities and other traded financial instruments, published as daily and intra-day articles and alerts. Think weather and sports reporting — high volume, highly repetitive, but also localized and particularized — where the aim is to spin data into a story. A 1997 (!) paper in the Journal of the American Medical Informatics Association promotes Natural Language Generation in Health Care. But if you’d like a more recent, serious deep dive, I can think of no better resource than Fast Forward Labs’ NLG materials.

Reductio ad absurdum alert —

Why should I read all this machine-generated content in the first place? And why should my broker or doctor or my fantasy football buddies bother themselves with tedious narrative? Just the facts! (per the catchphrase reportedly misattributed to Dragnet’s Sgt. Joe Friday). Let the machines talk to the machines. We’ll call these smart assistants APERs — A(I)per(sonal assistant)s — machines imitating people in formerly human interactions.

Our APERs could skip the chatter. Let my APER reach into your calendar, with limited-permission API access. I’m serious. Similarly, once a significant proportion of vehicles on the road are machine driven, they’ll be talking machine-talk directly to one another, rather than relying only on in-vehicle sensors and external markers and beacons. Inter-vehicle communication seems like a great use case for machines talking to machines, creating a sort of hive mind. But where to draw the line?
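Here is one way to picture that limited-permission access: the APER holds a token scoped to free/busy queries only, so it can find an open slot without ever reading event details. Everything below, including the CalendarToken class and its scope names, is hypothetical, a sketch of the idea rather than any real calendar vendor’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical scoped-access sketch: the APER can see *when* I'm busy,
# but a token without an "events.read" scope never sees *what* I'm doing.
@dataclass
class CalendarToken:
    owner: str
    scopes: frozenset  # e.g. {"freebusy.read"} but not {"events.read"}

BUSY = {  # toy calendar store: owner -> list of (start, end) busy blocks
    "jason": [(datetime(2016, 4, 12, 10), datetime(2016, 4, 12, 11))],
}

def free_busy(token: CalendarToken, start: datetime, end: datetime):
    """Return busy intervals only if the token carries the narrow scope."""
    if "freebusy.read" not in token.scopes:
        raise PermissionError("token lacks freebusy.read scope")
    return [(s, e) for s, e in BUSY.get(token.owner, [])
            if s < end and e > start]

if __name__ == "__main__":
    token = CalendarToken("jason", frozenset({"freebusy.read"}))
    day = datetime(2016, 4, 12)
    print(free_busy(token, day, day + timedelta(days=1)))
```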

I opened with bot scheduling of Daniel Tunkelang’s and Jason Laska’s coffee. Those two are not going to send thirsty robots out — picture Futurama’s Bender — to do their drinking for them. People like the frisson of inter-personal contact. Now frisson is a French word for shiver or thrill, but I like the coincidence that it sounds like friction, given that half of Silicon Valley innovation is in search of frictionless transactions. Myself, I like the friction involved in choices, negotiations, and narratives that aren’t purely data driven. But I’m referring only to friction that involves people. Think of the innumerable occasions you’ve attempted to short-cut an automated voice-response system by requesting “operator” or “agent”: Who wants to negotiate with a not-so-intelligent machine?

Honestly, I had fun writing this riff on machines talking to machines. If you’re a human reader and reached this far, I hope you’ve enjoyed reading it. Personality is important and so is expression, even in routine business interactions. You never know what might come up, as an aside, in something as dull as an appointment-setting exchange.

The technologies that assist us should expand our world-views. They should augment our realities and not dull our senses, especially of one another. Let’s be conscious of automation’s trade-offs and aware of what we’re losing to AI, not just what we’re gaining.
