My Transcom Experience

Author: Eduardo Valdelomar

Fundamentals of Virtual Agents

Transcom recently announced a partnership with Creative Virtual, a company specializing in self-service customer experience management solutions. One of the objectives of this cooperation is to strengthen Transcom’s offering in virtual agents and chatbots, combining Creative Virtual’s portfolio with Transcom’s extensive experience in customer experience.

Transcom has already defined solid virtual agent solutions, as Philip Sköld explains in this post, but what’s behind a virtual agent? What technologies make it possible for a chatbot to improve the customer experience?

The foundation of the automated customer experience is Cognitive Computing (CC), a discipline that tries to replicate human communication by using different Artificial Intelligence (AI) methodologies, such as Natural Language Processing (NLP), Machine Learning, or Speech Recognition.

Cognitive Computing allows the virtual agent to interact with the customer or the human agent in an easy and natural way, always with an objective: to answer a question, provide information, or help with a task. To ensure an efficient service, the virtual agent can rely on Expert Systems and Big Data.

The conversations registered by the virtual agent can be used to better understand the client’s needs. For that purpose, Big Data Analytics, Machine Learning, and Speech Analytics help extract value from the high volume of collected information.

Going through all these disciplines would be excessive for a single post, so let me focus on Cognitive Computing, which is probably the greatest contribution of AI to virtual agents.

Cognitive Computing has been developed since the 1950s and 1960s, when its foundations were set. The greatest evolution started in the 1980s, when growing computational capacity allowed the shift from rule-based algorithms (the symbolic approach) to machine learning algorithms (the statistical approach) [1].

The different perspectives of Natural Language Processing

There are several different perspectives of NLP:

Phonology: this first level deals with the sounds of speech and is widely used in speech recognition, segmentation, and analysis, which makes it extremely important for voice-based virtual agents.

Morphology: focuses on the word and its composition, and is the basis for syntactic analysis. One interesting application of morphology is lemmatization: reducing each word to a canonical form. Lemmatization, for instance, retrieves the infinitive form of verbs from their past and gerund forms, which simplifies their use in further processes like decision trees.
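To make the idea concrete, here is a minimal sketch of a rule-based lemmatizer. It is a toy for illustration only, stripping a few common English verb suffixes; real systems such as Stanford CoreNLP rely on full morphological rules and dictionaries.

```python
# Toy rule-based lemmatizer: strips common English verb suffixes to
# approximate the canonical (infinitive) form.

def toy_lemmatize(word):
    """Reduce simple past and gerund forms to an approximate lemma."""
    for suffix, replacement in (("ying", "y"), ("ied", "y"), ("ing", ""), ("ed", "")):
        # Guard on length so short words like "sing" or "red" are left alone.
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)] + replacement
    return word

print(toy_lemmatize("walked"))   # walk
print(toy_lemmatize("talking"))  # talk
print(toy_lemmatize("tried"))    # try
```

As the comments note, a handful of suffix rules is nowhere near English morphology, but it shows why reducing inflected forms to one canonical token simplifies any downstream process that keys on words.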

Lexicography: focuses on the meaning of words. NLP systems require a machine-readable dictionary to implement the lexicographical analysis. Stanford CoreNLP has gone one step further and integrates Wikipedia as a dictionary, marking the terms that can be found in its database.
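In the spirit of that lexicographical step, this toy sketch annotates each token with a sense from a machine-readable dictionary, flagging unknown terms. The dictionary entries here are illustrative, not taken from any real lexicon.

```python
# Toy machine-readable dictionary lookup: each token is mapped to its
# dictionary sense, or marked as unknown (illustrative entries only).

DICTIONARY = {
    "agent": "one that acts on behalf of another",
    "virtual": "simulated by software rather than physically existing",
}

def toy_lookup(tokens):
    """Annotate each token with its dictionary sense, if any."""
    return {t: DICTIONARY.get(t.lower(), "<unknown>") for t in tokens}

print(toy_lookup(["virtual", "agent", "chatbot"]))
```

Marking unknown terms is exactly where an external resource like Wikipedia pays off: the larger the dictionary, the fewer tokens come back as `<unknown>`.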

Syntax: syntactic analysis deals with the break-down of speech into its constituents (sentences, phrases, words) and the links between them. There are several algorithms related to syntactic analysis:

  • Parts of speech: this type of algorithm tags each word with its function in the sentence, which is important since a single word can have different meanings depending on the context.
  • Constituency parse: creates a tree representing the grammatical structure of the speech. This is important since sentences can have very different meanings, which lead to different constituency structures. Current algorithms normally use a probabilistic context-free grammar to determine the most probable structure (and meaning), using machine learning to optimize the probabilities.
  • Dependency parse: whereas constituency parsing is based on the grammatical structure, dependency parsing is based on how the words are directly connected to each other.
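The ambiguity that part-of-speech tagging resolves can be sketched in a few lines. This toy tagger uses a tiny hand-made lexicon plus one contextual rule, just to show how the same word ("book") receives different tags in different contexts; production taggers learn such decisions statistically from annotated corpora.

```python
# Toy part-of-speech tagger: a tiny lexicon plus one contextual rule.
# The word "book" is ambiguous (noun or verb) and is disambiguated
# by looking at the previous tag.

LEXICON = {
    "i": "PRON", "a": "DET", "the": "DET",
    "book": ("NOUN", "VERB"),  # ambiguous entry
    "flight": "NOUN", "read": "VERB",
}

def toy_pos_tag(tokens):
    tags = []
    for tok in tokens:
        entry = LEXICON.get(tok.lower(), "NOUN")  # default unknowns to NOUN
        if isinstance(entry, tuple):
            # Contextual disambiguation: after a determiner, prefer NOUN.
            prev = tags[-1] if tags else None
            entry = "NOUN" if prev == "DET" else "VERB"
        tags.append(entry)
    return list(zip(tokens, tags))

print(toy_pos_tag("I book a flight".split()))  # book -> VERB
print(toy_pos_tag("I read the book".split()))  # book -> NOUN
```

One hard-coded rule obviously does not scale, which is precisely why the probabilistic, machine-learned approaches described above took over.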

Semantic: semantic analysis focuses on the meaning, which normally requires previous morphological, lexicographical, and syntactic analysis. There are many types of algorithms depending on the requirements, such as Named Entity Recognition (NER), which identifies places, persons, organizations, dates, amounts, etc. within the sentences. Sentiment algorithms are gaining importance, since they are able to identify feelings, which can be extremely useful both on-line, to steer the conversation, and off-line, to evaluate the service and improve the expert systems based on the accumulated experience. Sentiment analysis is normally based on machine learning, which can even detect sarcasm or irony.
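As a minimal illustration of sentiment scoring, here is a toy lexicon-based scorer with naive negation handling. The word lists are invented for the example; as noted above, real sentiment systems rely on machine learning rather than fixed lexicons.

```python
# Toy lexicon-based sentiment scorer with simple negation handling:
# positive words add 1, negative words subtract 1, and "not"/"never"
# flips the polarity of the next sentiment word.

POSITIVE = {"great", "good", "helpful", "excellent"}
NEGATIVE = {"bad", "slow", "useless", "terrible"}

def toy_sentiment(text):
    score, negate = 0, False
    for word in text.lower().replace(".", "").replace(",", "").split():
        if word in ("not", "never"):
            negate = True
            continue
        if word in POSITIVE:
            score += -1 if negate else 1
        elif word in NEGATIVE:
            score += 1 if negate else -1
        negate = False
    return score

print(toy_sentiment("The agent was not helpful"))  # -1
print(toy_sentiment("Great service, never slow"))  # 2
```

A fixed word list cannot cope with sarcasm or context ("great, another outage…"), which is exactly the gap that machine-learned sentiment models address.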

Discourse: this discipline deals with texts comprising several sentences, focusing on the overall meaning of the speech. Some examples of discourse analysis are anaphora resolution, discourse structure recognition, and automatic summarization. Anaphora resolution uses information from surrounding sentences to resolve ambiguous references (such as pronouns) in other sentences. CoreNLP is also able to execute some coreference analysis.
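A toy version of anaphora resolution can show the idea: replace a pronoun with the most recently mentioned matching entity. The entity table below is made up for the example; real coreference systems, such as the CoreNLP coreference annotator, use far richer linguistic features.

```python
# Toy anaphora resolution: substitute a pronoun with the most recent
# preceding entity associated with it (illustrative entity table only).

ENTITIES = {"Anna": "she", "Peter": "he", "Transcom": "it"}

def toy_resolve(sentences):
    antecedent = {}  # pronoun -> most recent matching entity
    resolved = []
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            token = w.strip(".,")
            # Record antecedents as entities are mentioned.
            for name, pron in ENTITIES.items():
                if token == name:
                    antecedent[pron] = name
            # Replace a known pronoun with its latest antecedent.
            if token.lower() in antecedent:
                words[i] = antecedent[token.lower()]
        resolved.append(" ".join(words))
    return resolved

print(toy_resolve(["Anna called Transcom.", "She needed help."]))
# ['Anna called Transcom.', 'Anna needed help.']
```

"Most recent mention" is only one of many cues (gender, number, syntax, salience) that a real coreference resolver weighs, but it captures why discourse-level context is needed at all.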

Pragmatic: finally, the pragmatic approach brings into the analysis context that is not explicitly included in the text under review. Pragmatic algorithms take advantage of big data to retrieve the information necessary to reconstruct the missing context.

Here, you will find a sample sentence where I’ve applied some of the algorithms from the Stanford CoreNLP software[2] to help further explain the different NLP perspectives.

Other disciplines are also very important for virtual agents, but this is supposed to be a post, not a book, so I’ll leave it here… for now.


[1] There is some controversy about whether this was really an “evolution”. In an interview published in 2012 (which you can read here), Noam Chomsky argued that the preponderance of statistical learning techniques provided some quick results but steered AI evolution in the wrong direction. It is, definitely, an interesting point of view.

[2] Stanford CoreNLP is licensed under the GNU General Public License (see license details at https://stanfordnlp.github.io/CoreNLP/#license).
