When we open our news feed and find out about yet another AI breakthrough, whether IBM Watson, driverless cars, or AlphaGo, the notion of TODA may feel decidedly anticlimactic. The reality is that current AI is not quite turnkey-ready for TODA. That will soon change, for two key reasons: 1) businesses want it, and 2) businesses have abundant data, the fuel that current state-of-the-art machine learning techniques need to make AI work.

Not integrated. This goes hand-in-hand with contextual knowledge, but chatbots often suffer from “death by data silo,” where their access to data is limited. If a chatbot is “chatting with” a customer, it not only needs access to that customer’s contextual data but also to every place where the answer to the customer’s question may reside: the product documentation site, the customer community, and any number of other websites.
At this year’s I/O, Google announced its own Facebook Messenger competitor called Allo. Apart from some neat features around privacy and self-expression, the really interesting part of Allo is @google, the app’s AI digital assistant. Google’s assistant is interesting because the company has roughly a decade-long head start in machine learning applied to search, so it’s likely that Allo’s chatbot will be very useful. In fact, you could see Allo becoming the primary interface for interacting with Google search over time. This interaction model would more closely resemble Larry Page’s long-term vision for search, which goes far beyond the clumsy search query + results page model of today:
As people research, they want the information they need as quickly as possible and are increasingly turning to voice search as the technology advances. Email inboxes have become more and more cluttered, so buyers have moved to social media to follow the brands they really care about. Ultimately, they now have the control — the ability to opt out, block, and unfollow any brand that betrays their trust.
ALICE – which stands for Artificial Linguistic Internet Computer Entity, an acronym that could have been lifted straight out of an episode of The X-Files – was developed and launched by creator Dr. Richard Wallace way back in the dark days of the early Internet in 1995. (As you can see in the image above, the website’s aesthetic remains virtually unchanged since that time, a powerful reminder of how far web design has come.) 
There is no single right answer to this question; the best solution will depend on the specifics of your scenario and how the user would reasonably expect the bot to respond. However, as your conversation's complexity increases, dialogs become harder to manage. For complex branching situations, it may be easier to create your own flow-of-control logic to keep track of your user's conversation, as in the sketch below.
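A minimal sketch of that idea, using nothing beyond standard Python. The ConversationState class, the step names, and the handle_message routine are all hypothetical; the point is only to show how explicit state tracking can replace nested dialogs for a branching flow.

```python
# Hand-rolled flow control: instead of nesting dialogs, the bot keeps an
# explicit state per user and decides the next step itself.
class ConversationState:
    def __init__(self):
        self.step = "start"   # where the user currently is
        self.data = {}        # slots collected so far

def handle_message(state, text):
    """Route a user message based on the tracked state, not on a dialog stack."""
    if state.step == "start":
        state.step = "choose_product"
        return "What product are you looking for?"
    if state.step == "choose_product":
        state.data["product"] = text
        state.step = "confirm"
        return f"Add {text} to your order? (yes/no)"
    if state.step == "confirm":
        if text.strip().lower() == "yes":
            state.step = "start"
            return "Order placed. Anything else?"
        state.step = "choose_product"   # branch back instead of popping a dialog
        return "No problem. What would you like instead?"
    state.step = "start"
    return "Let's start over. What can I do for you?"

# Example exchange
state = ConversationState()
for msg in ["hi", "running shoes", "no", "sandals", "yes"]:
    print(">", handle_message(state, msg))
```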
The classification score identifies the class with the highest term matches (accounting for commonality of words), but this has limitations. A score is not the same as a probability: a score tells us which intent is most like the sentence, but not the likelihood that it is actually a match. That makes it difficult to apply a threshold for which classification scores to accept. The highest score from this type of algorithm only provides a relative ranking, and it may still be an inherently weak classification. The algorithm also only counts what a sentence is like; it never considers what makes a sentence not belong to a given class. The toy example below illustrates the problem.
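A toy illustration of this limitation. The training phrases and the score function are invented for the example (and the toy skips the word-commonality weighting mentioned above); it is not taken from any particular library.

```python
# A bag-of-words scorer returns the intent with the most word matches,
# but the "score" is not a probability and says nothing about how weak the match is.
training = {
    "greeting": ["hello there", "good morning", "hi how are you"],
    "hours":    ["when do you open", "what are your opening hours"],
}

def score(sentence, examples):
    words = sentence.lower().split()
    vocab = set(w for ex in examples for w in ex.lower().split())
    return sum(1 for w in words if w in vocab)

def classify(sentence):
    scores = {intent: score(sentence, exs) for intent, exs in training.items()}
    return max(scores, key=scores.get), scores

# Common words like "do" and "you" give an unrelated sentence a nonzero score,
# so it still gets a "winning" intent with no sense of how weak that win is.
print(classify("do you sell umbrellas"))  # ('hours', {'greeting': 1, 'hours': 2})
```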
Designing for conversational interfaces represents a big shift in the way we are used to thinking about interaction. Chatbots have fewer signifiers and affordances than websites and apps, which means words have to work harder to deliver clarity, cohesion, and utility for the user. It is a paradigm change that requires designers to re-wire their brains, their deliverables, and their design process to create successful bot experiences.
It's fair to say that I'm pretty obsessed with chatbots right now. There are some great applications popping up from brands that genuinely add value to the end consumer, and early signs are showing that consumers are actually responding really well to them. For those of you who aren't quite sure what I'm talking about, here's a quick overview of what a chatbot is:
Kunze recognises that chatbots are the vogue subject right now, saying: “We are in a hype cycle, and rising tides from entrants like Microsoft and Facebook have raised all ships. Pandorabots typically adds up to 2,000 developers monthly. In the past few weeks, we've seen a 275 percent spike in sign-ups, and an influx of interest from big, big brands.”
Natural Language Processing (NLP) is the technological process by which computers derive meaning from natural human inputs. NLP-Based Conversational Bots are machine learning bots that exploit the power of artificial intelligence, which gives them a “learning brain.” These bots have the ability to understand natural language and, unlike Scripted and Structured Conversational Bots, do not require specific instructions in order to respond to questions.
Aside from being practical and time-saving, chatbots promise a huge reduction in support costs. According to IBM, the influence of chatbots on CRM is staggering: they deliver a 99 percent improvement in response times, cutting resolution from 38 hours to five minutes, and they drive a massive drop in cost per query, from $15-$200 (human agents) to $1 (virtual agents). Finally, virtual agents can handle an average of 30,000+ consumers per month.
One of the more talked about integrations has been Taco Bell’s announcement that it is working on a Slackbot (appropriately named Tacobot) which will not only take your Gordita Supreme order but will do it with the same “witty personality you’d expect from Taco Bell.” Consumer demand for such a service remains to be seen, but it hints at the potential for brands to leverage Slack’s platform and growing audience.
According to the Journal of Medical Internet Research, "Chatbots are [...] increasingly used in particular for mental health applications, prevention and behavior change applications (such as smoking cessation or physical activity interventions)."[48] They have been shown to serve as cost-effective and accessible therapeutic agents for indications such as depression and anxiety.[49] A conversational agent called Woebot has been shown to significantly reduce depression in young adults.[50]
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY'). Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
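A minimal sketch of that cue-word mechanism in Python. The rule list is illustrative rather than Weizenbaum's original script, and the real ELIZA also reflected pronouns ("I" to "you"), which this sketch omits.

```python
import re

# Scan the input for cue words and emit a pre-programmed response,
# falling back to a generic prompt when nothing matches.
RULES = [
    (re.compile(r"\bmother\b", re.I), "TELL ME MORE ABOUT YOUR FAMILY"),
    (re.compile(r"\bI am (.+)", re.I), "HOW LONG HAVE YOU BEEN {0}?"),
    (re.compile(r"\balways\b", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE?"),
]

def eliza_reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "PLEASE GO ON"   # generic fallback keeps the conversation moving

print(eliza_reply("My mother made me come here"))  # TELL ME MORE ABOUT YOUR FAMILY
print(eliza_reply("I am feeling anxious"))         # HOW LONG HAVE YOU BEEN feeling anxious?
```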
Chatting with a bot should be like talking to a human that knows everything. If you're using a bot to change an airline reservation, the bot should know if you have an unused credit on your account and whether you typically pick the aisle or window seat. Artificial intelligence will continue to radically shape this front, but a bot should connect with your current systems so a shared contact record can drive personalization.
Chatbots succeed when a clear understanding of user intent drives development of both the chatbot logic and the end-user interaction. As part of your scoping process, define the intentions of potential users. What goals will they express in their input? For example, will users want to buy an airline ticket, figure out whether a medical procedure is covered by their insurance plan or determine whether they need to bring their computer in for repair? 
Simple chatbots work from pre-written keywords that they understand. Each of these commands must be written by the developer separately using regular expressions or other forms of string analysis. If the user asks a question without using a single keyword, the bot cannot understand it and, as a rule, responds with messages like “sorry, I did not understand”.
“I believe the dreamers come first, and the builders come second. A lot of the dreamers are science fiction authors, they’re artists…They invent these ideas, and they get catalogued as impossible. And we find out later, well, maybe it’s not impossible. Things that seem impossible if we work them the right way for long enough, sometimes for multiple generations, they become possible.”
For example, say you want to purchase a pair of shoes online from Nordstrom. You would have to browse their site and look around until you found the pair you wanted. Then you would add the pair to your cart and go through the motions of checking out. But if Nordstrom had a conversational bot, you would simply tell the bot what you’re looking for and get an instant answer. You would be able to search within an interface that actually learns what you like, even when you can’t coherently articulate it. And in the not-so-distant future, we’ll have similar experiences when we visit retail stores.
Smart chatbots rely on artificial intelligence when they communicate with users. Instead of pre-prepared answers, the bot responds with relevant suggestions on the topic. In addition, everything the customer says is recorded for later processing. However, the Forrester report “The State of Chatbots” points out that artificial intelligence is not magic and is not yet ready to produce marvelous experiences for users on its own. On the contrary, it requires a huge amount of work:

It takes bold visionaries and risk-takers to build future technologies into realities. In the field of chatbots, there are many companies across the globe working on this mission. Our mega list of artificial intelligence, machine learning, natural language processing, and chatbot companies covers the top companies and startups who are innovating in this space.
It may be tempting to assume that users will perform procedural tasks one by one in a neat and orderly way. For example, in a procedural conversation flow using dialogs, the user starts at the root dialog, invokes the new order dialog from there, and then invokes the product search dialog. The user then selects a product and confirms, exiting the product search dialog, completes the order, exiting the new order dialog, and arrives back at the root dialog, as in the happy-path sketch below.
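For concreteness, here is a minimal sketch of that happy path, assuming a hypothetical DialogStack helper; the point of the paragraph is precisely that real users rarely follow it this tidily.

```python
# Stack-based dialogs: each begin() pushes a dialog, each end() pops back
# to whatever invoked it, until the user is at the root again.
class DialogStack:
    def __init__(self):
        self.stack = ["root"]

    def begin(self, dialog):
        self.stack.append(dialog)
        print("begin", dialog, "->", self.stack)

    def end(self):
        finished = self.stack.pop()
        print("end  ", finished, "->", self.stack)

flow = DialogStack()
flow.begin("new_order")       # user starts a new order from the root dialog
flow.begin("product_search")  # order dialog invokes product search
flow.end()                    # product selected and confirmed: search dialog exits
flow.end()                    # order completed: back at the root dialog
```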

The challenge in programming a chatbot lies in assembling its recognitions sensibly. Precise recognitions for specific questions are supplemented by global recognitions that key on only a single word and can serve as a fallback (the bot roughly recognizes the topic, but not the exact question). Some chatbot programs support this by letting developers assign priority ranks to individual answers. Chatbots are usually built with development environments that allow questions to be categorized, answers to be prioritized, and recognitions to be managed[5][6]. Some also allow a conversational context to be designed, based on recognitions and possible follow-up recognitions (“Would you like to learn more about this?”). Once the knowledge base has been built, the bot is refined in as many training conversations with users from the target group as possible[7]. Faulty recognitions, recognition gaps, and missing answers can be identified this way[8]. The development environment usually provides analysis tools to evaluate the conversation logs efficiently[9]. A good chatbot achieves in this way an average recognition rate of more than 70% of questions, at which point most users accept it as an entertaining conversational partner.
Amazon’s Echo device has been a surprise hit, reaching over 3M units sold in less than 18 months. Although part of this success can be attributed to the massive awareness-building power of the Amazon.com homepage, the device receives positive reviews from customers and experts alike, and has even prompted Google to develop its own version of the same device, Google Home.
This chatbot aims to make medical diagnoses faster, easier, and more transparent for both patients and physicians – think of it like an intelligent version of WebMD that you can talk to. MedWhat is powered by a sophisticated machine learning system that offers increasingly accurate responses to user questions based on behaviors that it “learns” by interacting with human beings.
In so many ways I think chatbots are only just getting started; their potential is much underestimated at present. A big challenge is for chatbots to mature so that they do more than is possible through content entry wizards. If your content is created with a few easy clicks, it is unlikely to be much of an inspiration to anyone, and to date, despite much work in the field, attempts to emulate the creative, open-ended nature of real intelligence have seen only very partial success.
If the predicted next_action happens to be an API call or data retrieval, control remains within the ‘dialogue management’ component, which uses or persists this information to predict the next_action once again. The dialogue manager updates its current state based on this action and the retrieved results to make the next prediction. Once the next_action corresponds to responding to the user, the ‘message generator’ component takes over, as in the sketch below.
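A rough sketch of that loop. The predict_next_action policy, the api_call stub, and the generate_message helper are invented stand-ins for whatever your dialogue manager and message generator actually do.

```python
def predict_next_action(state):
    """Hypothetical policy: decide the next action from the current dialogue state."""
    if "balance" not in state:
        return "fetch_balance"          # data retrieval step
    return "respond"                    # hand over to the message generator

def api_call(action):
    return {"balance": 42.50}           # stand-in for a real backend call

def generate_message(state):
    return f"Your balance is ${state['balance']:.2f}"

def handle_turn(state):
    while True:
        action = predict_next_action(state)
        if action == "respond":
            return generate_message(state)   # message generator takes over
        result = api_call(action)            # control stays with the dialogue manager
        state.update(result)                 # update state, then predict again

print(handle_turn({"intent": "check_balance"}))
```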
It’s best to have very specific intents, so that you’re clear about what your user wants to do, but to have broad entities, so that the intent can apply in many places. For example, changing a password is a common activity (a narrow intent), while the place where you change your password might be any of many different systems (broad entities). The context then personalises the conversation based on what the bot knows about the user, what they’re trying to achieve, and where they’re trying to do it, as in the illustrative example below.
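A small, purely illustrative example of how a parsed turn might combine a narrow intent, a broad entity, and context; the field names are assumptions, not any particular NLU service's schema.

```python
# One narrow intent ("change_password") with a broad entity for where the change
# happens, plus context the bot already knows about the user.
parsed_turn = {
    "intent": "change_password",                        # specific: what the user wants to do
    "entities": {"system": "email"},                    # broad: could be email, VPN, intranet...
    "context": {"user": "jane", "authenticated": True}  # personalises the response
}

def respond(turn):
    if turn["intent"] == "change_password":
        system = turn["entities"].get("system", "your account")
        name = turn["context"]["user"].capitalize()
        return f"{name}, here's how to change your {system} password."
    return "Sorry, I didn't catch that."

print(respond(parsed_turn))
```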

This is the big one. We worked with one particular large publisher (can’t name names unfortunately, but hundreds of thousands of users) in two phases. We initially released a test phase that was sort of a “catch all”: anyone could message a broad keyword to their bot and start a campaign. Although we had a huge number of users come in, engagement was relatively average (an 87% open rate and a 27.05% click-through rate on average over the course of the test). Drop-off here was fairly high: about 3.14% of users had unsubscribed by the end of the test.


Another option is to integrate your own custom AI service. This approach is more complex, but gives you complete flexibility in terms of the machine learning algorithm, training, and model. For example, you could implement your own topic modeling and use an algorithm such as LDA to find similar or relevant documents. A good approach is to expose your custom AI solution as a web service endpoint and call the endpoint from the core bot logic. The web service could be hosted in App Service or in a cluster of VMs. Azure Machine Learning provides a number of services and libraries to assist you in training and deploying your models. A minimal sketch of this pattern follows.
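A minimal sketch of the web-service pattern, assuming scikit-learn and Flask are available; the documents, the /topics route, and the model parameters are placeholders rather than a recommended configuration.

```python
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny placeholder corpus; in practice this would be your own document set.
documents = [
    "how do I reset my password",
    "the shipping on my order is delayed",
    "my invoice shows the wrong amount",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)

app = Flask(__name__)

@app.route("/topics", methods=["POST"])
def topics():
    """Return the topic distribution for the text sent by the core bot logic."""
    text = request.get_json()["text"]
    distribution = lda.transform(vectorizer.transform([text]))[0]
    return jsonify({"topics": distribution.tolist()})

# The core bot logic would call this endpoint, e.g.:
#   requests.post("http://localhost:5000/topics", json={"text": user_message})
if __name__ == "__main__":
    app.run()
```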
A bot is software that is designed to automate the kinds of tasks you would usually do on your own, like making a dinner reservation, adding an appointment to your calendar, or fetching and displaying information. An increasingly common form of bot, the chatbot, simulates conversation. Chatbots often live inside messaging apps, or are at least designed to look that way, and it should feel like you’re chatting back and forth as you would with a human.
Polly may be a business-focused application, but the chatbot is designed to improve workplace happiness. Using surveys and feedback, managers can keep track of how effectively their teams are working and address problems before they escalate. This means not only that organizations will run more productively, but also that workers will be happier in their jobs.
In one particularly striking example of how this rather limited bot has made a major impact, U-Report sent a poll to users in Liberia about whether teachers were coercing students into sex in exchange for better grades. Approximately 86% of the 13,000 Liberian children U-Report polled responded that their teachers were engaged in this despicable practice, which resulted in a collaborative project between UNICEF and Liberia’s Minister of Education to put an end to it.
Indeed, this is one of the key benefits of chatbots – providing a 24/7/365 presence that can give prospects and customers access to information no matter when they need it. This, in turn, can result in cost-savings for companies that deploy chatbots, as they cut down on the labour-hours that would be required for staff to manage a direct messaging service every hour of the week.
What if you’re creating a bot for a major online clothing retailer? For starters, the bot will require a greeting (“How can I help you?”) as well as a process for saying its goodbyes. In between, the bot needs to respond to inputs, which could range from shopping inquiries to questions about shipping rates or return policies, and the bot must possess a script for fielding questions it doesn’t understand.
A chatbot that functions through machine learning has an artificial neural network inspired by the neural nodes of the human brain. The bot is programmed to self-learn as it is introduced to new dialogues and words. In effect, as a chatbot receives new voice or textual dialogues, the number of inquiries it can reply to and the accuracy of each response it gives both increase. Facebook has a machine learning chatbot platform that lets companies interact with their consumers through the Facebook Messenger application. Using Messenger bots, users can buy shoes from Spring, order a ride from Uber, and have election conversations with the New York Times, which used its Messenger bot to cover the 2016 presidential election between Hillary Clinton and Donald Trump. If a user asked the New York Times bot a question like “What’s new today?” or “What do the polls say?”, the bot would reply to the request.