In our research, we collaborate with a strong network of national and international partners from academia and industry. We aim to bring together people with different skill sets and expertise to engage in innovative research projects and to strengthen the exchange between research and practice. Our partnerships can take various forms, including project-based collaboration, research scholarships, and publicly funded projects.
You may remember Facebook’s big chatbot push in 2016, when it announced that it was opening up the Messenger platform to chatbots of all varieties. Every organization suddenly needed to get its hands on the technology. The idea of conversational chatbot technology was enthralling, but behind all the glitz, glamour, and tech sex appeal was something a little less exciting. To quote Gizmodo writer Darren Orf:
All of these conversational technologies employ natural-language-recognition capabilities to discern what the user is saying, and other sophisticated intelligence tools to determine what he or she truly needs to know. These technologies are beginning to use machine learning to learn from interactions and improve the resulting recommendations and responses.
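As a loose illustration of what “natural-language recognition” plus learning from interactions can look like at the simplest level, the sketch below trains a tiny intent classifier with scikit-learn (assumed to be installed); the intents and example phrases are made up and stand in for real conversation logs.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set mapping user phrases to intents.
phrases = [
    "what's the weather like today", "will it rain tomorrow",
    "book me a table for two", "reserve a restaurant tonight",
    "when do you open", "what are your opening hours",
]
intents = ["weather", "weather", "booking", "booking", "hours", "hours"]

# Bag-of-words features plus a simple classifier stand in for the
# "natural-language-recognition" step described above; retraining on new
# logged conversations is one basic form of learning from interactions.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(phrases, intents)

print(model.predict(["will it rain on sunday"]))       # -> ['weather']
print(model.predict(["reserve a table for tonight"]))  # -> ['booking']
```

In a production system the classifier and training data would be far larger, but the shape of the problem, mapping free text to a known intent, is the same.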

For each kind of question, a unique pattern must be available in the database to provide a suitable response. With many combinations of patterns, this creates a hierarchical structure. Algorithms are used to reduce the number of classifiers and generate a more manageable structure. Computer scientists call this a “reductionist” approach: it reduces the problem in order to produce a simplified solution.
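To make that idea concrete, here is a minimal sketch (plain Python, with invented patterns and replies) in which a handful of regular-expression patterns are ordered from specific to general, so one compact hierarchy covers many question variants before falling back to a default reply.

```python
import re

# Invented pattern hierarchy: specific patterns first, broader ones later,
# and a single fallback instead of one handcrafted reply per question.
PATTERNS = [
    (r"\bwhat time .* open on (\w+)\b", "We open at 9 am on {0}."),
    (r"\b(open|opening hours|close)\b", "Our opening hours are 9 am to 6 pm."),
    (r"\b(hi|hello|hey)\b",             "Hello! How can I help you?"),
]
FALLBACK = "Sorry, I don't have a pattern for that question yet."

def respond(message: str) -> str:
    text = message.lower()
    for pattern, template in PATTERNS:
        match = re.search(pattern, text)
        if match:
            # Fill in a captured detail when the pattern has one.
            return template.format(match.group(1)) if match.groups() else template
    return FALLBACK

print(respond("What time do you open on Saturday?"))  # specific pattern
print(respond("Are you open right now?"))             # broader pattern
print(respond("Can I pay with Bitcoin?"))             # no pattern -> fallback
```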

Of course, each messaging app has its own fine print for bots. For example, on Messenger a brand can send a message only if the user initiated the conversation, and if the user doesn't find value and opt in to future notifications within those first 24 hours, there's no further communication. But to be honest, that's not enough to eradicate the threat of bad bots.
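None of this is the Messenger API itself, but a bot backend would typically enforce the same rule on its own side before calling any send endpoint. The sketch below (helper name and window constant are my own) simply compares the timestamp of the user's last inbound message against a 24-hour cutoff.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MESSAGING_WINDOW = timedelta(hours=24)  # the policy window described above

def can_send_followup(last_user_message_at: datetime,
                      now: Optional[datetime] = None) -> bool:
    """Allow a follow-up only if the user messaged us within the window."""
    now = now or datetime.now(timezone.utc)
    return now - last_user_message_at <= MESSAGING_WINDOW

# Example: a user whose last message arrived 30 hours ago is outside the window.
last_seen = datetime.now(timezone.utc) - timedelta(hours=30)
if can_send_followup(last_seen):
    print("OK to send a follow-up.")
else:
    print("Window closed -- wait until the user messages again.")
```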


“There is hope that consumers will be keen on experimenting with bots to make things happen for them. It used to be like that in the mobile app world 4+ years ago. When somebody told you back then… ‘I have built an app for X’… You most likely would give it a try. Now, nobody does this. It is probably too late to build an app company as an indie developer. But with bots… consumers’ attention spans are hopefully going to be wide open/receptive again!” — Niko Bonatsos, Managing Director at General Catalyst


Prashant Sridharan, Twitter’s global director of developer relations, says: “I’ve seen a lot of hyperbole around bots as the new apps, but I don’t know if I believe that. I don’t think we’re going to see this mass exodus of people stopping building apps and going to build bots. I think they’re going to build bots in addition to the app that they have or the service they provide,” as reported by Re/code.
“I believe the dreamers come first, and the builders come second. A lot of the dreamers are science fiction authors, they’re artists…They invent these ideas, and they get catalogued as impossible. And we find out later, well, maybe it’s not impossible. Things that seem impossible if we work them the right way for long enough, sometimes for multiple generations, they become possible.”
We then ran a second test with a very specific topic, aimed at answering very specific questions that a small segment of their audience was interested in. There, engagement was much higher (a 97% open rate and a 52% click-through rate on average over the duration of the test). Interestingly, drop-off also went way down: at the end of this test, only 0.29% of users had unsubscribed.
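Those engagement figures are simple ratios over the broadcast. The small sketch below uses made-up raw counts, chosen only so the percentages line up with the ones quoted above, to show how open rate, click-through rate, and unsubscribe rate would be computed.

```python
# Hypothetical raw counts for a broadcast test; only the formulas matter here.
subscribers   = 3400
messages_sent = 3400
opens         = 3298
clicks        = 1768
unsubscribes  = 10

open_rate        = opens / messages_sent        # ~97% in the test described above
click_through    = clicks / messages_sent       # ~52%
unsubscribe_rate = unsubscribes / subscribers   # ~0.29%

print(f"open rate: {open_rate:.0%}, CTR: {click_through:.0%}, "
      f"unsubscribed: {unsubscribe_rate:.2%}")
```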
Despite the fact that ALICE relies on such an old codebase, the bot offers users a remarkably accurate conversational experience. Of course, no bot is perfect, especially one that's old enough to legally drink in the U.S., if only it had a physical form. ALICE, like many contemporary bots, struggles with the nuances of some questions and returns a mixture of inadvertently postmodern answers and statements that suggest ALICE has greater self-awareness than we might give the agent credit for.

Magic, launched in early 2015, is one of the earliest examples of conversational commerce, offering one of the first all-in-one intelligent virtual assistants as a service. Unique in that it does not even have an app (you access it purely via SMS), Magic promises to handle virtually any task you send it, almost like a human executive assistant. Based on user and press accounts, Magic seems able to successfully carry out a variety of odd tasks, from setting up flight reservations to ordering hard-to-find food items.
Let’s take a weather chatbot as an example to examine the capabilities of scripted and structured chatbots. The question “Will it rain on Sunday?” can be easily answered. However, if there is no programming for the question “Will I need an umbrella on Sunday?”, the chatbot will not understand the query. This is a common limitation of scripted and structured chatbots. In all cases, a conversational bot can only be as intelligent as the programming it has been given.
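A minimal sketch of such a scripted bot makes the limitation obvious: only questions that were explicitly programmed get a real answer, and the umbrella phrasing falls straight through to the default reply (the script entries below are invented).

```python
# Minimal scripted weather bot: only explicitly programmed questions are
# understood; everything else falls through to a default reply.
SCRIPT = {
    "will it rain on sunday?": "Yes, light rain is expected on Sunday.",
    "what is the weather today?": "Today is sunny with a high of 22 C.",
}

def scripted_reply(question: str) -> str:
    return SCRIPT.get(question.strip().lower(),
                      "Sorry, I wasn't programmed to answer that.")

print(scripted_reply("Will it rain on Sunday?"))             # scripted -> answered
print(scripted_reply("Will I need an umbrella on Sunday?"))  # not scripted -> default
```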
Getting the remaining values (information the user provided in response to the bot’s previous questions, the bot’s previous action, the results of API calls, and so on) is a little trickier, and this is where the dialogue manager component takes over. These feature values need to be extracted from training data defined in the form of sample conversations between the user and the bot. These sample conversations should be prepared so that they capture most of the possible conversational flows, with the author playing the roles of both the user and the bot.
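One way to picture what the dialogue manager extracts from those sample conversations is sketched below: each turn is reduced to a small feature set (which slots are filled, the bot's previous action, the last API result). This is a simplified, hypothetical illustration, not any particular framework's training format.

```python
from dataclasses import dataclass, field

# Hypothetical dialogue state tracked by a simple dialogue manager.
@dataclass
class DialogueState:
    slots: dict = field(default_factory=dict)   # values the user has provided so far
    last_bot_action: str = "none"               # the bot's previous action
    api_result: str = "none"                    # outcome of the last API call

    def features(self) -> dict:
        """Flatten the state into features a next-action classifier could use."""
        feats = {f"slot_{name}_filled": True for name in self.slots}
        feats["last_action"] = self.last_bot_action
        feats["api_result"] = self.api_result
        return feats

# One turn from a made-up sample conversation: the user has given a city,
# the bot has just asked for a date, and no API call has been made yet.
state = DialogueState(slots={"city": "Berlin"}, last_bot_action="ask_date")
print(state.features())
# {'slot_city_filled': True, 'last_action': 'ask_date', 'api_result': 'none'}
```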

As AOL's David Shingy writes in Adweek, "The challenge [with chatbots] will be thinking about creative from a whole different view: Can we have creative that scales? That customizes itself? We find ourselves hurtling toward another handoff from man to machine -- what larger system of creative or complex storytelling structure can I design such that a machine can use it appropriately and effectively?"
As people research, they want the information they need as quickly as possible and are increasingly turning to voice search as the technology advances. Email inboxes have become more and more cluttered, so buyers have moved to social media to follow the brands they really care about. Ultimately, they now have control: the ability to opt out, block, and unfollow any brand that betrays their trust.
Efforts by servers hosting websites to counteract bots vary. Servers may choose to outline rules on the behaviour of internet bots by implementing a robots.txt file: this file is simply text stating the rules governing a bot's behaviour on that server. Any bot that does not follow these rules when interacting with (or 'spidering') any server should, in theory, be denied access to, or removed from, the affected website. If the only rule implementation by a server is a posted text file with no associated program/software/app, then adhering to those rules is entirely voluntary – in reality there is no way to enforce those rules, or even to ensure that a bot's creator or implementer acknowledges, or even reads, the robots.txt file contents. Some bots are "good" – e.g. search engine spiders – while others can be used to launch malicious attacks, most notably in political campaigns.[2]
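On the bot's side, honoring robots.txt really is just a voluntary check before each request. Python's standard-library robotparser, for instance, can read a site's robots.txt and report whether a given user agent may fetch a URL; the domain and bot name below are placeholders.

```python
from urllib.robotparser import RobotFileParser

# A well-behaved bot checks robots.txt before crawling; nothing forces it to.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # placeholder domain
parser.read()

user_agent = "ExampleBot"  # hypothetical crawler name
url = "https://example.com/private/page.html"

if parser.can_fetch(user_agent, url):
    print("robots.txt permits fetching this URL.")
else:
    print("robots.txt disallows this URL -- a polite bot skips it.")
```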
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
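A help system of the kind described here does not need deep language understanding; even crude keyword matching over a free-text question can route the user to the right help area, as in the sketch below (topics and keywords are invented).

```python
# Sketch of a "friendlier" help front end: crude keyword matching routes a
# free-text question to a help area, instead of forcing a menu or formal search.
HELP_TOPICS = {
    "billing":  ["invoice", "payment", "charge", "refund"],
    "account":  ["password", "login", "sign in", "username"],
    "shipping": ["delivery", "tracking", "shipping", "package"],
}

def route_help_request(question: str) -> str:
    text = question.lower()
    for topic, keywords in HELP_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return "general"  # hand off to a human or a broader search

print(route_help_request("I was charged twice on my last invoice"))  # -> billing
print(route_help_request("How do I reset my password?"))             # -> account
print(route_help_request("What are your opening hours?"))            # -> general
```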