In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published, which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the Introduction to his paper presented it more as a debunking exercise:
Of course, it is not so simple to create an interactive agent that the user will really trust. That’s why IM bots have not yet replaced couriers, doctors, or the author of these lines. In this article, rather than speculating about the future of chatbots, we will give you a short tour of the topic: what chatbots are, how they work, how they can be employed, and how difficult it is to create one yourself.
As with many 'organic' channels, the relative reach of your audience tends to decline over time due to a variety of factors. In email's case, it can be over-exposure to marketing emails and moves by email providers to filter out promotional content; with other channels it can be the platform itself. Back in 2014 I wrote about how "Facebook's Likes Don't Matter Anymore" in relation to the declining organic reach of Facebook pages. Last year alone the organic reach of publishers on Facebook fell by a further 52%.

Your first question is: how much does she want? 1 litre? 500 ml? 200 ml? She tells you she wants a 1 litre Tropicana 100% Orange Juice. Now you know that regular Tropicana is easily available, but the 100% variety is hard to come by, so you call up a few stores beforehand to see where it’s available. You find one store that’s pretty close by, so you go back to your mother and tell her you found what she wanted. It’s $2, maybe $3, and after asking her for the money, you go on your way.


This kind of thinking has led me to develop a bot whose focus is as a medium for content rather than a substitute for intelligence. Users create content much as a conventional author would (but with the text stored in spreadsheets rather than anywhere else). Very little is expected from the bot in terms of human behaviours such as “learning”, “empathy”, “memory” and “character”. Does it work?

In 2014 a chatbot built using this approach by John Denning and colleagues was in the news for passing the “Turing test”. It was built to emulate the replies of a 13-year-old boy from Ukraine (broken English and all). I met with John in 2015 and he made no pretence about the internal workings of this automaton. It may have been “brute force”, but it proved a point: parts of a conversation can be made to appear “natural” using a sufficiently large set of patterns. It supported Alan Turing’s assertion that this question of a machine fooling humans is “meaningless”.

Next, identify the data sources that will enable the bot to interact intelligently with users. As mentioned earlier, these data sources could contain structured, semi-structured, or unstructured data sets. When you're getting started, a good approach is to make a one-off copy of the data to a central store, such as Cosmos DB or Azure Storage. As you progress, you should create an automated data ingestion pipeline to keep this data current. Options for an automated ingestion pipeline include Data Factory, Functions, and Logic Apps. Depending on the data stores and the schemas, you might use a combination of these approaches.
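As a rough illustration of that first step, the sketch below makes a one-off copy of a few FAQ records into a Cosmos DB container, assuming the azure-cosmos Python SDK; the endpoint, key, and database/container names are placeholders, and the records themselves are purely illustrative.

```python
# One-off copy of FAQ records into a central Cosmos DB store.
# Endpoint, key, and database/container names below are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = "https://<your-account>.documents.azure.com:443/"
KEY = "<your-account-key>"

client = CosmosClient(ENDPOINT, credential=KEY)
database = client.create_database_if_not_exists("bot-knowledge")
container = database.create_container_if_not_exists(
    id="faqs", partition_key=PartitionKey(path="/category")
)

faqs = [
    {"id": "1", "category": "billing", "question": "How do I get a refund?",
     "answer": "Refunds are processed within 5 business days."},
    {"id": "2", "category": "shipping", "question": "Where is my order?",
     "answer": "Use the tracking link in your confirmation email."},
]

for item in faqs:
    # upsert_item keeps the one-off load idempotent if it is re-run
    container.upsert_item(item)
```

Once this manual load is in place, the same write path can be reused by whichever automated ingestion option (Data Factory, Functions, or Logic Apps) keeps the store current.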


I've come across this challenge many times, which has made me very focused on adopting promising new channels at an early stage to reap the rewards. Just take video ads within Facebook as an example. We're currently at a point where video ads are reaching their peak; cost is still relatively low and engagement is high, but, as with most ad platforms, increased competition will drive up those prices and make it less and less viable for smaller companies (and larger ones) to invest in them.

“We believe that you don’t need to know how to program to build a bot, that’s what inspired us at Chatfuel a year ago when we started bot builder. We noticed bots becoming hyper-local, i.e. a bot for a soccer team to keep in touch with fans or a small art community bot. Bots are efficient and when you let anyone create them easily magic happens.” — Dmitrii Dumik, Founder of Chatfuel
Designing for conversational interfaces represents a big shift in the way we are used to thinking about interaction. Chatbots have fewer signifiers and affordances than websites and apps, which means words have to work harder to deliver clarity, cohesion and utility for the user. It is a change of paradigm that requires designers to re-wire their brains, their deliverables and their design process to create successful bot experiences.
This means our questions must fit the programming the chatbot has been given. Using our weather bot as an example once more, the question ‘Will it rain tomorrow?’ could be answered easily. However, if the programming is not there, the question ‘Will I need a brolly tomorrow?’ may cause the chatbot to respond with an ‘I am sorry, I didn’t understand the question’ type of response, as the sketch below illustrates.
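To make that concrete, here is a minimal, hypothetical sketch of such a weather bot: a question only gets the rain forecast if it contains one of the programmed clue words, and an unanticipated word like ‘brolly’ falls through to the fallback reply unless a pattern for it is added. The patterns and the canned forecast are illustrative, not from any real bot.

```python
# Minimal keyword-based intent matching with a fallback reply.
# Add "brolly" to RAIN_PATTERNS to cover the British slang phrasing.
RAIN_PATTERNS = ["rain", "raining", "umbrella"]

def weather_bot(question: str) -> str:
    text = question.lower()
    if any(word in text for word in RAIN_PATTERNS):
        # Placeholder forecast; a real bot would call a weather API here
        return "There is a 60% chance of rain tomorrow."
    return "I am sorry, I didn't understand the question."

print(weather_bot("Will it rain tomorrow?"))          # matched: forecast returned
print(weather_bot("Will I need a brolly tomorrow?"))  # unmatched: fallback reply
```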
Educators or class organizers can opt for chatbots to simplify routine daily tasks. Chatbots may serve as a helping hand to the teacher by answering students' day-to-day questions, or perhaps even checking their homework. Eventually, they offer teachers more time to work with their students on a one-to-one basis.
A basic SMS service is available via GitHub as a starting point for building a bot on IBM’s BlueMix platform, which hosts the Watson Conversation Services. A developer can import a workspace to set up a new service. This starts with a blank dashboard into which a developer can import all the tools needed to run the conversation service. The service has a dialog flow: a series of options with yes/no answers that the service uses to work out what the user’s intent is, what entity it’s working on, how to respond, and how to phrase the response in the best way for the user.
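As a rough sketch of how a developer might send a user message to such a workspace from Python, assuming the older watson_developer_cloud SDK (exact package and method names vary across SDK versions); the credentials, version date, and workspace ID below are placeholders.

```python
# Send a user utterance to a Watson Conversation workspace and read back
# the detected intent, entities, and the reply chosen by the dialog flow.
# Credentials, version date, and workspace ID are placeholders.
from watson_developer_cloud import ConversationV1

conversation = ConversationV1(
    username="<service-username>",
    password="<service-password>",
    version="2017-05-26",
)

response = conversation.message(
    workspace_id="<workspace-id>",
    input={"text": "Will it rain tomorrow?"},
)

print(response["intents"])         # e.g. a 'weather_forecast' intent, if defined
print(response["entities"])        # e.g. a date entity recognised for 'tomorrow'
print(response["output"]["text"])  # the phrased reply from the dialog flow
```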
Generally, companies engage in passive customer interactions. That is, they only respond to inquiries but don’t start chats. AI bots can begin the conversation and inform customers about sales and promotions. Moreover, virtual assistants can offer product pages, images, blog entries, and video tutorials. Suppose a customer finds a nice pair of jeans on your website. In this case, a chatbot can send them a link to a page with T-shirts that go well with them.
ETL. The bot relies on information and knowledge extracted from the raw data by an ETL process in the backend. This data might be structured (SQL database), semi-structured (CRM system, FAQs), or unstructured (Word documents, PDFs, web logs). An ETL subsystem extracts the data on a fixed schedule. The content is transformed and enriched, then loaded into an intermediary data store, such as Cosmos DB or Azure Blob Storage.
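A stripped-down version of such an ETL step might look like the sketch below: extract rows from a structured SQL source, transform and enrich them, and load the batch into a Blob Storage container as the intermediary store. The connection strings, table name, and container name are placeholders, and in practice the fixed schedule would come from Data Factory, a timer-triggered Function, or a Logic App rather than a manual run.

```python
# Simplified ETL step: extract from a SQL source, transform/enrich,
# load into an intermediary Blob Storage container.
# Connection strings, the query, and the container name are placeholders.
import json
import pyodbc
from azure.storage.blob import BlobServiceClient

SQL_CONN = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=<server>;Database=<db>;Uid=<user>;Pwd=<pwd>")
BLOB_CONN = "<storage-connection-string>"

def run_etl():
    # Extract: pull raw FAQ rows from the structured source
    with pyodbc.connect(SQL_CONN) as conn:
        rows = conn.cursor().execute(
            "SELECT id, question, answer FROM dbo.Faq"
        ).fetchall()

    # Transform: normalise and enrich each record
    docs = [
        {"id": str(r.id), "question": r.question.strip(),
         "answer": r.answer.strip(), "source": "sql-faq"}
        for r in rows
    ]

    # Load: write the enriched batch into the intermediary store
    container = BlobServiceClient.from_connection_string(
        BLOB_CONN).get_container_client("bot-knowledge")
    container.upload_blob("faq/latest.json", json.dumps(docs), overwrite=True)

if __name__ == "__main__":
    run_etl()  # in production this would be invoked on a fixed schedule
```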
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
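A toy reconstruction of this keyword-and-canned-response mechanism (an illustration of the idea, not Weizenbaum's actual script) might look like the following, where a clue word such as "mother" triggers a pre-prepared reply and anything else gets a generic prompt to keep the conversation moving.

```python
# Toy ELIZA-style responder: scan the input for clue keywords and return
# a pre-prepared reply; the keyword table here is illustrative only.
import random

KEYWORD_RESPONSES = {
    "mother": ["TELL ME MORE ABOUT YOUR FAMILY."],
    "always": ["CAN YOU THINK OF A SPECIFIC EXAMPLE?"],
    "sorry": ["PLEASE DON'T APOLOGISE."],
}
DEFAULT_RESPONSES = ["PLEASE GO ON.", "HOW DOES THAT MAKE YOU FEEL?"]

def eliza_reply(user_input: str) -> str:
    words = user_input.lower().split()
    for keyword, replies in KEYWORD_RESPONSES.items():
        if keyword in words:
            return random.choice(replies)
    # No clue word found: fall back to a content-free conversational prompt
    return random.choice(DEFAULT_RESPONSES)

print(eliza_reply("My mother is always criticising me"))
# -> TELL ME MORE ABOUT YOUR FAMILY.
```

The superficiality the paragraph describes is visible in the code: nothing about the input is understood, yet the reply reads as if it were.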