Most chatbots draw on a prebuilt database, the so-called knowledge base of answers and recognition patterns. The program first breaks the input question into its parts and processes them according to predefined rules. In this step, spellings can be normalized (capitalization, umlauts, etc.), punctuation interpreted, and typos corrected (preprocessing). In the second step, the actual recognition of the question takes place. This is usually handled through recognition patterns; some chatbots additionally allow different pattern matchers to be nested via so-called macros. If an answer matching the question is found, it can still be adapted, for example by inserting script-computed data ("In Ulm it is 37 °C today."). This step is called postprocessing. The resulting answer is then output. Modern commercial chatbot programs additionally offer direct access to the entire processing chain through built-in scripting languages and programming interfaces.
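The following is a minimal sketch of this preprocess/match/postprocess pipeline. The rule format, the {city}/{temp} placeholders, and the hard-coded temperature are invented for illustration; real engines use far richer pattern languages and script hooks.

```typescript
// Minimal sketch of the preprocess / match / postprocess pipeline described above.

type Rule = { pattern: RegExp; answer: string };

// The "knowledge base": recognition patterns paired with answer templates.
const knowledgeBase: Rule[] = [
  { pattern: /\bweather\b.*\bin\b\s+(\w+)/, answer: "In {city} it is {temp} °C today." },
  { pattern: /\bhello\b/, answer: "Hello! How can I help you?" },
];

// Preprocessing: normalize case, strip punctuation, correct a known typo.
function preprocess(input: string): string {
  return input
    .toLowerCase()
    .replace(/[!?.,]/g, "")
    .replace(/\bwether\b/g, "weather"); // naive typo correction
}

// Recognition: return the first rule whose pattern matches the question.
function recognize(question: string): { rule: Rule; match: RegExpMatchArray } | null {
  for (const rule of knowledgeBase) {
    const match = question.match(rule.pattern);
    if (match) return { rule, match };
  }
  return null;
}

// Postprocessing: fill script-computed data into the answer template.
function postprocess(answer: string, match: RegExpMatchArray): string {
  const city = match[1] ? match[1][0].toUpperCase() + match[1].slice(1) : "";
  const temp = 37; // stand-in for a real weather lookup
  return answer.replace("{city}", city).replace("{temp}", String(temp));
}

const question = preprocess("What's the weather in Ulm?");
const hit = recognize(question);
console.log(hit ? postprocess(hit.rule.answer, hit.match) : "Sorry, I don't understand.");
```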
At a high level, a conversational bot can be divided into the bot functionality (the "brain") and a set of surrounding requirements (the "body"). The brain comprises the domain-aware components, including the bot logic and ML capabilities. The other components are domain agnostic and address non-functional requirements such as CI/CD, quality assurance, and security.

Next, identify the data sources that will enable the bot to interact intelligently with users. As mentioned earlier, these data sources could contain structured, semi-structured, or unstructured data sets. When you're getting started, a good approach is to make a one-off copy of the data to a central store, such as Cosmos DB or Azure Storage. As you progress, you should create an automated data ingestion pipeline to keep this data current. Options for an automated ingestion pipeline include Data Factory, Functions, and Logic Apps. Depending on the data stores and the schemas, you might use a combination of these approaches.
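As a starting point, the one-off copy can be as simple as the sketch below, which pushes a local JSON export into Azure Blob Storage via the @azure/storage-blob SDK. The connection-string environment variable, container name, and faq.json file are assumptions for illustration.

```typescript
// A minimal sketch of the one-off copy to a central store described above.

import { BlobServiceClient } from "@azure/storage-blob";
import { readFile } from "node:fs/promises";

async function copyToCentralStore(): Promise<void> {
  // Connection string is assumed to be provided via the environment.
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const container = service.getContainerClient("bot-knowledge");
  await container.createIfNotExists();

  // Upload one data set; an automated ingestion pipeline (Data Factory,
  // Functions, or Logic Apps) would keep this current instead.
  const data = await readFile("faq.json");
  await container.getBlockBlobClient("faq.json").uploadData(data);
}

copyToCentralStore().catch(console.error);
```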
A toolkit can be integral to getting started with building chatbots; enter BotKit. It gives a helping hand to developers making bots for Facebook Messenger, Slack, Twilio, and more. BotKit can be used to create clever, conversational applications that map out the way real humans speak, an essential detail that differentiates it from some of its chatbot toolkit counterparts.
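A minimal Botkit bot, following the hears/reply pattern from Botkit's documentation, might look like the sketch below. The webhook path and trigger phrases are illustrative, and a real deployment would also configure a platform adapter (e.g., for Slack or Facebook Messenger).

```typescript
// A minimal Botkit sketch: listen for a phrase and reply conversationally.

import { Botkit } from "botkit";

const controller = new Botkit({
  webhook_uri: "/api/messages", // illustrative endpoint for incoming messages
});

// Respond when an incoming message matches a human-sounding trigger.
controller.hears(["hello", "hi there"], "message", async (bot, message) => {
  await bot.reply(message, "Hey! What can I do for you?");
});
```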

Chatbots could be used as weapons on social networks such as Twitter or Facebook. An entity or individual could create countless chatbots to harass people. They could even track how successful their harassment is by using machine-learning-based methods to sharpen their strategies and counteract harassment-detection tools.


Three main reasons are often cited for this reluctance. The first is the human side: the belief that users will be reluctant to engage with a bot. The other two have more to do with bots' expected performance: skepticism that bots will be able to appropriately incorporate history and context to create personalized experiences, and a belief that they won't be able to adequately understand human input.
Chatbots are often used online and in messaging apps, but are also now included in many operating systems as intelligent virtual assistants, such as Siri for Apple products and Cortana for Windows. Dedicated chatbot appliances are also becoming increasingly common, such as Amazon's Alexa-enabled Echo devices. These chatbots can perform a wide variety of functions based on user commands.

NBC Politics Bot allowed users to engage with the conversational agent via Facebook to identify breaking news topics that would be of interest to the network’s various audience demographics. After beginning the initial interaction, the bot provided users with customized news results (prioritizing video content, a move that undoubtedly made Facebook happy) based on their preferences.


Magic, launched in early 2015, is one of the earliest examples of conversational commerce: one of the first all-in-one intelligent virtual assistants offered as a service. The service is unique in that it does not even have an app; you access it purely via SMS. Magic promises to be able to handle virtually any task you send it, almost like a human executive assistant. Based on user and press accounts, Magic seems able to successfully carry out a variety of odd tasks, from setting up flight reservations to ordering hard-to-find food items.
Once your bot is running in production, you will need a DevOps team to keep it that way. Continually monitor the system to ensure the bot operates at peak performance. Use the logs sent to Application Insights or Cosmos DB to create monitoring dashboards, either using Application Insights itself, Power BI, or a custom web app dashboard. Send alerts to the DevOps team if critical errors occur or performance falls below an acceptable threshold.
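The telemetry those dashboards and alerts consume might be emitted as in the sketch below, which uses the Node.js applicationinsights SDK. The event and metric names, the latency value, and the recordTurn helper are illustrative assumptions.

```typescript
// A minimal sketch of emitting bot telemetry to Application Insights.

import * as appInsights from "applicationinsights";

// Connection string is assumed to be provided via the environment.
appInsights.setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING).start();

const telemetry = appInsights.defaultClient;

// Record one bot turn: a custom event for the dashboard and a latency
// metric that an alert rule can watch for threshold breaches.
function recordTurn(intent: string, latencyMs: number, failed: boolean): void {
  telemetry.trackEvent({ name: "BotTurn", properties: { intent } });
  telemetry.trackMetric({ name: "BotTurnLatencyMs", value: latencyMs });
  if (failed) {
    telemetry.trackException({ exception: new Error(`Turn failed: ${intent}`) });
  }
}

recordTurn("CheckWeather", 230, false);
```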

In 1950, Alan Turing's famous article "Computing Machinery and Intelligence" was published,[7] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human. The notoriety of Turing's proposed test stimulated great interest in Joseph Weizenbaum's program ELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However, Weizenbaum himself did not claim that ELIZA was genuinely intelligent; the introduction to his paper presented it more as a debunking exercise.