This is the big one. We worked with one particularly large publisher (we can't name names, unfortunately, but they have hundreds of thousands of users) in two phases. The initial test phase was a sort of catch-all: anyone could message a broad keyword to the bot and start a campaign. Although a huge number of users came in, engagement was relatively average (an 87% open rate and a 27.05% click-through rate over the course of the test). Drop-off was fairly high: about 3.14% of users had unsubscribed by the end of the test.
Unfortunately, the old adage of garbage in, garbage out came back to bite Microsoft. Tay was soon being fed racist, sexist, and genocidal language by the Twitter user base, leading her to regurgitate these views. Microsoft eventually took Tay down for retooling, but when she returned the AI was significantly weaker, simply repeating itself before being taken offline indefinitely.
The chatbot is trained to map input data to a desired output value. Given this training data, it analyzes the input and builds context so that it can point to the relevant data when reacting to spoken or written prompts. With deep learning, the machine can also discover new patterns in the data without any prior information or training, then extract and store those patterns.
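As a rough illustration of the supervised case, here is a minimal sketch of training a model to map user messages to response intents. It assumes scikit-learn is available; the training messages and intent labels are hypothetical, chosen only for illustration.

```python
# Minimal sketch: supervised intent classification for a chatbot.
# Assumes scikit-learn; the data and intent labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled training pairs: input text -> desired output label.
messages = [
    "hi there", "hello", "what are your opening hours",
    "when do you open", "bye", "see you later",
]
intents = ["greet", "greet", "hours", "hours", "goodbye", "goodbye"]

# Vectorize the text and fit a classifier that maps inputs to outputs.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, intents)

# At inference time, a new prompt is pointed at the most relevant intent.
print(model.predict(["when are you open tomorrow"]))  # e.g. ['hours']
```

In a real system the intent prediction would then select or generate the bot's reply; this sketch only covers the input-to-output mapping step described above.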
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of clue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[9] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent".
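The mechanism is simple enough to sketch in a few lines. The following is a toy illustration of the keyword-and-canned-response pattern described above, not Weizenbaum's original script; the rules and responses are invented for the example.

```python
import re

# Illustrative ELIZA-style rules: a clue word or phrase in the input
# triggers a pre-prepared response. This is a toy subset, not the
# original ELIZA script.
RULES = [
    (re.compile(r"\bmother\b", re.I), "Tell me more about your family."),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

def respond(user_input: str) -> str:
    """Return the first matching canned response, reusing any captured text."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    # A generic fallback keeps the conversation moving when nothing matches.
    return "Please go on."

print(respond("My mother is kind"))      # -> Tell me more about your family.
print(respond("I am feeling anxious"))   # -> How long have you been feeling anxious?
```

Note that the program never models what "mother" or "anxious" means; it only pattern-matches and echoes, which is exactly why the apparent understanding is an illusion.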