As someone who has reviewed quite a few chatbots in my time, I’ve noticed a regular pattern of frustrating niggles. So, with this in mind, here are eight things I think your chatbot should never do. Feel free to add your own to the list in the comments section.

1. Pretend to be human

While the whole point of chatbots is to mimic human interaction, that doesn’t mean they should pretend to be human.

In a study by CEB, transparency was ranked as the most important factor for consumers when it comes to brand service. This means that – regardless of whether you want your bot to sound like a real person – it should always make clear that it is not.

If a bot fails to disclose this, it could lead to users feeling like they’re being lied to, and a potentially disastrous lack of trust.

This example from TFL lets users know the score from the get-go.

2. Lack focus

AI technology has huge potential, meaning that many brands get overly excited about what they might be able to achieve. However, this can lead to bots trying to do too many things at once, with an apparent lack of understanding about what the user might actually need.

The technology’s limitations also play a big part, with many platforms having little or no natural language processing capabilities, and bots failing to understand basic user responses.

In contrast to bots that lack focus, the best ones tend to narrow things down to one area and do it well. The Whole Foods chatbot, which gives you recipes from specific ingredients, is one particularly good example of this.

3. Carry on talking (when I’ve abandoned the conversation)

One of the most frustrating bot-related experiences is when you’ve left a chat, only to find the bot continues to bother you with follow-up messages.

While this might sound like good practice in marketing terms – an ideal chance to bring users back into the conversation – it can be pretty annoying if you’ve already got what you need (or even more so if you haven’t).

The key to not annoying users is to proceed with caution. Send one, or at most two, messages after the initial conversation has ended, and ensure that they offer something of real value rather than being just an excuse to be disruptive.
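To make that rule concrete, here is a minimal sketch in Python. The cap, function name and queued messages are all my own invented examples – no real chat platform’s API is being shown here:

```python
# A hypothetical follow-up cap: after the user goes quiet, send at most
# two more messages, no matter how many have been queued up.
MAX_FOLLOW_UPS = 2  # assumed limit, per the advice above

def follow_ups_to_send(queued_messages, already_sent):
    """Return only the queued follow-ups that still fit within the cap."""
    remaining = max(0, MAX_FOLLOW_UPS - already_sent)
    return queued_messages[:remaining]

queued = ["Here's your receipt", "How did we do?", "Weekly deals!"]
print(follow_ups_to_send(queued, already_sent=0))  # only the first two
print(follow_ups_to_send(queued, already_sent=2))  # cap reached: nothing
```

The value of putting the cap in one place is that a marketing team can queue as many nudges as it likes – the user still only ever sees the first couple.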

4. Leave me hanging

We’ve all been through the experience of being left on hold on the telephone. And though chatbots are designed to make customer service easier, and take the strain off snowed-under staff, they can leave users just as frustrated. This is because many don’t have a system in place for a human to take over if the bot cannot solve a customer query.

While some do point you in the right direction – by providing a further contact telephone number, for example – others will simply leave you hanging. Or worse, give you the same maddening response time and time again.
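The underlying escalation logic is simple enough to sketch. The threshold and the wording below are assumptions made purely for illustration:

```python
# A hypothetical hand-off rule: rather than repeating the same canned
# reply forever, route the conversation to a human after a couple of
# failed attempts. The threshold of two is an assumption.
ESCALATE_AFTER = 2

def fallback_response(failed_attempts):
    """Choose what the bot says when it cannot resolve the query."""
    if failed_attempts >= ESCALATE_AFTER:
        # Don't leave the user hanging: route them to a person.
        return "I'm connecting you to a member of our team now."
    return "Sorry, I couldn't find an answer to that. Could you rephrase?"

print(fallback_response(1))
print(fallback_response(2))
```

However the bot is built, the point is that the “give up and hand over” branch has to exist somewhere in the code – if it doesn’t, the user gets the same maddening response forever.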

Writing for Gizmodo, Darren Orf gives a frustrating example – 1-800-Flowers failed to provide him with a confirmation of his order, meaning he was left scratching his head as to whether or not it had gone through.

5. Repeatedly dodge the question

Another common failing is when the technology tries to be too clever, purposely dodging questions that it doesn’t understand or can’t help with.

It’s fine if a bot doesn’t have NLP (natural language processing) and can’t actually ‘chat’, but it should always convey that it doesn’t understand what you are saying and provide you with another option on how to move forward.
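Here is a rough sketch of that principle in Python. The intents and wording are invented for illustration, but the important part is the unmatched branch: it admits the gap and offers a way forward instead of deflecting:

```python
# A toy keyword-matching bot with an honest fallback. The intents below
# are illustrative assumptions, not a real product's answers.
INTENTS = {
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
    "delivery": "Standard delivery takes 3 to 5 working days.",
}

def reply(user_message):
    for keyword, answer in INTENTS.items():
        if keyword in user_message.lower():
            return answer
    # No match: say so plainly and offer concrete next steps.
    return ("Sorry, I don't understand that yet. I can help with: "
            + ", ".join(INTENTS)
            + ". Or type 'agent' to speak to a person.")

print(reply("What are your opening hours?"))
print(reply("Do you sell unicorns?"))
```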

If a bot is overly snarky or refuses to acknowledge what you are saying, it can lead to users feeling hugely frustrated and abandoning the technology altogether.

6. Create unrealistic expectations

In line with this, it’s been suggested that brands should lose the ‘chat’ altogether, and simply call themselves ‘bots’. This is because a lot of users come into conversations with very high expectations, only to be disappointed when they’re met with what is essentially a set of multiple-choice questions.

Facebook’s VP of messaging products, David Marcus, has even admitted that brands should be trying to build simple experiences that help the user achieve their goal, rather than complex conversations. In other words, it’s always better to set the bar low than to aim too high.

7. Pigeonhole me

Most bots tend to use decision trees to guide users to the answer they might be looking for. This can be helpful in a lot of cases, especially in the absence of NLP. However, I’ve found that it can also lead to the bot making stereotypical and rather clichéd presumptions about who it is talking to.
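A decision tree in this context is just a nested set of multiple-choice questions. This toy sketch (all labels invented) shows both how it guides the user and why it pigeonholes them – every branch is a presumption fixed at design time:

```python
# A toy gift-finder decision tree. Each node is either a question with a
# fixed set of options, or a leaf suggestion. All labels are invented.
GIFT_TREE = {
    "question": "Who is the gift for?",
    "options": {
        "friend": {
            "question": "What are they into?",
            "options": {"cooking": "Recipe book", "music": "Vinyl record"},
        },
        # A coarse leaf: every colleague gets the same suggestion.
        "colleague": "Desk plant",
    },
}

def traverse(tree, choices):
    """Follow the user's choices down the tree to a suggestion."""
    node = tree
    for choice in choices:
        node = node["options"][choice]
    return node

print(traverse(GIFT_TREE, ["friend", "music"]))  # Vinyl record
print(traverse(GIFT_TREE, ["colleague"]))        # Desk plant
```

The structure is transparent and cheap to build, but a user who doesn’t fit any of the hard-coded categories has nowhere to go – which is exactly the ASOS problem described below.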

Last year, ASOS’s Gift Finder was particularly guilty of this, giving users a very limited set of categories to choose from. Instead of actually helping to narrow down the right gift for the right person, it turned out to provide even fewer options than I could have found from browsing the website.

8. Ask me things you should already know

Chatbots are essentially designed to make the customer experience more convenient, meaning people don’t have to go through other channels. However, another bugbear is that they’re often created in a silo, with a lack of integration with the rest of the business.

This means that chatbots often ask customers questions they should already know the answer to. When you’re dealing with a bank, airline, or ecommerce brand, for example, you may have to reiterate your customer details or past purchases – even though the brand could already have access to this information.
