Previously, there have been concerns about this type of service, largely to do with the safety of the tools, and with bots handling such sensitive data.

From a different perspective, there’s also the question of whether a chatbot UX can ever replicate the often nuanced interactions that take place between a patient and therapist – as well as the associated levels of empathy and trust.

With this in mind, here are more examples of companies using AI for mental health, plus a bit of insight into how chatbots can get it right…

(N.B. If you’re interested in how AI can be applied by businesses, check out the AI & Innovation stage at the Festival of Marketing in London, 10-11 October.)

Accessibility and removing stigma

According to the mental health charity Mind, one in four people in the UK will experience a mental health problem each year, and one in six people report experiencing a common mental health problem in any given week.

Meanwhile, despite the Government promising NHS mental health providers a larger budget, reports suggest that these services are still losing out, having seen just a 2.5% rise in funding in 2016-17, compared to the 6% rise seen by acute trusts.

Naturally, the apparent rise in mental health issues, combined with a strained healthcare system, means that many sufferers may avoid seeking help altogether. This can be made worse if a person feels embarrassed or is worried about any associated stigma.

This is where AI apps for mental health or ‘therapy bots’ come in. Designed to offer accessible, convenient and (often) free help, they aim to remove the aforementioned barriers to treatment.

Not a therapy replacement

Woebot is one of the most popular therapy bots, mainly due to its scientific background. Created by scientists at Stanford University, it is based on cognitive behavioural therapy techniques, using a combination of natural language processing (NLP) and psychological expertise. Through this, it is able to recognise negative thought patterns and triggers, and encourage users to change them.
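To give a rough sense of how this kind of pattern-spotting could work, here’s a minimal sketch in Python. To be clear, Woebot’s actual system is proprietary and far more sophisticated; the patterns, prompts and function below are invented purely for illustration.

```python
import re

# Toy illustration of CBT-style pattern matching. Woebot's real system
# is proprietary and far more sophisticated; everything here is invented.
DISTORTION_PATTERNS = {
    "all-or-nothing thinking": re.compile(r"\b(always|never|everything|nothing)\b", re.I),
    "catastrophising": re.compile(r"\b(disaster|ruined|worst|can't cope)\b", re.I),
    "labelling": re.compile(r"\bi('m| am) (a failure|useless|stupid)\b", re.I),
}

# A CBT-style reframing prompt for each recognised thought pattern.
REFRAMING_PROMPTS = {
    "all-or-nothing thinking": "That sounds quite absolute. Can you recall a time it wasn't true?",
    "catastrophising": "What's the most realistic outcome, rather than the worst one?",
    "labelling": "That's a harsh label. What would you say to a friend who described themselves that way?",
}

def respond(message: str) -> str:
    """Return a reframing prompt if a negative thought pattern is detected."""
    for distortion, pattern in DISTORTION_PATTERNS.items():
        if pattern.search(message):
            return f"It sounds like that might be {distortion}. {REFRAMING_PROMPTS[distortion]}"
    return "Thanks for sharing. Tell me more about how that made you feel."

print(respond("I always mess everything up"))
# -> It sounds like that might be all-or-nothing thinking. ...
```

Real systems replace hand-written patterns like these with statistical NLP models, but the underlying loop is the same: recognise the thought pattern, then prompt the user to reframe it.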

Woebot states that it does not aim to take the place of a therapist. Rather, it is designed to be an ‘additional resource’ or a way of seeking help when there is no other available alternative. 

This message is key, emphasising that people should not solely rely on therapy bots, or use them for more serious or long-term issues. What it does mean, however, is that people can use these services in real moments of need. 

The ‘always on’ nature of social media also enables this, with bots available 24 hours a day, seven days a week. Similarly, these services are designed to align naturally with user behaviour, with many ‘checking in’ on Facebook Messenger much like a friend would.

The human touch

So, what about the actual interaction? Can it replicate a human-to-human experience? And does it matter if it can’t?

I recently gave a couple of therapy bots a go myself, and here’s what I came away with.

Woebot – transparency and humour

One thing I appreciated about Woebot was its transparency. It lets users know from the get-go that it is an automated service, emphasises that it should not be a replacement for therapy, and tells you what to do if you’re struggling on a more serious level.

The fact that the bot overtly states that it is not human is definitely a positive. As well as instilling trust in users, this could also make it easier for people to open up, as it eliminates the fear of judgement.

[Image: Woebot’s transparency message]

Another thing I liked about Woebot is that it uses a highly conversational and at times funny tone of voice, which is designed to put people at ease. The combination of gentle humour and reassurance feels comforting, as does the fact that it checks in daily. The only negative is that this approach takes a while to deliver results (it does not offer instant help). However, I imagine the longer you use it, the more beneficial it becomes.

[Image: Woebot’s humorous tone of voice]

Tess – can you repeat that, please?

While Woebot takes time to slowly learn behaviours, Tess – a chatbot from AI startup X2 AI – jumps straight into conversation. Confusingly, I actually spoke to a bot called Sara, not Tess, though I shall refer to it as the latter, as that’s its brand name.

[Image: Tess therapy bot conversation]

I found a few early bugbears, such as the bot failing to understand basic sentences (and repeating or contradicting itself as a result).

This is one of the biggest problems still plaguing chatbots of all kinds, with the underlying natural language processing simply not sophisticated enough to cope with unexpected phrasing.

[Image: Tess misunderstanding a basic response]

Perhaps if I’d given Tess more of a chance, this would have improved. However, when it comes to people struggling with difficult emotions, this kind of miscommunication is likely to be all the more frustrating, increasing the risk that users abandon the process (and feel like they’re back to square one).
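To illustrate why this kind of repetition happens, here’s a simplified, hypothetical sketch of the naive keyword-based intent matching that simpler bots fall back on (an assumption about the general approach, not Tess’s actual code): anything the classifier can’t place drops through to the same canned fallback, however the user rephrases it.

```python
# Toy sketch of naive keyword-based intent matching -- not Tess's actual
# code, just an assumed illustration of why simple bots repeat themselves.
INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "feeling_low": {"sad", "down", "depressed", "anxious"},
}

FALLBACK = "Sorry, I didn't quite get that. Could you rephrase?"

def classify(message):
    """Return the first intent whose keywords appear in the message, else None."""
    words = set(message.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:  # any keyword present in the message
            return intent
    return None

# Neither rephrasing contains a listed keyword, so the user hears the
# identical fallback twice -- the repetitive loop described above.
for attempt in ["I'm not feeling great", "Things could be better"]:
    print(classify(attempt) or FALLBACK)
```

More capable bots swap the keyword lists for machine-learned classifiers with confidence thresholds, but the failure mode is similar: when confidence is low, the user gets the same generic reprompt.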

What about privacy?

As well as improvements to NLP, there are other issues that therapy chatbots still face. A huge one is privacy, largely because the majority of these bots live within Facebook Messenger – which is not protected by any medical data privacy laws. Essentially, this means that Facebook knows exactly who is using these services, and holds all information provided during the conversations.

But will privacy concerns stop people from signing up? It doesn’t look like it so far, with Woebot reportedly generating two million conversations per week since launch.

The results sound similarly positive. One study found that a group using the bot for two weeks saw their symptoms of depression significantly reduced, compared to a group that used the National Institute of Mental Health ebook, ‘Depression in College Students’.

Of course, this is not the case for all chatbots. Others – specifically those developed by technology experts with little or no input from healthcare professionals – should still be met with a certain level of caution.

However, with the NHS reportedly set to invest in AI in the next year, and seven mental health trusts taking part in the ‘Global Digital Exemplars’ scheme, perhaps it won’t be too long before AI starts to have an even greater impact on mental health care. For the time being, at least, therapy chatbots are certainly making mental health management more accessible.
