Mental health management is a prime area for AI bots to augment clinicians and provide more support to patients. There aren't enough providers to address the mental health crisis, a shortage the pandemic has made worse. That shortage, combined with the high cost of therapy and the appeal of 24/7 availability, is driving a new generation of AI-powered mental health bots. Bots can offer a touchpoint that isn't currently possible, serving as a source of information, triage assistance and support. Pairing bots with technologies like computer vision can even lead to solutions that detect patient emotions and respond to them.
Researchers are already using deep learning to recognize the facial expressions of people in pain or discomfort, which is particularly useful when patients can't talk. Others are using AI to pick up on positive and negative emotions among patients with mental health issues. Amazon's health and wellness tracker Halo uses voice analysis and machine learning to gauge how positive or energetic users sound, detecting emotions such as happiness, sadness, or excitement in their voice.
Meanwhile, early-stage start-ups are using CBT (cognitive behavioral therapy), an approach that aims to change negative thoughts and behaviors, as a natural extension of the countless digital diary, mood tracking, and health and wellness apps already out there. One startup, X2 AI, claims that its bot, Tess, has more than four million paid users. The same company has created a faith-based chatbot called Sister Hope, and both bots are notable for opening their conversations with clear disclaimers and privacy terms.
AI has improved dramatically at narrow tasks such as language processing, language generation, and image recognition. But, as pioneering deep learning researcher Yoshua Bengio said in a recent episode of The AI Element podcast, "[AI] is like an idiot savant," with no notion of psychology, what a human being is, or how it all works. Mental health conditions come with a great deal of variability and subjectivity, and we need to remember that they exist on a spectrum. Yet, as one article in Psychology Today explains, our brains are wired to believe we're interacting with a human when chatting with bots, free of the complexity of having to decipher non-verbal cues.