Have you ever heard of emotion artificial intelligence (AI)? Emotion AI, or affective AI, is a field of computer science focused on giving machines an understanding of human emotions. The MIT Media Lab and Dr. Rosalind Picard are the premier innovators in this space, and their work sparked the idea that machines could develop empathy.
Empathy is a complex concept with many layers, but at a basic level it means understanding another person's emotional state. In theory, if machines can reach that level of understanding, they can serve us better. In areas such as healthcare in particular, empathetic AI can be very impactful.
How is emotion AI used in healthcare?
There are various types of emotion AI. The first kind detects human emotions. In mental healthcare, this technology has great potential in diagnostics. For physical health conditions, it can be used to monitor resilience in conditions such as cancer. This is especially beneficial now that the importance of holistic and integrative care is widely recognized.
The next level of emotion AI not only detects human emotion but can respond accordingly. One great example is care for people living with dementia. They may have a hard time understanding their own emotional state, and even more so communicating how they feel to their caregivers. That puts a heavy onus on caregivers to constantly read and decipher those emotions, which is a hard task when you're already overloaded.
This opens up the opportunity for emotion AI to look at biometrics or psychometrics that are less reliant on self-assessment, such as facial expressions, speech cues or behavior. Emotion AI lets us predict a person's emotional state with an accuracy that can match or even exceed what a caregiver could tell us. In our use case at LUCID, we use this data to curate personalized music to help with the psychological symptoms of dementia.
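As a rough illustration (not LUCID's actual system), mapping such signals to a coarse emotional-state label can be sketched as a weighted scoring over features. All feature names, weights and state labels below are purely hypothetical:

```python
# Hypothetical sketch: score coarse emotional states from normalized
# biometric/behavioral features. Names and weights are illustrative only.

# Each feature value is assumed to be normalized to the range [0, 1].
STATE_WEIGHTS = {
    "calm":     {"heart_rate": -0.6, "speech_rate": -0.3, "smile_intensity": 0.2},
    "agitated": {"heart_rate":  0.7, "speech_rate":  0.5, "smile_intensity": -0.4},
    "content":  {"heart_rate": -0.2, "speech_rate":  0.1, "smile_intensity": 0.8},
}

def predict_state(features: dict) -> str:
    """Return the state whose weighted feature score is highest."""
    scores = {
        state: sum(w * features.get(name, 0.0) for name, w in weights.items())
        for state, weights in STATE_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(predict_state({"heart_rate": 0.9, "speech_rate": 0.8, "smile_intensity": 0.1}))
# "agitated": high-arousal signals dominate the score
```

A production system would learn these weights from labeled data rather than hand-tune them, but the core idea is the same: combine multiple objective signals so no single self-report is required.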
This can also mean more compassion for caregivers. Caregivers face rising levels of burnout, and this kind of constant monitoring is fatiguing. Having AI assist can both give the patient better care and preserve caregivers' stamina.
What are some drawbacks or concerns around affective AI?
When AI gets involved with human emotion, a lot of alarms are understandably raised. There's a gut reaction, stemming from television and Hollywood, that if machines understand emotion, they could gain sentience and manipulate our emotions. This is a valid concern, but at the same time, these machines operate within a very limited playground. Training AI responsibly is vital: models should be given data, and objectives, designed to do good with that information. That's why we must push for responsible ethics in AI.
Technology and computing develop faster than government legislation, so there may be gaps in policy. That's where foundations like AI For Good come in. These frameworks and institutions are important because they help develop professional ethics and promote a positive culture around AI.
Bias is another concern for the AI community. If a dataset is skewed toward a certain population, the AI won't be reliable when extrapolated to the broader population. Many data collection efforts trained AI on specific types of people: those who volunteered for trials or could afford certain products. Would it reliably predict emotions for people outside that population? That's a hard problem for AI at large, and one that professionals in this field work very hard to mitigate.
Luckily, there are strategies to prevent bias in emotion AI. It's essential to actively recruit participants from all walks of life wherever possible and to distribute data collection as widely as you can. Another solution is to develop a truly mobile product for training the AI: one that's cheap, accessible and globally distributed, so it can cover as many cultural representations as possible.
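Alongside broader data collection, one standard technique for reducing the effect of a skewed sample is to reweight examples so that underrepresented groups count proportionally more during training. A minimal sketch, with group labels purely illustrative:

```python
# Illustrative sketch: inverse-frequency sample weights so each group
# contributes equal total weight to training, regardless of its size.
from collections import Counter

def group_weights(group_labels):
    """Weight for a sample in group g: total / (n_groups * count_g)."""
    counts = Counter(group_labels)
    n_groups = len(counts)
    total = len(group_labels)
    return [total / (n_groups * counts[g]) for g in group_labels]

weights = group_weights(["A", "A", "A", "B"])
# Group A: three samples at weight 2/3 each; group B: one sample at weight 2.0.
# Each group's total weight is now equal (2.0), despite the 3:1 imbalance.
```

Reweighting is a complement to, not a substitute for, collecting representative data in the first place: it cannot recover patterns that were never sampled at all.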
How are empathy machines currently used in digital health?
Technology has an advantage: it can weave itself into a patient's life in a way a doctor cannot. As we move toward a longitudinal, person-centered approach, that gap can start to be filled with the use of AI. With the rise of integrative care, many digital health ventures are now leveraging emotion AI.
Twill (formerly Happify) is one example of using emotion AI in mental healthcare. Its Intelligent Healing platform uses AI to learn about one’s health needs and recommend a course of action. Its health chatbot is trained to provide personalized care and support in an empathetic way.
LUCID also uses an AI recommendation system to suggest music based on one's mental state. It leverages biometrics and self-assessed data as inputs to classify a user's emotional state. By learning about someone's mood and their response to music, the algorithm adapts to better help them.
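As a hedged sketch of the general idea (not LUCID's actual algorithm), an adaptive recommender can keep a running preference score per mood-and-track pair and nudge it toward the observed response after each listen. Track names, moods and the feedback scale here are all hypothetical:

```python
# Illustrative sketch: learn per-mood track preferences from user feedback
# with a simple exponential moving average. Not a real product's algorithm.
from collections import defaultdict

class MoodRecommender:
    def __init__(self, tracks, alpha=0.3):
        self.tracks = tracks
        self.alpha = alpha                  # learning rate for score updates
        self.scores = defaultdict(float)    # (mood, track) -> preference score

    def recommend(self, mood: str) -> str:
        """Pick the track with the highest learned score for this mood."""
        return max(self.tracks, key=lambda t: self.scores[(mood, t)])

    def feedback(self, mood: str, track: str, reward: float) -> None:
        """Blend an observed reward (e.g. -1..1) into the running score."""
        key = (mood, track)
        self.scores[key] += self.alpha * (reward - self.scores[key])

rec = MoodRecommender(["ambient", "classical", "jazz"])
rec.feedback("anxious", "ambient", 1.0)   # user relaxed: positive signal
rec.feedback("anxious", "jazz", -0.5)     # user stayed tense: negative signal
print(rec.recommend("anxious"))           # "ambient"
```

The moving-average update means recent responses matter more than old ones, which suits a condition like dementia where a person's responses can change over time.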
Although empathy machines and emotion AI may sound intimidating, they're helping fill a gap in patient care that traditional health models sometimes leave open. Patient monitoring and longitudinal care consume a lot of human resources. As one doctor put it, "Building and maintaining a longitudinal, person-centered care plan is really hard work. It takes a lot of resources. No healthcare provider is going to do it if it costs them more to do the plan than the benefit they derive from it."
The sooner we can get machines to be more empathetic, the better our digital healthcare tools will become. It can open up many opportunities if, through technology, we can truly understand how people are feeling at all times—and empathize. Emotion AI is one of the most important pillars of digital health because if we have a better understanding of what’s going on with the patient, we have a better way of treating them.