Happy, sad or angry? AI can detect emotions in text according to new research

Artificial intelligence (AI) has begun to permeate many facets of the human experience. AI is not just a tool for analysing data – it’s transforming the way we communicate, work and live. From ChatGPT to AI video generators, the lines between technology and everyday life have become increasingly blurred.

But do these technological advances mean AI can identify our feelings online?

In our new research we examined whether AI could detect human emotions in posts on X (formerly Twitter).

Our research focused on how emotions expressed in user posts about certain non-profit organisations can influence later actions, such as the decision to donate to those organisations.

Using emotions to drive a response

Traditionally, researchers have relied on sentiment analysis, which categorises messages as positive, negative or neutral. While this method is simple and intuitive, it has limitations.

Human emotions are far more nuanced. For example, anger and disappointment are both negative emotions, but they can provoke very different reactions. Angry customers may react much more strongly than disappointed ones in a business context.

To address these limitations, we applied an AI model that could detect specific emotions – such as joy, anger, sadness and disgust – expressed in tweets.

Our research found emotions expressed on X could serve as a representation of the public’s general sentiments about specific non-profit organisations. These feelings had a direct impact on donation behaviour.

Detecting emotions

We used the “transformer transfer learning” model to detect emotions in text. Pre-trained on massive datasets by companies such as Google and Facebook, transformers are highly sophisticated AI algorithms that excel at understanding natural language (languages that have developed naturally as opposed to computer languages or code).

We fine-tuned the model on a combination of four self-reported emotion datasets (over 3.6 million sentences) and seven other datasets (over 60,000 sentences). This allowed us to map out a wide range of emotions expressed online.
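For readers curious what transformer transfer learning looks like in practice, the sketch below fine-tunes a generic pre-trained model on a public emotion dataset. The base model, dataset and training settings here are illustrative assumptions, not the configuration or data used in our study.

```python
# Minimal sketch of transformer transfer learning for emotion detection,
# assuming the Hugging Face Transformers and Datasets libraries.
# The base model and dataset are illustrative stand-ins, not the study's own.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# A public six-emotion benchmark used here purely as an example corpus.
dataset = load_dataset("dair-ai/emotion")
num_labels = dataset["train"].features["label"].num_classes

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels
)

def tokenize(batch):
    # Convert raw sentences into token IDs the transformer can read.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="emotion-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=32),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()  # fine-tune the pre-trained weights on the emotion labels
```

The key idea is that the heavy lifting – learning the structure of natural language – has already been done during pre-training; fine-tuning only adapts the model to the specific emotion labels.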

For example, the model would detect joy as the dominant emotion when reading an X post such as,

Starting our mornings in schools is the best! All smiles at #purpose #kids.

Conversely, the model would pick up on sadness in a tweet saying,

I feel I have lost part of myself. I lost Mum over a month ago, Dad 13 years ago. I’m lost and scared.

The model achieved an impressive 84% accuracy in detecting emotions from text, a noteworthy accomplishment in the field of AI.
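To show what this looks like in code, the sketch below runs an off-the-shelf emotion classifier from the Hugging Face Hub over posts like those quoted above. The model named here is a publicly available stand-in, not the model we trained for the study.

```python
# Minimal inference sketch, assuming a publicly available emotion classifier
# from the Hugging Face Hub (not the authors' fine-tuned model).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
    top_k=None,  # return a score for every emotion label
)

posts = [
    "Starting our mornings in schools is the best! All smiles at #purpose #kids.",
    "I feel I have lost part of myself. I lost Mum over a month ago, "
    "Dad 13 years ago. I'm lost and scared.",
]

for post, scores in zip(posts, classifier(posts)):
    top = max(scores, key=lambda s: s["score"])
    print(f"{top['label']:>8} ({top['score']:.2f})  {post[:60]}")
# Expected output: roughly "joy" for the first post, "sadness" for the second.
```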

We then looked at tweets about two New Zealand-based organisations – the Fred Hollows Foundation and the University of Auckland. We found tweets expressing sadness were more likely to drive donations to the Fred Hollows Foundation, while anger was linked to an increase in donations to the University of Auckland.

Ethical questions as AI evolves

Identifying specific emotions has significant implications for sectors such as marketing, education and health care.

Being able to identify people’s emotional responses online can help decision makers respond to individual customers or to their broader market. Each emotion expressed in social media posts calls for a different response from a company or organisation.

Our research demonstrated that different emotions lead to different outcomes when it comes to donations.

Knowing that sadness in marketing messages can increase donations to non-profit organisations allows for more effective, emotionally resonant campaigns. Anger, by contrast, can motivate people to act in response to perceived injustice.

While the transformer transfer learning model excels at detecting emotions in text, the next major breakthrough will come from integrating it with other data sources, such as voice tone or facial expressions, to create a more complete emotional profile.

Imagine an AI that not only understands what you’re writing but also how you’re feeling. Clearly, such advances come with ethical challenges.

If AI can read our emotions, how do we ensure this capability is used responsibly? How do we protect privacy? These are crucial questions that must be addressed as the technology continues to evolve.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.